Marius Mosbach
McGill University, Mila - Quebec AI Institute
On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines
M Mosbach, M Andriushchenko, D Klakow
International Conference on Learning Representations, 2021
Adapting pre-trained language models to African languages via multilingual adaptive fine-tuning
JO Alabi, DI Adelani, M Mosbach, D Klakow
Proceedings of the 29th International Conference on Computational Linguistics, 2022
Logit pairing methods can fool gradient-based attacks
M Mosbach, M Andriushchenko, T Trost, M Hein, D Klakow
arXiv preprint arXiv:1810.12042, 2018
Few-shot fine-tuning vs. in-context learning: A fair comparison and evaluation
M Mosbach, T Pimentel, S Ravfogel, D Klakow, Y Elazar
arXiv preprint arXiv:2305.16938, 2023
On the interplay between fine-tuning and sentence-level probing for linguistic knowledge in pre-trained transformers
M Mosbach, A Khokhlova, MA Hedderich, D Klakow
Findings of the Association for Computational Linguistics: EMNLP 2020, 2020
Measuring Causal Effects of Data Statistics on Language Model's 'Factual' Predictions
Y Elazar, N Kassner, S Ravfogel, A Feder, A Ravichander, M Mosbach, ...
arXiv preprint arXiv:2207.14251, 2022
MCSE: Multimodal Contrastive Learning of Sentence Embeddings
M Zhang, M Mosbach, DI Adelani, MA Hedderich, D Klakow
Proceedings of the 2022 Conference of the North American Chapter of the …, 2022
Fusion Models for Improved Image Captioning
M Kalimuthu, A Mogadala, M Mosbach, D Klakow
Pattern Recognition. ICPR International Workshops and Challenges: Virtual …, 2021
Graph-based argument quality assessment
E Saveleva, V Petukhova, M Mosbach, D Klakow
Proceedings of the International Conference on Recent Advances in Natural …, 2021
Do Acoustic Word Embeddings Capture Phonological Similarity? An Empirical Study
BM Abdullah, M Mosbach, I Zaitova, B Möbius, D Klakow
Interspeech 2021, 2021
incom.py - A Toolbox for Calculating Linguistic Distances and Asymmetries between Related Languages
M Mosbach, I Stenger, T Avgustinova, D Klakow
Proceedings of the International Conference on Recent Advances in Natural …, 2019
Weaker Than You Think: A Critical Look at Weakly Supervised Learning
D Zhu, X Shen, M Mosbach, A Stephan, D Klakow
arXiv preprint arXiv:2305.17442, 2023
Some steps towards the generation of diachronic WordNets
Y Bizzoni, M Mosbach, D Klakow, S Degaetano-Ortlieb
Proceedings of the 22nd Nordic conference on computational linguistics, 55-64, 2019
LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
P BehnamGhader, V Adlakha, M Mosbach, D Bahdanau, N Chapados, ...
arXiv preprint arXiv:2404.05961, 2024
StereoKG: Data-Driven Knowledge Graph Construction for Cultural Knowledge and Stereotypes
A Deshpande, D Ruiter, M Mosbach, D Klakow
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH), 2022
Multilingual language model adaptive fine-tuning: A study on African languages
JO Alabi, DI Adelani, M Mosbach, D Klakow
3rd Workshop on African Natural Language Processing, 2022
On the Security Relevance of Initial Weights in Deep Neural Networks
K Grosse, TA Trost, M Mosbach, M Backes, D Klakow
Artificial Neural Networks and Machine Learning–ICANN 2020: 29th …, 2020
Artefact retrieval: Overview of NLP models with knowledge base access
V Zouhar, M Mosbach, D Biswas, D Klakow
arXiv preprint arXiv:2201.09651, 2022
Adversarial initialization - when your network performs the way I want
K Grosse, TA Trost, M Mosbach, M Backes
arXiv e-prints, 2019
A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English
M Mosbach, S Degaetano-Ortlieb, MP Krielke, BM Abdullah, D Klakow
Proceedings of the 28th International Conference on Computational Linguistics, 2020