Joe Stacey
Verified email at imperial.ac.uk
Title
Cited by
Year
Supervising Model Attention with Human Explanations for Robust Natural Language Inference
J Stacey, Y Belinkov, M Rei
AAAI 2022, 2022
Cited by 42* · 2022
Avoiding the hypothesis-only bias in natural language inference via ensemble adversarial training
J Stacey, P Minervini, H Dubossarsky, S Riedel, T Rocktäschel
EMNLP 2020, 2020
Cited by 40* · 2020
Logical Reasoning with Span-Level Predictions for Interpretable and Robust NLI Models
J Stacey, P Minervini, H Dubossarsky, M Rei
EMNLP 2022, 3809-3823, 2022
Cited by 9* · 2022
Logical reasoning for natural language inference using generated facts as atoms
J Stacey, P Minervini, H Dubossarsky, OM Camburu, M Rei
arXiv preprint arXiv:2305.13214, 2023
Cited by 5 · 2023
Improving Robustness in knowledge distillation using domain-targeted data augmentation
J Stacey, M Rei
arXiv preprint arXiv:2305.13067, 2023
Cited by 3 · 2023
When and Why Does Bias Mitigation Work?
A Ravichander, J Stacey, M Rei
EMNLP 2023, 2023
Cited by 1 · 2023
LUCID: LLM-Generated Utterances for Complex and Interesting Dialogues
J Stacey, J Cheng, J Torr, T Guigue, J Driesen, A Coca, M Gaynor, ...
arXiv preprint arXiv:2403.00462, 2024
2024
Articles 1–7