Shaomu Tan
1. Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
   B Liao, S Tan, C Monz
   Advances in Neural Information Processing Systems 36, 2024
   Cited by: 8* · Year: 2024

2. Towards a Better Understanding of Variations in Zero-Shot Neural Machine Translation Performance
   S Tan, C Monz
   arXiv preprint arXiv:2310.10385, 2023
   Cited by: 3 · Year: 2023

3. Neuron Specialization: Leveraging Intrinsic Task Modularity for Multilingual Machine Translation
   S Tan, D Wu, C Monz
   arXiv preprint arXiv:2404.11201, 2024
   Year: 2024

4. How Far Can 100 Samples Go? Unlocking Overall Zero-Shot Multilingual Translation via Tiny Multi-Parallel Data
   D Wu, S Tan, Y Meng, D Stap, C Monz
   arXiv preprint arXiv:2401.12413, 2024
   Year: 2024

5. UvA-MT's Participation in the WMT23 General Translation Shared Task
   D Wu, S Tan, D Stap, A Araabi, C Monz
   arXiv preprint arXiv:2310.09946, 2023
   Year: 2023

6. Towards Leveraging Latent Knowledge and Dialogue Context for Real-World Conversational Question Answering
   S Tan, D Paperno
   arXiv preprint arXiv:2212.08946, 2022
   Year: 2022