Gemini: a family of highly capable multimodal models. Gemini Team, R. Anil, S. Borgeaud, Y. Wu, J.-B. Alayrac, J. Yu, R. Soricut, et al. arXiv preprint arXiv:2312.11805, 2023.
PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. T. Scholak, N. Schucher, D. Bahdanau. arXiv preprint arXiv:2109.05093, 2021.
Feature-wise transformations. V. Dumoulin, E. Perez, N. Schucher, F. Strub, H. de Vries, A. Courville, Y. Bengio. Distill, 3(7), e11, 2018.
The power of prompt tuning for low-resource semantic parsing. N. Schucher, S. Reddy, H. de Vries. arXiv preprint arXiv:2110.08525, 2021.
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. M. Reid, N. Savinov, D. Teplyashin, D. Lepikhin, T. Lillicrap, J.-B. Alayrac, et al. arXiv preprint arXiv:2403.05530, 2024.
DECoVaC: Design of experiments with controlled variability components. T. Boquet, L. Delisle, D. Kochetkov, N. Schucher, P. Atighehchian, et al. arXiv preprint arXiv:1909.09859, 2019.
System for software module development. T. Boquet, N. Schucher, J. Fonseca. US Patent App. 17/499,472, 2022.
On the Compute and Parameter Efficient Fine-Tuning of Large Language Models. N. Schucher. McGill University (Canada), 2022.