Oleksiy Ostapenko
Verified email at umontreal.ca
Title
Cited by
Year
Learning to remember: A synaptic plasticity driven framework for continual learning
O Ostapenko, M Puscas, T Klein, P Jähnichen, M Nabi
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2019
Cited by 280 · 2019
Online fast adaptation and knowledge accumulation (OSAKA): a new approach to continual learning
M Caccia, P Rodriguez, O Ostapenko, F Normandin, M Lin, ...
Advances in Neural Information Processing Systems 33, 16532-16545, 2020
Cited by 138* · 2020
Continual learning via local module composition
O Ostapenko, P Rodriguez, M Caccia, L Charlin
Advances in Neural Information Processing Systems 34, 30298-30312, 2021
Cited by 50 · 2021
Continual Learning with Foundation Models: An Empirical Study of Latent Replay
O Ostapenko, T Lesort, P Rodríguez, MR Arefin, A Douillard, I Rish, ...
CoLLAs 2022, 2022
Cited by 39* · 2022
Self-paced adversarial training for multimodal few-shot learning
F Pahde, O Ostapenko, P Jähnichen, T Klein, M Nabi
2019 IEEE Winter Conference on Applications of Computer Vision (WACV), 218-226, 2019
Cited by 18 · 2019
Sequoia: A software framework to unify continual learning research
F Normandin, F Golemo, O Ostapenko, P Rodriguez, MD Riemer, ...
arXiv preprint arXiv:2108.01005, 2021
Cited by 15 · 2021
Pruning at a glance: Global neural pruning for model compression
A Salama, O Ostapenko, T Klein, M Nabi
arXiv preprint arXiv:1912.00200, 2019
Cited by 15 · 2019
Prune your neurons blindly: Neural network compression through structured class-blind pruning
A Salama, O Ostapenko, T Klein, M Nabi
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 13 · 2019
Scaling the number of tasks in continual learning
T Lesort, O Ostapenko, D Misra, MR Arefin, P Rodríguez, L Charlin, I Rish
arXiv preprint arXiv:2207.04543, 2022
Cited by 6 · 2022
Challenging Common Assumptions about Catastrophic Forgetting and Knowledge Accumulation
T Lesort, O Ostapenko, P Rodríguez, D Misra, MR Arefin, L Charlin, I Rish
Conference on Lifelong Learning Agents, 43-65, 2023
Cited by 3 · 2023
Attention for Compositional Modularity
O Ostapenko, P Rodriguez, A Lacoste, L Charlin
NeurIPS'22 Workshop on All Things Attention: Bridging Different Perspectives …, 2022
Cited by 3 · 2022
Sequoia: towards a systematic organization of continual learning research
F Normandin, F Golemo, O Ostapenko, M Riemer, P Rodriguez, J Hurtado, ...
Github repository, 2021
Cited by 3 · 2021
Pruning at a glance: A structured class-blind pruning technique for model compression
A Salama, O Ostapenko, M Nabi, T Klein
Cited by 3 · 2018
Online fast adaptation and knowledge accumulation: A new approach to continual learning. arXiv 2020
M Caccia, P Rodriguez, O Ostapenko, F Normandin, M Lin, L Caccia, ...
arXiv preprint arXiv:2003.05856, 2020
Cited by 3
Guiding language model reasoning with planning tokens
X Wang, L Caccia, O Ostapenko, X Yuan, A Sordoni
arXiv preprint arXiv:2310.05707, 2023
Cited by 2 · 2023
Self-paced adversarial training for multimodal and 3D model few-shot learning
F Pahde, O Ostapenko, T Klein, M Nabi, M Puscas
US Patent 10,990,848, 2021
Cited by 2 · 2021
Challenging Common Assumptions about Catastrophic Forgetting
T Lesort, O Ostapenko, D Misra, MR Arefin, P Rodríguez, L Charlin, I Rish
arXiv preprint arXiv:2207.04543, 2022
Cited by 1 · 2022
A Case Study of Instruction Tuning with Mixture of Parameter-Efficient Experts
O Ostapenko, L Caccia, Z Su, N Le Roux, L Charlin, A Sordoni
NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023
2023
From IID to the Independent Mechanisms assumption in continual learning
O Ostapenko, P Rodríguez, A Lacoste, L Charlin
AAAI Bridge Program on Continual Causality, 25-29, 2023
2023
Generative adversarial network with dynamic capacity expansion for continual learning
M Puscas, M Nabi, T Klein, O Ostapenko
US Patent 11,544,532, 2023
2023
Articles 1–20