Rohan Anil
Principal Engineer, Google Brain
Wide & deep learning for recommender systems
HT Cheng, L Koc, J Harmsen, T Shaked, T Chandra, H Aradhye, ...
Proceedings of the 1st workshop on deep learning for recommender systems, 7-10, 2016
Large scale distributed neural network training through online distillation
R Anil, G Pereyra, AT Passos, R Ormandi, G Dahl, G Hinton
Sixth International Conference on Learning Representations, 2018
Lingvo: a modular and scalable framework for sequence-to-sequence modeling
J Shen, P Nguyen, Y Wu, Z Chen, MX Chen, Y Jia, A Kannan, T Sainath, ...
arXiv preprint arXiv:1902.08295, 2019
TF-Ranking: Scalable TensorFlow library for learning-to-rank
RK Pasumarthi, S Bruch, X Wang, C Li, M Bendersky, M Najork, J Pfeifer, ...
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
Robust bi-tempered logistic loss based on Bregman divergences
E Amid, MK Warmuth, R Anil, T Koren
2019 Conference on Neural Information Processing Systems, 2019
Scalable Second Order Optimization for Deep Learning
R Anil, V Gupta, T Koren, K Regan, Y Singer
arXiv preprint arXiv:2002.09018, 2020
Memory-efficient adaptive optimization for large-scale learning
R Anil, V Gupta, T Koren, Y Singer
2019 Conference on Neural Information Processing Systems, 2019
Efficiently Identifying Task Groupings for Multi-Task Learning
C Fifty, E Amid, Z Zhao, T Yu, R Anil, C Finn
2021 Conference on Neural Information Processing Systems, Spotlight, 2021
Knowledge distillation: A good teacher is patient and consistent
L Beyer, X Zhai, A Royer, L Markeeva, R Anil, A Kolesnikov
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Large-Scale Differentially Private BERT
R Anil, B Ghazi, V Gupta, R Kumar, P Manurangsi
Privacy Preserving Machine Learning, 2021
Disentangling adaptive gradient methods from learning rates
N Agarwal, R Anil, E Hazan, T Koren, C Zhang
arXiv preprint arXiv:2002.11803, 2020
Wide and deep machine learning models
T Shaked, R Anil, HB Aradhye, G Anderson, W Chai, ML Koc, J Harmsen, ...
US Patent 10,762,422, 2020
A large batch optimizer reality check: Traditional, generic optimizers suffice across batch sizes
Z Nado, JM Gilmer, CJ Shallue, R Anil, GE Dahl
arXiv preprint arXiv:2102.06356, 2021
Stochastic Optimization with Laggard Data Pipelines
N Agarwal, R Anil, T Koren, K Talwar, C Zhang
2020 Conference on Neural Information Processing Systems, 2020
Locoprop: Enhancing backprop via local loss optimization
E Amid, R Anil, MK Warmuth
The 25th International Conference on Artificial Intelligence and Statistics …, 2021
Step-size Adaptation Using Exponentiated Gradient Updates
E Amid, R Anil, C Fifty, MK Warmuth
ICML'20 Workshop on "Beyond First Order Methods in ML", Spotlight, 2020
Learning from Randomly Initialized Neural Network Features
E Amid, R Anil, W Kotłowski, MK Warmuth
arXiv preprint arXiv:2202.06438, 2022
N-Grammer: Augmenting Transformers with latent n-grams
A Roy, R Anil, G Lai, B Lee, J Zhao, S Zhang, S Wang, Y Zhang, S Wu, ...
arXiv preprint arXiv:2207.06366, 2022
Learning Rate Grafting: Transferability of Optimizer Tuning
N Agarwal, R Anil, E Hazan, T Koren, C Zhang
Distributed computing pipeline processing
R Anil, B Bayarsaikhan, E Taropa
US Patent WO2021177976A1, 2021