Issam H. Laradji
McGill + Element AI
Software defect prediction using ensemble learning on selected features
IH Laradji, M Alshayeb, L Ghouti
Information and Software Technology 58, 388-402, 2015
Coordinate descent converges faster with the Gauss-Southwell rule than random selection
J Nutini, M Schmidt, I Laradji, M Friedlander, H Koepke
International Conference on Machine Learning (ICML), 1632-1641, 2015
Where are the blobs: Counting by localization with point supervision
IH Laradji, N Rostamzadeh, PO Pinheiro, D Vazquez, M Schmidt
Proceedings of the European Conference on Computer Vision (ECCV), 547-562, 2018
Painless stochastic gradient: Interpolation, line-search, and convergence rates
S Vaswani, A Mishkin, I Laradji, M Schmidt, G Gidel, S Lacoste-Julien
Advances in Neural Information Processing Systems, 3732-3745, 2019
Let's Make Block Coordinate Descent Go Fast: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence
J Nutini, I Laradji, M Schmidt
arXiv preprint arXiv:1712.08859, 2017
Perceptual hashing of color images using hypercomplex representations
IH Laradji, L Ghouti, EH Khiari
2013 IEEE International Conference on Image Processing, 4402-4406, 2013
Convergence rates for greedy Kaczmarz algorithms, and faster randomized Kaczmarz rules using the orthogonality graph
J Nutini, B Sepehry, I Laradji, M Schmidt, H Koepke, A Virani
arXiv preprint arXiv:1612.07838, 2016
M-ADDA: Unsupervised domain adaptation with deep metric learning
IH Laradji, R Babanezhad
Domain Adaptation for Visual Understanding, 17-31, 2020
Where are the masks: Instance segmentation with image-level supervision
IH Laradji, D Vazquez, M Schmidt
arXiv preprint arXiv:1907.01430, 2019
Convergence rates for greedy Kaczmarz algorithms
J Nutini, B Sepehry, A Virani, I Laradji, M Schmidt, H Koepke
Conference on Uncertainty in Artificial Intelligence, 2016
Stochastic Polyak step-size for SGD: An adaptive learning rate for fast convergence
N Loizou, S Vaswani, I Laradji, S Lacoste-Julien
arXiv preprint arXiv:2002.10542, 2020
Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning
M Caccia, P Rodriguez, O Ostapenko, F Normandin, M Lin, L Caccia, ...
arXiv preprint arXiv:2003.05856, 2020
Instance segmentation with point supervision
IH Laradji, N Rostamzadeh, PO Pinheiro, D Vázquez, M Schmidt
arXiv preprint arXiv:1906.06392, 2019
Fast and furious convergence: Stochastic second order methods under interpolation
SY Meng, S Vaswani, IH Laradji, M Schmidt, S Lacoste-Julien
International Conference on Artificial Intelligence and Statistics, 1375-1386, 2020
Embedding Propagation: Smoother Manifold for Few-Shot Classification
P Rodríguez, I Laradji, A Drouin, A Lacoste
arXiv preprint arXiv:2003.04151, 2020
ESNet: An Efficient Symmetric Network for Real-Time Semantic Segmentation
Y Wang, Q Zhou, J Xiong, X Wu, X Jin
Chinese Conference on Pattern Recognition and Computer Vision (PRCV), 41-52, 2019
MASAGA: A linearly-convergent stochastic first-order method for optimization on manifolds
R Babanezhad, IH Laradji, A Shafaei, M Schmidt
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2018
Learning data augmentation with online bilevel optimization for image classification
S Mounsaveng, I Laradji, IB Ayed, D Vazquez, M Pedersoli
arXiv preprint arXiv:2006.14699, 2020
Adaptive Gradient Methods Converge Faster with Over-Parameterization (and you can do a line-search)
S Vaswani, F Kunstner, I Laradji, SY Meng, M Schmidt, S Lacoste-Julien
arXiv preprint arXiv:2006.06835, 2020
Class-Based Styling: Real-time Localized Style Transfer with Semantic Segmentation
L Kurzman, D Vazquez, I Laradji
Proceedings of the IEEE International Conference on Computer Vision …, 2019