Elad Hoffer
PhD, Research @ Habana Labs
Verified email at habana.ai
Title · Cited by · Year
Deep metric learning using triplet network
E Hoffer, N Ailon
Similarity-Based Pattern Recognition: Third International Workshop, SIMBAD …, 2015
2373 · 2015
Train longer, generalize better: closing the generalization gap in large batch training of neural networks
E Hoffer, I Hubara, D Soudry
Advances in neural information processing systems 30, 2017
927 · 2017
The implicit bias of gradient descent on separable data
D Soudry, E Hoffer, MS Nacson, S Gunasekar, N Srebro
Journal of Machine Learning Research 19 (70), 1-57, 2018
891 · 2018
Scalable methods for 8-bit training of neural networks
R Banner, I Hubara, E Hoffer, D Soudry
Advances in neural information processing systems 31, 2018
354 · 2018
Augment your batch: Improving generalization through instance repetition
E Hoffer, T Ben-Nun, I Hubara, N Giladi, T Hoefler, D Soudry
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
280* · 2020
Norm matters: efficient and accurate normalization schemes in deep networks
E Hoffer, R Banner, I Golan, D Soudry
Advances in Neural Information Processing Systems 31, 2018
176 · 2018
Bayesian gradient descent: Online variational Bayes learning with increased robustness to catastrophic forgetting and weight pruning
C Zeno, I Golan, E Hoffer, D Soudry
arXiv preprint arXiv:1803.10123, 2018
116* · 2018
Fix your classifier: the marginal value of training the last weight layer
E Hoffer, I Hubara, D Soudry
arXiv preprint arXiv:1801.04540, 2018
103 · 2018
The knowledge within: Methods for data-free model compression
M Haroush, I Hubara, E Hoffer, D Soudry
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
101 · 2020
Exponentially vanishing sub-optimal local minima in multilayer neural networks
D Soudry, E Hoffer
arXiv preprint arXiv:1702.05777, 2017
101 · 2017
Aciq: Analytical clipping for integer quantization of neural networks
R Banner, Y Nahshan, E Hoffer, D Soudry
73 · 2018
Neural gradients are lognormally distributed: understanding sparse and quantized training
B Chmiel, L Ben-Uri, M Shkolnik, E Hoffer, R Banner, D Soudry
arXiv, 2020
48* · 2020
Task-agnostic continual learning using online variational bayes with fixed-point updates
C Zeno, I Golan, E Hoffer, D Soudry
Neural Computation 33 (11), 3139-3177, 2021
47* · 2021
Semi-supervised deep learning by metric embedding
E Hoffer, N Ailon
arXiv preprint arXiv:1611.01449, 2016
39 · 2016
Deep unsupervised learning through spatial contrasting
E Hoffer, I Hubara, N Ailon
arXiv preprint arXiv:1610.00243, 2016
32 · 2016
Mix & match: training convnets with mixed image sizes for improved accuracy, speed and scale resiliency
E Hoffer, B Weinstein, I Hubara, T Ben-Nun, T Hoefler, D Soudry
arXiv preprint arXiv:1908.08986, 2019
23 · 2019
Logarithmic unbiased quantization: Practical 4-bit training in deep learning
B Chmiel, R Banner, E Hoffer, HB Yaacov, D Soudry
21* · 2021
At Stability's Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks?
N Giladi, MS Nacson, E Hoffer, D Soudry
arXiv preprint arXiv:1909.12340, 2019
18 · 2019
Quantized back-propagation: Training binarized neural networks with quantized gradients
I Hubara, E Hoffer, D Soudry
6 · 2018
Accurate neural training with 4-bit matrix multiplications at standard formats
B Chmiel, R Banner, E Hoffer, H Ben-Yaacov, D Soudry
The Eleventh International Conference on Learning Representations, 2022
2 · 2022
Articles 1–20