Eran Malach
School of Computer Science, Hebrew University
Verified email at mail.huji.ac.il
Title
Cited by
Year
Decoupling "when to update" from "how to update"
E Malach, S Shalev-Shwartz
Advances in neural information processing systems 30, 2017
400 · 2017
SGD learns over-parameterized networks that provably generalize on linearly separable data
A Brutzkus, A Globerson, E Malach, S Shalev-Shwartz
arXiv preprint arXiv:1710.10174, 2017
236 · 2017
Proving the lottery ticket hypothesis: Pruning is all you need
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
International Conference on Machine Learning, 6682-6691, 2020
157 · 2020
Learning parities with neural networks
A Daniely, E Malach
Advances in Neural Information Processing Systems 33, 20356-20365, 2020
40 · 2020
Is deeper better only when shallow is good?
E Malach, S Shalev-Shwartz
Advances in Neural Information Processing Systems 32, 2019
34 · 2019
Quantifying the benefit of using differentiable learning over tangent kernels
E Malach, P Kamath, E Abbe, N Srebro
International Conference on Machine Learning, 7379-7389, 2021
25 · 2021
Decoupling gating from linearity
J Fiat, E Malach, S Shalev-Shwartz
arXiv preprint arXiv:1906.05032, 2019
23 · 2019
A provably correct algorithm for deep learning that actually works
E Malach, S Shalev-Shwartz
arXiv preprint arXiv:1803.09522, 2018
21 · 2018
Computational separation between convolutional and fully-connected networks
E Malach, S Shalev-Shwartz
arXiv preprint arXiv:2010.01369, 2020
16 · 2020
ID3 learns juntas for smoothed product distributions
A Brutzkus, A Daniely, E Malach
Conference on Learning Theory, 902-915, 2020
15 · 2020
On the optimality of trees generated by ID3
A Brutzkus, A Daniely, E Malach
arXiv preprint arXiv:1907.05444, 2019
12 · 2019
Hidden progress in deep learning: SGD learns parities near the computational limit
B Barak, BL Edelman, S Goel, S Kakade, E Malach, C Zhang
arXiv preprint arXiv:2207.08799, 2022
11 · 2022
On the power of differentiable learning versus PAC and SQ learning
E Abbe, P Kamath, E Malach, C Sandon, N Srebro
Advances in Neural Information Processing Systems 34, 24340-24351, 2021
10 · 2021
The connection between approximation, depth separation and learnability in neural networks
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
Conference on Learning Theory, 3265-3295, 2021
10 · 2021
Learning Boolean circuits with neural networks
E Malach, S Shalev-Shwartz
arXiv preprint arXiv:1910.11923, 2019
5 · 2019
SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data (2017)
A Brutzkus, A Globerson, E Malach, S Shalev-Shwartz
arXiv preprint cs.LG/1710.10174, 2017
5 · 2017
The implications of local correlation on learning some deep functions
E Malach, S Shalev-Shwartz
Advances in Neural Information Processing Systems 33, 1322-1332, 2020
4 · 2020
When hardness of approximation meets hardness of learning
E Malach, S Shalev-Shwartz
Journal of Machine Learning Research 23 (91), 1-24, 2022
3 · 2022
Knowledge Distillation: Bad Models Can Be Good Role Models
G Kaplun, E Malach, P Nakkiran, S Shalev-Shwartz
arXiv preprint arXiv:2203.14649, 2022
2 · 2022
Full image detection
Y Shambik, O Chitrit, E Malach, D Kufra
US Patent App. 17/499,291, 2022
2 · 2022
Articles 1–20