Soufiane Hayou
Simons Institute for the Theory of Computing, UC Berkeley | PhD @ University of Oxford
Verified email at berkeley.edu - Homepage
Title | Cited by | Year
On the impact of the activation function on deep neural networks training
S Hayou, A Doucet, J Rousseau
36th International Conference on Machine Learning (ICML 2019), 2019
272 | 2019
On the selection of initialization and activation function for deep neural networks
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1805.08266, 2018
94 | 2018
Stable ResNet
S Hayou, E Clerico, B He, G Deligiannidis, A Doucet, J Rousseau
24th International Conference on Artificial Intelligence and Statistics …, 2021
62 | 2021
Robust Pruning at Initialization
S Hayou, JF Ton, A Doucet, YW Teh
International Conference on Learning Representations (ICLR 2021), 2021
51 | 2021
LoRA+: Efficient Low Rank Adaptation of Large Models
S Hayou, N Ghosh, B Yu
ICML 2024 (arXiv:2402.12354), 2024
43 | 2024
Efficient low rank adaptation of large models
S Hayou, N Ghosh, B Yu
arXiv preprint arXiv:2402.12354, 2024
38 | 2024
Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1905.13654, 2019
30 | 2019
On the infinite-depth limit of finite-width neural networks
S Hayou
Transactions on Machine Learning Research (arXiv:2210.00688), 2022
25 | 2022
Tensor Programs VI: Feature Learning in Infinite-Depth Neural Networks
G Yang, D Yu, C Zhu, S Hayou
ICLR 2024, 2023
24 | 2023
Feature learning and signal propagation in deep neural networks
Y Lou, CE Mingard, S Hayou
International Conference on Machine Learning, 14248-14282, 2022
17 | 2022
On the impact of the activation function on deep neural networks training
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1902.06853, 2019
16* | 2019
How Bad is Training on Synthetic Data? A Statistical Analysis of Language Model Collapse
MEA Seddik, SW Chen, S Hayou, P Youssef, M Debbah
arXiv preprint arXiv:2404.05090, 2024
15 | 2024
Pruning untrained neural networks: Principles and analysis
S Hayou, JF Ton, A Doucet, YW Teh
arXiv preprint arXiv:2002.08797, 2020
15 | 2020
Width and Depth Limits Commute in Residual Networks
S Hayou, G Yang
ICML 2023 (arXiv preprint arXiv:2302.00453), 2023
14 | 2023
Training dynamics of deep networks using stochastic gradient descent via neural tangent kernel
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1905.13654, 2019
12 | 2019
Regularization in ResNet with Stochastic Depth
S Hayou, F Ayed
NeurIPS 2021, arXiv:2106.03091, 2021
10 | 2021
Feature Learning in Infinite Depth Neural Networks
G Yang, D Yu, C Zhu, S Hayou
The Twelfth International Conference on Learning Representations, 2023
9 | 2023
Leave-one-out distinguishability in machine learning
J Ye, A Borovykh, S Hayou, R Shokri
ICLR 2024, arXiv preprint arXiv:2309.17310, 2023
8 | 2023
Data pruning and neural scaling laws: fundamental limitations of score-based algorithms
F Ayed, S Hayou
Transactions on Machine Learning Research (arXiv preprint arXiv:2302 …), 2023
7 | 2023
The curse of depth in kernel regime
S Hayou, A Doucet, J Rousseau
I (Still) Can't Believe It's Not Better! Workshop at NeurIPS 2021, 41-47, 2022
6 | 2022
Articles 1–20