Lechao Xiao
Google Brain
Verified email at google.com
Title · Cited by · Year
Wide neural networks of any depth evolve as linear models under gradient descent
J Lee, L Xiao, SS Schoenholz, Y Bahri, R Novak, J Sohl-Dickstein, ...
arXiv preprint arXiv:1902.06720, 2019
Cited by 290 · 2019
Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks
L Xiao, Y Bahri, J Sohl-Dickstein, S Schoenholz, J Pennington
International Conference on Machine Learning, 5393-5402, 2018
Cited by 152 · 2018
Bayesian Deep Convolutional Neural Networks with Many Channels are Gaussian Processes
R Novak, L Xiao, Y Bahri, J Lee, G Yang, DA Abolafia, J Pennington, ...
Cited by 124* · 2018
Neural tangents: Fast and easy infinite neural networks in python
R Novak, L Xiao, J Hron, J Lee, AA Alemi, J Sohl-Dickstein, ...
arXiv preprint arXiv:1912.02803, 2019
Cited by 36 · 2019
Uniform estimates for bilinear Hilbert transforms and bilinear maximal functions associated to polynomials
X Li, L Xiao
American Journal of Mathematics 138 (4), 907-962, 2016
Cited by 23 · 2016
Provable benefit of orthogonal initialization in optimizing deep linear networks
W Hu, L Xiao, J Pennington
arXiv preprint arXiv:2001.05992, 2020
Cited by 22 · 2020
Maximal decay inequalities for trilinear oscillatory integrals of convolution type
PT Gressman, L Xiao
Journal of Functional Analysis 271 (12), 3695-3726, 2016
Cited by 17 · 2016
Endpoint estimates for one-dimensional oscillatory integral operators
L Xiao
Advances in Mathematics 316, 255-291, 2017
Cited by 15 · 2017
Disentangling trainability and generalization in deep learning
L Xiao, J Pennington, S Schoenholz
Cited by 14 · 2019
Bilinear Hilbert transforms associated with plane curves
J Guo, L Xiao
The Journal of Geometric Analysis 26 (2), 967-995, 2016
Cited by 12 · 2016
Finite versus infinite neural networks: an empirical study
J Lee, SS Schoenholz, J Pennington, B Adlam, L Xiao, R Novak, ...
arXiv preprint arXiv:2007.15801, 2020
Cited by 11 · 2020
Sharp estimates for trilinear oscillatory integrals and an algorithm of two-dimensional resolution of singularities
L Xiao
arXiv preprint arXiv:1311.3725, 2013
Cited by 9* · 2013
Higher decay inequalities for multilinear oscillatory integrals
M Gilula, PT Gressman, L Xiao
arXiv preprint arXiv:1612.00050, 2016
Cited by 8 · 2016
Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
J Lee, L Xiao, SS Schoenholz, Y Bahri, R Novak, J Sohl-Dickstein, ...
arXiv preprint arXiv:1902.06720, 2019
Cited by 5 · 2019
Neural tangents: Fast and easy infinite neural networks in python
R Novak, L Xiao, J Hron, J Lee, AA Alemi, J Sohl-Dickstein, ...
URL http://github.com/google/neural-tangents
Cited by 5
Wide neural networks of any depth evolve as linear models under gradient descent
J Lee, L Xiao, SS Schoenholz, Y Bahri, R Novak, J Sohl-Dickstein, ...
Journal of Statistical Mechanics: Theory and Experiment 2020 (12), 124002, 2020
Cited by 4 · 2020
The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks
W Hu, L Xiao, B Adlam, J Pennington
arXiv preprint arXiv:2006.14599, 2020
Cited by 2 · 2020
Disentangling Trainability and Generalization in Deep Neural Networks
L Xiao, J Pennington, S Schoenholz
International Conference on Machine Learning, 10462-10472, 2020
Cited by 1 · 2020
Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit
B Adlam, J Lee, L Xiao, J Pennington, J Snoek
arXiv preprint arXiv:2010.07355, 2020
Cited by 1 · 2020
Oscillatory Loomis-Whitney and Projections of Sublevel Sets
M Gilula, K O'Neill, L Xiao
arXiv preprint arXiv:1903.12300, 2019
Cited by 1 · 2019