Yeonjong Shin
Articles 1–20, ordered by citation count. An asterisk (*) after a count indicates that the figure may include citations to other versions of the article.

Dying ReLU and Initialization: Theory and Numerical Examples
L Lu, Y Shin, Y Su, GE Karniadakis
Communications in Computational Physics 28 (5), 1671-1706, 2020
Cited by 513 · 2020

On the convergence of physics informed neural networks for linear second-order elliptic and parabolic type PDEs
Y Shin, J Darbon, GE Karniadakis
Communications in Computational Physics 28 (5), 2042-2074, 2020
Cited by 341* · 2020

Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions
AD Jagtap, Y Shin, K Kawaguchi, GE Karniadakis
Neurocomputing 468, 165-180, 2022
Cited by 111 · 2022

Error estimates of residual minimization using neural networks for linear PDEs
Y Shin, Z Zhang, GE Karniadakis
Journal of Machine Learning for Modeling and Computing 4 (4), 73-101, 2023
Cited by 74 · 2023

Nonadaptive quasi-optimal points selection for least squares linear regression
Y Shin, D Xiu
SIAM Journal on Scientific Computing 38 (1), A385-A411, 2016
Cited by 66 · 2016

Approximation rates of DeepONets for learning operators arising from advection–diffusion equations
B Deng, Y Shin, L Lu, Z Zhang, GE Karniadakis
Neural Networks 153, 411-426, 2022
Cited by 64* · 2022

Sparse Approximation using ℓ1-ℓ2 Minimization and Its Application to Stochastic Collocation
L Yan, Y Shin, D Xiu
SIAM Journal on Scientific Computing 39 (1), A229-A254, 2017
Cited by 50 · 2017

GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems
Z Zhang, Y Shin, GE Karniadakis
Philosophical Transactions of the Royal Society A 380 (2229), 20210207, 2022
Cited by 42 · 2022

Trainability of ReLU networks and Data-dependent Initialization
Y Shin, GE Karniadakis
Journal of Machine Learning for Modeling and Computing 1 (1), 39-74, 2020
Cited by 27* · 2020

On a near optimal sampling strategy for least squares polynomial regression
Y Shin, D Xiu
Journal of Computational Physics 326, 931-946, 2016
Cited by 26 · 2016

S-OPT: A points selection algorithm for hyper-reduction in reduced order models
JT Lauzon, SW Cheung, Y Shin, Y Choi, DM Copeland, K Huynh
arXiv preprint arXiv:2203.16494, 2022
Cited by 23 · 2022

Plateau phenomenon in gradient descent training of ReLU networks: Explanation, quantification, and avoidance
M Ainsworth, Y Shin
SIAM Journal on Scientific Computing 43 (5), A3438-A3468, 2021
Cited by 17 · 2021

A randomized algorithm for multivariate function approximation
Y Shin, D Xiu
SIAM Journal on Scientific Computing 39 (3), A983-A1002, 2017
Cited by 16 · 2017

Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks
Y Shin
Analysis and Applications 20 (1), 73-119, 2022
Cited by 15 · 2022

Accelerating gradient descent and Adam via fractional gradients
Y Shin, J Darbon, GE Karniadakis
Neural Networks 161, 185-201, 2023
Cited by 12* · 2023

Correcting data corruption errors for multivariate function approximation
Y Shin, D Xiu
SIAM Journal on Scientific Computing 38 (4), A2492-A2511, 2016
Cited by 12 · 2016

A randomized tensor quadrature method for high dimensional polynomial approximation
K Wu, Y Shin, D Xiu
SIAM Journal on Scientific Computing 39 (5), A1811-A1833, 2017
Cited by 10 · 2017

Sequential function approximation with noisy data
Y Shin, K Wu, D Xiu
Journal of Computational Physics 371, 363-381, 2018
Cited by 7 · 2018

On the training and generalization of deep operator networks
S Lee, Y Shin
arXiv preprint arXiv:2309.01020, 2023
Cited by 5 · 2023

Active Neuron Least Squares: A training method for multivariate rectified neural networks
M Ainsworth, Y Shin
SIAM Journal on Scientific Computing 44 (4), A2253-A2275, 2022
Cited by 5 · 2022