Yeonjong Shin
Verified email at kaist.ac.kr
Title
Cited by
Year
Dying ReLU and Initialization: Theory and Numerical Examples
L Lu, Y Shin, Y Su, GE Karniadakis
Communications in Computational Physics 28 (5), 1671-1706, 2020
253 · 2020
On the convergence of physics informed neural networks for linear second-order elliptic and parabolic type PDEs
Y Shin, J Darbon, GE Karniadakis
Communications in Computational Physics 28 (5), 2042-2074, 2020
141* · 2020
Nonadaptive quasi-optimal points selection for least squares linear regression
Y Shin, D Xiu
SIAM Journal on Scientific Computing 38 (1), A385-A411, 2016
46 · 2016
Sparse Approximation using ℓ1−ℓ2 Minimization and Its Application to Stochastic Collocation
L Yan, Y Shin, D Xiu
SIAM Journal on Scientific Computing 39 (1), A229-A254, 2017
39 · 2017
Error estimates of residual minimization using neural networks for linear PDEs
Y Shin, Z Zhang, GE Karniadakis
arXiv preprint arXiv:2010.08019, 2020
28 · 2020
Convergence rate of DeepONets for learning operators arising from advection-diffusion equations
B Deng, Y Shin, L Lu, Z Zhang, GE Karniadakis
arXiv preprint arXiv:2102.10621, 2021
17 · 2021
A randomized algorithm for multivariate function approximation
Y Shin, D Xiu
SIAM Journal on Scientific Computing 39 (3), A983-A1002, 2017
16 · 2017
On a near optimal sampling strategy for least squares polynomial regression
Y Shin, D Xiu
Journal of Computational Physics 326, 931-946, 2016
16 · 2016
Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions
AD Jagtap, Y Shin, K Kawaguchi, GE Karniadakis
Neurocomputing 468, 165-180, 2022
15 · 2022
Trainability of ReLU networks and Data-dependent Initialization
Y Shin, GE Karniadakis
Journal of Machine Learning for Modeling and Computing 1 (Issue 1), 39-74, 2020
14* · 2020
Correcting data corruption errors for multivariate function approximation
Y Shin, D Xiu
SIAM Journal on Scientific Computing 38 (4), A2492-A2511, 2016
12 · 2016
GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems
Z Zhang, Y Shin, GE Karniadakis
Philosophical Transactions of the Royal Society A 380 (2229), 20210207, 2022
10 · 2022
A randomized tensor quadrature method for high dimensional polynomial approximation
K Wu, Y Shin, D Xiu
SIAM Journal on Scientific Computing 39 (5), A1811-A1833, 2017
9 · 2017
Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks
Y Shin
Analysis and Applications 20 (01), 73-119, 2022
6 · 2022
Sequential function approximation with noisy data
Y Shin, K Wu, D Xiu
Journal of Computational Physics 371, 363-381, 2018
6 · 2018
Plateau phenomenon in gradient descent training of ReLU networks: Explanation, quantification, and avoidance
M Ainsworth, Y Shin
SIAM Journal on Scientific Computing 43 (5), A3438-A3468, 2021
5 · 2021
S-OPT: A Points Selection Algorithm for Hyper-Reduction in Reduced Order Models
JT Lauzon, SW Cheung, Y Shin, Y Choi, DM Copeland, K Huynh
arXiv preprint arXiv:2203.16494, 2022
1 · 2022
A Caputo fractional derivative-based algorithm for optimization
Y Shin, J Darbon, GE Karniadakis
arXiv preprint arXiv:2104.02259, 2021
1 · 2021
Approximation rates of DeepONets for learning operators arising from advection-diffusion equations
B Deng, Y Shin, L Lu, Z Zhang, GE Karniadakis
Neural Networks, 2022
2022
Identification of Corrupted Data via k-Means Clustering for Function Approximation
J Hou, Y Shin, D Xiu
CSIAM Transactions on Applied Mathematics 2, 81-107, 2021
2021