Zhe Wang
Verified email at osu.edu
Title · Cited by · Year
SpiderBoost and momentum: Faster variance reduction algorithms
Z Wang, K Ji, Y Zhou, Y Liang, V Tarokh
arXiv preprint arXiv:1810.10690, 2018
Cited by 261* · 2018
Improving sample complexity bounds for (natural) actor-critic algorithms
T Xu, Z Wang, Y Liang
Advances in Neural Information Processing Systems 33, 4358-4369, 2020
Cited by 118 · 2020
Improved zeroth-order variance reduced algorithms and analysis for nonconvex optimization
K Ji, Z Wang, Y Zhou, Y Liang
International conference on machine learning, 3100-3109, 2019
Cited by 79 · 2019
Non-asymptotic convergence analysis of two time-scale (natural) actor-critic algorithms
T Xu, Z Wang, Y Liang
arXiv preprint arXiv:2005.03557, 2020
Cited by 66 · 2020
Stochastic variance-reduced cubic regularization for nonconvex optimization
Z Wang, Y Zhou, Y Liang, G Lan
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Cited by 63 · 2019
Reanalysis of variance reduced temporal difference learning
T Xu, Z Wang, Y Zhou, Y Liang
arXiv preprint arXiv:2001.01898, 2020
Cited by 52 · 2020
Cubic regularization with momentum for nonconvex optimization
Z Wang, Y Zhou, Y Liang, G Lan
Uncertainty in Artificial Intelligence, 313-322, 2020
Cited by 31 · 2020
Gradient free minimax optimization: Variance reduction and faster convergence
T Xu, Z Wang, Y Liang, HV Poor
arXiv preprint arXiv:2006.09361, 2020
Cited by 28 · 2020
Enhanced first and zeroth order variance reduced algorithms for min-max optimization
T Xu, Z Wang, Y Liang, HV Poor
Cited by 26 · 2020
Convergence of cubic regularization for nonconvex optimization under KL property
Y Zhou, Z Wang, Y Liang
Advances in Neural Information Processing Systems 31, 2018
Cited by 26 · 2018
Spectral algorithms for community detection in directed networks
Z Wang, Y Liang, P Ji
Journal of Machine Learning Research, 2020
Cited by 24 · 2020
History-gradient aided batch size adaptation for variance reduced algorithms
K Ji, Z Wang, B Weng, Y Zhou, W Zhang, Y Liang
International Conference on Machine Learning, 4762-4772, 2020
Cited by 20* · 2020
A note on inexact gradient and Hessian conditions for cubic regularized Newton’s method
Z Wang, Y Zhou, Y Liang, G Lan
Operations Research Letters 47 (2), 146-149, 2019
Cited by 20* · 2019
Momentum schemes with stochastic variance reduction for nonconvex composite optimization
Y Zhou, Z Wang, K Ji, Y Liang, V Tarokh
arXiv preprint arXiv:1902.02715, 2019
Cited by 13 · 2019
Proximal gradient algorithm with momentum and flexible parameter restart for nonconvex optimization
Y Zhou, Z Wang, K Ji, Y Liang, V Tarokh
arXiv preprint arXiv:2002.11582, 2020
Cited by 11 · 2020
ACMo: Angle-Calibrated Moment Methods for Stochastic Optimization
X Huang, R Xu, H Zhou, Z Wang, Z Liu, L Li
Proceedings of the AAAI Conference on Artificial Intelligence 35 (9), 7857-7864, 2021
Cited by 1 · 2021
Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs
X Huang, H Zhou, R Xu, Z Wang, L Li
arXiv preprint arXiv:2006.07037, 2020
Cited by 1 · 2020
Articles 1–17