Z Wang, K Ji, Y Zhou, Y Liang, V Tarokh. "SpiderBoost and momentum: Faster variance reduction algorithms." arXiv preprint arXiv:1810.10690, 2018. Cited by 261*.
T Xu, Z Wang, Y Liang. "Improving sample complexity bounds for (natural) actor-critic algorithms." Advances in Neural Information Processing Systems 33, 4358-4369, 2020. Cited by 118.
K Ji, Z Wang, Y Zhou, Y Liang. "Improved zeroth-order variance reduced algorithms and analysis for nonconvex optimization." International Conference on Machine Learning, 3100-3109, 2019. Cited by 79.
T Xu, Z Wang, Y Liang. "Non-asymptotic convergence analysis of two time-scale (natural) actor-critic algorithms." arXiv preprint arXiv:2005.03557, 2020. Cited by 66.
Z Wang, Y Zhou, Y Liang, G Lan. "Stochastic variance-reduced cubic regularization for nonconvex optimization." The 22nd International Conference on Artificial Intelligence and Statistics …, 2019. Cited by 63.
T Xu, Z Wang, Y Zhou, Y Liang. "Reanalysis of variance reduced temporal difference learning." arXiv preprint arXiv:2001.01898, 2020. Cited by 52.
Z Wang, Y Zhou, Y Liang, G Lan. "Cubic regularization with momentum for nonconvex optimization." Uncertainty in Artificial Intelligence, 313-322, 2020. Cited by 31.
T Xu, Z Wang, Y Liang, HV Poor. "Gradient free minimax optimization: Variance reduction and faster convergence." arXiv preprint arXiv:2006.09361, 2020. Cited by 28.
T Xu, Z Wang, Y Liang, HV Poor. "Enhanced first and zeroth order variance reduced algorithms for min-max optimization." 2020. Cited by 26.
Y Zhou, Z Wang, Y Liang. "Convergence of cubic regularization for nonconvex optimization under KL property." Advances in Neural Information Processing Systems 31, 2018. Cited by 26.
Z Wang, Y Liang, P Ji. "Spectral algorithms for community detection in directed networks." Journal of Machine Learning Research, 2020. Cited by 24.
K Ji, Z Wang, B Weng, Y Zhou, W Zhang, Y Liang. "History-gradient aided batch size adaptation for variance reduced algorithms." International Conference on Machine Learning, 4762-4772, 2020. Cited by 20*.
Z Wang, Y Zhou, Y Liang, G Lan. "A note on inexact gradient and Hessian conditions for cubic regularized Newton's method." Operations Research Letters 47 (2), 146-149, 2019. Cited by 20*.
Y Zhou, Z Wang, K Ji, Y Liang, V Tarokh. "Momentum schemes with stochastic variance reduction for nonconvex composite optimization." arXiv preprint arXiv:1902.02715, 2019. Cited by 13.
Y Zhou, Z Wang, K Ji, Y Liang, V Tarokh. "Proximal gradient algorithm with momentum and flexible parameter restart for nonconvex optimization." arXiv preprint arXiv:2002.11582, 2020. Cited by 11.
X Huang, R Xu, H Zhou, Z Wang, Z Liu, L Li. "ACMo: Angle-Calibrated Moment Methods for Stochastic Optimization." Proceedings of the AAAI Conference on Artificial Intelligence 35 (9), 7857-7864, 2021. Cited by 1.
X Huang, H Zhou, R Xu, Z Wang, L Li. "Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs." arXiv preprint arXiv:2006.07037, 2020. Cited by 1.