Ruosong Wang
Assistant Professor, Peking University
Verified email at pku.edu.cn
Title
Cited by
Year
Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks
S Arora, S Du, W Hu, Z Li, R Wang
International Conference on Machine Learning, 322-332, 2019
1049 · 2019
On exact computation with an infinitely wide neural net
S Arora, SS Du, W Hu, Z Li, RR Salakhutdinov, R Wang
Advances in neural information processing systems 32, 2019
989 · 2019
Graph neural tangent kernel: Fusing graph neural networks with graph kernels
SS Du, K Hou, RR Salakhutdinov, B Poczos, R Wang, K Xu
Advances in neural information processing systems 32, 2019
292 · 2019
Is a good representation sufficient for sample efficient reinforcement learning?
SS Du, SM Kakade, R Wang, LF Yang
arXiv preprint arXiv:1910.03016, 2019
242 · 2019
Bilinear classes: A structural framework for provable generalization in RL
S Du, S Kakade, J Lee, S Lovett, G Mahajan, W Sun, R Wang
International Conference on Machine Learning, 2826-2836, 2021
230 · 2021
Reinforcement learning with general value function approximation: Provably efficient approach via bounded eluder dimension
R Wang, RR Salakhutdinov, L Yang
Advances in Neural Information Processing Systems 33, 6123-6135, 2020
230* · 2020
What are the statistical limits of offline RL with linear function approximation?
R Wang, DP Foster, SM Kakade
arXiv preprint arXiv:2010.11895, 2020
184 · 2020
Harnessing the power of infinitely wide deep nets on small-data tasks
S Arora, SS Du, Z Li, R Salakhutdinov, R Wang, D Yu
arXiv preprint arXiv:1910.01663, 2019
182 · 2019
Optimism in reinforcement learning with generalized linear function approximation
Y Wang, R Wang, SS Du, A Krishnamurthy
arXiv preprint arXiv:1912.04136, 2019
172 · 2019
Enhanced convolutional neural tangent kernels
Z Li, R Wang, D Yu, SS Du, W Hu, R Salakhutdinov, S Arora
arXiv preprint arXiv:1911.00809, 2019
131 · 2019
On reward-free reinforcement learning with linear function approximation
R Wang, SS Du, L Yang, RR Salakhutdinov
Advances in neural information processing systems 33, 17816-17826, 2020
123 · 2020
Provably efficient Q-learning with function approximation via distribution shift error checking oracle
SS Du, Y Luo, R Wang, H Zhang
Advances in Neural Information Processing Systems 32, 2019
105 · 2019
Is long horizon RL more difficult than short horizon RL?
R Wang, SS Du, L Yang, S Kakade
Advances in Neural Information Processing Systems 33, 9075-9085, 2020
72* · 2020
Agnostic Q-learning with Function Approximation in Deterministic Systems: Near-Optimal Bounds on Approximation Error and Sample Complexity
SS Du, JD Lee, G Mahajan, R Wang
Advances in Neural Information Processing Systems 33, 22327-22337, 2020
63* · 2020
Preference-based reinforcement learning with finite-time guarantees
Y Xu, R Wang, L Yang, A Singh, A Dubrawski
Advances in Neural Information Processing Systems 33, 18784-18794, 2020
60 · 2020
Nearly optimal sampling algorithms for combinatorial pure exploration
L Chen, A Gupta, J Li, M Qiao, R Wang
Conference on Learning Theory, 482-534, 2017
58 · 2017
Exponential separations in the energy complexity of leader election
YJ Chang, T Kopelowitz, S Pettie, R Wang, W Zhan
ACM Transactions on Algorithms (TALG) 15 (4), 1-31, 2019
55 · 2019
An exponential lower bound for linearly realizable MDP with constant suboptimality gap
Y Wang, R Wang, S Kakade
Advances in Neural Information Processing Systems 34, 9521-9533, 2021
51 · 2021
Instabilities of offline RL with pre-trained neural representation
R Wang, Y Wu, R Salakhutdinov, S Kakade
International Conference on Machine Learning, 10948-10960, 2021
51 · 2021
Tight Bounds for ℓ1 Oblivious Subspace Embeddings
R Wang, DP Woodruff
ACM Transactions on Algorithms (TALG) 18 (1), 1-32, 2022
40 · 2022
Articles 1–20