Ruosong Wang
Verified email at andrew.cmu.edu
Title · Cited by · Year
Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks
S Arora, S Du, W Hu, Z Li, R Wang
International Conference on Machine Learning, 322-332, 2019
940 · 2019
On exact computation with an infinitely wide neural net
S Arora, SS Du, W Hu, Z Li, RR Salakhutdinov, R Wang
Advances in Neural Information Processing Systems 32, 2019
887 · 2019
Graph neural tangent kernel: Fusing graph neural networks with graph kernels
SS Du, K Hou, RR Salakhutdinov, B Poczos, R Wang, K Xu
Advances in Neural Information Processing Systems 32, 2019
252 · 2019
Is a good representation sufficient for sample efficient reinforcement learning?
SS Du, SM Kakade, R Wang, LF Yang
arXiv preprint arXiv:1910.03016, 2019
214 · 2019
Reinforcement learning with general value function approximation: Provably efficient approach via bounded eluder dimension
R Wang, RR Salakhutdinov, L Yang
Advances in Neural Information Processing Systems 33, 6123-6135, 2020
208* · 2020
Bilinear classes: A structural framework for provable generalization in RL
S Du, S Kakade, J Lee, S Lovett, G Mahajan, W Sun, R Wang
International Conference on Machine Learning, 2826-2836, 2021
198 · 2021
Harnessing the power of infinitely wide deep nets on small-data tasks
S Arora, SS Du, Z Li, R Salakhutdinov, R Wang, D Yu
arXiv preprint arXiv:1910.01663, 2019
172 · 2019
What are the statistical limits of offline RL with linear function approximation?
R Wang, DP Foster, SM Kakade
arXiv preprint arXiv:2010.11895, 2020
168 · 2020
Optimism in reinforcement learning with generalized linear function approximation
Y Wang, R Wang, SS Du, A Krishnamurthy
arXiv preprint arXiv:1912.04136, 2019
155 · 2019
Enhanced convolutional neural tangent kernels
Z Li, R Wang, D Yu, SS Du, W Hu, R Salakhutdinov, S Arora
arXiv preprint arXiv:1911.00809, 2019
120 · 2019
On reward-free reinforcement learning with linear function approximation
R Wang, SS Du, L Yang, RR Salakhutdinov
Advances in Neural Information Processing Systems 33, 17816-17826, 2020
106 · 2020
Provably efficient Q-learning with function approximation via distribution shift error checking oracle
SS Du, Y Luo, R Wang, H Zhang
Advances in Neural Information Processing Systems 32, 2019
96 · 2019
Is long horizon RL more difficult than short horizon RL?
R Wang, SS Du, L Yang, S Kakade
Advances in Neural Information Processing Systems 33, 9075-9085, 2020
64* · 2020
Agnostic Q-learning with function approximation in deterministic systems: Near-optimal bounds on approximation error and sample complexity
SS Du, JD Lee, G Mahajan, R Wang
Advances in Neural Information Processing Systems 33, 22327-22337, 2020
57* · 2020
Nearly optimal sampling algorithms for combinatorial pure exploration
L Chen, A Gupta, J Li, M Qiao, R Wang
Conference on Learning Theory, 482-534, 2017
56 · 2017
Exponential separations in the energy complexity of leader election
YJ Chang, T Kopelowitz, S Pettie, R Wang, W Zhan
ACM Transactions on Algorithms (TALG) 15 (4), 1-31, 2019
50 · 2019
An exponential lower bound for linearly realizable MDP with constant suboptimality gap
Y Wang, R Wang, S Kakade
Advances in Neural Information Processing Systems 34, 9521-9533, 2021
48 · 2021
Preference-based reinforcement learning with finite-time guarantees
Y Xu, R Wang, L Yang, A Singh, A Dubrawski
Advances in Neural Information Processing Systems 33, 18784-18794, 2020
43 · 2020
Instabilities of offline RL with pre-trained neural representation
R Wang, Y Wu, R Salakhutdinov, S Kakade
International Conference on Machine Learning, 10948-10960, 2021
42 · 2021
k-regret minimizing set: Efficient algorithms and hardness
W Cao, J Li, H Wang, K Wang, R Wang, R Chi-Wing Wong, W Zhan
20th International Conference on Database Theory (ICDT 2017), 2017
39 · 2017