Ruiqi Zhong
Verified email at berkeley.edu - Homepage
Title · Cited by · Year
InCoder: A generative model for code infilling and synthesis
D Fried, A Aghajanyan, J Lin, S Wang, E Wallace, F Shi, R Zhong, W Yih, ...
arXiv preprint arXiv:2204.05999, 2022
Cited by 353 · 2022
UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models
T Xie, CH Wu, P Shi, R Zhong, T Scholak, M Yasunaga, CS Wu, M Zhong, ...
arXiv preprint arXiv:2201.05966, 2022
Cited by 223* · 2022
Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections
R Zhong, K Lee, Z Zhang, D Klein
EMNLP 2021, Findings, 2021
Cited by 144 · 2021
DS-1000: A natural and reliable benchmark for data science code generation
Y Lai, C Li, Y Wang, T Zhang, R Zhong, L Zettlemoyer, W Yih, D Fried, ...
International Conference on Machine Learning, 18319-18345, 2023
Cited by 88 · 2023
Meta-learning via language model in-context tuning
Y Chen, R Zhong, S Zha, G Karypis, H He
arXiv preprint arXiv:2110.07814, 2021
Cited by 87 · 2021
Semantic evaluation for text-to-SQL with distilled test suites
R Zhong, T Yu, D Klein
EMNLP 2020, 2020
Cited by 73 · 2020
Fine-grained sentiment analysis with faithful attention
R Zhong, S Shao, K McKeown
arXiv preprint arXiv:1908.06870, 2019
Cited by 48 · 2019
Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level
R Zhong, D Ghosh, D Klein, J Steinhardt
ACL 2021, Findings, 2021
Cited by 41 · 2021
Subspace embedding and linear regression with Orlicz norm
A Andoni, C Lin, Y Sheng, P Zhong, R Zhong
International Conference on Machine Learning, 224-233, 2018
Cited by 36 · 2018
Approximating how single head attention learns
C Snell, R Zhong, D Klein, J Steinhardt
arXiv preprint arXiv:2103.07601, 2021
Cited by 25 · 2021
Learning by distilling context
C Snell, D Klein, R Zhong
arXiv preprint arXiv:2209.15189, 2022
Cited by 23 · 2022
Describing differences between text distributions with natural language
R Zhong, C Snell, D Klein, J Steinhardt
International Conference on Machine Learning, 27099-27116, 2022
Cited by 23* · 2022
Detecting gang-involved escalation on social media using context
S Chang, R Zhong, E Adams, FT Lee, S Varia, D Patton, W Frey, C Kedzie, ...
EMNLP 2018, 2018
Cited by 20 · 2018
Do models explain themselves? Counterfactual simulatability of natural language explanations
Y Chen, R Zhong, N Ri, C Zhao, H He, J Steinhardt, Z Yu, K McKeown
arXiv preprint arXiv:2307.08678, 2023
Cited by 16 · 2023
Semantic scaffolds for pseudocode-to-code generation
R Zhong, M Stern, D Klein
ACL 2020, 2020
Cited by 15 · 2020
Goal driven discovery of distributional differences via language descriptions
R Zhong, P Zhang, S Li, J Ahn, D Klein, J Steinhardt
Advances in Neural Information Processing Systems 36, 2024
Cited by 14 · 2024
GAIA: A Multi-media Multi-lingual Knowledge Extraction and Hypothesis Generation System
T Zhang, A Subburathinam, G Shi, L Huang, D Lu, X Pan, M Li, B Zhang, ...
TAC, 2018
Cited by 14 · 2018
Goal-driven explainable clustering via language descriptions
Z Wang, J Shang, R Zhong
arXiv preprint arXiv:2305.13749, 2023
Cited by 7 · 2023
Non-Programmers Can Label Programs Indirectly via Active Examples: A Case Study with Text-to-SQL
R Zhong, C Snell, D Klein, J Eisner
EMNLP 2023, 2023
Cited by 6* · 2023
The effect of model size on worst-group generalization
A Pham, E Chan, V Srivatsa, D Ghosh, Y Yang, Y Yu, R Zhong, ...
arXiv preprint arXiv:2112.04094, 2021
Cited by 4 · 2021
Articles 1–20