From quantity to quality: Boosting LLM performance with self-guided data selection for instruction tuning M Li, Y Zhang, Z Li, J Chen, L Chen, N Cheng, J Wang, T Zhou, J Xiao arXiv preprint arXiv:2308.12032, 2023 | 35 | 2023 |
InstructZero: Efficient instruction optimization for black-box large language models L Chen*, J Chen*, T Goldstein, H Huang, T Zhou arXiv preprint arXiv:2306.03082, 2023 | 31 | 2023 |
When do you need chain-of-thought prompting for ChatGPT? J Chen, L Chen, H Huang, T Zhou arXiv preprint arXiv:2304.03262, 2023 | 31 | 2023 |
A closer look at distribution shifts and out-of-distribution generalization on graphs M Ding*, K Kong*, J Chen*, J Kirchenbauer, M Goldblum, D Wipf, ... NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021 | 30* | 2021 |
GOAT: A Global Transformer on Large-scale Graphs K Kong, J Chen, J Kirchenbauer, R Ni, CB Bruss, T Goldstein International Conference on Machine Learning 2023, 2023 | 20 | 2023 |
Gaussian process assisted active learning of physical laws J Chen, L Kang, G Lin Technometrics 63 (3), 329-342, 2021 | 20 | 2021 |
Quantifying uncertainty in answers from any language model via intrinsic and extrinsic confidence assessment J Chen, J Mueller arXiv preprint arXiv:2308.16175, 2023 | 17* | 2023 |
How Many Demonstrations Do You Need for In-context Learning? J Chen, L Chen, C Zhu, T Zhou Empirical Methods in Natural Language Processing 2023, 2023 | 17* | 2023 |
Particle-based energetic variational inference Y Wang, J Chen, C Liu, L Kang Statistics and Computing 31, 1-17, 2021 | 17 | 2021 |
Why propagate alone? Parallel use of labels and features on graphs Y Wang, J Jin, W Zhang, Y Yang, J Chen, Q Gan, Y Yu, Z Zhang, Z Huang, ..., 2020 | 17* | 2020 |
Does your graph need a confidence boost? Convergent boosted smoothing on graphs with tabular node features J Chen, J Mueller, VN Ioannidis, S Adeshina, Y Wang, T Goldstein, D Wipf International Conference on Learning Representations (ICLR) 2022, 2021 | 14 | 2021 |
Reflection-tuning: Recycling data for better instruction-tuning M Li, L Chen, J Chen, S He, T Zhou NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023 | 7* | 2023 |
Why propagate alone? parallel use of labels and features on graphs Y Wang, J Jin, W Zhang, Y Yang, J Chen, Q Gan, Y Yu, Z Zhang, Z Huang, ... arXiv preprint arXiv:2110.07190, 2021 | 6 | 2021 |
Convergent boosted smoothing for modeling graph data with tabular node features J Chen, J Mueller, VN Ioannidis, S Adeshina, Y Wang, T Goldstein, D Wipf International Conference on Learning Representations (ICLR) 2022, 2021 | 5 | 2021 |
Understanding the role of self-supervised learning in out-of-distribution detection task J Chen, C Zhu, B Dai arXiv preprint arXiv:2110.13435, 2021 | 4 | 2021 |
ODIN: Disentangled Reward Mitigates Hacking in RLHF L Chen, C Zhu, D Soselia, J Chen, T Zhou, T Goldstein, H Huang, ... arXiv preprint arXiv:2402.07319, 2024 | 3 | 2024 |
PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer L Chen, J Chen, H Huang, M Cheng Empirical Methods in Natural Language Processing 2023, 2023 | 3 | 2023 |
Automated Data Curation for Robust Language Model Fine-Tuning J Chen, J Mueller arXiv preprint arXiv:2403.12776, 2024 | 1 | 2024 |
Can LLMs speak for diverse people? Tuning LLMs via debate to generate controllable controversial statements M Li, J Chen, L Chen, T Zhou arXiv preprint arXiv:2402.10614, 2024 | 1 | 2024 |