| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Pre-training With Whole Word Masking for Chinese BERT | Y Cui, W Che, T Liu, B Qin, Z Yang | IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP) 29 …, 2021 | 1564 | 2021 |
| Revisiting Pre-Trained Models for Chinese Natural Language Processing | Y Cui, W Che, T Liu, B Qin, S Wang, G Hu | Findings of EMNLP 2020, 657–668, 2020 | 829 | 2020 |
| Attention-over-Attention Neural Networks for Reading Comprehension | Y Cui, Z Chen, S Wei, S Wang, T Liu, G Hu | ACL 2017, 593–602, 2017 | 542 | 2017 |
| CLUE: A Chinese Language Understanding Evaluation Benchmark | L Xu, X Zhang, L Li, H Hu, C Cao, W Liu, J Li, Y Li, K Sun, Y Xu, Y Cui, ... | COLING 2020, 4762–4772, 2020 | 351 | 2020 |
| Efficient and Effective Text Encoding for Chinese LLaMA and Alpaca | Y Cui, Z Yang, X Yao | arXiv preprint arXiv:2304.08177, 2023 | 239 | 2023 |
| A Span-Extraction Dataset for Chinese Machine Reading Comprehension | Y Cui, T Liu, L Xiao, Z Chen, W Ma, W Che, S Wang, G Hu | EMNLP-IJCNLP 2019, 593–602, 2019 | 216 | 2019 |
| Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting | S Chen, Y Hou, Y Cui, W Che, T Liu, X Yu | EMNLP 2020, 7870–7881, 2020 | 196 | 2020 |
| Exploiting Persona Information for Diverse Generation of Conversational Responses | H Song, WN Zhang, Y Cui, D Wang, T Liu | IJCAI-19, 5190–5196, 2019 | 141 | 2019 |
| Consensus Attention-based Neural Networks for Chinese Reading Comprehension | Y Cui, T Liu, Z Chen, S Wang, G Hu | COLING 2016, 1777–1786, 2016 | 111 | 2016 |
| CharBERT: Character-aware Pre-trained Language Model | W Ma, Y Cui, C Si, T Liu, S Wang, G Hu | COLING 2020, 39–50, 2020 | 108 | 2020 |
| CJRC: A Reliable Human-Annotated Benchmark DataSet for Chinese Judicial Reading Comprehension | X Duan, B Wang, Z Wang, W Ma, Y Cui, D Wu, S Wang, T Liu, T Huo, Z Hu, ... | CCL 2019, 439–451, 2019 | 86 | 2019 |
| Is Graph Structure Necessary for Multi-hop Question Answering? | N Shao, Y Cui, T Liu, S Wang, G Hu | EMNLP 2020, 7187–7192, 2020 | 66 | 2020 |
| Cross-Lingual Machine Reading Comprehension | Y Cui, W Che, T Liu, B Qin, S Wang, G Hu | EMNLP-IJCNLP 2019, 1586–1595, 2019 | 61 | 2019 |
| Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution | T Liu, Y Cui, Q Yin, S Wang, W Zhang, G Hu | ACL 2017, 102–111, 2017 | 58 | 2017 |
| Context-Sensitive Generation of Open-Domain Conversational Responses | W Zhang, Y Cui, Y Wang, Q Zhu, L Li, L Zhou, T Liu | COLING 2018, 2437–2447, 2018 | 55 | 2018 |
| LSTM Neural Reordering Feature for Statistical Machine Translation | Y Cui, S Wang, J Li | NAACL 2016, 977–982, 2016 | 53 | 2016 |
| PERT: Pre-training BERT with Permuted Language Model | Y Cui, Z Yang, T Liu | arXiv preprint arXiv:2203.06906, 2022 | 52 | 2022 |
| CINO: A Chinese Minority Pre-trained Language Model | Z Yang, Z Xu, Y Cui, B Wang, M Lin, D Wu, Z Chen | COLING 2022, 3937–3949, 2022 | 45 | 2022 |
| TextBrewer: An Open-Source Knowledge Distillation Toolkit for Natural Language Processing | Z Yang, Y Cui, Z Chen, W Che, T Liu, S Wang, G Hu | ACL 2020: System Demonstrations, 9–16, 2020 | 45 | 2020 |
| LERT: A Linguistically-motivated Pre-trained Language Model | Y Cui, W Che, S Wang, T Liu | arXiv preprint arXiv:2211.05344, 2022 | 43 | 2022 |