Linjie Li 李琳婕
Senior Researcher, Microsoft
UNITER: Learning UNiversal Image-TExt Representations
YC Chen, L Li, L Yu, AE Kholy, F Ahmed, Z Gan, Y Cheng, J Liu
ECCV 2020, 2020
Large-Scale Adversarial Training for Vision-and-Language Representation Learning
Z Gan, YC Chen, L Li, C Zhu, Y Cheng, J Liu
NeurIPS 2020, 2020
Relation-aware graph attention network for visual question answering
L Li, Z Gan, Y Cheng, J Liu
ICCV 2019, 2019
Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling
J Lei, L Li, L Zhou, Z Gan, TL Berg, M Bansal, J Liu
CVPR 2021, 2021
HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training
L Li, YC Chen, Y Cheng, Z Gan, L Yu, J Liu
EMNLP 2020, 2020
Multi-step reasoning via recurrent dual attention for visual dialog
Z Gan, Y Cheng, AEI Kholy, L Li, J Liu, J Gao
ACL 2019, 2019
Graph Optimal Transport for Cross-Domain Alignment
L Chen, Z Gan, Y Cheng, L Li, L Carin, J Liu
ICML 2020, 2020
LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval
S Sun, YC Chen, L Li, S Wang, Y Fang, J Liu
NAACL 2021, 2021
VIOLET: End-to-End Video-Language Transformers with Masked Visual-token Modeling
TJ Fu, L Li, Z Gan, K Lin, WY Wang, L Wang, Z Liu
arXiv preprint arXiv:2111.12681, 2021
VALUE: A Multi-Task Benchmark for Video-and-Language Understanding Evaluation
L Li, J Lei, Z Gan, L Yu, YC Chen, R Pillai, Y Cheng, L Zhou, XE Wang, ...
NeurIPS 2021 Data and Benchmark Track, 2021
Meta Module Network for Compositional Visual Reasoning
W Chen, Z Gan, L Li, Y Cheng, W Wang, J Liu
WACV 2021, 2019
UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training
M Zhou, L Zhou, S Wang, Y Cheng, L Li, Z Yu, J Liu
CVPR 2021, 2021
GIT: A Generative Image-to-text Transformer for Vision and Language
J Wang, Z Yang, X Hu, L Li, K Lin, Z Gan, Z Liu, C Liu, L Wang
TMLR, 2022
Playing Lottery Tickets with Vision and Language
Z Gan, YC Chen, L Li, T Chen, Y Cheng, S Wang, J Liu
AAAI 2022, 2021
Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models
L Li, J Lei, Z Gan, J Liu
ICCV 2021, 2021
SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning
K Lin, L Li, CC Lin, F Ahmed, Z Gan, Z Liu, Y Lu, L Wang
CVPR 2022, 2021
A Closer Look at the Robustness of Vision-and-Language Pre-trained Models
L Li, Z Gan, J Liu
arXiv preprint arXiv:2012.08673, 2020
Extracting human face similarity judgments: Pairs or triplets?
L Li, VL Malave, A Song, A Yu
CogSci 2016, 2016
Learning to See People like People: Predicting Social Perceptions of Faces.
A Song, L Li, C Atalla, G Cottrell
CogSci 2017, 2017
LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling
L Li, Z Gan, K Lin, CC Lin, Z Liu, C Liu, L Wang
arXiv preprint arXiv:2206.07160, 2022