VisualBERT: A simple and performant baseline for vision and language LH Li, M Yatskar, D Yin, CJ Hsieh, KW Chang arXiv preprint arXiv:1908.03557, 2019 | 1880 | 2019 |
Men also like shopping: Reducing gender bias amplification using corpus-level constraints J Zhao, T Wang, M Yatskar, V Ordonez, KW Chang arXiv preprint arXiv:1707.09457, 2017 | 1145 | 2017 |
Neural motifs: Scene graph parsing with global context R Zellers, M Yatskar, S Thomson, Y Choi Proceedings of CVPR, 2018 | 1081 | 2018 |
Gender bias in coreference resolution: Evaluation and debiasing methods J Zhao, T Wang, M Yatskar, V Ordonez, KW Chang arXiv preprint arXiv:1804.06876, 2018 | 959 | 2018 |
QuAC: Question answering in context E Choi, H He, M Iyyer, M Yatskar, W Yih, Y Choi, P Liang, L Zettlemoyer arXiv preprint arXiv:1808.07036, 2018 | 926 | 2018 |
Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations T Wang, J Zhao, M Yatskar, KW Chang, V Ordonez Proceedings of ICCV, 2019 | 486 | 2019 |
Don't take the easy way out: Ensemble based methods for avoiding known dataset biases C Clark, M Yatskar, L Zettlemoyer arXiv preprint arXiv:1909.03683, 2019 | 485 | 2019 |
Gender bias in contextualized word embeddings J Zhao, T Wang, M Yatskar, R Cotterell, V Ordonez, KW Chang arXiv preprint arXiv:1904.03310, 2019 | 437 | 2019 |
Neural AMR: Sequence-to-sequence models for parsing and generation I Konstas, S Iyer, M Yatskar, Y Choi, L Zettlemoyer arXiv preprint arXiv:1704.08381, 2017 | 359 | 2017 |
Situation Recognition: Visual Semantic Role Labeling for Image Understanding M Yatskar, L Zettlemoyer, A Farhadi Proceedings of CVPR, 2016 | 296 | 2016 |
RoboTHOR: An open simulation-to-real embodied AI platform M Deitke, W Han, A Herrasti, A Kembhavi, E Kolve, R Mottaghi, J Salvador, ... Proceedings of CVPR, 2020 | 244 | 2020 |
For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia M Yatskar, B Pang, C Danescu-Niculescu-Mizil, L Lee arXiv preprint arXiv:1008.1986, 2010 | 216 | 2010 |
What does BERT with vision look at? LH Li, M Yatskar, D Yin, CJ Hsieh, KW Chang Proceedings of ACL, 2020 | 153 | 2020 |
Language in a bottle: Language model guided concept bottlenecks for interpretable image classification Y Yang, A Panagopoulou, S Zhou, D Jin, C Callison-Burch, M Yatskar Proceedings of CVPR, 2023 | 146 | 2023 |
A qualitative comparison of CoQA, SQuAD 2.0 and QuAC M Yatskar arXiv preprint arXiv:1809.10735, 2018 | 112 | 2018 |
Grounded situation recognition S Pratt, M Yatskar, L Weihs, A Farhadi, A Kembhavi Proceedings of ECCV, 2020 | 97 | 2020 |
Stating the obvious: Extracting visual common sense knowledge M Yatskar, V Ordonez, A Farhadi Proceedings of NAACL, 2016 | 69 | 2016 |
Visual semantic role labeling for video understanding A Sadhu, T Gupta, M Yatskar, R Nevatia, A Kembhavi Proceedings of CVPR, 2021 | 67 | 2021 |
See No Evil, Say No Evil: Description Generation from Densely Labeled Images M Yatskar, M Galley, L Vanderwende, L Zettlemoyer Lexical and Computational Semantics (*SEM 2014), 110, 2014 | 63 | 2014 |