Yu Ding
Director of AI R&D Center, Happy Elements, China
Verified email at happyelements.com
Title · Cited by · Year
Flow-guided one-shot talking face generation with a high-resolution audio-visual dataset
Z Zhang, L Li, Y Ding, C Fan
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
Cited by 182 · 2021
Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion
S Wang, L Li, Y Ding, C Fan, X Yu
International Joint Conference on Artificial Intelligence (IJCAI-21), 2021
Cited by 108 · 2021
Freenet: Multi-identity face reenactment
J Zhang, X Zeng, M Wang, Y Pan, L Liu, Y Liu, Y Ding, C Fan
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2020
Cited by 102 · 2020
Transformer-based multimodal information fusion for facial expression analysis
W Zhang, F Qiu, S Wang, H Zeng, Z Zhang, R An, B Ma, Y Ding
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 87 · 2022
One-shot talking face generation from single-speaker audio-visual correlation learning
S Wang, L Li, Y Ding, X Yu
Proceedings of the AAAI Conference on Artificial Intelligence 36 (3), 2531-2539, 2022
Cited by 71 · 2022
Write-a-speaker: Text-based emotional and rhythmic talking-head generation
L Li, S Wang, Z Zhang, Y Ding, Y Zheng, X Yu, C Fan
Proceedings of the AAAI conference on artificial intelligence 35 (3), 1911-1920, 2021
Cited by 62 · 2021
Learning a facial expression embedding disentangled from identity
W Zhang, X Ji, K Chen, Y Ding, C Fan
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2021
Cited by 62 · 2021
Laughter animation synthesis
Y Ding, K Prepin, J Huang, C Pelachaud, T Artières
Proceedings of the 2014 international conference on Autonomous agents and …, 2014
Cited by 61 · 2014
Modeling multimodal behaviors from speech prosody
Y Ding, C Pelachaud, T Artieres
International Conference on Intelligent Virtual Agents, 217-228, 2013
Cited by 41 · 2013
Prior aided streaming network for multi-task affective recognition at the 2nd ABAW2 competition
W Zhang, Z Guo, K Chen, L Li, Z Zhang, Y Ding
arXiv preprint arXiv:2107.03708, 2021
Cited by 33 · 2021
Faceswapnet: Landmark guided many-to-many face reenactment
J Zhang, X Zeng, Y Pan, Y Liu, Y Ding, C Fan
arXiv preprint arXiv:1905.11805, 2019
Cited by 33 · 2019
StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles
Y Ma, S Wang, Z Hu, C Fan, T Lv, Y Ding, Z Deng, X Yu
AAAI 2023, 2023
Cited by 31 · 2023
Rhythmic body movements of laughter
R Niewiadomski, M Mancini, Y Ding, C Pelachaud, G Volpe
Proceedings of the 16th international conference on multimodal interaction …, 2014
Cited by 27 · 2014
Speech-driven eyebrow motion synthesis with contextual markovian models
Y Ding, M Radenen, T Artieres, C Pelachaud
2013 IEEE International Conference on Acoustics, Speech and Signal …, 2013
Cited by 24 · 2013
Implementing and evaluating a laughing virtual character
M Mancini, B Biancardi, F Pecune, G Varni, Y Ding, C Pelachaud, G Volpe, ...
ACM Transactions on Internet Technology (TOIT) 17 (1), 1-22, 2017
Cited by 22 · 2017
Laughing with a Virtual Agent
F Pecune, M Mancini, B Biancardi, G Varni, Y Ding, C Pelachaud, G Volpe, ...
AAMAS, 1817-1818, 2015
Cited by 21 · 2015
One-shot voice conversion using star-gan
R Wang, Y Ding, L Li, C Fan
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 20 · 2020
DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video
Z Zhang, Z Hu, W Deng, C Fan, T Lv, Y Ding
AAAI 2023, 2023
Cited by 17 · 2023
A multifaceted study on eye contact based speaker identification in three-party conversations
Y Ding, Y Zhang, M Xiao, Z Deng
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems …, 2017
Cited by 17 · 2017
A generic magnetic microsphere platform with “clickable” ligands for purification and immobilization of targeted proteins
J Zheng, Y Li, Y Sun, Y Yang, Y Ding, Y Lin, W Yang
ACS Applied Materials & Interfaces 7 (13), 7241-7250, 2015
Cited by 17 · 2015