Emotional 3D Humans

Evaluation of Generative Models for Emotional 3D Animation Generation in VR

Evaluation of generative models for emotional 3D animation in VR. Participants interact with a virtual character using VR headsets in a modular setup that supports various TTS models and speech-driven 3D animation methods. The setup tracks participant positions via base stations, uses tablets for input recording, and renders real-time VR interactions through Blender (OpenXR).
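For illustration, the sketch below shows one way such a modular pipeline can be organized: a TTS backend, a speech-driven animation backend, and a real-time VR renderer behind interchangeable interfaces. All class and function names are hypothetical placeholders standing in for whichever TTS model, animation method, and rendering backend (e.g., Blender with OpenXR) is plugged in; this is not the study's actual implementation.

# Minimal Python sketch of a modular TTS -> animation -> VR rendering pipeline.
# All names are hypothetical placeholders, not the study's actual code.

from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class AnimationFrame:
    face_params: list[float]   # e.g., facial expression coefficients
    body_params: list[float]   # e.g., body joint rotations


class TTSBackend(ABC):
    @abstractmethod
    def synthesize(self, text: str, emotion: str) -> bytes:
        """Return raw audio for the given utterance and target emotion."""


class AnimationBackend(ABC):
    @abstractmethod
    def animate(self, audio: bytes, emotion: str) -> list[AnimationFrame]:
        """Return speech-synchronized facial and body animation frames."""


class VRRenderer(ABC):
    @abstractmethod
    def play(self, audio: bytes, frames: list[AnimationFrame]) -> None:
        """Stream audio and animation to the real-time VR renderer."""


def run_interaction_turn(text: str, emotion: str, tts: TTSBackend,
                         animator: AnimationBackend, renderer: VRRenderer) -> None:
    # One agent turn: synthesize speech, animate the character, render in VR.
    audio = tts.synthesize(text, emotion)
    frames = animator.animate(audio, emotion)
    renderer.play(audio, frames)

Under this kind of design, swapping a TTS model or an animation method amounts to providing another implementation of the corresponding interface, leaving the rest of the setup unchanged.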

Abstract

Social interactions incorporate various nonverbal signals to convey emotions alongside speech, including facial expressions and body gestures. Generative models have demonstrated promising results in creating full-body nonverbal animations synchronized with speech; however, evaluations using statistical metrics in 2D settings fail to fully capture user-perceived emotions, limiting our understanding of the effectiveness of these models. To address this, we evaluate emotional 3D animation generative models within an immersive Virtual Reality (VR) environment, emphasizing user-centric metrics—emotional arousal realism, naturalness, enjoyment, diversity, and interaction quality—in a real-time human–agent interaction scenario. Through a user study (N=48), we systematically examine perceived emotional quality for three state-of-the-art speech-driven 3D animation methods across two specific emotions: happiness (high arousal) and neutral (mid arousal). Additionally, we compare these generative models against real human expressions obtained via a reconstruction-based method to assess both their strengths and limitations and how closely they replicate real human facial and body expressions. Our results demonstrate that methods explicitly modeling emotions lead to higher recognition accuracy compared to those focusing solely on speech-driven synchrony. Users rated the realism and naturalness of happy animations significantly higher than those of neutral animations, highlighting the limitations of current generative models in handling subtle emotional states. Generative models underperformed compared to reconstruction-based methods in facial expression quality, and all methods received relatively low ratings for animation enjoyment and interaction quality, emphasizing the importance of incorporating user-centric evaluations into generative model development. Finally, participants positively recognized animation diversity across all generative models.

Video

Supplementary video overview. The video shows VR-based interaction demonstrations with state-of-the-art gesture generation methods, comparisons between methods across the HEA/NEA/DV conditions, a quality analysis focusing on the HEA condition, a side-by-side comparison of the HEA and NEA emotion conditions, and reconstruction sequences obtained from a real human driving video.

Qualitative Evaluation

Qualitative evaluation. Top: Animation frames from the EMAGE, TalkSHOW, and AMUSE+FaceFormer methods. Bottom: Reconstruction-based baseline workflow using PIXIE+DECA to obtain pose parameters, normal maps, and textures from the driving video input.
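As a rough illustration of this baseline, the sketch below walks through a driving video frame by frame and collects body pose and face parameters to retarget onto the 3D character. The two reconstruction functions are placeholders standing in for PIXIE- and DECA-style models; they do not reflect those libraries' actual APIs, and only the OpenCV video decoding calls are real.

# Hypothetical sketch of the reconstruction-based baseline: per-frame body and
# face reconstruction from the driving video. reconstruct_body_pose and
# reconstruct_face are placeholders for PIXIE- and DECA-style models.

import cv2           # OpenCV, used only for video decoding
import numpy as np


def reconstruct_body_pose(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a PIXIE-style regressor returning body pose parameters."""
    raise NotImplementedError


def reconstruct_face(frame: np.ndarray) -> dict:
    """Placeholder for a DECA-style model returning expression parameters,
    a normal map, and an albedo texture for the frame."""
    raise NotImplementedError


def process_driving_video(path: str) -> list[dict]:
    """Collect per-frame pose and face parameters to drive the 3D character."""
    capture = cv2.VideoCapture(path)
    sequence = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        sequence.append({
            "body_pose": reconstruct_body_pose(frame),
            "face": reconstruct_face(frame),
        })
    capture.release()
    return sequence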

Citation

@article{chhatre2025evaluation,
  title     = {Evaluation of Generative Models for Emotional 3D Animation Generation in VR},
  author    = {Chhatre, Kiran and Guarese, Renan and Matviienko, Andrii and Peters, Christopher Edward},
  journal   = {Frontiers in Computer Science},
  volume    = {7},
  pages     = {1598099},
  year      = {2025},
  publisher = {Frontiers}
}

@inproceedings{chhatre2025evaluating,
  title     = {Evaluating Speech and Video Models for Face-Body Congruence},
  author    = {Chhatre, Kiran and Guarese, Renan and Matviienko, Andrii and Peters, Christopher},
  booktitle = {Companion Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games},
  pages     = {1--3},
  year      = {2025}
}

Additional Related Projects

Synthetically Expressive
Synthetically Expressive: Evaluating gesture and voice for emotion and empathy in VR and 2D scenarios
ACM International Conference on Intelligent Virtual Agents (IVA), 2025
website / arxiv / video /
AMUSE
AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
website / arxiv / code / video /
EMOTE
EMOTE: Emotional Speech-Driven Animation with Content-Emotion Disentanglement
ACM SIGGRAPH Asia Conference Papers, 2023
website / arxiv / code / video /
Spatio-temporal priors
Spatio-temporal priors in 3D human motion
Anna Deichler*, Kiran Chhatre*, Christopher Peters, Jonas Beskow
(* denotes equal contribution)
IEEE International Conference on Development and Learning (ICDL), StEPP Workshop, 2021
website / paper /

Acknowledgments

We thank Peiyang Zheng and Julian Magnus Ley for their support with the technical setup of the user study. We also thank Tairan Yin for insightful discussions, proofreading, and valuable feedback. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 860768 (CLIPE project).