CVPR 2026

Talking Together: Synthesizing Co-Located 3D Conversations from Audio

Mengyi Shan1   Shouchieh Chang2   Ziqian Bai2   Shichen Liu2   Yinda Zhang2
Luchuan Song3   Rohit Pandey2   Sean Fanello2   Zeng Huang2

1University of Washington   2Google   3University of Rochester

Teaser figure

Abstract

We tackle the challenging task of generating complete 3D facial animations for two interacting, co-located participants from a single mixed audio stream. While existing methods often produce disembodied “talking heads” akin to a video conference call, our work is the first to explicitly model the dynamic 3D spatial relationship (relative position, orientation, and mutual gaze) that is crucial for realistic in-person dialogue. Our system synthesizes the full performance of both individuals, including precise lip-sync, and uniquely allows their relative head poses to be controlled via textual descriptions. To achieve this, we propose a dual-stream architecture in which each stream is responsible for one participant’s output. We employ speaker-role embeddings and inter-speaker cross-attention to disentangle the mixed audio and model the interaction. Furthermore, we introduce a novel eye gaze loss to promote natural, mutual eye contact. To power our data-hungry approach, we introduce an automated pipeline that curates a large-scale conversational dataset of over 2 million dyadic pairs from in-the-wild videos. Our method generates fluid, controllable, and spatially aware dyadic animations suitable for immersive applications in VR and telepresence, significantly outperforming existing baselines in perceived realism and interaction coherence.

Key Contributions

Method pipeline
  • An automated pipeline to curate a large-scale dataset of dyadic conversations from in-the-wild videos, along with a high-fidelity single-speaker corpus for robust lip-sync training.
  • A dual-stream diffusion architecture with a shared U-Net backbone, inter-speaker cross-attention, and FiLM conditioning to model speaker interaction and disentangle a single mixed audio track (see the sketch after this list).
  • A mixed-data training strategy that pre-trains on large-scale real conversational data and fine-tunes on high-quality synthetic data, ensuring precise lip articulation and natural interactive behaviors.
  • Intuitive scene control through a few-shot, LLM-based text-to-3D spatial translation mechanism, and a targeted auxiliary eye gaze loss applied on a curated subset to promote realistic mutual eye contact.
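
To make the dual-stream contribution concrete, here is a minimal PyTorch sketch of one such block. Everything here is an illustrative assumption rather than the paper's implementation: the pre-norm layout, module names, tensor shapes, and where FiLM is applied. It only preserves the stated ingredients: weights shared across both streams, a speaker-role embedding injected via FiLM, and cross-attention from each speaker's latent sequence to the partner's.

# Minimal sketch of one dual-stream block (PyTorch). All names and the
# exact layout are assumptions; the paper only specifies a shared
# backbone, inter-speaker cross-attention, and FiLM role conditioning.
import torch
import torch.nn as nn

class DualStreamBlock(nn.Module):
    """One weight-shared block per stream: FiLM on the role embedding,
    self-attention over a speaker's own sequence, then cross-attention
    to the partner's sequence."""

    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.norm4 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        # FiLM: role embedding -> per-channel scale and shift.
        self.film = nn.Linear(dim, 2 * dim)

    def forward(self, x, partner, role_emb):
        # x, partner: (B, T, dim) latents for this speaker / the other.
        # role_emb: (B, dim) speaker-role embedding.
        scale, shift = self.film(role_emb).unsqueeze(1).chunk(2, dim=-1)
        h = self.norm1(x) * (1 + scale) + shift            # FiLM conditioning
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        q = self.norm2(x)
        kv = self.norm3(partner)
        x = x + self.cross_attn(q, kv, kv, need_weights=False)[0]  # interaction
        return x + self.mlp(self.norm4(x))

# The two streams run the *same* block with arguments swapped:
block = DualStreamBlock(dim=256)
xa, xb = torch.randn(2, 120, 256), torch.randn(2, 120, 256)
ra, rb = torch.randn(2, 256), torch.randn(2, 256)
ya = block(xa, xb, ra)   # speaker A attends to B
yb = block(xb, xa, rb)   # speaker B attends to A

Because the block is weight-tied across streams, the role embedding is what breaks the symmetry between the two speakers.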

Qualitative Comparisons

We compare against the SelfTalk and DualTalk baselines. Our full model generates spatially aware, co-located 3D conversations with natural mutual gaze and a controllable spatial layout (the few-shot LLM pose control is sketched at the end of this section).
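
The mutual gaze above is encouraged by the auxiliary eye gaze loss listed in the contributions. A minimal sketch of one plausible formulation follows; the tensor names, the cosine penalty, and the mask semantics are our assumptions, not the paper's exact loss. The idea: on curated mutual-gaze frames only, penalize the deviation between each speaker's predicted gaze direction and the direction toward the other speaker's eyes.

# Hedged sketch of a mutual eye gaze loss, assuming per-frame 3D eye
# positions and unit gaze directions for both speakers. The curated-frame
# mask semantics are an assumption.
import torch
import torch.nn.functional as F

def gaze_loss(gaze_a, gaze_b, eyes_a, eyes_b, mutual_mask):
    """gaze_*: (B, T, 3) predicted unit gaze directions.
    eyes_*: (B, T, 3) 3D eye positions in the shared scene frame.
    mutual_mask: (B, T) 1.0 on curated mutual-gaze frames, else 0.0."""
    # Direction each speaker *should* look to meet the other's eyes.
    to_b = F.normalize(eyes_b - eyes_a, dim=-1)
    to_a = -to_b
    # Cosine penalty: zero when looking straight at the partner.
    err = (1 - (gaze_a * to_b).sum(-1)) + (1 - (gaze_b * to_a).sum(-1))
    return (err * mutual_mask).sum() / mutual_mask.sum().clamp(min=1.0)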


Comparison results: SelfTalk | DualTalk | Ours (Stage-1) | Ours (Full)
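
Finally, the controllable spatial layout comes from the few-shot, LLM-based text-to-3D spatial translation. Below is a hedged sketch of how such a mechanism could look; the prompt format, the pose parameterization (yaw/pitch in degrees plus a metric offset), and the JSON convention are all assumptions, since the paper only states that textual descriptions are mapped to relative head poses via few-shot prompting.

# Hypothetical few-shot prompt for text-to-3D spatial translation.
# The example poses and the JSON schema are illustrative, not the paper's.
import json

FEW_SHOT_PROMPT = """Convert the scene description into the relative head pose
of speaker B with respect to speaker A, as JSON with keys yaw_deg, pitch_deg,
and offset_m ([x, y, z] in meters).

Description: they sit side by side, B slightly to A's right
Pose: {"yaw_deg": -20, "pitch_deg": 0, "offset_m": [0.6, 0.0, 0.1]}

Description: they face each other across a small table
Pose: {"yaw_deg": 180, "pitch_deg": 0, "offset_m": [0.0, 0.0, 1.0]}

Description: {description}
Pose:"""

def build_prompt(description: str) -> str:
    # str.replace instead of str.format: the JSON braces in the few-shot
    # examples would otherwise be misread as format fields.
    return FEW_SHOT_PROMPT.replace("{description}", description)

def parse_pose(llm_reply: str) -> dict:
    # Expect the LLM to answer with a single JSON object, as in the examples.
    return json.loads(llm_reply.strip())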