Hi-Reco: High-Fidelity Real-Time Conversational Digital Humans

Hongbin Huang1, Junwei Li1, Tianxin Xie1, Zhuang Li2, Cekai Weng1, Yaodong Yang1, Yue Luo1, Li Liu1, Jing Tang1, Zhijing Shao1,2, Zeyu Wang1,3
1 The Hong Kong University of Science and Technology (Guangzhou), 2 Prometheus Vision Technology Co., Ltd., 3 The Hong Kong University of Science and Technology

Abstract

High-fidelity digital humans are increasingly used in interactive applications, yet achieving both visual realism and real-time responsiveness remains a major challenge. We present a high-fidelity, real-time conversational digital human system that seamlessly combines a visually realistic 3D avatar, persona-driven expressive speech synthesis, and knowledge-grounded dialogue generation. To support natural and timely interaction, we introduce an asynchronous execution pipeline that coordinates multi-modal components with minimal latency. The system supports advanced features such as wake word detection, emotionally expressive prosody, and highly accurate, context-aware response generation. It leverages novel retrieval-augmented methods, including history augmentation to maintain conversational flow and intent-based routing for efficient knowledge access. Together, these components form an integrated system that enables responsive and believable digital humans, suitable for immersive applications in communication, education, and entertainment.
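To illustrate the kind of asynchronous execution pipeline described above, the following is a minimal sketch, not the authors' implementation, of how speech recognition, dialogue generation, and speech synthesis stages could be decoupled with asyncio queues so that each stage starts working as soon as the previous one produces output. All stage names and functions (transcribe_chunk, generate_reply, synthesize) are hypothetical placeholders for the real models.

```python
# Minimal sketch of an asynchronous multi-stage conversational pipeline (illustrative only).
# The stage functions below are hypothetical placeholders, not the paper's APIs.
import asyncio

async def transcribe_chunk(audio_chunk: bytes) -> str:
    """Placeholder ASR stage: convert an audio chunk to text."""
    await asyncio.sleep(0.01)          # stand-in for model inference
    return f"text({len(audio_chunk)})"

async def generate_reply(user_text: str) -> str:
    """Placeholder dialogue stage: produce a response for the user text."""
    await asyncio.sleep(0.05)
    return f"reply to {user_text!r}"

async def synthesize(reply_text: str) -> bytes:
    """Placeholder TTS stage: turn response text into audio bytes."""
    await asyncio.sleep(0.02)
    return reply_text.encode()

async def asr_stage(audio_q: asyncio.Queue, text_q: asyncio.Queue) -> None:
    # Pull audio chunks as they arrive and forward transcripts downstream.
    while (chunk := await audio_q.get()) is not None:
        await text_q.put(await transcribe_chunk(chunk))
    await text_q.put(None)             # propagate shutdown signal

async def dialogue_stage(text_q: asyncio.Queue, reply_q: asyncio.Queue) -> None:
    while (text := await text_q.get()) is not None:
        await reply_q.put(await generate_reply(text))
    await reply_q.put(None)

async def tts_stage(reply_q: asyncio.Queue) -> None:
    while (reply := await reply_q.get()) is not None:
        audio = await synthesize(reply)
        print(f"play {len(audio)} bytes")

async def main() -> None:
    audio_q, text_q, reply_q = asyncio.Queue(), asyncio.Queue(), asyncio.Queue()
    stages = [asr_stage(audio_q, text_q),
              dialogue_stage(text_q, reply_q),
              tts_stage(reply_q)]
    # Feed a few fake microphone chunks, then signal end of input.
    for chunk in (b"\x00" * 320, b"\x00" * 320):
        await audio_q.put(chunk)
    await audio_q.put(None)
    await asyncio.gather(*stages)

if __name__ == "__main__":
    asyncio.run(main())
```

Because the stages run concurrently rather than in strict sequence, downstream work (e.g., synthesis of an earlier reply) can overlap with upstream processing of new input, which is the property that keeps end-to-end latency low in this style of pipeline.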
