Introducing the Sonic Open-Source Digital Human Model!

Hello everyone! Today, I’m thrilled to share with you an exciting open-source digital human model – Sonic!

Sonic is a cutting-edge audio-driven portrait animation framework developed jointly by Tencent and Zhejiang University. It takes a static portrait image and an audio track as input and generates a vivid, dynamic video of the subject speaking.

Key Features:

  • Precise Lip Syncing: Sonic aligns lip movements closely with the input audio, so the mouth shapes accurately match the spoken content.

  • Rich Facial Expressions and Head Movements: Beyond the lips, the model generates diverse, natural facial expressions and head movements, making the resulting animations more lively and expressive.

  • Support for Multiple Styles: Sonic handles a range of visual styles, from cartoon and sculpture-like portraits to photorealistic rendering, catering to a variety of creative needs.

Additionally, the open-source implementation of Sonic has been released on GitHub, allowing developers and enthusiasts to download, explore, and contribute to the code.

For those interested in integrating Sonic with ComfyUI, I’ve published a tutorial that makes it easier to get started with the model, along with my original workflow to guide you through the process.

In summary, Sonic’s release marks a significant breakthrough in the field of digital humans, opening up new creative possibilities.

If you’re curious about the results, here’s a showcase of my original examples.

Whether you’re a developer, an artist, or an AI enthusiast, this powerful tool is not to be missed!