Bring your MetaHuman characters to life with real-time, cross-platform lip synchronization!
Transform your MetaHuman characters with seamless, real-time lip synchronization that works completely offline and cross-platform! Watch as your digital humans respond naturally to speech input, creating immersive and believable conversations with minimal setup.
Quick links:
Packaged Demo Project (Windows)
Documentation
YouTube video demonstration
Discord support chat
Custom Development: solutions@georgy.dev (tailored solutions for teams & organizations)
Key features:
- Real-time Lip Sync from microphone input
- Offline Processing - no internet connection required
- Cross-platform Compatibility: Windows, Mac, Android, and Meta Quest
- Multiple Audio Sources:
  - Live microphone input (via Runtime Audio Importer’s capturable sound wave)
  - Captured audio playback (via Runtime Audio Importer’s capturable sound wave)
  - Synthesized speech (via Runtime Text To Speech)
  - Custom audio data in float PCM format (see the sketch after this list)
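For the custom PCM route, the input is simply interleaved 32-bit float samples plus a sample rate and channel count. Below is a minimal, hypothetical sketch of assembling such a buffer and handing it off; `ProcessCustomPCM` and its parameters are illustrative placeholders and are not the plugin’s actual API.

```cpp
// Hypothetical sketch: ProcessCustomPCM is a placeholder, not the plugin's real API.
// It only illustrates the shape of data the "custom float PCM" path expects.
#include "CoreMinimal.h"

// Assumed signature for whatever consumes the audio chunk.
void ProcessCustomPCM(const TArray<float>& InterleavedSamples, int32 SampleRate, int32 NumChannels);

void FeedTestTone()
{
    const int32 SampleRate = 16000;   // mono 16 kHz, a common rate for speech processing (assumption)
    const int32 NumChannels = 1;
    const float DurationSeconds = 0.1f;

    const int32 NumFrames = static_cast<int32>(SampleRate * DurationSeconds);
    TArray<float> Samples;
    Samples.Reserve(NumFrames * NumChannels);

    // Fill the buffer with a 220 Hz tone in the normalized [-1, 1] float range.
    for (int32 Frame = 0; Frame < NumFrames; ++Frame)
    {
        const float Sample = FMath::Sin(2.0f * PI * 220.0f * Frame / SampleRate);
        for (int32 Channel = 0; Channel < NumChannels; ++Channel)
        {
            Samples.Add(Sample); // interleaved per channel
        }
    }

    ProcessCustomPCM(Samples, SampleRate, NumChannels);
}
```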
How it works:
The plugin analyzes incoming audio and generates visemes (the visual counterparts of phonemes) that drive your MetaHuman’s facial animation in real time, producing natural-looking mouth movements that stay in sync with the speech.
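As a rough illustration of the final step, viseme weights end up as curve values on the face. The sketch below assumes you already have per-viseme weights from some evaluation step; `FVisemeFrame` and the curve names are assumptions rather than the plugin’s own types, while `SetMorphTarget` is a standard `USkeletalMeshComponent` call (a MetaHuman setup typically routes such values through the face Animation Blueprint instead).

```cpp
// Illustrative only: FVisemeFrame and the curve names are assumptions,
// not types defined by the plugin.
#include "CoreMinimal.h"
#include "Components/SkeletalMeshComponent.h"

struct FVisemeFrame
{
    // Parallel arrays: one weight in [0, 1] per viseme curve.
    TArray<FName> VisemeCurves;
    TArray<float> Weights;
};

// Push the current viseme weights onto the face mesh as morph target values.
void ApplyVisemeFrame(USkeletalMeshComponent* FaceMesh, const FVisemeFrame& Frame)
{
    if (!FaceMesh || Frame.VisemeCurves.Num() != Frame.Weights.Num())
    {
        return;
    }

    for (int32 Index = 0; Index < Frame.VisemeCurves.Num(); ++Index)
    {
        // Standard engine call; on a MetaHuman these values would usually be
        // read by the face Anim Blueprint rather than set directly.
        FaceMesh->SetMorphTarget(Frame.VisemeCurves[Index], Frame.Weights[Index]);
    }
}
```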
Perfect for:
- Interactive NPCs and digital humans
- Virtual assistants and guides
- Cutscene dialogue automation
- Live character performances
- VR/AR experiences
- Educational applications
- Accessibility solutions
Works great with:
- Runtime Audio Importer - For microphone capture and audio processing
- Runtime Text To Speech - For synthesized speech generation