Web-based live speech-driven lip-sync
Citation
- Llorach G, Evans A, Blat J, Grimm G, Hohmann V. Web-based live speech-driven lip-sync. In: 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games); 2016 Sept. 7-9; Barcelona (Spain). [place unknown]: IEEE; 2016. [4 p.] DOI: 10.1109/VS-GAMES.2016.7590381
Description
Abstract
Virtual characters are an integral part of many games and virtual worlds. The ability to accurately synchronize lip movement to audio speech is an important aspect of the believability of the character. In this paper we propose a simple rule-based lip-syncing algorithm for virtual agents running in the web browser. It works in real time with live input, unlike most current lip-syncing proposals, which may require considerable amounts of computation, expertise and time to set up. Our method generates reliable speech animation from live speech using three blend shapes and no training, and it only needs manual adjustment of three parameters for each speaker (sensitivity, smoothness and vocal tract length). Our proposal relies on the limited real-time audio-processing functions of the client web browser (hence the algorithm must be simple), but this constraint facilitates the use of web-based embodied conversational agents.
Description
Paper presented at the 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games), held in Barcelona, 7-9 September 2016.
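The abstract describes a rule-based mapping from live speech to three blend-shape weights, tuned per speaker by sensitivity and smoothness parameters. A minimal illustrative sketch of such a mapping is shown below; it is not the paper's actual rules, and the blend-shape names, the band-energy input and the formulas are assumptions. In a browser the band energies would come from a Web Audio `AnalyserNode`, but the mapping itself is written as a pure function so it runs anywhere:

```javascript
// Hypothetical rule-based lip-sync mapping (illustrative, not the paper's exact method).
// bandEnergies: [low, mid, high] speech-band energies in the range 0..1,
// e.g. averaged from an AnalyserNode's frequency data in a browser.
// Returns weights for three hypothetical blend shapes: [kiss, lipsClosed, jawOpen].
function makeLipSync({ sensitivity = 0.5, smoothness = 0.6 } = {}) {
  let prev = [0, 0, 0]; // previous frame's weights, kept for smoothing

  return function update(bandEnergies) {
    // Scale each band energy by the per-speaker sensitivity, clamped to [0, 1].
    const target = bandEnergies.map(e => Math.min(1, e * (0.5 + sensitivity)));
    // Exponential smoothing suppresses frame-to-frame jitter on live input.
    prev = prev.map((p, i) => p * smoothness + target[i] * (1 - smoothness));
    return prev.slice();
  };
}
```

Usage: create one `update` function per speaker and call it once per animation frame, e.g. `const update = makeLipSync({ sensitivity: 0.7, smoothness: 0.5 }); const weights = update([0.2, 0.8, 0.1]);`, then copy the returned weights onto the character's morph targets.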