Web-based live speech-driven lip-sync
- dc.contributor.author Llorach, Gerard
- dc.contributor.author Evans, Alun Thomas
- dc.contributor.author Blat, Josep
- dc.contributor.author Grimm, Giso
- dc.contributor.author Hohmann, Volker
- dc.date.accessioned 2017-02-24T14:58:08Z
- dc.date.available 2017-02-24T14:58:08Z
- dc.date.issued 2016
- dc.description Paper presented at: 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games), held in Barcelona, 7-9 September 2016.
- dc.description.abstract Virtual characters are an integral part of many games and virtual worlds. The ability to accurately synchronize lip movement to audio speech is an important aspect in the believability of the character. In this paper we propose a simple rule-based lip-syncing algorithm for virtual agents using the web browser. It works in real-time with live input, unlike most current lip-syncing proposals, which may require considerable amounts of computation, expertise and time to set up. Our method generates reliable speech animation based on live speech using three blend shapes and no training, and it only needs manual adjustment of three parameters for each speaker (sensitivity, smoothness and vocal tract length). Our proposal is based on the limited real-time audio processing functions of the client web browser (thus, the algorithm needs to be simple), but this facilitates the use of web-based embodied conversational agents.
- dc.description.sponsorship This research has been partially funded by the Spanish Ministry of Economy and Competitiveness (RESET TIN2014-53199-C3-3-R), by the DFG research grant FOR173 and by the European Commission under the contract number H2020-645012-RIA (KRISTINA).
- dc.format.mimetype application/pdf
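The abstract describes a rule-based mapping from live audio to three blend-shape weights, tuned per speaker by sensitivity and smoothness parameters. A minimal sketch of one plausible rule step is shown below; the band split, the weight formulas, and the shape labels are illustrative assumptions, not the paper's actual rules, and in the browser the spectrum would come from a Web Audio API `AnalyserNode` fed by microphone input.

```javascript
// Hypothetical sketch: map short-time spectral energies to three
// blend-shape weights, as a rule-based lip-sync step might do.
// Band boundaries, mixing rules, and shape names are illustrative,
// not the formulas from the paper.
function lipSyncWeights(spectrum, sensitivity = 0.5, smoothness = 0.6, prev = [0, 0, 0]) {
  // spectrum: array of magnitude values from an FFT frame
  const third = Math.floor(spectrum.length / 3);
  const bandEnergy = (from, to) => {
    let sum = 0;
    for (let i = from; i < to; i++) sum += spectrum[i] * spectrum[i];
    return Math.sqrt(sum / Math.max(1, to - from)); // RMS of the band
  };
  const low = bandEnergy(0, third);
  const mid = bandEnergy(third, 2 * third);
  const high = bandEnergy(2 * third, spectrum.length);

  const clamp = (x) => Math.min(1, Math.max(0, x)); // keep weights in [0, 1]

  // Illustrative rules: each blend shape reacts to a mix of bands,
  // scaled by the per-speaker sensitivity parameter.
  const target = [
    clamp(sensitivity * (low - mid)),      // e.g. a "kiss" shape
    clamp(sensitivity * (mid + high) / 2), // e.g. a "lips pressed" shape
    clamp(sensitivity * (low + mid) / 2),  // e.g. a "jaw open" shape
  ];

  // Exponential smoothing across frames (the smoothness parameter),
  // so the mouth does not flicker on noisy input.
  return target.map((t, i) => smoothness * prev[i] + (1 - smoothness) * t);
}
```

Called once per analysis frame, the previous output is fed back in as `prev`, so raising `smoothness` trades responsiveness for stability, much as the abstract's per-speaker tuning suggests.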
- dc.identifier.citation Llorach G, Evans A, Blat J, Grimm G, Hohmann V. Web-based live speech-driven lip-sync. In: 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games); 2016 Sept. 7-9; Barcelona (Spain). [place unknown]: IEEE; 2016. [4 p.] DOI: 10.1109/VS-GAMES.2016.7590381
- dc.identifier.doi http://dx.doi.org/10.1109/VS-GAMES.2016.7590381
- dc.identifier.uri http://hdl.handle.net/10230/28139
- dc.language.iso eng
- dc.publisher Institute of Electrical and Electronics Engineers (IEEE)
- dc.relation.ispartof 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games); 2016 Sept. 7-9; Barcelona (Spain). [place unknown]: IEEE; 2016. [4 p.]
- dc.relation.projectID info:eu-repo/grantAgreement/ES/1PE/TIN2014-53199-C3-3-R
- dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/645012
- dc.rights © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The final published article can be found at http://dx.doi.org/10.1109/VS-GAMES.2016.7590381
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.subject.keyword Virtual characters
- dc.subject.keyword Lip synchronization
- dc.subject.keyword Visual speech synthesis
- dc.title Web-based live speech-driven lip-sync
- dc.type info:eu-repo/semantics/conferenceObject
- dc.type.version info:eu-repo/semantics/acceptedVersion