Modeling of phoneme durations for alignment between polyphonic audio and lyrics

Citation

  • Dzhambazov G, Serra X. Modeling of phoneme durations for alignment between polyphonic audio and lyrics. In: Timoney J, Lysaght T, editors. 12th Sound and Music Computing Conference; 2015 Jul 30-Aug 1; Maynooth (Ireland). Maynooth: Music Technology Research Group, Department of Computer Science, Maynooth University; 2015. Oral session 7, Computational musicology and mathematical music theory 1; p. 281-286.

Description

  • Abstract

    In this work we propose a modification of a standard text-to-speech alignment scheme for aligning lyrics and singing voice. To this end we model phoneme durations specific to the case of singing. We rely on a duration-explicit hidden Markov model (DHMM) phonetic recognizer based on mel-frequency cepstral coefficients (MFCCs), which are extracted in a way that is robust to background instrumental sounds. The proposed approach is tested on polyphonic audio from the classical Turkish music tradition in two settings: with and without modeling phoneme durations. Phoneme durations are inferred from sheet music. In order to assess the impact of the polyphonic setting, alignment is also evaluated on an a cappella dataset compiled especially for this study. We show that the explicit modeling of phoneme durations improves alignment accuracy by an absolute 10 percent at the level of lyrics lines (phrases) and performs on par with state-of-the-art aligners for other languages.

    A minimal sketch of the duration-explicit decoding idea appears after this record.
  • Description

    Paper presented at the 12th Sound and Music Computing Conference, held from 30 July to 1 August 2015 in Maynooth (Ireland).
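The abstract describes a duration-explicit HMM (DHMM): a forced aligner in which each phoneme's duration is scored explicitly, with expected durations taken from the sheet music. The paper's actual recognizer is not reproduced here; the sketch below is only a minimal, self-contained illustration of duration-explicit Viterbi decoding under assumed inputs: per-frame emission log-likelihoods for the phoneme sequence (e.g., from a GMM over MFCC vectors) and an unnormalised Gaussian duration score per phoneme. The function name `align_dhmm` and all values in the toy check are hypothetical.

```python
import numpy as np

def align_dhmm(emission_ll, mean_dur, std_dur, max_dur):
    """Duration-explicit Viterbi forced alignment (minimal sketch).

    emission_ll : (N, T) log-likelihood of each of the N phonemes
                  (in lyric order) emitting each of the T frames,
                  e.g. from a GMM over MFCC vectors.
    mean_dur    : (N,) expected duration in frames (from the score).
    std_dur     : (N,) duration standard deviations in frames.
    max_dur     : longest duration (frames) considered per phoneme.
    Returns ends[n] = frame index (exclusive) at which phoneme n ends.
    """
    N, T = emission_ll.shape
    # cum[n, t] = sum of emission_ll[n, :t] -> any segment score in O(1)
    cum = np.hstack([np.zeros((N, 1)), np.cumsum(emission_ll, axis=1)])

    delta = np.full((N, T + 1), -np.inf)    # best score, phoneme n ends at frame t
    back = np.zeros((N, T + 1), dtype=int)  # duration chosen at that cell

    def dur_ll(n, d):
        # Unnormalised Gaussian log-score of phoneme n lasting d frames
        return -0.5 * ((d - mean_dur[n]) / std_dur[n]) ** 2

    for n in range(N):
        for t in range(1, T + 1):
            for d in range(1, min(max_dur, t) + 1):
                # Path where phoneme n occupies frames t-d .. t-1
                if n == 0:
                    prev = 0.0 if t - d == 0 else -np.inf
                else:
                    prev = delta[n - 1, t - d]
                score = prev + dur_ll(n, d) + cum[n, t] - cum[n, t - d]
                if score > delta[n, t]:
                    delta[n, t], back[n, t] = score, d

    assert np.isfinite(delta[N - 1, T]), "no alignment within max_dur"
    ends, t = np.zeros(N, dtype=int), T  # backtrace from the final frame
    for n in range(N - 1, -1, -1):
        ends[n] = t
        t -= back[n, t]
    return ends

# Toy check: 3 phonemes over 12 frames, each expected to last ~4 frames.
# Emissions favour phoneme 0 on frames 0-3, phoneme 1 on 4-7, phoneme 2 on 8-11.
ll = np.full((3, 12), -5.0)
ll[0, 0:4] = ll[1, 4:8] = ll[2, 8:12] = -1.0
print(align_dhmm(ll, mean_dur=np.array([4.0, 4.0, 4.0]),
                 std_dur=np.array([1.0, 1.0, 1.0]), max_dur=8))
# expected boundaries: [ 4  8 12]
```

The cumulative-sum trick makes each candidate segment's emission score an O(1) lookup, so decoding runs in O(N · T · max_dur). A full DHMM recognizer would add transition probabilities and properly normalised duration distributions; those are omitted here for brevity.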