Singing voice phoneme segmentation by hierarchically inferring syllable and phoneme onset positions

dc.contributor.author Gong, Rong
dc.contributor.author Serra, Xavier
dc.date.accessioned 2019-04-16T08:12:28Z
dc.date.available 2019-04-16T08:12:28Z
dc.date.issued 2018
dc.identifier.citation Gong R, Serra X. Singing voice phoneme segmentation by hierarchically inferring syllable and phoneme onset positions. In: Interspeech 2018; 2018 Sep 2-6; Hyderabad, India. [Baixas]: ISCA; 2018. p. 716-20. DOI: 10.21437/Interspeech.2018-1224
dc.identifier.issn 1990-9772
dc.identifier.uri http://hdl.handle.net/10230/37115
dc.description Paper presented at Interspeech 2018, held September 2-6, 2018, in Hyderabad, India.
dc.description.abstract In this paper, we tackle the singing voice phoneme segmentation problem in the singing training scenario by using language-independent information: onsets and prior coarse durations. We propose a two-step method. In the first step, we jointly calculate the syllable and phoneme onset detection functions (ODFs) using a convolutional neural network (CNN). In the second step, the syllable and phoneme boundaries and labels are inferred hierarchically using a duration-informed hidden Markov model (HMM). To perform the inference, we incorporate the a priori duration model into the HMM as transition probabilities and the ODFs as emission probabilities. The proposed method is designed in a language-independent way such that no phoneme class labels are used. For model training and algorithm evaluation, we collect a new jingju (also known as Beijing or Peking opera) solo singing voice dataset and manually annotate the boundaries and labels at the phrase, syllable and phoneme levels. The dataset is publicly available. The proposed method is compared with a baseline based on hidden semi-Markov model (HSMM) forced alignment. The evaluation results show that the proposed method outperforms the baseline by a large margin on both the segmentation and onset detection tasks.
dc.description.sponsorship This work is supported by the CompMusic project (ERC grant agreement 267583).
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher International Speech Communication Association (ISCA)
dc.relation.ispartof Interspeech 2018; 2018 Sep 2-6; Hyderabad, India. [Baixas]: ISCA; 2018. p. 716-20.
dc.relation.isreferencedby https://doi.org/10.5281/zenodo.1185123
dc.rights © 2018 ISCA
dc.title Singing voice phoneme segmentation by hierarchically inferring syllable and phoneme onset positions
dc.type info:eu-repo/semantics/conferenceObject
dc.identifier.doi http://dx.doi.org/10.21437/Interspeech.2018-1224
dc.subject.keyword Singing voice
dc.subject.keyword Phoneme segmentation
dc.subject.keyword Onset detection
dc.subject.keyword Convolutional neural network
dc.subject.keyword Multi-task learning
dc.subject.keyword Duration-informed hidden Markov model
dc.relation.projectID info:eu-repo/grantAgreement/EC/FP7/267583
dc.rights.accessRights info:eu-repo/semantics/openAccess
dc.type.version info:eu-repo/semantics/publishedVersion
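
The two-step method summarised in the abstract pairs CNN-based onset detection functions (ODFs) with duration-informed decoding. The following is a minimal sketch of the second step only, assuming a Gaussian prior on each unit's duration and a simple dynamic program over candidate onset frames; it is illustrative, not the authors' implementation, and every name and parameter in it (infer_onsets, gaussian_log_prob, hop, sigma_ratio, the toy ODF) is a hypothetical choice.

import numpy as np


def gaussian_log_prob(x, mean, std):
    # log of an unnormalised Gaussian, used as a coarse duration prior (assumed form)
    return -0.5 * ((x - mean) / std) ** 2


def infer_onsets(odf, prior_durations, hop=0.01, sigma_ratio=0.35):
    # odf             : (T,) array of frame-wise onset probabilities in (0, 1),
    #                   e.g. the output of a CNN-based onset detection function
    # prior_durations : list of N coarse unit durations in seconds (from the score)
    # returns N onset frame indices; the first onset is fixed at frame 0
    T, N = len(odf), len(prior_durations)
    log_odf = np.log(np.clip(odf, 1e-8, 1.0))

    # best[i, t]: best score with the onset of unit i placed at frame t
    best = np.full((N, T), -np.inf)
    back = np.zeros((N, T), dtype=int)
    best[0, 0] = log_odf[0]  # the phrase starts with its first unit

    for i in range(1, N):
        mean = prior_durations[i - 1] / hop  # expected length of unit i-1 in frames
        std = max(sigma_ratio * mean, 1.0)
        for t in range(i, T):
            prev = np.arange(i - 1, t)       # candidate onset frames of unit i-1
            scores = best[i - 1, prev] + gaussian_log_prob(t - prev, mean, std)
            s = int(np.argmax(scores))
            best[i, t] = scores[s] + log_odf[t]
            back[i, t] = prev[s]

    # close the last unit against the end of the phrase
    mean = prior_durations[-1] / hop
    std = max(sigma_ratio * mean, 1.0)
    final = best[N - 1] + gaussian_log_prob(T - np.arange(T), mean, std)

    onsets = [int(np.argmax(final))]         # backtrack the onset sequence
    for i in range(N - 1, 0, -1):
        onsets.append(int(back[i, onsets[-1]]))
    return onsets[::-1]


# toy usage: a synthetic ODF with peaks at frames 0, 50 and 120
odf = np.full(200, 0.05)
odf[[0, 50, 120]] = 0.9
print(infer_onsets(odf, prior_durations=[0.5, 0.7, 0.8]))  # expected: [0, 50, 120]

In this sketch the ODF plays the role of the emission term (how onset-like each frame is) and the Gaussian duration prior plays the role of the transition term (how plausible each implied unit length is), mirroring the division of labour the abstract attributes to the duration-informed HMM.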

