What/when causal expectation modelling applied to audio signals


  • dc.contributor.author Hazan, Amaury
  • dc.contributor.author Marxer Piñón, Ricard
  • dc.contributor.author Brossier, Paul
  • dc.contributor.author Purwins, Hendrik
  • dc.contributor.author Herrera Boyer, Perfecto, 1964-
  • dc.contributor.author Serra, Xavier
  • dc.date.accessioned 2018-04-26T08:57:29Z
  • dc.date.available 2018-04-26T08:57:29Z
  • dc.date.issued 2009
  • dc.description.abstract A causal system that represents a stream of music as a sequence of musical events, and generates further expected events, is presented. Starting from an auditory front-end that extracts low-level features (i.e. MFCCs) and mid-level features such as onsets and beats, an unsupervised clustering process builds and maintains a set of symbols aimed at representing musical stream events using both timbre and time descriptions. The time events are represented using inter-onset intervals relative to the beats. These symbols are then processed by an expectation module using Predictive Partial Match, a multiscale technique based on N-grams. To characterise the ability of the system to generate an expectation that matches both ground truth and system transcription, we introduce several measures that take into account the uncertainty associated with the unsupervised encoding of the musical sequence. The system is evaluated using a subset of the ENST-drums database of annotated drum recordings. We compare three approaches to combining timing (when) and timbre (what) expectation. In our experiments, we show that the induced representation is useful for generating expectation patterns in a causal way. (An illustrative code sketch of this what/when pipeline follows the record below.)
  • dc.description.sponsorship This work is partially funded by the EmCAP project (European Commission FP6-IST, contract 013123).
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Hazan A, Marxer R, Brossier P, Purwins H, Herrera P, Serra X. What/when causal expectation modelling applied to audio signals. Connection Science. 2009;21(2-3):119-43. DOI: 10.1080/09540090902733764
  • dc.identifier.doi http://dx.doi.org/10.1080/09540090902733764
  • dc.identifier.issn 0954-0091
  • dc.identifier.uri http://hdl.handle.net/10230/34478
  • dc.language.iso eng
  • dc.publisher Taylor & Francis (Routledge)
  • dc.relation.ispartof Connection Science. 2009;21(2-3):119-43.
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/FP6/013123
  • dc.rights © Taylor & Francis. This is an electronic version of an article published in Hazan A, Marxer R, Brossier P, Purwins H, Herrera P, Serra X. What/when causal expectation modelling applied to audio signals. Connection Science. 2009;21(2-3):119-43. Connection Science is available online at: https://www.tandfonline.com/doi/abs/10.1080/09540090902733764
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.subject.keyword Unsupervised learning
  • dc.subject.keyword Music
  • dc.subject.keyword Audio
  • dc.subject.keyword Expectation
  • dc.title What/when causal expectation modelling applied to audio signals
  • dc.type info:eu-repo/semantics/article
  • dc.type.version info:eu-repo/semantics/acceptedVersion
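
The sketch below is a minimal, hypothetical reconstruction of the pipeline described in the abstract, not the authors' implementation: librosa stands in for the auditory front-end (MFCCs, onsets, beats), scikit-learn's KMeans for the unsupervised symbolization, and a fixed-order n-gram table for the multiscale Predictive Partial Match expectation module. The cluster count, IOI bin edges, n-gram order, and the input file name are all illustrative assumptions; the joint (timbre, IOI) symbol is one plausible reading of the paper's what/when combination strategies.

```python
"""Hedged sketch of a what/when causal expectation pipeline."""
from collections import Counter, defaultdict

import librosa
import numpy as np
from sklearn.cluster import KMeans


def encode(audio_path, n_timbre_clusters=8, ioi_bins=(0.25, 0.5, 1.0, 2.0)):
    """Encode an audio file as a sequence of (timbre cluster, quantised IOI) symbols."""
    y, sr = librosa.load(audio_path, sr=None)

    # Mid-level features: onset times and the beat period (the "when" axis).
    onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    beat_period = 60.0 / float(tempo)

    # Low-level features: one MFCC frame per onset (the "what" axis).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    frames = np.clip(librosa.time_to_frames(onset_times, sr=sr), 0, mfcc.shape[1] - 1)
    timbre = KMeans(n_clusters=n_timbre_clusters, n_init=10).fit_predict(mfcc[:, frames].T)

    # Inter-onset intervals expressed relative to the beat, then quantised
    # into a small alphabet of rhythmic categories.
    ioi = np.diff(onset_times) / beat_period
    ioi_sym = np.digitize(ioi, ioi_bins)

    # Joint what/when symbols: event i pairs the timbre of onset i with
    # the interval that precedes it.
    return list(zip(timbre[1:].tolist(), ioi_sym.tolist()))


class NGramExpectation:
    """Fixed-order n-gram predictor; a simplified stand-in for PPM."""

    def __init__(self, order=3):
        self.order = order
        self.table = defaultdict(Counter)

    def fit(self, symbols):
        # Count how often each symbol follows each length-`order` context.
        for i in range(self.order, len(symbols)):
            context = tuple(symbols[i - self.order : i])
            self.table[context][symbols[i]] += 1
        return self

    def expect(self, context):
        """Return the most expected next symbol, or None for an unseen context."""
        counts = self.table.get(tuple(context[-self.order :]))
        return counts.most_common(1)[0][0] if counts else None


if __name__ == "__main__":
    seq = encode("drum_loop.wav")  # hypothetical input recording
    model = NGramExpectation(order=3).fit(seq)
    print("expected next event:", model.expect(seq[-3:]))
```

A real PPM model would blend predictions across all context lengths up to the maximum order with escape probabilities, which is what lets it stay causal and adapt online; the single-order table above keeps the sketch short while showing where the what/when symbols enter the expectation stage.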