Welcome to the UPF Digital Repository

What/when causal expectation modelling applied to audio signals


dc.contributor.author Hazan, Amaury
dc.contributor.author Marxer Piñón, Ricard
dc.contributor.author Brossier, Paul
dc.contributor.author Purwins, Hendrik
dc.contributor.author Herrera Boyer, Perfecto
dc.contributor.author Serra, Xavier
dc.date.accessioned 2018-04-26T08:57:29Z
dc.date.available 2018-04-26T08:57:29Z
dc.date.issued 2009
dc.identifier.citation Hazan A, Marxer R, Brossier P, Purwins H, Herrera P, Serra X. What/when causal expectation modelling applied to audio signals. Connection Science. 2009;21(2-3):119-43. DOI: 10.1080/09540090902733764
dc.identifier.issn 0954-0091
dc.identifier.uri http://hdl.handle.net/10230/34478
dc.description.abstract A causal system that represents a stream of music as musical events, and generates further expected events, is presented. Starting from an auditory front-end that extracts low-level features (i.e. MFCCs) and mid-level features such as onsets and beats, an unsupervised clustering process builds and maintains a set of symbols aimed at representing musical stream events using both timbre and time descriptions. The time events are represented using inter-onset intervals relative to the beats. These symbols are then processed by an expectation module using Predictive Partial Match, a multiscale technique based on N-grams. To characterise the ability of the system to generate an expectation that matches both the ground truth and the system transcription, we introduce several measures that take into account the uncertainty associated with the unsupervised encoding of the musical sequence. The system is evaluated using a subset of the ENST-drums database of annotated drum recordings. We compare three approaches to combining timing (when) and timbre (what) expectation. In our experiments, we show that the induced representation is useful for generating expectation patterns in a causal way.
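The expectation module sketched in the abstract predicts the next event symbol from the symbol stream produced by the unsupervised encoder. As a rough illustration only (not the authors' implementation, which uses the Predictive Partial Match family of multiscale N-gram models with escape probabilities), a minimal back-off n-gram predictor over hypothetical drum-event symbols might look like this:

```python
from collections import defaultdict

class NGramExpectation:
    """Minimal back-off n-gram predictor over a symbol stream.

    Illustrative sketch: it backs off from the longest matching
    context to shorter ones, rather than blending orders via escape
    probabilities as PPM-style models do.
    """

    def __init__(self, max_order=3):
        self.max_order = max_order
        # counts[context][symbol] -> how often `symbol` followed `context`
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []

    def observe(self, symbol):
        """Update counts for every context length ending at `symbol`."""
        for order in range(min(self.max_order, len(self.history)) + 1):
            context = tuple(self.history[len(self.history) - order:])
            self.counts[context][symbol] += 1
        self.history.append(symbol)

    def expect(self):
        """Return the most expected next symbol, or None if untrained."""
        for order in range(min(self.max_order, len(self.history)), -1, -1):
            context = tuple(self.history[len(self.history) - order:])
            if context in self.counts:
                dist = self.counts[context]
                return max(dist, key=dist.get)
        return None

# Toy usage with made-up symbols standing in for cluster labels.
model = NGramExpectation(max_order=2)
for s in "kick snare kick snare kick".split():
    model.observe(s)
print(model.expect())  # prints "snare"
```

In the paper's setting the symbols would carry both a timbre cluster label and a beat-relative inter-onset interval; this sketch deliberately keeps a single opaque symbol alphabet.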
dc.description.sponsorship This work is partially funded by the EmCAP project (European Commission FP6-IST, contract 013123).
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher Taylor & Francis (Routledge)
dc.relation.ispartof Connection Science. 2009;21(2-3):119-43.
dc.rights © Taylor & Francis. This is an electronic version of an article published in Hazan A, Marxer R, Brossier P, Purwins H, Herrera P, Serra X. What/when causal expectation modelling applied to audio signals. Connection Science. 2009;21(2-3):119-43. Connection Science is available online at: https://www.tandfonline.com/doi/abs/10.1080/09540090902733764
dc.title What/when causal expectation modelling applied to audio signals
dc.type info:eu-repo/semantics/article
dc.identifier.doi http://dx.doi.org/10.1080/09540090902733764
dc.subject.keyword Unsupervised learning
dc.subject.keyword Music
dc.subject.keyword Audio
dc.subject.keyword Expectation
dc.relation.projectID info:eu-repo/grantAgreement/EC/FP6/013123
dc.rights.accessRights info:eu-repo/semantics/openAccess
dc.type.version info:eu-repo/semantics/acceptedVersion

