Acoustic cues to beat induction. A machine learning perspective


  • dc.contributor.author Gouyon, Fabien
  • dc.contributor.author Widmer, Gerhard
  • dc.contributor.author Serra, Xavier
  • dc.contributor.author Flexer, Arthur
  • dc.date.accessioned 2018-02-15T11:38:26Z
  • dc.date.available 2018-02-15T11:38:26Z
  • dc.date.issued 2006
  • dc.description.abstract This article brings forward the question of which acoustic features are the most adequate for identifying beats computationally in acoustic music pieces. We consider many different features computed on consecutive short portions of the acoustic signal, including those currently promoted in the literature on beat induction from acoustic signals as well as several original features not previously mentioned in that literature. Evaluation of feature sets regarding their ability to provide reliable cues to the localization of beats is based on a machine learning methodology applied to a large corpus of beat-annotated music pieces, in audio format, covering distinct music categories. Confirming common knowledge, energy is shown to be a very relevant cue to beat induction, especially the temporal variation of energy in various frequency bands, with particular relevance of the bands below 500 Hz and above 5 kHz. Some of the new features proposed in this paper are shown to outperform features currently promoted in the literature on beat induction from acoustic signals. We finally hypothesize that modeling beat induction may involve many different, complementary acoustic features and that the process of selecting relevant features should partly depend on acoustic properties of the very signal under consideration. (An illustrative code sketch of such a band-energy feature follows this record.)
  • dc.description.sponsorship This research was partly funded by the EU projects S2S2 and SIMAC. The Austrian Research Institute for Artificial Intelligence acknowledges the support of BMBWK and BMVIT.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Gouyon F, Widmer G, Serra X, Flexer A. Acoustic cues to beat induction. A machine learning perspective. Music Perception. 2006;24(2):177-88. DOI: 10.1525/mp.2006.24.2.177
  • dc.identifier.doi http://dx.doi.org/10.1525/mp.2006.24.2.177
  • dc.identifier.issn 0730-7829
  • dc.identifier.uri http://hdl.handle.net/10230/33924
  • dc.language.iso eng
  • dc.publisher University of California Press
  • dc.relation.ispartof Music Perception. 2006;24(2):177-88.
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/FP6/507142
  • dc.rights Published as Gouyon F, Widmer G, Serra X, Flexer A. Acoustic cues to beat induction. A machine learning perspective. Music Perception. 2006;24(2):177-88. DOI: 10.1525/mp.2006.24.2.177. © 2006 by the Regents of the University of California. Copying and permissions notice: Authorization to copy this content beyond fair use (as specified in Sections 107 and 108 of the U.S. Copyright Law) for internal or personal use, or the internal or personal use of specific clients, is granted by the Regents of the University of California for libraries and other users, provided that they are registered with and pay the specified fee via Rightslink® or directly with the Copyright Clearance Center.
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.subject.keyword Beat induction
  • dc.subject.keyword Rhythm
  • dc.subject.keyword Phenomenal accent
  • dc.subject.keyword Acoustic cues
  • dc.subject.keyword Feature selection
  • dc.title Acoustic cues to beat induction. A machine learning perspective
  • dc.type info:eu-repo/semantics/article
  • dc.type.version info:eu-repo/semantics/publishedVersion
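
The abstract reports the temporal variation of energy in frequency bands below 500 Hz and above 5 kHz as a particularly reliable cue to beat location. The sketch below illustrates that kind of frame-wise band-energy feature; it is not the authors' actual pipeline, and the frame/hop sizes, band edges, use of scipy.signal.stft, and the half-wave-rectified log-energy difference are illustrative assumptions.

```python
# Minimal sketch (assumed parameters, not the paper's implementation):
# frame-wise energy in a low (<500 Hz) and a high (>5 kHz) band, and its
# half-wave-rectified frame-to-frame variation.
import numpy as np
from scipy.signal import stft

def band_energy_variation(x, sr, frame=1024, hop=512,
                          bands=((0.0, 500.0), (5000.0, None))):
    """Per band, return the rectified frame-to-frame change in log energy,
    computed on consecutive short portions of the signal."""
    freqs, _, Z = stft(x, fs=sr, nperseg=frame, noverlap=frame - hop)
    power = np.abs(Z) ** 2                      # (n_bins, n_frames)
    features = []
    for lo, hi in bands:
        hi = sr / 2 if hi is None else hi
        mask = (freqs >= lo) & (freqs < hi)
        e = power[mask].sum(axis=0) + 1e-12     # band energy per frame
        d = np.diff(np.log(e), prepend=np.log(e[0]))
        features.append(np.maximum(d, 0.0))     # keep only energy increases
    return np.stack(features)                   # (n_bands, n_frames)

if __name__ == "__main__":
    sr = 22050
    t = np.arange(sr * 2) / sr
    # toy signal: short low-frequency pulses plus low-level noise
    x = np.sin(2 * np.pi * 100 * t) * (np.sin(2 * np.pi * 2 * t) > 0.95)
    x = x + 0.01 * np.random.randn(x.size)
    feats = band_energy_variation(x, sr)
    print(feats.shape)                          # (2, n_frames)
```

Peaks in such per-band variation curves are the kind of low-level cue that a beat-induction or feature-selection stage, as described in the abstract, could then evaluate against beat annotations.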