Mapping between dynamic markings and performed loudness: a machine learning approach

dc.contributor.author: Kosta, Katerina
dc.contributor.author: Ramírez, Rafael, 1966-
dc.contributor.author: Bandtlow, Oscar F.
dc.contributor.author: Chew, Elaine
dc.date.accessioned: 2020-11-16T12:00:19Z
dc.date.available: 2020-11-16T12:00:19Z
dc.date.issued: 2016
dc.description.abstract: Loudness variation is one of the foremost tools for expressivity in music performance. Loudness is frequently notated as dynamic markings such as p (piano, meaning soft) or f (forte, meaning loud). While dynamic markings in music scores are important indicators of how music pieces should be interpreted, their meaning is less straightforward than it may seem, and depends heavily on the context in which they appear. In this article, we investigate the relationship between dynamic markings in the score and performed loudness by applying machine learning techniques – decision trees, support vector machines, artificial neural networks, and a k-nearest neighbor method – to the prediction of loudness levels corresponding to dynamic markings, and to the classification of dynamic markings given loudness values. The methods are applied to 44 recordings of performances of Chopin's Mazurkas, each by 8 pianists. The results show that loudness values and markings can be predicted relatively well when trained across recordings of the same piece, but fail dismally when trained across the pianist's recordings of other pieces, demonstrating that score features may trump individual style when modeling loudness choices. Evidence suggests that all the features chosen for the task are relevant, and analysis of the results reveals the forms (such as the return of the theme) and structures (such as dynamic-marking repetitions) that influence the predictability of loudness and markings. Modeling of loudness trends in expressive performance appears to be a delicate matter, and sometimes loudness expression can be a matter of the performer's idiosyncrasy.
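The abstract's classification task – predicting a dynamic marking from observed loudness values – can be illustrated with a toy k-nearest-neighbour classifier, one of the four method families the paper applies. This is a minimal sketch only: the loudness values and marking labels below are synthetic, invented for illustration, and do not come from the study's data or features.

```python
from collections import Counter

def knn_classify(train, loudness, k=3):
    """Return the majority marking among the k training points
    whose loudness value is closest to the query value."""
    nearest = sorted(train, key=lambda pair: abs(pair[0] - loudness))[:k]
    markings = [marking for _, marking in nearest]
    return Counter(markings).most_common(1)[0][0]

# Synthetic (loudness, marking) training pairs -- illustrative only.
train = [(2.0, "p"), (2.5, "p"), (3.0, "p"),
         (6.0, "mf"), (6.5, "mf"), (7.0, "mf"),
         (11.0, "f"), (11.5, "f"), (12.0, "f")]

# Classify three unseen loudness observations.
for query in (2.2, 6.8, 11.8):
    print(query, "->", knn_classify(train, query))  # p, mf, f
```

The paper's actual models additionally use score context features, which is precisely why a context-free mapping like this toy one cannot capture the ambiguity of markings the abstract describes.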
dc.description.sponsorship: This work was supported in part by a UK EPSRC Platform Grant for Digital Music [EP/K009559/1]; the Spanish TIN project TIMUL [TIN2013-48152-C2-2-R]; and the European Union's Horizon 2020 research and innovation programme [grant agreement No. 688269].
dc.format.mimetype: application/pdf
dc.identifier.citation: Kosta K, Ramírez R, Bandtlow OF, Chew E. Mapping between dynamic markings and performed loudness: a machine learning approach. Journal of Mathematics and Music. 2016 Aug 3;10(2):149-72. DOI: 10.1080/17459737.2016.1193237
dc.identifier.doi: http://dx.doi.org/10.1080/17459737.2016.1193237
dc.identifier.issn: 1745-9737
dc.identifier.uri: http://hdl.handle.net/10230/45777
dc.language.iso: eng
dc.publisher: Taylor & Francis
dc.relation.ispartof: Journal of Mathematics and Music. 2016 Aug 3;10(2):149-72.
dc.relation.projectID: info:eu-repo/grantAgreement/ES/1PE/TIN2013-48152-C2-2-R
dc.relation.projectID: info:eu-repo/grantAgreement/EC/H2020/688269
dc.rights: © This is an Accepted Manuscript of an article published by Taylor & Francis in Journal of Mathematics and Music: Mathematical and Computational Approaches to Music Theory, Analysis, Composition and Performance on 3 Aug 2016, available online: http://www.tandfonline.com/10.1080/17459737.2016.1193237
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.subject.keyword: Dynamic markings
dc.subject.keyword: Loudness-level representation
dc.subject.keyword: Machine learning
dc.subject.keyword: Loudness prediction
dc.subject.keyword: Marking classification
dc.title: Mapping between dynamic markings and performed loudness: a machine learning approach
dc.type: info:eu-repo/semantics/article
dc.type.version: info:eu-repo/semantics/acceptedVersion

Files

Original bundle

Name: costa_jmm_map.pdf
Size: 1.08 MB
Format: Adobe Portable Document Format