Automatic characterization and generation of music loops and instrument samples for electronic music production

dc.contributor.author: Ramires, António
dc.contributor.director: Serra, Xavier
dc.contributor.director: Font Corbera, Frederic
dc.date.accessioned: 2025-12-10T11:47:21Z
dc.date.available: 2025-12-10T11:47:21Z
dc.date.issued: 2023-02-08
dc.description.abstract: Repurposing audio material to create new music, also known as sampling, was a foundation of electronic music and remains a fundamental component of this practice. Today, large-scale audio databases offer vast collections of material for users to work with. Navigation in these databases relies heavily on hierarchical tree directories; consequently, sound retrieval is tiresome and often identified as an unwelcome interruption of the creative process. We address two fundamental methods for navigating sounds: characterization and generation. Characterizing loops and one-shots in terms of instruments or instrumentation allows unstructured collections to be organized and sounds to be retrieved faster for music-making. Generating loops and one-shot sounds enables the creation of sounds not present in an audio collection through interpolation or modification of the existing material. To achieve this, we employ data-driven, deep-learning-based methodologies for classification and generation.
dc.description.degree: Doctoral programme in Information and Communication Technologies (Programa de doctorat en Tecnologies de la Informació i les Comunicacions)
dc.embargo.terms: none
dc.format.extent: 182 p.
dc.identifier: http://hdl.handle.net/10803/687697
dc.identifier.uri: https://hdl.handle.net/10803/687697
dc.language.iso: eng
dc.publisher: Universitat Pompeu Fabra
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.license: Access to the contents of this thesis is subject to acceptance of the terms of use established by the following Creative Commons licence: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.source: TDX (Tesis Doctorals en Xarxa)
dc.subject: Electronic music production
dc.subject: Instrument classification
dc.subject: Percussive sound generation
dc.subject: Music information retrieval
dc.subject: Deep learning
dc.subject: Deep generative models
dc.subject.udc: 62
dc.title: Automatic characterization and generation of music loops and instrument samples for electronic music production
dc.type: info:eu-repo/semantics/doctoralThesis
dc.type: info:eu-repo/semantics/publishedVersion
