Ramires, António; Serra, Xavier; Font Corbera, Frederic
Universitat Pompeu Fabra. Departament de Tecnologies de la Informació i les Comunicacions
Dates: 2024-03-16; 2023-02-15; 2023-02-08
http://hdl.handle.net/10230/55792

Title: Automatic characterization and generation of music loops and instrument samples for electronic music production

Abstract: Repurposing audio material to create new music, also known as sampling, was a foundation of electronic music and remains a fundamental component of this practice. Today, large-scale audio databases offer vast collections of material for users to work with. Navigation in these databases relies heavily on hierarchical tree directories, so sound retrieval is tiresome and often identified as an undesired interruption in the creative process. We address two fundamental methods for navigating sounds: characterization and generation. Characterizing loops and one-shots in terms of instruments or instrumentation allows unstructured collections to be organized and retrieved faster for music-making. Generating loops and one-shot sounds enables the creation of sounds not present in an audio collection through interpolation or modification of the existing material. To achieve this, we employ deep-learning-based, data-driven methodologies for classification and generation.

Doctoral programme: Programa de doctorat en Tecnologies de la Informació i les Comunicacions
Extent: 182 p.
Format: application/pdf
Language: English
Document type: Doctoral thesis (info:eu-repo/semantics/doctoralThesis)
Access and license: Open access (info:eu-repo/semantics/openAccess). Access to the contents of this thesis is subject to acceptance of the terms of use established by the following Creative Commons license: http://creativecommons.org/licenses/by-nc-nd/4.0/

Keywords: Electronic music production; Instrument classification; Percussive sound generation; Music information retrieval; Deep learning; Deep generative models
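As a rough illustration of the characterization task the abstract describes (labelling loops and one-shots by instrument with a deep model), the sketch below classifies a sample from its log-mel spectrogram with a small CNN. It is a minimal sketch under assumptions: the PyTorch/torchaudio pipeline, the label set, the file path, and the network shape are illustrative choices, not the models developed in the thesis.

import torch
import torch.nn as nn
import torchaudio

# Hypothetical label set; the thesis taxonomy of instruments may differ.
INSTRUMENTS = ["kick", "snare", "hi-hat", "bass", "synth lead"]

class InstrumentClassifier(nn.Module):
    def __init__(self, n_classes: int, n_mels: int = 64, sample_rate: int = 22050):
        super().__init__()
        # Log-mel front end: waveform (batch, time) -> (batch, n_mels, frames)
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_mels=n_mels
        )
        self.db = torchaudio.transforms.AmplitudeToDB()
        # Small CNN pooled over time and frequency, so variable-length input works
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        x = self.db(self.melspec(waveform)).unsqueeze(1)  # (batch, 1, mels, frames)
        x = self.cnn(x).flatten(1)                        # (batch, 32)
        return self.head(x)                               # unnormalized class scores

if __name__ == "__main__":
    model = InstrumentClassifier(n_classes=len(INSTRUMENTS))
    wav, sr = torchaudio.load("one_shot.wav")             # hypothetical input file
    wav = torchaudio.functional.resample(wav, sr, 22050).mean(dim=0, keepdim=True)
    probs = model(wav).softmax(dim=-1)
    print(INSTRUMENTS[probs.argmax().item()])

Global average pooling before the classification head is what lets the same network score both short one-shots and longer loops; in practice such a model would be trained on a labelled sample collection before its predictions are meaningful.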