Title: End-to-end learning for music audio tagging at scale
Authors: Pons Puig, Jordi; Nieto Caballero, Oriol; Prockup, Matthew; Schmidt, Erik M.; Ehmann, Andreas F.; Serra, Xavier
Date issued: 2017
Date available: 2019-05-13
Citation: Pons J, Nieto O, Prockup M, Schmidt EM, Ehmann AF, Serra X. End-to-end learning for music audio tagging at scale. Paper presented at: Workshop Machine Learning for Audio Signal Processing at NIPS 2017 (ML4Audio@NIPS17); 2017 Dec 4-9; Long Beach, CA. [Copenhagen]: Sound & Music Computing; 2017. 5 p.
URI: http://hdl.handle.net/10230/37217
Note: Paper presented at the Workshop Machine Learning for Audio Signal Processing at NIPS 2017 (ML4Audio@NIPS17), held December 4-9, 2017, in Long Beach, California.
Abstract: The lack of data tends to limit the outcomes of deep learning research, especially when dealing with end-to-end learning stacks that process raw data such as waveforms. In this study we make use of musical labels annotated for 1.2 million tracks. This large amount of data allows us to explore different front-end paradigms without restriction: from assumption-free models, which take raw waveforms as input and use very small convolutional filters, to models that rely on domain knowledge, which use log-mel spectrograms with a convolutional neural network designed to learn temporal and timbral features. Results suggest that while spectrogram-based models surpass their waveform-based counterparts, the difference in performance shrinks as more data are employed.
Format: application/pdf
Language: eng
Rights: © Sound & Music Computing
Type: info:eu-repo/semantics/conferenceObject
Access: info:eu-repo/semantics/openAccess
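To make the two front-end paradigms from the abstract concrete, here is a minimal PyTorch sketch of both: a stack of 1-D convolutions with very small filters applied directly to the waveform, and a 2-D front-end over a log-mel spectrogram with filters shaped to capture timbral (frequency-spanning) and temporal (time-spanning) patterns. All filter sizes, strides, channel counts, and module names are illustrative assumptions for this sketch, not the paper's exact configuration.

import torch
import torch.nn as nn

class WaveformFrontEnd(nn.Module):
    """Assumption-free front-end: stacked 1-D convolutions with very
    small filters applied to the raw waveform. Filter sizes and strides
    here are hypothetical, not the paper's reported setup."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, stride=3),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, stride=3),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
        )

    def forward(self, wav):  # wav: (batch, 1, samples)
        return self.net(wav)

class SpectrogramFrontEnd(nn.Module):
    """Domain-knowledge front-end: 2-D convolutions over a log-mel
    spectrogram. Vertical filters span the frequency axis (timbre);
    horizontal filters span the time axis (temporal patterns)."""
    def __init__(self, n_mels=96, channels=32):
        super().__init__()
        # Vertical filters: wide in frequency, narrow in time.
        self.timbral = nn.Conv2d(1, channels, kernel_size=(n_mels, 7),
                                 padding=(0, 3))
        # Horizontal filters: narrow in frequency, wide in time.
        self.temporal = nn.Conv2d(1, channels, kernel_size=(1, 64))

    def forward(self, spec):  # spec: (batch, 1, n_mels, frames)
        timbre = torch.relu(self.timbral(spec)).squeeze(2)  # (batch, C, frames)
        tempo = torch.relu(self.temporal(spec)).mean(dim=2)  # (batch, C, frames-63)
        # Trim both feature maps to a common length before concatenating.
        t = min(timbre.shape[-1], tempo.shape[-1])
        return torch.cat([timbre[..., :t], tempo[..., :t]], dim=1)

# Example: a 3-second clip at 16 kHz, and a 96-band log-mel spectrogram.
wave_feats = WaveformFrontEnd()(torch.randn(4, 1, 48000))
spec_feats = SpectrogramFrontEnd()(torch.randn(4, 1, 96, 256))

Either front-end would feed a shared back-end that maps the resulting feature maps to tag predictions; the abstract's claim is that with enough training data the gap between the two front-ends narrows.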