In the past, Acoustic Scene Classification systems have been based on hand-crafted audio features that are input to a classifier. Nowadays, the common trend is to adopt data-driven techniques, e.g., deep learning, where audio representations are learned from data. In this paper, we propose a system that consists of a simple fusion of two methods of the aforementioned types: a deep learning approach, where log-scaled mel-spectrograms are input to a convolutional neural network, and a feature engineering approach, where a collection of hand-crafted features is input to a gradient boosting machine. We first show that both methods provide complementary information to some extent. Then, we use a simple late fusion strategy to combine both methods. We report the classification accuracy of each method individually and of the combined system on the TUT Acoustic Scenes 2017 dataset. The proposed fused system outperforms each of the individual methods and attains a classification accuracy of 72.8% on the evaluation set, improving the baseline system by 11.8%.
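To make the late fusion concrete, the sketch below shows one common instance of such a strategy: a weighted average of the per-class probabilities produced by the two models, followed by an argmax. This is a minimal illustration under stated assumptions; the equal 0.5/0.5 weighting and the `late_fusion` helper are hypothetical, as the abstract does not specify the exact fusion scheme.

```python
import numpy as np

def late_fusion(cnn_probs: np.ndarray, gbm_probs: np.ndarray,
                w: float = 0.5) -> np.ndarray:
    """Fuse two (n_samples, n_classes) probability matrices by weighted
    averaging and return the predicted class index per sample.
    The weight w is an assumption, not the paper's reported setting."""
    fused = w * cnn_probs + (1.0 - w) * gbm_probs
    return fused.argmax(axis=1)

# Toy usage with random probability vectors for 4 clips and the
# 15 scene classes of TUT Acoustic Scenes 2017.
rng = np.random.default_rng(0)
cnn = rng.dirichlet(np.ones(15), size=4)  # stand-in for CNN softmax outputs
gbm = rng.dirichlet(np.ones(15), size=4)  # stand-in for GBM class probabilities
print(late_fusion(cnn, gbm))
```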