A multimodal late fusion framework for physiological sensor and audio-signal-based stress detection: an experimental study and public dataset
- dc.contributor.author Xefteris, Vasileios-Rafail
- dc.contributor.author Domínguez Bajo, Mónica
- dc.contributor.author Grivolla, Jens
- dc.contributor.author Tsanousa, Athina
- dc.contributor.author Zaffanela, Francesco
- dc.contributor.author Monego, Martina
- dc.contributor.author Symeonidis, Spyridon
- dc.contributor.author Diplaris, Sotiris
- dc.contributor.author Wanner, Leo
- dc.contributor.author Vrochidis, Stefanos
- dc.contributor.author Kompatsiaris, Ioannis
- dc.date.accessioned 2024-06-25T06:04:56Z
- dc.date.available 2024-06-25T06:04:56Z
- dc.date.issued 2023
- dc.description.abstract Stress can be considered a mental/physiological reaction in conditions of high discomfort and challenging situations. Stress levels can be reflected in both the physiological responses and the speech signals of a person; therefore, studying the fusion of the two modalities is of great interest. To this end, public datasets are necessary so that the different proposed solutions can be compared. In this work, a publicly available multimodal dataset for stress detection is introduced, including physiological signals and speech cue data. The physiological signals include electrocardiograph (ECG), respiration (RSP), and inertial measurement unit (IMU) sensors fitted in a smart vest. A data collection protocol was introduced to acquire physiological and audio data based on alternations between well-known stressors and relaxation periods. Five subjects participated in the data collection, where both their physiological and audio signals were recorded using the developed smart vest and an audio recording application. In addition, an analysis of the data and a decision-level fusion scheme are proposed. The analysis of physiological signals includes extensive feature extraction along with various fusion and feature selection methods. The audio analysis comprises state-of-the-art feature extraction fed to a classifier to predict stress levels. Results from the analysis of audio and physiological signals are fused at the decision level for the final stress detection, utilizing a machine learning algorithm. The whole framework was also tested in a real-life pilot scenario of disaster management, where users acted as first responders while their stress was monitored in real time.
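The decision-level (late) fusion described in the abstract can be illustrated with a minimal sketch: each modality's classifier emits a stress probability, and these scores are combined into a final label. The weighted-average rule, weights, and threshold below are illustrative assumptions only; the paper itself trains a machine learning algorithm as the fusion step.

```python
# Minimal sketch of decision-level (late) fusion, assuming each modality
# classifier outputs a stress probability in [0, 1]. The weights and
# threshold are hypothetical placeholders, not values from the paper.

def fuse_decisions(p_physio, p_audio, w_physio=0.6, w_audio=0.4, threshold=0.5):
    """Combine per-modality stress probabilities into a final label.

    Returns 1 (stress) if the weighted score reaches the threshold,
    otherwise 0 (no stress).
    """
    fused = w_physio * p_physio + w_audio * p_audio
    return 1 if fused >= threshold else 0

# Example: the physiological model is fairly confident of stress,
# the audio model less so; the fused score is 0.6*0.8 + 0.4*0.4 = 0.64.
label = fuse_decisions(0.8, 0.4)  # -> 1 (stress)
```

In practice the paper replaces this fixed rule with a learned meta-classifier, which can weight each modality adaptively based on training data.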
- dc.description.sponsorship This work was supported by the XR4DRAMA project funded by the European Commission (H2020) under the grant number 952133.
- dc.format.mimetype application/pdf
- dc.identifier.citation Xefteris VR, Dominguez M, Grivolla J, Tsanousa A, Zaffanela F, Monego M, et al. A multimodal late fusion framework for physiological sensor and audio-signal-based stress detection: an experimental study and public dataset. Electronics. 2023 Dec 2;12(23):4871. DOI: 10.3390/electronics12234871
- dc.identifier.doi http://dx.doi.org/10.3390/electronics12234871
- dc.identifier.issn 2079-9292
- dc.identifier.uri http://hdl.handle.net/10230/60564
- dc.language.iso eng
- dc.publisher MDPI
- dc.relation.ispartof Electronics. 2023 Dec 2;12(23):4871
- dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/952133
- dc.rights © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.rights.uri http://creativecommons.org/licenses/by/4.0/
- dc.subject.keyword Stress detection
- dc.subject.keyword Multimodal fusion
- dc.subject.keyword Physiological signals
- dc.subject.keyword Audio analysis
- dc.title A multimodal late fusion framework for physiological sensor and audio-signal-based stress detection: an experimental study and public dataset
- dc.type info:eu-repo/semantics/article
- dc.type.version info:eu-repo/semantics/publishedVersion