Publication:
A multi-modal visual emotion recognition method to instantiate an ontology

dc.contributor.author Heredia J.P.A. es_PE
dc.contributor.author Cardinale Y. es_PE
dc.contributor.author Dongo I. es_PE
dc.contributor.author Díaz-Amado J. es_PE
dc.date.accessioned 2024-05-30T23:13:38Z
dc.date.available 2024-05-30T23:13:38Z
dc.date.issued 2021
dc.description This research was supported by the FONDO NACIONAL DE DESARROLLO CIENTÍFICO, TECNOLÓGICO Y DE INNOVACIÓN TECNOLÓGICA - FONDECYT as executing entity of CONCYTEC under grant agreement no. 01-2019-FONDECYT-BM-INC.INV in the project RUTAS: Robots for Urban Tourism, Autonomous and Semantic web based.
dc.description.abstract Human emotion recognition from visual expressions is an important research area in computer vision and machine learning owing to its significant scientific and commercial potential. Since visual expressions can be captured from different modalities (e.g., facial expressions, body posture, hand pose), multi-modal methods are becoming popular for analyzing human reactions. In contexts in which human emotion detection is performed to associate emotions with certain events or objects, to support decision making or further analysis, it is useful to keep this information in semantic repositories, which offer a wide range of possibilities for implementing smart applications. We propose a multi-modal method for human emotion recognition and an ontology-based approach to store the classification results in EMONTO, an extensible ontology to model emotions. The multi-modal method analyzes facial expressions, body gestures, and features from the body and the environment to determine an emotional state; it processes each modality with a specialized deep learning model and applies a fusion method. Our fusion method, called EmbraceNet+, consists of a branched architecture that integrates the EmbraceNet fusion method with other fusion methods. We experimentally evaluate our multi-modal method on an adaptation of the EMOTIC dataset. Results show that our method outperforms the single-modal methods. Copyright © 2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.
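dc.description.note The abstract describes a branched multi-modal architecture in which each modality is encoded by its own deep model and then fused EmbraceNet-style. The sketch below is not the authors' EmbraceNet+ implementation; it is a minimal illustration of the underlying EmbraceNet fusion idea, assuming PyTorch and hypothetical feature dimensions for three visual modalities (face, body, scene context).

import torch
import torch.nn as nn


class EmbraceFusion(nn.Module):
    """Minimal EmbraceNet-style fusion: project each modality to a shared
    size, then sample, per output dimension, which modality contributes it."""

    def __init__(self, in_dims, embed_dim=256, num_classes=26):
        super().__init__()
        # One "docking" layer per modality projects features to a common size.
        self.docking = nn.ModuleList(nn.Linear(d, embed_dim) for d in in_dims)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, features):
        # features: list of tensors, one per modality, each (batch, in_dim_i)
        docked = torch.stack(
            [torch.relu(dock(f)) for dock, f in zip(self.docking, features)],
            dim=1)                                               # (batch, n_mod, dim)
        batch, n_mod, dim = docked.shape
        # For every embedding dimension, sample the modality it is taken from.
        probs = torch.full((batch, n_mod), 1.0 / n_mod, device=docked.device)
        choice = torch.multinomial(probs, dim, replacement=True)       # (batch, dim)
        mask = nn.functional.one_hot(choice, n_mod).permute(0, 2, 1)   # (batch, n_mod, dim)
        fused = (docked * mask).sum(dim=1)                             # (batch, dim)
        return self.classifier(fused)


# Usage with hypothetical per-modality feature sizes (face, body, context):
model = EmbraceFusion(in_dims=[512, 512, 1024])
logits = model([torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 1024)])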
dc.description.sponsorship Consejo Nacional de Ciencia, Tecnología e Innovación Tecnológica - Concytec
dc.identifier.doi https://doi.org/10.5220/0010516104530464
dc.identifier.scopus 2-s2.0-85111776744
dc.identifier.uri https://hdl.handle.net/20.500.12390/2959
dc.language.iso eng
dc.publisher SciTePress
dc.relation.ispartof Proceedings of the 16th International Conference on Software Technologies, ICSOFT 2021
dc.rights info:eu-repo/semantics/openAccess
dc.rights.uri https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject Visual Expressions
dc.subject Emotion Ontology es_PE
dc.subject Emotion Recognition es_PE
dc.subject Multi-modal Method es_PE
dc.subject.ocde https://purl.org/pe-repo/ocde/ford#1.05.01
dc.title A multi-modal visual emotion recognition method to instantiate an ontology
dc.type info:eu-repo/semantics/conferenceObject
dspace.entity.type Publication
Files