DC Field | Value | Language
dc.contributor.author | Anagnostopoulos, Theodoros | -
dc.contributor.author | Skourlas, Christos | -
dc.contributor.author | Grudinin, Vladimir | -
dc.contributor.author | Khoruzhnikov, S. E. | -
dc.date.accessioned | 2024-07-11T12:04:35Z | -
dc.date.available | 2024-07-11T12:04:35Z | -
dc.date.issued | 2014 | -
dc.identifier | google_scholar-TOTF0PsAAAAJ:4TOpqqG69KYC | -
dc.identifier.issn | 2500-0373 | -
dc.identifier.other | TOTF0PsAAAAJ:4TOpqqG69KYC | -
dc.identifier.uri | https://uniwacris.uniwa.gr/handle/3000/2725 | -
dc.description.abstract | Humans are considered to reason and act rationally, and that is believed to be their fundamental difference from other living entities. Furthermore, modern approaches in the science of psychology underline that humans, as thinking creatures, are also sentimental and emotional organisms. There are fifteen universal extended emotions plus a neutral one: hot anger, cold anger, panic, fear, anxiety, despair, sadness, elation, happiness, interest, boredom, shame, pride, disgust, contempt and the neutral position. The scope of the current research is to understand the emotional state of a human being by capturing the speech utterances used during a common conversation. It is shown that, given enough acoustic evidence, the emotional state of a person can be classified by a set of majority voting classifiers. The proposed set of classifiers is based on three main classifiers: kNN, C4.5 and SVM with an RBF kernel, and achieves better performance than each basic classifier taken separately. It is compared with two other sets of classifiers: a one-against-all (OAA) multiclass SVM with hybrid kernels, and a set consisting of two basic classifiers, C5.0 and a neural network. The proposed variant achieves better performance than both. The paper deals with emotion classification by a set of majority voting classifiers that combines three specific types of basic classifiers with low computational complexity. The basic classifiers stem from different theoretical backgrounds in order to avoid bias and redundancy, which gives the proposed set of classifiers the ability to generalize in the emotion domain space. | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Scientific and Technical Journal of Information Technologies, Mechanics and Optics | en_US
dc.source | Научно-технический вестник информационных технологий, механики и оптики, 137-145, 2014 | -
dc.subject | Speech emotion recognition | en_US
dc.subject | Affective computing | en_US
dc.subject | Machine learning | en_US
dc.title | Extended speech emotion recognition and prediction | en_US
dc.type | Article | en_US
dc.relation.dept | Department of Business Administration | en_US
dc.relation.faculty | School of Administrative, Economics and Social Sciences | en_US
dc.relation.volume | 14 | en_US
dc.relation.issue | 6 | en_US
dc.identifier.spage | 137 | en_US
dc.identifier.epage | 145 | en_US
dc.link | https://ntv.ifmo.ru/en/article/11200/EXTENDED_SPEECH_EMOTION_RECOGNITION_AND_PREDICTION.htm | en_US
dc.collaboration | University of West Attica (UNIWA) | en_US
dc.subject.field | Engineering and Technology | en_US
dc.journals | Open Access | en_US
dc.publication | Peer Reviewed | en_US
dc.country | Greece | en_US
local.metadatastatus | verified | en_US
item.openairetype | Article | -
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
item.cerifentitytype | Publications | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.languageiso639-1 | en | -
crisitem.author.dept | Department of Business Administration | -
crisitem.author.faculty | School of Administrative, Economics and Social Sciences | -
crisitem.author.orcid | 0000-0002-5587-2848 | -
crisitem.author.parentorg | School of Administrative, Economics and Social Sciences | -
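
The abstract above describes a majority-voting ensemble of three basic classifiers from different theoretical backgrounds (kNN, C4.5 and an RBF-kernel SVM). The sketch below is only an illustration of that general idea, not the paper's actual pipeline: it assumes scikit-learn is available, uses DecisionTreeClassifier as a rough stand-in for C4.5, and replaces the paper's acoustic features and emotion labels with synthetic placeholder data.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Placeholder data: 16 classes standing in for the 15 extended emotions
    # plus neutral; real inputs would be acoustic features of speech utterances.
    X, y = make_classification(n_samples=600, n_features=20, n_informative=12,
                               n_classes=16, n_clusters_per_class=1,
                               random_state=0)

    # Hard (majority) voting over three basic classifiers: instance-based kNN,
    # an entropy-based decision tree standing in for C4.5, and an RBF-kernel SVM.
    ensemble = VotingClassifier(
        estimators=[
            ("knn", KNeighborsClassifier(n_neighbors=5)),
            ("tree", DecisionTreeClassifier(criterion="entropy", random_state=0)),
            ("svm_rbf", SVC(kernel="rbf", gamma="scale", random_state=0)),
        ],
        voting="hard",
    )
    ensemble.fit(X, y)
    print(ensemble.predict(X[:5]))
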
Appears in Collections: Articles / Άρθρα