Emotional virtual agents: How do young people decode synthetic facial expressions?

Esposito A.; Cordasco G.
2021

Abstract

Given the need for remote learning and the growing presence of virtual agents in online learning environments, the present research investigates young people's ability to decode emotional expressions conveyed by virtual agents. The study involved 50 healthy participants aged between 22 and 35 years (mean age = 27.86; SD = 2.75; 30 females), who were required to label pictures and video clips depicting female and male virtual agents of different ages (young, middle-aged, and old) displaying static and dynamic expressions of disgust, anger, sadness, fear, happiness, surprise, and neutrality. Depending on the emotional category, significant effects of the agents' age, gender, and stimulus type (static vs. dynamic) were observed on participants' decoding accuracy for the virtual agents' emotional faces. Anger was decoded significantly more accurately in male than in female faces, while the opposite was observed for happy, fearful, surprised, and disgusted faces. Middle-aged faces were generally decoded more accurately than young and old faces, except for sadness and disgust. Significantly greater accuracy was observed for dynamic than static expressions of disgust, sadness, and fear, and for static than dynamic neutral and surprised faces.
Files in this product:

File: teleXbe2021_paper_5_revised_G.pdf
Access: open access
Description: main
Type: Post-print document
License: Creative Commons
Size: 1.1 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11591/446663
Citations
  • Scopus: 0