Scientists develop tech that can recognise human emotion in real time
South Korean scientists have developed a groundbreaking technology that can recognise human emotions in real time, an advance poised to revolutionise various industries, including next-generation wearable systems.
The multimodal human emotion recognition system, developed by a team from the Ulsan National Institute of Science and Technology (UNIST), combines verbal and non-verbal expression data to efficiently utilise comprehensive emotional information.
At the core of this system is the personalised skin-integrated facial interface (PSiFI) system, which is self-powered, facile, stretchable, and transparent.
It features a first-of-its-kind bidirectional triboelectric strain and vibration sensor that enables the simultaneous sensing and integration of verbal and non-verbal expression data.
The system is fully integrated with a data processing circuit for wireless data transfer, enabling real-time emotion recognition.
Utilising machine learning algorithms, the technology performs accurate, real-time emotion recognition, even when individuals are wearing masks.
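For readers curious how such multimodal recognition might look in code, the sketch below shows one common approach: concatenating features from the two sensing channels and feeding them to a single classifier. The feature names, sizes, and model choice are illustrative assumptions; the article does not describe the UNIST team's actual pipeline at this level.

```python
# Minimal sketch of multimodal "early fusion" for emotion classification.
# All names and shapes are hypothetical stand-ins, not the study's code.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Toy stand-ins for sensor features: strain features from facial muscle
# deformation and vibration features from vocal cord activity.
n_samples, n_strain, n_vib = 200, 8, 8
strain_feats = rng.normal(size=(n_samples, n_strain))
vib_feats = rng.normal(size=(n_samples, n_vib))
labels = rng.integers(0, 6, size=n_samples)   # e.g. six basic emotions

# Early fusion: concatenate the two modalities into one feature vector,
# then train a single classifier on the fused representation.
X = np.hstack([strain_feats, vib_feats])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)

# At inference, each new fused sample is classified in one call,
# which is what makes real-time prediction feasible.
print(clf.predict(X[:1]))
```

Because the voice channel carries information that a mask does not block, a fused representation of this kind plausibly explains how recognition can continue to work when the face is partly covered.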
The system has also been successfully applied in a digital concierge application within virtual reality (VR) environments such as smart homes, private movie theatres, and smart offices, enabling personalised recommendations for music, movies, and books.
The technology is based on “friction charging,” or the triboelectric effect, in which two surfaces become oppositely charged when rubbed together. Notably, the system is self-powered, requiring no external power source or complex measuring devices for data recognition.
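A rough intuition for the sensing principle can be sketched in code: contact and separation produce opposite-polarity voltage swings, and the peak amplitude of that self-generated signal can stand in for how strongly the sensor was deformed. The waveform shape and thresholds below are invented for illustration, not values from the study.

```python
# Illustrative sketch only: a synthetic triboelectric-style output pulse.
# Real device amplitudes and timings are assumptions, not measured values.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 1.0, 1000)            # 1 s window, arbitrary units
# Contact then separation produce opposite-polarity voltage swings,
# which is the signature of friction charging.
pulse = np.exp(-((t - 0.30) / 0.02) ** 2) - np.exp(-((t - 0.35) / 0.02) ** 2)
signal = 0.5 * pulse + 0.01 * np.random.default_rng(1).normal(size=t.size)

# The peak amplitude serves as a simple proxy for deformation strength;
# no external power source is needed to produce the signal itself.
peaks, _ = find_peaks(np.abs(signal), height=0.2)
print(f"detected {peaks.size} charge-separation events")
```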
“Based on these technologies, we have developed a skin-integrated face interface (PSiFI) system that can be customised for individuals,” said Prof. Jiyun Kim, from the Department of Material Science and Engineering at UNIST.
In the study, published online in the journal Nature Communications, the team successfully integrated the detection of facial muscle deformation and vocal cord vibrations, enabling real-time emotion recognition.
The system’s capabilities were demonstrated in the virtual reality “digital concierge” application, where customised services were provided based on users’ emotions.
“With this developed system, it is possible to implement real-time emotion recognition with just a few learning steps and without complex measurement equipment. This opens up possibilities for portable emotion recognition devices and next-generation emotion-based digital platform services in the future,” said Jin Pyo Lee, the study’s first author, also from UNIST.
The research team conducted real-time emotion recognition experiments, collecting multimodal data such as facial muscle deformation and voice.
The system exhibited high emotion recognition accuracy with minimal training, and its wireless, customisable design ensures wearability and convenience.
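The “few learning steps” the researchers describe suggest a short per-user calibration stage. Below is a hedged sketch of what that might look like, assuming a handful of labelled samples per emotion is collected from the wearer; the sample counts, feature size, and nearest-neighbour model are all assumptions, not details from the study.

```python
# Hedged sketch of per-user calibration with minimal training data.
# Counts, feature sizes, and the model are hypothetical choices.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n_classes, shots, n_feats = 6, 5, 16     # e.g. 5 samples per emotion

# A few labelled calibration samples recorded from the wearer.
X_calib = rng.normal(size=(n_classes * shots, n_feats))
y_calib = np.repeat(np.arange(n_classes), shots)

# A simple nearest-neighbour model needs no lengthy optimisation,
# in the spirit of recognition after "just a few learning steps".
model = KNeighborsClassifier(n_neighbors=3).fit(X_calib, y_calib)

# Streaming inference: each new fused sensor sample is classified as
# it arrives, keeping recognition real time.
new_sample = rng.normal(size=(1, n_feats))
print(model.predict(new_sample))
```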