Let’s give robots a voice!
We might think that as humans we primarily communicate through speech, but we also frequently express our emotions without words. These nonverbal expressions (e.g. laughter) let us understand the emotions of others irrespective of cultural background and without any learning. Nor are they specific to humans: animals, too, express their emotional states with vocalizations. Researchers from the Department of Ethology, Eötvös Loránd University; the Department of Mechatronics, Optics and Mechanical Engineering Informatics, Budapest University of Technology and Economics; and the MTA-ELTE Comparative Ethology Research Group in Budapest, Hungary, investigated how to develop artificial sounds that would allow robots to express emotions nonverbally, much as animals do.
“We can already encounter robots that cooperate with humans, and their role is going to grow in the future. Based on some well-known robotics research we might assume that these robots will look, move and talk like us, but this is unlikely. On the one hand, huge technological advances would be needed to achieve it (the marvellous robots seen in movies are still fiction); on the other hand, overly human-like robots could induce fear or aversion in the very people they are supposed to help, much as a human-looking zombie would. In our opinion, robots should be regarded as a distinct species, and their communication should be developed in accordance with their functions and capabilities,” says Márta Gácsi, lead researcher.
Ethorobotics (a blend of ‘ethology’, the field of research studying animal behaviour, and ‘robotics’) investigates these prospects, e.g. by creating communication signals based on biological premises. “Just as animal species have their own species-specific vocalizations, it is important to develop distinct sounds that belong to robots, with which they can express emotions without needing to imitate humans,” adds Ádám Miklósi, head of the Department of Ethology and leader of the MTA research group.
In the study published in Scientific Reports, the researchers created artificial sounds based on biological rules found in the vocalizations of both animals and humans. “We can not only easily understand human nonverbal vocalizations (e.g. crying, screaming, laughter or sighing) and deduce what the emitter of the sound feels at the moment, but we can also infer this from the vocalizations of many terrestrial mammals, such as dogs and pigs. This is made possible by the fundamental similarities of the vocal tracts of mammalian species, which result in sounds with similar acoustic parameters expressing similar emotional states. For example, we perceive sounds with a higher fundamental frequency as more intense, and sounds consisting of shorter calls as more positive,” explains Beáta Korcsok, first author of the article.
The researchers generated nearly 600 artificial sounds, ranging from simple machine-like beeps to complex sounds modelling phonation in mammals, by changing the pitch and the length of the sounds and by adding acoustic parameters characteristic of animal vocalizations. More than 230 volunteers rated the sounds in an online questionnaire, answering two simple questions: how positive or negative is the valence of the emitter’s emotional state, and how intense is it?
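The paper does not reproduce its synthesis procedure here, but the two simplest manipulated dimensions (fundamental frequency and call length) are easy to illustrate. The following standard-library Python sketch generates beep-like call series in that spirit; the function names, envelope, and file names are invented for this example, not taken from the study.

```python
import wave
import struct
import math

SAMPLE_RATE = 44100  # samples per second

def make_call_series(f0, call_dur, n_calls, pause_dur=0.1):
    """Generate a series of sine-wave 'calls' at fundamental frequency
    f0 (Hz), each call_dur seconds long, separated by silent pauses.
    Returns a list of float samples in [-1, 1]."""
    samples = []
    call_len = int(SAMPLE_RATE * call_dur)
    pause_len = int(SAMPLE_RATE * pause_dur)
    for i in range(n_calls):
        for n in range(call_len):
            t = n / SAMPLE_RATE
            # Short linear fade in/out to avoid clicks at call edges
            env = min(1.0, n / 500, (call_len - n) / 500)
            samples.append(env * math.sin(2 * math.pi * f0 * t))
        if i < n_calls - 1:
            samples.extend([0.0] * pause_len)  # silent gap between calls
    return samples

def write_wav(path, samples):
    """Write mono 16-bit PCM WAV using only the standard library."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))

# Per the reported rating pattern: higher pitch tended to be rated as
# more intense, and series of shorter calls as more positive.
write_wav("higher_pitch.wav", make_call_series(f0=880, call_dur=0.6, n_calls=1))
write_wav("short_calls.wav", make_call_series(f0=440, call_dur=0.15, n_calls=4))
```

A real stimulus set would of course vary many more parameters (e.g. features modelling mammalian phonation), but even this toy version shows how a large grid of sounds can be produced by sweeping a few acoustic dimensions.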
“The participants rated the artificial sounds according to the simple biological rules observed in both human and animal vocalizations. They judged sounds consisting of a series of short calls as more positive, and high-frequency sounds as more intense, irrespective of their modelled biological complexity. This is further evidence that the emotion-coding rules investigated in the study may be remarkably old. The fact that we can recognise emotions even in the simplest, beep-like sounds indicates that these rules function independently of species, and that not only the production of emotionally expressive sounds but also the neural processes decoding them could be similar across terrestrial mammals,” says Tamás Faragó, researcher at Eötvös Loránd University.
The researchers plan to test the artificial sounds in real-life scenarios with the help of their social robot, Biscee.
Scientific Reports: https://www.nature.com/articles/s41598-020-63504-8
Corresponding author: Beáta Korcsok, email@example.com