A team of UC Berkeley researchers recently released a study on nonverbal exclamations, identifying a rich and varied language of unspoken vocal expressions, including the sigh.
The study, which was conducted between 2016 and 2017, according to lead study author Alan Cowen, focused on vocal bursts — nonverbal vocalizations that convey meaning. Senior study author and campus psychology professor Dacher Keltner said vocal bursts predate language and can include the sounds people make when they cry or see something tasty.
“What we’re able to do here is come up with a dictionary of emotions we can express vocally so that researchers can use that dictionary to know what they need to study in order to understand, for example, deficiencies in emotion recognition,” Cowen said.
The researchers sorted the vocal bursts into 24 distinct categories, according to Keltner. Similar earlier studies had identified only 13 categories of nonverbal emotion.
According to Cowen, this study differs from previous similar studies because it uses dimensions — various criteria for evaluating vocal bursts — to separate the reliable aspects of participant judgment from the unreliable ones.
“Alan’s statistics were the most comprehensive and rigorous way to determine how many emotions we can convey with the human voice without words,” Keltner said. “Past studies had researchers simply assume on their own, without statistical analysis, the number of emotions rather than using data.”
The study was conducted in two stages. In the first stage, participants were asked to produce vocal bursts when given a particular situation, according to Keltner. Researchers then used statistical analysis to determine how many distinct emotions were communicated.
In the second part of the study, researchers used online videos to present realistic contexts for the vocal bursts, including babies falling or puppies being hugged. Participants then rated the vocal bursts, and researchers used statistical analysis to categorize the emotions into 24 distinct categories, Keltner said.
The results of this study have many implications, Keltner added. The study can be used to teach artificial intelligence technology how to identify emotions in human voices and draw connections between human and animal expressions. He added that Cowen has performed similar studies regarding facial expressions and tone of voice that are not yet published.
The study researchers are currently collaborating with the Minneapolis Institute of Art on a project that trains people to increase their emotional intelligence, or the ability to recognize what others are feeling from their behavior, according to Keltner.
“What I find really intriguing is that there is thinking that the human capacity to express emotion and music comes out of these vocal bursts,” Keltner said. “This study tells us how we might understand the richness of emotion in singing and other artistic forms.”