New paper alert! ‘How does your brain process vocal emotions: in categories or dimensions?’

How does our brain process vocal emotions: in categories or dimensions?
When we hear an emotionally charged voice, does our brain analyse the emotion as a separate
category or as continuous dimensions? An international team of researchers has come up with
a surprising answer to this question using a combination of neuroimaging techniques and
computational models of the voice: it’s not one or the other, but both. Voice emotions are
initially processed by the brain as separate categories, before being refined into dimensions.


We all know how effective the voice is at conveying emotions. What we know less about is
whether our brain treats vocal emotions as separate categories (such as pleasure or fear) or along
a continuum of emotional dimensions (such as negative to positive).
A new study puts an end to this long-standing debate by showing that it is neither categories nor
dimensions, but both, with quite different brain dynamics.


An international consortium of researchers from France, Scotland, Germany, the Netherlands, Canada,
and the United States used brain imaging to measure the brain activity of healthy adult volunteers
as they listened to brief affective vocalizations representing different emotions. The affective
properties of the vocalizations were controlled using a computational model to produce a wide
range of more or less intense and identifiable emotions. The participants each underwent several
sessions of functional magnetic resonance imaging and magnetoencephalography to maximize
the spatial and temporal resolutions of the brain activity measurement as well as statistical
power. Sophisticated analyses indicate that both categories and dimensions explain much of the
brain’s activity, but at different times: early brain activity, within two-tenths of a second, reflects
distinct emotional categories, whereas later neural responses (after half a second) are increasingly
consistent with a finer, graded representation of emotional dimensions.


These new data reconcile two long-opposing views by describing, with a high degree of
spatio-temporal detail, the representational dynamics of the cerebral processing of emotions
in the voice.


This study was supported by funding from the British BBSRC and the French ANR; the manuscript
and data are freely available.


Analysis of cerebral representations of vocal emotion. Top left: brief affective vocal expressions
are generated by morphing between recordings corresponding to 4 emotions (anger, fear, disgust,
pleasure) and a neutral expression. Top right: matrices representing the differences in brain
activity between each pair of stimuli, computed for each brain region and each millisecond, are
compared to matrices based on emotional judgments reflecting either categories or emotional dimensions.
Bottom: representation of brain regions showing an association with emotional categories at an
early stage.


Original publication:
Giordano, B.L., Whiting, C., Kriegeskorte, N., Kotz, S.A., Gross, J., & Belin, P.
The Representational Dynamics of Perceived Voice Emotions Evolve from Categories to
Dimensions. Accepted for publication in Nature Human Behaviour.

PhD position – UHasselt (UGhent & UM)

Interested in Rehabilitation Sciences? There is a PhD vacancy in the REVAL Rehabilitation Research group, on auditory-motor coupling in MS, in the context of an FWO-funded project. This PhD is situated within an interdisciplinary project combining the disciplines of rehabilitation sciences (UHasselt), musicology (UGent), and neurosciences (UMaastricht).

Interested candidates should have a background in rehabilitation sciences, movement sciences, cognitive sciences, or neurosciences, with a keen interest in clinical testing and in learning neurophysiological EEG methods.

For more information in ENG:  https://www.uhasselt.be/vacancies_detail?taal=04&vacid=2140&ref=1

For more information in Dutch: https://www.uhasselt.be/vacatures_detail?taal=01&vacid=2140  

Talk by Sonja Kotz – Women in Neuroscience

Our lab director Sonja Kotz will give a keynote talk titled “Cortico-subcortico-cortical circuitry and the timing of sensorimotor behaviour” at the Women in Neuroscience Symposium on Feb 11, 2021. This great initiative has been organized by PhD students of the Brainlab, and registration is free. Here you can find more information about the symposium and the speakers. Here is the program and here is the link to register.

Growing Up in Science – talk by lab director Sonja Kotz

Our lab director Sonja Kotz gave an interview about Growing Up in Science during a student session. The interview was led by Esti Blanco-Elorrieta, SNL Student/Postdoc Representative, and was recorded. The recording is available through January 2021 on the SNL webpage, after logging in.

The next SNL annual meeting will take place in Brisbane, Australia.

Poster presentation at SNL by Alex Emmendorfer

Alex presented her poster on ‘Atypical processing of phonotactic probability and syllable stress in dyslexic adults: an MMN study’ at the Society for the Neurobiology of Language. Here you can see her poster:

Or download it in higher quality here:

As the conference was virtual, we are happy to share the recorded poster presentation, which can be downloaded using the link below.

“Understanding Language”

Exciting news: BANDlab’s Katerina Kandylaki is giving a lecture in the online lecture series “Understanding Language”, organised by Studium Generale.

Her talk will be titled ‘Rhythm in Speech and Language Processing’

More information about the online lecture series in November, and registration, can be found here. The lecture will also be recorded and available online after the event.

New paper: Cerebellar circuitry and auditory verbal hallucinations: An integrative synthesis and perspective

New paper from our lab, in collaboration with Ana P. Pinheiro, by Michael Schwartze and Sonja A. Kotz!

Why do some people hear voices? In the review, the authors propose that auditory verbal hallucinations (AVHs) are associated with changes in the cerebellar circuitry underlying the forward model. In short, the reviewed evidence suggests that erratic predictions of sound and voice production are linked to impaired cerebellar function.

Curious? Read the full paper here: https://www.sciencedirect.com/science/article/pii/S0149763420305224

We are open!

We are still happily zooming and enjoying the virtual lab meetings!

Since the beginning of the pandemic we have been working from home. After the initial lockdown, testing sites are slowly starting to reopen while following strict safety protocols.

Now, meetings are held online and going to the office is only possible when necessary. Most likely, a lot of labs around the world are experiencing something similar right now. What are your approaches to staying focused and keeping in contact with your team? Some of us have started doing virtual working sessions (coffee breaks included).

Stay safe and healthy everyone!