New paper alert! The authors found that a mismatch between sublexical and lexical affective values of words elicited an increased N400 response. Curious? Read about this new paper, with contributions from Sonja Kotz, here.
Find this new and interesting paper by our PhD student Hanna Honcamp here. She proposes that (semi-)hidden Markov models can be used to investigate the temporal brain state dynamics underlying auditory verbal hallucinations.
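For readers unfamiliar with the method, here is a minimal, illustrative sketch of the core idea behind a hidden Markov model of brain state dynamics. Everything below (three hypothetical "brain states", the transition matrix, the Gaussian signal feature) is an assumption for illustration only, not the model from the paper; a semi-Markov variant would additionally model how long each state lasts.

```python
import numpy as np

# Illustrative sketch (not the paper's model): 3 hidden "brain states",
# a row-stochastic transition matrix A, and a state-dependent Gaussian
# emission standing in for some neural signal feature.
rng = np.random.default_rng(0)

n_states = 3
A = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
means = np.array([-1.0, 0.0, 1.0])  # hypothetical emission mean per state
sd = 0.5                            # shared emission standard deviation

def sample_hmm(n_steps, start_state=0):
    """Sample a hidden state path and observations from the sketch HMM."""
    states = np.empty(n_steps, dtype=int)
    obs = np.empty(n_steps)
    s = start_state
    for t in range(n_steps):
        states[t] = s
        obs[t] = rng.normal(means[s], sd)
        # Next state is drawn from the current state's transition row.
        s = rng.choice(n_states, p=A[s])
    return states, obs

states, obs = sample_hmm(200)
# High self-transition probabilities (the diagonal of A) produce long
# uninterrupted runs of the same state, i.e., temporally stable states.
print(states[:20])
```

The key property exploited in such analyses is that the diagonal of the transition matrix controls state dwell times, which is why these models are a natural fit for studying how long the brain stays in a given state.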
The Pint of Science festival takes place from 9 to 11 May, with interesting talks at different locations. If you find it hard to decide which event to attend, we recommend the first session on Monday 9 May at The Student Hotel Maastricht. During the session ‘Do you hear what I hear?’, Xan Duggirala, Pia Brinkmann and Jana Devos will present interesting facts about hearing, auditory hallucinations and tinnitus.
The link for the event is here.
The link to the session ‘Do you hear what I hear?’ is here.
A new paper written by former BAND lab member Rachel Brown and lab director Sonja Kotz investigates predictive mechanisms in the aging brain. It is hypothesized that, as we age, subcortical-cortical communication decreases while default-executive coupling increases. Curious to read more? Go here.
On March 7, Xan and Pia pitched their PhD projects at the Women Researchers’ Festival, which was held online and organized by the Female Empowerment group at Maastricht University.
Curious to read more about musical rhythm and how it engages a bilateral cortico-subcortical network that involves auditory and motor regions? Check out this new paper here.
Curious to learn how illness duration of dissociative seizures (DS), which are paroxysmal episodes of altered awareness and motor control that can resemble epilepsy, correlates with cortical thickness in hubs of the default mode network (DMN)? Check out the new paper here.
Do you want to find out how patients with Parkinson’s Disease adapt their blinking behavior to tone sequences with different target probabilities? Read the new paper by Alessandro Tavano and Sonja Kotz!
Do we associate anger more with a male face and happiness more with a female face? Does this association between emotion and gender also extend to the auditory domain (e.g., voices)?
Faces and voices are more likely to be judged as male when they are angry, and as female when they are happy, new research has revealed. The study found that how we understand the emotional expression of a face or voice is heavily influenced by perceived sex, and vice versa. As one of the researchers put it: “This study shows how important it is not to rely too much on your first impressions, as they can easily be wrong. Next time you find yourself attributing happiness or sadness to a woman, be aware of your bias and possible misinterpretation.”
Read how Biau and colleagues manipulated audio-visual asynchrony detection; their results read as follows:
‘Results confirm (i) that participants accurately detected audio-visual asynchrony, and (ii) increased delta power in the left motor cortex in response to audio-visual asynchrony. The difference of delta power between asynchronous and synchronous conditions predicted behavioural performance, and (iii) decreased delta-beta coupling in the left motor cortex when listeners could not accurately map visual and auditory prosodies. Finally, both behavioural and neurophysiological evidence was altered when a speaker’s face was degraded by a visual mask.’
These results suggest that audio-visual asynchrony detection in speech is supported by left motor delta oscillations!
Read more here: https://doi.org/10.1523/JNEUROSCI.2965-20.2022