Adapting to Dynamic Environments

We are interested in the ability to successfully navigate through, and interact with, an ever-changing environment. We use behavioural and neuroimaging techniques to investigate the mechanisms that allow adequate timing and adaptation to the rate and rhythm of environmental events. As inadequate timing factors into the neurofunctional profile of different patient populations, we use our findings to develop compensation strategies.


Emmanuel Biau, Katerina Kandylaki and Michael Schwartze

Project I: Spontaneous sensory and sensorimotor timing
This project investigates emerging patterns and neural correlates of self-paced timing performance during the production and perception of simple sequential behaviour. The immediate goal is to identify individual timing characteristics and to explore how they relate to one another as part of a basic temporal sequencing profile. The long-term goal is to investigate how this basic profile may factor into more complex forms of behaviour.

Project II: Low frequency oscillations in auditory temporal processing
This project investigates the role of low-frequency neural oscillatory activity in audition. The goal is to explore the relation between this specific type of neural activity and well-established event-related potentials of the electroencephalogram (EEG) in order to define their functional significance for the dynamic integration of acoustic information.

Project III: Cross-domain rhythm perception
For the past 30 years, the use of neuroimaging in cognitive neuroscience has challenged the traditional modular view of the human brain and has highlighted the necessity for cross-domain research. Seemingly unrelated cognitive functions such as speech and music perception are now viewed through a new lens: effects found in one domain can transfer to the other. Strong theoretical accounts currently assume common ground between the music and the speech and language faculties of the human brain. The gap, however, lies in defining which aspects of speech and music processing are shared and which are domain-specific. Rhythm perception has been highlighted as a mechanism shared by music and speech processing, as it relies on the same acoustic features (e.g. waveform periodicity, amplitude envelope) and takes place in anatomically overlapping brain structures.

The project NERHYMUS, “The NEurobiology of RHYthm: effects of MUSical expertise on natural speech comprehension”, funded by the European Commission under the Marie Skłodowska-Curie Actions scheme, investigates whether musical rhythm expertise affects the processing of rhythm during speech comprehension. We use a naturalistic experimental paradigm, in which participants listen to stories and poems, in combination with behavioural, EEG, fMRI and TMS methods to investigate the neural underpinnings of rhythm perception.
Establishing a close neurobiological connection of music and language processing through rhythm perception could revolutionise the design of rhythm-based therapies in speech and language rehabilitation as well as rhythmical training for children with developmental disorders.

Project IV: Battery for the Assessment of Auditory Sensori-motor and Timing abilities
This project aims to investigate the performance of patients with acquired (focal) and traumatic (diffuse) brain injuries on several perceptual and motor tasks of the BAASTA (Battery for the Assessment of Auditory Sensori-motor and Timing abilities), and its correlation with neuropsychological measures of attention and working memory. The project aims to fill a significant gap in the existing literature on timing abilities in patient populations: despite convincing reports that timing deficiencies should be considered a fundamental aspect of acquired brain injuries, these deficits have never been systematically investigated.

Project V: Predictions in language processing
In speech processing, predictions can be made about the temporal and formal structure of a speech signal. Generating predictions about the timing and content of upcoming events allows us to process incoming speech more efficiently, and may come into play particularly in noisy environments and during skill learning. In collaboration with the M-BIC Language Lab, this project aims to examine the mechanisms and networks underlying predictions and cross-modal transformations in reading and spoken language skills.