Vocal learning

Speech is a dynamic combination of initiating learned motor patterns and learning new motor patterns on the fly. Although speech involves a complex of orofacial and laryngeal muscles, the latter, despite being the primary sound source for speech, are much less studied. The ability to volitionally control these muscles is remarkably rare and may be one of the crucial skills that make humans the only speaking ape. We are using fMRI to study the brain systems that regulate the learning of new vocalizations, the retrieval of vocalizations that have already been learned, and the production of emotional vocalizations, which, being innate, never required learning.


Project I: Vocal response conflict

The anterior cingulate cortex contains a somatotopic map of the body's muscles and helps to initiate movements. The laryngeal part of this map has been studied most extensively in monkeys, where it is a crucial part of the pathway for producing emotional vocalizations. We are using a response-conflict paradigm with fMRI to study the involvement of the anterior cingulate cortex in initiating learned vocal patterns (i.e. speech) and expressions of emotion.
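
As a concrete sketch of what such a paradigm can look like, the toy script below builds a trial list in which a cue calls for either a learned vocalization or an innate emotional one, paired with a congruent or incongruent distractor; contrasting fMRI responses between the two conditions is what isolates conflict-related activity. The cue labels, trial counts, and the make_trials helper are illustrative assumptions, not the lab's actual stimuli or code.

```python
import random

# A minimal, hypothetical trial generator for a vocal response-conflict task.
# "say 'ah'" stands in for a learned vocal pattern and "laugh" for an innate
# emotional vocalization; on incongruent trials the distractor cues the
# competing response class, creating conflict at response initiation.
CUES = ["say 'ah'", "laugh"]

def make_trials(n_per_condition=20, seed=0):
    rng = random.Random(seed)
    trials = []
    for cue in CUES:
        for condition in ("congruent", "incongruent"):
            distractor = cue if condition == "congruent" else \
                next(c for c in CUES if c != cue)
            trials += [{"cue": cue, "distractor": distractor,
                        "condition": condition}] * n_per_condition
    rng.shuffle(trials)      # randomize presentation order
    return trials

print(make_trials()[:3])     # peek at the first few shuffled trials
```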


Project II: Audio-motor learning

We learn to make new sounds by hearing an auditory target, estimating the movements required to reproduce it (a process known as the inverse model), and readjusting our movement estimates based on the success, or lack thereof, of each attempt at making the new sound (a process known as the forward model). We are investigating whether the brain systems that compute the inverse and forward models are specific to certain groups of sound-making muscles or are common resources shared across muscle groups.
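
A toy sketch, assuming a one-dimensional "vocal tract" and simple error-driven updates, may make this loop concrete: the learner holds a forward-model prediction of what each motor command will sound like, compares it against what is actually heard, and nudges both the prediction and the command after every attempt. The functions and parameters below (produce, forward_model, the learning rate) are hypothetical, not the lab's model.

```python
# Toy sketch of audio-motor learning with an inverse and a forward model.
# Everything here is an illustrative assumption: a 1-D "vocal tract"
# (produce), a linear forward model, and simple error-driven updates.

def produce(m):
    """Assumed vocal tract: maps a motor command to an acoustic output.
    Its parameters are unknown to the learner."""
    return 2.0 * m + 0.3

def forward_model(m, w):
    """Learner's internal prediction of the sound that command m will make."""
    return w * m

target = 1.5   # auditory target the learner is trying to imitate
m = 0.1        # initial motor command (a crude inverse-model guess)
w = 1.0        # forward-model weight, refined with experience
lr = 0.1       # learning rate for both updates

for attempt in range(50):
    predicted = forward_model(m, w)    # expected acoustic outcome
    heard = produce(m)                 # actual acoustic outcome
    w += lr * (heard - predicted) * m  # refine forward model from prediction error
    m += lr * (target - heard)         # adjust command toward the target (inverse update)

print(f"final command {m:.3f} -> sound {produce(m):.3f} (target {target})")
```

In this sketch a single command scalar stands in for an entire muscle group; running separate learners per group versus sharing the forward-model weight across them mirrors the specificity question the project asks.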