Lecturer: Julia Berezutskaya
Fields: Neurotechnology, Artificial Intelligence, Speech, Invasive Brain Recordings
Content
Brain-computer interfaces (BCIs) constitute a field of assistive technology that can provide severely paralyzed patients with a means of communication. In this talk I will focus on invasive BCIs, with the long-term goal of bringing the developed technology to clinical application in severely paralyzed patients (i.e. patients with “locked-in” syndrome due to motor neuron disease or brainstem stroke). I will give an overview of the state of the art in the BCI field and discuss the most recent developments in decoding and reconstruction of speech signals directly from brain activity. Specifically, I will talk about two case studies, by Vansteensel et al. (2016) and Moses et al. (2021), both of which demonstrated proof of concept for invasive communication BCIs: successful implantation of a device in a paralyzed individual and validation of the assistive technology for communication over an extended period of time. Then, I will discuss three main approaches to decoding speech signals from intracranial data, based on 1) the language perspective (decoding language elements of speech: words, phonemes, sentences), 2) the acoustic perspective (decoding acoustic properties of the speech signal and direct speech synthesis from brain data), and 3) the motor perspective (decoding articulatory motor programs from brain activity). Relevant work from labs worldwide will be discussed (groups led by E. Chang, P. Kennedy, J. Brumberg, N. Mesgarani in the USA; N. Ramsey, C. Herff, B. Yvert in Europe).
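For a concrete picture of what these decoding perspectives amount to in practice, here is a minimal sketch in Python using synthetic data and scikit-learn. It is an illustration under assumed settings (64 electrodes, high-gamma band power features, 10 speech classes, 32 mel bins), not an implementation from any of the cited studies: the language perspective maps onto a classification problem, while the acoustic and motor perspectives map onto regression of continuous targets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for intracranial features:
# 1000 time windows x 64 electrodes of high-gamma band power.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))

# 1) Language perspective: classify which of 10 phoneme/word classes
#    was produced in each time window.
y_labels = rng.integers(0, 10, size=1000)
clf = LogisticRegression(max_iter=1000)
print("classification accuracy:",
      cross_val_score(clf, X, y_labels, cv=5).mean())

# 2) Acoustic perspective: regress the speech spectrogram
#    (here 32 mel bins per window) directly from the brain features;
#    the predicted spectrogram could then be passed to a vocoder.
Y_spec = rng.standard_normal((1000, 32))
reg = Ridge(alpha=1.0).fit(X[:800], Y_spec[:800])
print("predicted spectrogram shape:", reg.predict(X[800:]).shape)

# 3) Motor perspective: the same regression setup, but with articulator
#    kinematics (lip, jaw, tongue trajectories) as targets instead of
#    acoustics.
```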
In addition to reviewing advances in speech decoding from brain data, I will cover the specifics of intracranial recordings, discuss the difference between overt (spoken aloud) and covert (imagined) speech (relevant for decoding in paralyzed patients as opposed to able-bodied individuals), and describe the methodology of the decoding approaches (machine learning and deep learning details). At the end of the talk, I will initiate a discussion with the attendees about the future of BCI technology, the next steps needed to bring BCI research to real-world applications, and the controversies and challenges that remain in the field.
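To make the deep learning details more tangible, the sketch below shows one common pattern: a small convolutional network (in PyTorch) that maps a window of multichannel intracranial features to a single spectrogram frame, trained with a mean-squared-error loss. The architecture, layer sizes, and data shapes are illustrative assumptions and do not reproduce any model from the cited work.

```python
import torch
import torch.nn as nn

# Minimal sketch of a deep-learning decoder (assumed architecture):
# a 1-D convolutional network mapping a window of multichannel
# intracranial features to one mel-spectrogram frame.
class SpeechDecoder(nn.Module):
    def __init__(self, n_channels=64, n_mels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time window
            nn.Flatten(),
            nn.Linear(128, n_mels),    # predict one spectrogram frame
        )

    def forward(self, x):              # x: (batch, channels, time)
        return self.net(x)

# Synthetic batch: 8 trials, 64 electrodes, 50 time samples per window.
x = torch.randn(8, 64, 50)
y = torch.randn(8, 32)                 # target mel-spectrogram frames

model = SpeechDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
print("decoded frame shape:", model(x).shape)   # (8, 32)
```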
Literature
- Vansteensel, M. J., Pels, E. G., Bleichner, M. G., Branco, M. P., Denison, T., Freudenburg, Z. V., … & Ramsey, N. F. (2016). Fully implanted brain–computer interface in a locked-in patient with ALS. New England Journal of Medicine, 375(21), 2060-2066.
- Moses, D. A., Metzger, S. L., Liu, J. R., Anumanchipalli, G. K., Makin, J. G., Sun, P. F., … & Chang, E. F. (2021). Neuroprosthesis for decoding speech in a paralyzed person with anarthria. New England Journal of Medicine, 385(3), 217-227.
- Herff, C., Heger, D., De Pesters, A., Telaar, D., Brunner, P., Schalk, G., & Schultz, T. (2015). Brain-to-text: decoding spoken phrases from phone representations in the brain. Frontiers in Neuroscience, 9, 217.
- Berezutskaya, J. (2020). Data-driven modeling of the neural dynamics underlying language processing (Doctoral dissertation, Utrecht University).
- Berezutskaya, J., Ramsey, N. F., & van Gerven, M. A. J. (in prep.). Best practices in speech reconstruction from intracranial brain data.
Lecturer
Julia Berezutskaya is a postdoctoral researcher at the Artificial Intelligence department of Radboud University (affiliated with the Donders Institute for Brain, Cognition and Behaviour). In collaboration with the University Medical Center Utrecht (UMCU), she works on computational modeling of speech processes in the human brain. In 2020 she completed her PhD on “Data-driven modelling of speech processes in intracranial data” at UMCU. After her PhD, she became a coordinating postdoc within the Language in Interaction consortium (https://www.languageininteraction.nl/), focusing on the application of artificial intelligence methods to brain data underlying speech. In 2021 she joined INTENSE, a European consortium on neurotechnology, where she develops models for speech decoding and reconstruction from brain activity. She is a postdoc representative of the BCI Society and a member of the UMCU Young Academy. In 2021 she received the prestigious Trainee Professional Development Award from the Society for Neuroscience. Julia is focused on bringing together people who work on natural language processing, machine learning, computational neuroscience and clinical neuroscience so that together they can build powerful models of speech production and perception in the brain. Not only are such models important for our fundamental understanding of how the brain works, they are also essential for the development of assistive neurotechnology and brain-computer interfaces that can restore cognitive function in patients, such as communication via decoding of attempted speech in paralyzed individuals.
Affiliation: Donders Institute for Brain, Cognition and Behaviour
Homepage: https://www.juliaberezutskaya.com/