Q&A with Dean Krusienski, Ph.D.


Dean Krusienski, Ph.D., a professor in the Department of Biomedical Engineering, focuses on neural signal processing and analysis for the development of brain-computer interfaces and neuroprosthetic devices.

1. What are you working on right now?

My lab is working on a variety of projects related to understanding the electrical activity of the brain and translating it into a functional output (e.g., commands for an assistive device). These systems are known as brain-computer interfaces or neuroprosthetics. We are particularly excited about our National Science Foundation-funded project, in collaboration with the University of Bremen in Germany, where we are attempting to synthesize acoustic speech directly from brain signals as the user imagines speaking.

2. What attracted you to this research?

I have always been fascinated by the overwhelming complexity of the human brain. As a postdoctoral researcher at the New York State Department of Health, I visited amyotrophic lateral sclerosis (ALS) patients in various stages of disease progression. It was heart-rending to see these once-vibrant individuals slowly becoming prisoners of their own bodies — able to think and feel but unable to move or communicate even their most basic needs. As an engineer trained in signal processing and analysis, I was compelled to search for ways to harness their thoughts and intentions via measurements of brain activity and use them to restore the ability to communicate.

3. What do you hope to achieve by synthesizing speech from brain signals?

The ultimate goal is to develop a speech neuroprosthetic that produces intelligible speech and can operate in real time. This is critical for the system to function naturally and transparently, ideally so that neither the user nor others even notice the presence of the prosthetic device.

4. How will a speech neuroprosthetic make a difference?

The most immediate impact would be for individuals who have completely lost the ability to communicate, such as those with late-stage ALS. This technology would provide a faster, more natural means of communication than previous brain-actuated typing systems that we have developed. We also believe that this technology can eventually be extended to help individuals with other neuromuscular speech disorders.

5. Can this really be done? How are you investigating it?

It has been great working with the neurosurgeons, neurologists and epilepsy monitoring team at VCU Health. To achieve the necessary signal fidelity for representing the intricacies of speech production in the brain, we record signals from electrodes implanted on the surface of, and deeper within, the brains of patients being evaluated for severe, intractable epilepsy as they speak and imagine speaking. We then build computational models that attempt to reproduce the recorded speech using only the recorded brain activity. While our early results are promising, our system is still many years away from being practical for patients. For now, we are primarily focused on the challenges of improving the intelligibility and latency of the system.
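For readers curious what "building computational models" can look like in practice, here is a minimal, hypothetical sketch in Python. It assumes time-aligned neural features (for example, per-electrode high-gamma power) and a mel spectrogram of the simultaneously recorded speech, and fits a simple ridge regression to reconstruct the spectrogram from brain activity alone. The array shapes, synthetic data and choice of model are illustrative assumptions, not the lab's actual pipeline.

```python
# Toy sketch: map neural feature frames to speech spectrogram frames,
# then check how well held-out speech is reconstructed from brain activity alone.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_frames = 2000        # time-aligned analysis frames
n_electrodes = 64      # hypothetical intracranial channels
n_mel_bins = 40        # spectral representation of the spoken audio

# Stand-ins for real recordings: neural features (X) and the mel spectrogram
# of the speech produced at the same time (Y).
X = rng.standard_normal((n_frames, n_electrodes))
true_map = rng.standard_normal((n_electrodes, n_mel_bins))
Y = X @ true_map + 0.5 * rng.standard_normal((n_frames, n_mel_bins))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0
)

# Linear (ridge) regression from neural features to spectrogram frames.
model = Ridge(alpha=1.0)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Per-bin correlation between reconstructed and actual spectrograms is a common
# proxy for reconstruction quality; the predicted spectrogram would then be
# passed to a vocoder to synthesize audible speech.
corrs = [np.corrcoef(Y_test[:, b], Y_pred[:, b])[0, 1] for b in range(n_mel_bins)]
print(f"mean spectral-bin correlation: {np.mean(corrs):.2f}")
```

In a real system, the linear model would typically be replaced by a more expressive decoder, and the latency of each stage matters as much as accuracy, which is why the answer above emphasizes both intelligibility and real-time operation.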

Fun Fact: My wife is Persian. Before meeting her, I had never been introduced to Persian food and did not realize that it is considerably different from the traditional Middle Eastern cuisine I grew up with from my Syrian grandmother's side of the family. While there are a few restaurants around Richmond that offer some Persian dishes, we are hopeful that an authentic Persian restaurant will eventually open here.