This is an important step toward developing brain–computer interfaces that can decode continuous language from noninvasive brain recordings.
The results were published in the peer-reviewed journal Nature Neuroscience, in a study led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.
Tang and Huth’s semantic decoder is not implanted in the brain; instead, it measures brain activity through fMRI scans. For the study, participants listened to podcasts while the AI system attempted to transcribe their thoughts into text.