11/21/2025 / By Kevin Hughes

In a stunning leap toward science fiction becoming reality, researchers from the University of California, Berkeley, and Japan’s NTT Communication Science Laboratories have developed artificial intelligence (AI) capable of translating brain activity into readable text – without invasive implants.
The technology, dubbed “mind-captioning,” uses functional magnetic resonance imaging (fMRI) scans and AI to reconstruct thoughts with surprising accuracy, raising both hopes for medical breakthroughs and alarms over unprecedented privacy invasions. As explained by BrightU.AI’s Enoch, fMRI is a powerful neuroimaging technique that allows researchers and clinicians to map brain activity by detecting associated changes in blood flow.
The decentralized engine adds that fMRI is a valuable tool for investigating brain function and has numerous applications in research and clinical settings. However, it is essential to approach fMRI data and results with a critical eye, acknowledging its limitations and the challenges in interpreting its outputs.
The system relies on deep learning models trained to interpret neural patterns linked to visual and semantic processing. In experiments, participants watched thousands of short video clips while undergoing fMRI scans. An AI model analyzed these scans alongside written captions of the videos, learning to associate brain activity with specific meanings.
When tested, the AI decoded brain activity into descriptive sentences. For example, after a participant viewed a video of someone jumping off a waterfall, the system initially guessed “spring flow” before refining its output to “a person jumps over a deep water fall on a mountain ridge.” While not word-for-word perfect, the semantic resemblance was striking.
Tomoyasu Horikawa, lead researcher at NTT Communication Science Laboratories, explained that the AI generates text by matching brain activity patterns to learned sequences of numbers derived from video captions. Horikawa said this method can “create comprehensive descriptions of visual content, even without relying on language-related brain regions,” suggesting potential use for patients with speech impairments.
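The matching process Horikawa describes can be illustrated with a minimal, hypothetical sketch: learn a linear map (here, ridge regression) from brain-activity features to caption-embedding vectors, then pick the candidate caption whose embedding is most similar to the projection. All data below is synthetic and the method is a simplified stand-in for the study's actual pipeline, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: fMRI feature vectors paired with
# numeric caption embeddings (both randomly generated here).
n_train, n_voxels, n_dim = 200, 500, 64
brain_train = rng.normal(size=(n_train, n_voxels))
caption_embeddings = rng.normal(size=(n_train, n_dim))

# Ridge regression: a linear map from brain activity to the
# caption-embedding space, one common decoding approach.
lam = 10.0
A = brain_train.T @ brain_train + lam * np.eye(n_voxels)
W = np.linalg.solve(A, brain_train.T @ caption_embeddings)

def decode(brain_activity: np.ndarray, candidates: np.ndarray) -> int:
    """Project brain activity into embedding space, then return the
    index of the candidate caption embedding with highest cosine
    similarity to the projection."""
    pred = brain_activity @ W
    sims = (candidates @ pred) / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(pred) + 1e-9
    )
    return int(np.argmax(sims))

# A scan from the training set should map back to its own caption.
best = decode(brain_train[0], caption_embeddings)
```

In the actual research, the retrieved or optimized embedding is further turned into a full sentence by a language model, which is how rough first guesses like “spring flow” get refined into complete descriptions.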
The technology could revolutionize communication for individuals with conditions like amyotrophic lateral sclerosis (ALS), locked-in syndrome, or severe aphasia. Psychologist Scott Barry Kaufman, unaffiliated with the study, called it a “profound intervention” for nonverbal individuals. However, ethicists warn of dire consequences if such power is misused.
Marcello Ienca, an AI and neuroscience ethics professor at Technical University of Munich, cautioned, “If we get there, then we need to have very, very strict rules when it comes to granting access to people’s minds and brains.” He highlighted risks of exposing sensitive mental data, including early signs of dementia or depression.
Currently, the system requires extensive cooperation: Participants must undergo hours of fMRI scans while viewing curated content. UC Berkeley’s Alex Huth reassured skeptics, stating, “Nobody has shown you can do that, yet,” regarding unauthorized thought extraction. But the word “yet” lingers ominously.
The study acknowledges ethical dilemmas, particularly around “mental privacy.” Łukasz Szoszkiewicz, a neurorights expert, urged preemptive safeguards: “Neuroscience is moving fast, and the assistive potential is huge – but mental privacy and freedom of thought protections can’t wait.” Proposed solutions include “unlock” mechanisms where users consciously activate decoding with a mental keyword.
Horikawa emphasized limitations – the AI struggles with unusual or unpredictable imagery (e.g., “a man biting a dog”). Still, as AI models grow more sophisticated, the line between assistive tool and invasive surveillance blurs.
Elon Musk’s Neuralink and other neurotech firms are racing toward consumer brain-computer interfaces. With AI advancing rapidly, the risk of corporate or governmental misuse escalates. Ienca warned, “This is the ultimate privacy challenge.”
For now, the technology remains confined to labs, dependent on bulky MRI machines and willing participants. But as computational demands shrink and AI grows sharper, the specter of real-time thought surveillance looms.
While mind-captioning offers life-changing potential for the speech-impaired, its darker implications cannot be ignored. The same tools that unlock communication could also dismantle the last bastion of privacy: our inner thoughts. As Szoszkiewicz stressed, “We should treat neural data as sensitive by default.”
The question isn’t whether this technology will evolve. It’s whether humanity can control it before it controls us.
