Generative AI has already left an indelible mark on humanity’s digital landscape. If we accept that as the “necessary cost” of a new industrial revolution, we must also confront a far more alarming, yet rarely discussed, price: the potential damage to our brains, especially those of teenagers. A new “whistle-blower” study from MIT offers an unsettling perspective on the risks in how we develop and deploy AI.
Productivity Booster or Thought Killer?
The core question is back on the table: Does ChatGPT liberate human productivity, or does it erode our capacity to think?
Researchers at MIT’s Media Lab recently released a controversial paper—the first to utilize high-density EEG brain scans to examine the impact of large language models (LLMs) on human cognition. The findings are anything but encouraging.
How the Experiment Worked
- Participants: 54 Boston residents, ages 18–39
- Tasks: Multiple 20-minute SAT-style English essays
- Groups:
  - ChatGPT
  - Google Search
  - “Raw Brainpower” (no external tools)
Throughout every writing session, 32-channel EEG sensors tracked alpha, theta, delta, and other brainwave bands associated with creativity, semantic processing, and working memory.
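The paper does not publish its analysis pipeline, but the idea of tracking “band power” (how much EEG energy falls in the alpha, theta, or delta frequency ranges) is easy to illustrate. Below is a minimal, self-contained Python sketch, using a synthetic signal and a naive DFT rather than real EEG data or the study’s actual methods; the sampling rate and band edges are illustrative assumptions.

```python
import math

def dft_power(signal, fs):
    """Naive DFT of a real signal; returns (freqs_hz, power) up to Nyquist."""
    n = len(signal)
    freqs, power = [], []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        power.append((re * re + im * im) / n)
    return freqs, power

def band_power(freqs, power, lo, hi):
    """Total spectral power in the band [lo, hi) Hz."""
    return sum(p for f, p in zip(freqs, power) if lo <= f < hi)

fs = 128  # hypothetical sampling rate in Hz
n = 256   # two seconds of samples
# Synthetic "EEG": a dominant 10 Hz alpha rhythm plus a weaker 6 Hz theta rhythm.
signal = [
    math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 6 * t / fs)
    for t in range(n)
]

freqs, power = dft_power(signal, fs)
alpha = band_power(freqs, power, 8, 13)  # alpha band, ~8-13 Hz
theta = band_power(freqs, power, 4, 8)   # theta band, ~4-8 Hz
print(alpha > theta)  # → True: the 10 Hz component dominates
```

A real analysis would use Welch’s method on multi-channel recordings (e.g., `scipy.signal.welch`) rather than a hand-rolled DFT, but the comparison of per-band energy is the same basic measurement the researchers report as rising or falling across conditions.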
The “Cognitive Cliff” in the ChatGPT Group
Compared with the other two cohorts, participants who leaned on ChatGPT showed unmistakable signs of cognitive decline:
- EEG drop-off: Neural activity fell sharply, especially in executive-control and attention networks.
- Behavioral slump: By the third essay, many subjects stopped thinking aloud entirely, effectively outsourcing the task to ChatGPT.
- Language sameness: Essays became formulaic, recycling the same sentence patterns and structures.
- Memory wipe: 83 percent of users could not recall what they had written.
Two high-school English teachers reviewing the results summed it up: “Technically correct, grammatically smooth—but soulless.”
Raw Brains and Search Engines Fared Better
Participants who drafted essays with zero tools displayed the densest neural connectivity, especially in bands linked to deep semantic work and creativity.
Surprisingly, the Google Search group also kept their brains firing. Actively searching, filtering, and synthesizing information stimulates cognition in ways that simply “letting the AI spit out prose” does not.
It’s How You Use AI That Matters
MIT ran a crossover test:
- ChatGPT users rewrote one essay without the AI.
- The raw-brain group used ChatGPT to refine their existing drafts.
The result:
- The original ChatGPT group’s brainwaves dropped even further, and memory of the text virtually vanished.
- The raw-brain cohort showed increased neural activity when using ChatGPT as a polishing tool.
Takeaway: AI is not inherently damaging. When used to enhance your thinking—after you’ve already done the heavy lifting—it can be a genuine cognition multiplier. But when AI replaces thinking, mental muscles atrophy.
Teens Are at the Greatest Risk
Lead author Nataliya Kosmyna rushed the data online before peer review because “ChatGPT is already barreling into classrooms.”
“I’m terrified we’ll hear about a ‘GPT kindergarten’ in six months—that would be a cognitive catastrophe,” she warns.
Children’s and teens’ brains are highly plastic. Long-term dependence on AI-generated content may prune the neural pathways that support critical thinking and independent learning.
Child psychiatrist Zishan Khan echoes the concern:
“Over-reliance on LLMs is starving the neural circuits kids need for memory, association, and stress resilience. Unused circuits wither, leaving them less able to tackle complex problems later in life.”
What About Programmers?
Writing isn’t the only worry. MIT is already running a companion study on coders. Preliminary data suggest that heavy AI-assisted programming boosts short-term speed but appears to compromise long-term problem-solving and architectural reasoning.
Tech leaders eyeing AI as a drop-in replacement for junior developers may want to rethink that calculus.
The Ironic Aftermath
No sooner had the study been posted than hordes of users fed it straight into ChatGPT for a summary. Kosmyna anticipated the move and embedded a “language trap” in the paper, asking any summarizer to focus on a single table. Predictably, many AI-generated synopses misreported the findings and even hallucinated references to “GPT-4o,” which the paper never mentioned.
Original study: https://arxiv.org/pdf/2506.08872
Bottom line: Use AI to amplify your ideas, not to replace them—especially if your brain is still under construction.