A person holding a cup of bubble tea, smiling alongside a large pink teddy bear, with a vibrant pink background.
Maybe we think of AI as a giant teddy bear: cute and cuddly, but one that can suffocate us if we don’t pay attention or become too complacent.

📝 How I wrote this post (with a little help, and a sleeping cat)

Dear Invisible Friends,

My cat fell asleep on the notebook where I outlined the ideas for this post (true story), so I need to recall from memory what I originally planned for this piece. Spoiler alert: using AI to write essays limits your ability to recall what you wrote (according to Kosmyna et al., 2025). So let’s put my skills to the test.

I remember I wanted to explain the “methodology” of this post, because I didn’t have the time to read the 200+ pages of the original research. I wanted to explain that I read some sections and skimmed the rest; I also read other (shorter) articles summarizing the same research, and I read the Jose et al. (2025) paper published in Frontiers in Psychology. I also used a plethora of AI tools to help me brainstorm the article, such as DeepThink (R1) and the Perplexity AI assistant (powered by OpenAI’s GPT-4o); I’m also trying to get used to sider.ai. The blabbering is typed by yours truly, but I discovered I can fix my spelling and grammar with a WordPress tool.

Disclaimer: I’m neither a neuroscientist nor an expert in education or AI. What you read here is a combination of information available online and my own interpretation.

I got distracted by some ‘Encarta Encyclopedia’ nostalgia; time for a break.

Collage featuring memorable scenes from 1980s and 1990s pop culture, including a comedic moment from a film on the top left, the cover of Microsoft's Encarta '95 encyclopedia on the top right, a dramatic scene between two characters from a movie on the bottom left, and a promotional image for Pirates Dinner Adventure on the bottom right.

🎓 What I Remember from Jose et al. (2025): My Human Summary

After a well-deserved break, and after serving as a human bed for my cat’s nap, I’m back. I wonder how we can use AI to optimize our tasks without (quite literally) losing our minds over it: how we can keep cognitive abilities such as critical thinking and not erode our capacity to think as humans. That is why I wanted to write this post, not just to jump on the hype wave.

Where was I? Yeah, the summary of the Jose et al. (2025) paper. I will write a summary off the top of my head (after reading a normal-length review paper) and then copy-paste an AI-generated summary for comparison. Of course, the full reference is at the end of this post if you want to consult it!

I’m cracking my knuckles before starting to type (in silence, because the sound freaks me out). The article is a review, which means it summarizes and cites relevant research in the field. From what I read, using AI in education has some advantages and some rather concerning disadvantages, and AI should enable, but never replace, human teaching and human interaction in education. Off the top of my head, the advantages are simplifying information, supporting language learning, clarifying tasks, helping with simple recall of information, and creating customized learning for students. The disadvantages are loss of motivation, loss of critical thinking, automation bias (blindly relying on AI output), loss of deep thinking or deep learning, and reduced creativity. Another problem is that the datasets used for training are biased, which creates inequalities and spreads biased information; and if students blindly trust the results, AI could feed them wrong information. I can imagine it is tricky to design a system that plays to the advantages while reducing the disadvantages and avoiding risks, for the benefit of students’ learning and growth. Of course, it could be tempting to forbid AI in the classroom, but that’s hard to enforce at home, because AI is so easily accessible.

🤖 Human vs. AI: Comparing My Summary to Perplexity’s

Now, let’s compare my take-home message with an AI-generated summary (by Perplexity). Let’s see if I fried my brain with too much AI or if my summary still has any connection to the source material.

Plain English Summary: Artificial intelligence in education is a double-edged sword: it can make learning more personalized and efficient, but if students rely on it too much, they may lose essential skills like memory, critical thinking, and creativity. The key is to use AI as a helpful assistant, one that supports learning without replacing the hard mental work that leads to deep understanding and independent thought. Teachers and AI designers must work together to ensure that AI enhances, rather than erodes, our cognitive abilities.

Core Takeaways for Fast Review: AI in education is both an enhancer and a potential inhibitor of cognitive skills.

Cognitive offloading: Over-reliance on AI can reduce memory, critical thinking, and creativity.

Empirical evidence: AI boosts performance in some areas but may weaken deep learning and engagement if overused.

Theoretical frameworks: Effective AI use should align with Cognitive Load Theory, Bloom’s Taxonomy, and Self-Determination Theory.

Implementation: Integrate AI thoughtfully; balance efficiency with opportunities for independent, effortful thinking.

Pitfalls: Over-reliance, bias, and loss of motivation are risks; solutions include reflective activities, fairness-aware design, and fostering autonomy.

Conclusion: AI should complement, not replace, human learning, always aiming to empower rather than create dependency.

Theory | Main Idea | AI in Education Example
Cognitive Load Theory | Don’t overload working memory | AI removes distractions, but shouldn’t do all the thinking for you
Bloom’s Taxonomy | Levels of thinking | AI helps with facts, but you need to practice higher-level skills yourself
Self-Determination Theory | Motivation comes from autonomy, competence, relatedness | AI can boost confidence, but you need choice and human connection too

My “off the top of my head” summary of the article was not so shabby compared with the one generated by Perplexity, right?

🧠 The Kosmyna et al. Study: What It Found and Why It Matters

Now, let’s talk about the elephant in the room: the paper by Kosmyna et al. (2025). I’m reading my notebook (now wet, because I accidentally spilled water on it), and I want to share a summary and discuss the study’s limitations.

First, I will provide a summary (with the help of DeepThink (R1) because I’m getting hungry and because thousands of articles online could provide a better summary).

🧪 What the EEG Data Tells Us About AI Writing

The Silent Cost of AI Writing: When researchers strapped electroencephalogram (EEG) caps to students writing essays, they uncovered a disturbing truth about ChatGPT dependence. Your brain builds “cognitive debt” every time you outsource thinking to AI, like skipping gym workouts for your mind. The study found that heavy ChatGPT users exhibited 40% weaker neural connectivity compared to those writing unaided. Importantly, this decline was specific to certain brain networks involved in top-down control and semantic integration, rather than a general loss of brain function. Because the findings are correlational, further research is needed to understand causality and broader implications.

ChatGPT users struggled to recall their own work just minutes after writing – 83% couldn’t quote their essays! The study links this difficulty to reduced frontal-temporal semantic coherence, meaning the brain networks responsible for integrating meaning were less engaged. However, this is not simply a memory problem; it reflects a broader neural under-engagement during the writing process.

The same group also reported a profound disconnect from their writing, often feeling, “This isn’t mine.” Most alarmingly, when habitual AI users later wrote solo, their brains showed less engagement than even total novices, suggesting this isn’t laziness but measurable skill erosion.

The researchers connect these effects to disrupted self-monitoring brain regions, which are crucial for metacognitive awareness and emotional engagement. In other words, the neural patterns observed correspond behaviorally to difficulties in deeply engaging with and feeling ownership over one’s own writing.

🧭 When You Use AI Matters More Than You Think

As the paper warns with a Dune epigraph: surrender your thinking to machines, and you surrender your capability.

Interestingly, however, this research also shows that AI-naïve participants who used AI after an initial round of unaided writing showed increased engagement (the Brain-to-LLM group). I must emphasize that when and how AI is introduced matters for cognitive outcomes: introducing it after independent writing may actually enhance engagement, suggesting that the timing and manner of AI use critically shape the results.

📚 What Is Cognitive Debt? A Simple Explanation

What is Cognitive Debt? Imagine your brain as a muscle. Every time you outsource thinking, like having ChatGPT write your essay instead of wrestling with the ideas yourself, you skip the mental “reps” that build creative and analytical strength. This creates cognitive debt: a hidden deficit where short-term efficiency (quick AI-generated work) weakens your neural pathways over time. Like financial debt, the “interest” compounds: MIT’s EEG scans show that heavy AI users develop 40% weaker brain connectivity, struggle to recall their own writing, and feel disconnected from their work. The scariest part? When asked to write solo later, their brains still underperform, suggesting this debt isn’t just borrowed time; it’s stolen capability.

🧠 Core Takeaways: The Risks and the Remedy


🧠 Cognitive Debt = Borrowed Brains: Outsourcing thinking to AI weakens neural connections (measured via EEG) like skipping gym weakens muscles.

📉 The Memory Tax: 83% of ChatGPT users couldn’t quote their own essays minutes after writing; brain-only writers recalled perfectly.

😔 Ownership Crisis: AI-assisted writers felt essays were “not theirs” (0% full ownership vs. 94% for unaided writers).

Session 4 Shock: When habitual AI users wrote solo, their brains showed less engagement than novice writers’, suggesting decay.

🛡️ The Antidote: Use AI for research/editing (like search engines), but preserve “brain-only” blocks for creating original work.

🧩 What’s Missing From the Study

Now let’s talk about the limitations of this study (courtesy of Perplexity).

The study’s in-lab results are strong evidence that using AI tools like ChatGPT changes brain activity and learning outcomes during essay writing. However, because the researchers didn’t track what participants did at home, and because the sample was small and focused on a single task, we should be cautious about applying these findings to all students, all types of learning, or longer time frames. The study is a valuable first step, but more research is needed to confirm and expand on these results.

Limitations:

Sample Size: Session 4 had only 18 participants.

Task Specificity: Findings may not generalize beyond essay writing.

EEG Constraints: Surface-level activity only; deeper structures (e.g., hippocampus) not assessed.

Impact on Conclusions:

Underpowered Session 4 limits tool-switching generalizations.

Long-term “cognitive debt” beyond 4 months remains speculative.

🧠 My Opinion on the MIT Study: Interesting, But Not Gospel

Dina’s Opinion: I think this research shows interesting results, pointing out that over-reliance on ChatGPT or other AI tools when writing essays is bad for our brain connectivity, our sense of ownership of our work, and our memory (correlational but compelling evidence). But we need to take the results with a pinch of salt due to limitations such as the small sample size, among others described by the authors. I’m looking forward to reading more research on the topic.

🔄 Navigating AI in Everyday Life: My Personal Balance

Personal reflection: I’m terrible with directions. However, I’m finding my way without Google Maps. Because of what? Neuroplasticity. I’m also trying to memorize little strings of information here and there so I don’t have to check my phone all the time, and trying not to ask AI for every single decision I make, so I can think for myself. On the other hand, I really enjoy using AI to help me clarify difficult concepts when studying or to help me set priorities. Needless to say, it also helps make my texts readable. I’m conscious of both the risks and the opportunities of AI in education.

⚖️ Not Dumb, Not Divine: What ChatGPT Really Does (and Doesn’t)

I personally do not believe that “ChatGPT makes you dumb”; that would be taking the Kosmyna et al. (2025) research out of context and blowing it out of proportion. However, it does point to some intriguing correlations. It would be interesting to see how participants’ prior English proficiency and AI literacy could affect the outcome of a similar experiment. On the other hand, “use it or lose it”: if we don’t nurture our critical thinking and other abilities, it would be like skipping leg day at the gym. My personal preference? AI-augmented “brain-only” thinking. That means writing down ideas first and then using AI for brainstorming, or drafting an article yourself and using AI for proofreading. There are many possible combinations. What is not ideal is passively letting AI write a text for you without even reading or fact-checking it; in that case, you would be paying a high interest rate on your cognitive debt. I’m not surprised that, in Kosmyna et al. (2025), the group that only used AI didn’t feel ownership of their essays or remember them; after all, they didn’t write them. Well, that’s another discussion. Enough blabbering, I believe…

🧩 Try This: One Brain-Only Task This Week

And what do you prefer, AI-assisted or brain-only? Go ‘brain-only’ for one work task this week and share your experience in the comments below!

RoxenOut!

References:

  1. Jose, B., Cherian, J., Verghis, A. M., Varghise, S. M., Mumthas, S., & Joseph, S. (2025). The cognitive paradox of AI in education: Between enhancement and erosion. Frontiers in Psychology, 16, Article 1550621. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1550621/full
  2. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://arxiv.org/pdf/2506.08872

