
The Price You Pay for AI Convenience Is Cognitive Debt 

Technology has brought many benefits and conveniences to our lives. Yet we tend to forget that every technological disruption in history was met with moments of resistance, retaliation, reflection, and reconciliation, as democratic participation was used to curb its negative consequences.

“A thousand years of history and contemporary evidence make one thing abundantly clear: There is nothing automatic about new technologies bringing widespread prosperity. Whether they do or not is an economic, social, and political choice”
— Power and Progress, by Daron Acemoglu and Simon Johnson (MIT), awarded the 2024 Nobel Prize in Economics

Digital technologies have brought similar conveniences, but oftentimes with unintended consequences. I would argue that Generative Artificial Intelligence (GenAI) is another of those technological disruptors that feels ‘shiny and new’ but, upon closer inspection, comes at a high societal cost. Many GenAI trade-offs have already been discussed here at Dataethics.eu, but the more insidious cognitive impact on our society is harder to measure, because it is individual and contextual, and the harms are longitudinal rather than immediate. Considering how current digital technologies have already shortened our attention spans, encouraged memory offloading, and left a gap in media literacy that enables manipulation of public discourse, it is easy to see how GenAI can exacerbate our habitual adoption of digital technologies without questioning the pay-offs.

GenAI, for all its promise, has already shown detrimental effects on our shared realities, from AI ‘slopaganda’ to AI-related psychosis and suicides, as well as hallucinations and the spread of misinformation. These effects are largely due to the speed of implementation, the lack of transparency, and the gap in AI literacy that the public sorely needs closed in order to navigate AI usage, integration, and risk mitigation. We tend to accept technological advances as ‘positive progress’ beneficial to our collective well-being, forgetting the historical side effects that led to public scrutiny and action to ensure the benefits were shared and the harms were regulated.

Offloading Critical Thinking Skills
One of the negative effects GenAI brings is the cognitive impact of outsourcing and offloading critical thinking skills, leading to a cognitive debt that accrues over time with extended use of GenAI. The term ‘cognitive debt’ was described in the MIT Media Lab study Your Brain on ChatGPT: Accumulation of Cognitive Debt. The experiment had students perform writing tasks while their brain activity was measured with an electroencephalography (EEG) headset. It is not surprising that, with AI, brain activity declined significantly: offloading cognition to convenient answers and writing corrections removes the friction in research and learning that is critical to students’ development. The authors concluded, “These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.”

Similar results were echoed by a Microsoft Research study, The Impact of Generative AI on Critical Thinking, in which knowledge workers self-reported reduced cognitive effort and declining confidence after using GenAI tools.

The American Psychological Association recently published a paper titled GenAI Reliance: Behavioral Evidence of Cognitive Offload in High-Use Adults, showing recurring themes of cognitive outsourcing, diminished perceived ownership of ideas, and trade-offs between speed and depth of thought. More importantly, over time the 1,900+ participants felt increasingly demotivated on those tasks, which can lead to job dissatisfaction.

Perhaps the starkest results come from an impressive survey of over 300 global experts on AI. Elon University published a report, “The Future of Being Human,” highlighting the negative effects of AI adoption on cognitive capabilities: the capacity and willingness to think deeply about complex concepts, confidence in one’s native abilities, and metacognition ranked as the top negative themes of user exposure to AI. Collectively, these experts are calling on humanity to think intentionally and carefully, and to take wise action now, so we do not sleepwalk into an AI future that we never intended and do not want.

So how can we use GenAI and design systems that help enhance critical thinking skills? A paper presented at the CHI 2025 workshop on Tools for Thought takes a multidisciplinary approach to bridging how the use of GenAI affects human thought. The authors discuss how AI affects metacognition, critical thinking, memory, and creativity, and outline an emerging design practice for building GenAI tools that both protect and augment human thought. Augmentation is highly individual and hard to measure, which makes it hard to design for and standardize. The paper highlights this challenge: “Not all cognitive augmentation is designed to challenge thinking or provide reflective friction. Indeed, several of the examples combine reflective prompts or other provocations with scaffolding intended to guide users through task completion.” Measuring the effects will also prove challenging, as open-ended tasks, workflow context, and changing data make it ambiguous whether an AI user is better or worse off with AI’s help. Reflective thinking requires suspension of judgement during the period of inquiry, but the temptation of AI-generated information may discourage that suspension and create, instead, an ‘illusion of understanding.’

As with many digital technologies, AI regulation has been slow to react. Evidence for studying the consequences is contingent on funding, transparency, and data from the very technologists who need regulation and scrutiny; meanwhile, the public is given access to unvetted technologies. The immediate challenges are how fast we can shield our democracies from AI slop, cultivate trust in the humans who govern and regulate AI implementations, and prepare exit plans for more control over our digital futures as the mountain of cybersecurity backlog grows more complex with agentic AI applications. We could choose to PAUSE AI development to give time for actual consensus, or we can continue to let market dominance of this narrative test our institutions’ and societies’ resilience to rapid change and disruption. The choice has always been ours.

We must not ignore the momentum these GenAI technologies have already built: mounting cybersecurity problems, psychological harm to data workers and users, and the environmental impact of the data centers that power GenAI. Economic entanglement will prove costly as public sentiment shifts and power struggles ensue, and the present fight for our futures will be another chapter in the story of Power and Progress.

Read more about the author at our contributor’s site

Illustration: Screendump from indiabrains.com