When AI takes over thinking: The rising risk to human cognition
AI’s growing convenience boosts efficiency but quietly weakens human cognition and critical thinking in workplaces.


Artificial intelligence is no longer a futuristic promise. It is now embedded in daily organisational life. Across healthcare, agriculture, manufacturing, and services, firms are using AI to improve efficiency and productivity. With the rapid rise of generative AI, this shift has accelerated. Systems can draft reports in seconds, recommend decisions, synthesise large volumes of data, and automate tasks that once required years of professional expertise.
These visible gains, however, come with a quieter cost: the gradual erosion of human cognition. AI has become a default coworker many rely on instinctively. According to a Microsoft and LinkedIn report, 75% of knowledge workers already use AI at work, and 90% of them say it saves time under pressure, boosts creativity, and frees them to focus on higher-value tasks. Yet, as generative AI penetrates knowledge workflows, questions are emerging about its impact on critical thinking and cognitive practice.
In a survey of 319 knowledge workers who use generative AI at least weekly, a collaborative study by Carnegie Mellon University and Microsoft Research, participants reported expending “much less” or “less” effort across cognitive activities associated with critical thinking. The reported decline ranged from 69% to 79% across multiple categories of Bloom’s taxonomy, including a 55% reduction in effort for high-stakes tasks such as evaluation.
Another study by the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, Zurich, involving 666 participants, found a correlation between increased AI usage and lower critical thinking scores. Early neurocognitive evidence reinforces these concerns. An MIT Media Lab preprint using EEG data reported that during essay writing, the LLM-assisted group exhibited the weakest brain connectivity, with cognitive engagement declining as reliance on external tools increased.
This is particularly worrying in creative work. The risk is not only that people think less; it is cognitive fixation. Initial AI suggestions often anchor thinking and become a ceiling rather than a starting point.
As AI becomes integral to workflows, employees face continuous pressure to reskill while many capabilities rapidly become obsolete. Deloitte’s Human Capital Trends report highlights a paradox: leaders prioritise agility, while workers seek stability. AI is often presented as the bridge between the two, but its impact is more complicated. The same tools that promise empowerment can also disengage workers and dilute the human role in decision-making.
Increasingly, AI is doing the thinking even in areas where human judgment once mattered. Take something as routine as writing an email. Earlier, employees would frame arguments, choose words carefully, and refine drafts. Today, many simply describe the situation to an AI system and accept a polished response in seconds. The convenience is undeniable, but the cognitive effort that once shaped clarity and judgment has quietly receded.
A similar pattern is visible in advanced industrial settings. AI-powered control systems monitor thousands of parameters and recommend real-time adjustments. Initially, these systems outperform human operators in detecting anomalies. Over time, however, operators shift from understanding the process to merely monitoring dashboards. Instead of diagnosing root causes, they wait for alerts and follow prescribed instructions. The ability to reason through the process begins to fade.
This is the fundamental risk of convenience. When AI takes over core cognitive work, such as sense-making, synthesis, evaluation, and judgment, it threatens the very capabilities that make human intelligence valuable. Offloading higher-order thinking doesn’t just change how work is done; it changes how people think. Problem-solving skills weaken without regular use. Learning becomes passive. Over time, workers fall out of practice.
Evidence of this shift is already visible. The Microsoft Research study found that as generative AI use increased, perceived cognitive effort declined even as confidence in outputs rose. Early-career professionals, in particular, could produce impressive work without fully understanding the reasoning behind it. Researchers call this phenomenon a divergence between capability and performance: results appear strong while underlying understanding remains thin. The behavioural shift is subtle but significant: workers move from solving problems to merely checking answers.
Organisations often overlook this risk because AI delivers speed and visible improvements. But this view is shortsighted. As dependence grows, human judgment weakens, and workers become less effective when systems fail or conditions shift. Deloitte’s research echoes this concern, noting that while AI accelerates execution, it reduces opportunities for experiential learning unless work is intentionally redesigned.
To understand why this matters, it’s useful to distinguish between two forms of productivity. One comes from well-designed processes where outcomes depend largely on the process itself. Automation enhances performance here. The other comes from human contribution—where individuals interpret signals, adapt practices, and improve outcomes beyond what the process dictates. In these cases, who performs the work matters. Embedding AI too deeply into such tasks risks stripping away the human contribution that sustains long-term performance.
This shift also affects accountability. As algorithmic recommendations dominate, humans may remain responsible on paper while quietly surrendering cognitive ownership of decisions. Researchers describe this as human agency decay. Even research from OpenAI suggests that prolonged reliance on conversational AI can reduce independent engagement with problems, especially when AI is treated as an authority rather than a tool.
Organisational resilience depends on people who can reinterpret weak signals, challenge assumptions, and devise new approaches when old ones fail. Humans introduce disruption, creativity, and discontinuity—qualities essential in volatile environments. Excessive automation of judgment may make organisations efficient, but it also makes them strategically fragile.
This is not an argument against AI. Technology is central to the future of work and has already delivered enormous benefits. The concern is balance. Cognitive abilities require continuous exercise. When AI consistently performs demanding thinking tasks, people lose opportunities to engage deeply—often without realising it.
The danger is not immediately reflected in productivity metrics. But over time, this silent erosion of cognition may cost organisations far more than it saves—optimising systems at the expense of human intelligence, which ultimately drives learning, innovation, and resilience.
For economies like India, where demographic advantage and human capability underpin long-term competitiveness, the stakes are higher. If organisations rush to automate judgment instead of augmenting it, they risk weakening the cognitive depth that fuels adaptability, entrepreneurship, and growth. Policymakers, educators, and business leaders must pay attention not only to what AI accelerates, but also to what it quietly displaces. In the race to automate work, the bigger risk may be automating away the thinking that underpins resilience, innovation, and economic stability.
Vijaya Sunder M, Assistant Professor, Indian School of Business.
Stuti Juyal, Post Graduate Programme in Management, Indian School of Business.
First Published: Mar 11, 2026, 12:11