Artificial intelligence is changing how we work. And more importantly, it is changing how we think. 

Most perspectives on AI focus on productivity: faster outputs, greater scale and increased efficiency. Those gains are real, but beneath the surface something more significant is happening, and many organisations are only beginning to recognise it.

People are starting to think less, not because they are incapable or disengaged, but because, increasingly, many feel they simply do not have to.

Psychologists have long described humans as cognitive misers. We naturally conserve mental effort wherever possible. AI has supercharged that instinct: why struggle through ambiguity or wrestle with complexity when a system can generate a polished, confident answer in seconds? 

At first, this feels like a significant advance in efficiency with no downsides. AI helps draft emails, then structure ideas, then recommend decisions. Over time, it quietly begins doing the thinking and decision-making itself.

The transition is subtle, but it matters enormously. The real risk in AI adoption is not automation alone; it is cognitive surrender.

AI amplifies judgment, good and bad

One of the most widespread assumptions about AI is that it democratises capability. The evidence suggests something more complicated.

A widely discussed MIT and Harvard study found that top performers using AI improved significantly, by roughly 15%, while lower performers actually declined. The same tool produced very different outcomes. The difference was not access to AI but the ability to use it to enhance one's own thinking and judgment.

The strongest users challenged the outputs. They refined them, tested them and used AI to sharpen their own thinking. Others provided limited context and simply accepted answers that were often generic ‘slop’. AI is not yet ready to replace human capability, and may not be for some time; it amplifies the quality of the thinking already there. This has major implications for organisations implementing AI at scale.

The illusion of competence 

One area where this paradox has emerged clearly is education. Research from Lodge and Loble in Australia highlights a growing concern: students are increasingly outsourcing the very cognitive effort required for learning itself. The work looks better, more polished and more coherent, but underneath, understanding is often weaker. 

Once you recognise that pattern, you start seeing it everywhere: in workplaces, strategy discussions, decision-making and leadership communication. AI can produce outputs that sound authoritative, but does the person using it really understand the implications of what it has generated? 

That matters because critical thinking has always depended on mental debate: questioning, testing, wrestling with uncertainty, making mistakes and refining understanding. Remove too much of that internal debate, and capability can quietly erode beneath the appearance of performance and productivity. 

Productivity is rising. Capability may not be. 

Research from Microsoft and academic partners found that generative AI significantly improved productivity in workplace environments. But the deeper insight is often overlooked: many of those gains came through standardisation and acceleration, not necessarily deeper expertise. 

Another study examining consultants using AI found something equally revealing: AI users performed faster and better until they encountered problems AI could not solve. Then performance dropped sharply. 

There is an important difference between using AI for augmentation and simply outsourcing the obligation to work through complex issues. The latter approach dulls cognitive sharpness while simultaneously producing worse outputs.

The trade-off organisations need to understand 

AI reduces effort, but effort is also where judgment develops, expertise forms, innovation emerges and critical thinking is strengthened. That is the conundrum at the centre of AI adoption. 

The more organisations optimise purely for efficiency, the greater the risk that they unintentionally reduce the cognitive depth they rely on to adapt, innovate and lead. Over time, this can create serious consequences: teams producing polished outputs without deep understanding; decisions built on plausible but generic or incorrect assumptions; growing overreliance on systems people no longer know how to challenge; and a widening gap between strong critical thinkers and everyone else.

Why human-centred AI matters 

This is why the future of AI cannot simply be technology-centred. It has to be human-centred. 

The goal of AI should never be to remove human thinking from work. It should be to elevate it. Human-centred AI means designing systems, workflows and adoption strategies that strengthen human judgment rather than replace it. It means treating AI outputs as drafts, not decisions; preserving human accountability in critical thinking; building capability alongside automation; and designing environments where curiosity, challenge and reasoning still matter.

The organisations that succeed with AI will not necessarily be the ones using the most AI. They will be the ones that remain most intentional about protecting and enhancing human capability as they adopt it.

The AI conversation is no longer about whether adoption will happen. The real question is this: are we using AI to scale intelligence, or are we slowly substituting it for our own?