Teaching People to Argue With Machines: Why AI Literacy Starts With Critical Thinking

Jan 3

Most organisations are racing to use AI. Far fewer are teaching people how to question it.


That gap matters — and it’s widening fast. Surveys now suggest 20–40% of workers are already using AI at work, with adoption climbing year on year. Yet most organisations still lack any structured approach to AI literacy or critical-thinking development.


The result? Speed without scrutiny.


Why critical thinking, why now

Analyses from Stanford University and others point to a growing concentration of AI power: a small cluster of companies controls the most capable frontier models, along with the compute, data, and talent behind them. At the same time, AI systems are increasingly shaping how people write, decide, analyse, and judge.


Other researchers warn of a quieter risk: what psychologists call cognitive offloading. Emerging studies show a negative correlation between heavy, uncritical AI use and critical-thinking performance, suggesting that when people outsource effortful reasoning to machines, reflective judgement can weaken over time.


In brands and workplaces, this shows up as thinking outsourcing: content that sounds polished but feels generic, decisions that look smart but lack context, and strategies that slowly erode trust and distinctiveness.


We are all “kids” again

Not children — but young minds.


AI is a powerful new cognitive tool, and whether you’re 8 or 58, most of us are still early learners. Curious. Experimental. And highly impressionable in the face of confident machine output.


That’s not a deficit; it’s an opportunity. Leaders can deliberately shape how these young “AI minds” grow — before habits of blind trust harden. Creating workplaces where everyone is allowed to be a beginner again brings humility, playfulness, and learning back into the system.


From using AI to arguing with it

Evidence from education and professional learning is increasingly consistent: AI is most valuable when it helps people generate questions, counter-arguments, and debate, not when it simply delivers final answers.


[Image: AI and Learning Principles]

Design principles for leaders and L&D

With AI adoption outpacing most policies and learning frameworks, these habits are becoming a capability moat, not a “nice to have”.


[Infographic: four habits for building AI capability]

A final thought

Organisations that teach people to argue with machines will preserve uniquely human strengths: discernment, ethics, and creativity. Others may quietly let those muscles weaken — without noticing until it’s too late.


What habits are you building — in yourself, your teams, and your systems — to keep human judgement strong in an AI-enabled workplace?



bottom of page