Big tech AI in education narratives aren’t neutral information sources – they’re persuasive texts designed to shape how we think.
Through carefully chosen words, packaged in slick websites, “research” reports and glossy marketing materials, companies like Google and Microsoft don’t just inform us about AI. They shape how we think about it.
Much of this happens beneath our conscious awareness, making it hard to notice, let alone question. And that matters. How we understand AI’s role in education will ultimately shape decisions in schools and influence policy.
Edtech marketing isn’t always easy to spot. It’s wrapped up in authoritative-looking research, school success stories, teacher training materials and blog posts fronted by thought leaders with impressive credentials.
Presented across multiple formats, these messages feel accessible, trustworthy and even benevolent.
But the variety and polish also mask the commercial imperative woven through these carefully crafted stories, lulling readers into assuming they’re encountering neutral, informative educational resources.
Huge profits
Neutrality is impossible when huge profits are at stake. The global AI in education market is set to grow from $9.7 billion in 2025 to $92.5 billion by 2030 (ResearchandMarkets.com 2025).
With that kind of money on the table, every carefully chosen phrase serves a purpose. The strategies embedded in these materials quietly work to normalise AI in education, gently shifting society’s thinking about its adoption and influencing decisions in schools and at policy level.
One of the most common tactics is to position AI as the solution to deep-rooted problems.
These narratives are attractive to busy school leaders because they reduce complex social and pedagogical challenges into neat technical problems with straightforward answers.
And if something sounds too easy, it probably is. But once we recognise the strategies used, we can begin asking better questions about where AI genuinely adds value.
So, what should we look out for?
First, students. You often have to work quite hard to find them in big tech’s education narratives. When they do appear, they’re rarely cast as creators or decision-makers, but rather as passively waiting for AI to adapt or improve their learning.
AI, meanwhile, is positioned as the agent of progress, subtly replacing student agency with the lure of algorithmic optimisation.
If we absorb stories where students are acted upon by AI, we risk designing a future where young people have less ownership of their learning.
Teachers fare only slightly better. They are typically positioned as beneficiaries of AI rather than professionals wielding it.
Compare: “AI helps teachers differentiate resources” (a classic edtech line) with “Teachers use AI to differentiate resources.” A quick skim and they read the same.
But read them slowly: who is driving the action? A tiny grammatical shift makes AI the capable agent and teachers the assisted.
Another powerful linguistic move is to present AI adoption as something that simply happens.
Metaphors like “education must evolve” position schools as living organisms reacting to inevitable environmental change.
Phrases such as “as AI transforms education” frame technology as a force of nature: external, unstoppable, unquestionable. Swap “AI” for a company name – “as Google transforms education” – and the powerful force behind the framing becomes glaringly obvious.
When we accept AI as “the future,” we stop asking whether it’s the future we actually want.
Educators are not powerless
But educators are not powerless. Developing awareness is a practical first step. When reading AI-related materials…
Look for who has agency. Who is doing the learning and making the decisions – the student, the teacher, or the technology?
Watch for metaphors of inevitability. If change is presented as natural or unstoppable, ask who benefits from that framing.
Examine the problem definition. Has a complex educational issue been reframed as a technical problem only a product can solve?
Notice what’s missing. Where are the students? Where is teacher judgment? Where is the pedagogy?
AI will play a role in education’s future, but that future should not be written solely by the companies that stand to profit from it.
By examining the language used to sell AI, educators can reclaim the narrative and ensure our decisions serve learners, teachers and wider communities rather than commercial agendas.
Your thoughts