Opinion: Edtech

AI bias poses serious risks for learners with SEND

The promise of personalised learning masks very real risks that demand careful attention from educators and policymakers

1 Aug 2025, 5:00

For learners with special educational needs and disabilities (SEND), AI offers both promise and peril. While adaptive technologies can be tailored to individual needs, the risk of bias raises significant ethical and practical concerns.

Bias in AI refers to systematic and unfair discrimination that arises from the design, training, or deployment of AI systems.

In the context of SEND, such biases may result in technologies that misunderstand, overlook or pathologise neurodivergent and disabled learners.

Let’s take each form of bias in turn.

Data bias

AI models are only as good as the data on which they are trained. Most educational AI systems rely on large datasets gathered from learners in mainstream school settings. These datasets often fail to adequately include children with complex, atypical or multiple needs.

As a result, learners with SEND may be statistically invisible. For example, speech recognition technologies used in language learning apps or classroom settings often perform poorly for individuals with speech impairments or atypical prosody.

The absence of diverse speech patterns in training data leads to misrecognition or system failure, which in turn may be wrongly interpreted as learner error. Such “performance gaps” risk reinforcing deficit narratives around SEND, further marginalising these students.
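
Such gaps are measurable. Below is a minimal, invented sketch in Python of how an evaluation team might surface one: it compares mean word error rate (WER) across two speaker groups. The transcripts and group labels are hypothetical stand-ins, not data from any real system.

```python
# Illustrative sketch: measuring a speech-recognition "performance gap"
# by comparing word error rate (WER) across speaker groups. All data
# below is invented for illustration; the group labels are hypothetical.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic programme over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# (reference transcript, system output) pairs, grouped by speaker profile.
samples = {
    "typical_speech": [
        ("the cat sat on the mat", "the cat sat on the mat"),
        ("i like reading books", "i like reading books"),
    ],
    "atypical_speech": [
        ("the cat sat on the mat", "the bat sat on a map"),
        ("i like reading books", "i bike read in books"),
    ],
}

for group, pairs in samples.items():
    mean_wer = sum(word_error_rate(r, h) for r, h in pairs) / len(pairs)
    print(f"{group}: mean WER = {mean_wer:.2f}")
# A large gap between the group means is exactly the kind of disparity
# that should trigger further investigation before deployment.
```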

Algorithmic bias

Even when learners with SEND are included in datasets, algorithms frequently optimise for majority behaviours or normative developmental pathways. In adaptive learning platforms, this can mean that neurodivergent behaviours are interpreted as anomalies or errors, penalising learners for non-linear or non-standard progression.

In addition, predictive analytics used to identify at-risk learners or allocate resources can be especially problematic. If historical data reflects systemic under-provision for disabled students, AI tools may perpetuate or even exacerbate these inequalities.

For example, an AI model predicting poor academic outcomes for students with education, health and care plans (EHCPs) might lead to pre-emptive withdrawal of challenging curriculum content, thus limiting opportunity rather than enhancing support.
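
A toy illustration of how this happens is below. The "model" is a bare score threshold and the records are invented, but because historical under-provision has depressed the EHCP group's scores, the tool flags that group at three times the rate of its peers without ever seeing the EHCP label.

```python
# Illustrative sketch: checking whether an "at risk" prediction tool
# flags learners with EHCPs at a disproportionate rate. The records and
# the predict rule are invented stand-ins for a real model and dataset.

records = [
    # (has_ehcp, prior_attainment_score)
    (True, 42), (True, 55), (True, 61), (True, 48),
    (False, 58), (False, 70), (False, 66), (False, 74),
]

def predict_at_risk(has_ehcp: bool, score: int) -> bool:
    # Stand-in for a trained model. It never uses the EHCP label, yet
    # because historical under-provision has depressed scores for that
    # group, a score threshold alone reproduces the historical pattern.
    return score < 60

def flag_rate(group) -> float:
    flags = [predict_at_risk(e, s) for e, s in group]
    return sum(flags) / len(flags)

ehcp = [r for r in records if r[0]]
non_ehcp = [r for r in records if not r[0]]

rate_ehcp, rate_other = flag_rate(ehcp), flag_rate(non_ehcp)
print(f"flag rate (EHCP):     {rate_ehcp:.2f}")
print(f"flag rate (non-EHCP): {rate_other:.2f}")
if rate_other:
    # A ratio far from 1.0 warns that the tool may entrench historical
    # under-provision rather than direct extra support to it.
    print(f"disparity ratio: {rate_ehcp / rate_other:.2f}")
```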

Design bias

As with other technologies, design choices in AI often reflect ‘mainstream’ educational assumptions about what counts as success, engagement or normality.

As others have argued, edtech frequently promotes individualised, efficiency-oriented models of learning that may be at odds with the relational, holistic, and strengths-based approaches required by many learners with SEND.

This issue is compounded by a lack of disabled representation in the development of AI tools. Very few edtech companies co-produce tools with learners with SEND or involve inclusive educators in system design.

As a result, assistive technologies may be “retrofits” rather than fundamentally inclusive, and may inadvertently exclude users who do not conform to predefined interaction models.

Ethical and social risks

The use of potentially biased AI systems with learners with SEND raises several ethical concerns. These point to a need for explicit transparency, accountability and participatory design.

Misdiagnosis and misclassification

AI-driven screening tools might falsely flag behaviours as signs of difficulty or ignore genuine need due to rigid classification models.

Surveillance and autonomy

Behaviour-monitoring systems, often used with neurodivergent learners, can lead to heightened surveillance, with negative impacts on autonomy and dignity.

Label reinforcement

Algorithmic feedback may reinforce negative labels (e.g., ‘low ability’), affecting teacher expectations and learner self-concept.

Digital exclusion

Over-reliance on AI may exclude learners with sensory, cognitive or physical access barriers if the principles of universal design are not embedded.

Opportunities for inclusive AI

Despite these concerns, AI has the potential to support inclusive practice if designed, developed and deployed in partnership with the communities it is intended to serve. Promising examples include:

  • Customisable interfaces that allow learners to adjust speed, language, font, colour, and layout to match their access needs.
  • Emotion recognition systems that can support learners in recognising and understanding social-emotional antecedents and responses.
  • Natural language processing tools that provide text-to-speech, grammar correction and simplified summaries for learners with a range of support needs.

Policy implications

To ensure AI supports inclusive education, several actions are needed at the policy and institutional level:

Inclusive data policies

Mandate the collection and ethical use of diverse, anonymised data that accurately reflects the needs of all learners.

Bias audits and regulation

Require developers to conduct impact assessments and bias audits before deploying AI in educational settings.
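
What such an audit might look like in its simplest form is sketched below. The per-group accuracy figures and the tolerance are invented for illustration; a real audit would cover many more metrics and subgroups.

```python
# Illustrative sketch: a pre-deployment bias audit as a simple gate.
# Metric values per subgroup would come from held-out evaluation data;
# the numbers and the 0.10 tolerance here are invented for illustration.

def audit(metric_by_group: dict[str, float], tolerance: float = 0.10) -> bool:
    """Pass only if the spread between the best- and worst-served
    groups stays within the stated tolerance."""
    gap = max(metric_by_group.values()) - min(metric_by_group.values())
    for group, value in sorted(metric_by_group.items()):
        print(f"  {group}: {value:.2f}")
    print(f"  worst-case gap: {gap:.2f} (tolerance {tolerance:.2f})")
    return gap <= tolerance

# Example: task-completion accuracy measured separately for each group.
accuracy = {
    "no identified SEND": 0.91,
    "speech and language needs": 0.74,
    "physical access needs": 0.88,
}

if not audit(accuracy):
    print("Audit failed: do not deploy; investigate and retrain.")
```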

Training for educators

Build teacher confidence in interpreting AI outputs critically and in advocating for equitable use with SEND students.

Funding for co-design

Invest in participatory research and development with learners with SEND, families and educators.

Ethical standards

Establish cross-sector frameworks to guide the ethical use of AI in inclusive education contexts.

Barrier or bridge?

AI in education is neither inherently inclusive nor exclusive; it reflects the values, assumptions and decisions of its creators and implementers.

For learners with SEND, biased AI systems can perpetuate disadvantage, misinterpret behaviour and entrench exclusion.

Yet with intentional design, inclusive participation and robust governance, AI can also become a powerful tool to personalise learning, support access and promote equity.

In navigating this landscape, educators, developers and policymakers must resist the temptation to see AI as neutral or infallible.

Instead, they must foreground the rights, voices, and lived experiences of learners to ensure that technology serves as a bridge to inclusion rather than a barrier.
