Do Androids Dream of Dyslexic Sheep? Ethics and Neuronormative Injustice in AI

Philip K. Dick’s novel Do Androids Dream of Electric Sheep? has been asking the same question for over fifty years: what makes a person human? In the bleak vision of the future he imagined, the boundaries between human and machine blur: androids are almost indistinguishable from people, intelligent and capable, yet still deemed inferior because they supposedly lack empathy. In Dick’s world, humanity is defined not by intelligence or rationality, but by compassion. His message, therefore, remains prescient and pressing: when social norms determine who or what counts as valuable, the consequences can be deeply destructive.
That message feels more urgent than ever in an age where AI already shapes much of our daily lives. The boundaries between human and machine decision-making are no longer speculative science fiction – they’re here, now, and they influence how we learn, work, and interact.
Data-driven AI systems are now pervasive. AI-based products already permeate our private lives, and ideas for AI-powered education inspire policymakers and researchers alike. Some even suggest that teachers might become unnecessary, as AI-driven personalised learning could help students learn better, faster, and more easily in the future. That would indeed be a major change. However, we believe that AI systems can only support learning and teaching; they cannot replace teachers.
Nevertheless, our opinions, preferences and even our social interactions are already shaped by invisible algorithmic processes. AI sits like a ghost on our shoulders, whispering in our ears what our society already believes, without us even noticing. As Nietzsche warned, ‘If you gaze long into an abyss, the abyss also gazes into you.’ But when the abyss reflects our own biases – about gender, ethnicity or less visible differences such as neurodiversity – the dangers multiply. AI not only observes our biases, it amplifies them.
This is where our “EdTech in Action” talk comes in, presenting the results of our latest research on fairness and AI. The aim is to show that data-driven AI systems can reinforce precisely such prejudices, and that AI contains what we call neuronormative bias. In other words, machine learning models are not simply neutral tools – they often assume that neurotypical ways of thinking and behaving are the “standard,” while divergent cognitive styles are treated as deficient or even disordered. AI systems trained on historical employment data may learn to associate dyslexia with lower competence in high-status professions; automated recruitment platforms can filter out applicants whose speech patterns deviate from the norm; emotion-recognition software may interpret atypical facial expressions, body language – often found in autistic or ADHD individuals – or even atypical face types as negative; and educational platforms can reward only the “average” student profile, sidelining those whose learning strengths lie outside traditional rubrics. The result is a cycle of exclusion, reduced economic opportunity, and increased psychological stress.
Bias here is not a bug; it’s a function. By reproducing the prejudices already embedded in society, AI risks entrenching inequality in ways that are harder to detect and therefore harder to challenge.
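To make this mechanism concrete, the sketch below is a minimal, purely illustrative example in Python: the data are synthetic and the “dyslexia disclosure” feature is invented for illustration, not drawn from any real system. A simple screening model is trained on historical hiring decisions that penalised disclosed dyslexia, and it faithfully learns and reproduces that penalty for equally competent candidates.

```python
# Illustrative sketch only: synthetic data, hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicant features: a competence score and a flag for a
# disclosed dyslexia diagnosis.
competence = rng.normal(size=n)
disclosed_dyslexia = rng.integers(0, 2, size=n)

# Historical "hired" labels: competence matters, but past decisions also
# penalised disclosure. The prejudice lives in the labels themselves.
hired = (competence - 1.0 * disclosed_dyslexia + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([competence, disclosed_dyslexia])
model = LogisticRegression().fit(X, hired)

# For two candidates with identical competence, the learned model assigns a
# lower hiring probability to the one who disclosed dyslexia.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```

The point of the toy example is that the model is working exactly as designed: it has accurately learned the prejudice its training data contained.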
This technological bias compounds a problem that neurodivergent people already face. In schools, workplaces, and society, the default expectation is built around a “normal” learner or employee. Divergent communication styles, working patterns, or attention strategies are often misinterpreted as weaknesses. The consequences compound: fewer educational opportunities, reduced economic prospects, and greater psychological stress. Studies consistently show that professional status, income, and educational access are closely linked to mental health. Those with secure jobs and strong qualifications benefit from resilience and recognition, while those facing precarious employment or limited education are more vulnerable to anxiety and depression. For neurodivergent individuals, these risks are magnified by systems that not only fail to value their strengths, but actively devalue who they are.
Here, the parallels with Dick’s androids are clear. Just as androids are denied full humanity because they fail to conform to social norms, neurodivergent people are often denied recognition because their cognition doesn’t match neuronormative expectations. When these assumptions are encoded into algorithms, they not only reflect discrimination but automate and scale it.
If there is a way out of this cycle, it begins with education. Inclusive education is more than a pedagogical principle: it’s an ethical imperative. By designing learning environments that celebrate rather than suppress diverse strengths, we can break the links between difference and disadvantage, between observing and othering. Instead of collecting neurodiverse traits like statistics and data farming them like sheep, we have an opportunity to stop the dehumanisation of the neurodivergent – “the other” – and embrace their dignity and individuality.
A genuinely neuro-inclusive system values varied pathways to learning and achievement. This doesn’t just benefit neurodivergent students; it enriches the educational landscape for everyone. Importantly, inclusive education also shapes the future of technology itself. The data we collect on learning today will inform the AI systems of tomorrow. If those datasets reflect diversity, future AI models are more likely to recognise, rather than erase, human variation. Classroom inclusion ripples outwards, shaping the fairness of future algorithms.
Our ethical challenge is therefore twofold: prevent neuronormative injustice from being hard-coded into the systems that increasingly govern our lives; and reimagine education so that diversity becomes the foundation of innovation.
Empathy must guide this transformation. Just as Dick’s novel framed empathy as the defining characteristic of humanity, we too must embed it into how we design, train, apply and use AI. Machines cannot “feel,” but we can shape them to reflect and respect the diversity of human experience.
Perhaps one day, androids really will dream of dyslexics. And perhaps those dreams will not represent a defect, but a symbol of the creativity, resilience, and perspectives that neurodivergent thinking offers. Perhaps then, AI will not risk becoming a nightmare for the neurodivergent.
The future of AI is not only about technical sophistication, ever easier processes, and lighter workloads; it’s about whose stories we choose to recognise. By grounding technology in empathy and inclusion, we have the chance to build systems that expand the horizons of aspiration – for everyone.
Dr Martin Bloomfield, Beyond Inclusion, UK
Prof Dr Claudia Lemke, Berlin School of Economics and Law, Germany