Defining boundaries

In 1997, Stephen Jay Gould, a paleontologist and evolutionary biologist, proposed a solution for a problem that had caused turmoil, disruption, bloodshed and even death throughout the ages: how to reconcile the conflict between science and religion.

Gould defined science and religion as magisteria, or “domains where one form of teaching holds the appropriate tools for meaningful discourse and resolution” (Gould, 1999). In effect, science describes the natural world, and religion defines the moral world, and thus the two are never to meet. Gould coined the term ‘non-overlapping magisteria’, shortened to NOMA.

Richard Dawkins, in his 2006 book The God Delusion, pretty efficiently picks holes in Gould’s idea, as did many others, such as Paul Kurtz and Ursula Goodenough: holes both literal and metaphorical, puncturing the divide between the magisteria. As a scientist and humanist, I must nail my colours to the mast, yet I can see many ways in which both realms could lay claim to the same ideas.

It’s a tough one, but I am inclined to think that NOMA just didn’t quite work out. Scientific experiment sometimes requires belief, in the same way that religion looks for evidence. Science should be concerned (and could bloody well do a better job of being concerned) with the ethical and moral implications of its activities, and religion attempts to explain natural phenomena (IMHO wrongly).

In the last four paragraphs I have introduced the idea of NOMA, and then proceeded to claim it has been largely debunked. So, where am I going?

I don’t think I could ever claim to solve the thorny issue of the relationship between religion and science, but I do think that, in different contexts, the idea of non-overlapping magisteria could have some traction.

Let me explain.


Over the last year I have become increasingly frustrated with Twitter. While gaslighters, TERFs, Nazis and trolls try to control the narrative, I try to close my eyes and pretend it isn’t happening. However, one thing I have opened myself up to is edtech companies selling me their ‘innovative’ services.

My goodness they are annoying. [You will all have many, many examples, so feel free to link to them in the comments below – where I will be harvesting your personal data]

It got me thinking. My job is to help academics use technology to better support their teaching and their students’ learning. I use technology every day, and I advocate its use. I am paid to do that. However, I am becoming more and more uneasy with this status quo.

I think I can just about manage to frame technology as a tool to enable educators to improve the student experience. Just. But it is just a tool – when I put on my educator hat (pah, it never comes off), technology is just one of the ways I will try to engage my students.

However, in my dealings with edtech businesses, I am finding myself increasingly turned off by their approaches to education. They have adapted their slick sales machines to use the language of education; to infect education with their own phraseology.

Hands up if you’ve heard talk of synergies, solutions, paradigm-shift, next generation, bespoke, and innovation, innovation, f-ing innovation.

I propose that Education and Technology (specifically edtech) are non-overlapping magisteria. The divide should not be porous.



Dawkins, R. (2006). The God Delusion. London: Bantam Press.

Gould, S. J. (1999). Rocks of Ages: Science and Religion in the Fullness of Life. New York: Ballantine Books.

My computer told me to say this…

Lots of people are going on about how AI will change how we do assessment and feedback. Or how AI can replace teachers. Or how AI can provide truly personalised learning opportunities.

But what if it is the wrong way round?

Instead of AI replacing the education system, maybe it should be the product of it?

Three years of a degree, and during that time students have to develop an AI so that their tutors can’t tell whether assessments are self- or computer-generated. If they can fool their tutor, they pass.

Submit two versions of each assignment; if the AI’s version gets the higher mark, you graduate!

No-one is talking about that, are they?

