I’ll start off by saying that I am not a fan of AI – mainly because most of the ‘fan-bois’ are so non-critical and sometimes obnoxious – and I am certainly not a fan of private companies exerting power over people by controlling public spaces (looking at you, Google and Twitter in particular). However, ChatGPT, and its underlying LLM (large language model) GPT-3, are getting a lot of press, so I thought I should dip my toe in that particular sea. I did make a prescient suggestion in 2017 about AI and assessment.

What is ChatGPT?

ChatGPT, from the outside, looks like Alexa on steroids. Ask it any question, including a style of response, and in seconds it will respond with very articulate and seemingly insightful answers. How does it work, you may be thinking? Well let me briefly and inexpertly explain.

OpenAI, the creators of ChatGPT, have scraped the web for millions and millions of ‘data points’ (read ‘stuff that someone else made’). Given the input prompt, it then calculates what the most probable output would be, working word-by-word to generate complete sentences on a subject. Think of it like the next-word suggestion tool on your phone, but supercharged. This ‘ChatGPT hype’ piece by Jorn Bunk is an excellent summary. TL;DR: it is an impressive bullshit generator and therefore can’t be factually trusted. I would also check out the first half of Ted Chiang’s New Yorker piece ‘ChatGPT is a blurry JPEG of the web’. If you want to see the bullshit generation at work, I’m guessing you can just read any LinkedIn ‘thought leader’ long-read post since the beginning of 2023.
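The word-by-word probability idea can be sketched with a toy bigram model. To be clear, this is a deliberately crude illustration of ‘pick the most probable next word’ – GPT-3 actually uses a large transformer neural network over tokens, not raw word-pair counts, and the corpus here is made up:

```python
import random
from collections import Counter, defaultdict

# A tiny stand-in for the scraped web
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog chased the cat"
).split()

# Count how often each word follows each other word (bigrams)
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def most_probable_next(word):
    """Return the most frequently observed word after `word`."""
    return followers[word].most_common(1)[0][0]

def generate(start, length=6):
    """Greedily extend a sentence one word at a time."""
    words = [start]
    for _ in range(length - 1):
        words.append(most_probable_next(words[-1]))
    return " ".join(words)

print(generate("the"))
```

The output is always fluent-looking and always built purely from what the corpus made statistically likely – which is also why, scaled up massively, the result can be articulate and confidently wrong at the same time.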

My experience of using ChatGPT

I had to do some work on a particular subject in order to generate a resource to share. Although I have knowledge and experience in the area, I certainly wasn’t an expert and hadn’t created anything in a similar format previously.

Out of curiosity, I decided to ask ChatGPT what it would consider an appropriate answer. And to my surprise, it was appropriate. I even tested the answer on an unwitting colleague, who confirmed that it sounded “pretty much right”.

Once I had the initial responses, I started interrogating the system with further in-depth questions to elicit more detailed and nuanced responses. I gathered these all up and edited the text into something coherent.

I also used other sources of information, such as primary research and secondary resources, to validate the ChatGPT output. If I wasn’t using any form of AI support, I would still try to triangulate my ideas and suggestions, so I employed the same practices.

After a lot of cutting, pasting, rewording, and critiquing, I ended up with something that I was happy with.

As a test, I ran both the original, combined, AI-produced draft and the final piece through OpenAI’s AI Text Classifier. It shifted from ‘unclear’ to ‘very unlikely’ to be AI-generated. This felt quite vindicating, as I had put a lot of human work around it.

Reflections about the outputs

Looking initially at the outputs, you need to be aware of the limitations of such a system to best use it. It is really important to remember that ChatGPT is widely regarded as a Tory MP-level bullshit generator, asserting complete falsehoods with confidence.

Firstly, in order to ask the right questions of the machine you need sufficient information literacy skills. As you are able to engage the system in active review and reformulation of answers, you need to be able to refine and adapt your questions to suit.

Secondly, you need domain knowledge to validate if the answers you receive are reliable and accurate. They say you can’t bullshit a bullshitter, but I think ChatGPT is the expert at doing exactly that.

Without these two aspects, I can’t see how you can produce anything worthwhile. You’ll produce something, but won’t know if it does what you need it to do.

Reflecting about the process

It’s probably worth mentioning here that I have scored very highly on the ADHD screening tests (so if you read this it means I published it, yay, I’m winning). Ironically, I have failed to organise going for a proper diagnosis, so it isn’t formally confirmed that I have ADHD. Read into that as you will, but please use that as a caveat for all that follows.

I was astounded by what I found. It wasn’t the tech that blew me away.

I don’t like writing, but I have been told that I’m quite good at it and can adapt my writing to my audience (unless you don’t like this).

My usual writing process

When I was studying for my degrees, my usual process was the following:

  1. Initial source capture, skim and sort
  2. Go through relevant sources and copy any phrase that seemed vaguely relevant (and gather references)
  3. Build my essay plan based around a typical structure
  4. Move quotes around and fill in gaps
  5. Paraphrase quotes and make sure it all makes sense
  6. Tweak ad infinitum

I think I hated writing because the process of writing was concurrent with the process of analysing and understanding my sources. I was asking my brain to perform multiple operations at the same time. This caused a lot of cognitive stress. As I got closer to the deadline, my brain was able to flip to ‘write mode’ and I would get the words out. Most of the time they were okay, sometimes actually quite good, sometimes exactly what they were: rushed and lazy.

Benefits of using ChatGPT

Outsourcing a lot of the initial thinking to AI allowed my brain to shift to ‘critique mode’. In this mode, I am taking the text and combining it with my domain knowledge, but I am editing rather than composing. Much easier. Bad news though: you still need to have done the reading or have the experience, sorry.

However, by offloading the initial draft to another service, it meant my brain wasn’t multi-tasking, and the process was faster and the output was (hopefully) better. ChatGPT helped me understand my difficulties with writing and has given me ways of approaching it in the future, with or without their service.

Ethical and philosophical considerations

There has been a lot written about how the LLM for ChatGPT was built, especially criticising their extractive use of global south labour to sanitise their scraped data, and disregard for copyright when gathering that data. I will leave it to better minds than mine to tear down that hell-hole.

Is the work I produce ‘mine’ if I use a tool, like ChatGPT, to generate an initial draft? I am sure there are plenty of people who would say definitely not. However, the skills and expertise I have had to use throughout the process have made it mine.

By way of example, imagine I was making a piece of furniture. Now, I could make every joint, carve every little detail, and do everything by hand. Or I could design everything on a computer, using standard templates for the joints, and then get a CNC machine to cut my materials to my designs. I then take the cut pieces, put them all together, and add all the finishing touches. In both instances, my skills have been used to create the piece of furniture, but possibly different skills used in different ways. I would say both are mine. You could argue one is artistry and one is manufacture, but it doesn’t change ownership. My knowledge, my skills, my effort. It’s my ‘product’.


While ChatGPT may have some uses (and misuses), I am aware of the real harm caused by its exploitative practices. I am also fully aware that my calling out megacorps’ potential for evil is hypocritical – I use Google, Amazon and Facebook on a daily basis – but it shouldn’t be all or nothing. We can take steps to question things while we work to get our own house in order; you don’t need perfection first. Rather than chasing use cases, good and bad, we should be questioning the ethical practices of these private, exploitative companies that have such influence over society.

However, ChatGPT is great for restructuring what is already known, and pretty good at synthesising multiple ideas into one. It is also a useful tool for handing off ‘writing’ while you are thinking deeply about a subject. I can also see great benefits for non-neurotypicals. Will I use it again? Yes, I probably will, but with a bad taste in my mouth. I will definitely use what using ChatGPT helped me learn about myself to rework my process.

And yes, this blog was written (initially) by ChatGPT. I’d claim it was meta but I fear Facebook’s lawyers.

Addendum (24/2/23)

After writing this piece, I have been thinking. I concluded I probably would use ChatGPT and similar again. However, on reflection, I think it is slightly more nuanced than that.

What I found useful was the dissociation of thinking from writing. So, I think the use of ChatGPT is one way to enable that, but there are many others. What I need to do is to reflect on, and adapt, my writing processes with that revelation in mind. Using ChatGPT actually isn’t necessary.

The more I learn about it, the more problematic I find it. The fact OpenAI, the company behind ChatGPT, are financed by Peter Thiel, Elon Musk and other misanthropists doesn’t sit well with me.
