In my previous post, I explained why writing will always be a valuable life skill, regardless of what happens in the economy.
But of course, education is not just about developing life skills; it is also about preparing students for the world of work. AI is clearly having a big impact on the economy, and in particular on the type of writing that is done at work. So what should educators do?
The apparently obvious option is to get students to use AI a lot in the classroom, as preparation for what they can expect in the workplace.
Here I want to explain why that isn’t a good idea, and why using AI in the classroom will paradoxically leave students less able to thrive in jobs that use AI.
Using technology in the workplace is great
I am not anti-AI in the workplace: far from it. Here at No More Marking, we are using Large Language Models in powerful ways to improve formative and summative assessment.
More broadly, I think AI / LLMs are going to be used more and more in professional jobs to speed up certain tasks. That is a good thing, because in the workplace, the aim is to get a task done as efficiently as possible, not to provide a learning experience for the worker. If a car mechanic or an accountant or a lawyer can use a new piece of technology to fix your car or get your tax return done or do the conveyancing on your mortgage, everyone benefits.
Economically valuable skills are complex skills
However, it’s also true that very simple skills don’t command much economic value. If a new technology makes a task very easy and efficient, it can eliminate any economic value in performing that task for others. Suppose LLMs get so good that anyone can ask them about a legal issue and get a perfect answer. In that case, people are not going to pay someone a large hourly rate to ask the question for them! They’ll cut out the middleman and ask it themselves.
So a student who is typing an essay question into an LLM, handing the output to their teacher and justifying it on the grounds that “this is what professionals do in their day job” is not making a good argument.
If that really is all the professional is doing, their job is not going to be around for long.
Are LLMs really so good that they will make most current professional jobs obsolete?
In my previous post, I gave an example of a job that has been made obsolete by technology – ancient Greek marathon runners. No-one now employs runners to deliver messages. They send text messages or hire cars instead.
However, over the shorter term, it’s rare for jobs to become completely obsolete like this. According to one analysis, out of 270 occupations listed in the 1950 US census, only one has become obsolete due to technology: elevator operator. In many other jobs, technology has reduced but not eliminated the demand for humans, or it has changed the type of work humans do and the particular tasks they focus on.
At the end of this post I will consider the extreme scenario in which AI obliterates professional work, but in the short term I don’t think this is likely (partly because LLM hallucinations remain a real and persistent problem). My predictions are as follows.
- AI will be used widely in office jobs.
- It will need human oversight: the human in the loop.
- It might take over some tasks, meaning humans will focus on other tasks.
- It might reduce the demand for humans to do certain jobs, particularly entry-level jobs.
Some of this human oversight might look basic: the professional skims the output of the LLM and makes a couple of edits. But those edits might be incredibly important, adding value that consumers are not able to provide for themselves and will be happy to pay for.
There’s a famous story about an expert plumber who turns up at a house with a huge plumbing problem. He takes a glance at the tangled network of pipes under the sink, turns a couple of washers, and solves the problem. The point of the story is that the plumber took a minute to solve the problem, but he was deploying expertise that took years, maybe decades, to develop.
Similarly, the lawyer who skims an LLM’s output and makes a series of quick but vital changes is relying on skills that took years to develop.
Should we just ask the trainee plumber to spend all their time glancing at a tangle of pipes and pointing at a couple of washers? Should we get the trainee lawyer to spend all their time scanning LLM outputs and making a couple of changes?
No.
Training for work is not the same as work
It is a well-established principle of cognitive science that you can’t develop complex skills in one go. You have to break the skills down into smaller chunks which often don’t look like the end skill you are aiming at.
Here are three really important papers which explain this point from different angles.
Epistemology is not pedagogy. Paul Kirschner has written a lot about how experts think in qualitatively different ways to learners. In this 2009 paper, he shows that what you do when you are an expert is very different from what you do when you are a learner. “The epistemology of practicing in a domain is not a good pedagogy for learning that domain” because “learners or novices are not miniature professionals or experts.”
If you want to learn something, deliberate practice is better than work. K. Anders Ericsson is the researcher who developed the idea of “deliberate practice”. In this 1993 paper, he draws a distinction between work and training, and shows that the activities you do in a paid job are not optimised for learning. If you want to get better at a job, you often need to do different activities to those you would do in the job.
You can’t ask technology to learn for you. Barbara Oakley & Terrence Sejnowski have created one of the best online guides to learning how to learn. Just last month, they co-authored a paper on the problems with “cognitive offloading”, which is when we get technology to do a task for us so we don’t have to think about it. “Over-reliance on external memory can leave one with a collection of correct outputs (answers obtained from tools) but without the integrated understanding or procedural fluency that marks true expertise.”
In my own writings, the analogy I like to use is another one involving the marathon! A marathon is a complex skill. To acquire the skill, you don’t start out by running marathons in every training session. You do some things that don’t involve running at all – gym sessions, yoga, cross training. Even when you are running, you aren’t running entire marathons. The aim of training is not to replicate the end goal, but to develop yourself in ways that allow you to achieve the end goal.
Similarly, the skills you are going to need to work with AI aren’t going to be developed by using AI. In fact, as Oakley et al. argue, they will probably be stunted by excessive use of AI in training.
Applying these principles to the teaching of writing suggests that we should build up the fundamentals of sentence structure and vocabulary, and assess writing in the absence of AI. A student who has learned to read and write in this way will be much better placed to work effectively with AI in the workplace. They will be able to ask sensible questions of the AI, critique and edit its responses, and make sure its tone is right for the context.
What if the LLMs do take over?
Of course, all of the above holds only if human expertise remains economically valuable. What if it doesn’t? What if LLMs get so good that they do obliterate all or most professional jobs? This is a hard scenario to think about: probably as hard as it was for an ancient Greek to imagine a world where marathon runners aren’t needed because of cars and the internet…
But we can give it a go. If it does happen, here are two poles of possible outcomes: a utopian one and a dystopian one.
AI ushers in a world of abundance, prosperity and peace. Everyone spends more time in education developing their unique human skills. Education is about human development, not economic development, and many people choose to learn to write because it’s a form of human development.
AI ushers in a world of personal and national conflict over the ownership of the means of AI production and the distribution of its rewards. Education is about preparation for physical and mental combat in these conflicts. People learn to read and write for a variety of strategic reasons. Members of the military are instructed in pre-technological skills as an insurance policy against catastrophic technological failure, just as the US Navy has reintroduced celestial navigation training after dropping it for a decade.
So whatever the potential future, I don’t think writing, or literacy in general, is going to become obsolete.
However, I do think there is a good chance that literacy could become less widespread if the labour market rewards only more advanced literacy skills and demand for more basic literacy skills declines. I will explore what this means for education in my next post.