Navigating the AI revolution in education: what do schools need to consider?

The text below is a transcript of the video, above. I hope you find it useful ☺️

Today, we’re looking at how teaching machines to learn could have an enormous impact on how we humans teach and learn. Let’s take a closer look at the role of Artificial Intelligence (AI) in education. Is AI the future of education? Or is it another teaching fad? Let’s find out.

As the great Douglas Adams once wrote about technology adoption:

“Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

And so, there are indeed plenty who, as Adams put it, are making a career in it. The past few months have seen an explosion in the number of posts, videos, TED talks and all sorts of other content on the pros and cons of using AI in schools, as well as on the tools and strategies that teachers and students can use to adopt AI effectively, in a manner that supports both teaching on the one hand and learning on the other. And that is great. We do need that.

But there are also those who view the adoption of AI, often of any technology, with more scepticism. And that is quite healthy too. Mustafa Suleyman, the co-founder of DeepMind and, more recently, of Inflection AI, the creators of the really quite amazing chatbot Pi, says that “if you don’t start from a position of fear, you’re probably not paying attention”. Over time, and with experience, I’ve learnt to listen to pessimists, as they keep us grounded; after all, they are the ones who insist on putting seatbelts in cars.

At a practical level, it is reasonable to be concerned with the what of AI (here’s what you can do with AI, have you seen this new fancy AI tool?). On the one hand, AI does promise to make learning dynamic, interactive and personalised (think Duolingo), but on the other hand some fear it could be used to replace teachers, and many worry about cheating and about the reliability of the content that language models such as ChatGPT produce.

Others might be concerned with the how of AI (here’s how you can use AI to lighten your workload, here’s how AI can help you analyse data…). It is important to recognise that AI can help teachers assess student performance and, in so doing, help with, for example, the planning of lessons or the giving of feedback (if you’re interested in finding out more about how to give better feedback, watch my video on making feedback effective).

But for me the most important, and perhaps first, question we should ask about the adoption of AI is not the what or the how, it’s the why. Why would we want to use AI in education? After all, we have managed pretty well thus far without it.

For me the principal reason is knowledge. Professor Daniel Willingham says “you can only think deeply and critically about what you know well”. In other words: deep and critical thinking is based on deep knowledge.

If we want students to become skilful and knowledgeable users and developers of technology; if we want them to think critically and creatively about the advantages and disadvantages of using AI and technology more generally, it behoves us then to teach them not just the foundational knowledge related to AI (what it is, how it works), but also the ethical, economic, and societal considerations and implications.

And the urgency is there because AI is already woven into most parts of our lives. When we use voice assistants, when we shop online, when we stream films, when we use social media, when we plan car journeys… From this perspective, we should teach our students about AI not because it prepares them for the future, but because it prepares them for the present.

However, we would be wise to proceed with caution, not because AI might take over the world and eliminate all humans, that’s not on the cards – well, not yet anyway – but because we’ve been here before.

What do I mean by that? Well, I’m old enough to remember when social media was going to bring communities together, and when making the sum of all human knowledge available online to everyone was going to democratise access to information and make us all cleverer and better informed. I’m not sure that worked out quite how the optimists, myself included, had hoped.

Sure, there have been numerous advantages – for example enhanced interactivity and collaboration, and the ease of access to information is real – but in hindsight it might have been better to ensure the social web, as it became, was better regulated, so as to mitigate some of the disadvantages: disinformation, access to harmful content, issues with personal data…

There clearly needs to be a balance between regulation and innovation; let’s take that as a given. I’m just not sure we ever struck that balance with social media companies, and, to a not insignificant extent, we are living with the consequences of that today. As Professor Rose Luckin puts it, “we must ensure that AI serves us, not the other way round. This will mean confronting the profit-driven imperatives of big tech companies.”

Better, more effective regulation, then, seems to me to be part of the solution to the problems caused by indiscriminate, unthinking and, I suppose, unintelligent use of AI, consequences be damned. Yes, AI could potentially help us find a cure for cancer or solve our energy issues, but it could also conceivably help unscrupulous interests develop bioweapons or start the next war.

As educators we should be interested in the advantages that using AI could bring to teaching and learning, while remaining mindful of the potential disadvantages. AI is already being used and abused in ways that many of us would not have foreseen just a few months ago.

Let’s recap briefly how a school’s digital strategy should adapt to the routine use of AI:

  • A strategy document or policy should define the scope. WHAT is the definition of AI and WHAT does it cover? The strategy or policy should identify WHAT is good and WHAT is bad about AI.
  • It should tell us HOW AI can be used but also HOW it can be abused. HOW are we encouraging appropriate use, and HOW do we propose to respond when it is misused?
  • Most importantly, it should make clear the reasons WHY we are adopting AI. The reason WHY we teach about AI is not because it does clever or gimmicky things. The reason WHY is because we recognise its value as an important, and maybe even necessary, component of a holistic education.

If I were foolish enough to make a prediction, I’d probably suggest that artificial intelligence will be even more embedded into our lives than it is now, in both good and bad ways. In education I can see AI helping pupils routinely to navigate through personalised curricula, probably still in support of old-fashioned timetabled lessons, while assisting still very human teachers in curating and selecting resources, as well as in marking and assessing. In this future I imagine, classrooms remain reassuringly familiar, where technological possibilities don’t get in the way of educational necessities.

And it could be that in that future, just like in the present, the wholesale application of technology to every aspect of teaching and learning though theoretically possible, just wouldn’t be preferable.

But who am I to make predictions? As physicist Niels Bohr is reputed to have said, “prediction is very difficult, especially about the future”.

Your feedback and comments are very welcome.