Artificial Intelligence in Teaching: Nicola Bates

On Saturday 1st July over 1,000 alumni and current participants of the Teach First programme gathered at the Totteridge Academy in North London. The reason? Teach First was celebrating 20 years of tackling educational inequality. Initially set up with a cohort of around 150, myself being one of the 'guinea pig' year group back in 2003, it has grown to over ten times this annual number.

Over this time technological developments have come thick and fast, with ChatGPT being just the latest invention to grab the public imagination. The initial discussions I've had at university and in my school governor role focused on how students can use ChatGPT to cheat on assignments. These discussions evoked the situation 20 years ago when Wikipedia pages were copied verbatim by pupils to circumvent actually doing research homework. So, like most things, it is a problem not completely without precedent, but one that brings some additional challenges.

An evolution of the 'cheating student' discussion followed: how ChatGPT can be used to aid teachers, and more generally how AI technology can be used in the education sector. This combination of technology in schools and education has been badged as "EdTech" (Educational Technology) within the UK. The term encompasses areas such as supporting children using iPads and laptops, aiding teachers in planning lessons, or running a school's communications system.

Again, most of this EdTech is not new; technology has been in schools for a while now. Twenty years ago, for example, I was presented with a brand new interactive whiteboard when I began my teaching journey. What seems to be new is the framing: this wave of technology is powered by so-called AI, and there is a strong drive to use it.

At a chairs of governors meeting, for example, it was demonstrated how AI technology could near-instantly create a lesson plan for any subject entered, tailored to the number of lessons and topic length. This produced what looked like the ideal package: segmented work topics, links to other resources on the internet, and suggested differentiated work and homework options. With schools competing for teachers, this was put across as a way to reduce teacher burden in order to enhance time spent in the classroom. But who is to say that this is correct? Do we check each scheme of work produced, or rely on the technology to get it right? And who is to say this is more time-efficient and helpful for the teacher, who still needs to know the content in order to deliver it?

There are many reasons given for using these technologies. The impact of the pandemic, for example, has widened the attainment gap between pupils from lower and higher socio-economic backgrounds. Use of AI technologies could have a positive impact here by providing extra teaching opportunities.

I am, however, always wary of people placing blind confidence in technology that, usually, they don't understand, over a human being. How can we have stage gates in place to ensure common sense is not abandoned, or silenced by various targets, if technology is found to be damaging in an educational setting? Little experimentation or research has yet been conducted, for example, on the impacts the use of AI systems would have.

There are also fears from teachers and unions about where the line gets drawn in terms of technological integration. For example, if the AI is doing all the complex tasks of teaching and setting curricula, does that leave the human level of input as just a playground monitor and classroom disciplinarian? A move from teacher to monitor would de-skill the teaching population, leaving the humans with tasks that are less satisfying.

These issues are being discussed more widely than just in the UK. A Guardian article about the AI for Good Global Summit in July stated that education would be the biggest beneficiary of AI over the next few years, with Stuart Russell predicting that AI is likely to spell the end of the traditional school classroom.

Countries with less developed education systems, for example, could access personalised ChatGPT-style tutors, which could reach a wider audience, enrich education and widen global access. This would allow the delivery of a reasonable level of education to every child in the world. This personalised format could offer tailored support, differentiating based on ability and giving pupils the chance to lead their learning through their curiosity. Lots of amazing possibilities.

However, let's also look at the risks. The packages used in these personalised ChatGPT-style devices would be heavily fought over internationally, with China and the West, for example, competing for the markets in order to control the narrative and truth of global education. There is a huge potential for indoctrination, with billions of people being given the same material to learn. This would have the potential to change the direction of global opinion and political tendencies. Whichever way it moves public opinion, it could do so in a correlated way across the whole world as different generations accessed the same educational feeds.

With all the technological changes ahead and with us today, I try to be open to the learning opportunities and benefits that these can bring. However, there is always my inner voice, equally mindful of when cost optimisation may go too far. We really need to look at the system as a whole, and over longer time scales, to think through the impact these changes can have. I would say that within education, especially for the younger age groups, handing human intelligence and compassion over to these technologies would be a hard sell for me.

Anyway, let’s catch up in another 20 years and see how we have done.


