How teachers, parents and students are integrating AI into schools

One of our keynote speakers at last week’s Learnovation Summit was Rita Bateson, co-founder and Director of Education at Eblana Learning, which provides AI guidance for international schools and teachers. Prior to Learnovation 2025, we sat down with Rita to find out more about her company and how teachers, parents and students are coping with integrating Artificial Intelligence into schools.

Photo of Rita Bateson speaking at Learnovation 2025

Can you tell us a bit about your background?

I left Dublin in 2001 and taught for years in the UK, Switzerland, Bermuda, Germany and the Netherlands. I then worked for an international educational organisation, the International Baccalaureate (IB), where I oversaw the Mathematics and Science curriculum for international schools globally. I came home to Ireland in 2018 – just in time for Covid – and founded a private international school in Leopardstown. Post-Covid, once the school was established, some time away from the classroom gave me space to think, research, read and write, which eventually inspired me to start a company with my co-founder Rachel Bodily to support schools with AI implementation.

Tell us more about Eblana Learning.

We are based in Dublin, with an all-female directorship and an employee base captained by an all-female advisory board. We provide AI guidance for international schools and teachers who desperately need it. Schools are finding themselves inundated by both tech salespeople trying to sell platforms and parents who are either wildly for or wildly against AI. AI is impacting every single part of teaching and learning, and schools aren't able to keep up with the speed, the scale and the risks.

We started off with in-school support, as everyone had a bit of PTSD from teaching online, but that wasn't scalable over time because it required one person per school. Schools started to ask us to put some knowledge online, so we developed eLearning modules, which led to an online academy with various modules like AI literacy for teachers, AI and wellbeing, the impact of AI, and what sustainable and ethical AI looks like for schools. We also look at what AI is doing to exams and assessments – the good and the bad.

The next natural question from schools was what to do in the classroom. The demand was clear for an AI curriculum for international schools, from early years up to ages 18/19. This includes embedding AI into subjects as well as guidance on how to AI-proof what students are doing at home and in assessments. We launched it in international schools on September 1st of this year.

We still have consultants go into schools because people still want in-person training, so we’ve grown to a team of 15 between our office team and consultants on the road. Four of us are based in Ireland and the rest are based internationally, delivering the curriculum to leaders and teachers. Sometimes we also do parent sessions where we talk to parents about what AI looks like in education and what it means for their children academically and for their wellbeing.

So, you just work with international schools?

Yes. We're not working with any Irish schools yet, mainly because Irish schools aren't there yet in terms of implementation and guidance – it's such a nationalised system that schools here need to wait on guidance and direction from the Department of Education. Also, my background is in international schools, so it was a natural place to start.

Are there a lot of companies out there trying to ‘sell’ AI solutions to schools?

Yes. Schools are crying out for guidance, and it'd be better if it were educationalists driving the educational conversation instead of just AI 'tech bros' all the time. All the AI companies are motivated to look like they're doing the right thing and offer free courses, but there are only so many hours in the day to work through that spectrum of offerings. Learning requires quality assurance, and sometimes the subject-matter expertise isn't there to review these offerings accurately from an educational perspective, so it's disjointed as a result. We (Eblana) are working hard to come into schools and make it all holistic – a one-stop service.

You spoke at Learnovation 2025 about the ‘homework apocalypse’. Can you elaborate for those who weren’t there?

In the first year of ChatGPT, there wasn't a high level of engagement outside of the media and small-scale experimentation. As soon as AI came to Snapchat, kids were playing with it a bit, and parents knew it was happening but didn't really pay much attention. The following year was very interesting because it was a kind of 'catch me if you can' year, where students were using it but teachers couldn't detect it.

But now that frustration has turned around in the other direction, with students getting annoyed at teachers who are using AI on them to generate teaching resources or feedback on their work. They are frustrated with teachers who generate resources with AI and don't read them before using them – so the resources might not be the best quality.

This has all led to a situation where some teachers are turning away from homework entirely – the 'homework apocalypse'. It's now so difficult to detect AI work. You might sense it, but if you're going to accuse a child of using AI, especially in a high-stakes assessment, you need to be completely sure.

Even if you ask them to handwrite it or ask them to record their screen, for example, AI has ways around all of that. AI can now generate handwritten work which is incredibly convincing. You can feed an AI bot enough of your writing to give it your voice and it can create the same mistakes that you do, making AI-proofing almost impossible.

Are there other trends you can tell us about? 

Well, cognitive offloading is a big one. Many young people are now offloading a lot of their thinking and their research – and therefore a lot of their skill development – onto AI. When we thought tech was taking all our skills, we felt we could fall back on our critical-thinking and creative skills, but now AI is taking those away from us too.

Young people don't have a sensitivity to the algorithm, so they are already living in their own bubbles. And even those who are quite critically aware are struggling to see what's real, because they assume everything is 'fake news'. So AI is having a real impact on truth. Sometimes AI is trustworthy, but it can be incredibly bland or mediocre, and that also changes the nature of the discourse.

Kids are living a much more friction-free existence than we did. As they offload all their cognition, they don't get the chance to have a productive struggle or to fight through something. The Leaving Cert is different: it sits in such a traditional space that students still have that struggle imposed upon them. But it's going to be harder and harder to teach these young people when they get into third, fourth and fifth year, as they'll have been using AI for such a long time.

So, how do we change things?

It’s about figuring out how we get downstream from all that. How do we have a culture of trust and responsible and ethical use of AI and where do we put that in our teaching and learning and that’s what we do with schools. Schools need to be developing AI policies.

It’s not all doom and gloom. Teachers are finding incredible uses for AI especially to personalise lesson plans. So, there are really great things happening out there, but there is no agreement, no school-wide conversations, no baseline training. One teacher might be talking to the kids about AI while in the next classroom, another teacher is scaring them about it.

Ultimately every teacher is going to approach it differently. Every teacher is going to use it differently. Every kid is going to use it differently. We also see some kids who are quite resistant to AI as well as some who are quite nervous.

But as a society, we can be more considerate. We can be more judicious. We can take all the lessons that we learned from social media, all the lessons we learned from the pandemic, all the lessons we learned from edtech – all the successes and the failures. I think this is a really unique chance to get it right this time.

But sometimes it feels like we're sleepwalking. We're energy-blind and systems-blind, and I think we're very reactionary.

The next natural step is to be discerning and to decide what we're going to do with AI's outputs. We have to decide that AI is going to be good in certain specific cases and then, in a couple of years, revisit it and examine it again.

Who is responsible for all this?

We need teachers to be modelling AI well, but for that they need licence from the authorities. They need to be given the right tools, or at least a nudge in a direction. We also need parents to understand it better.

We certainly also need critical voices. There are people who don’t want to use AI and that’s fine so long as they know how everybody else is using it.

So, I think it’s a little bit everybody’s responsibility. But somebody has to move first. The international schools are fortunate because they have a bit more freedom. They can respond to parent demand and market forces and that’s probably a precursor of where national systems are going.

China is already embedding an AI curriculum from the age of three upwards. The UAE and Saudi Arabia are doing the same. The US has just released a lot of funding for AI curricula in schools. I don't want to say we run the risk of falling behind, but we do risk being reactionary instead of taking thoughtful action.

Governance is always top-down, and policies should be bottom-up. But it's not unique to education: under the EU AI Act, all employees are supposed to have some level of AI literacy training, and every company is supposed to have an understanding of the AI uses within it – and I don't know if that's widely known.

Is the future somewhat depressing?

No, not at all!

I think the future will be amazing. But I don't want us to sleepwalk into things like we've done before. Humanity is not great at learning its lessons from the past, is it?

We need to teach children the mental discipline to know when not to offload to AI and when not to trust it. We don't want to make the same mistakes we made with social media, where we were so slow to regulate. We need to be more considerate and judicious. This is the perfect moment to put guardrails on AI. We have a really unique chance to get it right this time.
