Inside Learning Podcast with Nell Watson – Transforming Society with AI: Ethics, Education, and the Future of Work

Posted by Learnovate

This episode of the Inside Learning Podcast, presented by the Learnovate Centre, features an insightful discussion with Nell Watson, an expert on AI ethics.

The conversation delves into recent advancements in AI and their transformative potential for society and economies, emphasizing the need for ethical leadership and responsible AI development. Watson discusses the concept of a ‘Sputnik moment’ for AI, highlighting the rapid progress in AI capabilities and the challenges in steering this technology to align with human values and ethics. The episode also explores AI’s impact on education and the workforce, suggesting a shift towards developing critical thinking and creative skills amid an AI-dominated future. Additionally, the dialogue touches on the idea of AI-driven corporations and the potential of universal basic income supported by AI dividends. The session concludes by discussing the importance of continuous learning and adaptation in navigating the future shaped by AI.

In this episode:

  • The Transformative Power of AI: A New Era
  • Ethical AI: Steering the Future Responsibly
  • Introducing Nell Watson: AI Ethicist and Author
  • AI’s Impact on Society: A Conversation with Nell Watson
  • AI Ethics and Safety: Navigating the Challenges
  • AI in Education: Shaping the Future of Learning
  • AI in the Workplace: Algorithmic Management and Its Implications
  • The Future of Work and Education in the AI Era

Aidan McCullen 00:00:15

AI is continuously transforming our world, unfurling capabilities with unprecedented accessibility and flexibility. 

With nothing more than simple words, we can tap into an incredible variety of services of ever-improving quality. 

Today’s adaptable AI models enable robots and systems to respond seamlessly to human commands across various real-world applications.


Agent based models can think step by step to collaborate with other versions of themselves, and even to form AI communities. 

We are at a pivotal Sputnik moment set to reshape global society and economies through AI’s transformative power. It’s necessary to take stock of where the recent developments have taken us and to deliberately choose where we want to go from here. 

A responsible future for AI requires vision, foresight and courageous leadership that upholds ethical integrity even when tempted to act otherwise. 

Such ethical conduct fosters trust. And trust is the linchpin that holds our societal structures together and builds branding power. 

That is an excerpt from a brilliant new book, Taming the Machine: Ethically Harness the Power of AI. The book is by a researcher, writer, speaker, and applied tech ethicist. 

She is president of the European Responsible AI Office, an AI expert at Singularity University, and pioneers global standards as an AI ethics certification maestro at IEEE. 

She advises leading organisations on their machine learning strategies specialising in AI research and ethical advocacy. 

And she is the author of that new book, Taming the Machine. Nell Watson, you’re very welcome to the show. 

Nell Watson 00:02:04

It’s a pleasure to join you today.

Aidan McCullen 00:02:08

I thought we’d start with that timely moment, this Sputnik moment that I mentioned in the introduction.

You have a quote at the start by Jaan Tallinn, and I love this quote.

I’m just going to use this to tee you up to bring us whichever way you like.

Building advanced AI is like launching a rocket.

The first challenge is to maximise acceleration, but once it starts picking up speed, you also need to focus on steering. 

Nell Watson 00:02:30

That’s absolutely where we are today.

It’s strange to think that only about 16 months ago, the whole ChatGPT Sputnik moment arose.

And that’s where, of course, the power of AI was thrust into the public consciousness.

But it had been bubbling for some time. Many of us in that space had been waiting for the breakthrough moment when most people would wake up and realize just how far we’d come.

And we’re about to go much further because there’s this next phase of AI, agentic models, which are able to form plans, sophisticated plans with many different sub-steps within them.

They’re able to take action upon those plans and indeed to delegate those tasks to other versions of themselves, to work together in an ensemble with some of them taking on different specialist roles and even skills.

And this means that AI is about to become a lot more autonomous. We’ve gotten used to this idea of interacting with AI systems using natural language, just talking to them as we might a human being.

But soon we’re going to be emancipating them to go off and work on problems independently to figure out the best way of solving a problem that we’re trying to work on, whether it’s planning an event or planning a trip or figuring out the best way to persuade people to try a new product, for example.
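The delegation pattern Watson describes here, a plan decomposed into sub-steps that are handed off to specialist versions of the model, can be sketched minimally. All names below (`Agent`, `plan_and_delegate`, the specialist roles) are illustrative, not the API of any real agent framework:

```python
# Toy sketch of an "agentic" ensemble: a plan is decomposed into
# role-tagged sub-steps, and each step is delegated to a specialist
# agent. Real agentic systems replace these lambdas with model calls.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Agent:
    role: str
    handle: Callable[[str], str]  # takes a sub-task, returns a result


def plan_and_delegate(plan: List[str],
                      specialists: Dict[str, Agent]) -> List[str]:
    """Route each 'role: task' step in the plan to the matching specialist."""
    results = []
    for step in plan:
        role, _, task = step.partition(": ")
        results.append(specialists[role].handle(task))
    return results


specialists = {
    "research": Agent("research", lambda t: f"findings on {t}"),
    "logistics": Agent("logistics", lambda t: f"booking for {t}"),
}

plan = ["research: picnic venues", "logistics: food order"]
results = plan_and_delegate(plan, specialists)
# results: ["findings on picnic venues", "booking for food order"]
```

The steering problem Watson raises enters exactly here: nothing in this loop checks whether a specialist's output actually serves the goal, which is why value alignment has to be designed in rather than assumed.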

However, as with that wonderful quote from Jaan Tallinn, once we’ve gotten things up to speed, we definitely need to learn how to steer them. And we’re still in that phase.

And it’s about to become a lot more difficult with these agentic models, because we have to start teaching them about our values, our preferences, our boundaries, no less, if they are able to take action on our behalf, right?

If these systems are going to act as concierges, possibly even ambassadors for ourselves, they need to understand how to do things in a way that pleases us, in a way that fulfils the mission and doesn’t come short of it.

Taking shortcuts that we don’t want, for example, might even be dangerous shortcuts. And equally, we don’t want these systems to go off and spend all day on one single task, polishing a desk until all the varnish has been taken off it, because obviously that’s not what we’re looking for.

Suppose that we are planning a picnic. We don’t want the system to decide that a thimble of tap water and a cracker each for everyone is going to fulfil that mission.

We also don’t want that system to plan a nine-course dinner either. Either of those is not in the sweet spot of what we’re looking for.

Similarly, if that picnic is for your local mosque or synagogue, ordering ham sandwiches for everyone is not going to be pleasant for them either. The same goes if there are a lot of vegetarians and vegans in your midst, or gluten-free folks too.

And that’s why it’s so important now that we learn about not just AI ethics, which is about how we use technologies and try to make sure that AI technologies are transparent, that we can understand what’s going on in them, that there’s a minimal amount of disproportionate biases, etc.

But that also we look at the safety of these systems, their ability to align with our desires, our expectations, and to understand and interpret our values, to be able to read the room in whatever culture or situation or schema it happens to be in, whether it’s you yourself late at night having some existential question, whether it’s in some big board meeting or indeed dealing with a client.

We need systems which are able to understand how to fit in mercurially with human society. And that’s an immense challenge. It’s a very difficult process, somewhat akin to rearing a child or indeed training a pet to behave in ways that we wish. Except that this child or pet may soon potentially eclipse human capabilities, right? 

In many ways, we are like teenage mothers giving birth to a demon. And we are obliged to raise it as an angel. And that’s not going to be easy for us.

And that’s why we need to think very carefully on how we use AI and how we can better fit it into our lives and into the future of humanity itself.

Aidan McCullen 00:07:33

I often think about the show Black Mirror that’s on Netflix, brilliant show that actually does hold up this black mirror to society.

And listening to your talks and reading the book, it came to mind that when AI learns more and more about the decisions we make, about how we think, etc., it’s going to realise the black mirror.

It’s going to eventually realise there’s a lot of stuff we do that isn’t so great.

I thought we’d just mention this. And for our audience, what we’re going to talk about are two main things today. AI meets HR and AI meets education and the benefits of those.

And if it’s okay with you, I’d also love you to share how you got to be one of the world’s experts in AI ethics and a researcher in AI, because I think you’re such a great role model for many people out there. 

And to understand how one gets there as well, because it’s going to become such a valuable role as we advance through AI in the future. 

So with that maybe we’ll start with this black mirror because you do shine a light on that so to speak in your talks. 

Nell Watson 00:08:41

Absolutely. In many ways machines are going to be not just trying to understand us so they can fit in, but they’re also going to understand our little foibles, our peccadilloes, the little aspects of our personalities, of our civilization and its various mores.

And some of those interpretations may not be generous to us or may embarrass us in some ways.

We’ve seen how the revelations of Copernicus and Galileo, that the Earth orbits the Sun and not the opposite way, changed our view of the universe.

It made us feel a bit less special, that perhaps we weren’t the most important thing in the universe after all.

And similarly, Darwin’s revelations about evolution have also made us realise that we weren’t formed perfect as little homunculi of the divine necessarily, not in the way that we conceived anyway.

But that we had scrambled up from murky depths over a very long period of time, and through considerable suffering, no doubt, across the eons.

And of course, that in itself is a beautiful and an incredible, magnificent thing as well, that we are indeed the formation, therefore, of countless millions of selection events, and I think there’s beauty in that too.

But sometimes when we have ingested a belief and that belief is questioned, that can be a painful thing.

It’s an opportunity for growth, but it’s also an opportunity that people can take to get angry because the world has changed.

It’s moved in a direction they didn’t like, and perhaps it’s made them feel uncomfortable.

And I wonder if AI may make us feel uncomfortable about some of our beliefs about the world: AI may change our perspectives on the things we consider to be okay or not okay.

Indeed, just as different philosophers or religious leaders have come up with new sets of morals over time, AI systems may do something similar.

They may find new ways of interacting which are more equitable or indeed lead to better forms of cooperation between people.

And I think that’s perhaps likely to create a bit of a schism in society as some people update to the new rules, having witnessed that they are indeed pretty good ways of living, and other people decide, no, thank you, I’ll keep the values I have. Cheers all the same.

So it’s going to be an interesting turnabout where at some point we stop putting values into machines and they perhaps start putting values into us.

Aidan McCullen 00:11:38

One of the ways you talk about that happening is the idea of, if we have a legion of nanobots throughout our bodies, that in a way we can create a hive mind.

And I thought about the utopian view of that, where a lot of people won’t like the data privacy of that, but at the same time, we can transcend things like death as we know it.

We can live on with our ancestry. We can have a huge depth of experience from previous generations, but also we could possibly solve huge thorny problems that we’re faced with

as a planet. 

Nell Watson 00:12:17

Absolutely. Our smartphones and the internet have become a third hemisphere of our brain in many ways.

And I think that connection is going to increase further, not just through augmented reality glasses and things like that, but eventually AI systems, which are a literal co-pilot that are able to exist within our bodies and to connect to our external and internal senses so that they understand our experiences from what we see and feel and hear, but also what we feel inside.

And I guess that will help machines to understand us on a very intimate level.

And indeed, perhaps, if a machine has all of our memories and all of our feelings about our memories, then that would present a reasonable facsimile, perhaps, of a human being to some degree once that person themselves has passed on from this world.

So another element that we can do then is that we can potentially radically improve empathy between people as well.

If we could, for example, feel our impact on others very directly, then we would understand the consequences of our actions in that moment.

Just how much those words we said, which we didn’t necessarily mean, had stung that person, right?

But if we sing a beautiful song, make a meal for someone when they’re feeling sick, that kind of thing, we could also feel the joy that we give to others.

And indeed, perhaps the thing to do in life would be to create joy for other people, and there would be no profit in being wicked to each other.

Those are the kinds of ways in which our society may evolve in the next 50 years or so.

Aidan McCullen 00:14:13

Which is a lovely segue to AI and education, I think, because one of the great realisations, I suppose, of the last 50 years or so was Howard Gardner’s idea of multiple intelligences: that there’s more than IQ in the world.

And I feel strongly that AI can help here, because through that empathetic aspect we can understand how different people learn, et cetera.

And I thought maybe you’d bring us through what you’re seeing out there, because not only are you a researcher and ethicist, but you also had a startup, and you understand where things are going. You share a wide array of resources where people can learn about machine learning, et cetera. You’ve studied this deeply, so there’s no better person to tell us where you see the benefits of AI in the education realm.

Nell Watson 00:15:04

Yeah, it’s been a heck of a journey for myself, waking up to the power of AI around about 2012, 2013, when I met Professor Jeremy Howard at Singularity University, who was talking about deep learning, which was this very new technology at the time, which enabled us not just to find patterns within data, as we do with machine learning in general, but to find patterns within patterns.

And that enabled these systems to make very deep inferences and predictions and understandings of things that were never possible before.

And at the time, I had a very difficult technical problem I was working on, which was 3D body measurement using nothing more than two photographs from a camera.

That company is still going today. But we had a very difficult problem, which was cutting the person out of the background because you don’t want to be measuring wallpaper behind someone, etc.

That’s a problem in machine vision called image segmentation. 

Myself and my team had created this very complex algorithm to try to cut people out of the image, and we could get the head to fit but the crotch would break; one arm would fit, a foot would not. It worked perfectly 15% of the time, which was extremely frustrating, because of course we could see the promise was there, but it was unfulfilled.

However, having learned about deep learning, we were able to use a machine vision technique called convolutional neural networks, where we took about a thousand photo-edited before-and-after images of what we were looking for, basically silhouetting the person out.

And we fed that into the machine and it worked flawlessly, basically immediately.

And so this problem, which had stumped us for over a year, was solved overnight.

And that’s when I realised the power of these techniques and indeed what they were beginning to evolve into.
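The convolution operation at the heart of the technique Watson mentions can be illustrated with a toy example. A real segmentation network learns thousands of filters from before-and-after training pairs; the hand-set averaging filter and threshold below are a sketch of the mechanic only, not the company's actual method:

```python
# Toy illustration of 2D convolution, the core operation of a CNN
# segmenter: slide a small filter over the image, then threshold the
# response to get a crude foreground/background mask.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution over a list-of-lists of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out


def threshold(feature_map, t):
    """Binarise the response: 1 = foreground, 0 = background."""
    return [[1 if v > t else 0 for v in row] for row in feature_map]


# A 4x4 "image": bright person pixels (9) on a dark background (0).
image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
blur = [[0.25, 0.25],
        [0.25, 0.25]]  # simple hand-set averaging filter
mask = threshold(convolve2d(image, blur), 3.0)
# mask: [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```

The leap deep learning made was that filters like `blur` no longer had to be hand-crafted, as the team's 15%-success algorithm was; the network infers them from the labelled examples themselves.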

Over the years since then, I of course became an evangelist for this amazing new set of technologies, but increasingly also became quite concerned about the ways in which these technologies could be used, sometimes by bad actors or sometimes by honest people just forming misapprehensions about how best to use these systems. Their strengths, and indeed their limitations.

And so that’s why for many years now I’ve been working with organisations such as the IEEE to develop new standards and certifications for AI ethics and AI safety, so that we can help more people to use systems in ways that are safer and less likely to cause various issues and scandals within society.

When it comes to education, I think that we’re going to have to reconsider how we educate people. We’ve entered a world where facts are cheap because we can find them out there on the internet.

But we still have to ascertain whether those facts are correct, or indeed whether there are different nuances to those facts that perhaps we might be missing.

The context to those facts might have been de-linked for some reason or other. And indeed, we’re now entering a world where generating things is very cheap and fast, where we can, you know, take a few bullet points and turn them into an essay without too much difficulty, where we can generate all kinds of content, whether it’s videos or images or 3D worlds, and edit things trivially also.

We can pinpoint the very specific thing that we would like to change or like to animate, etc., and have these systems go forth and do that.

And until recently, it was only the province of Hollywood and maybe state intelligence services that had access to those kinds of capabilities.

And now any one of us can pick those up for a trivial amount in a subscription per month.

And so it means that we need to be training people less in how to do things and more in how to make sense of things,

how to curate more than create, per se.

I could analogise it to the old factory model of schooling. We used to manufacture a lot of things in the West; we do less of that now, though I’d like to see us maybe do more again.

But in a world where we manufactured things, many people were seen as interchangeable elements, a little bit like the machines in which they operated.

And so the schools became the same way, right?

Regimented rows of students all learning the same content, the same way. But that creates people who can cook, people who can follow instructions, people who can implement something but not be able to see the bigger picture necessarily, not with the training they’ve been given in any case.

What we need to be doing is not training cooks, but rather training chefs, because a chef is able to put together elements in an original manner.

They’re able to curate an experience for someone, not just cooking a grilled cheese sandwich, but actually putting together an experience which is meaningful for them, where there is the matter of presentation, where there is the ambience of where someone is eating.

There’s the story which comes with food. 

If you go to a really good restaurant, the chef or the waiter is going to come out and explain what you’re eating and all the elements that go into it and why it’s special and interesting. And that’s an important part of that experience.

And so we need to train people in how to curate, how to collaborate with other people to share ideas and to work together in empathy, how to understand very complex problems and all of their interlinkages, and indeed how to think in a critical and also creative manner.

These are the chief skills of the 21st century.

It’s not so much about teaching people how to do things as it is about building their character, so that when they hit a roadblock, they don’t give up but keep on pushing forward, even when things might be uncomfortable.

In our world, which is changing so quickly, it’s imperative that we keep learning,

that we don’t have this idea that we just go to school and our minds are filled with stuff and we leave at some point, but that we keep reinventing ourselves, right? 

We need to be like Madonna. [laughs] We need to keep coming back with a different version of ourselves.

And to do that, we need to have the character attributes of being comfortable with failure, of being comfortable with being a novice, because that’s generally not a nice place to be.

We like to stay in our lane. We like to know the things we’re good at and keep doing those.

And generally speaking, it makes sense to do that.

But in a world of incredible disruption, we are probably going to have to jump lanes a few times, right?

To change different focuses in our careers or jump between careers even.

And to do that, we will need to be in an uncomfortable place of being a novice. And it’s not easy to try learning a new thing and be surrounded by people who are so much better than you because of course they have a lot more experience, right?

And so it’s easy to give up in that moment and lose heart.

And I think that’s something that young children have that as adults we often tend to lose, right?

The young child playing the violin, making a horrible sound, isn’t self-conscious about it.

And so she keeps on playing until she gets good, whereas an adult will often give up.

And that’s why we need to be able to teach this aspect of character, so that people will keep reinventing themselves and have the courage to do so.

Aidan McCullen 00:24:05

Beautiful. That’s such a beautiful message, and not just for children. 

I’m going to tell my kids to be more like a chef than a cook, and make sure they don’t get me wrong and take it literally, like an AI might. But many adults should hear it as well.

And this is a nice way to introduce the idea of AI and HR because AI will totally revolutionise or reinvent that realm of the world as well.

I’d love you to give us a high level of what’s happening there.

Nell Watson 00:24:38

Absolutely. We are seeing an era of algorithmic management, of machines beginning to take over a lot of those sorts of middle management or line management functions.

Of observing people doing their thing on the job, of setting people’s shifts, of perhaps even strongly contributing towards hiring and firing decisions.

Indeed, some companies already have people doing deliveries with several cameras in their van, observing the efficiency of that worker and looking for potential infractions.

However, sometimes they get it wrong. Sometimes somebody might just happen to scratch their neck, but the system decides that they are talking on their phone and writes them up and possibly even docks their pay.

And it’s actually very difficult to challenge that, right? To challenge the decisions or predictions of these systems.

And indeed, sometimes these models go very wrong, as we’ve seen in, for example, the Horizon Post Office scandal, which involved a very simplistic system by modern standards.

But still, people trusted it too much.

And we tend to do this because we observe the system working pretty well, maybe 95% of the time, but we forget that one time in 20, that machine is going to make a misapprehension about something.

And when you’re dealing with potentially somebody’s fate, whether it’s in health care or the judicial system or financial systems, etc., or indeed employment, those outcomes can be catastrophic if we don’t provide enough oversight and supervision and double-checking of the system’s work.

We saw in the Horizon Post Office scandal, of course, that there were hundreds of unsafe convictions and dozens of people wrongfully sent to jail for years, many of whom had to sell their homes to pay off debts that weren’t theirs because they’d been falsely lumbered with these fraud charges.

Marriages were broken up. At least three people took their own lives and never saw their vindication, which of course took years for many people.

And I’m sure there’s still a lot more to come out about that particular scandal. Unfortunately, this keeps happening. We saw something similar with the Dutch child benefit scandal, whereby people whose first nationality was not Dutch, even if they had become naturalised Dutch citizens, were really given the third degree and were even threatened with having their children taken off them, when they were by and large innocent.

And it caused such a furore that the Dutch government actually collapsed as a result. The same thing has happened in Denmark, in Michigan, in Australia, and many other places. 

We have not yet learned this lesson of not trusting machines too much, not trusting their judgments, always verifying that these machine decisions are safe and reasonable.

And I think, unfortunately, there will be further tragedies before we begin to really internalise that lesson.
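The verification principle Watson argues for, never acting on a machine judgment alone, can be sketched as a simple routing rule. The names, threshold, and example scenario below are illustrative, not drawn from any real system:

```python
# Toy sketch of human-in-the-loop oversight: an automated decision is
# applied only when the model is confident AND the stakes are low;
# everything else is routed to a human reviewer before any action.

from dataclasses import dataclass


@dataclass
class Decision:
    subject: str
    verdict: str        # e.g. "infraction" or "clear"
    confidence: float   # model's self-reported confidence, 0..1
    high_stakes: bool   # affects pay, employment, liberty, health?


def route(decision: Decision, threshold: float = 0.95) -> str:
    """Return where the decision should go next."""
    if decision.high_stakes or decision.confidence < threshold:
        return "human_review"
    return "auto_apply"


# A driver scratching their neck, misread as phone use: high stakes,
# so even a very confident verdict must go to a person first.
flagged = Decision("driver_42", "infraction",
                   confidence=0.97, high_stakes=True)
destination = route(flagged)
# destination: "human_review"
```

A system that is right 95% of the time still errs one time in 20; the point of the `high_stakes` flag is that for decisions touching someone's livelihood, even that residual error rate is too high to automate away the human check.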

However, we do have some examples in the past of learning similar difficult lessons.

The air travel sector in the 1940s and 50s went through a similar phase of many different tragedies in short succession.

It was an exciting time, but it was also a scary time to be in the skies.

Thankfully we learned quite quickly from those tragedies. We developed new logging systems such as the flight recorders and cockpit voice recorders, which told us about what was going on in the system and indeed in the human beings interacting with it.

We developed new rules such as sterile cockpits, etc. And pretty soon, air travel became statistically the safest way to travel from A to B. 

And so I think we’ll go through a similar phase. The short to mid-term with AI will be a little bit hairy, a bit of a rocky road.

But so long as we keep learning, keep adapting, and earnestly move to using best practices as they emerge, with the help of standards, certifications, and well-reasoned legislation, I think that after a time things will shake down, and we’ll be able to use AI in a way that we can begin to trust, integrating it more and more consciously and less cautiously into our personal and professional lives.

Aidan McCullen 00:29:44

Now, many of our audience may be concerned about what the future of work looks like: whether there will be work, or whether we’ll be on UBI, universal basic income.

And you hinted at some of this yourself. So I’d love you to share, using yourself as an example, how you managed to piece together your education, and the mindset you approached that education with.

You’re an entrepreneur. You hinted at this through the idea of learning to get back up when you fall, reinventing yourself, consistent learning, et cetera.

But I’d love you to give a quick thought to those people who are worried about the education of the future and the work of the future.

Nell Watson 00:30:27 

I think that we’re definitely going to still need plumbers.

We’re going to still need people to do plastering and care work and a range of different activities that require fine craft skill and require a lot of empathy for other human beings.

There are a lot of roles that we don’t necessarily wish to give to machines, even if they could do them, because it would potentially not be very dignified, right?

Like having an AI preside over your funeral, I think most people would find that to be an undignified use of the technology.

So some roles are definitely going to be safe from being usurped by AIs anytime soon, I would imagine.

In terms of universal basic income, I think that I have some doubts around the thermodynamic feasibility of that.

I think that it’ll be very difficult to pay a lot of people from a portion of human wages earned by other people.

However, there is an alternative, I think, that could work.

Because these AI agents, which are able to form a kind of ensemble to work together, are pretty similar in many ways to a corporation, right?

And indeed, it’s possible to create virtual corporations of AI agents where we have an engineering department, a quality assurance department, a marketing department, etc.

And all of these agents coordinate to create a product, whether that’s a movie script or a video game.

And so if these AI corporations are maybe even competing in the free market with human corporations, that’s going to be an interesting development, especially because some of those AI corporations will be hiring human beings to go and do legwork for them, right?

Things that machines aren’t necessarily best at.

I think that there’s an opportunity, therefore, for machine-driven corporations to provide dividends to human beings as a form of universal basic income.

I suspect that in decades to come, the majority of our economy will actually be driven by these AI-managed companies, because they will gradually out-compete a lot of the human ones.

But these AI-controlled companies will provide us with dividends based upon their provisions in the market, and we can perhaps use those to augment our income and support our lifestyles.

I think that’s possibly going to be one way that we can help people to transition towards potentially doing some other things with their lives, as AI increasingly takes over in many, but not all, sectors. 

Aidan McCullen 00:33:24

Brilliant. Now, Nell, for people who are interested in getting you to come to work with their organisation, or to speak at their events, where’s the best place to find you?

Nell Watson 00:33:34

I have a little website at, where I put some articles and resources. And of course, you may find my book, Taming the Machine, of interest, and you’ll find info on that at

Aidan McCullen 00:33:51

And I also found a nice animation that’s due to be released today, the day this podcast comes out. I’ll link to that, and I’ll link to your website as well.

For now, author of Taming the Machine, Ethically Harness the Power of AI, Nell Watson. Thank you for joining us.


Thank you. Thanks for joining us on Inside Learning. Inside Learning is brought to you by the Learnovate Centre in Trinity College Dublin.

Learnovate is funded by Enterprise Ireland and IDA Ireland. Visit to find out more about our research on the science of learning and the future of work.
