Calum highlights the potential impacts of these advancements on society and the necessity for proactive planning to avoid negative outcomes. The conversation also delves into the challenges and opportunities of AI in education and the importance of developing skills for an unpredictable future. Calum further shares insights into his company, Conscium, which focuses on machine consciousness, AI agent verification, and neuromorphic computing.

In this episode:
* Calum Chace on AI and the Singularity
* The Economic Singularity Explained
* The Future of Jobs and Skills
* Planning for a Post-Job Economy
* The Role of Education in an AI-Driven World
* Calum Chace’s Journey and Conscious AI

Transcript: 

[00:00:00] The Inside Learning Podcast is brought to you by the Learnovate Center. Learnovate research explores the power of learning to unlock human potential. Find out more about Learnovate research on the science of learning and the future of work at learnovatecenter.org.

Aidan McCullen: Today’s guest argues that in the course of this century, the exponential growth in the capability of AI is likely to bring about two singularities: points at which conditions are so extreme that the normal rules break down.

The first is the economic singularity, when machines reach a level that renders many of us unemployable and requires an overhaul of our current economic and social systems. The second is the technological singularity, when machine intelligence reaches and then surpasses the cognitive abilities of an adult human, relegating us to the second smartest species on the planet. Artificial intelligence can turn out to be the best thing ever to happen to [00:01:00] humanity,

making our future wonderful almost beyond imagination, but only if we address head-on the challenges it will raise. We welcome an expert on artificial intelligence and its likely future impact on society, and bestselling author of Artificial Intelligence and the Two Singularities, Surviving AI, The Economic Singularity (on our jobless future), Pandora’s Brain and Pandora’s Oracle, my friend Calum Chace.

Welcome to the show.

Calum Chace: Thank you, Aidan. What a lovely introduction. Lovely to see you again.

Aidan McCullen: Great to have you on the show, man. It’s almost 10 years since we met, and so much water has passed under the bridge. People are listening to you now. A lot has changed.

Calum Chace: Absolutely. A lot has changed. When we met, the first big bang in AI had just happened: the arrival of deep learning, which is a rebranding of artificial neural networks. That happened in 2012, and people started to really notice it around 2014, which is about [00:02:00] when we met. And now, of course, we’ve had the second big bang in AI, which is the arrival of transformer AI technology. That has given us the large language models, which are getting everybody, rightly, very excited.

So the transformer technology arrived in 2017 with the publication of a paper by some Google researchers called Attention Is All You Need. And where do we go from here? Well, every day brings a new announcement from one of the hyperscalers, the big tech companies developing and deploying these large language models. And of course there’s much more down the road. So yeah, exciting times.

Aidan McCullen: This show is about the future of work and the neuroscience and science of education, and I thought that this new reconfiguration of society that’s necessary because of AI means that we actually don’t even know what skills will be necessary in the future.

It’s something I talked to you about before: what do you think the skills [00:03:00] gap will be? And you said, well, we don’t really know yet. I loved that answer, and I’d love you to elaborate and share a little bit about that.

Calum Chace: So the economic singularity is a term I coined to mean the time when machines get so good at doing everything that humans can do for money that there are no more jobs for humans. This is an idea which quite a lot of people believe, but I’m sure the great majority of people don’t think it’s coming.

Most of those who have thought about it, including most economists, don’t think it’s coming, and I think this is remarkably shortsighted of our species. Unless there is a silicon ceiling through which machines can’t pass, in terms of their intelligence and their ability, they will surpass us in all the things that we can do for money. We don’t know when. I used to think it was some decades away, but the arrival of large language models seems to have shortened everybody’s timescales, and it could even happen this decade. Sam Altman and Demis Hassabis, who run the two biggest [00:04:00] AI labs, both seem to think that it could happen by the end of this decade. The thing they’re referring to is the arrival of artificial general intelligence, which is a very poorly defined term, but the best way to think of it is as the threshold through which machines pass on their way to becoming superintelligences. So the economic singularity, I think, is coming.

We don’t know when. Between now and then, there’s going to be increasing automation. We’re seeing it happen: people are having to get new jobs, sometimes join new companies, and sometimes change the whole industry that they’re in. And this is going to happen faster and faster. I call this the period of churn. But until the point comes when machines can do pretty much everything that we can do for money, there will be lots of jobs for humans. Demand is pretty elastic, and so there are always things for people to do that can make money. And I think the transition from a reasonably full-employment economy with a lot of churn [00:05:00] to one where there are no jobs for humans will be quite sudden.

I think it will take a few weeks or a few months, maybe a couple of years at most. And we really ought to plan for what the economy should look like after that. It amazes me that there is virtually no planning going on. There’s an interesting podcast by a man called Ezra Klein, launched last week. Ezra Klein is a mainstream New York Times journalist. He’s a political writer who used to do the healthcare beat before that; not a particularly tech guy. In this podcast he says the government knows that AGI is coming. And he asks somebody who was an AI advisor to the Biden administration: why isn’t anybody planning for what happens when there are no jobs for humans? And the guy says, well, we didn’t expect it to happen on our beat. We thought it wouldn’t happen until the next administration, so it’s their job. And that’s amazing, that there is so little planning. I’m hoping that we can preserve some semblance [00:06:00] of the capitalist economy, because it’s been so incredibly successful. It’s going to have to be very different, though, because if we live in a world where humans can’t earn the money we need to live through jobs, how else do we receive that money? Well, there’s going to have to be a form of distribution. I don’t really like the idea of universal basic income, partly because the little word in the middle shows that it would be poverty wages for everybody, and that’s not good enough.

We have to have, at the very least, a sort of progressive, generous income. But how do you do that? How do you get assets and income distributed from the small minority of people who own everything, post the economic singularity, to everybody else all around the world, not just in America? That’s a tough question and we’re not facing up to it. So we really need to start doing that.

Aidan McCullen: Yuval Noah Harari talks about this idea of the gods and the useless, and you see it even now. Take ChatGPT: [00:07:00] there’s a premium version that’s $200 a month, versus $20 a month, versus the free version, and there’s a huge jump between each of those tiers.

And I thought about that as a kind of metaphor for what would happen, for example, if you had a brain-computer interface that gave you god-like intelligence. But I use it as a way to think about the increasing skills gap, and also the wealth gap that you’re referring to there: if you have a very small proportion of society who are so wealthy that they’re almost living on a guarded island, almost like the gated developments away from the rest of the people that you see in these kinds of dystopian futures.

And do you think the biggest misconception about AI’s impact on jobs is that people are focused on automation replacing roles, rather than on how AI will redefine work itself and, in a way, make that gap even wider than it is today?

Calum Chace: A few years ago I edited a [00:08:00] book, a collection of short stories written by myself and some colleagues, called Stories from 2045. It was a whole series of different scenarios about how the economic singularity might play out. I told everybody: write one optimistic story and one pessimistic story. It was quite easy to get them to write the pessimistic ones; they more or less write themselves. It was quite hard to get people to write the optimistic ones. But we did, and it’s a really interesting collection of possible outcomes. And as I say, it’s really easy to see how it could go horribly wrong. Imagine a world in which there are no longer any jobs for humans, that has happened quite suddenly, and people start to starve because they haven’t got the resources to live. The small number of people who own everything could, and it’s an option for them, retreat to heavily fortified remote islands which are hard to get to, surround themselves with robots and defense systems of formidable power, just wait it out until the rest of [00:09:00] the species has died, and then go back and enjoy the planet to themselves. That is a possible option.

And looking at some of the billionaires around today, you can see some of them might well do it. But I don’t think, on the whole, that’s the way it will pan out, because 9 billion people versus a small number of rich people is an equation that they might not want to face.

Also, you know, I think most rich people are like everybody else. They have a conscience, they have empathy, and they don’t want to watch the rest of the human race starve. So I think that they will be amenable to some form of distribution, but how it happens, what the mechanism is, we’ve got no idea and we really, really should be planning for it. 

Aidan McCullen: The skills gap, Calum, is something that I just wanted to focus on for a moment, because you’ve written about it before; you write for Forbes, and you write for Conscium, the company that you work for at the moment. I’d love you to share a bit about Conscium, but first I’d love you to share your view of the education system.

Calum Chace: Sure. Well, let’s just look at the skills gap and the education system. [00:10:00] People often say: what jobs should my children target? What career should they aim for in order to future-proof themselves? And the point I make is that there’s no way to know. A few years ago we were all confidently predicting that the first large section of the economy to be automated by intelligent systems would be professional drivers. Initially, people thought the thing to do was to focus on numeracy and STEM skills, because computers are all about STEM, and so that would be the area to go into. And the drivers did not all get automated. They will one day, because self-driving vehicles are now with us and they’re slowly being rolled out.

But drivers are still in short supply; they’re in high demand at the moment. It turns out that STEM skills are still valuable, but you can see that they might become less valuable, because machines, LLMs and other types of AI, can replace them more easily. So then [00:11:00] everybody started saying: well, it’s creative skills and human skills that we all need, because creativity can’t be replicated by computers, and therefore those will be the last things to go. Well, that turned out to be wrong as well.

Because large language models can write. They can write better poetry than me, frankly. They can write reasonable stories. They can’t write novels yet, but that will come. They can write scripts, bad scripts, for movies. They can write scripts for short stories. And people like translators and low-end copywriters are very much in the firing line for automation now. So the way AI evolves in the details is highly difficult to predict, and I think it’s a fool’s game to say you should do this or do that because it will be future-proof. I think what you should choose as a career is something that you really enjoy doing, because you are better at the things that you enjoy. So follow your passion; that’s always my advice to young people. As regards the education system, perhaps unusually, I [00:12:00] think that the current curriculum is, broadly speaking, okay. In school you’re supposed to learn some history, some maths, some language, some literature, some biology, some chemistry, some physics, and so on. And that’s a pretty good syllabus. The problem with the education system is that we require children to sit still for large periods of time and listen to somebody talking, which is the opposite of what kids are wired to do. Kids are wired to run around and play and compete with each other. So I think the syllabus is okay, but the way we deliver the education needs changing, and I think it will change as we get AIs which can be personal tutors. We are heading towards a world in which every child can be like Alexander the Great, because they can have their own AI tutor, which knows them really well, knows what they know already from the syllabus and what they need to know next.

It knows how they learn best; it knows how to cajole, encourage, bully and plead to [00:13:00] get the child to do the work that’s necessary, because you have to work to learn things; it is a working process. Ultimately, education will serve a different purpose in the future, when we get to the economic singularity. Education will stop being vocational, because there are no vocations, because there are no jobs for humans. Lots of work, but no jobs. Instead it will become avocational. So it goes from vocational to avocational, in that part of the point of education becomes that you no longer have to do a job for a living. And hopefully we will have figured out how the new economy works, so everybody is free to do whatever they want: to learn, to explore, to play, to have fun, to socialize. And you need a well-stocked mind to be able to do that successfully. So education will play a different role. I imagine the syllabus won’t be terribly different. It’ll still be learning about all those things: human history and the history of the world, [00:14:00] zoology, biology, chemistry, physics, literature, philosophy, and so on. It just needs to be delivered in a more appealing and human way. We invented the current education system in Victorian times. It was invented to get people fit to work in factories, and it was military and industrial, and that really does need to change.

Aidan McCullen: So Calum, you’ve brought us on a journey through where AI has gone over the last 10 years since we first connected. But you’ve also changed a lot, and you’ve co-founded a company yourself around the idea of conscious AI, and many people still struggle with what that means. I’d love you to tell us a little bit about that, and also a bit about Conscium.

Calum Chace: Sure. So you’re quite right, I’ve gone on quite a journey in the last 10 years or so. I was really interested in AI for a long time, and I retired in 2011 and started thinking and writing about AI. Then people started asking me to give keynote talks, which is what I’ve been doing for the last decade or so, and I’ve really enjoyed doing that.

It’s [00:15:00] good fun. But just recently, a friend of mine who I met around about the same time as you, actually, a guy called Daniel Hulme, who set up an AI consultancy and sold it to WPP for a hundred million dollars, said: let’s start a new company to address the issue of machine consciousness. So the idea is this. In the next few years, or possibly decades, it’s not definite, but we may well create machines which are conscious. We know that consciousness is a property of information processing at some level. We don’t know exactly how it arises in humans. We don’t even know exactly what it’s for in humans. And bizarrely, we can’t actually prove to ourselves that each other has it: I know I’m conscious, and I believe you are conscious, Aidan, but I can’t prove it. If and when we do create machines which are conscious, then a whole series of things flows from that. Firstly, we will have a duty of care to them, because they may well be able to suffer; it’s not guaranteed by their being conscious, but they may. They may have what [00:16:00] neuroscientists call affect. So they may be able to suffer, and we should not make billions of artificial consciousnesses suffer. That’s called mind crime, and it could be the worst thing humanity ever does. So it’s really important that we can detect the presence of consciousness, or at least make an informed estimate about whether consciousness is present, in order that we can avoid committing mind crime. Also, if we understand consciousness in machines, it will almost certainly help us to understand consciousness in humans. And I think consciousness is the most important thing about us; you know, I think my mind is much more important than my body. And then there’s a third idea about consciousness, which is more controversial. Imagine we’re at the cusp of creating superintelligence: an AI which is smarter than humans in every possible way, and is going to go on to become much, much smarter than us in a fairly short timescale, probably weeks, possibly months or years, until there’ll be an [00:17:00] entity on the planet which is millions of times smarter than us, and we will become the chimpanzees in the picture.

We will lose control of our future to this successor species. Now, if you are at that point, at the cusp of superintelligence arising, do you want it to be conscious, or do you want it to be unconscious? Intuitively, most people say: for goodness’ sake, make it unconscious, because if it’s conscious it will have its own goals and desires and wishes, and they may be in conflict with ours. I think that’s a mistake. I think that’s confusing volition with consciousness; they’re two different things. You can be unconscious and have goals and purposes. In fact, machines already do, because we give them to them, and they have already shown signs of editing their goals. So I think that if you had that choice, you should want it to be conscious, because a conscious superintelligence will understand much better what we mean when we say we are conscious.

Please treat us with respect. It may choose [00:18:00] to do otherwise anyway, but I think there’s a better chance it will treat us well if it also is conscious. So Conscium is in the business of trying to figure out what the markers for consciousness may be. There’s a wonderful academic called Mark Solms, who I know you’ve interviewed numerous times, who wrote a wonderful book called The Hidden Spring, about how consciousness arises. He is now at the heart of our research program into consciousness. Conscium has three work streams. One is straightforward research into what artificial consciousness is, how to detect it, and whether it’s a good thing or not. The second one is verifying agents: agent verification.

This, as everybody says, is the year of AI agents. Companies are starting to produce agents at a vast rate: thousands, tens of thousands per company per year. These agents need to be verified. We need to know that they’ll do what they’re supposed to do and not go rogue, and that’s a [00:19:00] business we plan to be leaders in. Then the third work stream is neuromorphic computing, which is a different type of computing from the traditional architecture. It is more efficient and less brittle, but it’s been very hard to scale; nobody’s managed to scale it yet. We’re going to join that effort and hopefully boost it. So Conscium is a fascinating new enterprise. We’re fairly new: we’ve had our first seed round, we’re putting together our second seed round, and then later this year we’ll be raising our A round. It’s very exciting, and I’m delighted to be part of this process.

Aidan McCullen: It’s a fascinating, fascinating world to be in. I’m delighted for you as well, because you had been educating the world for a long time, and we went through AI winters. We’re certainly in an AI spring at least, and we’re certainly heading towards a massive summer. And it worries me sometimes, man. I feel sometimes I’m an AI pessimist in some ways, in that I just worry that, to your [00:20:00] point, we’re not ready for it.

And as you said, governments are kicking the can down the road, hoping that the next government or the next administration will take on the effort of it. You know, one of the French aristocrats once said “après moi, le déluge”, which means “after me, the flood”. That worries me: that we’re not putting in place the building blocks. I was encouraged by what you said about the education system. But a final word, then, for the listeners to this show, people who work in education, people who are interested in putting the building blocks in place for the future of work.

What’s your message?

Calum Chace: My main message is: we are, as a species, sleepwalking towards these two singularities, the economic singularity and the technological singularity, and we really all need to wake up. So everybody needs to inform themselves as much as they can about AI. It’s a complicated subject. It’s big, and there are lots of different opinions, so it’s quite hard to do. But it is the most important [00:21:00] development in the world, and we all ought to be paying attention. I think there’s probably not very much we can do to influence the outcome at a macro level. As I say, I think choosing whether the first superintelligence is conscious or not might be a big thing, but broadly speaking, things will go the way they will go. But what we must not do is panic, because when we panic, we elect extremely bad leaders who make terribly bad decisions. We could screw everything up on the road to these two singularities. The two singularities can have very, very bad outcomes, but they can also have very, very good outcomes.

We really could have a world which is unbelievably wonderful, and what we shouldn’t do is throw the train off the rails in the meantime by electing very stupid and bad leaders. So my message is: let’s not sleepwalk to the singularities. Let’s inform ourselves, and let’s do what we can to achieve the wonderful future.

Aidan McCullen: Nice [00:22:00] message, man. A nice, positive, utopian message. Where’s the best place for people to find you, Calum? You have your own podcast as well, a very successful podcast, and you write for Forbes and indeed for Conscium.

Calum Chace: Yes, there’s the London Futurist Podcast, which you have been on, Aidan, a great episode. I have a website, calumchace.com; I’m quite easy to find, my name is odd. And there’s Conscium: conscium.com, C-O-N-S-C-I-U-M dot com, if you’re interested in artificial consciousness.

Aidan McCullen: Author of The Economic Singularity and all the others, and I’ll link to where to find them. Calum Chace, thank you for joining us.

Calum Chace: It is a pleasure to be with you again, Aidan. Lovely to see you again.

Thanks for joining us on Inside Learning. Inside Learning is brought to you by the Learnovate Center in Trinity College Dublin. Learnovate is funded by Enterprise Ireland and IDA Ireland. Visit learnovatecenter.org to find out more about our research on the science of learning and the future of work.