Transcript
Aidan McCullen: This month’s guest delivers a roadmap to achieve business growth, transformation, and innovation through the use of emerging technologies, but crucially in a human-centric manner that benefits both business and humanity.
Drawing on her experience working with organizations like Google, Yale, and the United Nations, she offers a unique blend of strategic guidance, ethical considerations, and practical application to help organizations not just survive, but thrive, through bold and empathetic leadership. She is the author of six books.
But today's focus is her latest, What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast. Kate O'Neill, welcome to Inside Learning.
Kate O’Neill: Thank you so much, Aidan. Thanks for having me.
Aidan McCullen: It is great to have you, and I'm absolutely intrigued by your background, your depth of experience, and your range of experience, including those six books I mentioned. Let's share a little bit about that with our [00:01:00] audience.
Kate O’Neill: Oh, well, thank you. It’s been about, you know, 30, 30 years of practitioner experience that led to, well, I, not all those years of practitioner experience before, 10 years of professional speaking advising, authoring, and that sort of thing. Prior to that I was in, practice with Toshiba and Netflix and, Hospital Corporation of America and these very large organizations, but also very small startups.
And so getting this very interesting range of. The size and scale of an organization, the different types of focus of the organization, the different cultures within an organization, different industries, verticals, and it’s been really, really fun throughout my career to be the pattern recognition specialist.
Be the person who’s thinking about, what does it take from that? That learning over here to apply to get the kind of results that we need over in this space. And how do we do that in a way that respects the [00:02:00] people all throughout the organization and on the other side of the transaction.
Aidan McCullen: Before we get into the book, let's just share a little bit more about that, because I really see that as a huge benefit for people in the future: having breadth of skillset rather than what we've really been trained for, which is strategic depth, go after something, become an expert.
But the view you mentioned there, pattern recognition or sense-making, only comes from the kind of breadth that you have.
Kate O’Neill: Yeah, I think there’s something about, you know, people use the term T-shaped, right? Like, I think that’s fair. You could be broad and have one particular sort of focus area, or you could think about it as like comb shaped in a sense, right? You’ve got the across the, across the top sort of breadth, and then lots of different depths that you go through.
But I actually think that the real value comes more from like mesh shaped in a sense where you are [00:03:00] allowing yourself to become. Broad in multiple layered ways and explore the depths of different kinds of things. So when you do that, you’re able to kind of form these really interesting, intricate layers of understanding that you can carry with you into different spaces.
And I think that is an incredible asset, especially in a time when we’re trying to figure out, you know, what is a human advantage in an era where AI can do so much so fast. And seem so fluent and all of that. I think the thing that we really bring so much of the time is showing up in a curious way, in a way that respects the, the human experience in a way that has empathy and con contextual awareness and, you know, can kind of think about not pattern recognition.
In terms of data, because computers can do that so well. But pattern recognition in terms of context and empathy and experience and what does this feel like for people? So I think, you know, kind of building [00:04:00] that bridge and being the person who shows up, going like, alright, tell me about what the user experience is like.
Tell me what the customer experience is like, the patient experience, the learner experience. That really is an incredible advantage no matter what these days.
Aidan McCullen: And it’s one of the huge benefits. As you know from writing where you can go deep on a subject and then connect the dots between the other subjects that you’ve done previously. And I think that’s one of the things I see certainly with people with the range that you have that in the past that was kind of seen as how could she possibly know so much about this?
She’s not an expert. And actually, as you said, that kind of mesh is an advantage now, and for many of our audience who work in learning. In instilling that into students of the future or students of the present, or even people relearning at the moment is so important. I’d love for you to share your thoughts on that.
Kate O’Neill: Yeah, I think it’s really important. We’ve, we’ve spent so many years talking about upskilling and reskilling and that those are really important concepts [00:05:00] today. I’ve been talking more and more these last few years about what I call minimum viable skilling, which includes what I’ve been calling skill prompt skilling.
So the skill of prompting generative ai. One of the things that I think is really key about prompting is that you are allowing yourself to articulate the success conditions of what it is you’re trying to achieve, right? You’re saying, I would like for this project to yield these kinds of results.
I do not wanna see these kinds of conditions. I don’t want it to do this. I do want it to do this. And you’re being very clear and it’s. Explicit , and communicative about what it is you’re trying to achieve. I think the key there is you would be doing the exact same thing if you were delegating that to a person, right, to a human being.
And it’s allowing you to get very crisp and articulate and in your head about, you know, the visualization of what success looks like for something. And that is an incredible skill no matter what. I think it’s a [00:06:00] really good way for any organization of any kind to think about future readiness is to be thinking about training and learning as being about how do we instill the kind of skills that are broadly applicable across emerging technologies?
Because all of the new agentic tools also rely on prompting. so how do we, you know, kind of get that basic modality in place that also happens to, you know, kind of foster these great human to human communication skills and delegation skills that kind of prepare you for management, that prepare you for leadership and so on.
So these are the kinds of things that I think that. today’s learning officers and, and, , HR people should be thinking about is how do we kind of get the most for the work that we’re doing to bring people along to a more future ready state.
Aidan McCullen: One of the things I thought about reading your book was the importance of decision making. It reminded me, actually, because of your background at [00:07:00] Netflix, of the brilliant series Ozark. Absolutely brilliant series, extremely violent for those who haven't seen it. But there's a voiceover at one stage, and what he talks about is how decisions, good and bad, really direct your life.
Sliding-doors moments that these decisions create. And when I think about that in the context of what you said there, about learning in a world in a state
of flux, which it always has been, though it seems a bit faster than ever before, decision making becomes absolutely key. A brilliant lady we spoke to before on the show is Ellen Langer, the first woman to be tenured in psychology at Harvard, and she gave me this beautiful line.
She said, everybody is caught up in this fear of making the right decision. She said, but you need to make the decision, and then make the decision right. The reason I say that is that not making a [00:08:00] decision is still a decision. And what I love about your book is that you give us a schema, a framework, to make better decisions, and you talk about the harms of action and inaction.
And I'd love to start there, because it's a great place to start with the book.
Kate O’Neill: yeah.
I think that insight you just shared is, is very related to that harms of action and inaction. Right? Because the, the idea is that I think people get hamstrung by this decision that sits in front of them, right? Not really thinking about the fact that this is just today’s decision. This is just this decision.
There’s gonna be another decision tomorrow and another decision after that no matter what you do, right? There’s still going to be. decisions, and I think you, we, underestimate the continuity of the sort of flow from previous decisions into the current decisions that we make. And I think the, the more we see that continuity, the more we recognize that through line, the less we feel.
[00:09:00] pressure of this one specific decision and we can instead focus on making it the best way we can for now, while respecting, as you pointed out, that harms of action and harms of inaction is, you know, recognizing that if we have all of the information that we think we’re realistically going to get.
Then we are delay, then our delaying on making that decision is causing us harms of inaction or costing harms of inaction. And if we don’t have all the information that we realistically think we can get, that we’re going to need to amass in order to make a more responsible decision then are we invoking the harms of action.
And it’s, it’s not a very difficult model. I think it’s one that’s fairly easy to ask yourself as sort of a sanity check in the moment of making a complex decision. And I think it’s one that I’ve heard back from, from readers has been really helpful and instructive in kind of in framing up, like, realistically, do I have the information that’s going to [00:10:00] allow me to navigate?
Am I invoking harms of action or inaction by making this decision in this particular way, knowing that tomorrow is just going to bring another decision and the day after that.
Aidan McCullen: And that comfort with decision making is something I really got from the book. But I loved what you talked about: a way to make better strategic decisions, whether personally or for an organization, is moving from questions, and everybody says, oh no, it's all about the better question, the more beautiful question,
and you kind of go, yeah, but move from questions to insights, and then to this term I love: bankable foresights.
Kate O’Neill: I am so glad you love that term. It’s my favorite, my favorite realization that I was able to put together for this book is, yeah, it, it was like a gift to the world. I felt like this process of moving from questions to insights is also no small thing because I think one of the things I realized over the course of the years putting this together.
Was that we have the wrong idea about how we learn. We think we learn by asking questions and [00:11:00] getting answers and voila, like we’re done, right? But actually what I point out is that when we ask questions, what we get are partial answers, and in fact, lots and lots of partial answers. In fact, everybody I ever asked this question, like how many times have you ever asked a question and gotten a different answer from every single person on your team?
Everyone pretty much raises their hand to that question because it’s a universal phenomenon. We all know that that’s true. There’s the sort, the sort of, metaphor of being on multiple sides of the elephant, right? We’re all seeing these problems in different ways, depending on what we’ve been tasked to do, what our priorities are, what our values are, like, what we’re, you know, what our skills are, and then when we.
Pull those partial answers together and synthesize those into, you know, where is the common truth? those partial answers, but also like what’s the tension between them? Why? Why is someone saying this and someone else is saying this? And those sound like two very different answers. [00:12:00] Why do they sound like different answers?
Am I asking not the right question yet? Or you know, are we pointing at something that maybe we just don’t understand well enough, et cetera? there, we are actually able to synthesize. The most profound thing in my, in my mind, in all of business wisdom is insights and insights are like this incredibly crystallizing lens that help us as we accumulate them, we can make more rapid decisions in ways that allow us to cut through this process much more, efficiently.
One example of that: we all know that Apple is a design-centric company, right? That's one insight about Apple. And if you're an executive at Apple, you know this insight in your bones. So if someone brings you a new feature they want to roll out in the latest iOS upgrade and it's not well designed, you have an instant answer: thumbs down, that is not ready to go in our company.
That [00:13:00] means go back and make sure it's designed well before it goes out. A very simple analogy, but I think in every organization there is something that feels as intrinsic to the DNA as design is to Apple. And when you know those things, you're able to make what I call ethically accelerated choices.
Then the point that you brought up, and this is my favorite thing: out of the process of making decisions using insights, you're making what is a timely approach for today, because you've chosen, this is how we're going to go forward today. And what it also gives you, almost like exhaust from that process,
is a bankable foresight. It's saying, look, I don't have to take action on this today, but I do have this sense that there's going to be something that matters in the future. And if we triangulate today's decision, and the decisions we make every day after that, toward the bankable foresights we're accumulating,
we begin to get a much clearer sense of what the future is likely to be, and that whole [00:14:00] shroud of uncertainty starts to lift. It is no longer this entirely blackened space where we just have no idea what's coming. We have actually built some equity in the future; we have built some sense of there being a scaffolding that we're leaning into, that we're building into, and the future is no longer as mysterious, as daunting, and as hard to make decisions around.
Aidan McCullen: I absolutely love it, and I envisaged it as a wall in an organization, ideally somewhere where everybody will see it, around the coffee machine or something like that, or the water cooler, if anybody has one anymore,
Kate O’Neill: Yeah.
Aidan McCullen: somewhere where you collect organizational knowledge or institutional knowledge. Because think about how often it's happened:
everybody listening to this show, I'm sure you've had an idea inside an organization; you leave, and so does that idea. It's gone forever. And I love the idea of somewhere where this [00:15:00] is a bank of knowledge and foresight, of attempts at stuff we tried before that might have been too early, because
oftentimes we throw out so much value that's inside the organization, bring in a new person, and they discover it all over again.
Kate O’Neill: right.
Aidan McCullen: just, that’s what I loved about that idea.
Kate O’Neill: Yeah.
Aidan McCullen: And there's also the idea that it's a North Star, or it creates a North Star, because you talk about a model that aligns several things:
focus, purpose, values, and resources, most importantly on an organizational level. I'd love you to share more about that.
Kate O’Neill: Yeah. So I think one of the things that, that, that the bankable foresights and, and what you’re just describing about that shared sort of common knowledge, does for you is that it allows you to cut through the decision making of, one of the things I talk about is this idea of the. preferred future and the probable future?
Well, first of all, I should back up and say one of the, the reframes that I’ll offer anyone listening is I think that we should get in the habit of talking about future [00:16:00] in a plural sense. So futures plural as opposed to future. And when we do that, what we recognize is that there is not one sort of deterministic line that we’re all following, and then we’re just kind of marching toward.
This inevitability, we’re shaping it every single day. And what we shape has this kind of, open, aperture, sort of conical sort of visualization in my mind. Like that’s how I think about it on the now next continuum, where I had that drawn and almost like a bow tie, right? And you sort of open up toward this, these incredibly, cascading futures.
And what happens is when you think to think about. Out of any given decision, out of any given problem that faces the organization, you wanna ask, you know, what is the most likely out of the scenarios that could happen, what is the most likely scenario to happen? Right? Like, if we’re intellectually honest and truly objective, the thing that’s most likely to happen is, you know, increasing regulation [00:17:00] around healthcare, or we’re going to see a decrease in the amount of people in registered in higher education, or whatever it might be.
We have that, and then what we have is our. most preferred outcome, what we would most like to see happen because it aligns with what we believe matters the most in how we, show up in the world and what we’re trying to offer with true value into the world. that incredibly shapes our next actions, because now we know that all we really have to do is move the most likely outcome toward our most probable, our most, sorry, most preferred outcome.
Every single day, every single decision we make, all the resources we’re aligned, all the focus, all the purpose, all the, all of everything we’re trying to do, every single decision that we face. We were thinking, what is the action I would most need to take, or the decision I would most need to make that would steer the inevitability toward what I most would like to see happen?
And that’s so [00:18:00] crystallizing, I think, for, for so many leaders faced with the complexity and the uncertainty and the, changing values and changing priorities inside of their organization or industry. is all that it really boils down to, and it’s a very easy lens to apply, I think every single day.
Aidan McCullen: I love it, and I actually love it on a personal level as well. When I read about this part, I thought about the bow tie, and how all these possible futures exist. It's kind of like quantum physics, in a way, where what you focus on, you bring into attention, you bring into reality.
And I have this concept in my head that your vision, or even articulating your goals, is like shooting a harpoon into the future, and then actually bringing it into reality is the harpoon part, where you pull it back again, so you pull it into reality, and that
Kate O’Neill: well
Aidan McCullen: that’s. Yeah, well that’s, that’s [00:19:00] kind of the way I visual, I’m a metaphorical thinker, so I thought about that as part of it.
But the really important part of the book, and we have to talk about this, is that I love the fact that you always put the human at the center. We're at a point where the technology on the planet has never been so good. We're on the verge of quantum computing alongside AI. And I don't think that people understand,
you mention it, the jumps. What are we at, 16 doublings of technological power? More, probably. And now AI is getting better at a rate that we just can't even comprehend. We don't know what it'll be like in a year, or two, or three years, if we can power it,
Kate O’Neill: Yeah.
Aidan McCullen: if we have the power to. But in there, the technology will run ahead, while we evolve very, very slowly.
We can get lost in that evolution. And I think that's such an important part of your writing and your talks [00:20:00] and this book, and I'd love you to share that as a real message to get through to our audience.
Kate O’Neill: Yeah. Well, I think one complimentary or sort of companion piece to this is, , my TEDx talk that I did in October. It’s called, , we Cannot Leave Meaning Up to Machines. And that is the sort of the underlying theme of that human friendly, human centered piece of what matters next is that there’s so much that I think machines are so fantastic for, and technology in general has been a fascination of mine my entire life.
And AI is something that I’ve been thinking about and learning about and, and speaking about for, you know, decades, not just the last, you know, five years or, or four years since, , the chat GPT moment really opened up, , attention for most people. But I think what, what happens with large language models, for example, in specific, is that they are so fluent and so good at simulating this.
Sense of the, the fluent use of language, , to, to trigger [00:21:00] our human, , sort of mirror neurons or, you know, all of our sort of human instincts of empathy to suggest that we are interacting with an intelligence like our own, maybe, you know, far beyond our own. And then we begin to treat it with the kind of empathy that we would treat another person.
That allows us to then sort of delegate some of the responsibility for decision making to this far greater intelligence. It allows us to say like, oh, okay, well let’s share our innermost secrets and landscape, you know, for, , the benefit of this, this thing to share back with us what we should be doing, how we should be making decisions.
All of these tools are incredibly useful if we use them in the right proportion, if we use them in the ways that they’re actually going to. Facilitate our wisdom and our meaning making, but we are the meaning makers. Meaning is of, of a function of human consciousness. It is a function of [00:22:00] being a, being in a body with senses and making meaning out of the senses that we.
The, the data we gather through our senses in our bodies and then having language kind of connected with those experiences, all of those components are a necessary part of the way humans make meaning. Now, I have said many times like machines, AI like may make its own meaning at some point in the future or may already be sort of making some sort of primitive meaning on an AI level, but it will not be the same kind of meaning that humans make.
And I think we need to respect. That the human meaning that our understanding of meaning has been at the core of all of culture for all of human civilization, and it’s part of all of our history and legacy. And I hope that it’s something that we are invested in handing down to generations ahead of us.
So what we need to do is make sure that we understand it in order to preserve it and protect it [00:23:00] even as we use increasingly sophisticated technology. To make ourselves more effective and efficient and to further our agendas and to further progress on behalf of humanity, on behalf of the human experience.
Those are two trains that sort of run in parallel, and we, we are not riding the exact same train, but we need to be very conscious of. What it is that drives human experience and what it is that furthers technological progress. And they are aligned, but not necessarily the same. So I think those, those are, , worthwhile thought exercises for people to spend some time with.
Aidan McCullen: Absolutely beautiful. I have loads more questions, but I actually loved what you just said, and it's a lovely message to land the ship on. For people who want to find out more: your name, Kate O'Neill, firstly, Irish background?
Kate O'Neill: A little bit.
Aidan McCullen: Apart from the name, where can people find you and find out about your work? You do workshops and keynotes all over the world. [00:24:00] Where's the best place, Kate?
Kate O’Neill: Koko Insights is my company. Yeah. And, and it has information about what matters next, as well as my other books as well as my speaking, my advisory work. And yeah, it has been an incredible, the privilege of a lifetime to get to work with so many companies at, so at a high level, but I also really enjoy just the fact that people of all kinds, people across every organization in every role are really struggling to understand how to bring themselves to this moment. And I feel that I’m hearing from people that they’re getting a lot of value out of reading this book as well. So I really hope that that’s true for folks.
Please feel free to reach out to me on LinkedIn or wherever you find me. I’d love to hear what resonates with you about it. So thank you, Aidan again for having me on the show. I appreciate that.
Aidan McCullen: Absolute pleasure. And I had an idea just all of a sudden. I remember you shared that you were doing a talk for a company and they gifted you your own Wheaties box,
Kate O'Neill: It's in the background.
Aidan McCullen: behind you, so [00:25:00] I'll ask our audience. Okay? Maybe you'll tell us about that, but I'm going to ask our audience: for anybody who's first to email us at the Learnovate Center and tell us the colour of the box,
I have a copy of Kate's book, What Matters Next, for you. Okay? So that's the competition. Maybe you'll tell us about it; it was such a cool gift from the client.
Kate O’Neill: it really was. Thank you General Mills for that. I, you know, every once in a while as a speaker you get this lovely speaker gift and it’s just like, well, you already paid me, but you’re now, you’re giving me a gift too. And it is so generous and kind for, for an organization to think that way. But yes, this is a life making, gift right here, so I’m very
Aidan McCullen: Brilliant.
Kate O’Neill: with that.
Aidan McCullen: Absolutely brilliant. Author of What Matters Next, Kate O'Neill. Thank you for joining us.
Kate O’Neill: Thank you, Aidan.
Please note that this transcript is auto-generated – please refer to the audio or video for exact wording.