In this episode
00:15 Debunking the AI Job Replacement Myth
02:53 The Automation vs. Augmentation Debate
05:38 Future Job Constraints and AI
07:27 Evergreen Skills in the Age of AI
10:58 Productivity Gains: Coordinators vs. Creators
15:54 The Workflow Continuity Fallacy
17:18 The Neutral Tools Fallacy
21:30 The Stable Salary Fallacy
24:16 The Stable Firm Fallacy
25:22 Capturing Value in the AI-Driven Future
You can find Sangeet at platforms.substack.com
Transcript
Aidan McCullen: AI won’t take your job, but somebody using AI will. It’s the kind of line you drop in a LinkedIn post or, worse still, hear on a conference panel to immediate zombie nods of agreement.
It’s technically true, but it’s utterly useless. It doesn’t clarify anything. Which job does it apply to? All jobs. And what type of AI? What will someone using AI do differently apart from just using AI, and what form of usage will matter versus not? It makes you feel like you’ve figured something out. You conclude that if you just use AI, you’ll be safe. In fact, it gives you just enough conceptual clarity to stop asking [00:01:00] the harder questions that really matter.
How does AI change the structure of work? How does it restructure workflows? How does it alter the very logic by which organisations function and, eventually, what do future jobs look like in that new reconfigured system? The problem with “AI won’t take your job, but somebody using AI will” isn’t that it’s just a harmless simplification.
The real issue is that it’s a framing error. It directs your attention to the wrong level of the problem while creating consensus theatre.
It directs your attention to the individual task level — automation versus augmentation of tasks you perform — when the real shift is happening at the level of the entire system of work. The problem with consensus theatre is that the topic ends right there. Everyone leaves the room feeling smart, yet not a single person has a clue how to apply this newly acquired insight the right way — true, [00:02:00] but utterly useless.
To shed some light on the future of work and to avoid such consensus theatre, we are joined by the author of those words and a fantastic new book that questions just about everything in the age of AI. We welcome to the show the author of Reshuffle, Sangeet Paul Choudary.
Sangeet Paul Choudary:
Aidan, such a pleasure to be here.
Aidan McCullen:
You hold nothing back with that introduction. I actually pulled it not from your book Reshuffle, which is an Amazon bestseller, but from one of the articles you wrote about the future of work, and I’d love you to unpack it in more detail, including some of the fallacies you call out as inherent in that statement: AI won’t replace you, but somebody using AI will. So let’s get stuck in with perhaps an overview, and then we’ll pull out one fallacy at a time.
Sangeet Paul Choudary:
The key point is that very often we think of AI’s impact on our jobs as simply [00:03:00] its impact on the tasks that we perform. So we assume it will either help us speed up our tasks and do more, augmenting our work and our ability to perform it (what we call augmentation), or take over some of those tasks, automate the work that is performed, and hence replace us. We think of things in these binary terms, and that’s where this idea of “AI won’t take your job, but someone using AI will” comes in, because it’s a way of saying it’s not automation but augmentation that’s going to happen. So augment yourself, because AI won’t automate your job away, but somebody augmented with AI will come and take your job instead.
Now, all of that sounds very neat and binary, but the challenge is that it assumes that jobs will continue to be the way they have always been. They’ll continue to be that way going forward, organisations will continue to be the same, and the only thing that’s going to change is whether a certain set of tasks are being performed by a human or by [00:04:00] AI. And I challenge that in all of my work and in my book Reshuffle because I believe that’s not really true.
Jobs do not exist in order to perform tasks; jobs exist in order to solve a problem in a system. And when that system changes, the constraints that the jobs were supposed to address go away. The job’s logic may not make sense. So I’ll give an example which will help bring this to light because it’s one of the fallacies that I point out. The example I’d like to take is what happened with typists when the word processor was first invented.
Word processors did not automate typing. So if you were to think about it in terms of the binary of does it automate or does it augment you, word processors definitely augmented humans in typing, and yet the job of the typist went away. That is because the job of the typist did not exist because of the task of typing. Even though the name typist suggests [00:05:00] that, the job of the typist existed because of a constraint in the system. The constraint was that editing a document was very expensive, and hence you needed highly trained typists so that the cost of editing could be kept as low as possible because the error rate would be low.
Editing of documents became cheap with the word processor. Just hit the delete key and start editing documents on the fly. Anybody could be a typist. And so the need for specialised typists went away — not because the task was automated, but because the problem that the job was addressing no longer existed.
And I think that’s the key thing we miss today when we treat jobs as stable constructs and debate whether AI will take your job or someone using AI will. The bigger question is: what will jobs look like? What are the constraints on the basis of which jobs are structured today? Once AI comes in, which of those constraints remain, and accordingly, which new jobs emerge that do not exist today? That’s [00:06:00] what we should really be looking at, rather than looking at AI’s impact on individual tasks.
Aidan McCullen:
Let’s make this applicable for people, because there is a lot of fear, particularly among people thinking about their children: what will happen to my children, who may be studying today? Maybe there are no onboarding jobs, no intern jobs available now, because people are outsourcing that work to AI.
So I felt the fallacies you address help us understand that. Maybe I’ll call out each of the fallacies and you’ll unpack them, Sangeet. The very first one is the automation versus augmentation fallacy, and the call-out here, at a high level, is: how do we shift from task-level thinking to system-level thinking?
Sangeet Paul Choudary:
I think that’s the real thinking we need today, because the idea of focusing on what skills will make you future-ready is also based on this task-level thinking, which assumes that certain skills will be valuable and others will not. A key point I’d like [00:07:00] to make is that when you are thinking about what kinds of skills to invest in, or what kinds of future job scenarios to work towards, two things are important.
One is that you need to think about which skills remain unchanged no matter how powerful AI becomes or how the job landscape shifts: which skills remain completely unaffected. The effect AI has on a lot of knowledge work today, whether it’s consulting, legal work, or research, is that it dramatically collapses the cost of getting to an answer. A lot of our work, whether you’re a lawyer or a consultant, and especially in professional services, has been built on the idea of billing by the hour, and billing by the hour essentially assumes that getting to the answer takes a lot of time.
What happens with AI is that certain forms of performance and [00:08:00] answer generation become much less expensive; getting to an answer no longer requires the same amount of time. But when you can just throw a prompt at these AI models and they generate answers on the fly, the constraint shifts away from actually generating answers to asking the right questions and, once the answers are generated, determining which ones to elevate and which ones to reject. Those are the two things I think of as evergreen skills: curiosity and curation.
If generating answers with an LLM is as easy as throwing out prompts, you can keep asking questions, and if they’re not the right questions, you’re just going down the wrong rabbit hole. So having the ability to ask really good questions, which often comes from a good understanding of a domain, and being constantly curious about whether you’re [00:09:00] asking the right ones, is going to be very valuable.
And the second thing that’s very valuable is having the sense over time to know what to elevate and what to reject. And that’s the idea of curation that I talk about. So the first point that I’m just trying to make over here is that skills that will really matter will shift away from just generating answers and getting work done to really having the ability to ask good questions, having the taste and the risk-taking ability to choose the right answers and then stick by them.
And the second point that I want to make is that in order for you to determine which skills are necessary in the future, you can’t just say skill A, B, C are skills of the past and skills E, F, G are going to be skills of the future. Skills are only valuable in relation to a system. And so you need to ask yourself, if you’re training to get to a certain kind of work — if, for example, you’re learning and training to get into the legal industry — what is going to be [00:10:00] valuable in the future? What’s going to command a skill premium? It’s going to be determined by how the legal industry will work in the future. What will be the new revenue models that come out? How will companies differentiate themselves if the pay-by-the-hour logic goes away? What’s the new logic that comes in its place?
So if you’re a student, you need to start developing that muscle of looking at the larger system, because it’s not just about getting your first job. All through your career, because we are in a period of such rapid change, you have to keep sensing the system and determining on an ongoing basis how things are changing and which skills will be valuable as they do. So, in summary: look for what doesn’t change, and to spot what does change, don’t stick to static lists of skills. Instead, develop a sense of how your target industry is changing, how new forms of company are emerging, and how the way companies differentiate themselves is shifting. That’s what will help you determine how to be valuable.
Aidan McCullen:
So the next fallacy [00:11:00] is the productivity gains fallacy. And the big call-out here is: why do productivity gains so often benefit coordinators, not creators? I’d love you to share what you mean by that — coordinator, not creator.
Sangeet Paul Choudary:
This idea of coordination versus creation is actually the central thesis of Reshuffle. In general, when a new tool helps you do more work in less time, that sounds like progress, right? But it is actually progress only if the overall system remains stable, that is, if your previous workflows still work the same way and the organisation and the business model remain stable. If you do things faster, you’re winning. But if the whole system changes, then the way people used to succeed in their jobs, and the skills for which they were paid, may no longer apply.
And this point applies at the level of individuals and at the level of companies. I’ll take the example of how it played out in competition between companies. In the apparel industry, factory productivity has increased dramatically over the [00:12:00] past two decades. Factories have always applied tools to improve productivity, and they’ve applied automation to compress cycle times and push more units through the system. But faster production has not necessarily led to higher salaries at that level of the value chain.
The workers in the factories are not getting the benefit of the faster productivity. In fact, other players are capturing those benefits, and that’s the point I try to make here: even though workers move faster and produce more output in the apparel industry, it’s companies like Shein, which coordinate these workers across disparate factories and decide what gets made, when, and in what volume, that capture most of the value. So even as labour productivity rises and workers produce more, they’re unable to capture that as salaries for themselves.
So even when companies adopt AI to accelerate tasks, they soon [00:13:00] realise that when everybody’s using the same tools to do the same tasks faster, productivity becomes a commodity. And this is true not just at the company level, but also at the individual level. So even at the individual level, just augmenting yourself is not sufficient. You need to start thinking about where the new value will be captured — whether it’ll come in your direction or it’ll come in another direction. And that’s the point about the creation versus the coordination. The factories create, but a company like Shein coordinates and captures most of the value.
Aidan McCullen:
You can even tell now when people use AI to write: you can see the patterns, and the writing has become vanilla. And in a way you get angry, I get angry, because the difference is this: you might use it to simplify, but you shouldn’t be using it to come up with the ideas in the first place. And certainly the way it writes… I have a huge fear now with students. So many of the audience of this show will be [00:14:00] teachers or college professors, or work in the science or neuroscience of learning, and this is something we need to be very, very careful of.
Sangeet Paul Choudary:
That’s absolutely right. Just to make it very practical and link it back to the typist example I gave: I use AI to help me with my writing, but I never use AI to actually write. The reason is that I looked at my workflow and realised that, as a writer, the biggest constraints were not the actual task of writing. Once I start writing, it comes out.
The first challenge is one that every writer can relate to: you open up your laptop in the morning, you stare at a blank page, and you don’t have anything to get started with. What AI lets me do now is open up my laptop in the morning and start debating a topic with it, and within a few back-and-forth questions and answers, my mind starts working and then I can start writing.
The other thing I often used to struggle with was that I would write in a serial sequence and [00:15:00] then complete the whole thing, or, if I was writing a book, complete it over weeks, and only then get it edited. Now I can write in a sitting and get it edited in real time. Those were constraints that used to hold me back, and just addressing the constraints magically unlocks your ability to write. It’s not about creating more output; it’s about expressing yourself once the constraints that held you back are removed.
Aidan McCullen:
It removes bottlenecks and obstacles in so many ways. I also use it when text is written in a very academic way: you know, the times you read a page, then read it again and go, I think I’ve read this page already. I ask ChatGPT to make it readable for a 12-year-old, or to explain it to me like I’m a 10-year-old, and it makes that workflow much, much easier.
One of the other fallacies I thought we’d talk about was the workflow continuity fallacy. We see this with so many organisations — they optimise and optimise and optimise, but they don’t actually [00:16:00] question what they are optimising in the first place.
Sangeet Paul Choudary:
Yeah, I think this is a fallacy that comes up repeatedly, especially when we talk about what is called agentic AI. We look at today’s workflows and try to speed up the tasks within them, but we don’t necessarily ask: should the workflow exist in the first place? Every workflow has existed for a certain reason, and when a new technology comes in, we should ask ourselves: given its capabilities, should we even have this workflow, or should we do things a fundamentally different way?
Failing to step back and question why the workflow exists in the first place keeps us spinning the wheels faster in the wrong system. I’m not saying that every workflow we had in the past has to change, but before we start speeding up workflows, the first thing we should do is ask: should this even exist in this particular form, or should it be done a different way?
Aidan McCullen:
The term we often use is “the faster mouse”. We don’t have mice any more, you know? You have to think ahead about whether that job will still exist in the future at all, and to do that, you need to understand the system in the first place. One of the other fallacies you talk about is the neutral tools fallacy, and the question you ask here is: how does AI redistribute organisational power without anyone noticing?
Sangeet Paul Choudary:
This piece is important because we often think of tools as neutral, but they’re not: tools in general carry default settings and preferences. I’ll give a simple example. You and I get onto meetings all the time, and we have our AI note-takers in the meeting with us. In itself, that’s a productivity tool, but when you apply the same thing across an organisation, you realise that organisational priorities are now being shaped by the LLM, because it is deciding which notes to [00:18:00] highlight and which to deprioritise.
And this is something we’ve seen across the board with previous technology as well. When spreadsheets, Microsoft Excel, first came out, the people who mastered Excel ended up having a lot of influence through much of the 1990s, when process optimisation was a very big deal, because decisions that were previously made on gut instinct could now be modelled and simulated. Anybody who had been making those decisions on gut instinct no longer had that power, because they didn’t have the data to show for it, whereas the Excel jocks now did.
So with both the note-taking example, where the technology itself starts determining organisational priorities, and the Excel example, where equipping certain kinds of people shifts decision power in their direction, my point is that adopting a technology does not have the neutral effect of simply speeding up what you were already doing. It fundamentally changes where power sits [00:19:00] in the organisation, and that in turn changes which kinds of jobs will hold power and command a higher salary in the future.
Aidan McCullen:
Sangeet, for people who are wondering how that actually affects them, could you give us an example that comes to mind?
Sangeet Paul Choudary:
Yeah, I’ll give a couple of examples from previous technologies, and when I say previous, they are not that old; we are seeing this happen right now. A simple example: before the rise of mobile phone usage in the enterprise, field sales forces had a lot of power because they were close to the customer. They had to make decisions on the fly, they understood customer sentiment deeply, and head office did not have any of that information.
But once field forces were equipped with mobile applications, capturing data right in the field and continuously entering information from the customer, [00:20:00] all of that information flowed back to head office instantly. And so the decision power moved to head office. Even as the number of field-force jobs has increased over the past decade, salaries have progressively decreased, and both the job increase and the salary decrease happened on account of the same technology: mobile connectivity, which also allows an organisation to manage larger field forces.
That technology shifted power away from the field forces and back to head office, which is why far more decisions are made centrally than in the past. And because decisions can be made centrally, you can have people with less training doing much of the field activity. The potential worker base that can fill those jobs increases, and that again pushes salaries down. So that’s an example playing out in real time, where a shift in decision power caused by a new set of tools changed the salary [00:21:00] structure of a whole industry.
Aidan McCullen:
That’s one that’s so obvious once you see it, and when you see the system behind it, you can actually design for it and understand it. To your point, if you’re worried about the future of work, decide where to play in the future, because what’s valuable now might not stay that way, maybe from a parent’s perspective looking at the future for their children.
The next one is the stable salary fallacy, which is closely linked to what we just talked about.
Sangeet Paul Choudary:
Yeah, absolutely. The stable salary fallacy is essentially the point that just because you still have a job does not mean you will still have the same salary you have today. I’ll give a couple of examples to illustrate why this happens. One goes back to the days of logging, when lumberjacks used to fell trees with saws they wielded by hand. There was a certain skill, a certain level of training, a certain instinct involved in [00:22:00] using a saw well, and then there was obviously the physical power involved. So not everybody could be a good lumberjack. But with the invention of the chainsaw, the barrier to becoming a lumberjack went down: you could use a chainsaw to start felling trees without necessarily having the same level of skill or brute power.
The key point is that when a new technology comes in, even if your job continues (lumberjack jobs, and jobs across the entire wood-cutting industry, actually increased after chainsaws came in), salaries can decline and stay depressed. That happens because new technologies, precisely by augmenting you and making you better, tend to lift the output of less trained and less skilled people closer to that of the more skilled. So there’s a flattening of skill, and the premium that was associated with high skill goes down.
Another example, one we see playing out every day, [00:23:00] is what happened to London cab drivers when GPS navigation came in. Before that, having the map of the city in your head was a huge competitive advantage. If you were a London cabbie, you knew the streets inside out, and it was impossible for somebody completely new to start driving around London. But with a GPS they could. And then, when Uber overlaid a market-making mechanism on top of the GPS, where you just had to log into an app to get jobs, the entire skill of knowing how to navigate London lost its value. You were no longer paid for it.
You also no longer earned a higher fee because cabbies were scarce: as the number of drivers increased, the market discovered the minimum price at which a ride would be accepted, and that’s what Uber does. So it’s a combination of both these things. Today, a lot of people in knowledge work may feel the Uber story does not really apply to them, but keep in mind that the more AI [00:24:00] improves, the more it has the same effect on all knowledge work that maps had on the act of driving: it flattens the skill needed to perform that work. And the more that happens, the more you see this compression of salaries, even though the job continues to exist.
Aidan McCullen:
Which is tightly coupled to the final one, which is the stable firm fallacy.
Sangeet Paul Choudary:
Yeah, exactly. All of these things come back to the idea that AI is not just making today’s firm and today’s business model better and faster; it’s changing the nature of the firm completely. This works at many different levels, and I’ll give an example we’ve already touched on: the impact that faster factory automation had on business models like Shein. The nature of the firm completely changed because, in the past, you did not have this kind of central coordinator. You had [00:25:00] relatively siloed supply chains, where a manufacturer would work with a buyer and any productivity gains would be captured largely by the manufacturer. The nature of the firm changed when the buyer was no longer a single buyer working with its own set of manufacturers, but a global buyer like Shein coordinating a huge number of manufacturers. So that’s one way the nature of the firm itself changes.
Aidan McCullen:
Let’s leave people with a positive, a takeaway they can actually do something with. Many people are consultants, and we have shifted so much towards a knowledge economy; in a knowledge economy, when knowledge becomes easily accessible, it changes the game and shifts the value. You talked about a framework for how people can compete and actually capture some of that value in the future amidst the LLMs.
Sangeet Paul Choudary:
Absolutely. As you rightly pointed out, a lot of our knowledge-based industries have been built on the assumption that knowledge is the bottleneck, that knowledge is constrained, and that the scarcity of knowledge [00:26:00] allows us to capture a premium for it. The more difficult certain kinds of knowledge are to acquire, the higher the salary associated with them tends to be.
What happens as AI improves is that many forms of knowledge work become increasingly accessible, increasingly commoditised: not because AI replaces humans, but because more people can perform that work when augmented with AI. So salaries compress because of that.
But I believe that there are two other forms of constraints that have always been there and that will reassert value even when AI’s task performance in itself improves. One thing that always has value in any organisational system is the ability to assume risk. There are many examples of jobs where almost every feasible aspect of the job is automated, and yet people who perform the job are paid really well because they assume the risk associated with the decision.
This obviously [00:27:00] shows in CEO pay today, but even if you don’t agree with that and say it’s inflated, let’s take another example: think of the anaesthesiologist who administers anaesthesia in an operating theatre. Every single task the anaesthesiologist performs is today performed by a machine, so every part of the job is automated. What he gets paid for, and paid a premium for, is assuming the risk of managing the patient’s condition in real time in the middle of an operation.
So, going back to “AI won’t take your job; somebody using AI will”: automation, augmentation, those things don’t matter. What really matters is: what are the riskiest decisions that need to be taken in your organisation, and who is taking them? Who is sitting at those decision points? If you see AI coming in and impacting your job, you can use this as a rule of thumb: if I assume the risk, I can capture the value associated with it. That obviously requires judgement, which is in its own right a uniquely human [00:28:00] capability that comes with experience and with taste.
Every time a technology is rolled out and adopted organisation-wide, you start seeing new cracks, because every part of the organisation adopts different technologies in different ways, speeding up processes at varying rates. So coordination breaks down. Again, I’ll take an example from healthcare: over the past 10 years, health systems have adopted electronic health records, digital billing systems and so on, and more recently they have been adopting AI.
Amid all of this, one of the clearest examples of where things break down involves people with chronic health conditions who need to be managed over an extended period of time. Because every part of the hospital and every person involved uses a different set of tools, the result is a fragmented patient journey. The patient is left confused, receiving reports from [00:29:00] different kinds of tools with nobody guiding them through the whole journey, especially if they have a chronic condition.
And so an entirely new job emerged over the last 10 to 12 years: the nurse navigator, whose role is entirely to manage the patient journey end to end and ensure a seamless experience while coordinating across all the different tools and roles the patient interacts with. New coordination layers are always very valuable when a technology is rolled out. So keep those two things in mind: where can I assume new risk, and where do things break down that I can solve as a coordination layer? Then bundle your valuable skills around those points. Your skills are valuable, but they have to be bundled close to these points for you to actually capture that value.
Aidan McCullen:
For that type of thinking and much more, I highly recommend Sangeet’s Substack, which is a magnificent read every week. He tackles different aspects [00:30:00] of how AI reshuffles everything, from your job to your organisation to entire industries, with loads of examples and brilliant analogies and metaphors. I also recommend his book Reshuffle, which goes deeper into a lot of these concepts. Sangeet, for people who want to find you, where is the best place?
Sangeet Paul Choudary:
I think Substack is great. It’s called platforms.substack.com. I write every week over there, and you can look up the book Reshuffle on Amazon, and my website is platformthinkinglabs.com.
Aidan McCullen:
Sangeet Paul Choudary, thank you for joining us.
Sangeet Paul Choudary:
Thank you so much, Aidan.
In this episode of the Inside Learning Podcast, we are joined by Sangeet Paul Choudary, the author of the Amazon bestseller 'Reshuffle.' Sangeet challenges common misconceptions about AI in the workplace, highlighting how it reshapes job structures, workflows, and organisational power. Learn why phrases like 'AI won't take your job, but somebody using AI will' are oversimplifications and find out what truly matters for future job security. Tune in for an in-depth discussion on the fallacies surrounding AI, the shifting demands of job skills, and the emergent coordination roles in new technological eras.