Research Talks – Personalised Coaching
For more information on Personalised Coaching, visit our Research projects page.
- Listening time 25 minutes
Transcript
Eric Paquin: Welcome to Conversations and Research from Learnovate. I’m Eric Paquin, head of Research, Innovation, and Technology. In this series, I talk with Learnovate researchers about the ideas shaping our work and the trends transforming the future of work and learning. These conversations are designed to open up our research in a different way, less formal, more reflective, and closer to the real questions emerging from industry and practice. In this first episode, I’m joined by Janet Benson and Ilse White to discuss how AI is powering personalized coaching, where the technology is already adding value, where the risks lie, and what responsible use could look like in practice. Janet, Ilse, thanks so much for joining me.
Janet Benson: Thanks, Eric. It’s great to be here.
Ilse White: Yes, delighted to join the conversation.
Eric Paquin: Okay, let’s get started. So, Janet, before we bring AI into the picture, can we start with the basics? When you talk about professional or organisational coaching in the context of this research, what do you actually mean?
Janet Benson: Yeah, so for this particular research, we define coaching as a combination of psychology and adult learning in that kind of corporate learning and workplace learning space, where a one-on-one development process is facilitated by a human, or in this case, an AI coach. So the aim is to help people increase their self-awareness, their self-efficacy, both for their own benefit and also for the benefit of their workplace or their organisation.
Eric Paquin: I see. So this is quite different from mentoring, counselling, or therapy.
Ilse White: Yeah, exactly it is. It’s important to be clear that coaching isn’t the same as mentoring, counselling, or therapy. Each of those practices have different goals and boundaries, and sometimes they require different levels of qualifications or regulation. Although it’s safe to say that there can be some elements of mentoring within business coaching contexts.
Eric Paquin: Thanks, Issa. You recently asked a simple question at Learnovate Link and Learn, and Janet, you asked the same at Learnovation last October. The question was, have you ever asked ChatGPT for professional advice? Why start there?
Ilse White: Because many people already have and they ask those tools very frequently. Tools like ChatGPT or Copilot or Gemini are frequently asked for things like writing feedback or a tricky e-mail or helping someone think through a career move or maybe something that’s a little bit more difficult, like how to manage a conversation with a colleague. that you have in conflict with. So that behaviour is quite close to the things that people might bring to a human coach as well.
Eric Paquin: So in a way, people are already informally coaching with AI, whether we’ve designed this for it or not.
Ilse White: Yeah, exactly. And that’s why we want to understand the benefits and the challenges of it and what a more, I suppose, intentional or safer use of AI in coaching could look like.
Eric Paquin: Yeah, that’s interesting. And Janet, Can you tell us a bit about the project you both conducted? What were you trying to explore and how did you go about it?
Janet Benson: Yeah, so I suppose at Learnovate we follow up kind of a structured research process. So we kind of move from the desk research to looking at the state of the market, the state-of-the-art, what’s happening in the coaching world at the moment. to workshops and then to validation with professional coaches. So we started kind of by reviewing the emerging literature in the market and then we engaged some practitioners, so some actual coaches in some discussions. We did semi-structured interviews. And then finally we checked our insights with a broader group of coaches, but also business leaders, so those people who would be bringing a coaching solution into their organisation. And then also doing a survey with people who have received coaching, either from a human or an AI or both.
Eric Paquin: So it wasn’t just a tech exercise. It was grounded in both research and practice.
Janet Benson: Yeah, exactly. I mean, we want to understand not only what’s technically possible, which is obviously very important to us, but also what’s acceptable, what’s ethical, and then also what’s useful for real coaching contexts.
Eric Paquin: Okay, so well, let’s get into the heart of it. So when you look at coaching, where can AI genuinely help today?
Ilse White: Yeah, there’s some clear areas where AI can really add value to coaching. One thing that’s consistently mentioned is personalization. So tailoring questions and resources and kind of reflections to the goals that an individual might have and to their particular context. And another one is the benefits of accessibility and affordability. AI can make elements of coaching available to a lot of people in organizations or just, people who are out there who could normally never really afford one-to-one human coaching at a scale or at a frequency that they might require or need.
Janet Benson: And I’d add as well, it’s in relation to, I suppose, that scaling piece that AI can kind of help organisations make coaching type support available to so many more employees. So at the moment we see sometimes it’s just senior leaders, but then you can also in relation to scaling, you can use AI for things like tailored feedback, goal setting, automation of some of those routine admin tasks around coaching. So things like scheduling and note taking or reminders between sessions. So it’s handy for the coach to be able to use AI for some of that heavy lifting of that more administrative side of the coaching offering.
Eric Paquin: Okay, so what you’re saying is that AI is good at the structured part, the repeatable or data-driven kind of aspect of coaching support?
Janet Benson: Yeah, exactly. Especially for like simpler, clearly defined coaching topics or for supporting the coaching process, as we were saying, around sort of some of those human-led sessions.
Eric Paquin: Okay. You also highlighted in your research a set of kind of maybe benefits, things that sound attractive, but that are more kind of unclear, ambiguous. Like, what are those?
Ilse White: Yeah, the two that come up a lot are the fact that AI is always available. There’s a question of whether or not you want that or not, and that it can stimulate or simulate cognitive empathy. The idea that you can always have an AI coach at any hour that never gets tired, it never judges you, it always has time for you. Yeah, that’s appealing, but it can also be a disadvantage or something that people just don’t want.
Eric Paquin: So yeah, to me, that sounds really compelling, but as you mentioned, there are concerns around it. So what are these concerns?
Ilse White: Yeah, being always available isn’t automatically a good thing. There’s a real question of whether we should build deep and vulnerable relationships with the system that really doesn’t truly understand us because it doesn’t think like us and it doesn’t share our duty of care. AI is pretty good at mimicking empathy and language, but it doesn’t actually feel it, right? And that raises questions around the authenticity of the conversation and the safety that a coachee is in. And also, if you’re on holiday or just winding down for the day, do you really want to be constantly receiving, not just from your AI coach? I’m not sure how many people would really be interested in that.
Eric Paquin: Yeah, I probably wouldn’t like that myself. But let’s look at the challenges. Because your research surfaces many. Janet, what comes up first for you?
Janet Benson: Yeah, I think it wouldn’t be too surprising to people that the big ones are those kind of ethical considerations and concerns around privacy. So obviously, coaching conversations can often be quite deeply personal. So we have to be sure that we handle the data responsibly, that coaches understand where their information goes, and that the AI systems aren’t used in ways that undermine, you know, autonomy or confidentiality or even fairness.
Eric Paquin: Yeah. But beyond ethics and privacy, there’s also this idea of the kind of the human element.
Janet Benson: Exactly. And I think it’s kind of touching a little bit, like, I mean, coaches worry about the lack of the human element there as well. You know, there is that cognitive sort of element of it, but the lack of a real human element, which can include real empathy, you know, a nuanced understanding of human behaviour and that sense of duty of care that a coach has to provide to coachees. And there’s also a risk that AI can miss some of those cultural nuances, because they can often be crucial in interpreting what a coachee says and what they don’t say, which isn’t something that’s easy to pick up by technology. It’s more of a personal element.
Eric Paquin: I see. So even if the AI sounds polished, it may not truly grasp the context, the identity of the person, or the cultural background in a way that a skilled human coach would.
Janet Benson: Exactly. And that’s, I suppose, Eric, where some of the deepest concerns sit at the moment.
Eric Paquin: Okay, and you also flagged some very technical challenges like hallucination and jailbreaking. Issa, can you unpack those for us in a coaching context?
Ilse White: Yeah, absolutely. And some people might be familiar with them in a general AI context, but when you apply them to coaching, AI hallucinations can occur when the model kind of produces a really confident answer that’s just simply wrong. And in coaching, that could mean that a coachee is giving misleading feedback or advice that sounds really plausible, but that’s not grounded in reality. And then on the other side of the coachee, it’s jailbreaking. And that’s when the coachee or the user intentionally or unintentionally pushes an AI system to bypass its own boundaries and safety guardrails. And that might, again, lead to the system to violate its policies, to make decisions that are influenced by one particular user, for example, as it retrains itself on those. And it might even carry out malicious instructions.
Eric Paquin: And if that happens in the highly sensitive coaching conversation, the harm could be very serious.
Ilse White: Yeah, exactly. And that’s why safety and responsible design are so central when we talk about AI coaching, it’s really not a nice to have.
Eric Paquin: Not really, no. Janet, you also looked at some empirical research on current AI coaching capability. How good is AI as a coach right now?
Janet Benson: Well, yeah, a recent study we looked at by Passmore and Rutschmann, and I’m not sure if I’m saying that third name correctly, evaluated AI coaching agents against professional human coaching competencies. And they found that like AI coach or or chat bots, coach chat bots, can already coach comfortably at the International Coaching Federation Association, Association, sorry, it’s a long word, Coaching Federation Associate, Certified Coach or ACC level, and for some sort of narrow, well-defined tasks that can even reach professional certified coach or PCC level, which is quite amazing if you think about it.
Eric Paquin: Yeah, well, that’s quite impressive, especially given how much time investment it takes a human coach to reach those levels.
Janet Benson: Yeah, exactly. So it highlights that, you know, while these coach bots are backed by significant coaching and research expertise, they are, as that paper I mentioned, the title suggests, they’re getting better all the time, but that doesn’t automatically mean that they should be used for every type of coaching scenario.
Eric Paquin: Yeah. And one thing I liked in your session was that you showed multiple models of how AI can show up in coaching, not just AI replacing the coach. Can you walk us through those?
Ilse White: Yeah, absolutely. That’s interesting because there seems to be a bit of a gradation in them. So one model is like in the one extreme is coaching emulation. So that’s where the AI coach acts like a fully autonomous coach. So that’s where you get more of the headline discussions at the moment. So it’s This is where an AI chatbot is running an entire coaching session independently. But then there’s other models as well that are a bit more hybris where you have coach support. And that’s sometimes called triadic coaching because it involves the coach, the coachee, and the AI, and they all work together. And the AI ML might help with things like onboarding or needs assessment or administration tasks. or things like sending nudges and reminders and setting tasks for the coachee in between sessions that they have with their human coach.
Janet Benson: And then also there’s like coach education was another one that we saw. AI kind of acts as a trainer of coaches, which is kind of an alternative approach. So coaches use LLMs or large language models to practice their coaching conversations to, you know, and they use it to find activities for specific clients or review years of coaching notes to spot patterns and opportunities for improvement.
Ilse White: Yeah, absolutely. And that’s you. used at like the coach conversation or the conversation level as well with coaching data analysis, where a coach might use AI as a pattern detector, like you mentioned. So looking across coaching notes and conversations to identify patterns in behaviors for individuals that a human might not spot. And then kind of more broadly look at that across an organization and see if there are any behavioral trends that, you know, need to be better supported or addressed.
Eric Paquin: So the story is a lot more complex than just AI versus human coach. It’s also AI with and for human coaches.
Janet Benson: Yeah, I’d say that’s exactly it. And I guess that’s where we kind of see a lot of the potential as well.
Eric Paquin: Thanks, Janet. And you also mentioned a framework that tries to embed quality and ethics into AI coach bot design, the DAIC framework. What’s that in simple terms?
Janet Benson: So the design AI coach bot or DAIC framework, as you mentioned, so proposed by Terblanche, so somebody who’s very well known in the space of coaching and AI and coaching as a researcher. It starts from the idea that the AI coach bot sits within an organizational context and must embed the key aspects of the coach-coachee relationship at its core. And then some of these aspects include things like trust and stuff we’ve already kind of mentioned. empathy, you know, transparency, predictability, reliability, ability, you know, benevolence, integrity, anything like that. And also, obviously, you know, that something we touched on earlier was that ethical conduct.
Eric Paquin: So it’s not just about kind of clever prompts. It’s also about kind of building in all that coaching ethics and quality into the system that you’d be designing for a coach spot.
Ilse White: Yeah, and that’s super important because that the model, the DAIC model emphasizes that the AI should be based on like a theoretical coaching framework or a theoretical coaching model, and that it should also always support a specific coaching outcome rather than just giving general life advice. And even though this particular model dates back to 2020. It’s maybe not that long ago, but in the context of AI, nothing stays relevant for very long. But in this case, that model still very much applies and is very relevant still.
Eric Paquin: So to stay with that DAIC framework, you mentioned it’s paired with five design principles for CoachBot. So can you talk us through those?
Ilse White: Yeah, sure. So the first one is around managing user expectations about what the AI coach can do and what it cannot do. It’s important to realize that AI is not a silver bullet and it’s not a substitute for every kind of human support or human support at all. And then the second and third principles of the model focus on acknowledging and adapting to changes in chatbot behaviour and ensuring reliability and recovery from errors. So in other words, the system has to be able to catch itself, if you like, and correct its own mistakes and prioritize the coachee’s safety. And that’s the thing that’s very hard to do. The fourth principle of the model is to encourage appropriate self disclosure. Research is showing that people really trust AIs with a lot of personal information. So we need to train our models and remind coachees to be very careful and maintain a critical perspective of what they are sharing with their AI coach boss. And then finally, designers have to consider the level of human likeness in chatbot design. And that’s kind of another kind of safety element, I suppose. So, because the more human-like the chatbot feels, the more people may over-trust it or forget that it has limitation and is ultimately still a machine that you’re trusting your very confidential information with.
Eric Paquin: So these principles, those design principles, they’re partly about the tech, but they’re also about educating the coachees and managing kind of the relationship between coach and coachee.
Ilse White: Yeah, exactly. It’s really as much about digital literacy and boundaries than it is about the technical element and the algorithms of AI coach bots.
Eric Paquin: I see. And you also map the coaching market into different segments. So from kind of low tech to high AI. Janet, can you run us along that spectrum?
Janet Benson: So at one end, you sort of have those experienced coaches who use little or no technology and whose practice is centered fully on that human relationship. And then you have coaching platforms that employ human coaches so often at that associate certified coach and professional certified coach level and focus on making the coaching more accessible and affordable, as you mentioned earlier, at an organizational scale. But then In the middle, there’s these triadic or hybrid coaches. So human coaches who use tech and increasingly AI, you know, for admin tasks and then just to enhance their practice. So let me see AI coaching chatbots, which are typically built on coaching models and small language models to constrain that coaching process.
Ilse White: Yeah, and on the other end of the spectrum, you have the large language models that are being used as a coach, like ChatGPT. And those are situations where people just simply tell a system like ChatGPT to act as their coach. And the research is suggesting that many people find this very accessible and very useful, but it also comes with really serious risks to well-being. It can be very high risk because there is no safeguards, as we have mentioned earlier.
Eric Paquin: So you think that people will kind of gravitate to the experience that fits their needs, their budget, and also their level of comfort with the technology?
Janet Benson: Yeah, exactly. There’s no kind of one-size-fits-all here. And then coaches also need a sort of a toolbox so that they can design custom experiences for different, their different clients and coachees.
Eric Paquin: That’s true. So let’s bring in the voices of coaches. I know you’ve interviewed like a wide variety of professional coaches to validate your findings. Issa, what did you hear?
Ilse White: Yeah, that was really interesting. I think broadly the interviewees confirmed the findings and the insights from the research. They felt that AI cannot support all elements of coaching, particularly when you start coaching, start looking at like more complex leadership coaching or like group coaching situations. But they did see AI as a useful, as very useful for specific or very simple activities, like helping individuals with time management, for example. And they also acknowledged some fear about being replaced by technology. But at the same time, they were saying that the coaching needs continuously evolve and using AI is a natural, if somewhat disruptive evolution of kind of the coaching practice. And a lot of them actually emphasize that they feel that coaching skills development is essential and that working responsibly with AI is part of that for them as a professional coach.
Janet Benson: And just to add to that else as well, like, a lot of coaches believe that people or, potential clients would choose the coaching experience that suits them. whether or not it involves AI. And they kind of stress there’s no single model that fits everyone. So as I said earlier, there’s no one-size-fits-all. So hence the need for that toolbox of approaches. And I just remember from doing the interviews with you, Ilse, like one coach summed it up really nicely. I probably remember this one by asking, you know, rapport impacts hugely on success. Can you build rapport with AI? Absolutely.
Ilse White: That’s central to all of the conversations that we had.
Eric Paquin: Yeah, actually. I know someone who actually does build a rapport with AI, but the question really captures the tension kind of in that world. And speaking of questions, your session ends with some big ones. Janet, what are the key questions you think organizations and coaches should sit with?
Janet Benson: Yeah, exactly. we kind of ask, can we protect the coachee’s well-being when AI is in charge? Can AI build rapport as we just discussed there and trust, build that element of trust and relationship like a human coach? And then is AI coaching holding up a mirror to the coaching profession? So that’s something I think the coaches are a little bit… aware of, but also maybe concerned about. So I suppose these questions wrap together the ethical, the data protection and the privacy issues with the very human nature of coaching. So they also highlight that as AI capability grows, it is pushing the coaching profession to evolve and as I said, hold a mirror up, reflect on its own standards. It’s evidence based and again, it’s accessibility.
Eric Paquin: So AI is not only like a tool for coaches or coaches, it’s actually a provocation to the profession.
Ilse White: Absolutely, because it challenges coaches to clarify what’s really unique to coaching, what’s humanly unique to coaching, and try and understand where technology may help or augment that experience instead of replacing it. So it’s bringing a lot of questions to the surface for coaches.
Eric Paquin: That’s really interesting. And I could go on for hours, but to close, I just want to put your own closing questions back to you, but in a kind of practical way. So for listeners who are HR leaders, L&D professionals, or coaches, how should they think about the future? Like, are we heading towards a world where AI would replace coaches, a hybrid model where they’re in partnership? or a renewed focus on the human element and what it means to be human.
Janet Benson: Well, I mean, we don’t see AI fully replacing professional coaches, especially as Ilse mentioned, where the work can be quite complex or, you know, emotionally charged or deeply relational. So I think that hybrid model where AI supports the coaches and coachees, while the humans stay responsible for the more sensitive work looks the most promising right now. I don’t know if you’d agree, Ilse.
Ilse White: Yeah, I do. And while I think there’s a lot of focus on this technology right now, and it’ll definitely continue to play a role, but we do think that in the future there will be a kind of refocusing on the human element of coaching. I think AI will definitely start taking on some of the more kind of structured and scalable tasks that are involved with coaching. But there is– I think integrating AI into practice is a real opportunity for human coaches to double down on what they do best. So things like deep listening and bringing nuanced judgment, being sensitive to cultural differences, and definitely taking ethical care.
Eric Paquin: So the call for action or the call to action for our listeners is to start experimenting, to kind of build literacy around AI, and partially, I suppose, to keep the conversation going about ethics, well-being, and quality in coaching.
Janet Benson: Yeah, definitely. And also, Eric, to kind of, I suppose, to add to that, to involve, you know, your coaches, the coachees, and any organizational stakeholders in that conversation. So not just, say, a tech team or someone who’s focused on the technology.
Ilse White: Yeah, absolutely. It takes a lot of people and everybody has questions. So at this point, it is completely okay to have more questions than answers at this point. I think it’s still, AI coaching is still at a relatively early stage. And the real objective in the conversation that’s happening right now is not centering around how AI will replace human coaching. But it’s really much more about defining or redefining what good coaching looks like in an AI world.
Eric Paquin: Yes, it’s been a fascinating discussion. Thank you both for sharing your research and reflections on how AI is powering personalized coaching.
Janet Benson: Thanks, Eric. It’s actually been really good to chat about the research.
Ilse White: Yeah, really nice.
Eric Paquin: Well, thank you all for listening. And if this episode sparked ideas or concerns about AI and coaching in your organization, we absolutely love to hear from you. You can contact us at info@learnovatecentre.org. And until next time, this is Eric from Learnovate. Abiento!
Exclusive content for our members
If you're already a member, please sign in now to continue.
Or contact our team to find out more.