Transcript
Intro: [00:00:00] The Inside Learning Podcast is brought to you by the Learnovate Center. Learnovate research explores the power of learning to unlock human potential. Find out more about Learnovate research on the science of learning and the future of work at learnovatecenter.org.
Aidan McCullen: Today’s guest is an expert in design cognition, an electrical engineer, robotics expert, and former deputy division director at the National Science Foundation, where she co-authored the NSF’s Artificial Intelligence Roadmap. Her artificial intelligence research has focused on ways to make AI agents assume human cognitive and personality characteristics, especially related to different styles of creativity.
It is a pleasure to welcome the dean of Manhattan University’s School of Engineering, Kathryn Jablokow. Welcome to Inside Learning.
Kathryn Jablokow: Thank you, Aidan. It’s wonderful to be here.
Aidan McCullen: Great to have you on the show, Kathryn, and congrats on being the first woman dean of Manhattan University School of [00:01:00] Engineering.
Kathryn Jablokow: Thank you very much. It’s a pleasure and it’s an exciting opportunity to move forward in history.
Aidan McCullen: I mentioned that term design cognition in the intro, and that you’re an expert in design cognition. I’m sure that will go over the heads of many of our listeners, including myself. So maybe you’ll enlighten us and tell us what that means.
Kathryn Jablokow: Sure. I’ll start with a simple definition and then unpack it a little bit. The simple definition is that design cognition is the study of how people think when they’re engaged in the design process. So it’s really a very simple definition, but let’s unpack the design and let’s unpack the cognition. Engineers, and even scientists, use the design process. They identify a problem somewhere out there in the world. They gather data about that problem. They generate ideas for [00:02:00] potential solutions, and they might prototype them, and they might test them with people. And through that, they will decide: this solution looks like it might be best.
Let’s do a detailed design. They might test that some more, and then ultimately, hopefully, they have a solution that works. So the design process, that’s all it is: getting from an idea about a problem all the way to a solution. So that’s the design. Cognition we can also think about as a process. We gather information and put it in our memory. We recall that information. We generate ideas, we make decisions about those ideas. Cognitive psychologists have been studying cognition, of course, for a very long time, just like design researchers study design, and they have different frameworks. So you can talk about cognitive level, which is: what do you know?
What’s the knowledge, what’s the experience you have? You can [00:03:00] talk about cognitive style, which is how your brain prefers to use structure, which I can tell you a little bit more about later. And then you also have what they call cognitive affect, which is the value systems or the belief systems that you use to judge whether a solution is acceptable or not. So you take those elements of cognition, you think about the design process, and you say: if I have an engineer working through that process from idea to solution, what are all the things going on in her mind in terms of those different cognitive elements?
Aidan McCullen: We’ll build on this later on when we talk about AI and how AI plays into this, but neurodiversity or cognitive diversity plays an outsized role if you’re gonna be dealing with high performance teams, for example. And one of the things I read in researching this piece was how your work in design [00:04:00] cognition has led to best practices for high performance teams.
Maybe we’ll build on that and share that with our audience.
Kathryn Jablokow: Absolutely. So one of the things that a high performance team will do is to understand not only the problems that they have to solve, but the people who are on the team. So you’re looking for an alignment. It’s: I have problems and I have people. How do I make sure I have the right people for the problems? That means, first of all, I have to look at my problem using the same kinds of measurements that I’ll use with people. So let’s take an example. Let’s imagine that I’m here in New York City, and I’m designing a bridge or a tunnel. And I know that to do that, I’m going to have to understand construction.
I’m gonna have to understand the environment. I’m going to have to understand the social impact of building this bridge or this tunnel. So there are certain things I’m gonna have to know about. [00:05:00] So now I put my team together, and my high performance team would say: obviously, I need to have that knowledge.
I need to have those same skills, so I’m gonna make sure that I have them on my team. So that’s one step. But there’s also that cognitive style piece, which says: as I’m designing my tunnel, am I going to need more traditional ideas, or am I going to have to push the boundaries? What does this problem require? And as I put together my team of people who understand construction and who understand the environment, are there people there whose brains are also wired to give us the kinds of ideas we want? And the beauty is that this is all measurable; I don’t have to guess. So the high performance team says: I understand my problem, I understand my people, let’s match the people to the different parts of the problem. One last thing [00:06:00] that high performance teams do is they say: whatever cognitive diversity it’s gonna take to cover the problem, I’m gonna have a little extra. I wanna make sure I have a little bit of different types of thinking, a little more than I think I will need, because I know my problem will not stand still, and I wanna be prepared so that when it doesn’t stand still, I already have what I need on the team. So that’s another best practice of the highest performing teams.
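The matching described here, measure the problem's style demands, measure the people, and keep a little extra diversity in reserve, can be put in code. The following is a minimal illustrative sketch, not an actual instrument from this research: the `Member` class, the scores, and the `margin` parameter are all assumptions for illustration (the numbers loosely echo Kirton's KAI scale, where lower scores lean structured and higher scores lean boundary-pushing).

```python
# Illustrative sketch: does a team's measured cognitive-style range
# cover what a problem requires, plus a safety margin?

from dataclasses import dataclass

@dataclass
class Member:
    name: str
    style: int  # hypothetical style score; lower = more structure-seeking

def covers_problem(team, required_min, required_max, margin=10):
    """Return True if the team's style range spans the problem's
    required range, extended by `margin` on each side (the 'little
    extra' diversity kept in reserve for when the problem shifts)."""
    styles = [m.style for m in team]
    return min(styles) <= required_min - margin and max(styles) >= required_max + margin

team = [Member("Ana", 70), Member("Ben", 95), Member("Caro", 130)]

# A design needing mostly traditional, detail-oriented ideas (80-110):
print(covers_problem(team, 80, 110, margin=10))   # True
# A problem also demanding strongly boundary-pushing ideas (80-125):
print(covers_problem(team, 80, 125, margin=10))   # False: no one above 135
```

In practice the scores would come from a validated style instrument rather than guesses, which is the point being made: the match can be measured instead of assumed.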
Aidan McCullen: So you have a high performing team like that. And before we get into AI, ’cause I’d love to share how we can include AI as a team member and programme the AI to maybe speak truth to power, the next thing I wanted to share was this: you have a team, and depending on people’s different neurodiversity, they may be less inclined to argue their [00:07:00] point or to speak up in the meeting, or when there’s brainstorming, it doesn’t ideally suit them.
They need time to let ideas marinate before they present those ideas.
So I’d love you to share what you’ve seen that helps include everybody, so you’re getting the best of everybody in a meeting, in a high performance team.
Kathryn Jablokow: And that’s critical, of course, because we’ve put this team together because we want everybody to participate, right? Otherwise, we would not have invited these people to be on the team. We only have teams because there are things we can’t do by ourselves. That’s why we put teams together. So again, say I am the leader of the team, the person who is facilitating this. I may not have a title, but I’m the person who’s facilitating this. The more I understand what different types of people require, the better. You mentioned, for example, introverts needing time. I am a [00:08:00] flaming introvert, and so I really need a lot of time ahead to think about it before I know what I’m going to say. If I am part of the leadership or part of the facilitation, I would then make sure people get information in different ways before we have the meeting. So the people that need to think ahead, I make sure that they have the information in the form they need. The people who are going to prefer to work things out as we meet, I make sure they’re given that opportunity to participate. So being a really good team leader requires you to be a good facilitator of cognition, and to be observant about any time somebody is not participating. So it’s being prepared. You spend a lot of time as a good team facilitator preparing for these interactions of the people on your team.
Aidan McCullen: One of the things I thought when I was reading [00:09:00] your work and listening to some previous interviews you did was that it made me think of that movie, actually based on a true story about baseball, with this famous coach called Billy Beane: Moneyball. I dunno if you’ve seen that movie.
Kathryn Jablokow: No, I haven’t.
Aidan McCullen: It’s a brilliant watch, and it’s really about a baseball coach who took a different route to success, this guy Billy Beane. What he did was he hired a data scientist to see what was the best type of player he needed to win the league. And it went against all kinds of best practice.
The coaches were up in arms with him. The selection team were up in arms with him. They said that he was killing baseball, and he sold off what seemed to be the best players. And the reason I share that as an analogy is: if I have a team that I feel comfortable with, that comfort can actually be the [00:10:00] downfall of the team, because I’m getting answers that align with what I want all the time, versus somebody who speaks up and goes, I disagree, and here’s where we’re doing things wrong. And I wondered how we can then measure or leverage diversity of cognition in a team to build, or even build on, a successful team.
Kathryn Jablokow: So I like that you’re bringing up the fact that if I’m comfortable with my team, that may not be the best thing. In the research that I do, I talk about problem A and problem B, and I think it comes back to what you’re saying. Problem A is the problem we’re trying to solve right now. That’s the thing, and ultimately that’s the thing that matters the most. We’re trying to solve problem A. When I bring a team together to solve problem A, I inherit a second problem, and that is managing each other as we solve problem A. And part of that means [00:11:00] it’ll feel really good if I’m in a team of people who think similarly to how I do; they have the same level of knowledge and they have the same style. It feels really good.
It’s really comfortable. We can finish each other’s sentences and so on. But if I start with what problem A needs, that may not be what we have: the way I think, what I know, may not be what we need. And so it feels risky to bring in people who are going to be different than I am, and I know they are. But if that’s what we need, then we have to say, okay, it’s going to feel uncomfortable.
Maybe let’s learn how it would be less uncomfortable. I taught this kind of material for over 25 years at Penn State University. For the most part, I was teaching practicing adults. These were adult learners who had come back to get their master’s degrees. [00:12:00] And I would talk to them about going back to their companies, looking at the teams they were in, and identifying: am I so comfortable here?
Is this so comfortable that we’re not getting the job done? And what do I need to introduce into the mix? Your team that’s very similar can solve a certain kind of problem really well. And if that’s the only problem you see, all power to you, how wonderful for you. But that’s not the way the world normally works. And so it’s learning how to be uncomfortable, learning how to welcome even debate, welcome the challenge, because that’s what’s gonna get us to the solution. It’s learned. But if I know why I’m doing it, it’s easier for me to accept. So there’s so much persuasion and there’s so [00:13:00] much deep understanding that’s required, but it works.
It works beautifully when you invest that time upfront.
Aidan McCullen: This brings us then nicely to one of the challenges we’ve all experienced with AI as a team member. And here I’m even talking about it as a colleague, one-on-one. I work by myself; I have one other colleague. But I use AI, as I’ve asked it, to challenge me, see what I don’t see, push back on some of my ideas.
But if you don’t do that, it becomes a sycophant. And I love the origin of the word sycophant. I dunno if you’ve heard it. It’s beautiful. It goes back to ancient Greece and the idea of the marketplace, where, say, I see you coming along and I want to encourage you to be one of my clients, so I might give you figs.
It’s called the giver of figs. So I would give you some free figs and I’d be like, oh, Miss [00:14:00] Jablokow, come to me and buy from my stall here. You look magnificent today. Here’s some figs. So that’s where the word sycophant comes from: the giver of figs. These false platitudes, these compliments, et cetera.
But that is one of the challenges we get. We go: are you telling me what I want to hear, AI, or are you actually pushing back on me? And you’ve done a load of research on how we integrate AI to fit into the successful team, in a way that doesn’t pose a risk to our thinking.
Kathryn Jablokow: And the key, of course, to AI is that it’s focused on what it knows, right? It’s like a child: whatever information I give it, whatever data it has access to, that’s what it’s relying on. So in our research, what we did was we said: let’s teach AI about cognitive diversity, then. Let’s teach it about these different ways of thinking, [00:15:00] and then when we ask it to help us, let’s tell it: we want you to use that information when you help me. So we chose one particular dimension or element of cognition, this thing called cognitive style. And I’ll just spend 30 seconds talking about that, because it’s the brainchild of a British cognitive psychologist, Michael Kirton, with whom I worked for 25 years, and he talks about how our brains have different preferences for how much structure they like to use. So it’s a continuum. You have people whose brains just really like structure. They want detail, they want things nailed down. They want to make sure that they’re aligning with everybody else. And it goes through a middle ground, where you still have a decent preference for structure, but you’re not afraid to push a few boundaries.
And then you go all the way to the other end, and these are the people who are always challenging everything, whatever you [00:16:00] say. And they shed structure. So we said to our AI: here’s this theory, here’s this model from this man named Michael Kirton. We taught it about different behaviors that people would have, different types of ideas they would have, and then we said: okay, AI, I’m on a team now, and I want you to give me ideas that match a certain place along that spectrum. So not ideas that I like, ’cause I know where I am on the spectrum; I have a place. But I want you to give me ideas that are different. I’m not good at coming up with those kinds of ideas. And it was brilliant at it, because we’d fed it that information, and it basically follows your orders. So we were able to get ideas for design problems that were different than the types of ideas I would normally come up with. That’s just one dimension of cognitive diversity. But if you can do it with [00:17:00] one, then our premise is you could do it with more. And so what we would like to do is take our large language models, take our AI, and say: let’s teach you about all the different types of cognitive diversity, perhaps some archetypes. And then say: as we’re working on this problem, give me ideas that Aidan would offer. Give me ideas that Jennifer or someone else would offer. It’s very possible, and we’re excited that we can use an AI as a team member more deliberately, rather than just accepting whatever it happens to give us on a given day.
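The prompting approach described here, teaching the model a cognitive style theory and then asking for ideas at a target point on the spectrum, might be sketched as a prompt builder like the one below. This is a hypothetical illustration, not the actual prompt from the study: the function name, the wording, and the midpoint of 96 are assumptions; only the 32-160 scale bounds follow Kirton's KAI inventory.

```python
# Sketch: build a system prompt that asks an LLM for ideas at a chosen
# point on Kirton's adaption-innovation continuum, instead of whatever
# style the model defaults to. Wording is illustrative, not the text
# used in the "Ghost in the Machine" study.

def style_prompt(target_score: int, problem: str, n_ideas: int = 5) -> str:
    if not 32 <= target_score <= 160:
        raise ValueError("KAI-style scores run from 32 to 160")
    if target_score < 96:  # illustrative midpoint near the scale's center
        stance = ("more adaptive: prefer structure, refine and improve "
                  "accepted approaches, stay within the current paradigm")
    else:
        stance = ("more innovative: shed structure, challenge assumptions, "
                  "propose ideas outside the current paradigm")
    return (
        "You are a design teammate whose cognitive style sits at "
        f"{target_score} on Kirton's adaption-innovation continuum (32-160). "
        f"That means your ideas should be {stance}. "
        f"Generate {n_ideas} ideas for this problem: {problem}"
    )

prompt = style_prompt(140, "reduce congestion in a river-crossing tunnel")
print(prompt)
```

The returned string could then be sent as the system message to any chat-completion API, with one call per target score to simulate teammates whose styles differ from your own.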
Aidan McCullen: Let’s contrast that. I’m gonna share the research you did, “Putting the Ghost in the Machine: Emulating Cognitive Style in Large Language Models.” I’m gonna give a link to that in the show notes for those people watching or listening to us. But how could we avoid the [00:18:00] threats here, like the bias that’s baked into AI?
We know about that; we’ve covered that before on the show. But maybe just to compare: what you’ve shared is the ideal world, where we get multiple viewpoints, cognitive diversity, diversity in every way, diversity of ability, diversity of gender, everything. And education, which is the big thing.
’Cause people see things as they are, not how the world is, which is the big challenge. So how can we avoid, and maybe you’ll share, some of the threats of this? And then how can we train LLMs to be the best possible teammates for humans?
Kathryn Jablokow: So I think we can learn from the risks that we have when we just work with other humans, right? We all carry biases within us. I am biased because I’m an engineer, and I see things through the lens of engineering. So if you ask me to [00:19:00] not do that, it’s gonna be really hard for me to do. Or the bias of culture, or the bias of just my lived experiences.
So what do we do when we’re speaking with each other? One of the things we do is explore those biases with each other. We can challenge each other. We say: why did you say that? It seems like you’re favoring something. It seems like you’re missing a perspective. Those are the same questions that I would ask my AI teammate. And I think in the work that’s being done on AI, the provenance of ideas is being tracked more carefully. Where did you come up with this, AI? Where did you get this information? It is challenging that, looking into it to see: is what you were looking at itself biased?
’Cause if it is, then I want you to look at something else before you give me an [00:20:00] answer. And so we run the same risks with AI as we do with humans. But the beauty of AI is that we are the ones programming it. It’s harder for me to program you. You have your experience, you have your lived life, right? I can’t change that, but for the AI, I can go in and influence it.
And say: what you are giving me is an incomplete perspective. Go look at this and come back. So it involves us being users of the AI, not taking its word as gospel, the same way we do with humans. I don’t see the risks as particularly different from those when we work with humans, and we also have the advantage of being able to, I think, fix them or address them more easily.
Aidan McCullen: There’s a great quote attributed to Mark Twain. I’ve learned not to just assume [00:21:00] the person they say said it actually said it. It’s always good to check, and even then you kind of don’t know, but it’s attributed to Mark Twain, and it’s that history doesn’t repeat, but it often rhymes. And I always think about that when we see what appears to be a pattern of change. And the pattern here is innovation.
When we look back on innovation, we can see the pattern, and we can see maybe a pattern rhymes over different centuries or different eras. But AI feels a bit different, and it’s perhaps because there’s an installed base of technology already.
People have access to high speed internet, et cetera. It’s not like automation in the eighties; it has to be different. And I wonder what your thoughts are there. Is it rewriting the script of innovation?
Kathryn Jablokow: I see a lot of similarities, actually, between what’s happening with AI and what happened not only with automation in the eighties, but also with the [00:22:00] introduction of the internet. And if you step back far enough, with the introduction of any major new technology.
So if I take that step back and go all the way and ask: what are the patterns of innovation? It ties back nicely to what I was talking about with structure. If you look at change, a lot of this comes from Thomas Kuhn and his Structure of Scientific Revolutions. Wherever we are in society, our technology has assumed a certain status; we have an understanding of it. So let’s say AI hasn’t happened yet, and our technology is in a certain status. We understand it. We understand where it fits into society, we understand how we’re using it today, and then along comes something new. Along comes robotics in the eighties. Along comes the internet. Along comes AI. And the question is whether the technology is [00:23:00] disruptive enough that it causes us to question the way we were doing things, causes us to question it enough, ’cause a lot of little changes come to technology, and we like them, but they don’t really change how we do things very much. But if it causes us to question that, it interrupts the structure of the way we do things. And what happens then is that there’s naturally both excitement and anxiety. They happen at the same time. People are very excited about this new thing and they’re thinking, wow, this is gonna change everything about the way I do it. And then simultaneously, when they go to bed at night, they think, oh, I’m terrified, this is gonna change everything about the way I do it. And so there’s a lot of froth. And what we do as humans when we get into that situation where there’s all this froth is, we don’t like to live there for too long. So we try to pull it back, and we try to find out: how can I actually use this thing [00:24:00] in my daily life? In effect, what we’re doing is taking a situation where the structure just got blown up a little bit and saying, I don’t wanna live there.
I’m gonna pull some more structure down. So that’s where we are with AI right now. At the beginning, there was a lot of froth: I don’t know what’s gonna happen, I don’t know what to do with this. And what you see us doing now, just like your show, is asking: how do we integrate AI into education? How do we integrate AI into science? How do we integrate AI into teams? So we’re in a stage where we’re taking that blown-up structure back, and when we’re done, the structure’s not gonna look the way it looked before, but it will be more structured. We will bring it back, ’cause that’s what we do as humans. Now, it’s possible that by introducing AI, the things we do will look very different, but we will integrate it. And there will come a time, [00:25:00] just as it did with robotics, just as it did with the internet, where we say: oh yeah, that’s what we do with AI. That’s what AI does for us. And that will become the new norm. And so from that perspective, I don’t see AI any differently than I did robotics in the eighties, which I lived through, and built walking machines and all kinds of things. Robotics was gonna take over the world. It didn’t. The internet was gonna take over the world. It didn’t. But it changed things.
They both changed things. So I’m interested to see where we wind up, because there’s gonna come a technology after AI that’s gonna blow that up. And so that’s how I look at this. I am excited, but less anxious, I think, than some of my colleagues.
Aidan McCullen: Great to hear, ’cause there’s a lot of fear around it. And in the words of Kuhn, it is a paradigm shift, and we’ll have a shifted [00:26:00] baseline as part of that paradigm shift as well. Kathryn, for people who wanna find out more, I mentioned that research; I’ll link to that. Where’s the best place to reach out and find you?
Kathryn Jablokow: The best place to reach out and find me is at Manhattan University. People can reach out to me there and I’ll be happy to talk to people. I would love to talk to people about their perspectives on this.
Aidan McCullen: Dean of Manhattan University’s School of Engineering, Kathryn Jablokow. Thank you for joining us.
Kathryn Jablokow: Thank you very much for having me.
Outro: Thanks for joining us on Inside Learning. Inside Learning is brought to you by the Learnovate Center in Trinity College Dublin. Learnovate is funded by Enterprise Ireland and IDA Ireland. Visit learnovatecenter.org to find out more about our research on the science of learning and the future of work.
Please note that this transcript is auto-generated – please refer to the audio or video for exact wording.