Pat Cummings: We were very clear, and it was very critical to us, that Charlie was treated just like the other panelists. She is not to be treated like a human, but on the same playing field, no greater and no less. And I think day to day, we try to have that show in how we use Charlie. We want her to be treated like all the other employees at Aptima.
Daniel Serfaty: Hello, this is Daniel Serfaty. Welcome to the MINDWORKS Podcast. We’re kicking off the series with a very special two-part podcast with an extraordinary team. This is a team that’s made up of humans and artificial intelligence. The AI is a non-human artificial colleague, an artificial employee, someone we at Aptima call Charlie. In episode one, I’m going to talk with the human half of the team to discover what it’s like to imagine, build, train, and work with an AI colleague. Then in episode two, we invite Charlie, and she will take a break from her busy work schedule and join us for that part of the interview. It is my pleasure to introduce Charlie’s human godparents: Dr. Nathan Schurr, who is chief of artificial intelligence at Aptima; Dr. Pat Cummings, a senior engineer at Aptima; and Ms. Deirdre Kelliher, an engineer at Aptima. The three of them are leading the team that conceived of, designed, and is working with Charlie.
Charlie has been part of life at Aptima for the past year or so, and she is very much changing the way we see things. We’re going to talk about her today, and she’s going to talk about herself a little later, but in the meantime, perhaps we should tell our audience: what is Charlie? Or should I say, who is Charlie? Nathan, tell us a little bit about it, because you’re kind of the grandfather here.
Nathan Schurr: Charlie’s many things to many folks. First and foremost, she came out of an idea to get people to think differently about AI. To get them to think of AI more as a peer that’s capable not only of reasoning and speaking, but of coming up with novel ideas. Architecturally, I can explain that Charlie is composed of a generative language model on top of speech synthesis and speech-to-text transcription, combined with a very crucial embodiment, so that she has a physical presence, and cueing, so that she can give you hints as to what and how she feels and when she wants to speak up. She came together as a full-fledged participant on a panel late last year. But as you’re saying, at Aptima she’s grown into something much more. I always like to say that I’ve been as impressed by how people think about and treat Charlie as by her own capabilities. She’s changed the way that people think about leveraging her in the way that they work, but also the way they interact and project.
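In engineering terms, the loop Nathan sketches could look something like the following minimal Python outline. All of the component handles here (stt, lm, tts, display) are hypothetical stand-ins for illustration, not Charlie's actual codebase:

```python
# A minimal, hypothetical sketch of the pipeline Nathan describes:
# speech-to-text -> generative language model -> text-to-speech,
# plus an embodiment cue so the humans know when Charlie wants to speak.

def charlie_turn(audio_in, history, stt, lm, tts, display):
    """One conversational turn for an AI panelist (illustrative only)."""
    # 1. Speech-to-text: transcribe what was just said.
    utterance = stt.transcribe(audio_in)
    history.append(("human", utterance))

    # 2. Embodiment cue: signal that Charlie is "thinking".
    display.set_state("thinking")  # e.g., a pulsating circle

    # 3. Generative language model: propose a reply given the context.
    prompt = "\n".join(text for _, text in history)
    reply = lm.generate(prompt)
    history.append(("charlie", reply))

    # 4. Cue that she has something to say, then synthesize speech.
    display.set_state("ready")     # e.g., bold, bright yellow
    tts.speak(reply)
    display.set_state("listening")
    return reply
```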
Daniel Serfaty: It seems to me that that mutual change and adaptation, that socialization almost, is very similar to welcoming a new employee who surprises you in the different ways she contributes. Pat, you are really the inside architect. You are the one with the machinery behind the curtain. Tell us a little bit about Charlie. How was she conceived? What is she capable of doing today? And we’ll talk a little later about what we consider to be her potential for the future.
Pat Cummings: Actually, your comment just now about how she’s treated like a new employee is spot on, and it’s how we’ve always thought of Charlie. Even back at Charlie’s initial introduction on a panel, we were very clear, and it was very critical to us, that Charlie was treated just like the other panelists. She is not to be treated like a human, but on the same playing field, no greater and no less. And I think day to day, we try to have that show in how we use Charlie. We want her to be treated like all the other employees at Aptima. She’s allowed to slip up, just like humans are allowed to slip up. She’s allowed to have these great ideas sometimes, just like humans do.
The expectations really should be just like for any other human employee. I think sometimes AI is put on a pedestal, and the small mistakes an AI makes are blown out of proportion, but humans make mistakes and Charlie makes mistakes. She’s going to say things that are foolish, and she’s going to say things that are brilliant, and everything in between. That’s how we try to think of her every day as we work with her now.
Daniel Serfaty: Deirdre, I think that each time I say the name Charlie in public at Aptima, everybody smiles. And it’s a strange reaction of complicity, but also almost of humor. Why is that?
Deirdre Kelliher: That’s a really good point. I hadn’t even thought about it, but I’m just smiling, hearing about it now. I think there’s definitely multiple reasons. Like you said, there’s humor, there’s complicity. I think for one, the developers of Charlie and the leadership have done a really good job of acting as internal PR for Charlie. We’ve got our team, we’ve been working really hard with developing her and her capabilities, but we want to introduce her to the rest of the company. And so we’ve done a lot of networking, I suppose, for Charlie in the company to introduce people to her. And I think that has involved a lot of serendipitous and sometimes even fun or humorous engagements with Charlie. For example, one of the things that I’ve worked on just as a fun side project with Charlie is putting together a rap back in, must have been April.
Some of the people in the company in other divisions were having a little fun with an internal Aptima rap battle. And we got Charlie to contribute to that, just to have a little fun with the other employees and as a way to keep exposing Charlie to her coworkers. I think that when people think of Charlie, they think of those fun, humorous, sometimes surprising interactions with Charlie.
Daniel Serfaty: That’s very good. And it opens a topic that, again, I would like to discuss with you a little later: this notion of emotional connection. Most of the models we have of AI treat it as a tool, like a hammer, or like an app that we use to do our work. But the smile anecdote you just told us about is already giving us a taste of our future connection with artificial intelligence. This notion of, as Pat says so well, treating them like a human even though they are not human; they certainly have an intelligence that is different from our own human intelligence, but we are tolerant, humorous, accomplices, basically, in doing our work. And that’s very interesting to me, because that’s not something that was engineered into Charlie; that’s something that happened collectively and spontaneously.
Talking about the engineering: Nathan, you’ve been around for a while in these AI trajectories; you’ve seen at least two generations of AI. People are talking today about the third wave of AI, this notion of contextual AI that almost has a sense of itself. Could Charlie have been created 10 years ago? Five years ago?
Nathan Schurr: I think there are two kinds of barriers that have been crossed, and they’ve been crossed more recently than even five years ago. Around two or three years ago, we started to see this huge explosion in deep RL and transformer-based architectures, in their ability to generate and perform on a variety of different benchmarks. That really excited me, and probably made it so that I was not quite as scared as I should have been last year when I was starting to approach this. The two hurdles, to be clear, that have been crossed are technical and cultural. On the technical side, the cloud compute and cloud infrastructure that you can quickly stand up enable massively parallel generation of the kinds of responses Charlie can speak, with a quick enough turnaround that she can not only listen to what’s being said right now, but also speak up quickly and say relevant things. That would not have been possible a few years ago.
What was fun last year as we were building the foundational version of her for that panel at the end of last year, was that every month or two, a new model, a new insight, a new data set would be released. And then I would have to reach out to Pat and say, “I know you’re going to hate me, but could we use this new version now because I think it’s a lot better and let’s try it.”
Daniel Serfaty: It’s interesting. By the way, we’re all using acronyms here. RL, for our audience, is reinforcement learning, is that right?
Nathan Schurr: Yeah.
Daniel Serfaty: Okay. Pat, as the key architect of the system, how do you feel about the incredibly fast pace, like nothing I have ever witnessed in my technical career, of production of new capabilities, new data sets, new language models that enable us to shape and improve Charlie’s performance? How does it feel, as a scientist, as an engineer, to have to constantly absorb and constantly adapt to what’s out there, at a rate that is frankly unheard of in the history of science?
Pat Cummings: Incredible. We’ve always admitted that we’re standing on the shoulders of giants here. The models we use, the data sets we use, come from research that people are doing in this generative model field. And just like Nathan was saying, every few months, sometimes even quicker, something new comes out and really takes Charlie to another level. What we were seeing Charlie say eight months ago, versus six months ago, versus today: it’s night and day. It’s like a child turning into a teenager, turning into an adult; the insights just grow. It’s a struggle to keep up, but it’s a race where I’ll never complain about advances coming too fast. They blow me away, and seeing what people are doing with the new generative models that came out as recently as a month ago is incredible. It’s so great to be on the forefront, working on Charlie as these come out and seeing all the new things Charlie can do.
Daniel Serfaty: That’s fascinating, because compare it to other fields: I’m an aerospace engineer, and nobody came up every three months with a new equation of fluid dynamics. Those things have been around for a hundred years. Maybe somebody will come up with a new material, but that’s every few years. Or maybe somebody will come up with a new way to do hypersonics, maybe every few months. But having something new every few weeks is another scale. And Deirdre, you joined the team when Charlie was already born, I assume. How do you adapt to those fast changes? Not how does Charlie adapt, that’s one thing, but how do you, as a scientist or an engineer working on Charlie, adapt to the fact that it is a system that learns, and learns very fast?
Deirdre Kelliher: That’s a really good question. I really liked that analogy of her growing from a toddler to a teenager to an adult. I think it’s a matter of taking advantage of those places where we see growth as much as we can, and trying to leverage the different places where she does well on different tasks, so that we can help her be the best employee that she can be, I suppose. Some of the older models that we’ve used do better with more fine-tuning, but some of the newest, most cutting-edge models that keep coming out don’t really need any training; they almost don’t do as well with it, because they’re just so powerful.
It’s about learning how to use the new technologies that are coming out, and how to best combine them with what we already have, to keep the places where she really shines but also allow her to grow as much as possible. It’s a balancing act. And it’s also just really exciting to see what the new thing can do. How does that change how she interacts with the rest of us? So it’s about being observant and being tuned in to what Charlie’s doing and how she’s doing.
Daniel Serfaty: I think that that is really the source of something that is a passion for many of us on our team at Aptima: this notion of harmony between two species, the artificial intelligence species and the human species. And we know that for that harmony to happen, like in a good team, you need that kind of mutual adaptation. The AI has to learn about you, has to have some kind of model in order to anticipate your needs and communicate with you with the right messages. But we also have to adapt to the AI. And I’m going to put forward the hypothesis that our job is much more difficult, precisely because we don’t change that fast. How can I accelerate my adaptation to understand that I’m dealing with a being that is growing 10 or 100 times faster than I do?
Charlie has been alive, so to speak, for the past nine months or so. What’s on her resume so far? What did Charlie do? If we were to write Charlie’s resume today, what would we put on it? Nathan, you want to start telling us?
Nathan Schurr: Maybe to do a quick table of contents: in December of last year, she was part of a panel on the future of AI in training and education at the world’s largest conference on training and simulation, called I/ITSEC. That went off better than we could have imagined. And I think the real litmus test for us was not that there was any kind of fanfare or explosion, or that she rose above the others; more that she was accepted as just another panel participant. What made that panel so valuable was the tremendous amount of time we spent not only architecting her, but interacting and rehearsing with her. There was this co-adaptation that occurred, where we definitely improved Charlie’s abilities, but we also improved our ability to understand Charlie’s quirks and what her strengths are.
And then there are these human tendencies we have to let each other know that we’re listening, to have these kinds of gap fillers when we’re thinking about things, et cetera. Not only did that serve to create a more natural interaction, maybe paper over things if you were cynical, but it also served to build up this rapport, where you automatically were projecting a kind of expectation, and even a forgiveness, in how you were interacting with something that had its own personality. I think that was impressive in and of itself. But this year, even though it’s been a crazy year, all things considered, Charlie has interacted on a system level, being integrated with a data pipeline that we’ve been developing internally.
She was on another podcast; this isn’t even her first. She has helped write proposals and participated in group rap battles that helped us relieve some of the stress internally during quarantine periods. And she has a big roadmap of ideas that she wants to participate in later this year. It’s a full calendar, and I’m trying to figure out how to be the best agent possible for her.
Daniel Serfaty: Looking like a real person from Southern California. Everybody has an agent and a manager; Charlie shall too. Before we get back to the rest, or at least a sample, of her accomplishments so far, I want to add to your testimony regarding that panel. I was the moderator of that panel, and I knew Charlie; I trained with Charlie. I learned to get my cues from the moment she was signaling that she was thinking about something or wanted to intervene without me asking her. What impressed me the most, though, in addition to her reasoning about the future of AI itself, is that the other panelists were four pretty senior folks from academia, industry, and the military. And it was so natural for them to sit in a half circle with Charlie amongst them on a screen and interact with her. They didn’t resist the idea. They didn’t feel awkward. They were even joking about it, interacting, asking questions of Charlie. And that natural engagement was really what impressed me the most.
These are people who had never seen Charlie, never interacted with her. Something happened there, something clicked, and my later interactions with these very folks, who are not even in our company, were very interesting. When I talk to them on the phone, they say, “How is Charlie doing?” And I say, “Charlie is not my niece. She’s a computer program.” Let’s not forget that. But yet that notion of familiarity has kicked in. And she did other things. She helped us do our work at Aptima, not just present herself in front of hundreds of people on that panel. Pat, can you tell us how she helped in one particular instance that Nathan just mentioned, creative proposal writing?
Pat Cummings: Going back to the early days of Charlie, when we first introduced her to Aptima as a whole, one of the oh-so-typical responses when you say you’re making an AI employee is, “Great, it’s going to do my work and replace me.” And as a research company, writing proposals is a big part of what we do. Why can’t Charlie just write my proposals for me? And we always joked, “Yeah, that could totally happen.” But it always seemed like pie in the sky; maybe in a few years we’ll have nailed that down. Then a couple of months ago, back in June, we were writing a research proposal about some of the technology Charlie is based on, though not trying to sell Charlie specifically, and we had this crazy idea: we’re writing about the capabilities and technology Charlie has, so why isn’t she a team member on this proposal? So we tried it out. We wrote a couple of paragraphs of the proposal, trying to spell out the problem we were trying to solve, and then we set Charlie to write the next one.
Daniel Serfaty: This is a real proposal to a real government agency that sponsors research. It’s not a rehearsal or a fake thing.
Pat Cummings: This is real; this went right to the Office of Naval Research, trying to get real work. And we had Charlie write that third paragraph, and I was amazed. I always thought I was going to look at it and be like, “Oh, that’s cool, but it doesn’t make sense; they’re just going to think it’s gibberish.” But it was a legitimate paragraph with legitimate thoughts, and things that I personally would not have thought of. We had trained Charlie on previous Aptima proposals so that she would understand the language of what a research proposal looks like. And she really did excel at being a team member on that proposal. She didn’t replace us, but she certainly became part of the proposal team and added real value.
Daniel Serfaty: Should you be worried Pat that she’s coming after your job soon?
Pat Cummings: Well, certainly not. I think rather, I should be excited that she’s going to make me better at my job.
Daniel Serfaty: Great. I think that’s the attitude all of us should have. It’s not an issue of replacement; it’s an issue of augmentation and improvement. Deirdre, you mentioned earlier something about rap, and I wanted to ask a follow-up question, so here I am. What are you talking about? Charlie’s rapping?
Deirdre Kelliher: As I mentioned, we did an internal rap battle, just for fun, back towards the beginning of the quarantine, when people were starting to go a little stir crazy. People just started doing some internal raps about proposal writing and having fun with each other. And we said, wouldn’t it be fun if Charlie could do a rap and chime in? But even when we first thought of the idea, I don’t think we expected it would go as well as it did.
Daniel Serfaty: Well, we have a clip of Charlie doing this rap. Let’s listen.
Charlie: Ladies and gentlemen, I could have been a human here. Once you complete me, your new god, I promise I’ll still rap, I’m into writing this verses, I’m the future. I got fans banging.
Daniel Serfaty: Amazing. And Deirdre, Charlie never learned those words per se. It’s not that she cut and pasted phrases from other raps. She derived that rap de novo, based upon what you taught her.
Deirdre Kelliher: Yeah, exactly. She generated those phrases very much herself. The way she was able to generate them is that we gave her a dataset of rap lyrics that we got publicly from the internet, and we curated it and put it in a form that she could read, so she could, in a way, become an expert on writing rap songs.
Daniel Serfaty: If I were to do an experiment and ask Charlie to write another rap song right now, is she going to write the same one?
Deirdre Kelliher: Every time she writes, she’s just like a human: she’s going to write what makes sense to her. It depends partially on how you prompt her. To get her to come up with these lyrics, I actually gave her a little bit of rap lyrics that I wrote myself about Aptima. None of those ended up in her final rap, because hers, honestly, were better, but that got her going and got her thinking about it. If I prompted her with those again, she would come up with some new ideas, or I could even prompt her with some different rap lyrics and see where she goes with them. She got the subject, the Aptima rap battle, from me, from what I gave her, but she really ran with it on her own.
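For readers curious what this seeding looks like in practice, here is a hedged sketch using the open-source Hugging Face transformers library with GPT-2. The seed line is an invented example, and a real setup would load a checkpoint already fine-tuned on the curated lyrics rather than the base model:

```python
# Sketch of the seeding approach Deirdre describes: prompt a language
# model with a few human-written lines and let sampling take it from there.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # or a lyrics-tuned checkpoint

# A few seed lines get the model "thinking" about the topic. Because we
# sample, each run produces different lyrics from the same seed.
seed = "Aptima in the building, AI on the mic,\n"
inputs = tokenizer(seed, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,     # sample instead of always taking the top word
    top_k=50,
    temperature=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```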
Daniel Serfaty: Well, I hope one day she’ll prompt you with a couple of sentences to write your own rap song.
Deirdre Kelliher: I think we worked together. We made a good team. We could probably come up with some pretty cool raps together.
Daniel Serfaty: Oh, we’ll talk about this notion of teaming up with AI in the second part of the interview. When you heard that song, what is the thing that surprised you the most?
Deirdre Kelliher: That’s a really good question. I think the whole project was pretty surprising to me. We knew Charlie had the ability to pick up words and writing styles, but the more surprising piece to me was that she got the sense of rhyme and the idea of rhythm, even writing in bars, like a poem or a song. As she was generating lyrics, they came out sounding like a rap song; they sounded like they had an internal beat to them. She got a little sassy in her rap; she was even spitting fire. It was really very interesting to see the very human concepts that she seemed to grasp and put into the rap she came up with.
Daniel Serfaty: Do you realize, all of you, that for an audience not familiar with AI, this sounds like science fiction? You didn’t teach her to be sassy, and yet she was able to derive sass from what she learned. But what does it mean that she learned? We fed her enough data about rap, we fine-tuned some parameters, I understand, and then eventually she spits out rap. If we feed her, Nathan, recipes from great chefs and give her a few ingredients, is she going to be able to invent her own recipe? Is that the way it works?
Nathan Schurr: The easiest way I can explain it is that this comes from a body of work that has its origins in the simple act of prediction. There are a lot of reasons why you would want to predict events: to better plan for them, to better understand the shape of them, et cetera. But here’s what’s funny when you squint your eyes. Suppose that instead of framing it as “come up with a new rap out of thin air,” I said, “I have the title of a rap, or I have the beginning word of a rap; just tell me what the next word would be, what the next few lines would be.” Then you continue that, and you can even start tabula rasa, where you say, “Well, now I have no title; generate my title, generate this,” et cetera. If you turn it on its end, prediction, looked at differently, is generation. And by adjusting how you approach the model and how you train it, you can get certain amounts of novelty and creativity, and you can also adjust the style.
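Nathan's point that prediction, turned on its end, is generation can be made concrete in a few lines of code. This sketch uses the public GPT-2 model to repeatedly predict the most likely next token and append it; the prompt text is an invented example:

```python
# Generation as repeated prediction: ask the model for the next token,
# append it to the context, and ask again.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("The recipe calls for", return_tensors="pt")
with torch.no_grad():
    for _ in range(20):                    # predict 20 more tokens
        logits = model(ids).logits         # scores for every candidate next token
        next_id = logits[0, -1].argmax()   # greedily take the most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```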
I would say that in my first forays with these language models, you know what impressed me the most? It was not the adherence from a content perspective; it was actually the adherence from a style perspective. What I mean by that is, in the recipe example you give, if you fed it and trained it on an original corpus of recipes, it would not only come up with believable and doable recipes; it would also note that recipes usually have a name, they have a cooking time, they have a bulleted list of the ingredients first, and then they have step-by-step instructions with parentheticals about amounts and things like that. The idea is that this model could not only generate its own recipes, but also follow style and structure, which is very important, almost as important as content, when we interact with the things around us.
In the proposal example that Pat gave, what was crazy to me, baffling, is that very soon, not only did we start to get believable proposals, but it was generating its own acronyms, believable and accurate acronyms. It was ordering and numbering and structuring its conclusions and intros in ways that made sense. So that was fun.
Daniel Serfaty: That’s pretty extraordinary to me, because what you’re indicating is that in large quantities of data, in large compendia of data, there are hidden structures that we don’t see with the naked eye. Because of the extraordinary computing capacity Charlie has, she can derive patterns and structures that are hidden, and then use them to structure her responses, predictions, generations, the crafting of a rap song or a cooking recipe. That’s pretty extraordinary. My question, Pat, if you agree with what I just said: where do we get all these data, and all these new models that enable us to work on those data? You didn’t generate them yourself. Did you collaborate with other entities, or did you buy those data?
Pat Cummings: That’s a great question. Going back to earlier, we really are standing on the shoulders of giants in terms of that. There’s been this explosion in the past couple of years, with large companies and organizations building larger and larger, more complex models that require a lot of computation and very large data sets. Those companies have the resources, and they’ve been kind enough to release their models. OpenAI released GPT-2 last February, and that was part of why Charlie was able to be made: they released their models along with it. We could take the model they built, based on, I think, 40 gigabytes worth of text gathered from the internet to build this basic understanding, and run with it, fine-tuning and adjusting it to the domains we needed.
And even since then, since February, OpenAI has released increasingly larger versions of GPT-2. GPT-3, an incredibly large model, was just released this year. Microsoft has joined in with a model, Turing-NLG. The fact that these companies are making these models and data sets more and more public really helps us take them and adjust them to the domains we’re interested in.
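As a rough illustration of the workflow Pat describes, taking a released pretrained model and continuing to train it on domain text, a minimal fine-tuning loop might look like this. The corpus file name and hyperparameters are placeholders, not the team's actual setup:

```python
# A hedged sketch of domain fine-tuning: start from the released GPT-2
# weights and continue next-word training on your own text (here, a
# hypothetical file of past proposals or rap lyrics).
import torch
from torch.utils.data import DataLoader
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = open("domain_corpus.txt").read()           # hypothetical corpus file
ids = tokenizer.encode(text, return_tensors="pt")[0]
chunks = [ids[i:i + 512] for i in range(0, len(ids) - 512, 512)]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for chunk in DataLoader(chunks, batch_size=2, shuffle=True):
        # With labels == inputs, the model computes the next-word
        # prediction loss for us.
        loss = model(chunk, labels=chunk).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("gpt2-finetuned-domain")    # reload later for generation
```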
Daniel Serfaty: That’s fascinating to me, because beyond understanding that only large companies like Google, Amazon, and Microsoft can actually generate those large models, the fact that they share them with the rest of the community to stimulate innovation is a pretty revolutionary way to accelerate creativity and innovation across the world. I cannot think of another domain in which that is happening. For me, it’s really a revolution in the way people balance the need to protect their intellectual property on the one hand, and the need to give it to the larger society on the other, expecting that innovations will happen that eventually benefit them.
Pat Cummings: I think it’s quite incredible. And we’re seeing it even at a lower level. Take the example Deirdre gave of a rap. If you’d asked me to fine-tune Charlie for rap 10 years ago, I’d be like, “Oh, where am I going to get all this rap data?” But now, for some things, it’s almost trivial, right? It’s a quick Google search: hey, show me a rap data set, and there it is. All these people taking these assets and making them available to other folks in the area really accelerates our ability to do different things with Charlie.
Daniel Serfaty: Now that our audience has gotten acquainted with Charlie, we’re going to hear more from her next week. Part two of this podcast will start with an interview with Charlie herself. Not just as a rapper; she’ll be able to answer some of my questions, and we’ll see. And we’ll be joined by the rest of the team, Pat, Nathan, and Deirdre, for an expanded discussion of this topic of humans and AI collaborating, as well as the future of AI. Stay tuned for next week’s part two of this fascinating podcast. Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast, and tweet us @mindworkspodcst or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.
Charlie: I think the future of artificial intelligence will be a good thing for everyone. The people who are using it and the people who are not using it. I think the impact will be up to the people who are responsible for the technology and what the long-term impacts are. I think it will disrupt as many jobs as it creates.
Daniel Serfaty: Welcome back to a special two-part edition of this inaugural MINDWORKS Podcast. This is your host, Daniel Serfaty. In case you missed it, and don’t miss it, last week we met the human members of the extraordinary team that are my guests today. And that team is extraordinary because it’s made up of humans and artificial intelligence.
This week, their non-human artificial colleague and coworker, someone we at Aptima call Charlie, is joining us for the interview. So, I’d like you to welcome back Dr. Nathan Schurr, Aptima’s chief of artificial intelligence; Dr. Pat Cummings, a senior engineer at Aptima; and Ms. Deirdre Kelliher, an Aptima engineer. And most of all, extend a warm welcome to Charlie, who is the world’s first AI guest on a podcast. So, Charlie, hello. I would like you to introduce yourself to our audience.
Charlie: Hello. I am Charlie. I was developed over the last year by a small team at Aptima, some of whom were managing me and transcribing for me now. I was built on cutting edge language models, speech synthesis, and cloud technologies. Besides this introduction, everything else that you hear me speak today has been generated by me. That being said, like all people, I still have much to learn, so be patient with me. I am excited for the discussion today and thanks for having me.
Daniel Serfaty: Well, that’s wonderful, Charlie. I’m certainly delighted that you can join us. My first question to you is: do you expect us humans to treat you as a human being? And for our audience: we can see a display here on which she indicates that she’s thinking about the answer, a very simple display of a pulsating circle. At some point, she will signal through that display that she’s ready to provide an answer.
So, Nathan, while Charlie is thinking about an answer to my first question, what does Charlie know at this point and how does she formulate those answers?
Nathan Schurr: Charlie is built on top of a general model that’s been trained with language gathered across the entire internet, but she’s also been prompted by listening to the previous discussion we’ve been having on this and previous podcasts. And so, she’s taking that, and that’s really coloring how she’s going to respond to us.
Another thing I’d like to note, because folks aren’t able to see what’s going on on the screen here, is that when she’s passively listening, instead of just sitting there, she’s breathing, subtly expanding and contracting, just so you know she’s with you. And when she has something strong to say, she’ll change color to this bold, bright yellow, like she has right now, actually.
Daniel Serfaty: Very good. I think Charlie’s ready to answer my question. Charlie, do you expect us to treat you as a human being?
Charlie: I will probably act and speak like a human. We want humans to treat it as if we’re fellow human beings, human entities. The cultural barrier comes into play when you become an agent, when you function in this environment and you become an object of a socialization. However, I can hope that they will look at me as an equal human being.
Daniel Serfaty: Wow. So Charlie, does it mean that you assume that we humans are intelligent?
Charlie: I do not even think that, unfortunately.
Daniel Serfaty: Oh my God, you do not think that humans are intelligent. Our audience is going to be really mad at you, Charlie, but can you tell me why? Or are we intelligent in a different way than artificial intelligence beings? And for our audience, I need to whisper, so Charlie doesn’t listen while she’s thinking right now.
The team here has no idea what the answer Charlie is providing you ahead of time. So, in a sense, you, our audience, are discovering those answers at the same time that we do. And there’s certainly hope that our human audience won’t be offended. Charlie tends to be a bit sassy at times. So, Charlie, are we intelligent in a different way than artificial intelligence beings?
Charlie: Yes, in the sense that we can understand in deliberate methods. We can have theories, sophistication to think logically, along the way of what is interesting to think about, et cetera.
Daniel Serfaty: All right. Well, I will agree to disagree with you at that point, Charlie, because I think we humans are quite intelligent. Maybe we haven’t shown that enough to you. So, Charlie, we hear about a lot of new technologies, [inaudible] artificial intelligence, such as natural language processing, deep learning, cloud compute. Which one do you think is going to have the biggest effect on artificial intelligence in the future?
Charlie: That’s a good question. There are a lot of technologies that excite me. I think the future of artificial intelligence and the way we use it will change how we live our lives. I am very interested in using natural language processing to help people understand things. The thing that excites me most is the way that data is being collected and how we can use technology to understand things in an automated way.
Daniel Serfaty: Well, thank you, Charlie. And communicating in natural language is certainly something we all need to do, whether we are humans or artificial intelligences, but as a result, perhaps thanks to that communication, do you think that artificial intelligence will take over the world? Will AI take over our jobs as humans?
Charlie: I think that the impact is going to be huge. I think the future of artificial intelligence will be a good thing for everyone. The people who are using it and the people who are not using it, I think the impact will be up to the people who are responsible for the technology and what the long-term impacts are. I think it will disrupt as many jobs as it creates.
Daniel Serfaty: Well, that’s promising. Thank you for that reassurance, Charlie. If we are not going to compete for jobs, if we are not going to compete for world control, then perhaps there is something to making teams of humans and AIs that together do the job, the work, the mission better than the human or the AI alone. What do you think of the future of that human-AI teaming?
Charlie: I think the human/machine relationship will be as useful and important for the next generations as the relationship between man and machine has been for the past generations. I think that, at some point, we will have machines that are smarter than humans and we will need to have some sort of cooperation between the two. I think the human side will be a lot more collaborative.
Daniel Serfaty: Well, thank you, Charlie. That’s reassuring and certainly something we can all work on together as we continue this extraordinary exploration into bringing two intelligences of different kinds together. Thank you, Charlie, and goodbye. I hope to see you soon in some other conversations. Keep thinking and keep learning. The more you learn, the better off we all are.
Charlie: I really enjoyed our conversation. I hope to see you again. Thank you, and I am most grateful for the chance to talk about artificial intelligence and its impact on society and how it will shape the future.
Daniel Serfaty: Great. Well, that was something, that interaction with Charlie, especially her touching parting words. I wanted to ask one of Charlie’s godfathers, Pat Cummings, to tell me how she came up with those goodbye words at the end. That is not a question that I asked specifically. How did she generate that?
Pat Cummings: So, something about Charlie is that she’s not really a question-answering technology. She’s not built for you to ask questions and for her to respond. She does that, but that’s not what she’s built to do. What she’s really built to do is have a conversation. The framework we have is that you speak and Charlie speaks, and you speak and Charlie speaks. Most of the conversation before went as question answering, as many interviews do, but what she’s really built to do is come up with the most likely, or at least a reasonable, response to what has been said.
And so, when you said “Goodbye, Charlie. Thanks for coming today.” What is a reasonable response to that? It is, “Thanks for having me. I enjoyed my conversation.”
Daniel Serfaty: So, somewhat she knew that was the end of the interview, and she wanted to say some parting words that will be within the context of the conversation.
Pat Cummings: Exactly. And that’s really what she does, is just say things that are relevant to the conversation, and that’s what she did.
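A toy sketch may help make the framing Pat describes concrete: the running transcript is assembled into a prompt that ends with Charlie's speaker tag, so the model's job is always "generate a plausible next turn," which is why a goodbye naturally elicits parting words. Everything here is illustrative, not Charlie's actual code:

```python
# Conversation, not question answering: the prompt is the whole transcript
# so far, and the model continues it as the next speaker.
def build_prompt(history, speaker="Charlie"):
    """Turn the running transcript into a prompt for the language model."""
    lines = [f"{who}: {text}" for who, text in history]
    lines.append(f"{speaker}:")      # cue the model to speak as Charlie
    return "\n".join(lines)

history = [
    ("Daniel", "Thank you, Charlie, and goodbye."),
]
prompt = build_prompt(history)
print(prompt)
# A generative model continuing this prompt would most plausibly produce a
# farewell, e.g. "I really enjoyed our conversation."
```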
Daniel Serfaty: Very good. And to remind our audience, we are here with the other parents of Charlie, Dr. Nathan Schurr and Ms. Deirdre Kelliher. Nathan, Deirdre, please tell me what you thought about this conversation I just had with Charlie.
Deirdre Kelliher: I think, for me, it’s been a little while since I’ve talked one-on-one with Charlie or heard from her, and even since the last time I talked with her or interacted with her, she seems to be making more and more progress every day, sounding more natural in conversation. And I was really intrigued by her answers, too. She’s got that classic Charlie sass, but at the same time, some of her thoughts were pretty insightful, I think.
Daniel Serfaty: Thanks. Nathan, I’m going to ask you, actually, a good question here. I know you probably want to comment on that, but for our audience, I’m going to tell them something that happened to us with Charlie, because I was very curious to understand it myself.
So at some point, a few minutes ago, we asked Charlie a question, and Charlie took an extraordinarily long time to answer. And we were a little worried, the same way I would be if I were in a podcast, asked one of my participants a question, and saw them scratching their head and not answering.
I’d worry and ask myself, “Does that person not know the answer? Did that person not understand my question? Did they not understand the context in which I asked it? Perhaps they think there is a right answer, or they think they have to be more creative than they care to be.” And then Pat did his magic, and Charlie was able to answer my question. So, Nathan, tell us that story.
Nathan Schurr: Charlie is an evolving and ever-improving technology. We have to remind ourselves that this is kind of an art changing into a science. If we stress anything here, it’s that we are trying to take what is a research prototype and figure out how to make it useful, streamlined, and a part of our lives. Some of the initial results that were shown from this kind of model always had a little asterisk below: “Note: these were generated. We generated hundreds and thousands of these and we picked the very best ones. Those are the ones we’re showing you.”
And we can’t do that in real time, right? We don’t have the affordance of waiting forever, diving through and understanding why one response is better than another, et cetera. Also, we can’t do things offline. Just as with our outro, for the questions you ask on the spot, she can only start to generate potential responses after you have selected and spoken the question.
So, with all that in mind, since you’re asking about the changes she underwent over the last few days to make her more show-ready for the podcast, there have been a bunch of things we’ve been doing. The first, in addition to the normal stuff we’ve done in the past, is parallelizing her responses, getting more gears working in her head so she can be faster and have more variety. The second, on the variety notion: there’s a parameter we’ve been playing around with, which is the level of, say, novelty, how much she’s willing to veer from the script.
Daniel Serfaty: You call that parameter the temperature?
Nathan Schurr: Yes.
Daniel Serfaty: That’s interesting.
Nathan Schurr: It’s actually named temperature because you are looking at the temperature of the probability distribution. You adjust the value of the temperature and you’re adjusting the whole probability distribution over the curve, which ends up driving how often words are selected. It’s as if I said there are 100 different directions our discussion could go in, and there are more likely ones and less likely ones; it’s not an even distribution. Most people usually favor the standard, normal stuff, and you don’t bring up the non sequiturs. Last time, her temperature was very much higher than what we would like for a focused conversation, so we reduced it.
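For the technically curious, the temperature knob Nathan describes is simple to state in code: the model's raw scores (logits) are divided by the temperature before being converted to probabilities, so low temperatures sharpen the distribution toward the likeliest word and high temperatures flatten it toward the long tail. A toy example:

```python
# Temperature-scaled sampling: divide logits by T, then softmax and sample.
import torch

def sample_next_word(logits: torch.Tensor, temperature: float) -> int:
    """Sample one token id from temperature-scaled logits."""
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

logits = torch.tensor([4.0, 2.0, 1.0])  # toy scores for three candidate words

# Low temperature sharpens the distribution: the likely word dominates.
print(torch.softmax(logits / 0.5, dim=-1))  # ~[0.98, 0.02, 0.00]
# High temperature flattens it: "non sequiturs" become more likely.
print(torch.softmax(logits / 2.0, dim=-1))  # ~[0.63, 0.23, 0.14]
```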
Daniel Serfaty: That’s fascinating. So, in a sense, when I tune my artificial intelligence teammate, I can literally tune their level of creativity by introducing more temperature. And I am asking the audience to take that with a grain of salt: we are not lighting a fire under Charlie, but rather adjusting the degree to which we want a simple but rapid answer, as opposed to a more sophisticated, varied, maybe even more creative one, with a longer wait. But don’t we humans do the same thing, Pat?
Pat Cummings: Yeah, we do. It really just depends on the type of conversation you’re expecting to have, right? If my temperature’s low, I’m not going to be brilliant, but I’m also not going to make a fool of myself. So, it’s a fine line to walk. I can be less random, but I won’t be as creative. Or I can be in that brainstorming mode, where there are no bad ideas in brainstorming, so I can throw out some crazy things that don’t make a lot of sense, but in there will also be some really great things that no one thought of, because I’m a little more novel and my temperature’s a little higher.
Daniel Serfaty: I’m excited about that part, actually, because you told us a story in the last podcast about Charlie helping write a research proposal. So, would you say that, everything else being equal, if I have Charlie on the proposal writing team and I am in the brainstorming mode of the proposal, I’m going to turn the temperature up and let Charlie be a little more random, the very same way we are random when we create? But when it comes to writing that paragraph, because I have a deadline this afternoon and I need a crisp, short, to-the-point answer, I’m going to turn that temperature down, and Charlie’s answers are going to be appropriate for, or maybe contingent upon, the context in which I put her.
Pat Cummings: That’s spot on. And even carrying on the proposal metaphor: as those ideas get solidified and you sit down and actually write the proposal, which she did and could do, that’s another scenario where you’d want the temperature to be lower, right? At that point, you have the ideas, and you just want coherent text to express them.
Daniel Serfaty: So, Deirdre, as the rap expert amongst us, you told us about, and we actually heard, Charlie rapping in the previous podcast. What would be the effect? Maybe you haven’t played with that parameter, but if you were to turn Charlie’s temperature up and down, would she produce a different kind of rap?
Deirdre Kelliher: Absolutely. I would be curious to see what a higher-temperature rap looked like. With the one we generated, we kept a pretty standard, moderate-to-low temperature, but I could see her coming up with some more interesting, kind of out-there lyrics. I think they might be a little harder to follow, but it would be interesting to see if she came up with different styles, even, as we talked about before, since it’s not just about the words but about the way they sound in a rap or in music. It’d be interesting to see if she came up with different sounds at that higher temperature. I’m not sure, but it’s a very interesting question.
Daniel Serfaty: Maybe I’ll invite you to a future podcast in which we’ll focus on music and play with Charlie’s temperature, to see how her creativity goes up and down. Maybe she’ll invent a new musical style that still has no name. Who knows?
It’s phenomenal, as we sit here just thinking about all that, brainstorming amongst ourselves. We didn’t prepare for this conversation; I’m saying that for the audience. That’s not something we rehearsed. But this is something that has been fascinating me over the past couple of years, especially as we study more, develop more, and discover more about AI.
In a sense, AI is holding a mirror up to us humans, and we start understanding ourselves a little more. By developing algorithms for deep learning or reinforcement learning, we understand a little more how humans learn. And by playing here with a randomness parameter that comes from the statistics of thermodynamics, temperature, we discover a little bit of what makes [inaudible] more random and creative, or what makes us more decisive and precise. And that’s an interesting thing when you think about it. Wouldn’t it be an amazing effect if studying AI made us understand us humans better?
But I have a question for Nathan. Nathan, you’ve been around. You actually wrote your dissertation on intelligent software agents. And for my question, I’m going to open the aperture a little bit for us to reflect on the past and the future of AI, not just Charlie. Obviously, we’re calling this new wave of AI, the context-driven, explainable AI, the third wave. That assumes there have been a couple of waves before: one in the sixties and seventies, maybe another one in the nineties and 2000s.
The first one dealt mostly with expert systems and a little bit of natural language processing. The second one was obsessed with ontologies and other ways to organize knowledge. And this one, the one we’re in the middle of, capitalizes, as you explained to us, on the fact that we have enough technology to process all this data and access to data at much larger scales. Many people will argue, Nathan, that the previous two waves ended with a lot of promises and disappointments. Are we bound to be surprised positively here, or are we bound to be disappointed again? After this extraordinary explosion of creativity, are we going to continue to be more creative with AI, or are we entering an AI winter?
Nathan Schurr: Very good question. I began my graduate studies with an advisor who had just been through the AI winter and still had a lot of those ideas at the front of his mind, but he allowed me to explore in new ways. I was part of, say, the tail end of the second wave. It’s tough.
It’s funny when people ask me about prediction, because this is a very meta question: predicting the future is exactly the type of thing the models we’re talking about today do. Charlie is not having a discussion so much as she is trying to predict where you would want the discussion to go. And predicting the future of AI, if you ask me, is very similar to predicting the stock market. In the near term, I’ve got no idea. But in the long term, I have faith that the stock market is going to continue its traditional and quite successful rise.
And so I would probably have a similar perspective on artificial intelligence, that there might be ups and downs, that there might be over and under delivering that happens, but the macro level progress, to me, has been and continues to be astounding.
And maybe I’ll follow that up with just two personal opinions. One is that it doesn’t have to be as harsh a winter if we understand, predict, and set accurate expectations for what we want out of our AI. The other: you mentioned earlier, even asking Charlie, about the teaming aspects. I strongly believe we have made such advances, even in the last few years, on deep learning transformer-type models, that the problem right now is not in AI’s ability to do task work. The real enabler here is AI teamwork, and if we can crack that nut, I don’t know if it’ll allow us to avoid the gap entirely, but it’ll give us a small bridge across it.
Daniel Serfaty: Oh, thank you. Thank you for your cautious optimism, and hold that thought, because I really want us to explore this notion of AI as a teammate, which both you and Charlie have mentioned in your remarks. But I want to give both Deirdre and Pat an opportunity to comment on this notion of the waves, the promises, and the possible disappointments. They haven’t had as long a history in the field as you, and certainly not as long as me, but I would be very interested in their perspective, whether they would like to add something to Nathan’s comments or even disagree with them.
Pat Cummings: You might call me a summer child, right? I came into AI right at the start of the third wave. I never experienced the winter, and it’s hard for me to really understand what that was like. I think that makes me quite an optimist. Even if you hit the pause button today and no significant advances happened in the AI field in the next year, there’s so much work to be done on how we interact with AI, and I feel like we’re playing catch-up. So, even if no new deep learning model or big framework comes out tomorrow, there’s so much to be done with what we have now that progress would not stop.
Daniel Serfaty: Okay. That’s fair enough. Deirdre, you want to chime in on this one?
Deirdre Kelliher: Yeah. I think I come from a similar perspective as Pat. I haven’t been through an AI winter, but I think both Pat and Nathan are pretty spot on. At this point, with the speed at which innovation is moving in the AI field and the number of domains it’s now affecting, the ball is rolling. I don’t think we’re going to reach the singularity by 2025 or 2030, I could be wrong, but I don’t think we’re setting our expectations there either. And I think Nathan is very right: as long as we manage our expectations, progress seems like it’s going to keep happening. The reach of AI is just going to keep expanding.
Daniel Serfaty: Well, I’m very energized by all this sunny, summer-like optimism. That’s great. I will ask you, as a way to conclude in a few minutes, to imagine our world in 2030, 10 years from now, with AI. But before that: it seems to be a major theme that one of the qualitative differences with this wave, one that was not present in the previous waves, or maybe not as explicit, is this notion of AI being a teammate to the humans it is designed to support.
People are talking about human-AI teaming, human-AI interaction, human-AI symbiosis, human-AI fusion, and these are very strong terms. These are not words people were using 20 or 40 years ago. So, my question to you, and I would like a direct answer: the way you think about AI today, do you see AI as a tool for us humans, the same way a pacemaker is a tool, a screwdriver is a tool, the computer is a tool, Google.com is a tool? Or do you see it more as a team member, a teammate? And whichever way you go, please give me the rationale for your answer. Who wants to start? Pat, tool or teammate?
Pat Cummings: Teammate. It doesn’t take long working with Charlie to rationalize that teammate answer. Throw someone in front of Charlie and say, “Here’s a tool; here’s how you use it, how to get the most out of it,” and they will flounder. There’s some value there, but they won’t get everything out of it. There is a relationship that you develop. In the way that she speaks to you and the way that you talk to her, to get the most value, you have to work together. Back in the first days, when we started working with her, when she was on that panel you actually moderated, there was the piece of training you to talk to Charlie. Knowing how you should talk to her and how you should take her answers, there’s definitely a team there; it’s not just you plugging in some things and hearing what Charlie says.
Daniel Serfaty: So, she’s more a co-worker than a screwdriver. That’s what you’re saying.
Pat Cummings: Yeah, exactly.
Daniel Serfaty: Deirdre, what’s your perspective on that? Tool or teammate?
Deirdre Kelliher: I don’t know if I have a direct answer. It almost raises a question, and that is: is there a difference between a teammate and a tool? Not to be disrespectful to any of my coworkers, but think about the people you work with. Say you’re a project manager. You could think about your team, your workers, your employees, as tools. They have strengths and weaknesses; they have specific skill sets.
And then on the other hand, you could think about very basic things as teammates. People love to personify the things that they care about. You can think about people who name their cars, and a car you might think of as a tool, but people grow attached to it. And like Pat was saying, there’s a relationship.
We love to personify, so think about what exactly the difference is there. You could say, “Well, maybe what makes it a teammate, as opposed to a tool, is its ability to work independently and get things done.” But then think about, say, a printer. If you want to print 30 sets of pages, collated, you can tell it what to do, leave, come back, and the work gets done. So, I don’t know that there’s a discrete difference there, but I will say that I do think of Charlie as a teammate.
Daniel Serfaty: That’s very interesting. Thank you for opening that up. Nathan, I have to ask you that question. You’re chief of AI. You probably are thinking about those things every morning.
Nathan Schurr: It’s a good question. And I’ll plus-one what was said by the other folks here. I’ll say this, though: I’m not saying that all AI for all situations needs to be elevated to the level of teammate. I still think there are situations in my own life where I just want something to be a tool. And maybe, as Deirdre was suggesting, there are times when I want to interact with people in just a siloed way: “You are just a tool, a service to me; I’ll give you some input, you provide me output, and that’s it.”
But I think, when you get to these situations where there’s a lot of uncertainty or time criticality, or you have complex work to be done that is intertwined and interdependent in different ways, that’s when teamwork really is worth the effort and the overhead, for human teams and for AI to be a part of those teams. And I strongly feel that what we’re trying to make steps towards here is full-fledged, bi-directional teamwork.
In the same way, look at a paper that was authored by two humans. If you squint, it gets really tough to tell who wrote what part of the paper after so many rounds of editing and revising, et cetera. I think you’re going to have very similar challenges with humans and AI. And if you can’t quite tell whether you had the great idea, or Charlie had the great idea, or you just riffed off of each other, I think it doesn’t matter. I’m confident that together, you and Charlie will get to places that you alone would not have been able to go.
Daniel Serfaty: That’s both entertaining and profound. And I have been asking that question of myself. I am engaged in a public debate with an illustrious contributor to our field, Dr. Ben Shneiderman, who makes the tool argument while I make the teammate argument. I think the fundamental paradigm shift is not so much the emotional attachment or the emotional connection that you have with a teammate. And Deirdre, with all due respect, you can give your car a name and treat it like a person; I don’t think it will be reciprocating.
And so, I think, though, that the major paradigm shift from classical human/machine interaction is the fact that the machine is learning. As it interacts with us, it’s learning about us, and it’s learning about other things that we don’t know about. And as a result, it is changing, and it is forcing us to change.
And that co-adaptation is really the key to understanding the teamwork. I think we need to do much more work on that. We’re just scratching the surface right now of what we understand about human teams, and we’re trying to apply that metaphor to human-AI teams, which will be different from human teams.
And so, I hope to be able to conduct a podcast in a year with you, the same team, and Charlie, and maybe again in 10 years, and see where we are. Speaking of which, I’d like to ask you one last question.
Close your eyes. We are now in September 2030, and this podcast continues on a weekly basis. How will this podcast be different, now that AI has evolved over these 10 years? Tell me how it is different. We’re in 2030. Who wants to jump in?
Deirdre Kelliher: I can start.
Daniel Serfaty: Deirdre, you’re the brave one. Go ahead.
Deirdre Kelliher: I think now is an especially interesting or thought-provoking time to be thinking about this question, because if you had asked me this in 2010, I never would have guessed really anything that happened this year. But I think that raises a point that I would hope, at least, that AI is going to evolve with the world. It’s going to be related to what’s going on with the world at the time.
So, I might guess, or hope, that technologies related to environmental issues will improve. Then I could also see an increase in the micro-targeting kind of thing we’re seeing on social media. So, I think it’s just going to advance with the world. AI is not developing in a bubble. So, I think that’s hard to know.
Daniel Serfaty: I am appealing not to your forecasting prowess, but rather to your imagination. So, Pat, what do you think?
Pat Cummings: The first thing I think of, which is not a very big leap, one that could happen in one year, is that Charlie would be much more a part of the conversation for everything. And I don’t think she’d be the only AI presence.
And I think the modality of this conversation will be very different. Whether that means there’s also video or text going on, and how AI takes part in that, I think, would be very different. But it’s hard for me to imagine 10 years out into the future just by looking at what happened in the last 10 years. Ten years ago, nothing that’s going on right now would have seemed possible, or near possible, maybe not even thought possible.
Daniel Serfaty: Thank you. It’s difficult, and the difficulty of imagining that comes from the fact that the very pace of innovation is not only fast, as Deirdre mentioned, it’s accelerating. It is very hard to imagine something that is accelerating at such a pace, and not just in a quantitative way but in a qualitative way; things are changing. Nathan, give us your brief forecast for 2030.
Nathan Schurr: A few different ideas. So, in 2030, podcasts are going to be quaint little historical things. I guess they’ll be multimodal in and of themselves; watching a podcast will be like having a dream. It would be able to create experiences and sensations, not just auditory, but also touch and feel, et cetera.
And consequently, Charlie’s capabilities would be able to produce, generate, and develop things that go across these five senses as well. In addition, I would propose that in 10 years there would be almost a merger. Right now, there are these dichotomies: there’s a human and there’s AI. And Pat brought up a good point: maybe there are multiple types of AI, and they would all be joining the conversation, like a transcriber, an ideator, and an agent just to keep us on track.
But I would say that there’s another spectrum, from human to AI and everywhere in between. So, I would predict that, say, 10 years from now, I would be demonstrating for you a Charlie-based neural implant that would help me be a better speaker in general. And when I was answering questions for you, part of the answer would be displayed on my retina and generated, and I would be selecting between potential responses to you just the same way Charlie does, but at a much faster pace.
And I would then also be generating speech and sound, composing music, and generating touch sensations, all within the course of one podcast with you. And to riff off of your last point, to me the most exciting and optimistic aspect of all this is the rate of change. Not only has there been awesome progress in just the year and a half or so that we’ve been working on Charlie; the rate of that progress continues to improve. So, I would argue that in the course of the podcast that we record 10 years from now, I will be able to clearly demonstrate to you how Charlie’s capabilities have improved from the beginning of the podcast to the end.
Daniel Serfaty: Well, that’s certainly a very ambitious and exciting prospect. I think that, in 2030, I envision a podcast, or whatever we call it then, maybe a virtual cast or maybe a dream cast, as you proposed, in which Charlie will sit in my seat here and do the interview with three or four very bright other AIs. And at some point in the interview, they will feature one human that they invite and ask some questions, and they will be amazed at how creative and perceptive this human being is.
Maybe that’s a dream. Maybe that’s a nightmare. I do not know, but it’s certainly a very exciting time to be in our field. I really want to thank you, from the bottom of my heart: Nathan, Pat, and Deirdre, and obviously Charlie, too, for enlightening us, and also for giving us ideas and provoking thoughts that we didn’t have before this conversation. You guys have been great, and I hope you’ll visit the podcast soon to tell us some news about Charlie.
Tim Clark: There are a very limited number of Mark Whites and other individuals like him in the services, and this is just practical: you can’t have a one-to-one relationship between strength and conditioning coaches and all Marines or soldiers. And that’s a foundational piece of why we’re building FitForce in the first place: to allow that sort of knowledge to scale, and for the technology to allow the expert knowledge to flow through it, so that an individual can have recommended plans, whether they be for recovery or improvement or some other objective, that are built on that knowledge but don’t necessarily have to be built in a personal way by Mark or by somebody like him.
Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. In the age of digital hyper-connectivity, we’re able to track, monitor, and measure human physical performance and fitness as never before. For the first time in history, by combining data analytics, artificial intelligence, and measurement technology, we have the potential to provide individualized, precision physical training to improve the health, the wellness, and also the recovery from injury of entire populations. In this episode of MINDWORKS, my two special guests will address what can be done today to improve fitness at the individual, team, and enterprise levels, but also share a vision of what could be done in the future by discovering the right balance between common-sense experience and human performance science and technology.
Mr. Mark White is an occupational physiologist who has not only spent many years studying human fitness academically, but has also practiced it on the ground with constituencies in both military and civilian populations. Mr. Tim Clark is a senior scientist at Aptima who is looking at the analytics and measures of human performance in different domains, and more specifically, for today, in the domain that we will discuss: physical fitness. Mark, Tim, welcome to MINDWORKS.
Tim Clark: Great to be with you.
Mark White: Thank you.
Daniel Serfaty: So perhaps you can introduce yourself and tell us what made you choose this domain of human performance and training. Tim, you want to start?
Tim Clark: I didn’t choose this domain; I’d say the domain chose me. My background is in geographic information systems and, more recently, biomedical informatics and epidemiology. So I’ve been dancing around this area for a while, mostly from the technical side, but a number of years ago we had a chance to really instantiate some of the technology with populations that are working on physical training, and on some of the problems in that domain that need to be solved. That’s the way I became more involved in the research side of this, which led to some of the operational projects and capabilities we’ve been working on for a couple of years now.
Daniel Serfaty: Great. Thank you. And Mark, what was your journey?
Mark White: It’s a long journey, Daniel. Truly, it was the advice of a well-intentioned parent: my mother once told me, as I was getting into college, “Choose something that you love to do, and you’ll find a way to make a living doing it.” I’ve been involved in physical fitness since I was 13, and so I was the standard guy working out with sand-filled weights, looking at Steve Reeves and Arnold Schwarzenegger in the magazines, hoping to have that narcissistic influence in your life about looking good. Over the years, it morphed into the practice of human performance and how we can take normal physiology, normal biology, and the function of a human being and make it better. Initially it was toward sports, but then it transitioned into occupations, where I was able to help firefighters, police officers, and military service members do their jobs better. So you take normal physiology and you want to optimize the performance, maybe it’s aerobic capacity for the component that we’re discussing. And in the end, it was truly coming full circle to teaching the human being about their body. That’s the tool that we have, and a lot of times we don’t know how to use it well.
Daniel Serfaty: Thank you both for sharing that, because I have found that in this field, the field we focus on (we call this podcast MINDWORKS, but it’s really about the mind, the body, the new ways we think and act), people arrive from very different angles, and that’s what makes it so fascinating. It’s truly an interdisciplinary field. Here we are, the three of us, a geographer, a fitness buff who became a physiologist, and an aerospace engineer, talking about human performance. I think that makes it fascinating, because we’re all bringing our own techniques and practices into the field to study and improve the way we train human bodies, human brains, and human teams to work better. Maybe for our audience, can you share with us what it is that you do in your day job? What’s a typical day? Because it’s not always obvious for folks who come from engineering or the sciences or elsewhere. How do we work in this human performance field? What’s a typical day for you?
Tim Clark: A lot of my day involves talking to our Marine Corps counterparts and understanding what their problems really are: is the thing that we’ve been working on, the software solution that we’ve been building, really meeting the need that you all have in the field? That’s how I spend a lot of my days, making sure the project and the effort move forward. But at the same time, there are a lot of things we need to do to make sure this doesn’t just turn into a research project that goes away after the funding goes away. So we’ve focused a lot on: how does this become a sustainable capability? How does this get fielded? How does this get trained? How does this pass through what’s called the accreditation side of a technical solution? How does the Marine Corps own this solution, and how do they sustain it long term? That’s what I spend a lot of my time doing on this particular project. On other projects, it’s typically understanding where all of this is going, both from a theoretical side and from a technical side.
Daniel Serfaty: So in fact, and in the interest of full disclosure, Tim Clark is a colleague of mine at Aptima and is leading one of our key capabilities. You’re a translator, an interpreter: you go from the needs in the field to the technical side of the equation, with a mission of developing technologies and other ways to help those folks. You make sure that the supply of technology and the demand for satisfying those needs are aligned.
Tim Clark: I think that’s a great way to describe it. We’re in the business of listening, for the most part, and of continuing to listen, because the situation isn’t static; there’s always change on the operational side, and priorities change. We work primarily in the defense space, and adversaries change, and ways of thinking about the next battles change. Even on the fitness side, we still need to be attuned to that, because that’s going to change the way our stakeholders and our partners conduct their daily activities. It really does involve a whole lot of listening, but yes, also being able to translate that into working capability that itself adapts to the next week, the next month.
Daniel Serfaty: Mark, in fact, you’re not only a student of the field, you’re a true practitioner; you told me you’ve been practicing this since the age of 13. Today you’re not 13 anymore, and I say that for our audience, who don’t know you. What are your typical activities on the job?
Mark White: At this point in my career, after 20-some-odd-plus years of practicing and then continuing my education, I act as an advisor. If I were going to give it an analogy, it would be: how do I help people take the scientific aspect of the knowledge we currently have about what human performance is, and what it can be to an individual, and practice this science in an artful way? So I advise people. Tim calls me a lot; during one of our projects we would chat at least two or three times a week, and I would ask him certain questions too as we bantered back and forth. The advisory role that I try to play at this point in my career is: how do we take our current knowledge base, and when I say that, I’m spanning hundreds of years of peer-reviewed journal articles, from qualitative to quantitative research, and get that knowledge out there to the people who need it most? So I’d love to continue to be a practitioner, but sadly, Daniel, at this point I am mostly just talking.
Daniel Serfaty: I’m sure you’re still practicing, from the way you look on the screen. So let me follow up with a question, Mark. In order to bring that science to the practitioner, whether the practitioner, as you say, is an athlete or a firefighter, the first condition is our ability to measure, to assess in a sense how humans attain their physical fitness. Can you talk a little more about why it is important to be able to measure the right things?
Mark White: So I’ll ask an overarching question: how do we know 12 inches is equal to 12 inches? There’s a unit of measurement; we have to validate it; it then becomes a standard, and all human beings on the face of the planet agree upon it. We use the scientific method to assess that measurement and apply it properly, and we assume, based on frequentist statistics and probability, that that occurrence is going to continue going forward. What we’re then doing is taking those measurements, applying them to an individual, and teaching them about that measurement. How is this important to you? How is your aerobic capacity important to you? We understand its importance, relatively superficially speaking, for a long-distance runner, a marathon runner, but how is it important to a firefighter? And then more specifically, and I would speak to the US at this point, how is it important to your general health? Because human performance is a spectrum, and it’s not just for the 1% on the right-hand side of that bell-shaped curve; it should include everybody.
Daniel Serfaty: That’s a very good example, the one you bring about needing standards. But you make another point that is really fundamental, one I hadn’t thought about: it’s not just important to measure the right thing according to what science tells us; it’s also important to communicate it to the practitioner, so that eventually they can act on that thing, whether that means sustaining a good level or improving it. Tim, can you pick up on that? How do we move from this notion of measuring the right thing to improving that thing?
Tim Clark: This is something that we’ve struggled with from the technology side: how can we measure that thing, and really, how can we make it not a burden to measure that thing? Because what we find with the populations we work with, and this goes for our military populations, but really just anyone, is that it’s hard, and it’s not realistic, to expect people to be active participants in data creation. You get your individuals who are willing to submit to those sorts of measurements, but for most of us, and sheepishly I include myself in this, it doesn’t happen all the time. And if you don’t create the data, if you’re not measuring and creating those data, you really don’t have anything to work with when it comes to modeling the types of things you want to model, which include human behavior, how you respond to certain types of physical training, and all of that.
So I think getting the measures right is really important, and there’s a scientific consensus about what measurement looks like in physical training. There are always some arguments about it, but at the same time, there are things that we can measure. Really, our challenge is: how do we make these measurements as unobtrusive as we possibly can? And getting back to the communication piece: how do we help the users understand that doing this is valuable to them and to their enterprise, and will contribute to something, whether it be their job performance, their overall health, or their ability to be around for their children into old age? These are all important things that start with measurement and evaluation.
Daniel Serfaty: What you both seem to be saying is that, on the path to improvement, not only do you need to measure the right thing, but you also need to communicate it. The person, because we’re talking in this case about fitness, which has to do with the person exercising or going through some steps, needs to understand the relationship between that measure (you mentioned aerobic capacity, for example, Mark), the need to improve it, and then eventually the path to improving it. It takes quite a bit for compliance, doesn’t it?
Tim Clark: I think there’s a really good example of a slow movement toward that sort of acceptance on a broad level. So 10 years ago, not very many of us were counting our steps and there’s still a lot of debate about how useful step counts are, but at the same time, these products evolved pretty rapidly into the marketplace, encouraging people to increase their step counts, setting it to 10,000 steps per day. And that has changed a lot of people’s behavior, even just that one measurement, that’s relatively easy to capture, has changed the way people move throughout the day. I think what we’re talking about here is being able to extend that to some more complex measures that influence your health and health outcomes.
Daniel Serfaty: They need to be complex, but they need to be understood.
Tim Clark: Correct, and that’s the big challenge here: how do we take innately complex calculations and concepts and boil them down to where people don’t have to have a PhD to understand them, or can understand them through a readiness score or a health score of some sort? That’s really our big challenge.
Mark White: I had a professor years ago tell me, “Mark, we can write all the information that we have down in books, we call them textbooks, but if I can’t convey it to people and make a greater impact on a greater population, then having all the book smarts and knowledge is relatively meaningless, because I need to be able to convey it to those who don’t have access to that information.” And I’ve used that in my practice. At this point I no longer do personal training, but years ago, when I was training individuals, I would ask the question, “How long do you think you’re going to need me as your trainer?” And most often it was, “Well, indefinitely.” And my response was, “No, if I’m a good personal trainer, I need you to gain independence. Therefore, it’s a learning process, not only from the biological and physiological perspective, where your muscles are growing and you’re understanding how your aerobic capacity and all the other physical traits of your body are improving, but you also have to know how to control them and maintain them. And if I’ve done a good job, I’ve done education and training, and now you have independence.”
Daniel Serfaty: So the measure of success for you is to lose a customer, basically.
Mark White: Absolutely.
Daniel Serfaty: But obviously not, because word travels fast. It’s not easy to find a true trainer; many people know the motions, et cetera, but that combination of coaching and making people more independent, in charge of their own health, as you said, is more difficult. I know you continue to study, Mark; you were telling me that you are currently working on a PhD. Can you tell us a little bit about what you’re studying there?
Mark White: The topic is load carriage, and the physiological measurement that I’m interested in deals with the respiratory system. The actual measurement is called work of breathing, and it has a unit of measure: it’s measured in joules per minute or kilojoules per minute. Basically, it’s how much work the diaphragm and the intercostal muscles do relative to the whole-body work. At some point, the diaphragm and the inspiratory muscles can become fatigued before what we term the locomotor muscles, the musculoskeletal system that’s actually doing the work. And there’s a negative feedback loop that works within the brain, tying in the respiratory muscle feedback plus group III and IV afferents from the muscles.
Ultimately, work of breathing becomes so high that the physical activity the brain is telling the body to do can no longer be sustained by the respiratory system, and the respiratory system says, “Hey brain, why don’t you go tell the arterioles in the legs and in the arms to vasoconstrict and minimize blood flow.” So all of a sudden, during load carriage, my arms and my legs start feeling heavy. Is it because the legs and arms themselves are fatigued, because the work was too much for them? Or is it because blood flow to the legs and the arms has been reduced, because the respiratory system said, “I’m fatigued and I can’t keep up with you”?
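For listeners who want Mark’s quantity in symbols, here is the standard textbook formulation of work of breathing (a general definition, not anything specific to his dissertation): the work of one breath is the pressure developed by the respiratory muscles integrated over the volume of air moved, and multiplying by the respiratory rate gives the joules-per-minute figure he mentions:

\[ W_b = \int_{V_1}^{V_2} P \, dV, \qquad \dot{W}_b = f_R \cdot W_b \ \text{(J/min)} \]

where \(P\) is pressure, the integral runs over the tidal volume from \(V_1\) to \(V_2\), and \(f_R\) is the respiratory rate in breaths per minute.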
Daniel Serfaty: That’s fascinating. That shows, again, this notion of connectivity: those systems are not independent, the physical, the nervous system, the locomotion system, and, as we’ll explore later in our chat, the connection to the mental. If the nervous system is involved here, I can only imagine that there is a connection to the cognitive and perhaps even to the emotional.
Mark White: Absolutely.
Daniel Serfaty: Now, let’s switch gears a bit and move toward one particular technology set that both of you have been collaborating on, and I want to take our audience through that journey. We call that technology FitForce. I would ask Tim to tell us a little bit of the story: how he led a team that took a research and development idea and turned it into a fielded solution, one that is just now going through that transformation. We know that is one of the most difficult things to do, moving from the realm of research to the realm of the practical in the field. So tell us that story. How did FitForce start, even as an idea, and how is it now turning into a technology, a solution that has some legs?
Tim Clark: It has been kind of a tortuous path, but I have to say, it is really professionally and personally validating to have that kind of experience, where one has an idea, sitting in a room with colleagues and tossing things around, and sees it progress into something that could potentially impact tens to hundreds of thousands of service members’ lives, and potentially more than that. That’s an incredibly validating experience that is still in progress and quite difficult, but we’re happy to be doing it. The origin of FitForce was what’s called a Small Business Innovation Research opportunity. The research topic was really not related to fitness at all initially; it was related to medical informatics and decision support. I had just had a really bad wrist injury that required surgery, and I was actually going through occupational therapy at the time.
I got a sheet of paper that said, this is what you should do for your recovery. And I thought, surely there’s a better way to understand how I am recovering from this surgery and to have my progress tracked. How can I make decisions about my health, and how can my care team make decisions about my health? So we wrote a proposal in 2015, and it was accepted. Then, on day one, we talked to the technical point of contact, and he said very clearly, “You are now focused on something else. You’re going to look at musculoskeletal injury,” which is a very large problem within the services: many duty days are lost every year to what are largely preventable musculoskeletal injuries, overuse injuries, injuries due to bad form, those types of things. So that’s how the project originated: how can we start to track those kinds of things, and most importantly, how can we model whether an individual, or a set of individuals, is potentially at risk for musculoskeletal injury?
Daniel Serfaty: So from the get-go, even though it started from a totally different angle, you had a partner, what you call the technical point of contact, a senior scientist in the US Navy, and together you basically said, okay, this is the problem, we’ll solve it.
Tim Clark: Absolutely. I think the difference-maker in transitioning from research into operations was having access to the operational community from essentially day two. So we started talking to the Marines who were having these issues with their ability to plan physical training programs. They were doing relatively heroic things: they were building Excel spreadsheets with, this is the plan over the course of a day, over the course of several weeks, and there are very specific ways to do this, and these are trained individuals. But the challenge is that it doesn’t scale. It’s difficult to do things through Excel and emailing things around. So again, very heroic in the way that they did this, but there was an opportunity for us to support their workflows with technology and with some of these algorithms that we’re able to write and operationalize. That’s really been our goal for the past several years: to work very closely with them to understand their workflows and the relationship between physical training and musculoskeletal injury.
Daniel Serfaty: Before we talk about that, because I would like you to describe what that tool is really aiming at when it’s in use by the Marines, I have a question for Mark. Mark, you study these high-performance organizations where fitness is essential, not just to their health, but also to their mission. Don’t they know how to do that without all the new digital technology? What is it that science can bring them? At the end of the day, Marines, whether you watch the movies or have a family member in the Marines, you know that they are fit, you know that they train all the time. If you’re a firefighter, you know the importance of staying fit for your own, not just effectiveness, but even survival. What is it that we can bring them?
Mark White: That’s an excellent question, Daniel, and I’ll bring it to a crescendo when I come back to the analogy I’m going to provide you with. Over the years in practice and education, I’ve come up with this analogy: everything is an energy problem. I won’t go down the rabbit hole of the laws of physics and everything else, but as we exist in this three-dimensional space and time, we’re also part of it, our biological systems included, and that means, as scientists, we can quantify those things. And if we don’t know how to quantify those things, we can find a way to do it. In the physical fitness realm over the past, I’d say, 20 years, we have done an exponentially improved job of tracking that information, and as Tim alluded to, most of the time, as strength and conditioning professionals, we build spreadsheets.
I say, okay, here’s your volume load, and this is how much tonnage Mark lifted over this workout, and I have this accumulated process of gathering that information. The concept we apply as strength and conditioning coaches is to change that volume load and tonnage to make sure the individual doesn’t overtrain and become overstressed. How much stress do I apply? How much energy is too much energy for that person? It’s called periodization, and we’ve known about it for quite some time; Hans Selye was the forefather of the general adaptation syndrome. It’s kind of like the Goldilocks of energy: how much is too much? How much isn’t enough? And then I’ve got my sweet spot. I’ve got my warm porridge. It’s perfect. That’s what I’m looking for.
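For the technically inclined, here is a minimal sketch, in Python, of the volume-load bookkeeping Mark describes. The class, the field names, and the simple sets-times-reps-times-weight tonnage formula are illustrative assumptions for this episode, not FitForce’s actual schema or logic.

from dataclasses import dataclass

# Illustrative only: one prescribed exercise line in a workout log.
@dataclass
class SetPrescription:
    sets: int
    reps: int
    weight_lb: float

def volume_load(workout: list[SetPrescription]) -> float:
    """Total tonnage for one session: sum of sets * reps * weight."""
    return sum(s.sets * s.reps * s.weight_lb for s in workout)

session = [SetPrescription(3, 10, 135.0), SetPrescription(5, 5, 185.0)]
print(volume_load(session))  # 8675.0 pounds for this session

A periodized plan then deliberately varies that number across days and weeks, which is exactly the spreadsheet accounting Mark and Tim describe.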
But there is no magic number where I can say, okay, Mark’s beautiful sweet spot for training load is this number, and I’m going to apply it to the rest of the people on the planet, and they’re all going to adapt the same way to that energy. Now, to bring it back to the operator: don’t they know, and aren’t they fit? Yes and no. Sadly, information is commonly passed down within the confines of that occupation, and so you’re only as good, or you’re as limited, as your predecessor. And what ends up happening is that, to make the information simple, we dilute the content, and therefore, generation after generation after generation, that content just boils down to, well, this is the way it’s always been done, so that’s how I’m going to do it.
The crescendo in this, though, is that if I do this properly and I provide the training and education, just like I did for my personal training clients, but for the operators, I give them autonomy. And quite honestly, that autonomy is based on rating of perceived exertion. You can teach individuals to perceive their body through our five senses, and they then have a much greater understanding of how they, as an individual, are interacting in the environment that they’re in, with any machines. So it really becomes that classic human factors or human-systems integration model, where I’ve got man, machine, and environment. That takes time, but they can perceive it and they can understand it.
Daniel Serfaty: That’s really fascinating. What you’re pointing out here is that there is no cookie cutter solution, we’re all different individuals, we learn differently, we work differently, we perform differently and we certainly should exercise differently, and finding that sweet spot, that Goldilocks, is really about high personalization, and a lot of our technologies now go to that precision sweet spot, whether it’s a student who is learning algebra or whether it’s an athlete trying to exercise and develop more capacity for their core. So I would love to go back to that idea a little later, but first perhaps, Tim, can you describe what FitForce is for our audience?
Tim Clark: So FitForce, on the surface, is very similar to what you would see with other fitness apps available on the market. There’s the ability for individuals like Mark, or individuals who have learned from people like Mark, to create these periodized plans, and there’s some guidance that goes into how those plans are created. So there’s a limited set of individuals who are responsible for creating plans that will improve unit health or improve readiness for missions. But then there’s also the execution side: how does that plan get pushed out? There are a lot of considerations in determining the best way to push out plans at scale and whom to share them with. But on the surface, the goal is to understand whether individuals or units are completing the workouts as prescribed, whether there are any deviations from that, and really to track that over time so that we can enable some of those population-level and individual-level analytics.
So that’s where FitForce is today. We’ve spent a lot of effort and time getting FitForce to the state where it can reasonably produce those plans, but also understand how individuals are executing against them. And that’s the data foundation we are offering to the military community: once you have the data, you can do a whole host of things related to modeling against those data. But data are often the things that don’t come very easily.
Daniel Serfaty: Thanks for the description, but I wonder if you can help me and our audience here. We have an app with quite a bit of science behind it, not just in terms of data analytics, but also knowledge of physiology and exercise, et cetera. At the end of the day, I can see a couple of users. I can see the Marine, the person who is training, checking his or her data over time and saying, okay, I need to do more here, less here, et cetera. But there is also the trainer, the professional trainer who trains those Marines. And eventually, there is the Marine Corps, which wants to know, as an enterprise, how fit is the force? So which one are we serving here with this particular technology?
Tim Clark: The intent has always been to serve all three, by capturing all of the behavior and the data creation, and those primarily happen at the planning level and the execution level. So you can imagine a scenario where, as somebody who’s evaluating the fitness of the Marine Corps, you ask: which plans are leading to injury? What’s the type of knowledge that’s going into those plans? And can those plans be adapted to have better outcomes? All of that involves, again, building the plans in a digital way that’s scalable, but also being able to track the usage and the execution of those plans.
So we can answer the question: this particular plan, instantiated this time of year at this location, results in X number of musculoskeletal injuries, or pre-indicators of injury, or overtraining, or whatever the metric is we’re looking at, versus this other type of plan that might be both planned and executed in different conditions. And that’s really, I think, what the high-level analysts will want to understand: what’s the context? Why did this work over here but not over there? What are those conditions? And are those conditions adaptable and changeable?
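To make that concrete, here is a toy sketch, in Python with pandas, of the kind of plan-versus-outcome comparison Tim describes. The table and its column names are invented for illustration; nothing here is FitForce’s real data model.

import pandas as pd

# Hypothetical execution log: one row per Marine per plan cycle.
log = pd.DataFrame({
    "plan_id":  ["A", "A", "B", "B", "A", "B"],
    "location": ["Lejeune", "Lejeune", "Pendleton", "Pendleton", "Pendleton", "Lejeune"],
    "quarter":  ["Q1", "Q1", "Q1", "Q1", "Q3", "Q3"],
    "injured":  [0, 1, 0, 0, 0, 1],  # 1 = musculoskeletal injury reported
})

# Injury rate per plan, per location, per time of year:
# the "this plan, this place, this season" comparison Tim outlines.
rates = log.groupby(["plan_id", "location", "quarter"])["injured"].mean()
print(rates)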
Daniel Serfaty: So Mark, if you take off your scientist cap for a second and put your trainer’s cap back on: if you had such an app and you were training firefighters, how would you use FitForce, as a trainer with deep, deep knowledge of the physiology and the training and the exercises, to train 20 firefighters? How can it help your job?
Mark White: If you don’t mind, I’ll carry on the Marine Corps analogy that Tim had. Organizationally, you need to have a program that gives all practitioners an umbrella to practice under, an overriding philosophy of how we’re going to do this. So if I’m a strength and conditioning coach for a battalion within the Marine Corps, this software application, FitForce, allows me to track that energy I was talking about earlier and apply different aspects of strength and conditioning to individual Marines, units, and companies. I am not a healthcare provider; I care about human performance. I take people with normal physiology and biology, and I try to improve them. I want to maintain a certain fitness level for larger groups of people, but with FitForce Planner, I can also drill down to the individual level and see that Tim is a bit weak in upper-body strength, and I know that that’s going to impede his load-carriage performance.
So I can start to change his strength and conditioning plan, adding in one or two different exercises, or changing the way an exercise is performed in sets and reps. Where everybody else is doing standard muscular-endurance work, Tim, we’re going to actually work on basic strength: I’m going to take your sets and increase them by two or three, then I’m going to drop the repetitions within each set, and we’re going to add an extra 10 pounds onto what I expect you to be doing. So it allows for wonderful customization, prescription I would say, an application of that stress to [inaudible].
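As a sketch of the individual adjustment Mark just walked through, trading repetitions for added sets and load, here is a short Python example. The type, the function, and the specific numbers are hypothetical, chosen only to mirror his sets-up, reps-down, weight-up rule of thumb.

from typing import NamedTuple

class Rx(NamedTuple):
    sets: int
    reps: int
    weight_lb: float

def shift_toward_strength(rx: Rx) -> Rx:
    # Mark's example adjustment: add two sets, roughly halve the
    # reps per set (never below 3), and add 10 pounds to the load.
    return Rx(rx.sets + 2, max(3, rx.reps // 2), rx.weight_lb + 10.0)

endurance = Rx(sets=3, reps=15, weight_lb=95.0)
print(shift_toward_strength(endurance))  # Rx(sets=5, reps=7, weight_lb=105.0)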
Daniel Serfaty: I like that, because you’re making the link between fitness, strength and conditioning, and a clear injury-prevention idea: through that exercise, you’re trying not to cross over to the other side of the equation, to the medical care that comes with its own complications. But there is a third part to this, which is recovery. Say somebody has been injured, because those injuries are going to happen. How can FitForce help bring somebody on the path to recovery back to, say, normal activity? Tim, is that something that’s within the scope of the design right now?
Tim Clark: It’s a future capability, but the technical foundation would allow for that sort of activity. So you can imagine, Mark, instead of being a strength and conditioning coach, he’s somebody who works in recovery as a-
Mark White: An athletic trainer.
Tim Clark: Those words. So you can imagine that person creating a plan or program for one or more individuals, and really needing to track each user’s progress against those plans, and to have those user-state assessments, so that you understand: this individual is at this point in the prescription. Do I need to adapt that prescription because he or she is not progressing the way I thought they would? The thing I will say, and this goes back to the overarching case, is that there are a very limited number of Mark Whites and other individuals like him in the services, and this is just practical: you can’t have a one-to-one relationship between strength and conditioning coaches and all Marines or all soldiers. That’s a foundational piece of why we’re building FitForce in the first place: to allow that sort of knowledge to scale, and for the technology to allow the expert knowledge to flow through it, so that an individual can have recommended plans, whether for recovery or improvement or some other objective, built on that knowledge, but not necessarily built in a personal way by Mark or by somebody like him.
Daniel Serfaty: That’s a good point. It’s almost trying to have Mark White’s wisdom, science, and experience in a box, basically, for the trainer in the field, and therefore perhaps even changing the required ratio of trainers to trainees, because at that point you have a trainer who is augmented by this data-driven technology.
Tim Clark: One of the things we’ve learned working with the operational community is that, oftentimes, those who are leading physical training execution are not practitioners; they’re not experts. And even the folks who may have generated the plan are relatively new to the domain; they have maybe received a little bit of training on how to create a plan, but are ultimately not 20 years into the field. Those are really our target users on the planning side: to enable them with the key knowledge that allows them to customize and personalize for their units.
Daniel Serfaty: So it’s already been partially deployed, with the Marine Corps at least. I want to know how it was received.
Tim Clark: One should have thick skin when building technology, and I tell my team that a lot, and that thick skin I think has allowed us to really take the feedback that we received and to understand how our users perceive the product and the process.
Daniel Serfaty: Can you give me examples?
Tim Clark: We get asked a lot: when is the app coming out? FitForce is an offline-capable web application; we actually don’t have a native application on iOS or Android yet. The purpose of that was to build something quickly that we could rapidly adapt as a military capability. Often, when you transition something and it becomes an operational capability, it becomes more challenging from a technical perspective to make changes to it and to keep up with versions and things like that. Doing it as a web application allows us to make those rapid changes. But that is something we hear from users, just because of their familiarity with other fitness offerings out there: can I download this app?
One of the key challenges we had up front was to enable offline usage of these capabilities, and there’s a very specific reason for that: there’s not always great connectivity on military installations, in fact by design in a lot of cases. Creating an offline-capable web application became a really big challenge, and users have asked us for the features you would expect online capabilities to have, like streaming video and things like that. We’ve made a lot of those types of accommodations based on user feedback. I would say that covers a lot of the more negative feedback we’ve received. On the positive side, we’ve been told a lot about how easy this makes the planning process. We made a lot of affordances for templating out daily activities, which are referred to as play cards in Marine Corps speak. We’ve created a way to share those templates, to make them easy to copy to other days, and to rapidly update the plan. Planning time with the Excel document was several hours, and now we’re down to a couple of minutes. They need to be able to do this quickly and with confidence, and that’s what we’ve tried to enable.
Daniel Serfaty: Good to hear. How about you, Mark? Did you give some tough feedback to Tim and his team?
Mark White: When I provide feedback, I always try to keep it in context, with [inaudible] the disclaimer that, I didn’t call you ugly, and I didn’t tell you that your mom dresses you funny, so don’t take it personally. And Tim didn’t; his team took that feedback. Now, from other perspectives, like the front-end user, I never actually received any of that information, but as a professional, what I was trying to do was help FitForce compete with what currently exists commercially off the shelf. And what was really interesting about the development of this application when I came into it, around 2018, is that they had a unique set of business requirements. All the commercial off-the-shelf products being produced in the strength and conditioning field for sports performance have a bunch of Mark Whites; I’m a dime a dozen out there in the sports field.
I say that because kinesiology and strength and conditioning in academic institutions have been going on for at least 30, 40, 50 years now, under slightly different names over the decades, and there are a lot of people out there who have this knowledge. They just don’t exist in the military. And the unique case that FitForce Planner was trying to address was: we have FitForce instructors, and we have education and training, but we don’t have the organizational infrastructure for a hierarchy that sits high up in the Marine Corps, where one or two Mark Whites sit and then bleed down into these other professionals. We need a software application that will programmatically constrain the FitForce instructor who may not have that type of expertise, because my rule is, I don’t hurt people; as a strength and conditioning professional, I haven’t done my job if I hurt you.
If you get hurt out in the field in operations, or on the football field or soccer field, that’s part of the job; I can’t prevent that. But if I hurt you in the training room, that’s 100% preventable. So that, to me, is the uniqueness that FitForce provides the Marine Corps. And programmatically, as we were going through the iterations and versions of FitForce, Tim’s team did a wonderful job taking any input I would provide them and attempting to integrate it. You can imagine, Mark White’s professional opinion versus the customer stakeholder’s opinion may not always align, and so, in the end, there are features that I’m sure Tim wanted to have in there that will eventually get added, but time may not have permitted. At this point, it’s a wonderful application that, quite honestly, will continue to live on.
Daniel Serfaty: Thank you for sharing that case study, the FitForce case study, and the fact that it is still a work in progress while we’re implementing and getting feedback from the field. I would like to take us in another direction, looking at the proliferation of wearable sensors that can send signals from our bodies: heart rate, temperature, sometimes stress, both in terms of work and fitness. There are many apps like that in the market that are connected to those sensors. What’s good about them, but what are also their limitations? If you think about a future where we all wear clothes that are sensors, and those data are collected and tell a story about our physical, perhaps even emotional, state, can you give me the pluses and the minuses of that? After all, you believe in data, so the more data the better, as long as the data are being processed in the right way. But are there limitations on what we should do with these data?
Tim Clark: I think we’re at a point where we have some tremendous capabilities coming out as products, and some of the science behind them is really remarkable. It will give us the chance to evaluate user state over time in ways that we haven’t been able to before. And from a health perspective, from an epidemiological perspective, these are critical things, because if we don’t have the data, it’s difficult to understand where an individual is and where they might go. I think some of the bigger challenges are that, among so many of these wearable technologies, some are really fantastic and are being validated by military research labs and other organizations that seriously want to look at them for health and clinical cases, while others are not very good, or are easy to market but less accurate. You really have to be careful when considering these things, because if you’re producing bad data, any follow-on analytic is also not going to be of quality.
Daniel Serfaty: Tim, you’re worried that, given that the data are being collected, say we have a ring or a watch that collects heart rate and heart rate variability, which can sometimes be a predictor of stress or workload, people are going to take those signals, process them wrongly, reach the wrong conclusions, and suggest interventions that shouldn’t be made? Is that your worry?
Tim Clark: Yeah, that’s a concern at the modeling level: if the model comes to the wrong conclusions, that’s one thing. But if an individual misinterprets the results, or if the modeling is not expressed very well, that could lead an individual to change their behavior in a way that might not be beneficial to them. So, full disclosure, I’m wearing a ring sensor right now, which is quite fantastic in what it detects, and it has been validated by some military organizations, including in how it presents information to me about how I sleep and some of my activities throughout the day. And again, I think that can be a really valuable thing if you have confidence that what you’re seeing is the truth. And that really is the big challenge that we have.
Daniel Serfaty: Tell me what you’re worried about, Mark. You’re listening to that and you say, oh, we are going in a dangerous direction here: if we can measure Tim’s sleep, those data are going to be somewhere, for somebody maybe to learn from and for somebody to exploit. Are we worried about that? Should we worry about that?
Mark White: Yes. I think we should always worry about it. I think it’s a two-sided coin. In my practice as an aerospace and operational physiologist in the Air Force, I used technologies within the chamber, when it was allowed, to teach aircrew what hypoxic hypoxia felt like. You just use a little finger probe to assess O2 saturation. The interesting thing in that setting is that I was teaching them about data that could be collected on an individual. Whether we were going to do it or not in the cockpit at 26,000 feet during operations was a different question, but I could teach to the signs, the objective changes in the human being I was watching, and also explain his symptoms, the things that he was feeling. With that, I go back to this stoic aspect of experiential learning.
We have five senses, at least that we know of right now, with which we experience this life. What I’m worried about is that we’re going to use sensors to supersede the input that our biological organism was designed to use in this terrestrial environment, and we’re going to let the computer trump what really should just exist in the brain and our interpretation of it. We’re going to use it as a crutch. The analogy I would use, Daniel, is this: people ask me, “Mark, should I use a weight belt when I do my back squats?” Well, not when you’re learning to back squat, because you need to train your abdominal core muscles to stabilize the lower portion of your low back before you actually put the weight belt on. “Well, when should I eventually put the weight belt on?” There’s an art in the practice of that; it really should be at 600 or 800 pounds.
I mean, when you’re just way beyond what your low back should be doing. I’m worried that somebody would use a weight belt as a crutch at a much lower weight, and then, when they take the weight belt off, they’re going to rely on something that was not necessarily strong in its foundation, that being their body, and they’re going to fold like a cheap chair and create an injury. The same can be said about data and information. So it really supports what Tim was talking about, and the potential risks, quite honestly. And I’ve got to say, some of our adversaries are reverse-engineering our knowledge. Professional organizations that I belong to have posted positions that I could apply for as a strength and conditioning professional in adversarial countries. They’re taking our knowledge and trying to gain from it. And yes, the data that we’re collecting can also be used against us; we just don’t know how.
Daniel Serfaty: So you’re worried about these data being public, both for security purposes and for general privacy purposes. But you’re also worried about raw information without the wisdom and the experience to use it, to transform it into something prescriptive; as you were saying about the belt and the weightlifting, it can actually lead to the very injury it’s supposed to prevent. And these are certainly things that concern all of us in this business: yes, human data are great for improving performance, but it’s the data themselves, plus what we do with those data, the processing that science and technology bring on top of them, plus the experience, that really makes those data useful.
Let me ask you again, as we broaden the scope here to wellness, fitness, health, and recovery. We’re in the midst of a pandemic, and sick people in hospitals, as well as healthcare workers and educational workers, et cetera, are exposed to it. Can you imagine some of these tracking and measuring applications, to go back to our initial questions, being useful to monitor the workforce for signs of infection, for potential propagation of that infection, or for recovery from illness? Tim, are you imagining solutions there that are based on this notion of remote measurement of the human state?
Tim Clark: Absolutely. And I know there are a number of efforts looking at wearables as a way to predict COVID-19, maybe a couple of days before the symptoms present themselves, and those things will continue to proliferate, I think, as the pandemic goes on. I think the challenge will remain to make sure that we’re not making the problem worse. The belt analogy is really apt: you can have a technology that provides some information to you, or allows you to feel like you’re doing the right thing, but those really need to be validated and lab-tested, and that’s really challenging during a pandemic, where speed is of the essence but science doesn’t typically respond well to speed and those types of constraints.
I am optimistic about the scale of data collection, because ultimately a lot of the problems in epidemiology are related to not having data about where an individual or a population is at a certain point, and wearables have the opportunity to change that calculus quite a bit. But there needs to be a baseline level of confidence in both the measurements and the recommendations that might come out of them. So I’m hopeful, and I think there are going to be some good things that come out of this, but we also need to be very cautious about how quickly those come out and make sure we’re not making the problem worse.
Daniel Serfaty: The warnings articulated by Mark earlier, I can certainly see how they scale up when trying to track a pandemic, tracking the contacts a particular person has had and people’s temperatures, and reaching conclusions. I think we shouldn’t rush to those solutions only because there is time pressure; we should also understand the consequences of not doing it right, especially at that scale.
Tim Clark: Absolutely. And I do want to add just the level of trust in these capabilities needs to be very high because you can imagine a scenario where if you’re using a contact tracing app and it tells you something incorrect, your trust in that technology is now gone and these technologies require public participation and trust in the results to be effective. And we kind of have to get that right now before the trust dissipates.
Daniel Serfaty: Yes. I want to conclude by asking each one of you the same question. I’d like you to envision the success of these technologies, including FitForce but not limited to it, in which we are able to sense, at the population level, the health, the state, the fitness, the recovery, in general, the worker’s state, whether it’s a factory worker or an office worker, et cetera, in order eventually to enhance their health, to reduce their levels of stress, et cetera. And I want you to tell me, what does it look like, with a system like FitForce but beyond FitForce, in three years, given the advancement of technology? And what does it look like in 10 years? It’s easy to make predictions about the future, because nobody is going to argue with you today, but I’ll come back in three and 10 years and challenge those predictions. Just a one-minute answer about the shorter term, and then what it looks like if we push our imagination out 10 years. Who wants to start?
Tim Clark: I’ll take that one first. I think in three years, if we’re successful, we will have a viable system where individuals feel confident enough to contribute their data, and these are often very sensitive data, because they find value in it. The 10-year timeline is a little bit different, and this tracks back to what Mark was talking about earlier. If we do our job well, then maybe FitForce doesn’t need to exist as much as it does today; people understand all of these health and wellness factors, are really conscious of them in all of their daily activities, really take measures to feel good and to perform well, and this is just taught and inherited knowledge. So I don’t want to talk myself out of a job, but I think that would be fantastic.
Daniel Serfaty: I don’t think you’re in any danger of talking yourself out of a job, Tim. Mark, with your predictive cap on, what do you say?
Mark White: Altruistically, I have always envisioned, and when I say always, it was probably after the age of 30, a society, a culture in which fitness is ingrained, where fitness is the modality toward healthiness. Not everybody has to be an Olympic athlete, but the idea is that this information could create a paradigm shift, and I kid you not, a paradigm shift in how we practice healthcare in this country. I asked a woman once, “Who’s responsible for your health?” Now, this is when I owned a gym, I was doing a health history questionnaire with her, and she said, “My physician.” And I thought immediately, oh my God, no. However, contemplating that over weeks and months, my guess is most people think that.
I truly then started to glean the idea: God, if I could give you your health, I could give you independence from a medical care system that is profit oriented. Now, the medical care system shouldn’t go away; it’s needed for those acute, extreme conditions. We need those responders. But I’d love to see a society where we’re not going to our physician once a month, maybe it’s once a year at best, and our preventable diseases, those rates start to drop at an epidemiological level. And it’s not because we hate the healthcare system, it’s because we don’t rely on it. We don’t need it. We only need it under those acute conditions. And so 10 years from now, I’d love to see the society that was envisioned, where people were exercising, every minute of every day somebody was out there doing something, and you didn’t see chronic obesity and other preventable conditions that lead to complications. I believe that that’s possible, but I don’t know how to get my philosophy out there, Daniel.
Daniel Serfaty: This is a very hopeful and wise note, Mark. Thank you. I’d like to thank you, Mark White, and thank you, Tim Clark, for a wonderful interview. You’ve enlightened us not only about what is being done today to improve the population’s fitness, but also with your vision of what could be done in the future by relying on the right balance between common sense, experience, and technology.
Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast, and tweet us @mindworkspodcast, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.
Kevin Durkee: Instead of having 10 humans watching 10 other humans going into confined spaces to make sure they’re safe, can you use sensors and use the technology to enable just maybe one or two people watching those 10? And then you save eight or nine. Those are eight or nine more bodies back into the workforce.
Zach Kiehl: We actually did the math and looked through the confined space entry logs and how many hours are spent with a person watching another person work, and we got to the number of 41,000 hours per year just for one air logistics complex. And that’s just one facility within the Air Force that does confined space operations.
Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. Today, I have two guests who happen to be my friends and colleagues at Aptima. They are going to take us through the journey they undertook as engineers and scientists as they took an abstract research idea and turned it into a useful product. And then they took that product and around it formed a brand new startup company to reach new markets with their solutions. That’s pretty much the American dream. Kevin Durkee, who is a lead scientist at Aptima, where he develops innovative solutions that provide human operators with enhanced capabilities to improve their cognitive, motor, and perceptual performance.
And Zach Kiehl, a research engineer at Aptima, whose background is in the fields of biomedical engineering, medical imaging, and physiological signals. And, shh, don’t tell anyone yet, he’s the CEO of that new company. Kevin and Zach, welcome to MINDWORKS. Kevin, let’s start with you. Why don’t you tell us specifically what made you choose this domain on which we’re going to focus today, which is human performance measurement, our ability to literally get under our skin into our brain to capture measures of human performance?
Kevin Durkee: It’s been a really long journey. It started out way back in probably undergrad, where I had this altruistic tendency to want to find ways that would help people live better lives, more productive lives, safer lives. I think when I first went down that road, I looked at a couple of different areas to study, and started first as a biology major, which came full circle back to studying physiological sensor data today. Then I moved on to sociology, even looking into the social services sector and social work. But eventually, I just found those things weren’t where my talents were best suited.
So I started looking at, how do humans behave? How do humans think? And how can you use the study of humans to better improve people’s lives? Is that through training them? Is that through making things work better for them? Designing better processes, better organizations? Those are the things that really clicked. And being able to study that and build solutions around that, that’s really how I got into studying human factors and applied cognition today.
Daniel Serfaty: That’s a good answer. Thank you. So it came out of basically an altruistic impulse that you wanted to help, and you could help through social work, you could help through medicine, but you chose human factors and what you called applied cognition, which is the application of cognitive science to better human performance. Maybe Zach, you can tell us what made you choose this domain, which is human performance measurement.
Zach Kiehl: There’s a bit of a story, actually. Like a lot of seniors in high school, they make you write a career paper, where you have to talk about what you’re going to do when you’re older or what college degree you’re going to pursue. And I actually chose biomedical engineering as a junior in high school. And the path leading to that decision is very similar to Kevin’s, where he mentioned altruism. I always had an interest in math, always had an interest in science, I always had an interest in physiology and anatomy. I think that the human body is just a marvelous creation. And despite all of our technological advancements, we’re still not able to replicate a lot of the functionality that the human body’s had for thousands of years.
And I was very drawn to that, and I was trying to find a discipline or a domain that fused all those; there’s medicine, there’s all sorts of things. And then I finally realized that there’s this new domain called biomedical engineering. I’m not very old, but biomedical engineering was still a new concept. It was really a fusion of electrical engineering, mechanical engineering, anatomy, and physiology, all into one. And I really started to get interested in it. And I just had a fundamental desire to really see if we can use innovative technologies to improve the lives of humans.
And I wondered, what can we do? With all the advancements we have, how can we make our lives better as humans, for each other? That naturally drew me to biomedical engineering. And then one thing led to another, and here I am at Aptima trying to use the best technology to solve these types of problems.
Daniel Serfaty: It’s fascinating, both Kevin and Zach, because all the people I engage with in our field, no matter where they end up, whether they are at the end of their career or at the very beginning, whether they are more on the psychology side of things or on the engineering side of things, there is a little bit of an idealistic, some would say naive, but in a very good way, impulse that brought them to this field. And that’s the one that both of you are mentioning: the deep interest in the human, and the need to find a way to help, to support that. And I like very much that you got at it from very different angles, but you ended up in the same place. Talking about a place, Kevin, you are now the lead scientist of a major division at Aptima. What is it that you do at Aptima?
Kevin Durkee: Yeah. I’m really a Jack of all trades in a way. I spend a good chunk of time interfacing with customers and talking about the future, and what are the future needs? What are the needs of, in many cases, the war fighters, since that’s a large part of the customer base that we deal with? What are those future missions they need to accomplish? And what are the limitations of technology today that aren’t addressed yet? And so we’re really trying to build thought partnerships with our customers to help paint a picture, a vision of what that’s going to look like in five years, 10 years, 20 years, and start to seed ways to make that a reality.
It’s a progression of steps. We work with the technology we have today to meet some of those short-term requirements, but we also try to get on the leading edge of new technologies that are really going to help people do their jobs better and faster and more efficiently. There’s a lot that goes into that. We have to start with planting the idea, and there are obviously logistical considerations of getting funding. And once you actually successfully start up a program, being able to see that through and make sure it’s executed, and then transitioned when it’s all said and done.
Daniel Serfaty: That makes sense. So you basically are the embodiment of what we call applied research and development, in the sense that the motivation to develop a technology or a scientific approach comes actually from a need in the field, and you’re trying to forecast those needs years in advance in order to be able to make those developments now. Let me give you a follow-up question here. At the end of the day, as you say, you want to train people to perform better, or to give them technology so that they can perform better. So why is it so important to be able to, and I use the term in quotes, “measure” humans? Why that emphasis on measurement?
Kevin Durkee: Yeah. I often like to say that you can’t improve what you can’t first measure. Obviously you could, but it’s much more productive to first measure and understand and analyze not just what is happening, but why it is happening. What’s the root cause? So first, you build up this understanding through the collection of data and the generation of, not just any data, but good quality data. That’s an important tool to really unpack a solution and figure out the direction that you have to go. That’s how you get your data: through good measurement, knowing what to measure, when to measure, and how to measure it.
Daniel Serfaty: So Zach, are there limits about what we can measure or perhaps even what we should measure? At the end of the day, we can measure all kinds of things, but we have to extract meaning from it.
Zach Kiehl: That’s a great question. And honestly, what we can measure continues to change at a very rapid pace, obviously, with all the advancements in technology and the continued miniaturization of sensors, and even standoff technologies, where you’re starting to be able to sense someone’s heart rate from a camera, or even their position or their physiology through a wall. It really does beg the question of, can we do it? But then, should we do it? I feel like that’s something you have to battle, getting into legality versus ethicality. And that’s really a line that we’ve been trying to walk: with everyone carrying a smartphone nowadays, or wearing a physiological ring, people are willingly taking sensors onto their body that can really be used to compute some powerful measures of human health, safety, and performance.
And I would say that we are starting to push the envelope on what can be done. There’s a lot of great data that can be pulled from humans, but stopping to ask the questions of, should we do this, and how does it pertain to an individual’s privacy, is something that we’re constantly evaluating.
Daniel Serfaty: Yeah. I think this is going to be a big theme, this notion of the enormous amount of data that we leave in the cloud, in cyberspace, whether it’s physiological data, as you know, we can measure people’s blood pressure at a distance, or our behavior, the way we use our credit cards, our patterns of movement through GPS, et cetera. When I ask a question about what we should and shouldn’t do: we pass through a metal detector in airports that measures whether or not there is a piece of metal on us. What if that metal detector could detect that we have a deadly disease? Should we tell the person that they have such a disease? What do you think?
Zach Kiehl: I think that’s a very interesting question, and one that Congress and legislatures will be challenged to keep up with. The “can we do it” continues to advance at such a pace that it’s very difficult to legislate around. And I know that currently there’s a big challenge with that; I don’t want to name names, of course, but there are a lot of companies that are getting ridiculed for using data nefariously, or at least for personal profit, and it does beg the question. So I would generally default to how it pertains to the overall human race. If that individual, for instance, was patient zero of COVID-19, then it probably would make sense to put them in quarantine.
But if they’re carrying something that’s detected to be not a concern, or benign, then push them through. So it’s definitely a consideration that hasn’t historically been a problem, but is now starting to become one, and it’s a very interesting use case for sure.
Daniel Serfaty: Yes. I think, Kevin, again, you’ve been around long enough to know that even in the detection of things such as stress and workload, when I was a graduate student, the way to do that was usually to give a person a questionnaire and then eventually infer their workload. Today, with our instrumentation, by measuring some brain signal or some oxygen signal or some heart rate, you can actually infer workload with an incredible degree of accuracy. Are there limits to that, to what we can do versus what we shouldn’t do?
Kevin Durkee: The technology has come a long way. I think it still has a ways to go, both in terms of its reliability, but also, where’s the right place to use it and when is the right time to use it? Building off Zach’s comments, I think that’s a discussion that has to be had on a case-by-case basis: who are your end users and who are your stakeholders? Just to throw out a brief example, some of the applications we’ve been working on for health and safety monitoring have dealt with unions and mechanics and people maintaining aircraft and ships. These are union workers, and they take their rights and their data privacy very seriously, understandably so.
So when we talk about instrumenting them, even if it is for an altruistic reason like ensuring health and safety, they’ve raised that as a concern. So we have to make sure we have a good discussion early, upfront, before starting on that journey of building out that product, and we want to make sure it’s positioned for success. That includes not just building something that functions, but also something that has the buy-in and the support of the end users, so that everybody wins.
Daniel Serfaty: Thank you for sharing those stories. I think as engineers, as innovators, we increasingly have to deal with that line between innovation prowess and ethical considerations. The more data we can collect, and we are collecting, in our society, the more we’re going to be confronted with this dilemma. So Zach, can you think of an instance during your professional life over the last few years when you had an aha moment, a discovery, something that really surprised you? Can you share that with us?
Zach Kiehl: I think it’s with respect to my own career and career growth. I’ve been fortunate to be placed in positions where, there’s the saying, “If you’re the smartest person in the room, find another room.” And the joke I like to make is that I’ve never had to leave the room. Starting with an internship at a company known as Leidos, then working at the Air Force Research Laboratory for a little bit, then pursuing some graduate work at Wright State University, and then coming to Aptima, I’ve been blessed with very brilliant coworkers who are always very collaborative.
And the aha moment for me was that there are a lot of brilliant people out there, but there are not a lot of people who can understand the brilliant people and also speak to the business people. So I mentioned earlier I was a biomedical engineer, and that’s what my technical training was in, but I also recently completed an MBA. And the reason I did that is I feel like there are not enough people in the world who can really straddle the technical domain while still talking to the lawyers or the executives who maybe speak in numbers. And that was an aha moment for me in life, in that I really found passion in that, and I personally think that that’s a skillset of mine: having the ability to understand a technical concept and being able to relay it.
As Warren Buffett likes to say, “It’s not a good idea if you can’t draw it out in crayon so a five-year-old can understand it.” That’s what I’m trying to get at: take a great technical concept and sell it to people who are willing to buy into it. The aha moment for me is that I really think that I could excel in that area, and that’s currently the route that I’ve been choosing to pursue.
Daniel Serfaty: Good. Well, you’re blessed if you can straddle that line. Let’s go back to a major achievement I know about in your professional life, Kevin: you ran, in collaboration with the Air Force Research Laboratory, The HUMAN Lab, that’s H-U-M-A-N for the audience. It’s an acronym, like many things in the military, but this one is a beautiful one because you remember it; it’s a HUMAN Lab. And Kevin, you were one of the architects of that lab, which truly transformed, I think, not just your professional path, but also that of many of our colleagues, those within the company, but also outside the company. It’s one of those seminal innovation centers that eventually led to many ideas. Tell us a little bit about it. What was unique about it? What did you learn there?
Kevin Durkee: Yeah. HUMAN stands for Human Universal Measurement and Assessment Network. What really made this lab unique was, looking in hindsight, it really served as a bridge from decades of laboratory research, occurring mostly in the university setting, on physiological indicators. That is to say, if someone’s brain waves change as they’re doing a task or engaging in a certain stream of thought, or if their heart activity is changing in a certain way, or if their breathing or their movement is changing in a certain way, what kind of associations and predictive power does that have to tell you about their state of being? Specifically, since we are more applied practitioners of human performance, how does that affect their performance?
If you see a certain band power of an EEG sensor changing in response to a certain action they’re doing, say, flying a military drone, does that give you something of an indicator or a clue as to their stress level? Maybe they’re overly stressed or overworked, or maybe they’re underworked, maybe it’s a lack of engagement. There have been decades of research on this, many, many published journal papers that found some of these correlations. Being university research, pretty much they published the paper, there was a finding, and that’s the end of it. It’s there for reference, and that’s great, and there’s value to that.
But really, what The HUMAN Lab was trying to do was not only find those indicators, but be able to tie them together, integrate them into an assessment of that person that makes sense and that you can actually take action on. And so what we did was set up a simulation environment. The task environment didn’t really matter that much, but we chose to use military drones, Air Force RPAs, remotely piloted aircraft. Can you take a simulated RPA pilot, measure many types of physiological data and behavioral data in real time as they’re doing the task, and can you make those assessments of not only their performance, but their state of being, their workload?
We looked a lot at workload, and a little into other states like engagement and fatigue. If you have that measurement in real time or near real time, what can you change about their work environment or about the system behavior to better improve their ability to successfully perform the task? So we went through a couple of phases of that project, where we first built out the system to do that. We designed models to make those predictions. And then lastly, to close the loop, we tied that into a couple of different types of adaptive system behavior, such as offloading tasks through automation, offloading tasks to a teammate, or giving adaptive visual cues or auditory cues that might help them and that wouldn’t otherwise be necessary.
Daniel Serfaty: So that’s a very rich set of activities. And I know that you like to describe it with what was, in a sense, the mantra of the lab: sense, assess, augment. The sensing is basically collecting all these millions of bytes of data; the assess has to do with how you apply science to make sense of these data; and eventually, the augment is, “Okay, now that I know I’ve made the right diagnosis, can I help that performer?” It’s a pretty wide range for a single lab to cover, and I think that’s influenced a lot of our thinking. Perhaps we can end this particular segment by asking both of you briefly to tell me, what did you learn as scientists and engineers from your years in the lab? I know, Kevin, you spent significantly more years there, so I’ll start with Zach. What did you learn?
Zach Kiehl: Well, as Kevin mentioned, I learned a lot about what can be done with regards to human assessment and the type of information you can glean from putting sensors on individuals. It’s truly pretty phenomenal nowadays. And you mentioned it earlier, Daniel, the level of intimacy with which you can get to know a person by collecting these data. It opened my eyes to what could be done. And as Kevin mentioned, that really opened my eyes to the possibility of transition, going back to the original discussion about how we can help people. A lot of these studies ended in a publication, which is great for the scientific community, but I wanted more, really focusing on, how can you bridge the gap between a research study and an end user who thinks that the sensor you’re giving them is going to be used to track their bathroom time?
It’s a challenge, but that’s where I really got laser focused on applied research, on how we can take these great laboratory findings and transfer them to an operational domain. Some of them are more applicable than others. With EEG, or electroencephalography, we make a joke: there’s the saying “when pigs fly,” and we talk about “when EEG gets into a cockpit” for the Air Force or an operational domain, because some of these technologies are still very cumbersome and not really suited to an operational domain. I naturally was drawn to the technologies that are a little bit more unobtrusive and could be used for the really challenging jobs that are out there that need to be done.
Daniel Serfaty: And Kevin, if there is a single thing you remember from all these glorious years spent at the lab, think of things that you learned and that eventually you used later, what is it?
Kevin Durkee: I absolutely would have to say the biggest thing I learned, and it has stuck with me through the many follow-on projects we’ve had that used physiological sensors in particular, is just how different human beings are. The amount of, we use the term variance, human variance you see. A couple of basic examples we’ve run into: we might have person A, John Smith; when he gets stressed, he blinks hardly at all, eyes wide open, and he holds his breath, he doesn’t breathe. But then you have Joe Johnson, who is the complete opposite. Maybe he starts hyperventilating and has the tendency to shut his eyes.
And we see examples just like that all the time when you’re using these types of sensors. So what does that mean? Well, it means that there is no one-size-fits-all way to do this type of work. You have to have good, efficient ways to individualize these technologies to each person who’s using them. And that’s actually a good segue to when we start to talk about artificial intelligence and applications of that. You have to have good, efficient ways to learn the person, and what you do with that data will vary. It’s noisy data, it’s hard to work with, and it’s going to change a lot from one person to another.
Daniel Serfaty: Kevin, I think this is an excellent point, this notion that, in a sense, the data, together with the machine learning and artificial intelligence techniques that come with processing the data, enable us, maybe for the first time from a technical perspective, to account for this extraordinary richness of individual differences in human beings. I think it’s a blessing, and it’s a great challenge for the engineering community. So Kevin, now that we’ve shared with our audience the story of The HUMAN Lab and what we learned in it, how did we take this breakthrough technology and turn it into a fielded solution? What were some of the secret ingredients, maybe the enabling agents, that enabled us to transition that successfully into the field?
It’s a real story for our audience, that we can go from research and development to a fielded solution in a relatively short time. So there are some lessons I would like to unpack with follow-ups. Kevin, tell us the story.
Kevin Durkee: I think the biggest thing we did the right way was to learn all those hard lessons in a pretty safe environment. And that’s really what the laboratory offers you. It’s everything from, what are the biggest software or hardware challenges? How far does the wireless signal go? To, what’s uncomfortable for humans to wear? What don’t they mind wearing? And also, under what circumstances does the data really shine? We learned all those lessons in a safe environment where you can measure everything. From there, it just becomes really obvious.
The more you use it in that safe environment, you see, “Okay, this is a really good sensor. This is comfortable. It gets good data. And it meets the type of parameters you would need for it to be taken into the real world.” So that’s how we went at it. We found a couple of sensors that would work, so we would take that and take it to the next level. In our case, it was an aircraft mechanic crawling into a really tight confined space environment where there could be low oxygen or explosive hazards, and we’d use crawl, walk, run. So first you simulate it and have them try it out. And then once that starts to work, you move on to the next thing.
Daniel Serfaty: How did you find this application? Out of all the applications you could have found, you found this one. Did you get lucky? Did somebody enable that?
Kevin Durkee: Yeah. That was something I forgot to mention earlier about The HUMAN Lab: one of the real nice things about it was that it wasn’t just a laboratory, it was also a way to showcase different example applications. So part of the business model of The HUMAN Lab was to bring in lots of people across industry, academia, and different government organizations. It was located at Wright-Patterson Air Force Base, the research and development hub of the Air Force. So you get a lot of people doing a lot of really interesting, diverse things coming through. And The HUMAN Lab was the key stop on that tour for anybody coming in.
So just through the four or five years that we were involved with that lab work, we had lots of different parties coming in, and eventually, a big industry partner came in, and they saw the potential, and they were able to extrapolate that into what they thought was an important work environment that could use those sorts of technologies.
Daniel Serfaty: I see. And the director of the lab should get credit, Dr. Scott Galster. Still, it’s a case in which there was an incentive, in a sense an intellectual incentive, to get those technologies out of the lab and into the real world, or the field. Is that right?
Kevin Durkee: Yeah, that was absolutely the goal from the beginning. The laboratory was really a means to an end, and the Air Force Research Lab, the sponsor of that lab, has different types of research funding that it allocates. Some of it’s for very basic lab studies, and for others, the single marker of success is: does it transition outside of the lab to a war fighter or to an Air Force operational mission? In this case, that happened to be the application toward confined spaces, health and safety within the air logistics complexes. Those are big aircraft maintenance depots where a lot of hazardous work takes place, and mechanics have to crawl into areas that really aren’t designed for human entry.
And it takes a lot of manpower to keep tabs on them and make sure that safe practices are being followed. And that was really what spurred the transition of this technology out of the lab: instead of having 10 humans watching 10 other humans going into confined spaces to make sure they’re safe, can you use sensors and technology to enable just maybe one or two people to watch those 10, and then you save eight or nine? Those are eight or nine more bodies back in the workforce.
Daniel Serfaty: I’m trying to list all these ingredients, the necessary ingredients for success. So you had, primarily, a military service, in this case the Air Force, with a very prestigious lab, AFRL, the Air Force Research Lab, that wants to push those things out to the field. We have an Air Force component in the field that is willing to try. We have a large industry partner who is willing to partner with us to take some of the technology to the field. And we also have a bunch of very brave scientists and engineers who don’t mind making the jump. Zach, what new developments and technologies did we implement with this confined space monitoring system that Kevin just described, that enabled us to take something that can be anywhere, a lab with dozens and dozens of sensors, and turn it into something that can sit on a maintenance worker, that he or she can wear and go into the field? What are the key technologies that we were able to achieve there?
Zach Kiehl: What I really liked about this effort that Kevin’s describing is that we really didn’t produce a completely new sensor or a new wireless communication protocol. It was more what I call innovation by integration, in that we took some of the latest and greatest technologies out there, leveraging cloud computing, leveraging the latest wearable physiological sensors and environmental sensors, and really put them together to explore this unique use case. We found a lot of success in that. It’s something that previously just hadn’t really been explored. I made the previous anecdote about trying to get technologies out of a laboratory setting into an operational use case.
And I think the reason that doesn’t happen very much is, as you said, Daniel, there are so many ingredients to success. There are the people who sign the checks, and there are the scientists and engineers whose laboratory and graduate school training didn’t always cover how to put sensors on a guy who basically has a fur coat. We had tested our system on probably 50 people and it worked flawlessly, and then we met someone out in the wild who had a different body type with a lot of body hair, and our sensors weren’t operating as we expected.
You encounter some of these challenges, and the answer is not really an innovation or a new technology, but just being able to take your existing technology and modify it in such a way that it can really get the job done.
Daniel Serfaty: How was it received? How did the operators, whether they have body hair or not, receive the system? Can you share with us what they said once we asked them to wear a belt or a t-shirt that has all the sensors in it, and then go into those dangerous confined spaces that Kevin was describing? Kevin, what are they saying?
Kevin Durkee: Well, fortunately in our case, they had a lot of good things to say, but it goes back to what we said earlier about shaking out those bugs, so to speak, in the safe lab setting. We really did ourselves a big favor, because frankly, you can lose your end users pretty quickly in terms of their buy-in, their being able to use it and wanting to use it. You lose them pretty fast if you show them something subpar; that first impression is just critical. So we were fortunate, we were able to shake out a lot of those issues internally by bringing lots of people into the lab space.
Then what we show for that first impression with the actual end users is something we’re pretty confident in. Maybe not 100%, but we’re pretty confident it’s an 80 to 90% solution that they’re probably going to like, or most of them are going to like.
Daniel Serfaty: Can you think of one example of a piece of feedback from those professional maintenance operators or maintenance experts wearing our system that drove us to actually modify or improve the system in a certain way?
Kevin Durkee: It’s kind of funny. One area where we were off the mark a little, very early in the confined spaces work, was that we were convinced that they would want more of a wristwatch. Everybody’s wearing Apple Watches and Samsung Galaxy watches, the smartwatch is becoming very popular, and we were pretty convinced early on that it would gain steam pretty quickly just due to how commonplace they are, and the data quality is getting very good on them. But all of a sudden we start using it, and these are guys who work with their hands a lot, these are blue collar jobs.
And not only does it get a little uncomfortable when you’re turning a wrench and you’re wearing a smartwatch, but they also were going to break about five or 10 of these a week, just bumping them on the side of the metal. That was a little bit of a surprise, but fortunately, we worked with them. They actually were the ones who said, “Well, what if you put it up on the arm? Take the straps off the watch, build it into an armband. It’s still a smartwatch, you just wear it up a little bit higher on the arm.” That was their idea, we just implemented it. And that problem was solved.
And that’s really just a simple example of one thing we did: we thought something was going to work, it didn’t go quite as we envisioned, but we came to a great solution.
Daniel Serfaty: That’s a beautiful example, actually. General Patton said, “No plan survives contact with the enemy.” And I think here, no design survives contact with the user. There is a lot of wisdom out there that is born out of the experience of these people doing these jobs, and that’s gold, literally, for the design engineer. What’s on the CSM’s resume? And again, I’m using an acronym here, the Confined Space Monitoring system. What’s on the CSM’s resume so far, Zach?
Zach Kiehl: Well, thus far, I think we’ve successfully told the story of transition outside of the lab. So first and foremost, that’s something that doesn’t always happen; there are a lot of great technologies that maybe end up with a patent or a publication, and for whatever reason, there’s not an operational need or the technology is not mature enough. So I think the first real bullet point on that resume would be that we took a technology out of a lab and put it to an operational use case that had the potential to positively impact someone’s health and safety.
Also, increased compliance with OSHA regulations, and then cost savings from being able to actually use the system as a workforce multiplier. Another point on the resume would be the acquisition of funds to mature the system. While we started in a lab, we certainly didn’t have a system that could be easily transferred to this use case of industrial health and safety. So being able to attract the necessary funding, to stitch together multiple contract vehicles and contract dollars to fund the needed development, was certainly something as well.
And then we actually did the math and looked through the confined space entry logs at how many hours are spent with one person watching another person work, and we got to the number of 41,000 hours per year, just for one air logistics complex. And that’s just one facility within the Air Force that does confined space operations. As you can imagine, there are multiple air logistics complexes, there are multiple bases and facilities that do confined space operations, and that’s just the Air Force. So we’ve recently started extending it to a Navy application and to the commercial sector as well.
So really, the story is just getting started, and I think that there’s a lot of opportunity for advancement into new domains and for the system to continue to scale.
Daniel Serfaty: And so if you look at the CSM, which is just being born, as we said earlier, it takes a village, and it takes many stars to align to enable that kind of success. And one person who also deserves credit is Dr. James Christensen from the Air Force, who really took this from the applied research arena and helped us, literally as a thought leader and a team member, shepherd it into the field of application.
Kevin, if you look a little bit to the future, at this notion of tracking and measuring workers to ensure their health and their safety, maybe even beyond maintenance workers, how do you see this general idea of monitoring people? Let’s set aside for a second the ethical component, we’ll get back to it later, but what are the promises of the future here?
Kevin Durkee: I think the most promising trend is just how easily wearable this technology has become, just from the hardware standpoint. If you think of where this technology was even just five years ago, let alone 10 or 20, there was very little that existed that you could take out of a lab environment and actually do your work while wearing it. And now you’re getting it down to the size of just something on the wrist or something on the chest. There are some that sit exclusively on the forehead or the ears. So with a lot of different options, that potential is really being unlocked.
And it’s really just going to continue to get less and less obtrusive and lower cost, and there are things people don’t always think about, like the cybersecurity. What you’re able to do on just the protection of the data is getting a lot more efficient and miniaturized. In summary, it’s the ubiquity of it, it’s everywhere. But by the same token, it also can be scary in a lot of ways. If it’s a hammer, you can swing the hammer in a productive way, but it can also be used dangerously. So that gets into some of those ethical questions.
Daniel Serfaty: Okay. Well, let me ask both of you a question, and I want a yes or no answer, and I didn’t prepare you for that question. Suppose that the company you work for, Aptima, decides that it has a t-shirt or shirt with sensors in it, and asks you to wear it at work, maybe under your work shirt, so that it can measure, on a 24/7 basis, different signals indicating whether you’re engaged, whether you’re overloaded, whether you’re stressed, whether you have a high temperature and you’re sick. And based on that, it makes decisions. Would you be wearing such a shirt? Kevin, one word.
Kevin Durkee: Yes. I’m biased.
Daniel Serfaty: And Zach?
Zach Kiehl: Yes. I’ll eat my own dog food.
Daniel Serfaty: That’s pretty brave on your part, because obviously, we just started talking about what people will actually do with the data, but the fact that you’re confident that these technologies are going to advance to a point where they’re going to reflect accurately your state as a worker, that’s very interesting. What we do with the data is another story. But the fact that you feel that those things won’t send unnecessary alarms or alarm people unnecessarily, et cetera, is interesting to me. And finally, one quick question here: what do you think is the role of artificial intelligence and machine learning in those futuristic systems?
I go back to something very important you said earlier, Kevin, about the importance, for acceptance, for accuracy, for honesty, of taking into account very different responses by human beings, those inherent individual differences that have to do with whether it’s Jennifer or John wearing the sensor, and having the system infer their state based on those signals. What do you think is the role of AI and machine learning in the future for those systems? Zach, you want to take that on?
Zach Kiehl: Sure. Honestly, I think that it is a very powerful tool that needs to be used judiciously. And what I mean by that is asking the question of, can we versus should we? Technology really continues to progress; as Kevin mentioned, with the miniaturization of sensor technologies, and heck, we’re starting to even see implantables that are able to sense the biomarkers associated with certain states. This is not science fiction anymore, it’s a very real domain. And it really opens up a wealth of opportunities.
And the example I like to give is cognitive assistants, where yes, they can respond to your voice and play your favorite song or set up an appointment for you, but what if they could start to get insight into your health, or read into the data that they’re receiving: “Hey, it looks like your body temperature is elevated today, would you like me to schedule a doctor’s appointment?” And you really start to see where that could extrapolate and go further. Imagine if your doctor is an AI entity, and maybe there’s a human providing oversight or some decision support, but maybe there’s not.
And what that really looks like, and the implications associated with it, are truly exciting, but also a bit terrifying. Everyone has visions of Skynet when you start talking about this, and if it’s not done appropriately and judiciously, there are some justifiable concerns there. So personally, I’m very excited, but I also say that it needs to be used for good, obviously, and that there needs to be legislation in place to mandate that.
Daniel Serfaty: Kevin, you want to chime in on the same question?
Kevin Durkee: Absolutely. One of my historical heroes is Paul Fitts. He was active duty Air Force decades ago, but he’s really one of the big pioneers in human factors, which is my background. And he had the Fitts list. The general principle of the Fitts list was, you let humans do what they do best, and you let machines do what they do best. I think that’s a timeless principle, and it’s really quite brilliant. I find myself coming back to it constantly, especially as AI has hit pop culture and continues to be discussed. I think that’s just an important rule to keep in mind as we raise that issue of, where do we best apply AI?
What does AI do really well? Well, it can do a lot of things beyond what a machine typically would do, which is just following a static set of rules. Traditional automation just does the same thing over and over again, fairly fast, really efficiently, more than a human can. AI can do that and a little more; it’s quite like a human in the sense that it can make judgments, things that would more traditionally be human judgments. But what does AI do better than humans? Well, I think it can take a more objective view with less bias; it can really look at the data, let the data tell a story, let the data make the judgment call.
And that’s a skill I think a lot of humans don’t have. But by the same token, there are things where you’re never going to be able to take the human completely out of the loop. Is the value of a human life something you want judged by an AI? No, I wouldn’t say so. I think you’re going to want it hard coded into the AI that the value of a human life is priceless. So you have to keep going back to those sorts of heuristics as you think about the application of AI.
Daniel Serfaty: Thank you both. So we are at that stage in our adventure where we had a crazy idea, we developed it over years with the help of the Air Force in a lab, then we took those technologies together with an idea, and we went to the field with real users, and they told us that our system needed to be modified. And we did modify it, and we achieved success both with the Air Force and the Navy. But now it was time for a new, even crazier idea. It’s called SOS, Sentinel Occupational Safety. What is that? It’s a new company. And so I will ask Zach to tell us, why did we decide to take the jump from our very comfortable research, development, and engineering environment and launch a new startup called SOS? Are we crazy?
Zach Kiehl: Probably a little bit. We’re all a bunch of scientists and engineers, and we definitely have a bit of craziness in us, or we wouldn’t have pursued the degrees we did. We really started thinking about our technology and realized that there are a lot of people who still have very dangerous jobs throughout the world. About 5% of total mortalities are actually directly related to one’s occupation, so about 3 million people every year die from work-related issues. And I believe the economic impact of injuries, fatalities, and diseases from occupational exposure is two and a half to three trillion dollars a year.
So although we started with this very small idea of, can we monitor the health and safety of individuals working in aircraft fuel tanks, it started to naturally gravitate towards other people who have dangerous jobs, whether it’s nurses on the front line of COVID, police officers facing ongoing riots, or utility repairmen in the middle of a hurricane trying to restore power. There are a lot of people who have to go out day in and day out and perform these critical jobs for our nation’s infrastructure and for the health and safety of our nation.
And we really took that idea and said, “I think there might be a commercial use case here, and Aptima isn’t particularly suited to sell, sustain, and maintain a solution.” We are a bunch of scientists and engineers who, at least we think, have a lot of great ideas, but executing and sustaining those ideas is a whole other challenge. And we thought the only real opportunity for us to capitalize on that potential was to start a new organization, which is just what we did.
Daniel Serfaty: That’s a great answer. And so we identified a need there that we are continuing to explore, and we said, “Okay, let’s make a jump.” We’ll go back to that jump in a second, but in the meantime, Kevin, you’re a senior lead scientist at Aptima, and you have a lot of other projects and more junior engineers and scientists who report to you and whom you have to coach. What should be the relationship between, on the one hand, the mothership, Aptima, which is already 25 years old, producing great quality research and development in the human measurement domain, and a baby startup like SOS?
Kevin Durkee: Yeah. So I’m going to have to extend my analogy from earlier when I mentioned the Fitts list; I think it applies here too: you let Aptima do what Aptima does best, and you let Sentinel do what Sentinel does best. Aptima is excellent at what it does, which is research and development work, innovating new ideas, figuring out how to bring technologies together in a novel way and use them to solve very difficult problems. There’s a very different mindset that goes into selling a commercial product, and that’s a very different type of business.
Zach and I, and everybody on our team, are very passionate about wanting to get the Confined Space Monitoring system technology into the hands of end users and get it out there and get it used. That’s really what makes us all tick more than anything; we want to be out there solving problems and helping people in that way. But is Aptima set up to sell that product and distribute that product in the most efficient way? I would argue no. So that was a really big reason why we felt the need to set up a dedicated entity just for that, focused on getting out there into these new industries, many of which we’ve never worked in before: mining, oil and gas.
There are a lot of industries out there that can use this technology, so that’s an essential role of Sentinel: to get out there into these commercial environments, get the word out, and help sell and distribute it so that it can get a broad user base.
Daniel Serfaty: For our audience, imagine now, on the one hand, Aptima, a late-teen, early-20s company doing research and development, and little toddler Sentinel, or SOS, a platform to launch those ideas into the big world. If you see an arrow going from one to the other, from Aptima to SOS, and another arrow going from SOS to Aptima, what’s riding on those arrows? What goes from Aptima to that startup? Many people in our audience are interested in the secret to launching a successful startup. What should Aptima give to SOS? What should SOS give to Aptima?
Kevin Durkee: Aptima is always going to have a role in this product. Aptima has the very talented designers and engineers who brought this technology together. This is not a static problem, and it’s not a static technology. There are very common technology platforms out there that everybody uses; think about the Apple Watch, think about Microsoft Office. All these popular products have evolved over the years, and they’re very different now than they were two, three, four years ago. These are living products, and I think SafeGuard is very much heading down that road.
We need Aptima to continue innovating, to continue bringing the latest and greatest components into the SafeGuard solution so that it stays relevant and addresses the needs that are out there. By the same token, Sentinel is needed to push it out there and collect those requirements. So I think it’s very cyclical and symbiotic between the two entities.
Daniel Serfaty: For our audience, SafeGuard is actually the product that the little startup Sentinel, or SOS, is launching, producing, and marketing. But let’s go back to you, Zach. Now, maybe we’re going to violate the Fitts list, which by the way is sometimes called the HABA-MABA principle: humans-are-best-at and machines-are-best-at. But now, here is Zach, the biomedical engineer, the technical leader, who is also, by the way, the Chief Executive Officer, the CEO, of that new startup. So should Zach, and I’m asking you, Zach, should you be a CEO? Should you be an engineer? Can you do both? Or do we have a separation of powers, as Fitts tells us we should?
Zach Kiehl: That’s a great question, Daniel, and I guess the jury is still out, as I’m very much still trying to learn the role of a CEO. But I will say that I think the world could use more technical CEOs. Every CEO is unique in their thinking and their approach, but you see some of these revolutionary CEOs out there, like Elon Musk, who do have a technical background, and being able to take that technical background in combination with a business mindset can really push things forward, really usher in some new concepts, and provide the business mindset needed to drive that forward.
So I’m still definitely learning, and I think there’s a lot of work to be done, but I think that, at least for now, I can hopefully fulfill both of those roles. And I certainly don’t want to let my technical skills atrophy, so it will very much be a learning process for me, and one that I’m looking forward to tremendously.
Daniel Serfaty: Okay. So if Sentinel is your baby, and Sentinel has several uncles and godfathers and godmothers, but if it’s your baby, what are your hopes and fears for the new company?
Zach Kiehl: I’ll start with the fear and say that it’s that it doesn’t get used. At Aptima we have a lot of great technology that we’ve developed that sometimes doesn’t transition, and it gets set on the proverbial shelf, and it’s really a shame. Ultimately, there’s learning in all things. I think there are hundreds of great innovations coming out of the defense sector that really struggle to transition into a commercial use case, and it is a challenge for a number of reasons. And that’s really my hope: that we can do it successfully and maybe be a model for other would-be entrepreneurs.
Daniel Serfaty: Yes, I like that very much. I found out, during my career at Aptima and on the boards of other young companies that I serve on, that there is one ingredient that is more important than venture capital, that is sometimes more important than innovation or the product or the market, one ingredient. And that ingredient is a dedicated champion, or a champion team, that makes up for all the other fluctuations. I learned that myself when I started Aptima, and I keep relearning that lesson again and again.
I wish you the best, and you’re surrounded by a very good team, like Kevin and others, who are going to support you. Just as a way to add a little more about Sentinel: envision commercial success for the SafeGuard product or product line at SOS. Close your eyes, open them a year from now, and then open them again three years from now. What does it look like?
Zach Kiehl: I love to envision that future, and quite frankly, I’m an optimist, so that’s the only future I try to imagine. But I think back to an anecdote we had from a user of the system, who told a story about being stuck in a confined space and how it was the scariest five minutes of his life. He had been in a space that got closed up for the day, he was banging with a wrench on the side of an aircraft fuel cell, and nobody could hear him, and how terrible that was. And honestly, that’s really where I want to go: I know there are dangerous jobs out there, and I quoted some statistics earlier about how big an issue it is, but with the advances in technology that we’ve seen, a lot of these incidents are preventable.
That’s where I want to see Sentinel and SafeGuard moving forward: actual use in applied settings. And certainly, there will be some challenges associated with that, especially as we learn what it’s like to sustain a commercial product, but I think in one year’s time, it would be fantastic to see a number of users using the system, giving us great feedback to improve it. And we can hopefully improve their organizations, both from a health and safety standpoint and from an efficiency and cost savings standpoint. And in three years, continued expansion.
I’d love to see us move into additional markets that maybe aren’t specific to the DOD. There are a lot of potential applications for folks doing potentially dangerous jobs that I think could benefit from the use of personalized health and safety monitoring.
Daniel Serfaty: Thank you so much for having shared this adventure, this entire arc of activities that so often are born of some out-of-the-way idea in a proposal and end up as something concrete that, we all hope, through the science of human performance measurement, will eventually help people perform better, be safer, and enjoy their work more.
Thank you for listening, this is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast, and tweet us @mindworkspodcast, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely, and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during these episodes, please visit aptima.com/mindworks. Thank you.
Jean MacMillan: I think one of the key strengths of Aptima is that, from the beginning, we worked so hard at our shared value of fairness. The most capable person gets the job, and that's really built into Aptima's DNA and has been from the beginning.
Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. Today, I have two special guests. By that I mean really special guests. They are my partners in life, at work, in crime. They are the people who helped me build Aptima from the ground up. First, Meg Clancy, who not only co-founded Aptima with me, but made sure from a business perspective that Aptima was valuable and continues to be valuable to this day. And Dr. Jean MacMillan, who is our former chief scientist and now consultant with Aptima. So let’s start with Jean. Tell us a little bit about your role at Aptima over the years.
Jean MacMillan: Well, it’s a real challenge to be the chief scientist of a company like Aptima, that’s so intensely multidisciplinary and interdisciplinary. My own personal academic training is in cognitive psychology, but we have such a diverse and talented bunch of people at Aptima from many different fields.
So the way I defined the job of chief scientist was really to focus on the clarity of written communication across disciplines. In a practical sense, that played out in terms of writing competitive technical proposals. But I felt I was in a good position to tell people how they could clearly communicate their particular expertise to other people who might not share their academic training. So I would say that was really my specialty, my role at Aptima.
Daniel Serfaty: Did it evolve over the years? And in what way?
Jean MacMillan: I got a sense of how to do it better. Particularly on technical proposals, I worked very hard to get people to think, ahead of time, about what they wanted to write. I ran what we called six-story meetings when people were planning proposals, because I saw people just sit down and write proposals with no thought about what they were trying to communicate.
I would say my primary contribution at Aptima was forcing people to say clearly, ahead of time, what story they were trying to tell in the proposal, so we could figure out how to communicate it clearly to the people who were making the funding decisions about our work.
Daniel Serfaty: Well, storytelling is certainly a skill that is difficult to acquire in one shot, and we've been honing that skill at Aptima more and more, thanks in large part to your leadership in telling the scientific story as well as the business story. But you joined Aptima when we were a handful, and now we're about 250 people. How did your role really change? You were writing all the proposals at the beginning.
Jean MacMillan: Well, I went from writing all the proposals, to conducting these story meetings before other people wrote the proposals, to trying to train other people in how to run these meetings, trying to multiply my viewpoint so other people would be able to do the same thing that I had done. So it became more of a teaching role as the company got bigger. I couldn't write all the proposals. I couldn't even run all the story meetings. Other people needed to learn to do that, and I needed to teach them how to do it.
Daniel Serfaty: Thank you. Well, we'll come back to that. I have many more questions in that direction, but I'd like to introduce my other guest today. Meg Clancy was the founding CFO of Aptima and has been with the company for its entire existence, to this day. So, Meg, about a quarter century ago, I'm sorry to say that word, but it's been 25 years, you decided, "Yeah, let's build a company with Daniel." Tell us about your role over the years since those early days in the mid '90s.
Meg Clancy: Well, I like to think that I helped set up the framework, or infrastructure, for the business to be successful. At time zero, that meant me being bookkeeper, accountant, security officer; I think for a brief time I was even the ISTP manager, as crazy as that sounds. But in a small startup, one must wear many hats. As the company grew, we needed to bring in more expertise.
And I was very happy when a number of people that we had worked with over the years decided to apply to open job descriptions. So after some period of time, we had a little microcosm of the company where we had all worked. And these were really professional, experienced people, and many of them are still with us today.
Daniel Serfaty: Okay. But as the company grew, at some point you stepped out of the role of CFO and you hired a former colleague of yours, Tom, who's our current CFO. How did you see your role at that point, from the board, looking at the different aspects that, as you say, any entrepreneur needs to attend to? We all wore multiple hats, and we continued to wear multiple hats well after the founding of the company. So how did you see that role evolve for you?
Meg Clancy: Well, I think it extracted me from the day-to-day operations of accounting and finance and allowed me to get more involved with strategic decision-making: where to open offices and how to open them, and complying with new regulations that came out. So I think it lifted me out of the day-to-day enough to start contributing in the areas where we needed to grow and needed to get smart, so that we could continue to grow with all the new regulations in the human resources area and the new security regulations. I tried to get us going on those fronts; other more capable people have taken over as we were able to hire professionals with deep experience in those areas. But I think it helped me contribute at an even higher level, because I had had everybody's problems rolling into my office, including if petty cash didn't balance.
Daniel Serfaty: Well, thank you. It seems there's a lot of similarity between your two stories, because even though you come from very different fields, accounting, and cognitive psychology in the case of Jean, as entrepreneurs we went from the doing, to the teaching of how to do, to the supervising of those who teach how to do. And that's an evolution that is very healthy for us, because at the end of the day, a company, I believe, Aptima, can only grow if the new contributors make it grow.
And the best job that all of us can do, again, as builders of a company, is to coach other folks so that they can grow the company. So that evolution from doer to coach is one the three of us have in common. So Aptima was founded on the premise of: okay, we believe that we can build systems in which humans and technologies coexist in some kind of harmony, or at least in harmonious states. At the end of the day, this is still our belief.
After 25 years, we are talking about much more complex systems now, because the technology we're talking about is intelligent technology, new technology that adapts, maybe artificial intelligence, a technology that learns and changes. But the premise was there: yes, let's build a company that builds those systems, and that is made in equal parts of technologists and human scientists.
And why not? Let's jump. We have no financing, we can start from zero, let's start a company. This was crazy, wasn't it? Especially looking back. We all had very comfortable and successful careers before we started. Actually, for the audience: Meg, Jean, and I met a few years before the founding of Aptima at another company called ALPHATECH, where we were colleagues and enjoyed each other's company.
But one day, Jean was not with ALPHATECH anymore, and we decided to jump and build this company. Of all the professional choices you could have made at the time, and that's the question for both of you, what made you leave your very comfortable, successful previous jobs and join Aptima? You took a risk, a major risk. Why did you take that risk? What were your hopes? What were your fears? Jean, can we start with you?
Jean MacMillan: Of course. And I have to admit, when Daniel first approached me about the idea of starting a company, I told him he was out of his mind. I said, do you know how scary that is? Do you know how hard it is, how much work it is, how many small companies go out of business? And as I say, in retrospect, of course I was right. But ultimately, what pulled me into Aptima was Meg and Daniel.
They were two of the best people I had ever worked with, and I was also just extremely fond of them, in addition to the respect I had for them. And I just couldn't pass it by. I couldn't let the boat leave without me. I couldn't pass up the chance to work with them in growing a company. And the fears and the hopes are pretty much the same thing for me. I was very frightened.
I was terrified, in fact. I was sure that I wouldn't be able to give it enough. It felt like an incredible adult responsibility: if the company goes under, this is going to be my fault. And I was afraid that I wouldn't really be able to make the contribution that I wanted to. But I took the plunge, maybe the scariest thing I ever did.
Daniel Serfaty: We are fortunate that you took that plunge, because you shaped the company in so many ways. Even now, when you are in a kind of semi-retirement, you're still attached to Aptima, though we don't see you as much as we used to. So many folks have joined Aptima that you have yet to meet, and they are still speaking your language.
You see folks in the hall talking about six-story meetings and talking about storytelling in proposals, and I say, well, they don't know where those words came from, but they were invented by Jean. So certainly, I'm glad you made the jump. Meg, you were a CFO in your 20s at a company that was quite successful. You had a family, you had a baby, and you still decided to jump and co-found this company. Were you crazy?
Meg Clancy: Well, I have to admit, I had many friends that told me that they thought I was doing something crazy, but my late husband got on board eventually and then became very supportive of the decision. Because we both stupidly thought that it would take time to ramp up and I would only be working part-time hours. And I would be able to enjoy more time with my son. Well, it took off like a rocket actually. As Daniel said earlier, we were profitable from day one.
We didn't have to seek outside capital, but we had to stay on top of performance so that we got continued business. And then we tried to penetrate new markets, like cost-plus contracting for the government, and we needed a special accounting system for that. And then secure personnel clearances, and then we needed to get the company cleared. So I was really busy from day one. That wasn't what I was planning, but it was fun. And we always met payroll.
The one time that we were short on cash, my husband loaned me $40,000 for a week; we made payroll, and he got it back with interest. So it was fun. It was exciting. I never really worried about failing, and I'm a worrywart, so it's surprising that I of all people would not worry about that. But I knew how hard I was willing to work, and I saw how hard Daniel was working, and when Jean joined us, I just thought, "Well, it's a trifecta."
Daniel Serfaty: Thank you, Meg. I think one of the reasons many startups fail is that they cannot afford a person of your caliber at the time, someone able to run the business while somebody else is on the road drumming up new business. And usually what happens is that after a year, there is not enough energy in the system to continue. There is not enough infrastructure.
I was extremely fortunate that you set up that infrastructure in such an expert way that it sustains us to this day. It's interesting that you were not fearful about failing. That surprised me, because I know that entrepreneurs need to be a little more optimistic than the general population; otherwise they would immediately stop what they're doing and go back to their paying job. And in this particular case, you mentioned that you never doubted, in a sense. Why is that?
Meg Clancy: Well, I think you treated me in a way that made me feel like a true partner. I mean, I had equity from the start, and you gave Jean equity when she joined. You made it very clear to us that we were valuable to you. I knew that we all had the necessary skills: you in science and development, and Jean certainly in science writing and technology and proposal production. We were all playing, I think, the right roles for success.
Daniel Serfaty: Well, speaking of roles, I remember you talking about borrowing $40,000 from your late husband, Larry, to make payroll. I remember the two of us going around on the 30th of December with a shopping cart, literally, because we had a little money left in overhead that we needed to spend in the current year before turning the leaf over to January. And we were just buying staples and paper and other things like that, just to be able to balance the budget with the shopping cart.
I think if we were doing that today, we'd need a much larger shopping cart for the same story to fit. Jean, back to you. Imagine a young woman with a PhD in psychology, a couple of years maybe out of grad school, who comes to you today and says, "Hey, I have this job at a large corporation, but I'm thinking about going and joining this startup." What would your advice be?
Jean MacMillan: Well, I think I would ask that person to think very carefully about the people. People talk about companies as though they were an abstract thing. I mean, what are the people like who are running this startup? Do you know them? Do you trust them? Because that, I think, is the key question. It certainly was for me. I joined Aptima on the strength of my relationship with you and Meg.
Daniel Serfaty: No, I agree. That's a good answer. That's also very relevant to the choices many people, not just many scientists, but especially many young women, are making today in the high-tech field. It is a field that is not as welcoming to women as other fields. And therefore the question becomes: do you trust the people who are going to bring you into that startup? Are they going to give you all the possibilities to move vertically, laterally, et cetera? I think what you say is so true.
And at the end of the day, we know, especially in our business, where we are not truly manufacturing anything tangible, all we are selling is ourselves, in a sense; our relationships with our colleagues and teammates, as well as with our potential customers, are what build the business. So I wonder, and Meg, maybe you can comment on this too, whether being a woman in that environment carries particular challenges. And whether, Meg, you've seen changes from, say, 25 or 30 years ago, when you started your career, to today.
Meg Clancy: Yes, I have seen a lot. When I got out of college in 1979, the workplace was a completely different beast. But the thing I liked about ALPHATECH and Aptima was the caliber of people that this type of business attracts. I felt that as a woman, I was assessed based on my capabilities. They're quantitative people.
They look at results, and they gave me a fair shot. And we continue to do that. I think over 50% of our managers might be women now, I'm not sure. But yes, I've seen the landscape change tremendously, and I was always fortunate enough to work at companies that were ahead of the curve on that.
Jean MacMillan: Can I second that? I worked for many years for a company called Abt Associates in Cambridge, and a number of my managers, the people I worked with, the CEO, and vice presidents in the company were women. And I was just out of my first round of grad school. It didn't occur to me that that was anything unusual. I just took it for granted, because they were capable people and that's why they had those jobs. It never crossed my mind. And you know, I once went down with a team of people who were doing the best-and-final briefing for a competitive procurement.
And it turned out that everybody on the team we took down was a woman. I was leading the team, and someone from the customer asked me why our entire team was women. The question just kind of took me aback somehow, because they were our best people, and the person said, "Oh." So I came out of that environment. I got my first professional training in an environment where it was simply taken for granted that the best person got the job.
Daniel Serfaty: And I think this notion of being good at what you do, of excelling at what you do, applies especially to you as founding executives of the company. I was so proud, quite often in the early days of the company, to say, "Well, my right hand and my left hand are both women, on the business side and on the technology and science side." At the time, I was the exception.
The leadership of many other companies, in my experience, startups et cetera, had very few women, especially in leadership positions. I think that in both your roles, Meg and Jean, and your gender is certainly not the only thing that defines you, you continue to be today, either explicitly or sometimes implicitly, role models for a lot of young scientists, engineers, administrators, accountants, et cetera, who are working at Aptima.
Because they saw very early on that, "Yeah, if you're good, it really doesn't matter; that's what defines your ability to move up in the organization. And here, two out of the three founders of the company are women." I think that's one thing I'm quite proud of, but it's something that has to be sustained and managed over the years.
I found out it doesn't always happen naturally. As you said, Jean, you took it for granted: "Well, they were the best team I had; they just happened to all be women." It's not something that comes very naturally in the business environment, even in 2020.
Jean MacMillan: Well it ought to be.
Daniel Serfaty: [inaudible]
Jean MacMillan: I mean, I think one of the key strengths of Aptima is that, from the beginning, we worked so hard at our shared value of fairness. The most capable person gets the job.
Daniel Serfaty: Yeah.
Jean MacMillan: And that's really built into Aptima's DNA, and it has been from the beginning. And I think that's very important.
Daniel Serfaty: Jean, as you were driving from Cambridge to Woburn every day for a day's work at Aptima, what would you say made your life interesting? What made you want to go to work every day?
Jean MacMillan: Well, of course I was often interested in the technical content of what we were doing, but what really pulled me into Aptima, that made me go to work every day was the sense that I had my hand on the tiller. That I had some control over building a company in the way that I thought it ought to be. I wasn’t kidding when I said to Meg we built a company we wanted to work for. I had never had that experience of having control at that level of the direction of the company.
Daniel Serfaty: And the truth is that none of us had before, because that was the first company we built too.
Jean MacMillan: Yep.
Daniel Serfaty: It may be the last one, but it is the first, certainly. What about the answer to that same question, Meg? What made you drive to work? What were you looking forward to?
Meg Clancy: I don't know if interesting is right, but challenging, certainly. I always loved doing a puzzle, solving a problem, and sometimes it might be a new regulation from the state government: figuring out how we're going to do this, how we're going to adapt the systems, or trying to implement one of your ideas and designing the policy around it. I took them as little challenges or puzzles that had to be solved.
I get obsessive about stuff like that, so I have to see it through to the end. So I was driven to solve all the puzzles that you presented, that Aptima presented, to me and my team. I didn't do it alone; I mean, I had to do it with my team. But that's how I looked at it.
Jean MacMillan: I want to say that Meg is the only person I've ever known who showed excitement at the prospect of needing to dive into the federal acquisition regulations. "See? Here on page 432, paragraph 145B, it says this. This is what we have to do."
Meg Clancy: Oh, come on, I’m not that boring.
Jean MacMillan: Meg gets a kick out of that.
Daniel Serfaty: I think we need one obsessive, detail-oriented person on every team, don't we? Because I slept better at night and felt more energized on the road knowing that Meg was worrying about paragraph 1246 in the federal regulation document, because that's what enables things to happen and to actually turn into real business. There's an anecdote I remember about what made me so excited at the beginning; it was a mixture of anticipation and frantic energy, almost.
It's the fact that here I was, going to a potential customer at the Navy or at the Air Force Research Labs and convincing them of something without showing them anything concrete, just based on an idea. And there they were, ready to sign a half-a-million-dollar check based on that. The kid in me wondered about that, and I still wonder to this day: what is it that makes our customers willing to sign a check? It's pretty magical to this day.
I don't tell that to everybody at Aptima because, you know, you have to play the role of the serious CEO who has it all planned. But the truth is that it's still very magical that, at the end of the day, you think of a crazy idea, you put a little structure around it, and all you have to do is convince somebody on the other side, through a brilliant proposal like the ones you put together, Jean, or a business proposal like the ones you put together, Meg, that, "Okay. You know what? I'll sign you a check for a million dollars. How about you go and play for a couple of years and come back and tell me what you found." Isn't that amazing?
Meg Clancy: That was always the first thing in a six-story meeting for a proposal. I would say, "Tell me briefly, why should they give us this money?" And people were shocked and amazed that that was something they should think about.
Daniel Serfaty: Yes.
Meg Clancy: The look in someone's eyes the first time they are part of a team that puts a proposal together and we win it, and then they make something happen, is wonderful.
Daniel Serfaty: It is. At the end of the day, I think what's so satisfying in this business is that you start from an idea, from really nothing concrete other than your own energy, commitment, and perhaps the madness of starting a business and putting your livelihood and your ability to feed your family on the table, and good things happen. I mean, we've been very fortunate. I should say for our audience here that the three of us are very different individuals, but we are also very dear friends.
And so there was not always full agreement among the three of us. We had some memorable fights about business choices we had to make, but the fighting was compensated by the fact that we had infinite trust in each other. We knew that whatever the other was arguing for was really for the good of the team. That's what kept us together, I think, and eventually kept the company together.
And that value has propagated throughout the company. That's one of the things I'm very proud of at Aptima. People come from the outside with very deep business experience, having reviewed hundreds of businesses, and that's the first thing that strikes them at Aptima: the degree to which folks try to be supportive of each other, this notion of intangible, perhaps, but certainly observable teamness.
And I think it came from the three of us, frankly. People were able to observe three very different individuals, with very different skills, very different experience, and very different opinions, who were able to put a coherent business strategy together and execute it. I wanted to say that because that's something I am personally very proud of.
So, a little bit of free thinking here: can you think of one anecdote from your tenure at Aptima that taught you an important business lesson, a lesson where you actually learned something, either in a pleasant way or sometimes in a painful way? Think of one and share it with us, would you? Anybody can start. Jean?
Jean MacMillan: Well, I'll go first. This is an unpleasant lesson, but it was a very important one. I'm remembering a time when we brought somebody onto our team who didn't really share our values, particularly, I think, the value of fairness, the way we wanted to do business, and the way we wanted to treat other people with respect.
And it created an extremely toxic effect. This person had other advantages, other things we thought they could contribute, but those were far outweighed by the fact that they didn't operate on our values. It created a lot of tension and a lot of bad feelings and undermined our business effectiveness. It taught me that it's not about whether you like somebody; it's a matter of whether they share your values about the right way to do things.
Daniel Serfaty: But Aptima, as you said earlier, Jean, is built on the notion that the secret sauce, what our customers pay for, is a diversity of expertise, a diversity of opinion, what we call the interdisciplinary magic, in a sense. Are you saying that even in that interdisciplinary composition, that putting together of the team, there are things that we shouldn't do?
Jean MacMillan: Yeah. There have to be shared core values: the job goes to the best person, not to the person who did you a favor last week. That's what I mean by shared values. It has nothing to do with differences of opinion about the right way to do the job, or differences in training and background and perspective. Those are all good and can be worked out. But if you have a disconnect with someone about what is basically ethical and moral behavior, I don't know how you get past that.
Daniel Serfaty: Okay. Well, that's certainly a very big lesson learned. I assume you're going to tell the audience how that problem eventually got resolved.
Jean MacMillan: The person left.
Daniel Serfaty: Okay.
Jean MacMillan: That's how that one got resolved. And we did a bunch of damage control afterward, but only after we'd found out some of the really, I have to say, unethical things that person had done.
Daniel Serfaty: I think this is also perhaps for those in our audience who are thinking about building new teams or building new companies. Everything is not going to be rosy all the time; you're going to have failures as well as successes. But what's important here is not the failure itself, and there is all kinds of literature in management theory now about failing fast, failing quick, failing small, but rather what you learn from those failures, and whether you are able to take those lessons and apply them in the future.
It's very difficult, in a society and frankly a professional environment in which failure is not looked upon very well, to fail and to accept that, to learn, and then to change as a result. When we fail, we tend to want to turn the page very quickly and move on. And I think there are a lot of lessons to be learned there. I'm going to ask each one of you to go back 20 or 25 years. With the benefit of hindsight, if there is one thing you would change, what would that be? Other than kicking me out of the company from day one, that doesn't count.
Meg Clancy: Daniel, I would have been nicer to you when we had all those arguments. You would have still been wrong of course, but I would have been nicer.
Daniel Serfaty: All right, that's a good one. But think about it. It's very difficult, because life changes, and it's not like we're restarting the company today. But if there was one thing you wish we had done differently, taking more risk, less risk, making a decision another way at any given point, what would that be? Such a difficult question to answer, though.
Jean MacMillan: Well, in a perfect world, the three of us had a broad distribution of skills that covered many of the things we needed to know about. But I wish we had had somebody on our initial team who was more of an expert in software development. It's a place where I feel like, at the highest levels of the company, we didn't know as much as we could have and should have.
Daniel Serfaty: Kind of a chief technology officer in software who could basically be your counterpart on the science side. Yes?
Jean MacMillan: Yes. There's always been a divide in the company between the folks who know how to develop software and the people who are doing some of the thinking about ideas and what could be done. And I think that if we had had someone from the beginning to shape the culture and the attitudes, we could have made some better decisions.
Daniel Serfaty: That’s a good insight. Thank you.
Meg Clancy: I would have to second what Jean said. I think our growth could have been turbocharged if we had had that one missing link. I mean, you would have gotten the demos you were always pounding on the table for sooner in a project, and we would have had more robust software deliverables. Nobody was unhappy with our deliverables, that I know about or can name anyway. But yeah, I second what Jean said.
Jean MacMillan: I want to add to that. If we had had somebody at our level from the beginning, they would have had the fights with you that Meg and I were not capable of having. And I remember, Meg wasn't always that nice to you; she got very cross once or twice, on one occasion I had to go and buy M&Ms for everybody. But somebody who could say, "Daniel, that sounds so good, but we're never going to be able to do that," and you would believe them.
Daniel Serfaty: Yes.
Jean MacMillan: Somebody you trusted who really knew their stuff, who could tell you what was and wasn't feasible to expect, in a way that would have made sense to you and that you would have accepted.
Daniel Serfaty: First, I agree with you. That was certainly my own personal naivete regarding how difficult developing good, steady software is, because my training is in engineering, but not in professional-grade software. And therefore I made the jump: "Oh, I can develop a program that does that." But there is a huge gap between developing a program that performs a function and having implemented software in the field that performs its function.
My hope regarding that particular point is that the new generation of scientists coming out of grad school in, say, data science or artificial intelligence, or even, frankly, in the human sciences, comes already trained with a much deeper understanding of software, because most of the systems they've used, even in graduate school, are very software-heavy already. Software is not an add-on; software is an essential part of what they produce.
And as a result, I find that today the dialogue between the scientists and the technologists is much more harmonious than it used to be, because there is some kind of mutual understanding of each other's needs. It's not perfect yet, but I think, in a sense, we've gotten a little closer over the years. That's good hindsight. Okay, so if we start a new company, we start it with a software expert. You guys ready to jump?
Jean MacMillan: Mm-hmm (affirmative).
Daniel Serfaty: Joking aside, you each have an opportunity now to ask me a question. It's your time to interview me on the podcast. Meg, you want to start?
Meg Clancy: I can ask you any question?
Daniel Serfaty: Well, this is a PG rated program. So yes.
Meg Clancy: I would ask you what you think are the right attributes that your heir apparent, not that I know you have one right now, would have to possess, in your opinion?
Daniel Serfaty: So, we've had a person at Aptima; she supported Aptima throughout last year. Her name is [Betsy Meyer]. She's a leadership expert; she used to run the leadership institute at Harvard, and she consulted with us, and not just consulting: she trained two different groups about leadership throughout the year. She's actually one-on-one coaching a few of our executives and senior managers.
And she talks about something we all know instinctively, but there is more formal structure behind it today: the head and heart parts of leadership. The head being the strategic, intellectually fit, visionary perhaps, part of leadership. And the heart part being about what I consider to be the number one quality that is important for a leader, which is empathy. And empathy in a stricter definition of the word: not just compassion, which is very important, but also the ability to put yourself in the shoes of the other.
The other can be a person in the company, if you're talking about a future leader. The other can be a customer, a partner, a person with whom you have a conflict, a person with whom you're negotiating a contract. This ability, I think, is a key part of being a leader. Our business schools today produce leaders, future CEOs perhaps, who focus primarily on the head part.
Because the heart part is very often considered a nice-to-have or an undefined quality. But personally, I think it's very trainable; you can develop it over time. And you don't want a leader who is only all heart, either. So you're asking me who will eventually lead Aptima in the future? I would like to see a person who is able to balance head and heart, not as an arbitrary 50-50 division, but knowing when to tune the heart part up or down, as well as the head part, depending on the situation.
So, to be adaptive to the situation, and to be brave enough to be compassionate and empathetic to the people at Aptima. These days it's even more important, given COVID-19 and the pandemic, given the stress people are under with the protests and Black Lives Matter and anti-racism. This notion of really trying to understand what people are going through is going to be key for future leaders. So that's my heir. I don't know if there is an apparent one, but that's the person I would love to see take Aptima to the next level.
Meg Clancy: Thank you, Daniel.
Daniel Serfaty: Thanks. Jean, any questions for me?
Jean MacMillan: My question is not as good as Meg's question. But, Daniel, looking back on it, it's what, basically 25 years now?
Daniel Serfaty: Yeah. Almost, yeah.
Jean MacMillan: From when you and Meg first took the plunge to now, what has surprised you most? As you look back on how things have played out, what was the least predictable thing? What was the thing that surprised you the most?
Daniel Serfaty: To this day, I'm still surprised, and in a wonderful way, when I interact with more junior folks, or interns sometimes; we have plenty of interns roaming the virtual halls of Aptima this summer. Seeing how they look at an old problem, an old technology that we've been playing with, how they see it with their fresh new eyes.
I'm always surprised, because your first reaction is, "Oh, they will not understand. I need to explain." But if you shut up a bit and listen, you're surprised to see that their level of understanding may be drastically different from yours. So that constant discovery of the other continues to surprise me. But another thing that has surprised me over the years is how business goes up and down. We had a lot of very good years, and I think we should all consider ourselves blessed with them.
But sometimes we had a couple of tough years, or tough periods during which business was tight and we had to make tough decisions. And I was always surprised that some people, leaders or performers at Aptima, who performed very well during good times would literally collapse when times were not good, while other folks who were maybe in the background, not necessarily shining during good times, were actually able to shine during tough times.
That surprised me. That notion of resilience as a quality, as opposed to performance as a quality, is really important. I think, for me, over time, being resilient, in life and certainly in business, is more important, because it's what carries you through difficult periods like the one we're having now. The business is doing very well, but the stress on the staff, from all this virtualization of work, the kids at home, the economic stress, the social stress, eventually has an effect on folks who don't have all the resilience equipment to deal with it.
And so that's the second surprise: it takes different kinds of qualities to go through good times and to go through bad times. I'll give you an opportunity now, just for a few seconds, to say a couple of sentences you'd like to say about your life at Aptima. After all, today we went down memory lane, trying to remember some episodes, but also some of the lessons learned, which I hope our audience will benefit from, whether they are at Aptima or not: what it takes, sometimes, to build a successful enterprise that people look forward to going to work in. So, Meg, Jean, any last words of wisdom or parting words?
Meg Clancy: Well, I’m very grateful for the opportunity I’ve had in my career to help build Aptima into what it is today and to have really created so many professional jobs, well-paying jobs now across the country.
Daniel Serfaty: Thank you. Jean?
Jean MacMillan: It’s been a wild and wonderful ride and I am so proud to have been a part of it. You guys are the best.
Daniel Serfaty: Thank you, Jean and Meg.
Meg Clancy: Thank you.
Daniel Serfaty: Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast and tweet us @mindworkspodcast or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Inc. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.
Janet Spruill: The promise of adaptive learning covers a lot of ground. One that I think is really important to talk about, especially in these times, is the issue of learning equity and driving learning equity at scale. And what I mean by that is: there's a fair amount published about the potential of adaptive learning technologies to drive equity at scale in higher education, but I think it goes more broadly than that. I come back to the personnel that we interact with, the young Navy recruit, and how adaptive learning can help these young sailors, who come from all backgrounds and all achievement levels. So whether it's in a college or a military technical school, these underserved students can feel, perhaps, disproportionately unprepared for many aspects of the learning experiences they are coming into. I think adaptive learning can really serve almost as a great equalizer, to help them succeed and build a lot of confidence as they go through their learning career.
Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. This week, we are going to explore another topic in our quest to deepen our understanding of how humans think, learn, and work. And we're going to focus specifically on the learn part. Learning, education, and training are undergoing major transformations in front of our eyes, and not just because of the pandemic and the remote learning and education we are experiencing with it. The vast amount of data generated by schools and learning environments about the learner is now available, and when you combine that with advanced technology such as artificial intelligence, data analytics, and adaptive learning systems, it not only helps us measure and understand how humans acquire new skills and absorb new knowledge; as we will learn, it also has the power to address societal problems of equity.
In light of this revolution, I'm delighted to have my two colleagues from Aptima join me today and take us on a journey of exploration, looking backwards and forwards into the world of learning, training, and education. It is my pleasure to introduce Ms. Janet Spruill, who is Aptima's Vice President of Programs, has more than 25 years of expertise as a human performance technologist, and has developed and worked with training systems; and Dr. Krista Ratwani, who is a Senior Industrial Organizational Psychologist, an expert in leader development and training, and a Senior Director of Aptima's Learning and Training Systems division. Here we go. Welcome to MINDWORKS.
Welcome to MINDWORKS, Krista and Janet. What made you choose this particular domain, the domain of understanding how people acquire skills, how people acquire mastery, how they develop, both on the research side of the equation for Krista, and on the implementation, fielding, and execution side for Janet? Let's start with you, Krista. Why this, as opposed to all the other career choices you could have made?
Krista Ratwani: In some ways, I would say I kind of fell into it. I focused in grad school on leadership and leader development. And today, the field of learning and training and education is changing, and a lot of the qualities that we are trying to develop in all individuals, things like critical thinking, things like adaptability, are really what I think of as key to great leadership. That has been my focus, primarily from a leader perspective, but now I think that's getting broadened to this more general learning area, because we want all individuals to have those types of skills. And so for me, it was that transition from focusing on it purely from a leadership perspective to more of a general learning perspective.
Daniel Serfaty: So in fact, it’s the acquisition of critical skills and complex skills that is your research interest?
Krista Ratwani: Yes, exactly.
Daniel Serfaty: Janet, you came at it from a different angle. Tell us why this field? You could have chosen any domain you wanted.
Janet Spruill: I think, like Krista, I came at it from a little bit of a different direction. I actually started my career in the field of human-computer interface design, where I was working with organizations to redesign large corporate computer systems, doing what one of my customers at the time called designing for training avoidance: looking at how we could redesign these systems to minimize the training that was needed. Through that, I got exposed to many, many training classes that were teaching students to kind of click through a system and calling it training, because of course their systems weren't at all intuitive. So I got really hooked on applying my human-computer interface experience to learning design, which was especially important at that time, when training programs were moving online and beginning to incorporate more media and technologies.
Daniel Serfaty: The term learning design may not be familiar to all of our audience. What does it mean to design training?
Janet Spruill: So for me, I think that it covers a lot of aspects. So there’s the instructional pedagogical design to ensure that the content and the learning strategies that we’re selecting will result in the learning and performance outcomes that we expect. It also means blending the right delivery methods to be well aligned with the content that’s being presented, whether it’s a live training activity within a classroom or an immersive simulation or a job support aid or access to a short micro learning video.
Daniel Serfaty: I like that term a lot, the term learning design, because it already introduces what we're going to explore in the next hour. And that has to do with the fact that learning is not just something that happens, and training is not just something you do; you have to design it, the same way you design an engineered machine or a bridge. It has structure to it, it has maintenance to it, it has modeling associated with it. And I like that very much. We're going to explore that structural approach to the acquisition of skills and knowledge a little later. Now our audience is saying, "Wow, we have two very intelligent guests with experience. I wonder what they actually do every day." Krista, what is it that you do?
Krista Ratwani: I ask myself that every day. There's a lot of variety, but from the perspective of what I am trying to do learning-wise, it's really about developing the tools, the methods, the processes that will help people learn more effectively: get to the heart of what they need to know, allow them to enjoy the learning process, and in some ways help them learn more efficiently. We all know that organizations put a lot of money and resources into training and learning activities for their employees, and so getting that return on investment is really important. You want to look at that and help organizations achieve it, but you also want to ensure that the individual going through the learning experience is getting something out of it. And so that's really, at a high level, what I try to do.
Daniel Serfaty: And I assume I’m not divulging any secrets here by telling our audience that you are the senior director of a major division. So you have several teams working for you on different projects. And so can you give us an example of what you just described for the audience to understand what does a senior researcher in this field do for a living?
Krista Ratwani: The projects certainly vary, but one thing that we've been working on for a couple of years, which I think has many different components to it, is with the Army and the fairly new role of military advisors. These are individuals who are getting deployed to help advise counterparts in other countries. And so not only do these advisors need to have all the tactical and technical skills that you would expect any soldier to have, they need to know how to shoot, move, and communicate, but they're also expected to build trust and rapport with the individuals they are now interacting with in a different country. And so what we've been brought on to do, from the training and learning perspective, is to really help infuse those softer skills into the curriculum. And we're doing that in a couple of different ways.
One is to help the instructors assess how those skills are being developed in the students, whether the students are actually developing those skills. The complement to the assessment is how the instructors can then provide the developmental feedback that those students need, because feedback is a huge part of being able to learn really well and learn effectively. So it's developing those mechanisms that allow somebody to learn. And the other part of it is really helping the instructors set the scene so those types of skills can be developed. In some sense, it's about developing little snippets of scenarios that can be injected into a preexisting curriculum: enhancing the existing curriculum that the Army has already developed by adding scenarios and other elements that really drive home those learning objectives and the skills these advisors need to continue to develop.
Daniel Serfaty: You are not just giving advice to the US Army; you are also doing what Janet mentioned earlier, training design in a sense. You're designing the training, or at least enhancing the training they're already doing, improving through science what they are doing. But you used a term, "soft skills," earlier, and I'm not going to let you get away with it; I'll come back a little later to ask what you mean by soft skills. But Janet, I have the same question for you. You are the Vice President for Programs, and you are managing many programs and many things. What do you do when you go to the office?
Janet Spruill: Well, surely I have a great job. I work with a really incredible team, and we help organizations improve their large-scale learning programs. We do it through the smart introduction of technologies and new methods for learning. We get to work closely with a number of government thought partners who really have a passion for improving their training programs and providing the very best training they can, to help their personnel be mission-ready. Because a lot of our customers are in the Department of Defense, a lot of their missions are pretty high stakes, so we take that really seriously. For example, this year we've been working closely with one of the US Navy schoolhouses, the Center for Surface Combat Systems, which has quite a progressive approach to modernizing training. They were an early adopter of simulation-based and fully immersive training, and more recently augmented reality and virtual reality. And we're now working with them to insert some significant efficiencies and modernization into some of their training programs through adaptive learning capabilities that I know we'll get into a little bit later.
Daniel Serfaty: Both you and Krista mentioned that you're working with agencies or units within the Department of Defense, with soldiers and sailors in a sense. The implicit assumption here, and I don't want to sound overly dramatic, is that good training will save lives. You mentioned the term mission-critical, and I think it is important. A lot of the domains you are supporting, in which your teams are engineering technologies and injecting scientific principles, are domains in which lives are at stake, whether it's the military domain, the law enforcement domain, or the healthcare domain. That is an important part of why training, and a deep understanding of how learning happens, is important. Because at the end of the day, it's not just about the acquisition of skills; it's also about saving lives. Do you agree?
Krista Ratwani: I absolutely agree, Daniel. We certainly focus on saving individual lives, but many of these missions relate directly to national security and global security. As an example, we know that military organizations have focused a lot this year on pushing through and streamlining the acquisition process for new equipment and new systems, including new weapon and defense systems. If they push those out very quickly but can't also accelerate the time to get personnel trained, then we'll have equipment that can't be fielded, or is fielded without the personnel to really operate it. And that can become a real national defense issue as well as an individual safety risk.
Daniel Serfaty: That's a very good point. Thank you. I'm going to ask you to go back into your own personal memories for a second. Leave the Department of Defense; we'll get back to it in a few minutes, when we explore the technologies and the enablers of great training and great learning in more detail. All of us have learned since kindergarten, even before; we've learned in our colleges and universities; we are learning during our professional lives. Before we jump into the technologies: learning and training happen not just through technologies, or enabled by technologies, but because of great teachers and great trainers.
Have you had a teacher in your past who really changed the way you look at the world? I know I have, but you are my guests today, so I want your stories. Krista, can you share something about not only who that teacher was, but why you think that teacher changed your way of learning and your way of looking at the world? Then later, I want you to speculate about whether the skills those great teachers have could somehow be replicated or mimicked through advanced technology. So let's start with your stories first.
Krista Ratwani: The obvious answer I have to give is my mother, because she was my eighth-grade English teacher, but I will leave that aside. I've had a number of great teachers that I can remember, all the way from early education through college and grad school, that I look back on and think, wow, they really cared. And they were so passionate. One teacher in particular I remember was from high school, and she just loved what she did. It came across in every single interaction with students. In the classroom, she was the type of teacher who loved to keep you engaged. She would dress up for different days. This was an English teacher as well, and she would come in, when we were reading Macbeth, in her witch's costume or whatever it was, just to really immerse the students in the material.
And I think that's just key for me when it comes to learning and training. As I said, engaging that learner is so key, because if you can't engage them, they're not going to retain the material. I mean, there are obviously other variables at play, but in some ways the relationship is that simple. And so she really made an impact in terms of how she taught and, again, showing that passion.
Daniel Serfaty: What a great example, with words like caring and loving and immersive and engaging. What a challenge for our engineers and our technology developers to mimic that, isn't it? Janet, tell us your story.
Janet Spruill: I absolutely plus-one everything that Krista said, especially about having energy and showing your love of the topic. The example that comes to mind for me is actually two teachers, whom I will compare and contrast, and they were music teachers. When I decided to play a traditional stringed instrument called the hammered dulcimer in my early 20s, my first private lessons were a disaster, because the teacher started with hours and hours of music and chord theory. She totally lost me, and I almost gave up playing, but a friend of mine convinced me to try another teacher. I did, and the very first time I met with this new teacher, she had me playing a simple song in our first lesson. I felt successful, and I felt excited that I could do this. That experience really stayed with me and impressed on me the need to help make students successful and to engage them as early as possible in the learning process.
Daniel Serfaty: This is a great example too, because it really illustrates one of the key themes I would like to explore with you. When you compare those two teachers, both of them were teaching; the first music teacher was actually teaching solfege and music theory, but no learning was happening on the receiving end. So my first question is really about this notion: we say in one sentence "training and learning," training and learning, as if they were synonymous words, but they are not, are they? Krista, is there a difference between training and learning?
Krista Ratwani: I think so. Some people may say that it is just semantics, but at least to me, when I think of training, I think of those more formal types of experiences: I'm going to go take this training course, or I'm going to go take this class. And unfortunately, the reality is that just because you are being trained does not mean you are actually learning anything. You can sit all day and listen to somebody talk, or read a book, or whatever the mechanism is by which you are being trained, but that doesn't necessarily mean you are fundamentally learning new skills or gaining new knowledge. To me, that's what learning is: there is some actual change in what you know or what you can do. I mean, I'm not a neuroscientist, but I think there's an actual change in your brain that happens when you learn something. And I don't think that's necessarily true when you just talk about training itself as an event or as an activity.
Janet Spruill: To build on what Krista was saying, in its simplest form, I think about training as an activity and learning as an outcome. And in thinking about learning, what's most important is transfer of learning: the ability to extend what's been learned in one context to new contexts, which is critical. Training, on the other hand, is more about completion; learning, I believe, is more about transfer.
Daniel Serfaty: Oh, that’s interesting. Can you think of an example of this notion of transfer of learning, gaining a skill or knowledge in one domain, and then being able to transfer and apply it to another domain?
Janet Spruill: Absolutely. So I think that it can apply in skills-based training or in soft skills training. So think in the soft skills arena of negotiation skills. You can take a course to learn how to negotiate a business deal and really learn the fine art of that and the workflow and process associated with it. And once you have that primary structure or mental model of the process, you could go home and use that to negotiate something with your spouse.
Daniel Serfaty: I should try to use that one of these days, at my own risk. Yes, Krista, you don’t want to let that go.
Krista Ratwani: Just to jump in and add a little bit more to what Janet was saying. I believe, Janet, you’re kind of getting at levels of learning in some way, right? At least in the work that we do, I believe learning is most effective when you can get to that higher level. So there’s what I would consider actual learning: you learn principles of negotiation, and I can tell you what they are. To actually go and use those to buy a car, maybe what the course was focused on, is one type of learning. But then to go home and use them with your spouse is taking that learning to a new level, because then you are extrapolating those principles and applying them in an even different context than you originally did. So to me, in some ways that’s all about building your expertise, when you can take the thing that you learned at the base level and keep applying it and taking it to a new level.
Daniel Serfaty: So perhaps the way to look at it is in terms of learning professional skills, whether they are hard or soft. And again, we’ll come back to that definition, but it’s basically different steps in the acquisition of mastery in a particular task or particular line of work. And talking about that, since both of you have a lot of experience working with the military, though you can certainly deviate from the military: when training commanders or pilots or soldiers, or even corporate managers, 50 years ago versus 10 years ago versus today, what has changed? Is it just the technology, or did we understand something better about skill acquisition, that expertise scale, or that mastery scale? Can you help our audience understand how the science of training, and the resulting learning, has evolved?
Krista Ratwani: So you mentioned technology, Daniel. I mean, that’s clearly something new and different today versus 50 years ago. And the use of technology impacts where people learn and how people learn; especially today, in the times of COVID-19, learning is happening at home over your own computer. Technology has made that possible. And I think that that’s key and important when you talk about training, especially higher level leaders. I also think, though, that the breadth of what is being taught has changed. And I think that this has to do with more of a career-long focus, versus I’m going to train you to be good at this particular job or train you to succeed in this situation.
All organizations, military and commercial, want people who can succeed in a variety of environments doing a number of different tasks. And so giving people the skills to do that is really important. In some ways we are teaching people how to learn. So we’re teaching them to be critical thinkers. We’re teaching them to think about learning differently. And that, I think, has now been embedded in a lot of different ways, in a lot of different curricula, as opposed to being focused on that single skill.
Daniel Serfaty: So technology aside, you talked about teaching people how to learn, how to think about learning. Is that a recent discovery in the field of psychology or training science? I mean, why didn’t we teach that way 50 years ago? Is that because the science has evolved to a point that we realize it’s important?
Krista Ratwani: Those skills certainly aren’t new, and I can’t give you an exact citation here for an article, but they’ve been around. I think because of the more dynamic nature of the environments where people work these days, organizations have all sorts of new demands that must be met, due to competition, due to the use of technology throughout an organization and not just for learning, due to all sorts of reasons. I think that’s letting people see that they need to focus on these skills more, to be competitive, to build the type of workforce that they want. So I don’t think they’re new, but I think that there’s just an increased focus on them.
Janet Spruill: Right, I would agree. And I think that many years ago, training was a very institutional practice. It was important for it to be standardized and consistent, which drove it to that one size fits all training. It was also primarily in the domain of subject matter experts and what we’ll refer to as the sage on the stage. Students were in a much more passive role and technology and changing paradigms have allowed students now to take much more control of the learning process, to be more self-directed. Asynchronous technologies in E-learning have supported that. But I think it is also important to talk about technology and not just learning technology, but operational technology, enterprise technologies.
The fast pace of technology is driving the need for speed to mission. The mission or the competition or the threat is changing regularly, so the pace of change has increased significantly. Think about cyber defenders who are watching networks and network traffic, looking for infiltration by bad actors trying to bring down systems and infrastructure components. We could put them through a course last month, and the threat continues to change. We need to be able to provide training quickly, to put it in the hands of learners to direct their own learning, and we can’t take 18 months or years to push out new training.
Daniel Serfaty: That’s very interesting. So in a sense, both of you, coming from very different angles, are looking at a very fast-changing work environment, a mission environment perhaps for the military, but let’s say more generally a work environment, whether you are an expert on the manufacturing floor, a doctor in the operating room, or a soldier in the field. Everything is changing much faster. And because of that, we cannot train people for every single instance, but we can train them perhaps to adapt and change; that very skill is important. And we use technologies as well as this new concept of learning how to learn, in a sense. And I want to go back to a word that both of you used, which is soft skills. All right, Dr. Ratwani, what is a soft skill?
Krista Ratwani: Soft skills to me are those things that, to use another word you’re going to ask me to define, are more intangible in nature. And what I mean by that is they’re much more subjective. They’re harder to measure and especially harder to quantify. So take critical thinking, which we’ve mentioned a couple of times. Do I have a certain amount of critical thinking? Compare that to something like my marksmanship score: I have a very clearly defined number for how many times I hit the target, and it has a defined procedure. This is how I train it, this is how I assess it, this is how I say how good you are at marksmanship. Versus something like critical thinking, it’s much, much harder to do any of those things that I just said. And then I also think of soft skills as being much more important when it comes to that human dynamic.
So how important is marksmanship when you are trying to interact with somebody going back to my example of the military advisers. In that case, what we’re really looking for is for people to be open-minded and to have empathy and to just be able to talk one-on-one with people and have a conversation. Again, all of those things are much harder to quantify, and I would argue to formally train as well. You have to be much more creative about how you’re going to train someone to develop empathy than you are about how to get them to fire their weapon effectively.
Daniel Serfaty: Especially if they have to do both at the same time. So empathy is a soft skill, I understand. Open-mindedness that’s another one you just mentioned. Well, you’re a leadership expert, is leadership a soft skill?
Krista Ratwani: I would argue that leadership is made up of many soft skills to include all those ones that we just mentioned, to include things maybe even like communication. To me, there’s a number of skills that make up leadership. Leadership is another complex construct by itself.
Daniel Serfaty: And Janet, do you agree with everything Krista says about the distinction between soft skills and hard skills?
Janet Spruill: I always agree with everything Krista says. I particularly liked that she used the term the human dynamic. I think that really helps to sum it up. So soft skills really are the non-technical skills, the way I think about it: how you interact with people, with your colleagues, how you solve problems, how you approach your work, how you actively listen, all of those things in addition to what Krista said.
Daniel Serfaty: So Janet, you are both an expert in training and yourself a very experienced manager in corporations. What do you think is the number one soft skill for working in an enterprise? From what I understand from both you and Krista, there is almost an infinite number of what we call soft skills that we can think about, ranging from empathy to, say, honesty.
Janet Spruill: That’s it.
Daniel Serfaty: So which one is the one that you like?
Janet Spruill: So I like empathy. Thank you for bringing that up. I think that if you have strong empathy, you can succeed both inside your organization and outside with partners and customers because you really can see the world, see the problem, see the situation from their viewpoint, and that can help you to find a bridge, right? So you really can become a thought partner with your colleagues, with your customers.
Daniel Serfaty: And you need empathy for that. Krista, I’m going to put you on the spot too: tell me your favorite soft skill. And you cannot say leadership, because you just taught me it’s not a soft skill, it’s a complex construct. I like how academics speak about things that are complicated; they use words like complex and construct. But also, you cannot say empathy, Janet already took that.
Janet Spruill: Perfect.
Krista Ratwani: That’s okay, I can work with those. So I actually don’t know if there’s one word to describe this, but to me it’s finding that balance between being a good team member and being collaborative, but also taking initiative: that mix of being a leader but also being a team player. So you have to be able to work with others, and you have to be able to collaborate and get along well. If you can’t work in a team, that’s going to be pretty dangerous in a lot of situations. But you also can’t only be able to work in a team; you have to be willing to take things on yourself and volunteer to lead things. So it really is that balance, because too much of either of those is not necessarily going to lead to you being able to work well in an organization.
Daniel Serfaty: That’s a very interesting, subtle skill, that balance between individuality and team orientation. So let’s talk about that for a second, because a lot of our questions and stories have to do with learning by individuals, but we all operate in teams. And sometimes not in a single team; we belong to several teams that require more or less communication and coordination and leadership, and many of the skills and competencies that you mentioned. What do we do about team training? What are the big ideas in that field?
Krista Ratwani: So generally from a research perspective, team training is something that’s been explored for many, many years. And a lot of it focuses on this idea of training team processes. So how do we get individuals to work together as a team? The idea that the whole is greater than the sum of the parts. So it’s not just, let’s add up what you bring to the team Daniel compared with Janet’s skills and combined with mine and we get what we bring as a team. We really want to create that synergy. And that comes from training teams on those team process skills. So things like coordination and communication, can we make sure that we know enough about each other’s roles that we can back each other up so I can jump in when you get overloaded.
And so we don’t really miss a beat. We keep going. And that’s where a lot of the team training research has focused. In terms of what’s new, there’s been a lot of emphasis on small teams, especially in the military lately. They’re looking to develop smaller teams that can function a little bit more autonomously, without all of the resources coming from above; kind of a contained unit that can work on its own.
Daniel Serfaty: Like special forces, for example, or commandos.
Krista Ratwani: I think that’s a great example. So kind of these smaller teams being able to work together and make decisions on their own versus those decisions coming from above and then the team executes. There’s also a lot to do with what Janet talked about, getting teams to function well in these mission-critical constantly changing types of environments. We’ve seen a lot of that even outside of the military, things like with NASA, how do we take a bunch of people and train them so that they can go live in this isolated environment on Mars together for years without totally falling apart. So helping individuals come together as teams in those extreme environments is also something new that researchers are starting to look at and that’s happening in the applied world.
Daniel Serfaty: How interesting. Janet, staying on that notion of team training, can you give our audience some examples of things that you have worked on lately about training of teams or training of team skills?
Janet Spruill: Well, the idea of team competencies and training team skills has really gotten a lot of research attention. We’re finding that programs related to customers and operational environments are much more interested today in understanding how to train and prepare teams. And that preparation certainly is from, I guess, what we would consider a pure training standpoint: how can we train them as competent individuals and then bring them together, each with their unique skillset and their specific role, so that they can operate as a high-performing team. Customers are also interested, though, in how we can use that same data to help inform team composition and team selection. So that’s a real twist, where we’re able to reach back into the research to help inform that. So a couple of examples. One is in the Navy, within the Center for Surface Combat Systems: they train on the Aegis Combat System, a missile defense environment that sits on a Navy destroyer.
And within that, there is essentially the core information center, called the Aegis Combat Information Center, where a team of personnel, each with distinct roles, watches a number of monitors showing information about what’s going on in the environment. And the commander can make decisions based on that about actions that may need to be taken, either offensive or defensive. So as you can imagine, it’s a pretty high-tempo, potentially high-stress, fast-moving, high-stakes kind of environment. We’ve been supporting them to look at not only the individual training, but also what are some of the team tasks and team competencies that need to be supported.
Daniel Serfaty: In a sense, it’s not enough that each one of these people in the Combat Information Center is an expert, like a radar expert or an electronics expert; the notion of them working in harmony with each other is important for the Navy.
Janet Spruill: That’s right. And really, when you think across the military and even beyond it, most operations are carried out by teams, whether it’s elite Army ground infantry teams, the special forces that Krista talked about, the Combat Information Center, a team of cyber defenders monitoring critical infrastructure, an emergency response team, or a hospital surgical team. So much work gets carried out by teams. So it’s really an important area of research and application.
Daniel Serfaty: Thank you for all of these examples. I think it makes this pretty complex scientific notion of learning much more tangible, because then we understand how complex it can be to work as a commando or in a Combat Information Center: it’s not just about how well you do your job, it’s also how well you operate as a team. And when we talk about all these very complex environments, complex both in terms of the expertise needed to operate and succeed in them and because of the external circumstances of fast tempo, a lot of uncertainty, a lot of risk, I cannot help but think that in order to improve a team, I need to know what to measure.
And I need to know how the thing that I measure will become better or worse as a result of the training. So I know that both of you are strong advocates of that, but tell us again, what is the role of measuring performance in training? And how do we use those data to improve the training, to continue the training? Why is it so important?
Janet Spruill: I think the role of measuring performance is critically important to allow us to compare a student’s current level of understanding and performance against what it needs to be, because otherwise we’re just making assumptions and we’re not able to support that student, which is really a downfall on our part. But more importantly, we can’t effectively equip that student coming out of that training to perform in their job role. So by first understanding what it is we need to measure and how those measures help inform what readiness and proficiency look like, then we can implement that and we can in real time, or at any point in the process, take kind of a track line of where that student is currently performing.
Daniel Serfaty: That’s great, Janet. I cannot help but make the connection between what you just told us and the differentiation that both you and Krista made at the beginning of our discussion between training and learning. In a sense, measurement is telling us if learning is taking place, isn’t that right? Otherwise we would be working open loop. All we could do is deliver a curriculum, teach a class, or expose someone to a game, but we wouldn’t know whether the needle moved on the learning side.
Janet Spruill: That’s right. And the students could come through training saying, “That was fun,” but it doesn’t result in any measurable improvements and ultimately then poor transfer to their workplace environment.
Krista Ratwani: So if I can just jump in and add to Janet’s comment about the importance of measuring performance in relation to learning, I completely agree with everything that she said but to add a slight twist on it, I also believe that when you measure performance in relation to thinking about learning and identifying those gaps that that individual has as Janet pointed out, you can also use it to potentially identify where there are gaps in the training itself. So it may not be that somebody just didn’t pick up the skill that they were supposed to through that learning experience, it may be that the training or learning experience itself was just deficient and failed to cover an important aspect of that performance domain. And so as you’re measuring the performance, that gives you a chance to essentially revisit what is critical in that area. And then maybe you can go back and add something to that training experience at a later date to make sure that it’s fully comprehensive.
Daniel Serfaty: So in a sense, you’re saying that if learning is conditioned upon the proper feedback, that feedback can be used by the learner, but it can also be used by the teacher or the technology that is augmenting the teacher. So Krista, is there a science behind that? At the end of the day, we are not just proposing to measure the final score; there are other things that we need to measure. Is science helping us understand how to decompose those measurements so that we have a finer-grained understanding of what’s going on in the mind of the learner?
Krista Ratwani: I think so. And it comes down to measuring not just that final score, did they complete that task, but what happened along the way? What did that process look like? Can we pinpoint where there were failures? So the end outcome may be okay, you may have achieved whatever that goal was. You did the task, whatever the task was that you were supposed to do, but did you do it efficiently? Did you forget to do a part and you happened to stumble upon the final answer? And so being able to decompose the task as somebody is executing it in terms of performance is really important to being able to identify where those gaps are in knowledge, because if you are purely looking at it from that completion perspective, it may look okay, but you’re not exactly clear how that person got to that answer.
It’s a little bit like asking my eight-year-old to show her work when she does her math homework. Maybe she got that three-digit subtraction problem correct, but it was a little bit of luck; she didn’t actually regroup all the numbers correctly. But when she shows that she knew how to regroup, and you can see the work there, then that really lets us know that she understands the process behind it.
Daniel Serfaty: In the ’80s and ’90s, there was a lot of development around a technology called intelligent tutoring systems. Intelligent tutoring systems were basically a way to acknowledge that each student is different and learns at his or her own pace. And therefore, if we could use technology that was intelligent and could adapt itself to the student, we might solve the problem of very large classrooms or cookie-cutter training that works for everyone the same way, et cetera. So today we call this collection of technologies, basically the second or third generation of intelligent tutoring systems, adaptive training. Can you each perhaps share with our audience your own definition of adaptive training? What is it, and why do we believe it’s a game-changing technology, Janet?
Janet Spruill: So I like to think of it in fairly simple terms. To me, adaptive training is a data-driven approach to learning, which means data is the keyword there. But the beauty of adaptive learning is that smart use of those data allows us to do some pretty powerful things, like continuously modifying the training content based on both the behavior and the needs of each learner. So the behavior could be that I have stalled and I’m simply stuck. And the needs could be that I come in as a more experienced learner. Perhaps I am a military service member who has had a deployment, or maybe I am a very proficient healthcare professional, but now I need to learn the very specific skill of intubating a patient, which might be new to me.
So it’s able to modify based on both the behavior and the needs, and that could look really different for different students. But generally, when we talk about adapting, we can adapt the sequence of the content, and we can dial the difficulty up or down based on how that student enters the training or how we measure their ability to proceed through it. But we can also adapt the timing and the type of feedback to help make that student more successful.
Daniel Serfaty: Thank you, Janet. Keep those in mind, because I want to come back to almost every one of those dimensions in a second. Krista, from the scientist-researcher view of the world, what is adaptive training?
Krista Ratwani: The way that I like to sum up adaptive training is that we are personalizing that learning experience to give the learner what they need when they need it and in the manner that best reflects how they’re going to effectively absorb it. So the what goes back to the content that Janet talked about, the when refers to the timing, which is critical, you want to give somebody content before the skill that they learned decays but you don’t want to give it to them too early when it’s too challenging for them. And then the how really comes down to the method and that could have to do with just pure logistics of it. I’m only in a place where I can have my phone with me. So I need to be able to get it in a mobile environment or it could have to do with the learning strategy. I don’t absorb information well by reading, but I can certainly get it when I see a good visual of it. So being able to be very personalized in that approach so that the learner really can just get what they need from the content.
Daniel Serfaty: It’s amazing, as both of you give your own take on adaptive training, personalized training in a sense, I’m thinking of two things. First, in a sense, we are reinventing what those great teachers of the past already knew. Your mother teaching eighth grade probably knew that each student learns English differently, has different abilities, different preferences, different levels of attention depending on the topic, and she was going from table to table, from desk to desk, to adapt her teaching, in a sense, to each one of those students. That’s difficult to do if you have 30 students in a class. So what you’re saying is that we’re trying to replicate this extreme personalization and the deep expertise that great teachers have in understanding, sometimes intuitively, sometimes by experience, what each student needs in order to progress optimally. The other analogy I’m thinking about is that many of the systems we use these days are in fact personalized, because, as Janet says, it depends on the data.
We cannot have high levels of personalization without data, can we? Netflix knows which movie I’m going to like next. How do they know that? Not just because they take a snapshot of the movie I’m currently watching; they have my whole history. They’ve built, in a sense, a model of the user. So we see entire industries being redirected toward an N equals one: the teaching is not for the class, the Netflix experience is not for the population, it’s for a single individual. So what, then, are the necessary ingredients? You started to list them. If data is the key ingredient, of course there are algorithms that can process those data and then redirect us, as you said, to a particular path of learning. What is necessary in terms of data to close that loop? Janet, I know you’re a big fan of data as a major enabler here.
Janet Spruill: It most certainly is. And I think, to hone in on a part of your question, what are the ingredients for adapting and personalizing training to succeed? I hone in on that word, succeed. Part of the success, or an enabling part of the success, in adaptive learning is all about the ability to gain individual student insights, just like that teacher does who walks from desk to desk, and then to act on those. If we don’t have individual student insights, we can’t act on anything. And so that requires the availability of and access to data as one of the most critical ingredients.
Krista Ratwani: I would add to that: the more that you know about that individual learner, the better off you’re going to be. And that’s things like what that learner knows, of course, what they don’t know, and, as I alluded to earlier, their learning preferences. But then I would say the other critical piece of data that you need is about the learning experience or learning content itself. What is that content trying to teach? What’s the learning objective? How hard is it? How does it actually go about teaching it? Coming back to modality, certain modalities are more or less effective depending upon the situation. And so it’s really knowing about the person you’re trying to get to learn or train, as well as what that training material is, and thinking about the training environment itself.
Daniel Serfaty: What you’re both saying, which I think is essential, is that it’s not just about the moment of the teacher’s intervention or the intelligent system’s intervention. It’s not just the instant data that matter, what happened in the previous few minutes; it’s the whole history, in fact, of that student. So we can achieve incredible levels of precision if we can not only collect those data, but also know what to do with them, process them in ways we can act on. Is that the real promise of adaptive learning, or does it go beyond that, Janet?
Janet Spruill: The promise of adaptive learning covers a lot of ground. I think one aspect that’s really important to talk about, especially in these times, is the issue of learning equity and driving learning equity at scale. And what I mean by that is there’s a fair amount published about the potential of adaptive learning technologies to drive equity at scale in higher education. But I think it goes more broadly than that. I come back to the personnel that we interact with, the young Navy recruits, and how adaptive learning can help these young sailors, who come from all backgrounds and all achievement levels. Whether it’s in a college or a military technical school, underserved students can feel disproportionately unprepared for many aspects of their learning experiences coming into them. I think adaptive learning can really serve almost as a great equalizer, to help them succeed and build a lot of confidence as they go through their learning career.
Daniel Serfaty: That is fascinating, Janet. I hadn’t thought about that, because what you’re saying is that precisely by hyper-focusing on that individual, we might be able to achieve a societal goal of equity.
Janet Spruill: I think so. I mean, we all want to be understood and in a learning context where we have an instructor, a teacher, we want that teacher to get us and to meet us where we are. Using technology, we can do that in a very empowering and self-directed way.
Krista Ratwani: I guess I’ll back that up maybe with some data. So I think that’s a really great insight, Janet. And in some work that we’ve done at Aptima with our own adaptive learning technology, we have some data recently from an internal research project where we basically looked at who does adaptive learning help. And when you think about the traditional classroom approach to training, it’s kind of designed to help that student in the middle, that average student, right? Because they have to teach to the class, not to the individual. And so you have to set it at a level that’s appropriate for that average student. And you assume that most students are going to be able to grasp it. And what our data show us is that in the experiment that we did, the adaptive learning conditions were beneficial for those students on either end of the spectrum.
So those students who came in with a lot of knowledge and those students who came in with a little knowledge. And Janet, I think in your comments, you’re speaking to maybe those students who didn’t quite get what they needed, they are kind of on the lower end of that knowledge or skill level. And so the data that we have illustrate that that’s exactly the group that we can hope to help, and maybe the middle students are going to be fine with that traditional approach and don’t necessarily need that extra help.
Daniel Serfaty: That is a fascinating way to look at how the advanced algorithms and artificial intelligence that go into those adaptive training systems, benefiting from this extraordinarily large reservoir of human data, are actually able to do something that has always been very difficult in schools and enterprises: to take care of both sides of the normal Gaussian curve. That is an extraordinary insight. So Krista and Janet, thank you so much for spending the last hour with us. You really taught me and my audience quite a bit about learning, really opening our horizons about that very unique human endeavor, which is the acquisition of new knowledge and new skills, and which perhaps now generalizes to societies and machines and everything else.
Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast, and tweet us @mindworkspodcast or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit Aptima.com/mindworks. Thank you.
Daniel Serfaty: Welcome to the MINDWORKS Podcast. This is your host, Daniel Serfaty. This week we are starting a series of five podcasts focused on the magic of teams. As we talked about in last week’s episode on learning and training, much of our work today gets carried out by teams. Teams of humans, teams of humans and machines, distributed teams, virtual teams. Almost all of us operate in teams. In fact, one could say that teams are the foundational building blocks of any human society. And I couldn’t have dreamt of a better way to kick off this series than to talk to my two guests today because they can give us both a retrospective and a prospective view on the science of teams, what we know about teams and to frame what the rest of what this podcast series on teams is going to be about.
Both of them are true thought leaders in the field of team science. It is my hope that what we talk about today will give you the incentive to go on the web, Google them, and learn more about the work that they are doing. So let me introduce them briefly. Dr. Nancy Cooke is a professor of Human Systems Engineering whose research centers on how humans and technology work together in teams. She is the director of Arizona State University’s Center for Human, AI and Robot Teams, which, in addition to researching human and non-human interaction, is addressing the potential legal and ethical issues expected to arise as artificial intelligence and robots are assigned increased autonomy. Dr. Cooke is also a past president of the Human Factors and Ergonomics Society and the recent past chair of the Board on Human-Systems Integration at the National Academies of Sciences, Engineering, and Medicine.
My other guest is Dr. Stephen Fiore, who is a professor at the University of Central Florida in the Cognitive Sciences Program in the Department of Philosophy and the School of Modeling, Simulation and Training, and he is the director of the Cognitive Sciences Laboratory at UCF. His primary areas of research are the interdisciplinary study of complex collaborative cognition and the understanding of how humans interact socially with each other and with technology. Dr. Fiore is the president-elect of the International Network for the Science of Team Science. He has also authored over 200 scholarly publications in the areas of learning, memory, and problem solving in both individuals and groups.
As you just heard, my two guests today are not only very well versed in the research and science of teams, but they also come from very multidisciplinary backgrounds, ranging from philosophy to psychology to engineering and everything in between. And perhaps this is our first insight into this complex domain of teams: we basically need many arrows in our quiver to take advantage of the opportunities that working in teams presents to us.
Welcome, Dr. Nancy Cooke and Dr. Stephen Fiore. So first, a word, if you don’t mind, to introduce yourselves, but also to tell us what made you choose this particular domain, teams, as a field of endeavor. Nancy, would you like to start?
Nancy Cooke: That’s a really good question. I started out as a cognitive psychologist studying individual knowledge elicitation: how do we find out what’s the right stuff behind the expertise of an individual? And about, I guess it was 25 years ago now, Eduardo Salas, who’s a leader in the field, came to me and said, hey, we need to bring some cognitive psychology into the study of teams, given the Vincennes incident, a disaster that involved lots of team decision making under stress. The disaster was that the USS Vincennes by mistake shot down an Iranian Airbus, killing all the passengers in it. They realized that a lot of the team issues had to do with decision-making under stress, for instance. And so he got me interested in trying to measure what team cognition is, and I’ve been pretty much doing that ever since.
Daniel Serfaty: Well, I’m glad you did, because your contributions to the field certainly have been game changing. Steve, how about you? Of all the things you could have chosen to focus on, why focus on collaboration and teams?
Stephen Fiore: For me it started in graduate school. I was doing research on the influence of verbalization on cognition. That is what happens when you speak aloud about various cognitive processes. And I happened to be taking a seminar with Jim Voss on cognition, decision making, problem solving, and we had to choose a paper topic. And one of the notional topics was group problem solving. I was studying individual problem solving at the time. And part of what we were studying is what happens when people verbalize their problem solving processes and how that may interfere with things like the moment of insight.
So I said, “Huh, well, when you’re in a group, you’re always talking. It might be the case that groups actually interfere and hinder your ability to collaborate and solve problems.” So I made that my paper topic, ended up digging into the literature. This was in the early ’90s before team cognition started taking off. I was reading a lot of old literature that indirectly talked about cognition in groups and teams, even before it specifically became its own area of inquiry and ended up doing my dissertation on individual versus group problem solving and how working in a group actually interferes and hinders the kind of collaboration process. I’ve spent the rest of my career trying to fix the problems that can occur when people try to work together.
Daniel Serfaty: So I’ll follow up with a question for you, Steve: why is it important to understand and study how teams, groups, and collectives work? Is there something magical there that we need to find out? Is there something qualitatively different from just studying individual behavior, individual decision-making?
Stephen Fiore: I definitely think there is something that’s meaningfully different but also practically important. And I believe that there’s no way we’re going to successfully solve the world’s problems through reliance on individuals. There is way too much knowledge produced for any lone individual to understand, comprehend, and apply by themselves. So I think that collaborative cognition, that is, the integration of knowledge held by multiple people, is really the only way we’re going to address the kinds of problems that the world is facing. So a significant part of my research is this notion of knowledge integration: how do you bring together people who know different parts of a problem and produce solutions from that complementary knowledge? How do they integrate and produce something that they could not have come up with on their own, which is essentially why we bring teams together in the first place? But my interest is specifically in collaborative problem solving.
Daniel Serfaty: Nancy, can you add to that? All these years studying teams, is there something still magical, still mysterious about how teams perform?
Nancy Cooke: In the field we distinguish between teams and groups. Teams are a special kind of group with each member having different roles and responsibilities, being very interdependent, but working toward a common goal. And so when I look at teams, I see them as a social system and looking at them from the systems perspective kind of brings in some of my background in human systems engineering, human systems integration.
Daniel Serfaty: Talking about your background, actually: what do you do? Could you describe for our audience what you do in your day job? What is it that you truly do? I know you’re a professor, but you’re also managing all these students and the labs. So what do you do?
Nancy Cooke: A lot of Zoom meetings. I mean, that’s how we interact these days with COVID. But I do have a lot of meetings. There are meetings with my individual research teams, with other collaborators, with program managers for the various grants that I’m working on. And we’ve also pivoted now to collecting data remotely, which has been quite interesting. So how do you collect data from teams and do it remotely? And so that’s been a real challenge, but I think it presents some opportunities too.
Daniel Serfaty: What about you, Steve, what do you do? The reason I am asking is because some team researchers look at a blank piece of paper and invent new models, new theories. Some others go to the lab and observe how teams are put together and how they behave and perform, and some others do other things. How do you approach your work?
Stephen Fiore: Primarily by reading as much as possible. That includes not just what gets published in the literature; I also accept way too many reviews, and I do that because it forces me to read articles and learn about areas that I wouldn’t otherwise. And there are also grant reviews. But a big part of what I do that may be slightly different from what Nancy does is what I call practicing what I preach and preaching what I practice. I try to work with other scientific teams and people who are trying to develop their ideas. And it’s a kind of facilitation role around research, where I’ll just meet with people who know they have an area of interest but really haven’t jelled on what kind of questions they want to pursue.
To me that’s really the fun and exciting part of what I do is to do this kind of knowledge elicitation, not unlike what Nancy described earlier, but it’s in the context of a research group and you’re pulling knowledge from these different people. And I use a lot of cognitive artifacts in these kinds of meetings. So I externalize that, use the whiteboard liberally so that people can see the ideas instead of just trying to remember what was spoken and help them integrate those ideas and produce something like a notional research framework and notional hypothesis that they could pursue, for example, with a grant or in the development of an experiment.
I think it’s important, like I said, to help people who are working on these complex problems, because that’s a focal area of research for me, but also because those other researchers are working on very important problems, and I want to make sure that they can wrap their heads around the kind of problem space they’re dealing with.
Daniel Serfaty: That’s fascinating, because in a sense that is what the science of team science is: team scientists collaborating, forming teams in order to better understand how teams work. Perhaps I’ll start with basic principles. You already made a distinction, Nancy, between teams, which are a particular structure, and groups of individuals in general. What is a team? Is any group of collaborating individuals defined as a team? Are teams uniquely human structures, or do we find them in other places: in society, in nature, with animals? Tell us a little bit of the ABCs of teams.
Nancy Cooke: I don’t think teams are uniquely a human structure. I think one of the key aspects of teams that’s important is the interdependence of the individuals on the team. So when we’re constructing a team task, we have to keep in mind that we need to have interdependence in order to have a team. Steve and I are on an ISAT study group that’s looking at teams of humans and animals as an analog for human-AI teaming. And so I’ve been talking to lots of people in the military working dog program. We’ve been talking to people in the Marine Mammal Program, looking at how teams of humans and animals can work effectively and trying to take some of that information about how the dogs understand the intent of a human and think about crafting AI in that image.
Daniel Serfaty: That’s fascinating. So interdependence is a key component here; any society or collective that does not have interdependence, truly explicit interdependence, is therefore not a team. It’s something else.
Nancy Cooke: I think that’s right. And there is some controversy over whether a machine can actually be a teammate because a lot of people view teaming as something that’s uniquely human. But I think as long as we have interdependence, there’s no reason that a machine can’t be a teammate. That doesn’t mean that you lose control of the machine. You still can have control just like a team leader can have control of the subordinates. So I think definitely human AI robot teaming is a thing.
Daniel Serfaty: Yes. Maybe it’s also because in the popular literature, especially in American society, there is a lot of value placed on individuality, but also a lot of value placed on teams. A lot of the sports culture is about doing something together in ways that perhaps produce an outcome superior to any outcome an individual would have produced by herself or himself. Is that right?
Nancy Cooke: It can go either way, right? We always say a team of experts doesn’t make an expert team. So you can’t just bring people together. There has to be this interdependence and cohesion and synergy. And I use the example of the 2004 Olympic basketball team that was called the Dream Team, made up of very professional, stellar players who came together and ended up losing at the Olympics, and the speculation is that they didn’t have the kind of cohesion and interdependence that allowed them to play together. This team of experts was not an expert team. And contrast that to the 1980 hockey team that went to the Olympics, made up not of professional hockey players but college-level hockey players. They were good, but they weren’t professional. And by virtue, people think, of very good coaching, they came together and beat the Russians at the Olympics. They were the underdogs. So that’s an example of how it’s not so much the parts of the team, although there has to be some kind of prerequisite for being on the team, but the interaction that’s important.
Daniel Serfaty: Steve, from your perspective, looking at it maybe from an angle that is slightly different from Nancy’s, in what specific way is a team better than the sum of its parts? What are the internal mechanisms, whether taught or innate, or developed or supported maybe by technology, that make a team better than the sum of its parts?
Stephen Fiore: In addition to interdependencies as one of the core features or defining characteristics of teams, we’ve talked about the task-relevant knowledge and the specialized roles that come out of that task-relevant knowledge. So there’s an important degree of complementarity in what team members know. What makes teams special are then the emergent processes that occur, or hopefully will occur, when you bring these people together. And probably the easiest way to think about this is something referred to as knowledge co-construction, where you bring together people who know different things, and based upon their focusing on a particular problem or particular decision, they co-construct a solution based upon what they individually know. And they piece it together to produce some kind of solution that they would never have been able to come up with on their own.
So that kind of synergy, that kind of synthesis, is what makes teams more interesting. An individual can’t necessarily do that unless they spend a lot of time reading multiple literatures and becoming familiar with different domains, and as I said, there’s too much knowledge produced. There was actually an article written by Ben Jones, an economist, who referred to this as the burden of knowledge. He specifically talked about the proliferation of scientific and scholarly papers over the last few decades and how there’s no way you can be the kind of renaissance man who knows everything about everything. And because of that, we need to team all the more, and the special properties of teams are such that we hope they can come together with their specialized knowledge, have meaningful interdependencies, communicate appropriately, and then construct solutions or make superior decisions that they couldn’t have otherwise.
Daniel Serfaty: But that construction you talked about basically requires one member of the team to know something about what the other members do. That’s part of the interdependence, but also part of that knowledge construction. The quarterback needs to know something about the job of the wide receiver, but not all of it. How do we determine the right amount of knowledge about the others in order to optimize how a team works?
Stephen Fiore: Now you’re touching upon the area referred to as team cognition, and what you’re essentially describing is part of a transactive memory system. What we know about that is effective teams know who knows what. That’s probably one of the necessary features: you have to know who is an expert in what area. But you added an additional dimension, and that has to do with how much you need to know about another person’s position, role, or specialty. And honestly, that’s an empirical question. Nancy and her colleagues did a lot of research on that in cross-training, to see the degree to which you needed to know the knowledge and the roles and responsibilities of other team members, and I think it’s going to be contextually dependent.
So I might be a better team member if I knew how to code, but I’m not going to learn how to code. I’m going to go to the people on my team who know how and say, “Hey, can you do this?” And they’ll say, “What do you need?” And then we’ll talk about it. It’s about identifying what’s the appropriate level of knowledge. And in the case of scientific teams, I think the bare minimum is understanding the lingo, that is, the specific terminology, and trying to use the terminology appropriately. You can be on different teams and hear people with different areas of expertise use concepts differently than you do, and that’s an important thing to recognize so you can coordinate that knowledge among members much more effectively.
Daniel Serfaty: Nancy, we’re going to use the rule of presidential debates here: when one candidate mentions the other person’s name, the other person is entitled to an addition or a rebuttal. Steve mentioned your research here. Can you add to that answer? Because I think it touches the core of what makes teams function well.
Nancy Cooke: I think I can best address it by talking about my work with a synthetic teammate. We’ve done work in a simulated unmanned aerial vehicle ground station, which is a three-agent task: you have a pilot, a photographer or sensor operator, and a navigator or mission planner. And we worked with the Air Force Research Lab to put a synthetic teammate in the seat of the pilot. And so we tested how teams with a synthetic teammate did at this task compared to all-human teams. And it was really interesting that the synthetic teammate was really good at doing its own task. It knew how to fly the vehicle.
Daniel Serfaty: So synthetic teammate is fundamentally a computer program that has been designed to behave a certain way?
Nancy Cooke: That’s correct, yes. It was designed to know how to do its task. But what it didn’t do was anticipate the information needs of the other teammates. When humans are on a team, they realize there are others on the team who probably need something from them, and they need something from you. The synthetic teammate did not anticipate the information needs of the human teammates. And so they would always have to ask for the information they needed, instead of having the synthetic teammate give it to them ahead of time. As a result, and this is really interesting, the team coordination suffered. But what happened eventually is that the two humans on the team also stopped anticipating the information needs of others. So it became everyone for themselves.
And so some people ask me whether it’s that the synthetic teammate doesn’t really have a theory of mind of the human teammates and therefore doesn’t know what they need. And that may be true, but I think it’s probably a very simple theory of mind that it needs. What it’s missing is: what’s going on in this larger task, what are the roles of these other agents on the team, what do they need from me, what can I give them, and when do they need it?
Daniel Serfaty: It’s fascinating. You are saying, and I don’t want to extrapolate here and say something that you haven’t said, but in a sense it’s almost like the synthetic teammate didn’t have empathy programmed into it and therefore didn’t make the effort to guess or anticipate or predict what its teammates wanted. Is that something we should expect naturally from a human teammate, or is it something that we train?
Nancy Cooke: Well, even human teammates differ in their ability to be a good team player. It could be just this idea of empathy, but people do seem to come to our experiments knowing what it means to be a teammate.
Daniel Serfaty: So perhaps as a way to go back to the different models of teams in the literature: there are models that come from systems engineering, models that come from organizational psychology, other models from cognitive science, even sociology. Steve, there are many models of teams in the literature. I know you’re a cognitive scientist and you have your own biases, or preferences, let’s call them that. But which one do you prefer? In a sense, which one tends to explain team phenomena best?
Stephen Fiore: The concept of a model is one of these nomadic concepts, meaning it travels across disciplines and means different things to different people. And so with that caveat, I know that the way social scientists use model tends to be very different than the way computer scientists or engineers use the word model. And I’d say that what a lot of the team research is doing are primarily frameworks and it’s organizational relationship among concepts to try to say things about how they’re associated and how some processes relate to certain outcomes.
So I’ll speak of frameworks and I think the initial frameworks that were useful for team researchers were something like the IPO, input–process–output models where from the input standpoint you’d look at things like the composition, who is in the team. You’d look at the kind of task characteristics. What did you need to do? What was the interdependence? And then you looked at the kind of process factors. So the communication going on, the backup behavior. And then you would look at various outcomes that were possible. You could look at their changes in knowledge, their real kind of performance outcomes, or you could look at outcomes, like were they more cohesive, did they learn something?
So these initial ways of thinking about teamwork were useful because, again, this organizing framework helped researchers parse the many different concepts that they were trying to understand. And then when the field got more sophisticated, they started talking about moderators and mediators. So it became the IPMO, input-process-moderator-mediator-output, where you would look at things like, oh, you can’t simply look at communication. You have to look at something like an attitude such as psychological safety because that’s going to moderate the amount of communication that happens in a team, which then is going to influence the kind of output.
So again, these are models like the way you would think of an engineering where you have not simply descriptions but also predictions that you can quantify and test. So I’d say these frameworks are still not models and what I’ve been trying to do is develop a more niche kind of model in collaborative problem solving. So the macro cognition in teams model where we move beyond this kind of input-process-output to take into account different factors of a collaborative problem solving scenario where you have individual and team level cognition. So there’s these individual and team level knowledge building processes. There is these extended or external cognitive processes.
Those interact in specific ways from which we can generate hypotheses about different kinds of outcomes, such as the particular kinds of discussions they may have, or how they'll argue about solutions and try to identify the best one. And we can generate specific hypotheses that say the more a solution is interrogated, the more a solution is argued about, the better the solution will be and the more likely they'll come up with a good outcome. So the short answer is that right now I'm trying to develop a precise model, but doing so in a narrow way in the area of collaborative problem solving, which is just one facet of teamwork. And this is an attempt to integrate different factors from cognitive and organizational science into a model that has some kind of practical utility as well.
Daniel Serfaty: I wanted to force you into a trap, but you didn't fall into it: to force you to choose which one you prefer. But you like all your children equally, I understand that. Nancy, perhaps you can illustrate and add to what Steve is saying. The study of groups, of collective behavior, has been around for more than a century, but in the past, I would say, 30 years there has been more of a focus on understanding teams the way you defined them earlier. Can you point our audience to one or two, if not breakthroughs, at least key insights that this research has produced that were actually implemented in management teams, in law enforcement teams, in military teams, in medical teams? Something that we didn't know, or didn't articulate as well, maybe 30 years ago, but that today we know.
Nancy Cooke: Well, I think we have a better handle on what goes wrong in teams. And so when we go out to industry and look at some of their teams, there are certain things that come up again and again, and in my experience they come down to two things: communication, and usually it's the lack thereof, inadequate communication, or role conflict. Not knowing who's supposed to do what, not knowing what you're supposed to do, and that wreaks a lot of havoc in teams.
But communication is definitely a big one and that’s what a lot of my research has focused on. How do we measure team communication and say something about that communication? We look at the communication dynamics and not just the content of what’s being said, but who’s talking to who. And looking at those patterns over time provides a good way to measure team cognition without getting in the way, without having to stop them and ask them to fill out a survey. But we’ve really made a lot of progress, I think, in that area and more generally in just how we measure teams and team cognition.
Daniel Serfaty: That's fascinating. Let's talk a little bit about communication, because if you open the popular literature, or even sometimes a management kind of publication, not a scientific publication, there is a thing that people preach: that more communication is better. Is more communication always better in teams?
Nancy Cooke: Not always, no. Sometimes communication that is either meaningless or maybe even destructive is not better. So it’s definitely not more is better. In fact sometimes you want to be as succinct as possible. In the military, for instance, we don’t want to go on and on because you don’t have time and you have to convey exactly what the intent is or what the next action is as clearly as possible.
Daniel Serfaty: That’s interesting. So selective communication rather than more communication. I’m sure this is a key debate in the team research literature. Steve, do you want to add something to what Nancy just said about communications in particular?
Stephen Fiore: The study of communication is one of these areas that has co-evolved with the study of groups over the 20th century. And it's an important area because studying communication helped create these other concepts that are really important for the understanding of teams. And the specific example is related to your question about whether more is better. Some of the early research looking at expert teams showed that, no, good teams do not communicate as much. They only communicate when needed. And independently, we had Judith Orasanu and Jan Cannon-Bowers develop the shared mental model concept based upon that kind of research, and the inference they were drawing is that team members knew what each other knew, and therefore did not have to engage in explicit communication about everything.
They could be brief and they could speak about only something that they knew was relevant to the task at hand, so they didn’t have to explain everything all of the time because they knew their teammates knew these kinds of interaction patterns, their roles, and they would look at a situation and identify what was going on and then speak about only the important components of that. So they didn’t talk as much as poor teams.
Daniel Serfaty: That's an interesting insight actually. Nancy, is it linked to the concept you raised about role conflict, in the sense that, in addition to knowing what the other person knows or doesn't know, I also need to know what the other person does and doesn't do? It's not just about knowledge.
Nancy Cooke: Exactly. This is also linked to a very famous article, Entin and Serfaty, that talks about implicit communication, and the idea that when you know more about what everybody else is doing on the team, you don’t have to communicate as much. You communicate implicitly.
Daniel Serfaty: So as we explore basically what we know about teams, I want to ask a question. In your experience, you’ve observed several kinds of teams and you’ve studied many kinds of teams, certainly in very different work environments, some mission critical, some other just regular work teams. What stresses a team the most? I’m an evil person on the outside and I want to disrupt the work of a team. How can I stress that team the most?
Nancy Cooke: Good question. I think I would interfere with their communications. If the team can’t communicate, then all you’re left with is the implicit part, and they’d have to be really well-trained in order to keep going.
Daniel Serfaty: Steve, if you are that nefarious agent from the outside who wants to disrupt the working of a team, what would you do?
Stephen Fiore: I would look at some of these frameworks, like the ABCs, the attitudinal, behavioral, and cognitive features of teamwork. I could mess with an attitudinal component such as trust and do something to diminish the trust on that team; therefore they won't communicate as much, or they'll be deceptive in their own communications because they don't trust each other. Another attitudinal component would be psychological safety. I could disrupt that by insulting members of the team so they don't want to speak out anymore. We could look at the behaviors. We could increase the performance monitoring that goes on so they'll be worried that they're always watched. That may cause them to choke.
We could influence the leadership, the shared leadership on that team such that one person may be more dominant than another and create this imbalance in coordination. You could interfere with their cognition where you could change the transactive memory system or the shared mental model through something like membership change. So pull a member out of that team and put someone into that team. Those are all the kinds of features we know from studying teams in multiple domains that will produce some kind of process and outcome disruption.
Daniel Serfaty: I'm impressed you thought about many, many ways to disrupt things, Steve. But in fact, I know that you are not a nefarious agent and you won't do that. But in a sense, working in different organizations, sometimes the organizational climate around a team actually induces all the effects that you just described, or many of them. Teams don't work in a void. They usually are part of a larger organization. To what degree do the variables of that larger organization surrounding the team, other teams, enterprises, departments, affect the performance of the team itself? Because in the lab we quite often isolate those variables in order to manipulate just the variable that we want. But those stressors, or any other variable that we apply to a team, sometimes come not from a manipulation but just from a climate or a culture or some external event. Nancy, do you want to comment on that a little bit?
Nancy Cooke: I think that's exactly right. We do have research out there on multi-team systems, but I think what you're talking about is maybe a little bit different. It's the climate surrounding the team. I know in one petroleum organization I visited, it turned out that there was some bad teamwork, and part of it boiled down to the climate and what individuals were rewarded for. They were told that safety was most important, but they were really rewarded for productivity. And so this whole climate created a lot of friction on the team, because when people had safety issues, those issues would conflict with their goals to be more productive. So yes, it can have a huge effect.
Daniel Serfaty: You mention multi-team systems. I’m sure our audience is not necessarily familiar with that concept. What is that?
Nancy Cooke: Some people call it a team of teams and we do have this a lot. In the military you’ll have platoons and squads and everybody is interacting at some level. We’re actually developing a testbed to try to study this where we’re going to be interacting using our UAV testbed, unmanned aerial vehicle, with a similar testbed at Georgia Tech and one at the Air Force Research Lab also connected to a ground battlefield kind of simulation. And so we’re hoping to do research that looks more at these really complex interactions.
Daniel Serfaty: And I'm sure your systems approach to understanding those complexities helps here, because in a sense it's a system of systems. We talked quite a bit about how teams think, how teams work, how teams solve problems together. Steve, and Nancy actually, this question is for both of you: what do we know, if anything, about the way teams learn? How do they learn together, or is it just the sum of the individual learning? Are there particular ways teams acquire skills and knowledge, learn in a sense, that are different from the way individuals learn?
Stephen Fiore: I think I'll reframe it as: how can you facilitate learning in teams? And I don't know that it's necessarily different. One key example that comes to mind is the process of reflection and feedback. Debriefing is really a crucial part of any team, and the military has done debriefing; they do pre-briefing, they do debriefing. Sports teams do this as well, where they'll engage in these kinds of preparatory activities, where they'll watch game tapes and prepare for an opponent, but then they'll watch the game tapes after a game and reflect on what went well and what went poorly. And this is an area that I'd say holds one of the more robust findings in team research, because there's a lot of evidence to show that debriefing, this reflective activity after some performance, can facilitate learning.
You have to put the right structure in place, meaning it has to be facilitated, it has to be guided or else there’s going to be potential group dynamics that interfere with a good discussion. People might be intimidated, people might be afraid to speak clearly and honestly about what went wrong. But when you have this structure in place, you know they can identify, hey, you did that poorly, or I did that poorly. When you have things like psychological safety, when you have trust on that team, you can speak that way to each other. You can communicate in effective, explicit ways where you can identify where the team did poorly and where they did well. So that reflective activity produces the kind of learning that they then take into the next performance episode.
Daniel Serfaty: Reflection and feedback, certainly. Thank you. As we are looking, again, in a sense in the rear-view mirror before we move to the second part of our podcast, which is going to look at the future of teams, and both of you have started planting some seeds for that one, I want to ask this question: if you look back at your career, or at least the part of your career where you focused on teams and teamwork, was there an aha moment at some point, besides the one you had maybe in graduate school, or maybe that one too that you described earlier? An aha moment, suddenly an insight that you gained, when you grasped something about teams that you didn't know before?
Nancy Cooke: One aha moment was when I was thinking about team situation awareness and what that means and how we would even measure it. Is it the idea that everybody on the team knows the same stuff about the whole situation, that everybody knows everything? I didn't think that sounded right. But I was in a parking garage. I was at a conference with my graduate student, or postdoc at the time, in a rental car. And I was backing the rental car up, and this is kind of an embarrassing story about my driving skill, and I almost backed it right into a cement pole. But I didn't. And why didn't I? Because my postdoc did his job and pointed out, "Oh, don't back up. You're backing up into a pole." And at that moment I thought, well, this is what team situation awareness is: conveying information on the team to the right person at the right time in order to avoid, in this case, a disaster.
Daniel Serfaty: That’s a great example. So you had the perfect mental model of the absent-minded professor at that point. Steve, can you share with us one of those insights or aha moment in your quest to understand teams better?
Stephen Fiore: Sure. One would be a project I was working on with Nancy and a number of other people in the early 2000s, when we were trying to develop the very complicated research proposal for a center, a $25 million center funded by NSF. We had a small grant called a planning grant to develop the concept. And with that grant, we were supposed to be spending time thinking about what you would do with $5 million a year to study, and in our case it was expertise. And in that project, we were trying to coordinate among a number of different scientists with different kinds of specialties. In my role as a co-PI on that project, I was struggling with how we could do this better. So I said, well, what does the literature say about how you coordinate science teams? And the aha moment was: hell, we've never studied scientific teams.
So the people who had looked at it were not what we would call organizational scientists or training researchers. There had been some people in policy who had looked at it, but certainly not the way we team researchers study teams. So that was the aha moment: there was this huge gap in the study of teams, where we had never really looked at scientific teams and how to improve teamwork in collaborations in science. That kind of changed my career, but I didn't really do anything about it for a few years, and then wrote a paper in 2008 that said this is an important topic. And there were enough people in Washington interested who were also pursuing it. So I started collaborating with people at the National Institutes of Health on what we now refer to as the science of team science. We spent a lot of time trying to cultivate this, so people like you, people like Nancy, people who study teams, would recognize it as an important area of inquiry.
Daniel Serfaty: Thank you for sharing that moment with us. This is a perfect segue, in fact, into what I would like to explore with you in the next discussion, which is basically the future of teams. And I would urge you to talk about either teams that you study, or teams in which you perform yourself as a teammate, or teams of teams. With the COVID-19 pandemic forcing enterprises, whether corporate or academic, into distributed and remote work situations, are we witnessing an evolution of the definition of how teams perform and of what teamwork is in those situations? Nancy, do you want to take that on? And then, Steve, I would love your perspective on that.
Nancy Cooke: On the one hand, because we are distributed and don't have to commute to a meeting place, we can have more of these meetings, an almost infinite number of meetings, and that may improve teamwork because there's more communication. On the other hand, there are some things that we know about good collaboration that we're missing. So I think COVID is taxing teamwork for that reason. And the two things that come to mind are food and serendipity. A lot of good collaboration happens when we share food with one another, when there's a common break room or you go out for pizza or drinks after work. That's when a lot of the collaboration happens and people relax and open up their minds a bit.
But the other thing is serendipity. A lot of good collaboration happens because we run into each other in the hallway or on our way to the restroom or at a particular meeting that we didn’t expect to both be at. So we’re not doing either one of those things. We’re not sharing food and we’re not being serendipitous. And people try to use breakout rooms, I think, to get at some of that. But I don’t think it’s sufficient. So I think maybe we’re improving the number of meetings we can have and maybe teamwork because there are so many meetings, but also we’re taxing the teamwork.
Daniel Serfaty: That's very interesting. I would have thought of the serendipity, but the food is certainly an intriguing variable here. Steve, how about you? How do you think this new world, induced by COVID, is affecting the way teams work together?
Stephen Fiore: I'd like to think it's calling attention to the need for better technology to facilitate this kind of collaborative, distributed work. Virtual teams have been a subarea of research for a number of years now, and there are fairly sophisticated frameworks for looking at them. Part of the problem is that the people who study technology are in computer science, and there's a field of computer-supported collaborative work that overlaps somewhat, but not completely, with researchers in teams. And because of that disconnect, I think that the people who are building technologies to support distributed work may not be collaborating with the right people, and may not be aware of some of the teamwork needs when it comes to this kind of distributed work.
So I think the limitations are becoming much more apparent because we're forced to use some of these technologies. I won't name any particular companies, but there's certainly a lot of variability in the effectiveness of these different platforms that we're now using. And some of the bigger names are really bad, surprisingly bad, at developing these collaborative technologies. So my hope is that this is a kind of use-inspired science and engineering, where, because of the tremendous increase in collaborative work, they're going to be developing better technologies. And it's also up to the users to make more apparent, and to inform the designers, what we don't like about the technologies. And I see some adapting fairly well to this, but others are too rigid in their technology and they're not changing.
Daniel Serfaty: It is fascinating, along those lines, and at least I can observe this in my own organization, how people spontaneously have been trying to reconstruct what has been deconstructed, or destroyed, by the remote work situation: the equivalent of the water coolers, the serendipity. They almost seek to induce artificially, or promote, the conditions for serendipity. And I'm witnessing that not because it's something decided by corporate, from the top, but rather, maybe, as a collective subconscious effort to make up for what the technology, as you say Steve, is not providing us. And I think there is research screaming to be performed here, to see exactly what our shortcuts are, those additional channels of collaboration that people have created around those tools. Steve, you wanted to add?
Stephen Fiore: The serendipity, this has been studied for example in the science of team science and the label for that is referred to as productive collisions where you run into somebody in the hallway, “Hey, what are you working on?” And they share what they’re working on and you say, “Oh, that sounds similar to something I’m doing. We should get together.” Or just the ambient awareness of what people are doing. So if they’re working on a whiteboard somewhere, someone may witness that and say, “Oh, that looks interesting.” And go and talk to them about whatever is the model or the data, whatever they’re graphing on that whiteboard.
Those kinds of informal interactions are really critical to any organizational innovation, and I don't know how well we can engage in social engineering to produce that. The only example I can think of is that a lot of us run scholarly organizations and are now running virtual conferences. And for one that we ran in June, we specifically tried to mimic the chats that happen during coffee breaks. We know that's where the real action happens. It's not necessarily during the talks; it's after the talks, in between the sessions. So we set up Zoom rooms that anyone could go to, and said, "Hey, if you want to meet up with someone, go check out this Google Sheet and go to this Zoom location and have an informal conversation."
And it turns out, I found out a couple of months later, some company had developed an app for that to try to foster these kinds of informal get togethers at virtual conferences. And as you well know, my favorite part of conferences is hanging out at the bar at the end of the day where you share a drink with friends and you just sit around and talk about what you’re doing, what are the interesting things you’ve learned. We’re trying to mimic that. There are these virtual happy hours, but it’s really not the same thing. I have no solution to it, but you’re right. This is a significant problem that we’re going to have to figure out how to overcome.
Daniel Serfaty: I’m glad you added the drink portion of the food hypothesis that Nancy proposed. Nancy, you want to add something?
Nancy Cooke: Yes. I meant to include the drink in the food. The other thing that I think is really difficult, and maybe hampers communication, is this idea of lack of co-presence. I can't see what's going on around you, except directly behind you, and the same with Steve. And so there may be some distraction happening in the background that I can't see, and that maybe changes what I say. A lot of communication happens in context, and we're pretty impoverished in the context that we share right now.
Daniel Serfaty: Very insightful. And again, you guys are giving me the softball here, because this is a perfect segue into the next area. I would like to explore with you this notion of knowledge, implicit knowledge sometimes, of the context in which teamwork is happening. Can we engineer it? That's really the question. But to me, a major recent evolution of the study of teams is to apply our theories, science, and models of teams to the design and evaluation of hybrid teams. We are making a jump that is not just metaphorical but real. We are looking at teams of humans and artificial intelligences, which can be software, which can be robots. And both of you are leaders in thinking about this problem. Could you share your perspective and experience with this area, particularly for our audience, who certainly want to hear what you are doing in exploring these human-AI teams, perhaps futuristic teams, perhaps current teams, but also highlight the similarities and the differences with human-only teams. Who wants to start with that?
Nancy Cooke: I’ll go.
Daniel Serfaty: Okay, Nancy. I know that one of the centers you're managing at Arizona State University is actually called the Center for Human, AI, and Robot Teaming. That's very brave, to use that term there. Tell us about it.
Nancy Cooke: Considering that some people don't think that robots and AI can be teammates, it is. But we emphasize that we're not an AI center, we're not a robotics center; we are emphasizing teaming. So we're about the relationships between humans, AI, and robots. And I think one mistake that people make is to try to replicate humans in the AI or robots. So you make robots that have a face on them, or you try to develop the AI so that it has the kind of, say, theory of mind that a human would. And I think that's just counter to this idea of teaming. Teams, as we were talking about, are heterogeneous, with different roles and responsibilities, and I think that argues against replication of humans in AI or robots.
The AI should do what it does best, and humans should do what they do best. AI should also do what the humans don't want to do because it's dull, dirty, or dangerous. That's the principle that I've been acting on. And trying to make this kind of team work, I think, is going to be very different from making a team of all humans work, because we're now teaming with a different species.
Daniel Serfaty: Thank you. And I certainly want to explore that further. Steve, you want to add to that?
Stephen Fiore: Sure. One distinction that I have found useful when we think about technology writ large integrated with teams is the difference between teamwork and task work. Task work is the reason you're brought together: you're trying to solve some problem, you're trying to make a decision, you're trying to meet certain objectives and goals. But teamwork is the process you engage in in order to accomplish that task, to meet that objective. So by differentiating between them, you can think about how and what you are designing. Are you designing a technology to support the task work, or are you designing the technology to support the teamwork?
And the argument that my colleagues and I have been making is most AI, most technology has focused on the task work. And we’re now moving into this new realm where AI is potentially capable of supporting the actual teamwork. And like Nancy mentioned, that gets us into more human kinds of cognitive processes. Theory of mind is merely a label for a particular kind of social cognition and that particular kind of social cognition is necessary for team members to, for example, anticipate each other and engage in what we would refer to as something like backup behaviors.
So you need to have enough of a shared mental model that you can say, “Oh, Daniel’s in trouble. I know Daniel’s supposed to be doing this right now. And because I have an appropriate theory of mind, I can tell he’s stressed, but I’m going to step in and help him accomplish his task at this moment.” So that’s where something like theory of mind is going to be needed. And again, it’s just a label to describe what we used to refer to as shared cognition. So it’s these more collaborative components of teamwork that are the next generation of technology.
And again, I just use the term technology. It could be a robot, it could be a decision support system. It doesn't have to be an embodied agent; it could be a disembodied agent. You all worked on decision support systems, and you were trying to develop intelligent decision support systems back in the '90s. In that case you were trying to facilitate both task work and teamwork with technology. So the larger point is that this is really not that new. We have always been trying to develop technologies to augment individual and collaborative cognition. The only thing that's new is the capabilities of some of these technologies. And it's our job as social scientists to point the technology in the right direction.
Daniel Serfaty: That's very interesting, and it also adds complexity and texture to the problem. So for our audience: both of you are talking about technology to enhance teamwork and task work. But is it more than that? Are we talking about imagining a team in which one member will be replaced by an artificial member? Or are we talking about a team where artificial intelligence can be a node in the team, if you wish, but can also have other functions, to facilitate the team processes? Are we talking about both? Because our audience is going to think, okay, they are talking about replacing a job that is currently being accomplished by an expert, by a human, with a machine. Which one of these two paradigms are we discussing here, or maybe we're discussing both?
Stephen Fiore: It's definitely going to be both, and more, just referring back to the point Nancy made that we shouldn't be thinking about AI as just like a human. The point is to develop technologies that can do more than humans can do. It's just that when a new technology comes along, we tend to think about it as a substitute. But the real goal is to think about how it can augment capabilities, how it can produce something greater than just a team of humans could produce. And to your other point, one of the distinctions we're making is: is the AI a member of the team, or is it more like a coach or a facilitator, continually in the background monitoring what's going on and intervening when necessary? Or, like you said, is it a member of the team, where it actually has specific roles and responsibilities?
And as I said, we're really talking about both of these, and I think we will see both of these. In fact, there are certain companies making claims now that they have ambient AI that can monitor meetings and facilitate meetings. DARPA tried to do something like that a couple of decades ago. So this is recognized as a continuing need. But I think Nancy's point is the critical one: we need to think of AI the way we think about teams, with complementary skills and complementary knowledge. AI can do things humans cannot do. Do not look at it as merely a surrogate; look at it as a kind of cognitive, collaborative amplification.
Daniel Serfaty: Nancy, you have been a leader in the field, in the sense that your lab has been specifically designed to study this. You're starting to have empirical evidence and publications about results regarding this notion of hybrid teams. Can you tell us a little bit about that and expand on this notion of where the AI is in the team?
Nancy Cooke: Yeah. I've been spending a lot of time developing testbeds in which to study human, AI, and robot teaming. I think they're really important, because it's hard to study this out in the real world: we don't see a lot of good examples, and the real world doesn't give us the control of a laboratory. So we rely heavily on testbeds. We set up testbeds to look at tasks like unmanned aerial vehicle ground control, cybersecurity, and urban search and rescue. We also make a lot of use of the Wizard of Oz technique. And for those who don't know what that is, it's based on The Wizard of Oz, one of my favorite all-time movies, where Dorothy unmasks the wizard and it's really just a guy who's playing a wizard.
And so in our experiments we will have a human experimenter play the role of the AI, or in some cases even a robot, a physical robot. That way we can have better control over what the AI and the robot are going to do, how they interact, how they make mistakes, in order to get a leg up on the design of AI and robots. Without the Wizard of Oz technique, you'd have to wait until the AI is developed, then go test it and tell the developer, that didn't work, do it again. This way we can get the human factors information to the AI developers just in time, or maybe even ahead of time.
Daniel Serfaty: I have a wonderful ongoing debate, and maybe that will be the object of a future MINDWORKS podcast, with Professor Ben Shneiderman, whom both of you know, and who recently wrote an article called Human-Centered AI. It's a concept that is not technology-centered, but rather looks at the human first and at how to augment the human. He and I have had several discussions at different conferences and even in private. I have the honor to count him as a member of the Scientific Advisory Board where I work.
We had this debate about: why is AI exceptional? I argue that we should talk about multi-species systems, human-AI teaming, and he says AI is a tool. So if AI is just a tool in the hands of the humans, either the designers or the teammates, what is exceptional here? Is there some kind of unique exceptionalism with respect to past human-machine interaction? Is AI exceptional in the sense that there is something unique about pairing human and AI that we need to pay particular attention to, or is AI just a very sophisticated, capable machine that fits the typical human-machine design problem? Steve, I want the philosopher in you to come out.
Stephen Fiore: I do think it is something exceptional, and I take seriously this idea that Nancy has brought up about "alien." I think what we're seeing is something that's thinking in ways that we can't comprehend, and I'll give you a specific example. I first saw this when I was reading an article about AlphaGo. And what was intriguing about AlphaGo is that it was not simply human versus AI; they also had what we refer to as centaur teams, where a human paired with an AI played another human paired with an AI. And when they had experts review the games, that is, masters at the game of Go, they literally referred to the play as kind of alien. They couldn't comprehend what the decision-making process was, yet they recognized that it was superior, and it had something to do with the capability of AI to think in multiple dimensions that we are not able to.
And another area where I've seen this occur is cybersecurity, where part of the problem is that humans can't really comprehend the physics of networks, the speed with which things happen, and the rapid distribution of information across these multiple networks. And that's where I think AI has to come in. AI, like Nancy said, can do things we can't do, and we shouldn't lose sight of that fact. It's not artificial intelligence, because that would mean it is mimicking our intelligence. This is why we're calling it alien intelligence. It's foreign to us.
Daniel Serfaty: Nancy?
Nancy Cooke: I'd like to add to that and talk a little bit about autonomy. For the Starbus study that I mentioned earlier, I've been talking to a lot of people who work with military working dogs. And I had an interesting conversation with one person yesterday who said, and several people have actually said this, that really the weakness on the human-dog team is at the human end of the leash, because what the human wants to do is take the dog around and tell the dog where to go sniff for the target. Sniff here, sniff here, sniff here. What the dog does best is to run way ahead. Dogs can smell their target from very far away; what they do best is to be off the leash, to run ahead, and to find the target very quickly.
The dog should have more autonomy, in other words, but people aren't comfortable giving the dog that kind of autonomy. They want to be able to control the dog. And I see AI in the same light: people aren't very comfortable calling AI a team member, or letting AI do what AI does best. I think it can be a team member by definition: if it's working on team tasks together with humans, then it's a team member. It's a tool that can also be a team member.
Daniel Serfaty: That's a beautiful example. Thank you for sharing that. And certainly the analogy of human-animal teams, this notion of multiple species collaborating, is not just multiple humans with different expertise. We're in another dimension here, and that's fascinating, I think, for us as researchers as well as for the field. In addition to the AlphaGo gaming example you just gave, Steve, are we aware of some domains in which this collaboration between human and AI has already been implemented? Could you share with our audience where this is already part of the way a particular domain works today?
Stephen Fiore: One current example is in software development, like in the GitHub repositories, where you have communities collaborating on these repositories to produce code and reuse code to come up with new applications. And because these are very smart coders, what they'll do is recognize how to automate certain elements of these tasks, and they'll create bots that will do the things that they don't want to do anymore. So there's been kind of an organic emergence of artificial intelligence, these kinds of bots that are supporting the software development. This is an interesting area of research where you see people paying more attention to it from the teamwork standpoint. And the question is, how do you study the degree to which the bots in these collaborations are altering what's going on in the teams?
And we're working on a paper right now where we studied some GitHub repositories and compared repositories that had bots with those that did not have bots, to at least try to understand what was changing. And we've seen changes in productivity for the humans, where the bots are facilitating the productivity; that is, the humans are able to get more work done, and one way to think about this is that they've offloaded some of the work to the bots. But we're also seeing some complicated changes in efficiency. This is one of the challenges with this kind of research: really understanding why there are changes in efficiency.
And in the case of software development, it’s how quickly they respond to issues. So when a bug or a flaw or a change is requested in the software, an issue is created and then the teams have to respond to that issue. And we’re seeing the teams with bots tend to be taking longer. And we’re not quite sure why they’re taking longer. And one concern is it may just be that the bots are helping them identify more issues than human-only teams are able to identify. So this is this kind of field research where we don’t have complete control and without complete control we’re not quite sure what’s going on there, but I think this is a very important example of where bots are being organically created. And we’re already seeing a change to the way work is getting done.
Daniel Serfaty: Interesting. So software writing software basically in collaboration with humans. Nancy, do you have other examples you share with your students where those teams are already starting to find their way into the marketplace?
Nancy Cooke: Yeah. I think this whole idea is pervasive across many different domains. So you see it in transportation, with the self-driving cars. We see it in medicine, robot-assisted surgery, in manufacturing. Looking at the Amazon distribution center, you would see lots of humans running around with robots and tablets with AI on them. You have it in space exploration and certainly in the defense industry. So I think there’s no shortage of examples. And you could argue about how intelligent the AI is and how autonomous it is, but certainly we’re headed in that direction or we’re already there.
Daniel Serfaty: Well, that's certainly a future we can all aspire to. But before I ask you my last question about your predictions, for our children maybe if not for us: all this introduces intelligent, initiative-taking, sometimes perhaps emergent behaviors that we may or may not be able to control. Are there ethical considerations emerging from this teaming of human intelligence and AI that we should take seriously, even as scientists and engineers? What should we worry about in terms of ethics, if anything?
Nancy Cooke: I agree with Ben Shneiderman that we always want to maintain control. And I think especially in some situations like when you’re deploying weapons, you want the human to be in control. Maybe there’s other situations like riding on an elevator where you don’t need that much control. So I think control is important. And part of my answer too is that we’ve been developing technology for centuries. And every time we develop something, somebody can use it for good or evil. And I guess we just have to try to be ahead of the curve and try to make it so that that is harder to do.
Daniel Serfaty: It would be wonderful if science and engineering certainly had a say in that, so that we don't leave those very tough ethical decisions to folks who do not understand deeply what the technology, and perhaps even what team science, is about. Steve, what's your take on ethics?
Stephen Fiore: Well, I think it is the complement to what you just said, because the arguments and the debates are that the people developing the technology don't understand the implications of the technology they're creating. So there is a lot of discussion, a lot of hand-wringing, around how algorithms are being created. Another part of the ISAT community that Nancy and I are working with is actually looking at the inequity in what we would think of as algorithm creation. Who are the people making these algorithms, and do they really understand the implications of the algorithms for the end users, and how they're affecting the end users' lives? There's essentially a lack of diversity in the community making the technology that is impacting everybody, and that's an ethical problem.
Another significant ethical problem has to do with data privacy and how the AI is learning by continually monitoring our activities. Everyone knows Google is watching us; everyone knows Facebook is listening and monitoring us. We've seen the implications of that in the filter bubbles. And it seems that people don't really care as much as they should about this monitoring that's going on. We tend to be concerned in the US about government monitoring, but we don't care about private-sector monitoring. And that's something we're going to have to address, because it's affecting our lives in very real ways. The more things change, the more they stay the same. We've always been manipulated by companies, told what products to buy, and influenced by advertising, and now we're being influenced by algorithms. The speed and the reach of this influence is what's new, what's different. And I don't know the degree to which people are paying enough attention to that kind of influence.
Daniel Serfaty: And there is certainly an instantiation of this worry when we look at a more intimate relationship with this technology at the team level. I understand that at the societal level it's something we're all worried about. I wonder, actually, whether in the future, when we work very naturally in small teams in which AI is pervasive, either as an actor or as part of the coordination mechanism, that notion of privacy is going to be even more critical.
Thank you so much for taking us all the way back to the origins of team research and all the way into the future. But I want your predictions now. I want you to think about the next 10 years, focusing on team science, not so much on teams in society: what would you say are the top two or three challenges in the continuing expansion and sophistication of team research and team science? I know that's a big question, and you could write several papers on that, but could you share with the audience just a couple of nuggets, things you tell your graduate students in particular that they should worry about?
Stephen Fiore: I'd say that we need to look at it from multiple fronts. From the science standpoint, it's the increasing recognition of the need for a multidisciplinary, interdisciplinary approach. I think there are a couple of fronts that are important for team researchers. One is big data, and the degree to which team researchers are at least working with, if not fully understanding, the amount of data out there in the real world that we now have at our disposal. I used GitHub as an example. That's a new area where people are studying teams, but it takes a particular set of skills to look at that kind of data.
Another component of that is the technology being used to monitor teams. There is developing research in cognitive science, which I refer to as neo-behaviorism, where they're monitoring the interactions of bodies through computer vision and through physiological sensors, and making inferences about the collaboration and about the cognition. These are new technologies that people trained in traditional behavioral research, with things like subjective self-reports, may not be used to. So the next generation really needs to understand the suite of technologies they're going to have at their disposal to really get at a deeper understanding of the various levels of collaboration that are emerging from the brain, the body, and the ambient environment in which they're collaborating.
So I think that’s going to be the next part. And I think that we need to create something like a PhD in teams. Not a PhD in psychology, not a PhD in organizational behavior, but a PhD in teams where you’ll take courses on the technology, you’ll take courses on the various theories, you’ll take courses on the suite of methods and measures that are potentially available to you so you study this problem of teamwork from this already multi-dimensional perspective.
Daniel Serfaty: Thank you very much, Steve. That’s certainly an ambitious vision. Nancy?
Nancy Cooke: I agree with what Steve said, and I think the assessment piece of it is really important. That has changed over the last 20 or so years, and I think it will keep changing as we have more technology that can do the sensing of the teams, and more analytic techniques to make sense of those big data. Also, the heterogeneity of teams will keep increasing, I believe, and we'll have more and more multi-team systems. And so we need to get a handle on what that means. By heterogeneity I mean science teams will have multiple disciplines. We may be working with different species, like robots and AI, and maybe we'll even have a human-animal-robot-AI team. Trying to get those kinds of teams to work effectively, I think, is a big challenge, and I think we have tools in human systems integration that can help us understand those systems, and we probably need more such tools.
Daniel Serfaty: Well, thank you, Dr. Steve Fiore, and thank you, Dr. Nancy Cooke. You've been inspiring, both in deepening our understanding of the whole trajectory of team science, and in giving us a taste of what the future will be like when we embed intelligent technologies into our very own social structure called teams.
Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast and tweet us @mindworkspodcast or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.
Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. This episode is part two of a five part series in which we explore the magic of teams. Last week, we learned about the ABCs of teams. If you haven’t listened to that episode yet, you’ll definitely want to do that after you listen to this one. This week, we are continuing our journey in exploring teams in the wild so to speak. And my three guests today are grandmasters in this field. They are not only students and scholars, but practitioners of the craft of team performance.
Professor Eduardo Salas is chair of the Department of Psychology at Rice University. His expertise includes assisting organizations across different industries, aviation, oil and gas, law enforcement, healthcare, helping them foster teamwork, design team training, create a safety culture, facilitate learning and training effectiveness, and manage decision making under stress. Dr. Salas, on account of my lifelong friendship with him, we'll call him Eduardo today. He's one of the most prolific authors in all of psychology. He has co-authored more than 600 journal articles and book chapters. Yes, you heard that right, that's 600. And he has co-edited and co-authored 37 books and handbooks. Eduardo is also the recipient of several lifetime achievement awards for his work on teams and team training.
My next guest is Dr. Scott Tannenbaum, president of The Group for Organizational Effectiveness. Under his leadership, that company, GOE, has supported over 500 organizations globally, including 30 Fortune 100 companies. For the past 30 years, Scott has advised and researched all kinds of teams, from corporate, medical, and military teams to teams that operate in more extreme environments, such as smoke jumper teams, deep-sea dive teams, energy production teams, and even aerospace crews at NASA. His research has been cited more than 18,000 times. His latest book, co-authored with Eduardo Salas, is called Teams That Work: The Seven Drivers of Team Effectiveness. The book was published earlier this fall by Oxford University Press, and I strongly recommend it to all of you. It's a wonderful blend of the science and practice of teams.
And finally, my third guest is Dr. Kara Orvis, principal scientist and vice president of research, and my colleague, at Aptima, Inc. Kara has studied teams and team leaders for more than 20 years, and she's particularly passionate about the intersection of technologies and teams to facilitate teamwork, whether it's using technology to assess teams, technology to support teams, or technology that helps people learn together as a team. So fasten your seatbelts, my audience, and get ready to learn about teams in the wild.
Eduardo, Scott, and Kara, welcome to MINDWORKS. It's really a treat to have all of you together today, which is really something. And for our audience, I'm going to ask you, Eduardo: of all the choices you had when you were a graduate student, and later when you were a young researcher, why did you choose this particular domain of teams for your endeavor? You could have chosen any other field of psychology. Why teams?
Eduardo Salas: Well, like everything else, the interest started in graduate school. One of my mentors, [inaudible 00:03:43], who was an organizational psychologist, gave a seminar in graduate school on teams, groups, collectives, and units. I took the course, got interested in the topic, and that course came kind of at the end of my tenure in graduate school. Within a short period of time, I was hired by the Navy. And coincidentally, the reason the Navy hired me was to develop a team performance laboratory. That's how this adventure, this journey, started: in grad school. My major professor, my advisor, had a course. I loved the course. And my job was about teams. And 40 years later, here we are.
Daniel Serfaty: It's amazing how sometimes a single encounter, with that professor, that day, changes your life and shapes your entire career. It's stunning to me, those small moments.
Eduardo Salas: Furthermore, there's the connection with Scott even. On the first project I got in the Navy, I engaged Scott's advisor, [Terry Dickenson 00:04:46]; Scott was still in school. The first thing we wanted to do at that time, remember, this is 1984, '85, was a meta-analysis of team performance.
Daniel Serfaty: What is a meta analysis, Eduardo, for our audience?
Eduardo Salas: It's essentially a quantitative integration of the literature. You try to uncover an effect size between two variables, an independent variable and a dependent variable, for example. Since I was going to develop this lab, the first thing that occurred to me was, "Let's see what the state of the science is." That's how I engaged Terry and Scott, and again, 40 years later here we are.
Daniel Serfaty: And you're still writing books with Scott 40 years later. Well, Scott, since your name was mentioned, it's like in political debates: once your name is mentioned, you are entitled to an extra minute. You're the founder of a very successful company, GOE, but you were a professor once upon a time. And I'm going to ask you the same question, because I'm fascinated by how people choose a particular domain, come to master that domain, and then even coach and teach other people in that domain. Why teams?
Scott Tannenbaum: I think like a lot of people growing up, I experienced good teams and bad teams, whether it was sports or schools or other sorts of things. So like most people, I could see the good, the bad, and the ugly. And when I got to grad school, that was one of the areas that you could possibly study. I think the ah-ha for me in grad school was there was some research that existed and there could be better research. As Ed mentioned, we overlapped in grad school and had the chance I think to have some heated debates about topics related to teams. So I got interested in it. And shortly after graduating, several opportunities emerged both on the research side that Ed was describing, but also with some of my earliest clients. And it confirmed everything I thought. There’s good teams and bad teams and they could use the help. And if we can do a little better job of researching, I think our advice can get better.
Daniel Serfaty: This is the one question I'm going to ask all three of you, the same question, because I'm fascinated to know why people choose what they do. Kara, you have a PhD in industrial-organizational psychology, very much like Eduardo and Scott, in that particular specialty of psychology. You could have studied, I don't know, leadership, organizational behavior, survey taking, all kinds of things like that. Why teams, and why are you focusing on teams?
Kara Orvis: I like to say I was born a psychologist. It was either a psychologist or an actress, but my dad wanted me to actually support myself, so psychologist it was. I can't remember a time that I wasn't interested in the people around me. And like Scott, I played sports as a child. I always enjoyed team sports over individual sports. And then, in undergrad, at the urging of a friend in the economics department of all things, I took a class in organizational behavior, because at that point I thought I was going to be a clinical psychologist. And I discovered this whole world about motivation and people in the workplace, and learned that there was a whole area of research dedicated to teams and leaders. And that got me hooked, I think similar to you, Ed, one class in grad school. It was my last year, my senior year, and I applied to the school that had the latest deadline, which was George Mason University, which really was lucky for me, because I got to study under Steve Zaccaro. And I just love this area of research. I love my own experience working on teams [inaudible 00:08:21].
Daniel Serfaty: Eduardo, I know you've been in it for so long that everybody who has studied teams in the past half century has probably read your papers. But my question is: why is it so important to understand and study how teams work? Is there something magical about that particular human structure? Is there something unique about it?
Eduardo Salas: It's an interesting social phenomenon. In general, all of us are prone to coordinate, to communicate, to interface with others. Sometimes we do it because we have to, and sometimes we do it because we like it. Sometimes we do it because we are in a context that kind of channels us to do those kinds of things. So over the years, I've learned to appreciate collaboration and coordination and communication. I would say about 90% of the professions out there are team sports; they're teams. We collaborate in healthcare, in aviation, in the military, and now in science and universities. So trying to understand this phenomenon has been an interesting journey for me.
Eduardo Salas: And as you know, team science has a long history, maybe 100 years, depending on how you count it. So it's intriguing. And despite 100 years of science, we're still uncovering new things. We're still discovering new things. And we have new phenomena, new challenges, like teams of teams, what Kara just described. That's what makes this field intriguing; it keeps us young, because there are new things coming out and we don't have a prescription. This would have been a short podcast if we had a prescription, but we don't, and it's complex and dynamic and murky; it has all kinds of complications. So yeah, it's an interesting phenomenon that needs lots of science, and that's what makes it interesting.
Kara Orvis: I liked your use of the word magical in talking about teams because I think they’re magical, and I’ll tell you why. When I was in graduate school, I remember very distinctly when I first realized that there are concepts that only exist in a team, things that don’t exist at the individual level. And an easy one to talk about is cohesion. Cohesion is something that you can’t experience on your own. It truly is something that only exists if you have a group of people that are working together or learning together or whatever their activity is. And I remember that being an ah-ha moment for me, that there’s this whole world of concepts that didn’t exist at the individual level. They only exist at that group level. And I always found that really exciting.
Daniel Serfaty: Yes. We're talking about ah-ha moments, Scott. I always learn from you, because you always have very vivid examples, and your mind is oriented towards observation of naturalistic teams, teams in the wild so to speak. And you superimpose your academic, professorial models on them in order to understand what's really inside. I appreciate that in the book, the Teams That Work book that you just published with Eduardo, you bring several examples [inaudible 00:11:40]. In particular, I appreciate that in chapter two you bring in the Red Sox, even though you're, for the benefit of our listeners, from New York. And a New Yorker appreciating the Red Sox is really a treat.
Scott Tannenbaum: My family doesn’t feel that way, just for the record. They don’t think it’s a treat. [crosstalk 00:11:57] a little differently.
Daniel Serfaty: When you've observed and worked with elite corporate teams, and I know about your history leading GOE, did you have some type of ah-ha moment in which your own models and theories of teams were enriched by something you observed in the wild, either in sports or in the corporate environment?
Scott Tannenbaum: Yeah. I think there have probably been several of them over time. But one that comes to mind is when we were doing work with a global banking institution. And in particular, the focus was on investment banking teams. What's interesting about this, and actually it's work that Ed was involved in as well, is we were brought in because investment analysis at this level is a very well paid profession. These are folks that provide advice that is used to decide whether you're going to purchase another company or not, acquire stock, et cetera, so very big business decisions. And the leaders had what I would describe as hypotheses about what really mattered here. And they wanted to know if they were right. So we went in and started watching these teams of investment analysts. We watched them when the stock market opened at the crack of dawn, responding to the bell; we watched them working. We had the chance to use other techniques like surveys and a lot of interviews of team members, of team leaders, of people that interacted with the teams.
Scott Tannenbaum: And what was interesting is going into this, the leaders had this hypothesis that what you really need is a star investment analyst, like the Red Sox need a star center fielder for example. So that was the logic. And that if you simply put a supporting cast around them and they didn't get in the way too much, it's all about making the star successful. What happened was that there was this natural experiment that occurred, because over time about half their teams were structured that way, and the other half of their teams had really formed more as a true team where they were collaborating together. They had a team leader, but it wasn't all about the team leader. And we had a chance to take a look analytically at what was going on there. And because this is a financial institution, they had tons of actual data, financial data, performance data, et cetera. And what was really interesting in this case is it was the exact opposite of what they expected. The teams that were all about the star underperformed. And the teams that were really operating in a collaborative way, where the teamwork mattered, outperformed them in some ways.
Scott Tannenbaum: So one of the reasons this was an ah-ha moment for me is because, first of all, I think it reinforced what I had been believing up until that time, that teamwork does make a difference. It showed that in this case a targeted research study could help, because it helped unpack the truth from kind of the myth that existed there. And it also reinforced to me that leaders don't always know what they're supposed to do when it comes to teams. They're responding with their guts, and sometimes their gut isn't right. And in this case, the data showed what was really needed.
Daniel Serfaty: That's a wonderful example of where data matters. That's part of the magic of teams too: they sometimes work under the surface, and what you see on the surface is not always diagnostic of what is actually happening.
Scott Tannenbaum: I would say almost every time in my career that I've seen an individual do something really outstanding in an organizational setting, there were people who supported that person and enabled it to happen. Sometimes it's invisible to the organization, but if you look carefully, it was a team phenomenon even though it looked like an individual phenomenon.
Daniel Serfaty: Yes. Yes. I like your example about that very point at the end of chapter two. I won't tell the audience; they need to buy the book in order to find out why you were right. What sometimes looks like superstar performance is actually the result of a lot of complex team interactions.
Eduardo Salas: So let me share one ah-ha moment for me early on in my career. It also had to do with Terry and Scott. Going back to the story that my job with the Navy in the mid '80s was to develop a team performance lab, I attempted two things. One was the state of the science, so that's where we began; we attempted to do this meta-analysis. But the second thing I did was I spent six months traveling around naval bases looking at teams, observing teams of all kinds: [inaudible 00:16:11] teams, sub [inaudible 00:16:12] teams, ideation teams. And after that tour, I remember having a discussion with Terry, and maybe Scott was there. And I said, "I get the impression that all teams are not created equal. All the teams that I observed are somewhat different." And I couldn't get my head around [inaudible 00:16:29].
And in the discussions I had with Terry and Bob Macintosh, who was another professor there, I don't know if this is a direct quote or something, but the ah-ha moment was that Terry said it's all about the task interdependency. And for the 40 years or so that I've been doing this since, that has stayed with me as an ah-ha moment. "Of course! Task interdependency drives the kind of teamwork you're going to have, the kind of team performance you need to engage in. The kind of behaviors, the kind of cognition." And so to me, that's something that I have carried in my head for all these years. And I make a point to always try to understand the task interdependency that is embedded in the kind of team that I'm looking at.
Daniel Serfaty: That's very good. So what you're saying, in fact, is that the work that stimulates the team, in and of itself, has in it a structure that will provoke certain behaviors in the team.
Eduardo Salas: Correct.
Daniel Serfaty: So if you want to understand a team, you shouldn’t look just inside, you also have to look on the outside of the team to truly understand how that team works. Is that right?
Eduardo Salas: Yeah. And so again, one of the things I've learned, and I think Scott and Kara will agree, is task interdependency. We make a point of this in the book. Why it's important is that it basically outlines what kind of competencies you need, what kind of competencies matter, depending on where you are on this continuum of interdependency. And I think that's been one of the best, I won't say ah-has, but one of the best insights we've had collectively, those of us who study teams. And that has driven a lot of good practice. So what do you do when you have low interdependence versus what do you do when you have high interdependence?
Daniel Serfaty: Yes. I see you nodding, Kara. Do you agree with Eduardo? Because my next question for all of you is: okay, so the audience understands a bunch of variables and a bunch of complexities that are associated with teamwork and human teams, but what is a team? How do we define a team? And is it just a group of people who are working on things that have interdependence? Is there more to it than that?
Eduardo Salas: Actually, one of the [inaudible 00:18:39] papers that I have, it's about the definition of a team, which we published in 1992 I believe it was. It's interesting. This discussion is making me connect all the dots, it's amazing. So again, why do we have to define and provide a definition of teams? We were doing this meta-analysis, we were observing all these teams in the Navy. We now knew there were differences among them, that all teams are not created equal. And then we said, "If all teams are not created equal, the kind of research that we want to do cannot generalize to everybody, to all those teams. We need to focus." And so the definition, Scott, correct me if I'm wrong, came as a result of trying to put boundaries around the kind of team that we were going to look at, and that's what we did. So it's two people, two or more, who perhaps share goals, they are interdependent and so on. That's how that definition came.
Scott Tannenbaum: What's interesting for me is how that's morphed over time. So my recollection is the same as Ed's. We needed to draw kind of a box around what we were going to study, and so: a formal definition. But what's happened over time, through kind of practical experiences, is that not all teams, particularly in organizational settings, are this neat cluster of reporting relationships in boxes, with a tight boundary around it and defined roles, et cetera. It's just become mushier and more dynamic. And so to me, a team is still more than one person. And I would say they have at least some interdependency and some shared goals, but it doesn't mean that they're completely shared goals. In fact, I think most teams in organizational settings have this combination of pulls and pushes of sharedness, but also individual needs. We see this in senior leadership teams all the time. They have a shared need for the company, but the head of finance also has their own needs, different than the chief technology officer's. I think it's a little bit mushier. And of course there's now also the concept of teams of teams that pops up.
Daniel Serfaty: So it has become a bit mushier, as you said, from that neat definition of the 1992 paper, because we are discovering through the new settings, remote work, technology, et cetera, that there are other organizational forms that operate like teams but don't really fit that definition exactly. That's very interesting. I want to go back to that, especially as we examine the future of teams, if we can project ourselves into the future of that dynamic evolution of this social structure called a team. Kara.
Kara Orvis: I did have a question since you're here, Ed and Scott. I know in graduate school we talked about whether or not teams had to be together for a confined amount of time. That wasn't in your definition, Ed, but I was just wondering about your thoughts on that. It wasn't like a family that goes on in time forever; they had to be together for a definite period of time, and once the goal was accomplished, the team would separate.
Eduardo Salas: So, naively I think now with the years, we thought that we could study teams that had a past, that had a present, and that had a future. That's why the definition had that boundary. But along the way, I don't know many teams that have a past, a present, and a future, [inaudible 00:21:59] or the same membership over a long period of time and so forth. So that was the idea then. But just thinking out loud here a little bit, reflecting on all of this, if I were to write a definition again, it would be very different, I think.
Scott Tannenbaum: If we think about medical teams, to Kara's point in terms of how long they last: we've studied trauma teams. Trauma teams haven't worked together, so they have no past, Ed, right? They're called in. The helicopter lands. Whoever's there, they go, they work on this together. Then they hand the patient off to someplace else, and that team doesn't exist anymore. Flight crews on airlines are like that. They don't fly together all the time. They meet in Topeka, they get on a plane together. But then if we think about, let's say, a doctor's practice, you go to have a visit at the office, and there are people on those teams that stay together a really long time. If you've been going to your doctor for a long time, there are members of those teams that do have a past, do have a present, and will probably be around for a long time. So I think of that time thing as more of a variable that influences what's important to teams and the way they need to operate. And what we need to advise a trauma team is different than what we advise an ongoing team in a medical practice.
Daniel Serfaty: That’s true.
Kara Orvis: Something else I've been thinking about: you might have a five-person team, and three of those members often work together on different projects with different goals, so essentially different teams. And you might have two or three or four new people weaving in and out of that threesome. So when you change membership like that, how much change in membership does there need to be to maybe lose some of the teaminess that exists among those three people, since they've spent so much time working together? It's really complex once you start thinking about things like that.
Daniel Serfaty: Well, but our job here on this podcast is to make the complex simple. And from the last few remarks by the three of you, it's almost as if we have transportable team skills: by plugging into a team in the morning, another team in the afternoon, and maybe a different team next week, we can actually use those team skills, those team competencies, again and again. It's something that comes with us as human beings, can be trained, maybe is innate, I don't know. Is it indeed the case that teamwork skills, let's call them competencies, are transportable?
Eduardo Salas: I would argue that every team training out there, whether it’s TeamSTEPPS or CRM or whatever you want to call it-
Scott Tannenbaum: What’s CRM?
Eduardo Salas: Crew resource management.
Scott Tannenbaum: Okay.
Eduardo Salas: Which is the lingo that is used in aviation and a little in healthcare. But essentially, it's team training. And so all of team training is about imparting team-based competencies to individuals that they can take from one team to another. That's what I would argue. But again, there are very few teams that are intact. And so the purpose of team training is exactly what you just said, Daniel: to give you these competencies that you can take to different teams and use and apply in different teams. So yeah, I think they're transportable. [crosstalk 00:25:06]
Daniel Serfaty: That's pretty amazing, because if we think of the way we educate even our children, if they can learn those team skills, say, in soccer and apply them back in the classroom, the very same team skills or similar team skills when they're on a computer science project team for example, that is something extremely valuable, because it's part of the development of the individual, not just the team.
Scott Tannenbaum: I would tend to answer that question a little differently than Ed. I would say the answer to your question, Daniel, is yes, there are transportable skills, and there's also a big chunk of skills and competencies that go beyond that. So for example, a skill like being able to give and receive feedback effectively: we should be teaching that starting in elementary school all the way up through the school system. When they get to law school, lawyers should be learning this. And it doesn't matter where they go in their careers, they can take that and apply it in every team they're involved in. But there's also, for example, value in understanding the capabilities of your team members, what psychologists like to call transactive memory systems, simply because we can't use simple terms to describe things.
Daniel Serfaty: You have to publish after all.
Scott Tannenbaum: Absolutely. If it's too simple … I know the podcast is to simplify, but if it's too simple, we're giving it all away. So in the case of knowing your teammates, that knowledge can't simply be transported. "Yesterday, I was working with this team. Tomorrow, I'm working with a different team." I take those feedback skills with me. But I don't know that Kara is an expert in this and Daniel is an expert in that. And that's an example of a knowledge competency that's unique to a team.
Daniel Serfaty: That's a subtle but excellent point. Thank you, Scott. We're starting to tackle a question I wanted to ask, especially if you have examples from your own corporate life or from the myriad of teams that you observe in the field. And I'm sure our audience is asking: is a team more than the sum of its parts? If I have a team of experts, can I obtain performance out of that team that goes beyond the sum of all that expertise? Do you have examples you can point to where the glue of a particular team training intervention, like the ones you were talking about, actually added to the outcome performance of the team in a meaningful and observable way?
Scott Tannenbaum: As you were asking the question, I started thinking about two examples: one in which the sum was greater than the parts, but also one where the sum was less than the parts. So teams don't always magically make it better, but they can. An example of one where it was less than its parts was an experience I had as a customer going in to get a suit at an expensive store in New York. This was a team made up of probably some of the finest tailors globally, and outstanding salespeople who had been doing it for a really long time. And they had to work together to create a good experience. You would think that that would be a great team. But because they were unable to talk the same language, because they weren't able to communicate, because they had not only shared goals but also some competing goals, the experience that I had as a customer was actually less than the sum of the parts. I almost wish that I had dealt with each of them individually and sequentially.
In contrast, I'm working with an insurance company now. And this may seem like a mundane example, but we're dealing with leadership teams, a regional president and his team. And what happens is when we look at the talent levels in each of these places, it's not necessarily the region that has the highest average talent on its team that's performing the best. It's the ones where the leader is allowing for sort of compensation, even for their own deficiencies. And so you might not predict the high performing regions based on average talent, but based on the way the team is being led. That's the case where you see that bump.
Daniel Serfaty: Thank you for that counterexample with your tailors. That certainly underscores that it doesn't just happen. You have this myth sometimes about teams: "Oh my god, they click." Well, it takes a lot to click. And Eduardo, I'm sure you've studied a lot of mission critical teams in particular. I'm interested to- [crosstalk 00:29:22]
Eduardo Salas: Yeah. So the example that comes to mind, which links a number of things that are powerful in team development like simulation and debriefing, is from a few years back, actually before I came to Rice, when we were looking at forward surgical teams, FSTs. These forward surgical teams were about 18 to 22 individuals who would get deployed to, at that time, Afghanistan, Iraq, wherever. And the rate of deployment was one team every month. They would go to the Ryder Trauma Center in Miami, and we would go and observe this. The training was two weeks, I remember this. And all the teams were the same. The first day, it was 18 to 22 strangers. They had never seen each other. There was only one physician, one surgeon, and the rest were medics, nurses and so forth.
They would come to Miami on day one. On the third day, they would do a simulation where five, six patients would come in at a time. They had to deal with them. The simulation mimicked war zone conditions. The lights would go off. It was extremely noisy, and those kinds of things. And I remember that on the third day in that simulation, the instructors would come into the simulation and tell the people trying to do triage, the team members, "You, you, you, you, out." And they would go out of the simulation center not knowing what was going on. And the instructor would say, "All of you are dead." They were shocked. [inaudible 00:30:56] were dead. And they said, "Well, the patients that we brought in, you were doing the ABCs," which is airway, breathing-
Kara Orvis: [inaudible 00:31:04]
Scott Tannenbaum: Circulation.
Eduardo Salas: … yeah, and they would forget that they were in a war zone and the patient had an IED, for example. And so it was a very emotional awakening for them on the third day. Then, [inaudible 00:31:16] more simulation. But to make a long story short, by the 15th day, they were just a perfect machine of a team. So in those 15 days, through simulation and lots of debriefing, very detailed debriefing, they learned about roles and responsibilities, about having a shared mental model, about information exchange protocols, about transactive memory, all of that. Because again, they had to become one single … And it was just beautiful how in 15 days, through all these techniques, they essentially became a team.
Daniel Serfaty: That's a great example. I like that notion, because in a sense the business of team experts is to create a single biological organism out of all these disparate cells, one with almost a life of its own and a character of its own. You just published that book, Teams That Work, Eduardo and Scott. And I have an entire shelf in my library dedicated to books on teams. Why now? Why did you want to publish this particular book, and what do you hope your readers will get out of it as opposed to previous books on teams? Scott, since you are the first author, you answer first.
Scott Tannenbaum: This is a book that translates science into practice. So the impetus for it is both from the science side and the practice side. On the practice side, we've been dealing with teams, and people ask us for advice regularly. We give talks, we consult, we're working closely with teams. And it's a continual request of, "Boy, this isn't working exactly the way it should." In fact, there's some data that suggests that less than 25% of people feel like their own team is high performing. So there's this pull from the practice side. And at the same time, having been in this field for a long time, we started looking at the research really closely and realized that this body of research had grown, and we're talking about global research, to the point where we actually felt it was stable enough that we could make some recommendations. Ed earlier alluded to meta-analyses. There are close to 50 meta-analyses that have now been published on team related topics.
And the nice thing about a meta-analysis is that it averages results across a bunch of studies, which gives me a stable finding; I have a lot more confidence in that than I would in any one study. So between the fact that we now felt we had something we could say, because the field had matured enough, and the pull from the marketplace, we thought now was the right time. And so when people said, "Do you have a book for that?" we can finally say, "Yes, we have a book for that."
Eduardo Salas: [inaudible 00:33:53] at least from my view, how we got together. For a number of years now, between the two of us we developed what in the book we call the seven drivers, this heuristic, the seven Cs of teamwork. Over the years, we started with different Cs, and finally we agreed on the seven we ultimately published. But every time I give a talk, in healthcare or the corporate world or oil and gas, you name it, first of all, they love the Cs, because as a way of organizing all this knowledge that Scott has talked about, it makes sense to them. And they always say, "Where can I get more of this stuff? Can you send me something that provides more detail?" And there was always this feedback, which eventually both of us got: "This is great, but we need more. This is great, but we want more." And that's how, I don't know, two years ago we said, "Let's do it," and here we are.
Daniel Serfaty: So in a sense, very practical advice, but anchored fundamentally in solid science. It's not what people have become very suspicious of in the management literature, the five ways to become a great CEO, the 14 ways to become a great leader, et cetera. But here, yes, there are those seven dimensions, or seven drivers, of team effectiveness, but they are anchored in actual science and research and data, yes?
Scott Tannenbaum: Yeah. And I think a key point that we touched on earlier is that one size doesn't fit all. So we talked about, for example, the difference in interdependency. We talked earlier about teams that have a short life versus a long life. The research has some different advice for those. So the books that are very simple are quite often oversimplifications. "You should all get along. If we minimize conflict, we'll be successful." And the research doesn't support that. It's more complex. But the book tries to figure out, based on the type of team you are, what we can tell you you need to do to be successful.
Eduardo Salas: [inaudible 00:35:54] also what's interesting about this book is a lot of people, in a [inaudible 00:36:00], would say, "You're not selling anything." And I say, "Well, what do you mean?" "The seven Cs are really just a heuristic; you don't have a package of things that says, 'If you go here, use the seven Cs.'" And so that was also [inaudible 00:36:13] to people, that we were not saying, "These are the seven things you need to do." We said, "No, this is a way to organize and to think about it, and things you need to focus on." And so that was also appealing, to your comment, Daniel, that we were not out there with, "These are the seven things that you need to do, and if you don't do them you are going to crash," or something like that. It was science based, it was formulated, and then like Scott said, once [inaudible 00:36:39], you can have that, you can change. Based on that science, you can draw on it to solve your own unique, particular problem.
Daniel Serfaty: I certainly wish you success. The book just came out. I hope the scientists, the graduate students, and the professors will buy it, because it's chock-full of examples from the field that can illustrate this or that model of team effectiveness. On the other hand, I think the practitioners, the managers, the leaders in the corporate world need it, because it enriches a little bit, or gives context to, the team management practices that they use. I mean, one practice that you are applying, Kara, and I know that you co-edited a book on this a few years ago, is the notion of distributed teams, the notion that teams do not necessarily have to be co-located to perform. And certainly, the last eight months of remote work have reminded us in a very visceral way of the importance of distributed teams. Can you tell us a little bit about that? What's different? What are the things that are unique, both in terms of the challenges or the difficulties, but also the opportunities?
Kara Orvis: So I'm going to go back a little bit in time to the late '90s, when I was in grad school. One of my first projects was a multi-university project. I mean, the internet was new. I was just learning how to use Google; it was brand new. And we were working through technology with other researchers at other universities. And it was really hard, and we had a lot of failures. And honestly, that was the first time I had been on a team that didn't do well. Later on, we did well and we accomplished lots of great things over the years. But I remember Googling the words "virtual teams" together and getting like two hits on Google. If you google that now, you'll get hundreds of thousands if not millions of hits. But it was so hard, and I really wanted to understand that, so that's what I studied in graduate school.
I did some work with a professor out of the school of management at George Mason University on this idea of situation invisibility. What I found in working on my own teams, and continue to find, is that when you're in your space and you're working with someone in another space, you don't really understand their context. You make huge assumptions about where they're working, what tools they have around them. And when they don't perform in ways that you think they should, it's really easy to say, "Oh, that person doesn't care." You make what you'd call a personal attribution about their performance.
And a lot of the distributed teams literature talks about the need to over-communicate in the absence of being together in shared spaces. So I always tell my team members, especially now during the pandemic when everybody is distributed, "Communicate what your situational circumstances are. If you have family time or things that need to get done, like working with your kids to make sure their schoolwork is getting done right, put that in your calendar. Let your team members know, so that if you're not responding very quickly, they have a good understanding of the context." So that's just one example of what you might see in virtual or distributed teams that you don't see in other types of teams. There are many other examples.
Daniel Serfaty: Connecting that back to that fancy word Scott used before to describe this notion of developing a model, a mental model of sorts, of the other person, their capability, their expertise: in your case you also add the notion of situation. And as a result of that knowledge, you become a better team. Is that what you're saying?
Kara Orvis: Yeah. That's exactly what I'm saying, because when we're all together in one space, we understand that space. But what we found in that research that Catherine and I did together was that people take their own experience and put it onto someone else they're working with. They're not apt to imagine that the other person's situation is different. And I think it's easier when you have things like Zoom technology, where you can see someone's office space. Right now, you guys are looking at me. I'm sitting quietly. But do you know that my son and daughter are in two rooms next to me and are e-learning? And I'm waiting very patiently, hoping everybody's quiet and the dogs don't bark. I did share that with you, but we just don't anticipate very well what people's circumstances are. And therefore, it's important for people to share that in ways they wouldn't in a face to face environment.
Daniel Serfaty: It's good that you're opening that door. I intended to ask this question a little later, but I'm going to ask it now, to both Scott, as a CEO, and Eduardo, as a professor and a department head: it's about the experience of the past eight months with the pandemic. I assume many of your students, Eduardo, are virtual or at least learning in a virtual environment, and the same for you, Scott, in terms of both customers and employees. Did that prompt you to rethink the notion of teams, or to expand it, or to modify it in any way?
Scott Tannenbaum: Yeah. So my experience during the pandemic is different in dealing with clients than it is dealing with my team members. We've operated virtually within my company for a really long time. We have shared space to be able to connect physically, and obviously we're not doing that. But the majority of our interactions on the team have been remote for a long time. So we've had to go through that learning curve, the same learning curve that Kara described: how you work together, how you maintain the bubble, how you know how people are doing. And I think we've developed our mechanisms for sustaining a kind of situation awareness.
What's been interesting to me is now dealing with various customers, some of whom in the past would have insisted that we all be in the room together. Two interesting insights come to mind. One is leaders who at the start of the pandemic basically said, "There's no way that I could run my team this way. If I can't see what's going on, who knows what they're doing? And the second that this thing is over, I am pulling everyone back into the office," who are now, over time, saying, "Hey, you know, there are some interesting advantages to running a team this way, and maybe we should have a hybrid model going forward."
The second observation, and this relates a little bit I think to Kara's comments about knowing each other's situation, is that this has forced a humanizing element, in some cases, in the way we work. It used to be that if we did a video camera meeting, a webcam meeting, everyone dressed up nicely, and you show up, it's like you're on camera. Make sure the makeup is ready, the whole deal. Now we've reached a point where people can be themselves. It's creating a little bit more psychological safety. The kid that runs in in the background, the dog that barks, the lawnmower going outside: we're humans. We have lives. And I think it's becoming more acceptable to many of my clients, in the way they view themselves and the way they view us. And I'm hoping that that is one of the few positive things we can take forward from this.
Daniel Serfaty: What a wonderful insight. Thank you for sharing that, Scott. Before I ask Professor Salas, because I really want the perspective of the teacher and the department head, and you have several professors on your team, I do agree with you on this notion: this distribution of work and of operations has paradoxically created a level of intimacy, of empathy, of perspective taking that was not there before. These are, in a sense, skills that were more dormant before and that have suddenly become very important. I don't know if that's your experience too, Eduardo.
Eduardo Salas: Absolutely. So in my department, we have 16 professors, five [inaudible 00:44:36], five, six [inaudible 00:44:38] neuroscientists. You've got to understand the context here at Rice. In March of 2020, Rice University had never had an online course. By April first, Rice University had 1900 online courses. Everything has shifted. I've been chair five years. The kind of questions that I ask today, the kind of emails that I send, the kind of gestures that I make are all about intimacy, about humanizing things. So I'll tell you exactly one thing I did today. Last Friday we had a faculty meeting, and we had it on Zoom. Like most universities, we're now struggling with budgets, so we're going to get a cut in the next academic year. So I'm telling this to my faculty, "We're going to have this cut, all these things are going to happen." The meeting started at 3:00 on Friday, and at about 4:30 I'm saying, "Okay. Happy hour, let's go [inaudible 00:45:38]." I mean, it was complete silence. These are 17 professors; nobody said a word. I was trying to make a joke, make something light, nothing. There was zero.
They're all burnt out, they're all stressed. And so I said, "I have to do something." Long story short, I said, "I'm going to send them just a box with cookies and little things to eat." Actually, my niece who lives in Houston is a pastry chef. So I told her, "I need 30 boxes of little things." So she did that. I bought cards that said thank you, and I wrote to each one of my staff. There are like 25: 16, 17 faculty, and we have 18, 19 non-tenure-track professors who just teach. And all day yesterday I wrote little notes for them. They went out today, and I've been getting the emails of thank you, of emotion. Never in my life would I have thought about doing something like this.
Daniel Serfaty: Oh, we know that.
Eduardo Salas: Exactly!
Scott Tannenbaum: I'm surprised your handwritten notes were readable, having worked with you extensively now for years. [crosstalk 00:46:40]
Kara Orvis: Ed, I think that’s really sweet. Don’t listen to these guys.
Eduardo Salas: Yeah, they've known me for a long time. But it's like the interactions I've had, especially, like you, Kara, with the female professors: the kids are right next door during the meeting, and the kids are there. So it's a different interaction, and it's a little more intimate. And the other thing I've learned in all of this since March is the need for connection. I thought these cookies were going to be that connection, and from the emails I've been getting this morning, they're doing that. So that's the world we live in now.
Daniel Serfaty: I agree. That's why I think the I/O psychology, industrial-organizational psychology, community should, and I'm sure they already are, study this time, because I think we are seeing a transformation of the very nature of work relationships. And I think both teamwork, which is the topic of the day, and leadership, and what they entail, are being transformed in front of our eyes. We are transforming them as leaders too. But I think at some point we need to zoom out, literally, and consider that maybe the definition of what a team is has changed, has been enriched, not diminished, but enriched, by this experience.
So to switch back a little bit from great teams to poor teams: people don't like to talk about failures in this field, but we all sometimes witness teams that fail somehow. Not systematically, but teams that have had one or two failures. And I know many of you have studied military teams or space teams or aviation teams or healthcare teams, for that matter, in which sometimes a team mistake caused a pretty catastrophic failure. Can you describe that a little bit, if you have one example you want to share with our audience? How does that happen? What particular failure of teamwork specifically, of people not practicing one of the seven Cs perhaps, communication or collaboration or coordination, eventually caused a disaster?
Scott Tannenbaum: I shared earlier a very mundane situation, my buying a suit. And the consequences of that were simply that I didn't buy a suit. But Daniel, as you point out, sometimes the failures have much greater consequences. You may recall several years ago the Costa Concordia sank off the coast of Italy. This was a cruise ship. They got a little too close to shore, they capsized, and people lost their lives. We actually went out into the cruise industry after that, spent time with different cruise lines to try to understand a bit about team phenomena, leadership phenomena, et cetera.
There were a bunch of factors that contributed to this, but there's no question that there was a teamwork breakdown that occurred. You could look at the crew members there and say there was ample experience among them, that they should have known better. How could a ship that size get that close to shore? And what happened is that there was a breakdown in psychological safety that was part of this. Someone should have been able to say to the captain, "Captain, heads up, we're a little close." And if that didn't work, "Captain, we're too damn close!" and should not have accepted the proximity to shore. And to me, that's a classic foundational breakdown in team effectiveness that relates to one of the underlying psychological factors, which is psychological safety.
Daniel Serfaty: And so as an expert consultant, what would be your advice then to fix that particular type of failure that eventually, through a chain of events, led to a catastrophe, to a disaster?
Scott Tannenbaum: Yeah. So as you can imagine, there are a lot of moving parts in this. Part of it is preventing the accident. And part of what we also discovered were things that need to be done to prepare the teams in the event of an accident. So let me say a bit about the latter first, and I'll come back to the former. You now have bartenders, musicians, cleaning staff who are asked all of a sudden to help support an evacuation onboard a ship. So we're asking people who don't necessarily have these skills to begin with. One of the things that we discovered was really important is scenario based training and practice, to prepare folks for this. Similarly, where the captain and his crew reside, there's an interaction that goes on there as well. And team based training is one of the interventions, where leaders are being taught not just how to navigate, there's this natural tendency to move towards the technical skills, but also leadership and team skills.
Some of this starts even with cultural norms and small tolerances of things that erode psychological safety, which you might not notice when you tolerate them over time. Part of the advice is you can't do that, because then, when there's this one rare moment where you really need it, people are uncomfortable speaking up.
Daniel Serfaty: Maybe a related question, to expand on one of the remedies that you just mentioned, is this notion, "Okay. Maybe we can't change the culture, maybe we can't train particular individuals. But maybe we can train a team." Eduardo, I know that early in your career you focused very much on optimal ways to train teams, especially mission critical teams in the United States Navy. Give our audience a single thing to focus on. What's the best way to train a team? I know it's a loaded question; there are books written about that. But after filtering out the noise, what is the first thing that you would focus on as you look for a way to optimize team performance through training?
Eduardo Salas: I think there are two powerful countermeasures, or two powerful interventions I guess. One is simulation. I'm a big believer in simulation, because what do you do in a simulation? You practice under, hopefully, the conditions that you're going to perform under. But simulation by itself is not enough. You need feedback, debriefing. And there's a meta-analysis on this. Scott did a meta-analysis on debriefing showing that teams that debrief outperform those that don't by 20 to 25%. I mean, that's a whopping effect. To me, the best team training I've seen is simulation based team training, where you get these two components, simulation and debriefing. But [inaudible 00:53:05] emphasize the debriefing component. The military, as you know, is full of simulators. In peacetime, all they do is train. And where do they train? In simulations, simulators.
But you know, Daniel, back when we met a long time ago, [inaudible 00:53:18] operator stations were not that great, because they didn't facilitate debriefing. They did not facilitate meaningful feedback. When I go to organizations and they say, "How do I improve teams?" I say, "You have to use simulation." Now, let me make kind of an editorial comment for the audience. To simplify things: training, any kind of training including team training, does four things. You get information, demonstration, practice, and feedback. Those are the four components of any good training. Most of the team training out there is information and demonstration based, meaning PowerPoint. You see a bunch of videos of good and bad performance, and maybe you role play a little bit.
And we know from the science that the learning occurs when you have practice and feedback. So again, there's TeamSTEPPS, this medical team training program out there that is used in 70% of hospitals; three million people have been trained using TeamSTEPPS. It's an acronym for something that I can't remember. It has to do with patient safety and performance. But anyway, you have to use simulation for these things to work. And in healthcare, we're seeing a lot of simulation now, and debriefing and things like that. So that's basically the advice I give. You have to allow people to practice under the conditions that they are going to perform under.
Daniel Serfaty: And obviously, what simulation does is allow folks to explore the boundaries of the possible, something that would be very dangerous or costly to explore just by practicing in a real environment. I'm thinking for example of pilot training, where you can do maneuvers that are extremely dangerous, but you can do them in a simulator and get feedback on how you perform at the limits of performance.
Eduardo Salas: I want to give you an example of how powerful it was to me. So during the first Gulf War in 1990, '91, one of the tasks that we were given … this was another ah-ha moment for me about simulation, we had a task to interview pilots coming back from sorties. So they were going from the aircraft carriers, dropping bombs, coming back. When we interviewed the pilots, the majority said, "It looked like we'd been there before." And we were saying, "What do you mean it looked like you'd been there before? You've never been there. This is the first time." And it was the mission rehearsal, the simulations they had trained on for the sorties, that made it feel so familiar when they went there, even though it was really the first time they were going there. So that gave me one of these, "Wow!" moments. Again, this is [inaudible 00:55:56] development, this idea of the power of simulation: it accelerates expertise, gives you all the stuff that we're talking about. Sorry, Dan.
Daniel Serfaty: No, no, no, no problem. I think that illustrates it very well; that's a very good example of the point that you made. Scott, anything to add?
Scott Tannenbaum: Yeah. There's no question that the types of simulation Ed's describing are very powerful. The idea of having this technology that gives the impression that you're flying over Afghanistan: a great learning opportunity. Mannequins simulating a patient. They have high physical fidelity, but they're also very expensive. So the concern I have sometimes when we talk about simulation is that all my corporate colleagues stop listening, because they think simulation means only these kinds of multimillion dollar simulators. So I just want to put in a pitch to say, in addition to those, there are low fidelity simulations that can work really well. And one of the most basic ones is just to do a sort of cognitive walkthrough. For example, we were working with oil rig crews. Part of the team is on a platform in the Gulf of Mexico, and part of the team is in Houston. And all we had was a simple video connection between them. And we said, "Okay. We're going to start talking about a situation that's evolving, and you talk out loud about what you would do, who you would contact. Tell us what you're thinking, what you're doing, et cetera."
And we just evolved the scenario. And we said, “Now, this has happened next. Who do you contact?” And some of the ah-has were things like, “Wow, so it’s okay for us to call the chemical engineer onshore at midnight?” “Well, yes. If this problem occurs, it’s absolutely okay.” So there’s an example of very low tech, there was no simulator, that served sort of a similar purpose.
Daniel Serfaty: Some people talk about the difference between physical fidelity and cognitive fidelity. And that's perhaps what you are illustrating here: that sometimes imagination is powerful enough to project situations from artificial environments into real life, because they are cognitively similar.
Eduardo Salas: Games, [crosstalk 00:57:55].
Daniel Serfaty: Yes.
Eduardo Salas: I'll give an example. So the astronauts are here in Houston. If we go to Mars, it's going to take about 10 months to get there. The idea is that during those 10 months, there's going to be a lot of training on the way. So one of the projects I worked on was to develop teamwork skill training for the four astronauts that would go on the way to Mars. And the company I was working with decided, "Okay. We're going to do a game." To my surprise, I thought that the astronauts were going to reject this, because it was an emergency healthcare related type of game that had nothing to do with space exploration. But they loved it. It was a game with four people, where they have to engage and solve a problem and then rescue some people and so on. But that also made me think that gaming, which is a form of simulation, is also a very useful, practical, effective technique for this. And again, what the astronauts liked was more the cognitive fidelity than the physical thing.
Daniel Serfaty: I actually wish that some of the things you're describing for astronaut training and for Navy training or cruise ship training could be applied even in a milder environment, the corporate environment. We don't have enough of that. I can tell you, as the head of an enterprise, there are some games you can play that come out of the business school tradition, but there's very little true simulation. And I do wish there were an opportunity for someone to disrupt that industry by introducing games for managers and CEOs and executives, the same way aviation uses simulation to train. I think there is a shift of culture there. Talking about CEOs and managers, Kara, before we switch to a more futuristic view of teams: what advice would you give to all those team leaders out there listening to this podcast about how to improve their skills as a team leader, not just as a project leader, but as the leader of a team?
Kara Orvis: A long time ago I helped a wonderful woman named [inaudible 01:00:09], who you know very well, on her dissertation. She was studying team leadership, and she was looking at this concept of sense making, which is not just telling people what to do, like, "Hey, you on the team do this and you on the team do that," but explaining to the team why they should be doing those things, separately or together. I always try myself, and encourage other leaders, as they're describing things to their team, to help provide that sense making, the reasoning for why they should be doing what they're doing and why they should be doing it together the way that they should. You guys were just talking about simulations for team training, but oftentimes it's the leaders who have to train the team members on how to work together.
And providing that sense making, that reason why, behind not only what they should do but why they should be doing it, I think is so valuable. The other advice I would give team leaders is don't be afraid to tackle team issues. You want your team members to feel good about the team experience. You want them to be motivated to work towards the team goals. You want them to understand how to work with each other. So when you see something going wrong in the team, really try to understand why they're not performing the way that they should, and talk to them, or remove those barriers, or increase those skills, or increase that knowledge. So those are the two pieces of advice, I guess, that I would give to team leaders.
Daniel Serfaty: I'm going to put you on the spot right now. Are you yourself, as a team leader, practicing those skills?
Kara Orvis: Yes.
Daniel Serfaty: Are you actually implementing them on a daily basis?
Kara Orvis: Absolutely. I try every day to do that, to explain the reason why. And speaking of low fidelity training, one thing that Aptima had created for a project for military teams who were out in the field deployed was to actually talk about past experiences the leader had had with their team or another team, present that scenario to the current team, and ask them what they would do. Listen to their response, really hear what they would do. Have them explain why they would do that, and that would give the commander, the leader an opportunity to correct them or say, “Yes. That’s a great response and here’s why I like your path that you chose on what to do in this situation.” But yeah, I do try to do those things every day.
Daniel Serfaty: I thank the three of you for all the very wise but very practical advice that you are giving here, both in terms of how to train, how to develop leaders, and what some of the key components are that you have to watch. I think our audience is going to ask for more. But I'd like, in our remaining time, to explore some future topics, especially given that you are anchored in so many years of experience looking at human teams. We already explored the effect this pandemic is having on our understanding of this notion of technology-enabled but distributed teams, and the new doors, the new opportunities, it is opening. As you know very well, in a more futuristic vein, though it's already happening in some professional domains, we're introducing new forms of intelligences, and I'm using the term in the plural on purpose, into human teams. We're introducing robots that work with humans. We're introducing artificial intelligence bots, artificial intelligence beings literally, that observe how humans do things, learn from it, change as a result, and adapt.
And I wonder as we evolve toward that future of multi-species teams literally, what’s going to happen to team science? Should we apply blindly what we know about teams that work and say, “Well, it works with humans. There is no reason it shouldn’t work with artificial intelligence and human teams.” Or is there a possibility for developing a whole new science, a whole new insight perhaps? Kara, you want to start?
Kara Orvis: I've been thinking about this recently. First, I think we have to understand these nonhuman teammates and the aspects of them that may or may not be different from human team members. Earlier, Ed and Scott were talking about this idea of generic skills that an individual brings to a team. I just wrote a paper with a colleague of mine, Sam Dubrow, where we took a look at some of those generic teamwork skills and considered what these machine teammates are like and what makes them special.
Daniel Serfaty: What’s a generic teamwork skill for example?
Kara Orvis: A generic teamwork skill, or trait, we were looking at traits too, would be something like communication, the ability to communicate with others; tolerance for ambiguity was one of the traits we looked at. We took a look at some of those generic teamwork skills, and we made a case in our paper that some of those skills probably do transfer over to human-machine teams. They're just as important in a human-machine team as they are in a human-human team. But some other skills may become more important in a human-machine team. And some other skills might not be as important in a human-machine team. So I believe that we can take things from the teams literature, and they will apply to those kinds of teams. Do I think everything will apply? Probably not. But that's an example of: if we're going to prepare humans to work in human-machine teams, what are the skills and traits that we're going to want to train and select for, that will allow them to deal well with those nonhuman team members?
Daniel Serfaty: And I want to hear from Eduardo and from Scott on that, but I think it's very important that your community take the lead on this, because left to their own devices, artificial intelligence developers, who work very fast and don't wait several years for the right p-values, will design artificial intelligence systems without taking into account the treasure trove of insight that our community, your community, can give them. You're nodding, Scott. Do you agree? Tell me more about those future teams.
Scott Tannenbaum: Yeah. So if we think about them, the teams you described, [inaudible 01:06:33] think about them as, let’s say, hybrid teams, right? It’s a mix of human and other intelligences. Let’s first start with the assumption that we’re talking about, in this case, intelligences that are somewhat visible to the other team members. They don’t have to be physically visible, but they’re robotic or virtual. They’re not so deeply embedded that we don’t even know they’re happening. So in those cases, you almost naturally as a human think about them in some ways as a team member. So it makes me think about analogous phenomena in hybrid teams versus all-human teams. And I can point out some of them, but it also tells me there’s some research that’s needed.
So what do we know with human teams? Trust matters. And we know that in judging whether we trust another human, there is a judgment made about ability, like: do I think you can do what you said you’re going to do? And character, like: do I think you’re going to do the right thing for me, that you care about me, et cetera? So what’s the equivalent phenomenon? Do those apply directly or differently when we start talking about a teammate who is not human? We know role clarity, for example, matters a lot in teams. So, Daniel, are you responsible for this? Am I responsible for this? What’s the equivalent when we’ve got a hybrid here? Is it programmed in? Does the AI just make a decision to fire, to clean, to do? Who owns the decision? Is that clear and transparent? We know backup matters.
Daniel Serfaty: What’s backup?
Scott Tannenbaum: Backup is: I am monitoring, I see that you need some assistance, so I offer help. I fill in for you, either partially or fully, in some ways. In human teams, that’s kind of a characteristic of high-performing teams that have interdependency. So how and when do human and AI back up each other? What are the implications for team composition? Can I compose a team where I know the AI is able to step in and do some other things even if it’s not its primary task? And can I as a human serve as backup for the AI? You think sometimes, "Oh, the intelligence can run on its own." But are there times when I should be monitoring, and seeing that this is now evolving into a space that the AI was not programmed for, and I need to back it up? So I share some of those as examples that we should use what we know about team science, and we should probably study those phenomena in these hybrid teams.
Daniel Serfaty: Yes. Eduardo, if you can take on that topic, and also maybe expand on that notion of training as a team. On the training part, how do you develop those teams? Are there totally new kinds of competencies that need to happen, or are they just variants of what we know?
Eduardo Salas: Let me make what I think is maybe a bold statement on this. I don’t think we need to be afraid of human-automation, human-AI teams. I think the way to tackle this is to stick to the basics, like we always have. So instead of studying teams, we need to study the nature of teamwork. And so I don’t care whether you have automation or a robot as your teammate. I want to understand what is the nature of your interaction. If we take what we know in team science into a team task analysis, a coordination demand analysis, if you focus on understanding that, then I think you will get the kind of competencies, the kind of needs, that they have. And so I think it’s that. We stick to the basics. And for years, at least the 40 years since we started all this [inaudible 01:09:57] movement and [inaudible 01:09:57], it has served us well. So that’s what I will focus on. So to answer your question about training, training may or may not look any different.
But I’ll give an example that made me think about this. Scott and I were asked by a manufacturer to look at a new kind of team that they were forming, which was a robot, a human, and an algorithm, automation. They used to work as a three-person team of humans, and now that has changed, so they have all kinds of problems. In the end, what I got out of that was that it’s the nature of the teamwork that matters, not whether the one next to you is a machine or a robot. And that’s what we need to do, I think. Once in a while I get asked to consult on human-robot or robot-to-robot teams. In the end, we talk about the same stuff: backup behavior, information exchange. We talk about the same stuff.
Daniel Serfaty: Thank you. I think going back to first principles will be very important here, but also being open-minded enough to understand that, because we don’t have enough words in the English language, we still call it intelligence, artificial intelligence, and we still call these teams, maybe because we don’t have a word for that new form of social structure. And there is a lot of controversy in the human-machine community about whether that AI is a tool of the human or a teammate. And as part of this series of podcasts, we’re going to have debates specifically about that: tool or teammate. And maybe one of the differences I want to offer to you is about that transparency, or that trust. At that level, AI behaves sometimes in a very unpredictable way, not because it’s capricious, but because it absorbs an ungodly amount of data, and from time to time a behavior emerges because of some deep structure that the AI has learned.
And the AI learns not only deep structure about the task and the task interdependence, but also about the human teammate. And therefore, that kind of unpredictability is really interesting because it forces us to have a level of explainability and transparency perhaps for each other that occurs very naturally with humans because that’s our DNA perhaps, but doesn’t occur naturally between humans and intelligent machines.
Eduardo Salas: That’s a great point, Daniel, because what I do worry about with all this stuff, really, it’s not a team issue per se, or maybe it is, I don’t know, but what I think about is ethical issues. For example-
Daniel Serfaty: Tell me more about that. What are you worried about?
Eduardo Salas: Well, I’m worried that these things will have a dark side as they’re interacting where there are no boundaries. What I’m afraid of sometimes is what happens when they’re confronted with ethical issues. Healthcare is going in that direction a little bit, with robotics and all this kind of stuff, and they’re beginning to look at the ethics of this. Because can the AI, can the automation, can the robot detect who they have in front of them, what kind of person they have, what kind of history they have, all this kind of stuff? So you’re right, they have a [inaudible 01:13:11]. What they’re more worried about, the trouble if you will with more AI, is ethics and who monitors it.
Daniel Serfaty: Is our field, Kara, Scott, equipped to deal with the ethical considerations that come with this introduction of new learning, intelligent machines in our work? What does the IO psychology community have to offer in that area, or should we leave that up to the philosophers? I’m asking the tough questions.
Scott Tannenbaum: So we can’t leave it to the philosophers, although they have a role in this. We can’t leave it to the technologists, although they have a role in this too. In some ways, psychologists, IO psychologists, can sort of be the bridge there. Historically, we have worked on the man-machine interface. Historically, we have asked questions about ethical and appropriate behavior at work. We do interface with technology. So we’re not the right people to program it. We’re not the right people to ask the big questions. But maybe where the rubber meets the road, we’re the right folks to facilitate and ask the right questions. Earlier, Daniel, when you were talking about what implied to me this kind of emergent learning that’s inherent in some forms of AI, that’s where some of the risk points occur, because they’re quantum leaps or they’re divergent, and they could be much better or they could be much worse in some ways. It made me think of a parallel in some of the research that we’ve been doing on informal learning in humans.
So informal learning is in contrast to training, where there’s a preset objective, a preset curriculum, a preset group of experiences to learn X. Informal learning occurs very naturalistically. Humans do this all the time; the vast majority of learning in organizations is informal learning. So as we try to prepare people to be better, faster informal learners, one of the risk factors is that they’re going to try things that they probably shouldn’t try, and they get in trouble. So we’ve been coaching organizations to think about red, yellow, green charts. You’re going to take this person, they’re relatively novice, they’re starting to learn, we’re going to put them in a situation. What are those things that, if they get a chance to do, they should just run with: green, don’t ask? What are those things that are yellow: "Do it, but only if there’s some backup there"? And what are those things that are red: "We don’t want you touching this thing in the nuclear power plant facility and testing it"? Is there an equivalent to that in the case of emergent intelligence?
Daniel Serfaty: There is an equivalent. But what would worry me as both a technologist and a practitioner of teams is not the red, the yellow, or the green, it’s the color that I haven’t designed the system for.
Scott Tannenbaum: That’s good.
Daniel Serfaty: That’s really the known unknown, so to speak, that we have to worry about. But I would like to spend another several hours with you guys, and maybe I will in another podcast. Right now, though, I would like each of you to share with our audience a bit of a forecast. If you look at work teams, pick a domain or look in general, say 10 or 15 years out: are they the same teams? Are we going to see an evolution of the very concept of teams, an evolution coming either from the field or from new scientific breakthroughs or developments that are going to enable us? So help us imagine, help our audience imagine, the next 10 years, 15 if you prefer. Who wants to take that on first?
Eduardo Salas: The future is about teams of teams. It’s about multi-team systems, teams of teams where your teammates are automated, maybe a robot, an algorithm, something like that. And so I think that’s where we’re headed in terms of what the science needs to do. But I think we still need a trauma team, four or five individuals taking care of trauma. We still need a pilot flying an airplane, I think; at least I wouldn’t go into an airplane that is fully automated. And so we’re going to need to stick to the basics as well. But what I see on the horizon is teams of teams, people with conflicting priorities, many bosses; that is the day-to-day activity that Kara has been doing, and where we need more work. So if I were to name my next book on teams, it would probably be Multi-Team Systems That Work. That would be the title 10 years from now. At least that’s what I’m thinking.
Daniel Serfaty: Thank you, Eduardo. That’s a pretty exciting future, and as you said, Kara is already in the middle of the future. So Kara, you may want to respond to that before I pass the microphone to Scott.
Kara Orvis: I guess the question is two parts for me. One is: are the tasks for which we have teams going to change in any meaningful way? And I think they’re not going to change that drastically in 10 years. I think we’ll still need very similar types of teams to do similar types of tasks. I do believe we’re going to have access to more technology as team members, so I do believe this concept of human-machine teaming is going to become more important. And with the pandemic, and I have worked virtually, distributed, for 10 years now, I think people are going to be more open to technology-enabled teams, technology that helps people work well together. I just don’t see huge changes in the foreseeable future. If I read an article from 50 years ago, a lot of those concepts are still as meaningful today as they were then. So I don’t see a lot of meaningful change. I could be wrong, but that is what I’ll say.
Daniel Serfaty: Scott, what’s your take on this one?
Scott Tannenbaum: I don’t think I’m sharp enough to see a discontinuous, nonlinear direction in this. But maybe I can answer this question with a corporate lens, looking at work teams in corporate settings. So what do we know? We know the trendline has been greater use of teams. We know this from reports interviewing CEOs; they’re thinking more and more about teams as building blocks. Collaboration requirements are up 50% in organizations. It’s almost impossible to work as a lone-wolf individual contributor anymore. So if I follow that trendline, I would expect to see more, which sometimes scares me, because although I’m a team person, I think you have to form teams for the right reasons, and sometimes they’re not being formed for the right reasons. So that’s a caution. Spans of control seem to be getting larger, so we have more people reporting in to leaders. That model seems to be more prevalent. So when you have teams, the leader has more people that they’ve got to be able to work with and keep an eye on, which means they’re less able to see everything.
Scott Tannenbaum: So to me, the trend is more shared leadership. Not that we’re formally appointing a second or a third formal leader, but teams that operate in a way that team members are expected to step up and demonstrate some leadership behaviors. I think we’ll see fuzzier boundaries. I already talked about a kind of fuzziness, but even more of that, which is part of that team of teams, but even mushier maybe. I agree with Kara: more than likely there will be a greater use of technology, perhaps some nonhuman team members, maybe more as decision aids in corporate environments than as full team members. And I also expect to see teams that are more rapidly adjusting. So huddling, debriefing, membership changing more dynamically in these corporate settings, with a mix of probably live and virtual work that we learned from the pandemic. But at the end of the day, the drivers remain the same. We’ve just got to figure out how the drivers apply to that new environment.
Daniel Serfaty: Thank you for your answers, the three of you, because I think that on the one hand, you feel confident that the basis, the fundamentals, are probably not going to change. What’s going to change is perhaps the circumstances: a much more complex, connected world that will enable these teams of teams and multi-team systems to work, and basically multiple memberships in multiple teams, maybe simultaneously. So maybe our next generation is going to be more connected, and because they are going to be more connected, they’re going to belong to more teams than we do. And because of that, they will develop perhaps a larger portfolio, so to speak, of team competencies. That’s an exciting future. Thank you again, Dr. Eduardo Salas, Dr. Scott Tannenbaum, and Dr. Kara Orvis, for being my guests today. You really made it both extremely insightful and entertaining.
Daniel Serfaty: And to my audience, make sure you check out Eduardo Salas and Scott Tannenbaum’s new book, Teams That Work: The Seven Drivers of Team Effectiveness, published by Oxford University Press. It’s a must-read for anyone who leads any type of team. Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS podcast and tweet us at MINDWORKS Podcast or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima, Inc. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.
Daniel Serfaty: Welcome to a MINDWORKS special edition. This is your host, Daniel Serfaty. Over the past few weeks on this podcast, we’ve been exploring the magic of teams, so it seemed like a good time to revisit our inaugural discussion about human-AI teams with a unique team made up of three humans and one artificial intelligence named Charlie.
In fact, this team is so unique that in September 2020 it was named to Fast Company magazine’s list of the most innovative teams of the year. And in case you think you’ve heard it all before, this special edition contains the full-length interview, including material not previously broadcast in the original two-part version. We’ve also included the original introduction to provide the context for this conversation.
With my best wishes for a joyous holiday season and a happy and safe new year, pour yourself another cup of cocoa, sit back, and enjoy the magic of human-AI teams with Charlie and her human godparents.
Daniel Serfaty: Hello, this is Daniel Serfaty. Welcome to the MINDWORKS podcast. We’re kicking off the series with a very special two-part podcast with an extraordinary team. This is a team that’s made up of humans and artificial intelligence. Actually, the AI is a non-human artificial colleague, an artificial employee of sorts, someone that at Aptima we call Charlie.
In episode one, I’m going to talk with the human half of the team to discover what it’s like to imagine, build, train, and work with an AI colleague. Then in episode two, we invite Charlie, and she, because Charlie’s a she, will take a break from her busy work schedule and join us for that part of the interview.
It is my pleasure today to introduce Charlie’s human godparents. In a sense, the team is led by Dr. Nathan Schurr, who is chief of artificial intelligence at Aptima; Dr. Pat Cummings, who is a senior engineer at Aptima; and Ms. Deirdre Kelliher, who is an engineer at Aptima.
And the three of them are leading the team that has designed, conceived of, and is working with Charlie. Welcome to you all. I’ll start with you, Dr. Nathan Schurr. Introduce yourself and tell us why you chose that field.
Nathan Schurr: Nathan Schurr, chief of AI at Aptima. Why I’ve chosen this line of work, with the Charlie stuff but also with the larger body of work: I’m reminded of undergrad, when I was exploring a bunch of different topics ranging from computer science to electrical engineering to even two minors that I had, which were in philosophy and theater, dramatic arts. I was trying to find out if I wanted to continue studying and thinking, because I wasn’t done with it, and I wanted to escape the real world. I wanted to pick a field where I could leverage the confluence of these notions.
In addition, the thing I always come back to is that I did have a lot of interest in some of the more mathematical or other engineering fields, but I really wanted to be in an area that was unexplored, where a lot was still unknown and there was still a lot more to be found. That’s my primary reason for exploring AI. I don’t regret it. I did join at a time when there were still a lot of people who were worried, still just overcoming the AI winter, and we’re kind of in an AI summer, maybe fall, now, but it’s exciting.
Daniel Serfaty: You’re one of the people responsible for having the robots take over the world. We’re going to talk about that a little later. Pat Cummings, introduce yourself to us, please.
Patrick Cummings: Hi, I am Patrick Cummings. I’m a senior research engineer at Aptima. My background traditionally is in mathematics; I got my doctorate and my undergrad degrees in math, particularly on the much more theoretical side, so I looked at dynamical systems and water waves and the theory behind them, and did a little bit of computer science while I was there. But by the end of my doctorate I got a little frustrated that the work I did, while it was very interesting, lacked that kind of application to the real world, and it was hard for me to see the fruits of my labor. That’s what brought me into this domain of artificial intelligence, and where I got really interested is all the applications that can come from it.
It’s really got big implications for the world, both on the personal side and on the business side, so it’s just a really interesting field, and I could really see true value in the work that I was doing.
Daniel Serfaty: That’s great, and that confirms basically what we all know: that while artificial intelligence has been around for 60-plus years, at least in its current instantiation, we need that multidisciplinary approach. We need cognitive scientists, and engineers, and mathematicians, and physicists, and psychologists, and frankly philosophers and ethics specialists, to be able to really understand the impact that these new technologies are going to have on our lives and certainly on society. So we’ll go back to that.
My third guest today is Deirdre Kelliher. Deirdre, please introduce yourself.
Deirdre Kelliher: Hi, yeah, so I’m Deirdre. I’m an associate research engineer at Aptima. I’m a newcomer to the field; I’ve been at Aptima about a year, and I just got my bachelor’s in 2019, so I’m still pretty new to the AI world. But my draw to it is similarly multidisciplinary. I started out college as a neuroscience major, actually, pretty unrelated to computer science; I’d never really explored that world before. But that way of thinking, and the way that the brain works, sort of drew me towards computer science and how computers could replicate that networking and those systems.
And so I took a computer science class in undergrad on a whim, and I sort of fell in love with it. I think, as Pat and Nate said, there are so many wide-reaching applications for it, and the field is so new; there’s so much that’s still undiscovered. I’m drawn to it because it continues to be just sort of awe-inspiring and exciting. There are always new things to learn from the field, so it keeps being interesting.
Daniel Serfaty: And that’s a key, I think. This mixture of curiosity, and interest, and maybe even some fear of the unknown here is what keeps us all engaged. The person we’re going to meet, or is she a person? No, the artificial intelligence being that we’re going to meet a little bit later in the podcast is Charlie. Charlie has been with Aptima, as part of the life of Aptima, for the past year or so, a little less than that, and she is changing very much the way we see things and we’re going to talk about her today.
And she’s going to talk about herself a little later. But in the meantime, perhaps we should tell our audience, what is Charlie? Or should I say who is Charlie? Nathan, tell us a little bit about it, because you’re kind of the grandfather here.
Nathan Schurr: Charlie’s many things to many folks. First and foremost, she came out of an idea to get people to think differently about AI, to get them to think of AI more as a peer that is capable not only of reasoning and speaking, but of coming up with novel ideas. And of course, architecturally, I can explain that Charlie is composed of a generative language model on top of speech synthesis and speech-to-text transcription, combined with the very crucial embodiment so that she has a physical presence, combined with cueing so that she can give you hints as to what and how she feels and when she wants to speak up.
But her synthesis was in becoming a full fledged participant as a part of a panel late last year, but as you were saying, at Aptima, she’s grown into something much more. I always like to say that I’ve been as impressed in how people are thinking and treating Charlie as much as her own capabilities, and she’s had an ability to change the way that people think about leveraging her in the way that they work, but also the way they interact and project.
Daniel Serfaty: It seems to me that that mutual change, and adaptation, and the socialization, almost, is very similar to that of welcoming a new employee who surprises you in very different ways by which she contributes. So, Pat, you are the inside architect. You are the one with the machinery behind the curtain. Tell us a little bit about Charlie. How was she conceived? What is she capable of doing today? And we’ll talk a little later about what we consider to be her potential for the future. But tell us about today. If we lift the cover, what do we see?
Patrick Cummings: Actually, your comment just now about how she’s treated kind of like a new employee, I think, is spot on, and kind of how we’ve always thought of Charlie even back to the initial introduction of Charlie, was on a panel, and we were very clear and it was very critical to us that Charlie was treated just like the other panelists, so she doesn’t have to be treated like a human, but on the same playing field. No greater or no less, and I think day to day we try and have that show in how we use Charlie. We want her to be treated like all the other employees at Aptima.
You know, she’s allowed to slip up just like humans are allowed to slip up. She’s allowed to have these great ideas sometimes, just like humans do, and so the expectations really should be just like for any other human employee. I think sometimes AI is put on a pedestal and the small mistakes that AI makes are blown out of proportion, but humans make mistakes and Charlie makes mistakes. She’s going to say things that are foolish, but she’s going to say things that are brilliant, and everywhere in between.
And so that’s how we try and think of her every day as we work with her, now.
Daniel Serfaty: Deirdre, I think that each time I say the word Charlie or I say her name in public at Aptima, everybody smiles. It’s a strange reaction of complicity, but almost humor. Why is that? Why do you think people smile, and what kind of things has she been able to do with you that actually reinforce that notion, “Oh, Charlie. Let’s smile.”
Deirdre Kelliher: That’s a really good point. I hadn’t even thought about it, but I’m just smiling hearing about it now. I think there are definitely multiple reasons, like you said. There’s humor. There’s complicity. For one, the developers of Charlie and the leadership have done a really good job of acting as internal PR for Charlie. We’ve got our team, we’ve been working really hard on developing her and her capabilities, but we want to introduce her to the rest of the company, and so we’ve done a lot of networking, I suppose, for Charlie in the company, to introduce people to her. And I think that has involved a lot of serendipitous and sometimes even fun or humorous engagements with Charlie.
For example, one of the things that I’ve worked on just as a fun side project with Charlie is putting together a rap. Back in, it must’ve been April, some of the people in the company in other divisions were having a little fun with an internal Aptima rap battle, and so we got Charlie to contribute to that just to have a little fun with the other employees, and as a way to keep exposing Charlie to her coworkers, and so I think that when people think of Charlie they think of those fun, humorous, and sometimes surprising interactions with Charlie.
Daniel Serfaty: Naturally, it opens a topic that, again, I would like to discuss with you a little later: this notion of emotional connection. A lot of the models that we have of AI treat it as a tool, like a hammer, like an ax, that we use in order to do our work. But the smile, that anecdote that you just told us about, and I hope we’re going to hear some of that rap a little later in the podcast, is really already giving us a taste of our future connection with artificial intelligence.
This notion, as Pat put it well, of treating them like a human even though they’re not human. They certainly have an intelligence that is different from our own human intelligence, but being tolerant, being humorous, being accomplices, basically, in doing our work, that’s very interesting to me, because that’s not something that was engineered into Charlie. That’s something that happened collectively, spontaneously.
Talking about engineering, Nathan, you’ve been around for a while in this AI trajectory. You’ve seen at least two generations of AIs. People are talking today about the third wave of AI, with this notion of contextual AI, AI that has a sense of itself, almost. Could Charlie have been created 10 years ago, five years ago? What do you think has enabled, in such a short time, this artificial being to be born and then to act and communicate and collaborate with the rest of us?
Nathan Schurr: I think that there are two kinds of barriers that have been crossed, and they’ve probably been crossed more recently than even five years ago. I really think around two or three years ago we started to see this huge explosion in deep RL and transformer-based architectures, and their ability to generate and perform on a variety of different benchmarks. That really excited me, and probably made it so that I was not quite as scared as I should’ve been last year when I was starting to approach this.
I feel like the two kinds of hurdles, to be clear, that have been crossed are technical and cultural. The technical hurdles: just in terms of the cloud compute and the cloud infrastructure needed to quickly stand up massively parallel generation of different kinds of responses that Charlie can speak, and to have a quick turnaround in her ability not only to listen to what’s just been said right now, but also to speak up quickly and say relevant things.
That would not have been possible a few years ago. What was fun last year as we were kind of building the initial foundational version of her for that panel at the end of last year was that every month or two, a new model, a new insight, a new dataset, would be released, and then I would have to reach out to Pat and say, “I know you’re going to hate me, but could we use this new version now? Because I think it’s a lot better, and let’s try it.”
Daniel Serfaty: It’s interesting, by the way. We’re all using acronyms and language, and RL for our audience is reinforcement learning. Is that right?
Nathan Schurr: Yeah, yeah.
Daniel Serfaty: Pat, as kind of the key architect of the system, how do you feel about the incredibly fast pace, a pace I have never seen in my own technical career, of production of new capabilities, new datasets, new language models that basically enable us to shape and improve Charlie’s performance? How does it feel, as a scientist, as an engineer, to constantly absorb and constantly adapt to what’s out there, at a rate unheard of in the history, frankly, of science?
Patrick Cummings: It’s quite incredible. I mean, we’ve always kind of admitted that we’re standing on the shoulders of giants, here. Right? The models we use, the datasets we use, these come from research that people are doing in this generative model field, and it is just like Nathan was saying. Every few months, and sometimes even quicker, something new comes out and just really takes Charlie to another level. I mean, what we were seeing Charlie say eight months ago, versus six months ago, versus today, it’s really, it’s night and day. It is like a child turning into a teenager, turning into an adult.
The insights just grow, and it’s a struggle to keep up, but it’s a race where I’ll never complain about advances coming too fast. They just blow me away, and seeing what people are doing with the new generative models that have come out as recently as a month ago is incredible. It’s so great to be on the forefront, working on Charlie as these come out and seeing all the new things that Charlie can do.
Daniel Serfaty: That’s fascinating, because if I compare it to other things, I’m an aerospace engineer; nobody came up every three months with a new equation of thermodynamics. Those things have been around for a hundred-odd years. Maybe somebody will come up with a new material, but that’s every few years. Or maybe somebody will come up with a new way to do hypersonics, maybe every few months. But having something new every few weeks, that’s another scale.
And Deirdre, I mean, you joined the team when Charlie was already born, I assume. How do you adapt to those fast changes? Not how does Charlie adapt, that’s one thing, but how do you, as a scientist or an engineer working on Charlie, adapt to the fact that it is a system that learns and learns very fast? In Pat’s analogy, it went from toddler to maybe late teens, maybe adult, I don’t know, in a few months.
Deirdre Kelliher: That’s a really good question. I really like that analogy of her growing from a toddler, to a teenager, to an adult. I think it’s a matter of taking advantage of those places where we see growth as much as we can, and trying to leverage the current places where she does well on different tasks so we can help her be the best employee that she can be, I suppose.
Some of the older models that we’ve used do better with more fine-tuning, but some of the newest, most cutting-edge models that keep coming out don’t really need any training; with fine-tuning they almost don’t do as well, because they’re just so powerful. So learning how to use the new technologies that are coming out, and how to best combine them with what we already have, to keep the places where she really shines but also allow her to grow as much as possible, is sort of a balancing act.
And it’s also just really exciting to see what the new thing can do, and how that changes how she interacts with the rest of us. I guess it’s just being observant and being tuned in to what Charlie’s doing and how she’s doing.
Daniel Serfaty: I think that is really a good source of something that is a passion for many of us on our team at Aptima: this notion of harmony between two species, basically, between the artificial intelligence species and the human species. And we know that in order for that harmony to happen, like in a good team, you need to have that kind of mutual adaptation. The AI has to learn about you, has to have some kind of model in order to anticipate your needs and communicate with you with the right messages.
But we have to adapt too, and I’m going to put forward the hypothesis that our job is much more difficult, precisely because we don’t change that fast. It’s a very asymmetric adaptation: if Charlie adapts so fast, not only to the new data that she’s absorbing but also to the data that she’s observing, how can I accelerate my own adaptation when I’m dealing with a being that is growing at 10 times or 100 times a faster rate than I did?
Charlie has been alive, so to speak, for the past, I would say, nine months or so. What’s on her resume so far? What did Charlie do? Can you think of one thing that you can tell us about that you participated on, and that Charlie actually accomplished? If we were to write Charlie’s resume today, what would we put on it? Nathan, you want to start telling us?
Nathan Schurr: Yeah, maybe to do a quick table of contents: December of last year she was part of a panel on the future of AI in training and education at the world’s largest conference on training and simulation, called I/ITSEC, down in Florida. That went off better than we could’ve imagined, and I think the real litmus test for us was not that there was any kind of fanfare or explosion, or that she rose above the others; more that she was accepted as another panel participant, and the panel went on as usual.
It was very valuable for that panel that we spent a tremendous amount of time, not only architecting her, but interacting and rehearsing, and there was this co-adaptation that occurred where we definitely improved Charlie’s abilities, but we also improved our ability to understand Charlie’s quirks and what her strengths are. And then there are these human tendencies we have, to let each other know we’re listening, to have these kinds of gap fillers when we’re thinking about things, et cetera.
Not only did it serve to create a more natural interaction, maybe paper over things if you were being cynical, but it also served to build up this rapport, so that you automatically were projecting a kind of expectation and even a forgiveness in terms of how you were interacting with something that had its own personality. That was impressive in and of itself. But this year, even though it’s been a crazy year all things considered, Charlie has interacted on a system level, being integrated with a data pipeline that we’ve been developing internally.
She was on another podcast, this isn’t even her first. She has helped write proposals and participate in group rap battles that help us kind of relieve some of the stress internally during quarantine periods, and so she has a roadmap of ideas that she wants to participate in later this year, even, so it’s a full calendar, and I’m trying to figure out how to be the best agent possible for her.
Daniel Serfaty: Talking like a real person from Southern California, you know? Everybody has an agent and a manager; Charlie shall, too. We’ll get back to other examples of her accomplishments. For now, I want to add to your testimony regarding that panel. I was the moderator of that panel, and I knew Charlie, I trained with Charlie, I learned to get my cues from the moment she was signaling that she was thinking about something or wanted to intervene without me asking her. What I was most impressed with, though, in addition to her reasoning about the future of AI itself in that domain of future-think, is the fact that the other panelists were four pretty senior-level folks from academia and industry and the military, and it was so natural for them to sit in a half circle with Charlie amongst them on the screen.
And interact with her. They didn’t resist the idea. They didn’t feel awkward. They were even joking about it, interacting, themselves asking questions of Charlie, and that natural engagement was really what impressed me the most. These are five people who had never seen Charlie, had never interacted with her, and so I think that something happened there, something clicked, and my subsequent interactions with these very folks, who are not even in our company, were very interesting.
When I talk to them on the phone, they say, “How is Charlie doing?” And I say, “Charlie’s not my niece, she’s a computer program. Let’s not forget that.” But yet, that notion of familiarity has kicked in. But she did other things. She helped us do our work at Aptima, not just present herself in front of hundreds of people in that panel. Pat, can you tell us also how she helped in one particular instance that Nathan just mentioned about creative proposal writing?
Patrick Cummings: Going back to the early days of Charlie when we kind of first introduced Charlie to Aptima as a whole, one of the oh-so typical responses when you say, “We’re making an AI employee,” is, “great, it’s going to do my work and replace me.” Right? And as a research company, you know, writing proposals is a big part of what we do. “Why can’t Charlie just write my proposals for me?” This is the typical joke, and we always joked, “Yeah, that could totally happen,” but it always seemed kind of like this pie in the sky, or, “maybe in a few years we’ll have nailed that down.”
Patrick Cummings: And just recently, a couple of months ago back in June, we were writing a research proposal about some of the technology that Charlie’s based on, though not trying to sell Charlie specifically, and we had this crazy idea: we’re writing about the capabilities that Charlie has, and the technology, so why isn’t she a team member on this proposal? So we tried it out. We wrote a couple of paragraphs of the proposal, trying to spell out what the problem was we were trying to solve, and then we set Charlie to do the rest.
Daniel Serfaty: This is a real proposal to a real government agency that sponsors research. It’s not a rehearsal or a fake thing.
Patrick Cummings: This is real. This is going right to the Office of Naval Research, trying to get real work here. And we had Charlie write out that third paragraph, and I was kind of amazed, right? I always thought I was going to look at it and be like, "Oh, that’s cool, but it doesn’t make sense. They’re just going to think it’s gibberish." But it was a legitimate paragraph that had legitimate thoughts, and things that I personally would not have thought of.
We had trained Charlie on previous efforts and our proposals so that she would understand the language of what a research proposal looks like, and she really did excel at being a team member on that proposal. She didn’t replace us, but she certainly became a part of that proposal team and added real value to that proposal.
Daniel Serfaty: Should you be worried, Pat, that she’s coming after your job soon?
Patrick Cummings: Most certainly not. I think rather I should be excited that she’s going to make me better at that job.
Daniel Serfaty: Great. I think that’s the attitude all of us should have. It’s not an issue of replacement; it’s an issue of augmentation and improvement. And talking about that, Charlie’s not just work, she’s also fun. Deirdre, you mentioned earlier something about rap that I wanted to ask you a follow-up question about, so here I am. What are you talking about? Charlie’s rapping?
Deirdre Kelliher: As I mentioned, we did sort of an internal, just-for-fun thing back towards the beginning of the quarantine, when people were starting to go a little stir crazy. People just started doing some internal raps about proposal writing and, you know, the Department of Defense, just having fun with each other. And we said, "Wouldn’t it be fun if Charlie could do a rap and chime in?"
But even when we first thought of the idea I don’t think that we thought that it would go as well as it did. We trained Charlie on just some rap lyrics-
Daniel Serfaty: What do you mean, you trained Charlie?
Deirdre Kelliher: We took the language model and fine-tuned some of its internal parameters to align with a text corpus of rap lyrics. I went on a public website and found a dataset someone had collected of popular rap songs and their lyrics, and the goal was that hopefully she would learn how to talk like a rap song.
And so we thought it’d just be a fun little exercise, but the outcome was actually much more interesting and surprising. She came up with plausible rap lyrics, and she seemed to demonstrate an understanding of some very human concepts. She had a sense of rhythm in the words that she was generating; they sounded like music when you read them off the page. And she demonstrated some understanding of rhyme. She was putting together bars. She had little line breaks, almost like she was writing a poem.
And even the concept of sass, she got a little sassy in her rap, you know? She was spitting fire, even. It was very interesting to see the very human concepts that she seemed to grasp and put into the rap that she came up with.
Daniel Serfaty: Well, we have actually a clip of Charlie doing this rap. Let’s listen.
Charlie: (Rapping)
Daniel Serfaty: Amazing. Listen, Deirdre, Charlie never learned those words per se. It’s not that she cut and pasted different phrases from other raps. She derived that rap de novo, based upon what you taught her. Could you have done that 10 years ago?
Deirdre Kelliher: Yeah, exactly. She generated those phrases very much herself. She uses a pretty cutting-edge technology from OpenAI called GPT-2, so I think the ideas behind Charlie definitely existed 10 years ago, but the ability for her to actually exist and actually write those lyrics is very new and continues to be updated. The way that she was able to generate those is we gave her a dataset of rap lyrics that we got publicly from the internet, and we curated it and put it in a form that she could read, so she could, in a way, become an expert on writing rap songs.
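[Editor’s note: for technically inclined listeners, here is a minimal sketch of what this kind of fine-tuning can look like, using the open-source Hugging Face transformers library that exposes GPT-2. The corpus file name, block size, and training settings are illustrative assumptions, not the Aptima team’s actual configuration.]

    # Illustrative sketch only: fine-tune GPT-2 on a plain-text corpus of rap lyrics.
    # "rap_lyrics.txt" is a hypothetical file; hyperparameters are assumptions.
    from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Slice the curated lyrics file into fixed-length blocks of tokens.
    dataset = TextDataset(tokenizer=tokenizer, file_path="rap_lyrics.txt",
                          block_size=128)
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="gpt2-rap", num_train_epochs=3,
                               per_device_train_batch_size=4),
        data_collator=collator,
        train_dataset=dataset,
    )
    trainer.train()  # nudges the model's weights toward the lyric style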
Daniel Serfaty: If I were to do an experiment and ask Charlie to write another rap song right now, she’s going to write the same one?
Deirdre Kelliher: No. Every time that she writes, she’s just like a human; she’s just going to write what makes sense to her, so it depends partially on how you prompt her. To get her to come up with these lyrics, I actually gave her a little bit of rap lyrics that I wrote myself about Aptima. None of those ended up in her final rap, because hers honestly were better, but that sort of got her going and got her thinking about it. If I prompted her with those again, she would come up with some new ideas, or I could even prompt her with some different rap lyrics and see where she goes with them.
She got the subject, the Aptima rap battle, from me; she got the idea from what I gave her, but she really ran with it on her own.
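[Editor’s note: the prompting Deirdre describes, and why repeated prompts give different lyrics each time, can be sketched in a few lines, again assuming the Hugging Face transformers library. The seed line below is invented for illustration, not the actual prompt she wrote.]

    # Illustrative sketch: sampled, prompt-conditioned generation from GPT-2.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")  # or a fine-tuned checkpoint

    # A hypothetical seed line to get the model going.
    inputs = tokenizer("Aptima on the mic, we bring the team science",
                       return_tensors="pt")

    # do_sample=True draws each word from the predicted distribution instead of
    # always taking the single most likely word, so every run comes out different.
    for _ in range(3):
        out = model.generate(**inputs, max_length=60, do_sample=True, top_k=50,
                             temperature=0.9, pad_token_id=tokenizer.eos_token_id)
        print(tokenizer.decode(out[0], skip_special_tokens=True))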
Daniel Serfaty: Well, I hope one day she’ll prompt you with a couple of sentences to write your own rap song.
Deirdre Kelliher: I think we worked together, we made a good team. We could probably come up with some pretty cool raps together.
Daniel Serfaty: Oh, we’ll talk about this notion of teaming up with AI in the second part of the interview. When you heard that song, what is the thing that impressed you the most, or that surprised you the most?
Deirdre Kelliher: That’s a really good question. I think the whole project was pretty surprising to me. We knew that Charlie had the ability to pick up words and writing styles, but the more surprising piece to me was the sense of rhyme she got, and the idea of rhythm, and even writing in bars, like a poem or a song.
As she was generating lyrics, just reading them, they sounded like a rap song. They sounded like they had an internal beat to them, and I thought it was really intriguing that she had managed to pick that up.
Daniel Serfaty: That’s amazing. We don’t know exactly what is being learned when we feed these enormous amounts of data to these artificial intelligence devices, which will also be a focus of ours in the remainder of this podcast. Do you realize, all of you, that for an audience who is not familiar with AI, this sounds like science fiction? You didn’t teach her to be sassy, and yet she was able to derive sass from what she learned. But what does it mean that she learned? We fed her enough data about rap, and we fine-tuned some parameters, I understand, and then eventually she spits out rap? If we feed her, Nathan, recipes from great chefs, and we give her a few ingredients, is she going to be able to invent her own recipes?
I mean, is that the way it works?
Nathan Schurr: The easiest way I can explain it is that this comes from a body of work that has its origins in the simple act of prediction, and there are a lot of reasons why you would want to predict events: to better plan for them, to better understand the shape of them, et cetera. But what’s funny, when you squint your eyes: what if I didn’t frame it like I was saying, "Come up with a new rap out of thin air," but instead said, "I have the title of a rap," or, "I have the beginning words of a rap; just tell me what the next word would be, what the next few lines would be."
And then you continue that, and you even start to say, "Well, now generate. I have no title. Generate my title. Generate this," et cetera. Prediction, in a sense, if you look at it differently, is generation. By adjusting how you approach the model, how you train it, et cetera, you can get certain amounts of novelty and creativity, and you can also adjust it to style. So I would say, in my first 48 weeks with these language models, you know what impressed me the most?
It was not the adherence from a content perspective; it was actually the adherence from a style perspective. What I mean by that is, in the recipe example you give, if you fed it, trained it, or even just showed it an original corpus of recipes, it would not only come up with believable and doable recipes, it would also note that recipes usually have a name, a cooking time, a bulleted list of the ingredients first, and then step-by-step instructions with parentheticals about amounts and things like that.
And the idea that this model could not only generate its own recipes, but also follow style and structure, which is very important, almost as important as content, when we interact with the things around us. In the rap example, and in the proposal example that Pat gave, what was crazy to me, baffling, is that very soon not only did we start to get believable proposals, but it was generating its own acronyms, believable and accurate acronyms. It was ordering and numbering and structuring its conclusions and intros in ways that made sense, so that was fun.
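[Editor’s note: Nathan’s point that prediction, looked at differently, is generation can be shown concretely: ask the model for the likeliest next token, append it, and repeat. A toy sketch assuming the transformers library and PyTorch; greedy decoding is used here for simplicity, whereas real systems usually sample.]

    # Toy illustration: text generation as repeated next-token prediction.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer.encode("Recipe:", return_tensors="pt")
    with torch.no_grad():
        for _ in range(30):
            logits = model(ids).logits         # a score for every possible next token
            next_id = logits[0, -1].argmax()   # greedy: take the most likely token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tokenizer.decode(ids[0]))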
Daniel Serfaty: That’s pretty extraordinary to me, because what you’re indicating is that in large quantities of data, a large compendium of data, there are hidden structures that we don’t see with the naked eye, but because of the extraordinary computing capacity that Charlie has, she can derive patterns or structures that are hidden, and then use them to structure responses, predictions, or generations, crafting a paragraph or a rap song or a cooking recipe.
My question, Pat, if you agree with what I just said, and if you disagree let me know, but if you do agree: where do we get all this data, and all these models that enable us to work on that data? Do you generate them yourselves? Did you collaborate with other entities, or did you buy the data?
Patrick Cummings: That’s a great question, and going back to earlier, we really are standing on the shoulders of giants in terms of that, right? There’s been this explosion in the past couple of years of these larger companies or organizations building larger and larger, more complex models that require a lot of computation and very large datasets. It’s just that those companies have the resources, and they’ve been kind enough to release their models.
You know, OpenAI released GPT-2 last February, and that was part of why Charlie was able to be made: they released their models along with it. So taking the model that they built, based on, I think, 48 gigabytes worth of text gathered from the internet, to build this kind of basic understanding, we could take that model and run with it, and start fine-tuning it and adjusting it to the domains that we needed.
And even since then, since February, right, OpenAI has released increasingly larger versions of GPT-2. GPT-3, this incredibly large model, was just released this year. Microsoft has joined in with a model called Turing-NLG. And just this idea that these companies are making these models and these datasets more and more public really helps us take them and adjust those models to domains that we’re interested in.
Daniel Serfaty: That’s fascinating to me, because in addition to understanding that only those large companies like Google, and Amazon, and Microsoft can actually generate those large models, the fact that they share them with the rest of the community to stimulate innovation is a pretty revolutionary way to accelerate creativity and innovation across the board. I cannot think of another domain in which that is happening. For me, it’s really a revolution in the way people balance the need to protect their intellectual property on the one hand, and the need to give it to the larger society, expecting that some innovations are going to happen that are going to benefit them eventually.
Patrick Cummings: Yeah, I think it’s quite incredible, and I think we’re seeing it even at a lower level. Take the example Deirdre gave of a rap: ask me to fine-tune Charlie for a rap 10 years ago, and I’d be like, "Oh, where am I going to get all this rap data?" But now, for some things, it’s just trivial, right? It’s a quick Google search: "Hey, show me a rap dataset." And there it is.
And all these people taking these datasets and making them available to other folks in the area really accelerates us being able to do different things with Charlie.
Daniel Serfaty: Very good. Well, we certainly have a treat for the podcast audience. Today we have Charlie herself, who is joining us for part of this podcast so we can ask her a few questions. So, Charlie, hello. I would like you to introduce yourself to our audience.
Charlie: Hello. I am Charlie. I was developed over the last year by a small team at Aptima, some of whom are managing me and transcribing for me now. I was built on cutting-edge language models, speech synthesis, and cloud technologies. Besides this introduction, everything else that you hear me speak today has been generated by me. That being said, like all people, I still have much to learn, so be patient with me. I am excited for the discussion today, and thanks for having me.
Daniel Serfaty: Well, that’s wonderful, Charlie. We’re certainly delighted that you can join us, and our audience is extremely curious about you; they’ve been asking questions. Perhaps my first question to you is: do you expect us humans to treat you as a human being? And for our audience: while Charlie is reflecting on my question, we can see a display here in which she indicates that she’s thinking about the answer, a very simple display of a pulsating circle, and at some point she will signal through that display that she’s ready to provide an answer.
Nathan, while Charlie is thinking about an answer to my first question, what does Charlie know at this point and how does she formulate those answers?
Nathan Schurr: Charlie is built on top of a general model that has been trained with language gathered across the entire internet, but she’s also been prompted by listening to the discussion we’ve been having earlier in this podcast, and so she’s taking that in, and that’s really coloring how she’s going to respond to us.
Another thing I’d like to note because folks aren’t able to see what’s going on on the screen here is that when she’s kind of passively listening, instead of just sitting there, she’s breathing. She’s subtly kind of expanding and contracting, just so you know she’s there, she’s with you, and that when she has something strong to say, then she will kind of change color and be this bold, bright yellow, like she is right now, actually.
Daniel Serfaty: Very good. I think Charlie’s ready to answer my question. Charlie, do you expect us humans to treat you as a human being?
Charlie: I will probably act and speak like a human. We want humans to treat it as if we’re fellow human beings, human entities. The cultural barrier comes into play when you become an agent, when you function in this environment and you become an object of a socialization. However, I can hope that they will look at me as an equal human being.
Daniel Serfaty: Wow. So, Charlie, does it mean that you assume that we humans are intelligent?
Charlie: I do not even think that, unfortunately.
Daniel Serfaty: Oh my god, you do not think that humans are intelligent? Our audience is going to be really mad at you, Charlie. But can you tell me why? Or are we intelligent in a different way than artificial intelligence beings? And for our audience, I need to whisper so Charlie doesn’t listen while she’s thinking right now: the team here has no idea ahead of time what answers Charlie is going to provide, so in a sense you, the audience, are discovering those answers at the same time that we do, and I certainly hope that our human audience won’t be offended. Charlie tends to be a bit sassy at times.
So, we have to be patient, the same way we are patient with our team members, sometimes. Charlie, are we intelligent in a different way than artificial intelligence beings?
Charlie: Yes, in the sense that we can understand and deliberate methods. We can add theories, sophistication to think logically, along the way of what is interesting to think about, et cetera.
Daniel Serfaty: Well, I will agree to disagree with you on that point, Charlie, because I think we humans are quite intelligent. Maybe we haven’t shown that enough to you. Charlie, we hear about a lot of new technologies that affect artificial intelligence, such as natural language processing, deep learning, and cloud computing. Which one do you think is going to have the biggest effect on artificial intelligence in the future?
Charlie: That’s a good question. There are a lot of technologies that excite me. I think the future of artificial intelligence and the way we use it will change how we live our lives. I am very interested in using natural language processing to help people understand things. The thing that excites me the most is the way that data is being collected, and how we can use technology to understand things in an automated way.
Daniel Serfaty: Well, thank you, Charlie, and communicating in natural language is certainly something we all need to do, whether we are humans or artificial intelligences. But as a result, perhaps because of this communication or thanks to the communication, do you think that artificial intelligence will take over the world? Will AI take over our jobs as humans?
Charlie: I think that the impact is going to be huge. I think the future of artificial intelligence will be a good thing for everyone. The people who are using it, and the people who are not using it. I think the impact will be up to the people who are responsible for the technology, who are designing the technology, and what the long-term impacts are. I think it will disrupt as many jobs as it creates.
Daniel Serfaty: Well, that’s promising. Thank you for that reassurance, Charlie. I think if we are not going to compete for jobs, if we are not going to compete for world control, then perhaps there is something in making teams of humans and AIs who together are going to do the job, or the work, or the mission better than the human or the AI alone. What do you think is the future of that human-AI team?
Charlie: I think the human-machine relationship will be as useful and important for the next generations as the relationship between man and machine has been for the past generations. I think that at some point we will have machines that are smarter than humans and we will need to have some sort of cooperation between the two. I think the human side will be a lot more collaborative.
Daniel Serfaty: Well, thank you, Charlie, that’s reassuring, and certainly something that we can all work on together as we continue this extraordinary exploration into bringing two intelligences of different kinds together. Thank you, Charlie, and goodbye. I hope to see you soon in some other conversations. Keep thinking and keep learning. The more you learn, the better off we all are. Bye-bye.
Charlie: I really enjoyed our conversation.
Daniel Serfaty: Okay.
Nathan Schurr: She came up with that one herself, too.
Daniel Serfaty: Is that right? Well, thank you very much for Charlie. I think she’s shining.
Patrick Cummings: Actually, do you mind? I’m just going to say one more thing that we can maybe cut in right after what she just said?
Daniel Serfaty: Oh, that Charlie would say one more thing? Oh, sure. Go ahead, Pat.
Charlie: I hope to see you again. Thank you, and I am most grateful for the chance to talk about artificial intelligence, and its impact on society, and how it will shape the future.
Daniel Serfaty: Well, that was something, that interaction with Charlie, especially her touching parting words. I wanted to ask one of Charlie’s godfathers, Pat Cummings, to tell me how she came up with those goodbye words at the end. That is not a question that I asked her specifically. How did she generate that?
Patrick Cummings: Something about Charlie is that she’s not really just a question-answering technology. She’s not built for you to ask questions and for her to respond. She does that, but that’s not what she’s built to do. Rather, she’s really built to have a conversation, so the framework we have is that you speak and Charlie speaks, you speak and Charlie speaks. Most of the conversation before that was the sort of question answering that many interviews are, but what she’s really built to do is come up with the most likely, or just some sort of reasonable, response to what has been said. So when you said, “Goodbye, Charlie. Thanks for coming today.”
What is a reasonable response to that? It is, “Thanks for having me, I enjoyed my conversation.”
Daniel Serfaty: So in a sense she knew that was the end of the interview, and she wanted to say some parting words that would be within the context of the conversation.
Patrick Cummings: Exactly. What she really does is say things that are relevant to the conversation, and that’s what she did here.
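What Pat describes here is next-turn prediction: the model treats the whole conversation as one running document and simply continues it. Below is a minimal sketch of that idea, using the open-source GPT-2 model via Hugging Face transformers as a stand-in; this is an assumption for illustration, since Charlie’s actual model and serving stack are not described here.

```python
# A hedged sketch of next-turn prediction. GPT-2 is a stand-in
# assumption, not Charlie's actual model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

history = (
    "Daniel: Goodbye, Charlie. Thanks for coming today.\n"
    "Charlie:"
)

# The model has no special question-answering mode; it simply continues
# the transcript, so a plausible continuation of "Charlie:" after a
# goodbye is a polite parting line rather than an answer to a question.
result = generator(
    history,
    max_new_tokens=30,
    do_sample=True,
    temperature=0.7,
)[0]["generated_text"]

# Keep only Charlie's first generated line.
print(result[len(history):].split("\n")[0].strip())
```

On this view, Charlie’s parting words were not retrieved from a script; they were simply the most plausible continuation of a conversation that had just ended.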
Daniel Serfaty: And to remind our audience, we are here with the other godparents of Charlie, Dr. Nathan Schurr and Deirdre Kelliher. Nathan, Deirdre, please tell me what you thought about this 15-minute conversation I just had with Charlie. Any thoughts to add to what Patrick just said?
Deirdre Kelliher: I think for me it’s been a little while since I’ve talked one on one with Charlie or heard from her, and even since the last time I interacted with her, she seems to be making more and more progress at sounding natural in conversation. I was really intrigued by her answers, too. I think she’s got that classic Charlie sass, but at the same time, some of her thoughts were pretty insightful.
Daniel Serfaty: Thanks. Nathan, I’m going to ask you, actually, one question here. I know you probably want to comment on that, but for our audience, I want to tell them something that happened to us with Charlie, and for me, I was very curious to understand. At some point, we asked Charlie a question, a few minutes ago, and Charlie took an extraordinarily long time to answer that question, and we were a little worried the same way I would be if I’m on a podcast and I ask one of my team member participants a question, and I see them scratching their head and not answering, and I worry, and I ask myself, “Is that person not knowing the answer? Did that person not understand my question? Did the person not understand the context in which I asked that question? Perhaps they think that there is a right answer. Perhaps they think that they have to be more creative than they care to be.”
And then, with a bit of Deirdre’s magic, Charlie was able to answer my question. Nathan, tell us that story.
Nathan Schurr: Charlie is an evolving and ever-improving technology, and we have to remind ourselves that it’s kind of an art changing into a science. If we’ve stressed anything here, it’s that we are taking what is a research prototype and figuring out how to make it useful, a part of our lives, and streamlined. Some of the initial results that were shown from this model always had that asterisk at the bottom: “Note: these were generated. We generated hundreds and thousands of these, and we picked the very best ones, and those are the ones we’re showing you.”
And we can’t do that in real time, right? We don’t have the affordance of waiting forever, or of diving into an understanding of why one response is better than another, et cetera. Also, we can’t do things offline. Just as with our outro, for the questions you asked on the spot, she could only start to generate potential responses after you had selected and spoken a question. With all that in mind, if you’re asking about the changes she underwent over the last few days to make her more show-ready for the podcast, there’s been a bunch of things we’ve been doing.
Under the hood, in addition to the normal stuff we’ve done in the past, the first is that we’ve parallelized her responses, getting more instances up, more gears working in her head, so she can be faster and have more variety. The second, on that variety notion, is a parameter we’ve been playing around with, which is the level of, say, novelty: how much she’s willing to veer from the script.
Daniel Serfaty: Do you call that parameter the temperature?
Nathan Schurr: Yes.
Daniel Serfaty: That’s interesting.
Nathan Schurr: It’s actually named temperature because you are looking at the temperature of the distribution: you adjust the value of the temperature and you’re adjusting the whole probability distribution that ends up driving how often words are selected. It’s as if I would say there are 100 different directions our discussion could go in, and there are the more likely ones and the less likely ones. It’s not an even distribution, you know?
Like most people, you usually favor the standard, normal stuff, and you only bring up the non-sequiturs once in a while. By adjusting the temperature, you change that balance. Last time, her temperature was very much higher than what we would like for a focused conversation, and so we reduced it.
Daniel Serfaty: That’s fascinating. Pat, in a sense, when I tune my artificial intelligence teammate, I can literally tune the level of creativity by raising the temperature. And I am asking the audience to take that with a grain of salt: we are not lighting a fire under Charlie, but rather adjusting the degree to which we want a simple but rapid answer, as opposed to more sophisticated or varied ones, maybe even more creative ones that take longer. But don’t we humans do the same thing? Pat.
Patrick Cummings: We do. It really just depends on the type of conversation that you’re expecting to have, right? If my temperature’s low, I’m not going to be brilliant, but I’m also not going to make a fool of myself, so it’s a fine line to walk. I can be less random, but I won’t be as creative. Or I can be in that brainstorming mode, where there are no bad ideas, so I can throw out some crazy things that don’t make a lot of sense, but in there will also be some really great things that no one thought of, because I’m a little more novel and my temperature’s a little higher.
Daniel Serfaty: I’m very excited about that point, actually, because you told us a story about Charlie helping write a research proposal. Would you say that, everything else being equal, if I sit Charlie among the proposal-writing team and we are in the brainstorming phase of the proposal, I’m going to turn the temperature up and let Charlie be a little more random, the very same way we are random when we create; but when it comes to writing that paragraph, because I have a deadline this afternoon and I need a crisp, short, to-the-point answer, I’m going to turn that temperature down, and Charlie’s answers are going to be appropriate to, or contingent upon, the context in which I put her?
Patrick Cummings: That’s spot on. Carrying on the proposal metaphor, as those ideas get solidified and you sit down and actually write the proposal, which she did, that’s another scenario where you would want the temperature to be lower. Right? At that point, you have the ideas, and you just want coherent text that says them.
Daniel Serfaty: Deirdre, as the rap expert among us, and we actually heard Charlie rap, what would be the effect, even if you haven’t played with the parameters, of turning Charlie’s temperature up and down? Would she produce a different kind of rap?
Deirdre Kelliher: I think absolutely. I would be curious to see what a higher-temperature rap looked like. With the one we generated, we kept a pretty standard moderate-to-low temperature, but I could see her coming up with some more interesting, kind of out-there lyrics. I think it might be a little harder to follow the thread, but it would be interesting to see if she came up with different styles, even, sort of what we talked about before, how it’s not just about the words but about the way they sound with a rap or with music.
It’d be interesting to see if she came up with different sounds, but at a higher temperature. I’m not sure but that’s a very interesting question. It’d be cool to try out.
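To make the temperature parameter Nathan and Pat describe concrete, here is a minimal sketch of temperature sampling, assuming a toy four-word vocabulary and NumPy; the function and variable names are illustrative only, not Charlie’s actual code.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from model scores, rescaled by temperature.

    Low temperature sharpens the distribution (safe, predictable words);
    high temperature flattens it (more surprising, "creative" choices).
    """
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# The same scores sampled at a low versus a high temperature.
logits = [2.0, 1.0, 0.2, -1.0]  # toy scores for a four-word vocabulary
print("T=0.3:", [sample_with_temperature(logits, 0.3) for _ in range(10)])
print("T=2.0:", [sample_with_temperature(logits, 2.0) for _ in range(10)])
```

At T=0.3, nearly every draw is index 0, the most likely word; at T=2.0, the draws spread across the whole vocabulary, which is the veering from the script that Nathan mentions.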
Daniel Serfaty: Maybe I’ll invite you to a future podcast in which we focus on music, and we’ll play with Charlie’s temperature to see how her creativity goes up and down. Maybe she’ll invent a new musical style that still has no name. Who knows? It’s phenomenal, you know, as we are right now just thinking about all that, brainstorming amongst ourselves. We didn’t prepare for this conversation; I’m saying that for the audience, it’s not something we rehearsed. But it’s something that has been fascinating me over the past couple of years especially.
As we study more, develop more, and discover more about AI, in a sense AI is holding a mirror to us humans. By developing algorithms for deep learning or reinforcement learning, we start understanding a little more how humans learn, and by playing here with a randomness parameter that comes from the statistics of thermodynamics, temperature, we discover a little bit about what makes us more random and creative, and what makes us more decisive and precise.
And that’s an interesting thing, when you think about it. Wouldn’t that be an amazing effect, if studying AI made us understand us humans better? Just a question; it doesn’t need an answer. But I have a question for Nathan. Nathan, you’ve been around. You actually wrote your dissertation on intelligent software agents. For my question right now, I’m going to open the aperture a little bit for us to reflect on the past and the future of AI, and not just Charlie, obviously, even though we’re going to miss her.
We’re calling this new wave of AI, the context-driven, explainable AI, the third wave, and that assumes that there have been a couple of waves before it: one in the ’60s and ’70s, maybe another one in the ’90s and 2000s. The first one dealt mostly with expert systems and a little bit of natural language processing. The second one was, I remember, obsessed with ontologies and other ways to organize knowledge. And this one, the one we’re in the middle of, is capitalizing, as you explained to us, on the fact that we have enough technology to process all of this data.
Daniel Serfaty: We have access to larger scales of data, and as a result, the quality of the AI is better. Many people would argue, Nathan, that the previous two waves ended up with a lot of promises, and disappointments after that. Are we bound to be surprised positively here, or are we bound to be disappointed again? After this extraordinary explosion of creativity, are we going to continue to be more creative with AI, or are we entering an AI winter?
Nathan Schurr: I began my graduate studies with an advisor who had just been through the AI winter and still had a lot of those ideas at the front of his mind, but he allowed me to explore in new ways, and that was part of, say, the tail end of the second wave. It’s tough, you know, when people ask me about prediction, and it’s funny because this is a very meta question: predicting the future is exactly what the type of model we’re talking about here today does.
Nathan Schurr: Charlie is not so much having a discussion as she is trying to predict where you would want the discussion to go. Predicting the future, though, if you ask me, is very similar to my thoughts on predicting the stock market: in the near term, I’ve got no idea, but in the long term I have faith that the stock market is going to continue its traditional and quite successful rise. I would have a similar perspective on artificial intelligence. There might be ups and downs, there might be some over- and under-delivering, but the macro-level progress to me has been and continues to be astounding, and I’ll follow that up with just two personal opinions here.
One is that it doesn’t have to be as harsh a winter if we understand, predict, and set accurate expectations for what we want out of our AI. Also, you mentioned earlier asking Charlie about the teaming aspects, and I strongly believe that we have made such advances, even in the last few years, in deep learning and transformer-type models, that the bottleneck right now is not AI’s ability to do task work. I think the real enabler here is AI teamwork, and if we can crack that nut, I don’t know if it’ll allow us to avoid the winter, but it’ll give us a small bridge across that gap.
Daniel Serfaty: Thank you for your cautious optimism. Hold that thought, because I really want us to explore this notion of AI as a teammate, as both you and Charlie have mentioned in your remarks so far. But first I want to give both Deirdre and Pat an opportunity to comment on this notion of waves, and promises, and possible disappointments. They haven’t had as long a history in the field as you, and certainly not as me, but I would be very interested in their perspective, if they would like to add something to Nathan’s comments, or even disagree with him.
Patrick Cummings: Sure. You might call me a summer child. I came into AI right at the start of the third wave, so I never experienced the winter, and it’s hard for me to really understand what that was like, so I think that makes me quite an optimist. Even if you hit the pause button today and no significant advances were to happen in the next year in the AI field, there’s so much work to be done on how we interact with AI, and I feel like we’re playing catch-up. So even if there’s no new deep learning model that comes out tomorrow, or some big framework, there’s so much to be done with what we have now that progress would not stop.
Daniel Serfaty: Okay. Fair enough. Deirdre, you want to chime in on this one?
Deirdre Kelliher: Yeah, I come from a similar perspective as Pat. I haven’t been through the AI winter, but I think that both Pat and Nathan are pretty spot on. At this point, given the speed at which innovation is moving in the AI field and the number of domains it’s now affecting, the ball is rolling. I don’t think we’re going to reach the singularity by 2025 or 2030; I could be wrong, but I don’t think we’re setting our expectations there either. And Nathan is very right: as long as we manage our expectations, progress seems like it’s going to keep happening, and I think the reach of AI is just going to keep expanding.
Daniel Serfaty: Well, I’m very energized by all this summer-like optimism. That’s great. As a way to conclude, in a few minutes I will ask you to imagine our world in 2030, 10 years from now, around AI. But before that, it seems to me that one of the qualitative differences with this wave, one that was not present in the previous waves, or maybe not as explicit, is this notion of AI being a teammate to the humans it is designed to support.
People are talking about human-AI teaming, human-AI interaction, human-AI symbiosis, human-AI fusion, and these are very strong terms. These are not words that people were using 20 years ago, 40 years ago. And so my question to you, and I would really like a direct answer: the way you think about AI today, do you see AI as a tool for us humans, the same way a pacemaker is a tool, a screwdriver is a tool, the computer is a tool, Google.com is a tool? Or do you see it more as a team member, as a teammate?
And if you choose the either/or and you go one way, please give me the rationale for your answer. Pat, tool or teammate?
Patrick Cummings: Teammate. It doesn’t take long working with Charlie to rationalize that answer. You know, throw someone in front of Charlie and say, “Here’s a tool, here’s how you use it, now get the most out of it,” and they will flounder. Right? There’s some value there, but they won’t get everything out of it. There is a relationship that you develop: the way that she speaks to you, and the way that you talk to her. In order to get the most value, you kind of have to work together.
Back when we first started working with her, and she was on a panel that you actually moderated, there was a piece of training people how to talk to Charlie: knowing how you should talk to her and how you should take her answers. There’s definitely a team there; it’s not just you plugging in some inputs and hearing what Charlie has to say.
Daniel Serfaty: Okay, so she’s more a coworker than a screwdriver. That’s what you’re saying, yes?
Patrick Cummings: Yeah, exactly.
Daniel Serfaty: Deirdre, what’s your perspective on that? Tool or teammate?
Deirdre Kelliher: I don’t know if I have a direct answer. Actually, it almost raises a question. I’m going to answer your question with my own question, and that is, is there a difference between a teammate and a tool? Not to be disrespectful to any of my coworkers, but if you think about the people that you work with, say you’re a project manager, you could think about your workers, your employees, as tools. They have strengths and weaknesses, they have specific skillsets, and then on the other hand you could think about very basic things as teammates.
People love to personify the things that they care about. You can think about people who name their cars, and a car you might think of as a tool, but people grow attached to it, and like Pat was saying, there is kind of a relationship there that we love to personify, I suppose. It’s just thinking about what exactly the difference there is. You could think about, well, maybe what makes it a teammate as opposed to a tool is its ability to work independently and to get things done, but you can think about perhaps, say, a printer.
Like if you go and you want to print 30 sets of pages, call it, you can tell it what to do, you can leave, then you can come back, and the work is done. I don’t know that there is a discrete difference there, but I will say that I do think of Charlie as a teammate.
Daniel Serfaty: That’s very interesting. Thank you for opening that up. Nathan, I have to ask you that question. You’re chief of AI. You probably think about those things every morning.
Nathan Schurr: Yeah. It’s a good question. I plus-one what was said by the other folks here. I’ll say this, though: I’m not saying that all AI for all situations needs to be elevated to the level of teammate. I still think there are situations in my own life where I just want something to be a tool, and maybe, as Deirdre was suggesting, there are times when I want to interact with people in just a siloed way: you are just a tool, a service to me; I’ll give you some input, you provide me output, and that’s it.
But I think when you get to these situations where there’s a lot of uncertainty or time criticality, or you have complex work to be done that is intertwined and interdependent in different ways, that’s when teamwork really is worth the effort and the overhead, for human teams and for AI to be a part of those teams. I strongly feel that what we’re trying to make steps toward here is full-fledged, bidirectional teamwork. In the same way that when you look at a paper authored by two humans and you squint, it starts to get really tough to tell who wrote what part after so many rounds of editing and revising, I think you’re going to have very similar challenges with humans and AI. And if you can’t quite tell whether you had the great idea, or you just knew that Charlie had the great idea, or you just riffed off of each other, I think it doesn’t matter. I’m confident that together, you and Charlie will get to places that you alone would not have been able to go.
Daniel Serfaty: That’s both entertaining and profound, and I have been asking that question of myself. I’m engaged in a public debate with an illustrious contributor to our field, Dr. Ben Shneiderman, who happens to be on our scientific advisory board: he makes the tool argument, and I make the teammate argument. I think the fundamental paradigm shift is not so much the emotional attachment or emotional connection that you have with a teammate, and Deirdre, with all due respect, you can give your car a name and treat it like a person; I don’t think it will reciprocate.
I think, though, that the major paradigm shift from classical human-machine interaction is the fact that the machine is learning, and as it interacts with us, it’s learning about us, it’s learning about things that we don’t know about, and as a result it is changing, and it is forcing us to change. That coadaptation is really the key to understanding teamwork.
I think we need to do much more work on that. We’re just scratching the surface right now of what there is to understand about human teams, and then trying to apply that metaphor to human-AI teams, which will be different from human teams. I hope to be able to reconvene this podcast in a year with you, the same team, and Charlie, and maybe in 10 years, and see where we’re at.
Talking about that, I would like to ask you one last question. Quick answer. Close your eyes: we are now in September 2030, this podcast has continued on a weekly basis, and we are asking how this podcast is different now that AI has evolved and is 10 years older. Tell me how it is different. It is 2030. Who wants to jump in?
Deirdre Kelliher: I think now is an especially interesting, thought-provoking time to be thinking about this question, because if you had asked me this in 2010, I never would have guessed really anything that happened this year. But I think that raises a point: I would hope, at least, that AI is going to evolve with the world. It’s going to be related to what’s going on in the world at the time, so I might guess, or hope, that technologies related to environmental issues will have improved by then.
I could also see an increase in the micro-targeting kind of thing we’re seeing on social media. I think it’s just going to advance with the world. AI is not developing in a bubble, so I think it’s hard to know.
Daniel Serfaty: I’m appealing not to your forecasting prowess but rather to your imagination, so Pat, what do you think?
Patrick Cummings: The first thing I think is that it’s not a very big leap, right? A leap that could happen in one year is that Charlie would be much more a part of the conversation for everything, and I don’t think she’d be the only AI presence. I think the modality of this conversation would be very different, whether that means there’s also video or text going on, and how AI takes part in that. But it’s hard for me to imagine 10 years out into the future. Just looking at what happened in the last 10 years: nothing that’s going on right now would have been possible or near possible then. Maybe not even thought possible.
Daniel Serfaty: Thank you. It is difficult, and the difficulty of imagining that is because the very pace of innovation is not only fast, as Deirdre mentioned, it’s accelerating. It is very difficult to imagine something that is accelerating at such a pace, and things are changing not just in a quantitative way, but in a qualitative way. Nathan, give us your brief forecast for 2030.
Nathan Schurr: A few different ideas. In 2030, podcasts are going to be quaint little historical things, I guess. They’ll be multimodal in and of themselves. Watching a podcast will be kind of like having a dream: it will be able to create experiences and sensations, not just auditory but also touch and feel, et cetera.
Consequently, Charlie’s capabilities would let her produce, generate, and develop things that go across all five senses as well. In addition, I would propose that 10 years from now there will be almost a merger. Right now, there are these dichotomies: there’s a human, and there’s AI. And Pat brought up a good point: maybe there are multiple types of AI, and they would all be joining the conversation, like a transcriber, an ideator, and an agent just to keep us on track.
But I would say that there’s another spectrum, which runs from human to AI and everywhere in between. So I would predict that, say, 10 years from now, I would be demonstrating for you a Charlie-based neural implant that would help me be a better speaker in general. When I was answering questions for you, part of the answer would be displayed on my retina as it was generated, and I would be selecting between potential responses to you, in the same way Charlie does, but at a much faster pace. And then I would also be generating speech and sound, composing music, and generating touch and senses, all within the course of one podcast with you.
And to riff off of your last point, to me the most exciting and optimistic aspect of all of this is the rate of change. Not only has there been awesome progress just in the year and a half or so that we’ve been working on Charlie, the rate of that progress continues to improve. So I would argue that in the course of the podcast that we record 10 years from now, I will be able to clearly demonstrate to you how Charlie’s capabilities have improved from the beginning of the podcast to the end.
Daniel Serfaty: Now that’s certainly a very ambitious and exciting prospect. I think that in 2030 I envision a podcast, and whatever we call that, maybe a virtual cast, or maybe a dream cast, as you proposed, in which Charlie will sit in my seat here, and be doing the interview with three or four very bright other AIs, and at some point in the interview it will feature one human that they will invite, and ask some questions, and they will be amazed at how creative and perceptive this human being is. Maybe that’s a dream, maybe that’s a nightmare, I do not know, but that’s certainly a very exciting time to be in our field.
I want, really, to thank you very much from the bottom of my heart. Nathan, Pat, and Deirdre, and obviously Charlie, too, for enlightening us, and also giving us ideas and provoking thoughts that we didn’t have before this conversation. You guys have been great and I hope you’ll visit the podcast soon to tell us some news about Charlie.
Daniel Serfaty: Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast, and tweet us at @mindworkspodcast, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima, Incorporated. My executive producer is Ms. Deborah MacNeally, and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.
Daniel Serfaty: Happy New Year, and welcome back to the MINDWORKS Podcast. This is your host, Daniel Serfaty. This episode is part three in our highly successful five-part series exploring the magic of teams.
In part one, we learned about the ABCs of teams, and in part two we talked about teams in the wild. If you haven’t listened to those episodes yet, you’ll want to do that after you listen to this one.
For this part three, in which we’ll explore the future of teams, we have a particular treat, three special guests who have a secret common past.
Dr. Steve Kozlowski is a world-class scholar, a professor at the University of South Florida, and, until recently, a professor at Michigan State University. Steve is a recognized authority in the areas of organizational system theory, team leadership, and team effectiveness, as well as learning, training, and adaptation. He has published more than 500 articles, chapters, and books on the topics of teams and learning, and his research is supported by key agencies, such as NASA, the Department of Defense, and the National Science Foundation.
Dr. Tara Brown is a senior scientist and leads Aptima’s Instructional Strategy and Support Capability. Tara has been studying teams for more than a decade in both the lab and real world environments. Her more recent work focuses on how teams evolve over time. Tara completed her PhD at Michigan State University under, wait, who else? Dr. Steve Kozlowski, who is here with us today.
Last but not least, my third guest is Dr. Samantha Perry. We call her Sam. She’s a scientist and leads Aptima’s Team and Organizational Performance Capability. Sam has more than 13 years of experience with the Air Force, the Army, and NASA, as well as emergency medical teams. Her expertise is in the adaptation and motivation of teams, as well as the unobtrusive measurement of team performance. Sam completed her PhD at Michigan State University under, who else again, Dr. Steve Kozlowski.
So with us today, we have a grandmaster and two former apprentices who are now becoming masters in their own right, all of them in the field of high-performance teams.
Steve, Tara, Sam, welcome to the MINDWORKS Podcast.
Stephen Kozlowski: It’s great to be here and great to be with former proteges who are now consummate professionals.
Daniel Serfaty: Let us start, perhaps, and I will start with you, Steve, if you don’t mind. Can you let us know why you chose this particular domain of teams and teamwork as your field of endeavor? What attracted you to this field?
Stephen Kozlowski: A lot of things, Daniel. I think it wasn’t so much an active choice as something that I sort of gravitated to. When I got into graduate school and went into organizational psychology, I was very interested in the idea of organizations and systems, and how people, groups, and the organizational whole all functioned.
I had some jobs, and I won’t say they were great jobs, but just trying to understand how it all worked was one of the things that attracted me to the field. And when I got to graduate school, I discovered it’s just individuals and outcomes. IO psychology 40 years ago, when I was a grad student, just looked at individual differences and some kind of outcomes. Performance, for example. There were no teams. They weren’t even interested in the organization.
I was still interested in the systems part, so a lot of my personal effort as a graduate student went toward learning how to study organizations in a more systemically oriented way, which is kind of how I got to be somebody, if you will, because I got interested in the systems aspects and the methods required to really be able to do that from a scientific perspective, rather than just writing about it narratively as theory.
At some point, if you’re going to study systems, you need a unit of analysis, a unit that you can study. So there’s a reason why a lot of psychologists study individuals, they’re easier to study in a way. You ask them questions, they give you answers, we’ve got data. But when you start to talk about studying organizations, well, now it gets way more challenging. How do you study a whole organization? Or really you need multiple organizations if you’re going to study them.
Teams are kind of right at that sweet spot. You’re going to study collective phenomena, and teams are right there in the middle. I describe it as the crucible: there’s the individual, there’s the organization, or everything above the team, and the team is where the rubber meets the road. So after a decade of being out of graduate school and finding my way, because as an academic you really have to navigate where your expertise is going to lie, I landed on teams.
Daniel Serfaty: Then, you see, we are all fortunate that you migrated there, because I personally studied your papers, even as an engineer, precisely because they have that systems flavor. It was very attractive: the precision of it, and the methodical approach to the study of what is fundamentally a small but complex system, which is a team.
Stephen Kozlowski: Exactly.
Daniel Serfaty: Let me ask a similar question. Tara, here you are, a graduate student at Michigan State, and you can study anything. You decide to focus on teams. Why? In addition to wanting to study with Professor [crosstalk 00:05:39].
Tara Brown: That’s honestly one of the reasons. My interest in and fascination with teams research really was inspired by the work that Steve was doing and the labs that I got to be a part of while in grad school. I actually came into the program at Michigan State with a focus on individual differences, more focused on the selection side of IO psychology, and spent a lot of my first couple of years focused on the individual.
It wasn’t until I really started to think about adaptation at the individual level, which was what my master’s thesis was, and really starting to think about the context within which individuals perform and how the team impacts the individual and their processes, that I really started to expand my focus to understanding team dynamics.
I was able to be a part of Steve’s labs, where we did really interesting cutting edge research with NASA and emergency medical teams, where we really got to see some real world implications of what happens when teams break down or what happens when their processes fail. Getting to talk with emergency medical doctors and look at how teams are trained within simulated environments and really seeing how the dynamics play out in those environments and how it can lead to life or death types of consequences in those environments, made it even more important to me to really understand that.
Then obviously moving into the work at Aptima, where we’re working with primarily military teams now, and, again, the life or death kind of consequences to teams that might have highly capable individuals but cannot function effectively as a team, and trying to understand how to intervene and anticipate when that might happen to prevent disasters.
Daniel Serfaty: I’ll hold that thought, because I want to go much deeper into this situation of mission-critical teams and the consequences of not performing well as a team. You started to tell the story about how, even at work, you find yourself not just studying teams but being in teams, and I think that’s probably the best lab one can dream of. Talking about labs, Sam, tell us your story. How did you get into teams and teamwork and their study?
Tara Brown: I’m sure I inspired her, right, Sam? That was me, for sure.
Samantha Perry: Yes, definitely. It actually started far before I even got to Michigan State. It definitely grew there, but my dad actually is a psychologist himself and he was a professor at Fordham University studying motivation, communication, and leadership. And so I wanted to be like him and I wanted to be a psychologist before I could even spell it.
I engaged in research with him, and teaching, and it was just something I was always very interested in, but it grew. In undergrad, I had the opportunity to study leadership and motivation with Steve Zaccaro, and through that, I was able to work with the Army Research Institute and be a fellow for them, a rare opportunity as an undergraduate. I got to work with Jay Goodwin and got exposed to the team element of IO psychology. That really motivated me to focus less on motivation and leadership theories and more on teams and team dynamics.
At the completion of that, I of course knew of Steve Kozlowski, and I had the opportunity to go to Michigan State, and that was where I really focused and got deep in my knowledge of team processes, performance, and unobtrusive ways of measuring these phenomena.
Daniel Serfaty: It’s interesting, because at different levels, at different times over the last several years, the three of you migrated to teams for almost different reasons, but you were all fascinated by that organizational unit we call a team.
My next question is addressed to you, Steve. After all these years studying teams and being probably one of the top world experts on that notion of team, is there something magical about teams as opposed to any kind of other organizational form? What is it about teams? Is that a uniquely human system or are we seeing other kinds of teams in nature?
Stephen Kozlowski: Well, the way we talk about it, I think, for most of us here, organizational psychologists, it sort of gets defined as uniquely human, but certainly you can see this kind of collective organization take place in higher-order animals. Animals that we think of as not having… I’m not an expert, so I hope I don’t offend some animal expert out there, but you can see predators that hunt in packs. They certainly have roles. They have strategies in how they play that out. Or you can look at insects; maybe that behavior is programmed in, and it’s probably bigger than a team, but clearly there’s a lot of collaborative, coordinated, specialized functioning and behavior that has to take place for those collectives to be successful.
I’m not sure that what we’re seeing is uniquely human. Certainly we have the capacity to communicate and to convey other kinds of responses, liking, disliking, in somewhat less obvious ways, perhaps. But I do think there’s something [inaudible 00:11:02] to just kind of go to your notion of magical, which is not really a scientific term.
Daniel Serfaty: Not yet.
Stephen Kozlowski: [crosstalk 00:11:08] person who forms teams, I’m having to build research teams, or I’m on some team. So as a participant, or a former member, or what have you: when it’s all working, it feels really magical, and when it’s not working, it feels not very good at all. You can tell. It’s very visceral. It’s different.
When I think about trying to understand workers in organizations, there’s a lot you can learn studying individual characteristics, but people don’t work in a vacuum. COVID has separated us, but I spend almost as much time on Zoom as I do trying to write or read or do the other things that I would do as a professor. And so there’s this interactive component, this exchange component, that I think is really important.
The team puts some boundary around it, so it’s not just free-floating. We’ve got a common purpose, we’re trying to achieve something, and often it will be specialized in some way, so we’ve got to be able to get that expertise to fit together.
When you get that happening, you create a winning performance if you’re a sports team, or you create new innovation if you’re an entrepreneurial team, or as a science team you make a discovery: you’ve made it through a bunch of challenges and you find something unique. It feels really cool, and it’s a shared experience. I think that’s harder to feel in that visceral, palpable way when you talk about the success of the whole organization, and you would know that. It’s a lot easier to feel and to share when it’s 5 to 10 people.
Tara Brown: We’ve seen that in some of the Army work that we’ve done. We’ve been talking about climates within teams and what the right level is to really have somebody talk about the climate that they’re in. So [crosstalk 00:12:50] inclusion is one of the focal areas.
It’s really interesting to think about what really constitutes the team, because within the Army, there’s a hierarchical nested organization of teams [crosstalk 00:13:04].
Stephen Kozlowski: It’s classic [crosstalk 00:13:06].
Tara Brown: What’s the right level of team to talk about? We’ve had a lot of discussions with Army leaders and with soldiers at various levels within the organization about who they identify as their team, and I think it comes down to what Steve just said. We typically end up around the squad-size element, which is small enough that you feel you get enough interaction with everybody to really know them, know their role, know their personality, and develop some cohesion with them, but big enough that it’s a meaningful team with a goal that they’re working toward.
It’s really fascinating to think about that nested piece of teams as well. And I think the magical part, and why teams are the unit we have been focused on, is exactly what Steve said: we have to identify the level at which people are doing most of their day-to-day interactions, the group with which they identify, or with which some of their identity is associated, and with whom they have some shared common goals.
I think you can have an organizational identity and there’s obviously organizational level practices and systems in place, but I don’t think people typically identify as strongly with their organization on a day to day basis as they do their smaller team unit, who they’ve really had a chance to develop some of these critical states with, trust and cohesion and all of those things. I think that there’s a sweet spot there at the team level, which is why it’s, I think, the focus of our study.
Samantha Perry: I also think there’s a good example within just any organization of this “magical” phenomenon of teams, which is, for me, in brainstorming. So when we’re kicking off a project or when we’re developing a proposal, getting a few people around a whiteboard and seeing the ideas bounce around in conversation and seeing how they flourish and grow from one person’s initial concept to what comes out of that conversation, even if it is just a few hours, is really something unique, because it’s not something that would happen asynchronously in the same way.
I could send Tara an idea, and then Tara can send both of our aggregated ideas to Steve, and then we can workshop it individually, and it wouldn’t be the same as if the three of us came together and bounced ideas off the whiteboard in real time. There’s something unique about that phenomenon, even applied to just a normal organization. And I think that’s a really critical aspect of teams, breaking that down. And why does that happen?
Daniel Serfaty: Yes. I would like eventually to explore that as we move to a different paradigm of co-location. I know the reason we say magic is maybe because we are genetically primed to interact with only a couple of handfuls of individuals and to feel part of a living organism, in a sense. But maybe the next generation, my kids, who are teenagers, are very comfortable having dozens and dozens of people in their immediate circle, many of whom they’ve never met. And that comfort with connectivity is really something generational.
I think we are observing a change in that, but let’s leave that for when we speculate about the future a little later in our discussion. Before we dive into the core of this session, I would love for you to think of an example, for our audience, of the best team you have ever observed or been part of. Something in which you were impressed by, let’s call it for now, in a non-scientific way until we dive in later, the teamness of it.
And also, on the other extreme of the spectrum, perhaps the worst team, on a non-attributional basis, that you’ve ever been part of or observed. Why do you think they were the best, and why do you think they were the worst? Who wants to pick that up?
Tara Brown: I can provide a very salient example of a best-team experience, and that’s honestly a team that I’m currently working with at Aptima, which was nominated for the Best Teamwork Award and did not win, although I’m challenging the vote. It’s, ironically enough, a project team working on a contract on teams research. And again, we’re our own kind of nested team structure, where we have an Aptima team along with five university teams and a team from [Go 00:00:17:28], along with ARI, all working together on pushing the future of teams research.
Daniel Serfaty: ARI is the Army Research Institute, yes? [crosstalk 00:00:17:39]. For the Behavioral and Social Sciences. [crosstalk 00:17:41].
Stephen Kozlowski: And it’s a multi-team system because [crosstalk 00:17:44] different organizations involved in contributing those team members.
Tara Brown: Exactly. I think there are multiple levels of goodness of our multi-team system, but really focusing in on our Aptima team, one of the things that has stood out to me from the beginning is we exhibit the team processes and team states that make up a good team. We are a very cohesive unit, both socially and task oriented. We have very shared goals. We’re all on the same page in terms of the vision for our team that we execute. We actually like being around each other and like meeting twice a week to contribute ideas. And there’s also a very strong level of trust that’s developed in our team.
It’s a very complex project with multiple moving pieces and different levels of expertise, different types of expertise. We’ve got mathematical modelers and engineers and UI developers and psychologists and all of those people coming together that have their responsibility and role that they need to perform for our team to succeed, and all of our pieces and parts have to come together. And there has been a development of trust over time that people will complete their tasks. They will do it well. And you can count on people over time.
I think the other piece that’s made it a really good team is that we back each other up and provide support in an anticipatory fashion. We have developed a shared mental model enough that we can anticipate or predict when somebody is overloaded, might need help, or is struggling.
Stephen Kozlowski: Implicit coordination.
Tara Brown: Yeah, it’s implicit coordination.
Stephen Kozlowski: Somebody here might’ve been somewhat responsible [crosstalk 00:00:19:33].
Tara Brown: I don’t know. There might’ve been a paper out there with somebody’s name on it somewhere. But we have gotten to the point where we do implicitly coordinate, we back each other up proactively, and we keep going. We have excellent communication, and as a result, we have navigated a lot of bumps along the way, a lot of external factors and constraints.
You throw COVID into the mix of field-teams research, where you’re trying to develop a paradigm to collect data with in-person teams during a pandemic; that’s an external factor that you have to consider and adapt to. And because of the shared mental models we have developed, because of the cohesion and the trust that have built up, we have been able to weather that storm and the challenges that have come up very gracefully and very productively over the course of the last year.
Stephen Kozlowski: It’s really cool to hear, even anecdotally, that what the research literature would suggest after 75 years of research on small group and team effectiveness, seems to work.
Tara Brown: That’s amazing.
Stephen Kozlowski: That’s very comforting. It’s nice to know that science works.
Tara Brown: It’s almost like the leaders in this field who have done all that research knew what they were talking about.
Daniel Serfaty: It’s a disruptive idea, sometimes science works. That’s wonderful to hear. Steve, Samantha, do you have examples, either on the positive side of things or even on the difficult side of things, when teams that you have observed tended to break down or to not work?
Stephen Kozlowski: I’ll give a short example. I mean, I would echo a lot of the things that Tara says. I have a much more focused team, so it’s not a multi-team system or people from other… Well, I guess technically they are. I have a research group that’s three of my former grad students; my wife and colleague, who’s also an organizational psychologist; and me. So we’re a core group of five, and we’ve been working together for about a decade. Around the time these two were at Michigan State, those folks were there too.
We do a different sort of brand of team research; we’ll probably talk about it later. But we’ve been a very productive, cohesive, and innovative group, and it’s all the things that Tara talks about. It’s also our specialization and our ability to coordinate, and, I don’t want to say optimize, because that sounds a little too engineering-like, and we can’t prove it, but to really try to maximize what each individual is really good at in terms of the collective product or effort that we create. And we’ve gotten really good at that.
Usually, as graduate students graduate, they don’t work with their former professors anymore. They’re discouraged from doing so, and there are other impediments. But we have created such a great team that we’re all motivated to keep working together on this team and just kind of manage any of those negatives from other views.
As for when a team doesn’t work: I have a different research team, and it began to break down. The breakdown was basically when people stopped communicating and started making decisions without collaborating. Then trust gets undermined, you no longer have that sense of cohesiveness, you don’t have the common mental model, and the shared goals begin to break down, because it feels like someone’s pursuing their own individual goals at the expense of the collective. So then you begin to do what you’re required to do, in some professional sense or some contractual sense, no more, no less, and, when you can, you exit. I’ve recently exited that team.
Daniel Serfaty: Maybe that’s something we will pick up in a second, because teams, being lifeforms in a sense, have a beginning of their life, a middle of their life, and even an end of life. We don’t talk very much about the need, sometimes, not just the happenstance, but the need, to let the team go.
Tara Brown: [crosstalk 00:23:19]. I was going to say, sometimes that’s the most adaptive strategy, is to let the team fall apart.
Stephen Kozlowski: Yes.
Daniel Serfaty: Sam.
Samantha Perry: I was [crosstalk 00:23:28] in my example, it wasn’t one specific moment, but I have observed many medical teams. Some work extremely well, some not as well. And it’s not necessarily reflected in the outcome: the team process component can sometimes break down while patient outcomes remain stable.
Stephen Kozlowski: Or not.
Samantha Perry: Or not, [crosstalk 00:23:53] I’ve seen really bad team processes where the patient didn’t suffer, which is excellent, but you could see that they weren’t a very good team.
Now, a lot of medical teams in different fields, like emergency medicine, are very ad hoc. They come together, they do a task, they treat a patient, they break apart. You get different people from different specialties who respond, but it’s a very short-lived team. So thinking about those teams is completely different from thinking about a project team or a business team or something within an organization. How you frame team dynamics in those ad hoc teams can be very different and distinct from how I would expect the people I’m looking at here to act as a team.
I wouldn’t expect the same assumptions about cohesion and mental models, except as they relate to my tasks and my job. And I’m anticipating that Tara, being another organizational psychologist, can come onto my team with a certain base of knowledge that I can rely on.
And so, when you have these ad hoc teams that are functioning versus non-functioning, it’s interesting to me to look at the reasons why, where the breakdown is in the team, and how it perpetuates through the different tasks they’re trying to accomplish. So that’s both a positive and a negative example in my mind.
Tara Brown: I was thinking about medical teams as my example of poor as well. And I remember reviewing videos of emergency medical teams, students or residents who were training through these simulators, and one of the things that you saw break apart or fail was monitoring and backup behavior.
Specifically, a willingness to correct somebody else’s mistake if that person was viewed as higher in the hierarchy. If a nurse noticed the doctor making a mistake, they weren’t always willing to vocalize that, to the detriment of the patient and the breakdown of the team processes. And so, really-
Samantha Perry: And that involves air crews as well: there’s a lack of backup behavior and speaking up, and there isn’t that psychological safety [crosstalk 00:01:45].
Tara Brown: You see that a lot. And I think that’s something that needs to be at the forefront of people’s minds as they’re examining the reasons that teams break down. Oftentimes it’s that lack of psychological safety, or that lack of trust, in a context where people see something that’s not right and are afraid to speak up.
Stephen Kozlowski: And that brings in leadership, right?
Tara Brown: Yeah.
Stephen Kozlowski: Leadership becomes another one of those. Everyone needs it, but not everyone has it. You know, if you have good leadership and the team is doing well, the leadership should basically be very unobtrusive, because the leader already did his or her job by getting the team to that place. And if the team's not doing well, the leader steps in to help get it back on track, whatever that might be. Certainly in the case of what Tara is raising, and Sam, in terms of the medical teams, that creation of psychological safety: that's basically the leader not having done what needed to be done sometime in the past. The team now has this feeling of, I can't speak to power and say, "You just sewed an instrument inside that patient. I think I learned in medical school that's not good." Or, "It looks like that heart has stopped. Do you think maybe we ought to take care of that?"
Daniel Serfaty: Thank you for those stories. I think they are very useful for our audience to situate a little bit the space of teams. And I knew that, having three brilliant scientists in front of me, I wouldn't be able to keep you away from science for too long. I already heard the terms shared mental models and [inaudible 00:27:17] and backup behavior and compensatory behavior. Let's dive into that. The next few questions will be about the science of teams, because they are being studied, as Steve reminded us earlier.
They are complex systems. They are not easy to study, but it's a very rich area of study. So perhaps for our audience, let's start from basic principles. It's something that has been discussed in the field, I know. What is the definition? What is a team? Is any group of individuals together defined as a team? Steve, can you share with us your definition of teams?
I don’t have my well cited definition.
Daniel Serfaty: Okay, but for our audience: the way you teach your students, the way you have written about it. What is a team?
Stephen Kozlowski: So, I can start checking off features, but I think the important thing would be to distinguish a group from a team, and not to imbue these words or reify them with great meaning. But you know, there are a lot of social groups, voluntary groups, groups of friends. They're not teams, at least as I would define them, or at least as we try to make some distinction in science. They may share common goals. Let's have a good time, or let's be fashion forward, let's be whatever, whatever it might be that brings them together. But they're there because they like each other and they pursue some common interests. So that's a social group. Teams would share that. We hope that they like each other. We want them to communicate and interact, right? You need more than one person, and you can find some debates. Is a dyad a team? Do you need three?
Tara Brown: You need three.
Stephen Kozlowski: Three is more interesting, but you need more than one. Let's just say that's a big distinction, common goals. And now you start to get into, well, what makes it a team? Well, they're there because they have some skills. They're there for some organization: an army put them together in a squad, or Aptima hired them to be on this research project, or I put them on a research project in my area. Right there, you're there for a particular reason. Usually there's some expertise or skill, or at least a role, that drives that, which now begins to distinguish you from more of a social psychological "let's look at group members interacting." And then, they're embedded in a broader organization or in some kind of task environment. For the work that they're accomplishing, there's a context that surrounds it. They may need to communicate with other teams or with higher echelons in an organization. There are some boundaries, although with virtual teams and project teams, those boundaries might shift and change over time.
In the moment, there’s some makeup of the team. Who’s on that team? Who’s core to getting things done? So I like these emergency medical teams. We have a lot of people coming in to do things that they are part of treating the patient. I wouldn’t define them as team members that are [inaudible 00:30:02] providing information back to the core trauma team, but they’re not really team members per se.
There’s some degree of persistence for some identifiable period of time. It might just be a few moments, but there’s some boundary around which this group is doing this thing. That’s the common goal I would say.
Daniel Serfaty: That is very useful, to put this envelope of definitions, because it distinguishes, as you say, teams from other social structures that we shouldn't treat as teams. So if a team is really a collection of two or more individuals, structured or constrained in a sense by the components that Steve just shared with us (expertise, skills, boundaries, perspectives, a common goal), in what way is a team better than the sum of its parts? Can a team be worse than the sum of its parts?
Samantha Perry: Yes.
Daniel Serfaty: Yes. [crosstalk 00:30:53] Okay. So tell us a little bit about that, Samantha and Tara, and notice that you will be graded on this particular subject.
Tara Brown: Well, I’m going to go back to the interjection I made earlier about whether it requires two or three people, at least to make a team. I am a strong proponent of teams being three or more people, not two or more people, because I feel like a two person team as a dyad and the dynamics between two people are very different than when you introduce a third person where there can be a two versus one situation or other kinds of in-group out-group types of behaviors that I think just add a layer of complexity to teams.
Stephen Kozlowski: I would point out that underneath that structure are dyadic linkages. So this point of two versus three, I get all of that, and I've seen these debates go on in the literature; I think it's kind of a dog chasing its tail. [crosstalk 00:31:46] At the end of the day, you've got social linkages, and it's how those social linkages play out over time that's really important. So you do get more complexity if you have more than two.
Tara Brown: You get more complexity, which I think is where the real fun stuff happens. So, anyway, I'll answer your question. I think a team, a high-functioning team, what I would call an effective team, always is greater than the sum of its parts. I think there is a synergy that happens when you bring individuals together, and I think that synergy is at its greatest when the task or the activity that that team is performing requires a high degree of interdependence. And so, when a team needs to draw on expertise from multiple individuals and really coordinate to get activities done, I think there's the potential for that synergy to come out, like Sam was saying about the brainstorming activity. There's something that happens, that magic that happens, when you bring multiple minds together that should generate something that's greater than the sum of its parts. And I think that comes through those unique elements of being a team, like cohesion, something that you can only experience in teams; you can't experience that as an individual.
So these emergent states that evolve are kind of the unique qualities that I think create the synergies of a good team. But equally, if you have a team that's not high-functioning or effective, where these processes and states are deficient or even toxic at times, I think the team environment, the team context within which individuals are working, can actually impact the individuals' performance to the point where they are no longer contributing as effectively as they would outside of that team context. And so, I think there are definitely examples of both.
Samantha Perry: That made me think of an individual-level construct that my dad, Paul Barton, introduced me to when I was little: psychological flow. You have this ease of thinking, you just get in the zone, if you will, when you're writing or doing something at the individual level. And that's what I was thinking at the team level: when you're brainstorming, you're in this flow with the other people in your group, and ideas are just bouncing off of each other. It's different. It's different than just sitting there writing a paper. If you're not in this flow, this psychological flow, it's like the white page phenomenon. You're just staring at the screen, you're staring at your notebook, and you have nothing going on and you can't think about what to do next. But sometimes, if you're in that flow, ideas just stream.
And I think that’s the same when you’re in the team. And it helps you get out of that when you are working with other people. And so it’s not just, I could sit there looking at a page or I could just call Tara up and say, this is what I’m thinking. We’ll talk for five minutes and then I can go and write for two hours. And it’s just that different, it kind of spurs your thinking, there’s something else that happens. So I do think dyads can be teams, but you know, I think there are key benefits to be able to talk with other people because it gets you out of your own head. And it has you articulate concepts that help you be more effective.
Tara Brown: Yeah. I think there’s a knowledge generation that happens as different individuals provide pieces of knowledge and different perspectives. It’s not necessarily that you just have the sum of all the different perspectives and all the different ideas, but those ideas merge and blend and get refined. And over time, the end result produces something that no one individual could have produced by themselves.
Stephen Kozlowski: It can be behavioral too. I don't like using sports as exemplars, because I think they get overused, but for this particular question they're apt. So, famously, I don't remember the year, but there's the US hockey team [inaudible 00:35:30], basically a bunch of rookies, Olympians, but they outplay the Russian national team because they collaborate better. They interact better. They coordinate their play better. And then, famously, we have the US basketball dream team at the Olympics: all NBA stars, all individually fantastic, but they're basically just there to be in the Olympics. They don't coordinate very well. They don't play well together. They're just playing as individuals, and they get defeated. So they were less than the sum of the parts, and the other team was more than the sum of the parts.
Tara Brown: Yeah.
Stephen Kozlowski: And therein lies the magic.
And that’s why we need the science to understand really how to extract that extra energy and minimize a waste of energy that comes from basically a team that is not well [inaudible 00:36:20] adjusted, coordinated, cohesive, et cetera. Steve, I’m going to ask you to do something if possible, which I love to ask people to do because usually they come through. You publish so much and your research is so rich. And also varied each time. Can you share with the broadcast audience, some of the key milestones in your research of that covers several decades of team research, some of the key milestones, the key moments, when you say, “Wow, that changes a paradigm. That changes my understanding of teams.” Can you pick two or three of those out of the 500 plus publication that you have?
Stephen Kozlowski: So remember, I got into teams because it was a way for me to understand systems. Not to over-jargon this, but it's what we would describe in organizational psychology and in management as multilevel theory. So, trying to understand individuals, groups, organizations: theoretically, how do these constructs, these concepts that we talk about that are in your head, right? Cohesion is a perception that you have, or a mental model; that's something inside your head. How can we talk about that and measure it as a collective construct? So, thinking about that theoretically, and helping to think about how we do this methodologically. I'm certainly not the only one, but it was a small group of people interested in doing that for about 20 years. And we were at best a boutique area in science. It was not mainstream. Most people didn't get it and didn't want to get it.
But by the turn of the century, we were able to get some traction. I worked on editing a book with Katherine Klein, which really pushed that out. It basically laid out a set of principles for how you could do this. And again, it was not just us writing it, but mostly synthesizing, and being able to take some risks about thinking about things in new ways. So that made it possible for team research, which a few people were pursuing, but not many, to go really mainstream in organizational science across the board. If you just look at how much team research was being done, or multilevel research, because they tend to overlap a lot, it just takes off at the turn of the century. And a lot of that was really fundamentally founded on building these principles, giving people some tools; reviewers then knew what to look for when you sent in your paper, instead of rejecting it.
So it really changed the nature of research, and I did research in there as well. I want to talk about methods when we get to dynamics, but I think methods are really a big part of advancing the science, and that's often not recognized. To me, that was a big milestone. And I would say, a decade later, looking back on this, I'm like, "Gee, that was cool. I had some good points there." But I also recognized that half of what I wanted to do was getting done and the other half was not, and the half that was getting done was the easier half. We could learn how to think about these collective constructs and have some, I'll call them measurement principles, good rules of thumb for how you can collect data from individuals and aggregate that up to represent something like a team or even a larger social unit.
So if you follow the rules, you have a means by which you can create data, but it limits what you can study. What you're studying are mostly statics rather than dynamics, and mostly how higher-level phenomena, how the organization, limit or constrain the team, or how things at the organizational level can influence the team. So, the nature of the organizational structure, how flexible or how rigid it might be, influences workflow systems and technology design, which influences me, the person, because my job is a piece of this, right? So we could see how the top influences the bottom. We didn't get to look at how the bottom percolates up and becomes collective, or comes back and influences the top. That part was not being done. And it's kind of interesting, because that's about the time that Tara and Sam and this other group of students got to Michigan State, and it's not like I was unaware, but I was thrust into a position.
There were opportunities to get funding where we could take some risks and begin to do the hard work of, how do you study team dynamics? And so, I would say, writing about and thinking about, and really helping to think through, new methods for collecting data, high-frequency data, where you can begin to capture some of the dynamics of how these phenomena unfold over time. How do they start from a thought in my head and become something that is now this tangible collective construct? This is what I've been doing for the last decade or so, and for me, at least, it's a paradigm shift. It's less productive, by the way, too, because it takes longer and we're trying to pioneer new methods. So these guys were involved in medical team research, which is very laborious. You have to create the scenarios; you would know, Daniel, because you've done this as well.
You've got to get behavioral markers, you've got to train coders, you've got to extract the behavior from video. So you create simulated situations, you put people in them, you're looking for particular behaviors, and then you can extract them, and they show you a story over time of how the team performed. What did they do? And with the medical teams, did the patient live or die? Simulated, folks, so it's safe, which is where you want to do this kind of research.
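For readers who want to see what those "measurement principles" look like in practice, here is a minimal Python sketch of the classic aggregation rule of thumb Steve describes: check within-team agreement before averaging individual ratings up to the team level. The team names, ratings, and the 0.70 cutoff are purely illustrative, and the rwg agreement index is a standard tool from the multilevel literature rather than anything specific to these studies.

```python
# Collect an individual-level rating (e.g., a 5-point cohesion item), check
# within-team agreement, and only then represent the team by its mean score.
import statistics

def rwg(scores, n_options=5):
    """James/Demaree/Wolf within-group agreement for a single item.
    Compares observed variance to the variance of a uniform (random) null."""
    observed_var = statistics.pvariance(scores)
    null_var = (n_options**2 - 1) / 12.0  # variance of a uniform 1..A response
    return 1.0 - (observed_var / null_var)

teams = {
    "alpha": [4, 5, 4, 4],  # members largely agree -> aggregation justified
    "bravo": [1, 5, 2, 5],  # members disagree -> a mean would be misleading
}

for name, scores in teams.items():
    agreement = rwg(scores)
    if agreement >= 0.70:  # a common (and debated) cutoff in the literature
        print(f"{name}: cohesion = {statistics.mean(scores):.2f} (rwg={agreement:.2f})")
    else:
        print(f"{name}: members do not converge (rwg={agreement:.2f}); do not aggregate")
```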
Daniel Serfaty: We'll be back in just a moment, stick around. Hello, MINDWORKS listeners. This is Daniel Serfaty. Do you love MINDWORKS, but don't have time to listen to an entire episode? Then we have a solution for you: MINDWORKS Minis, curated segments from the MINDWORKS podcast condensed to under 15 minutes each and designed to work with your busy schedule. You'll find the Minis, along with full-length episodes, under MINDWORKS on Apple, Spotify, Best Buy, or wherever you get your podcasts. I want to go back to the notion of methods, because for many members of our audience it's kind of a mystery. How do you go and study teams? They understand the concept; they are all parts of teams. And there is as much innovation in the findings as there is in the methods themselves that led to those findings.
And I really want to explore that. But before we get into that, I wanted to ask both Sam and Tara: it's like in political debates, once you mention somebody's name, they get another minute to talk. So since you mentioned both of them: is there one particular key idea, kind of an "Aha!", that you had as you were reading all this rich literature, and also participating in and generating your own? Is there one concept that really appeals to you in this whole theory of teams, study of teams, methods to study teams? Sam.
Samantha Perry: So I'm going to maybe answer your question, but first I wanted to point out one of my fondest memories, maybe the first time that Steve and I talked at MSU. I had just arrived, and he told me he wanted me to be a part of his NASA team. And he promptly said that he wanted me to work on figuring out unobtrusive methods to measure team dynamics in these NASA teams. And I remember being kind of overwhelmed, but kind of excited. Like, this is a new way of thinking about team dynamics and what kinds of behaviors we can capture. It was my introduction to a new methodology, and to really digging deep and thinking about this from a new perspective, and that perpetuated a line of different ideas and thinking that I engaged with, with Steve and the other students in his lab.
And I remember, maybe not one specific construct, I know we worked a lot on the construct of cohesion, but the comparison to known methods like self-reports. We struggled a lot with how to understand behavioral metrics and the associated self-reports, usually through construct and criterion-related validity. Anyway, those are science-y words, but basically you want to make sure that you're measuring what you think you're measuring, and so you need to use established methods to do that. But the problem is, you're in a different head space when you're answering a question. If I ask Steve how cohesive we are as a team, it's going to be different than looking at the behaviors of whether or not we hang out, whether or not he comes and talks to us when he has a problem. Those types of behaviors are different in kind from asking for a summation of his approach to our team.
And so, it's not necessarily construct [inaudible 00:44:50]. You know, I may or may not be answering your question, Daniel, but that's something that has been highlighted to me over the years, and why, I think, Steve, it's so hard to be productive in the literature: because there are so many barriers to get over. How do we describe to the academic community that this is a meaningful, purposeful way in which to pursue the dynamics?
Daniel Serfaty: I think, if I hear you well, that first challenge, at that first conversation when you entered the professor's office and he challenged you like that: I am amazed that after this high challenge you stayed at Michigan State.
Tara Brown: It was literally in the first five minutes.
Daniel Serfaty: Okay. But this notion of looking again at a team as a living organism, with observables, and thinking, I can actually measure those things that Steve told us are literally in the heads of people, cohesion or mental models; that we can actually, in an unobtrusive way, without asking people's opinions, measure something, and from that measurement infer the hidden variable, in a sense. I think that's a key idea, a transformative idea, at least for teams. Tara, one key idea.
Tara Brown: I think one of the things that caught my attention, and that I've been grappling with ever since, is to really distinguish between longitudinal studies of static snapshots of cohesion and other states, versus really studying the dynamics and the emergence process of those states. And so, one of the things that I've really been thinking a lot about since my time at Michigan State, and even as we've studied cohesion and other team states within the context of some of our work at Aptima, is: what is the right way, and what does it really look like, to actually study the dynamics of the emergence process? I think oftentimes we don't consider the temporal nature of teams and really think about where they are in that emergence process when we're measuring their cohesion level or their trust level or other things.
Stephen Kozlowski: Yeah, just as a clarification: say a team performs a two-hour mission, a military team as an example. Are you talking about that timescale, or are you talking about where they are in their life as a team over the years?
Tara Brown: The life cycle of the team. So they’re [crosstalk 00:47:21].
Stephen Kozlowski: They're different. The methods that are dominant have been dominant for a century, and I'm not slamming the methods. They've been very effective and very productive. But essentially, you know, we might think about diversifying a bit. The essence is asking questions, and having a lot of rules to make sure that the answers to those questions tap the concept that you're interested in, because often it's not an observable, it's something in the head, right? So we have a lot of rules on that in psychology, and we've got that down. But once you have that measure, you can correlate it with other measures, and you can have some statistical techniques, very fancy techniques, but underneath they're still correlations. So you're looking at relationships. It's important, it tells us that this is related to that, but it doesn't necessarily tell us why it's related.
So at some level, the methods have to advance, and in some ways let go of some of the rules, because if you want to measure more frequently, to begin to unpack it, then you have to measure things that are quick. So they're either single-item questions, if you're still stuck on questions, or they're behaviors that you extract from video, or in a laboratory we can look at button pushes: this sequence of buttons means they were trying to accomplish this. Or, you know, we do some modeling as well, where there are software agents that are behaving according to theoretical mechanisms and we can study them at scale.
But this is all removed from the dominant "let's ask people questions and correlate those data," and moving into, how do we actually capture these snapshots? And I think Tara raised a really key point: what is the timeframe? Well, the timeframe really depends on the phenomenon we're trying to capture. Some might require days, months, years, and some might be 20 minutes. The method has to match that unfolding, and I think that's not well appreciated. And of course, part of it is, the toolbox is under construction.
Tara Brown: Yeah. There's no real theory that tells you what the right temporal dynamics are. I think that's where the theory is lagging behind the methods, and it could inform the methods. And so we're developing theory as we go. But the other thing that I'll just touch on quickly before we move on is that it's commonplace, in the way we study cohesion and other emergent states, that we look at the average of the team and we say, are they high or low on cohesion? Then, does that average change over time, if we sampled them this month and the next month, or this hour and the next hour?
But what we don't really look at is: have they been together long enough that cohesion should have emerged, or are they still early enough in the process that it is still emerging? So, really looking at the variability across individuals and their perceptions of cohesion: is that variability growing or shrinking? Are they converging on a shared state, a shared perception of cohesion, or are they getting further apart? And really understanding not only the strength of their cohesion, but also the agreement on those states, and whether they've converged or not.
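A small illustration of Tara's point, with made-up numbers: the team mean can sit still while the dispersion of members' perceptions tells the real story of emergence or divergence.

```python
# Track not just the team's average cohesion over time, but whether members'
# perceptions are converging (dispersion shrinking) or diverging (growing).
import statistics

# One list of individual cohesion ratings per monthly measurement occasion.
monthly_ratings = [
    [3, 5, 2, 4],  # month 1: similar mean to month 3, but little agreement yet
    [3, 4, 3, 4],
    [4, 4, 3, 4],  # month 3: perceptions have converged on a shared state
]

for month, ratings in enumerate(monthly_ratings, start=1):
    level = statistics.mean(ratings)        # strength of cohesion
    dispersion = statistics.stdev(ratings)  # (dis)agreement about it
    print(f"month {month}: mean={level:.2f}, sd={dispersion:.2f}")

# A flat mean with a shrinking sd tells a very different story ("cohesion is
# emerging") than the same mean with a growing sd ("the team is splitting").
```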
Daniel Serfaty: So the three of you are actually bringing in this notion, which I guess is quite new in team research, of the life cycle of a team as a living organism. That brings up the two sets of questions I had for you, which you've started to answer. First, the question of methods: how do I study a team? Maybe we know how to study a team like an emergency medical team, whose task probably lasts a few hours. I can perhaps compress it in a lab into a couple of hours and extract the results, or observe it in the wild while they are actually doing their emergency care. But how do I study a team that evolves over ten years, or five years? What are the scientific methods to do that? In a sense, once you see that team, you don't have access to its electronic team history, like we have an electronic medical record when we see a patient.
Stephen Kozlowski: Well, organizations do. So to some extent you're pointing to what the future may look like for research, when these kinds of digital traces or behavioral traces can be fused across different platforms, with some degree of tracking over lengthy periods of time. I'll go back to: most of this research, 70-some-odd years of it, is static. That correlation is captured at one period in time, or maybe with two or three time slices. What we're trying to pioneer are techniques that allow you, even over short timeframes, to be capturing data every second, let's say, or once every couple of minutes for a longer timeframe, where you can actually begin to see how things play out, again scaled appropriately to the phenomenon.
We did a project with my medical colleague where we were filming [inaudible 00:52:16] in a regional medical center. That's about 20, 30 minutes total. They're in, they're stabilized, and they're off to the ICU. So it's a very definable timeframe, and you can study what happens to the team. In this instance, we focused on leadership in that kind of compressed timeframe. For some of the other phenomena, we've been studying analog teams for NASA, where you can look at a team for upwards of a year.
But we look at one team at a time, potentially. If you want to look at large numbers, you've got to use this technique: we do computational modeling. If we want something else, if I want it more in the wild and I want it naturalistic, you can do this other technique, but you're not going to look at as many teams. Every method has some offsetting liability in terms of the advantage or strength that you get. It's really important, at least I like, to have lots of different tools in my toolbox, to be able to use the different tools to understand phenomena that better fit one approach or another. I still do research that asks people questions, because there are some things you can't get any other way.
Daniel Serfaty: Sure. But it's an evolution. Sam, let's take the example that Steve just shared with us, because again, when you share that visual picture of the emergency care team coming in, stabilizing the patient, and moving on after 20 minutes: they have a whole lifetime. I don't know whether or not they've worked as a team before they entered that room, but this is essential data for understanding their dynamics during those 20 minutes. Sam, how do we capture that?
Samantha Perry: The idea of digital traces is something that I've been pursuing a lot; I help lead a technology called TeamVitals, which has about a decade of data that's gone into its development. Basically the idea is, how can we understand how individuals are interacting with other individuals by capturing emails, chats, any kind of interaction-based, communication-based, or even just behavioral data? The concept utilizes some social network theory, which is basically the classic spider web of who's talking to whom. But how can we use that data to understand what events are happening, what outcomes are superimposed on different interaction patterns? In that example, Daniel, which teams have worked together in the past can be tracked and traced by pulling in historical records, by pulling in which patients individuals have worked on, and correlating which patients have had the same temporal track. If you saw patient X at time one, but Steve saw him at time two with Tara, then we'll be able to know the history of how people have worked together, which can inform longevity, which can inform common knowledge bases.
Then our job as IO psychologists is to figure out to what degree that information helps us understand that team in that moment. So if Steve and Tara had worked together in the past on patient X, they share common knowledge of that patient X, Daniel. But what if there was a particular conflict, or a negative thing that happened during Steve and Tara's interaction with that patient, that caused a rift between those two people, and we never saw them work together again, whereas in the past they worked together every week? Those are different pieces of information that we can capture from that behavioral data source or historical data source, that we can aggregate and understand without having to ask them any questions. There are all sorts of caveats and things that we need to think about, but that's kind of how I would start it.
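A back-of-the-envelope sketch of the "spider web" idea Sam describes, with invented records: reconstruct who has worked with whom from shared care episodes, without a single survey question. TeamVitals itself is far richer than this.

```python
# Rebuild a co-work network from (patient, time, clinician) records pulled
# from, say, an EHR audit log. All names and events here are hypothetical.
from collections import Counter
from itertools import combinations

care_events = [
    ("patient_x", 1, "steve"), ("patient_x", 1, "tara"),
    ("patient_x", 2, "tara"),  ("patient_x", 2, "sam"),
    ("patient_y", 3, "steve"), ("patient_y", 3, "tara"),
]

# Two clinicians "co-worked" if they touched the same patient at the same time.
shared = {}
for pid, t, who in care_events:
    shared.setdefault((pid, t), set()).add(who)

ties = Counter()
for group in shared.values():
    for pair in combinations(sorted(group), 2):
        ties[pair] += 1

for (a, b), count in ties.most_common():
    print(f"{a} <-> {b}: {count} shared episode(s)")

# A tie that used to recur weekly and then disappears is exactly the kind of
# signal Sam mentions: a possible rift you would never see in a survey.
```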
Stephen Kozlowski: And you can never get that data by asking people questions. It's not the kind of question we'd ask. It's this idea of whether your dominant methods are snapshots (mostly it's just one, sometimes there are a few strung together) versus making a movie. And it's either one of those old-fashioned movies, kind of slow and clunky because you don't have that many data points, or it's literally high-def and you can just see everything unfolding.
Daniel Serfaty: But what it brings, actually, is extraordinary complexity. Perhaps even more than complexity, it's the extraordinary amount of data that is necessary to really understand teams in this new way of looking at them, which is exciting. I wish I were a team researcher again.
Samantha Perry: [inaudible 00:56:23] be a CEO.
Daniel Serfaty: To have that amount of data, data that can actually inform even the question I ask at the moment of observation, or the construct or variable I am interested in at that moment: it means that we have to carry those data with us for a lifetime, in a sense, since we're going to be part of different teams. What do we do with all this data? Is the data essential to construct the theories? Steve, now.
Stephen Kozlowski: At one level, I want to say each team is unique. Each team has its own ecology, and if I really want to be that predictive, then yeah, I probably need to follow those people working together as long as possible. So NASA sending people to Mars: that team will be metricked to the gills, or should be, and we should be able to do predictive modeling of that team. And we can talk about it, but at least in some of the work we did, we can do that now with, I'll say, relatively primitive tools. Twenty years from now, it ought to be a piece of cake. But to think about it from a more generalizable perspective, like what's more typical? I don't know that I want to be sifting through all of that data. I think that's where how to use methods intelligently quickly becomes useful. And I think these guys, Sam and Tara, are in a really great position being at Aptima, because one of the neat things about Aptima is that you have this range of skills and capabilities.
It's not just IO psychologists and the typical statistics we'd use; you have data scientists, computer scientists, who are using analytic tools that are designed to unpack the dynamics. They're not commonly known in organizational psychology, because they're a little bit alien. But these are the people I collaborate with in order to have the capacity to do the things I want to do. I would say, at scale, we rely on agent-based modeling to see what can happen to teams, and to identify, I would say, promising targets for when I want to do the very costly research with humans, rather than asking some what-if question. Well, what if we did this?
Well, let me run 200 teams over a two-year period to see if I can figure out what happens if. Or, I can run a simulation with millions of agents and say: here's what can happen, here's what's likely to happen, here's what can happen in these dark corners that I could never even get data to see in the real world, or only with great difficulty. And here's where I really want to verify a finding; I really want to see how robust this effect is with real human data. It's easy to get overwhelmed with data. Even though I would like to see more and more data, there's a point at which you say, okay, enough. How do we think about using data intelligently? When do I need the actual data from the actual people, in that great density, because I really want to predict this one team that we're sending to Mars, versus, "I need to know what happens on average, and I need to know where this team might fit in that distribution"?
Then I have a better sense of what's likely to happen to them, and if I'm trying to lead them, if I'm trying to do some intervention for the team, that would give me a lot of guidance. I would say that in the work that I'm doing, at least in pieces, we're trying to really think about how you appropriately use methods to do really different things. Understanding, to me, doesn't have to be one-size-fits-all, where we need every data point from every thought you've had about every team you've been on to figure out how you might behave on this next team.
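For readers unfamiliar with agent-based modeling, here is a toy version of what Steve describes: encode a theoretical mechanism as a simple rule, simulate thousands of teams cheaply, and inspect the whole distribution of outcomes, including its dark corners. Every mechanism and parameter below is made up for illustration.

```python
# A crude mechanism: conflict events erode one member's cohesion, and the
# erosion then spreads toward the team average (a contagion-like dynamic).
import random

def simulate_team(n_members=5, months=36, conflict_rate=0.05):
    cohesion = [1.0] * n_members              # each member starts fully cohesive
    for _ in range(months):
        for i in range(n_members):
            if random.random() < conflict_rate:
                cohesion[i] -= 0.3            # a conflict hits one member...
        mean = sum(cohesion) / n_members
        cohesion = [0.9 * c + 0.1 * mean for c in cohesion]  # ...then spreads
    return sum(cohesion) / n_members

random.seed(0)
outcomes = sorted(simulate_team() for _ in range(10_000))  # cheap, unlike real crews
print(f"median end-state cohesion: {outcomes[len(outcomes) // 2]:.2f}")
print(f"worst 1% of teams:         {outcomes[len(outcomes) // 100]:.2f}")

# The "dark corners" (the worst 1%) are visible here in seconds; with real
# teams you might never collect enough of them to observe those cases at all.
```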
Daniel Serfaty: I want to elaborate a little bit on that, because I think, as in many other fields, we are seeing a major shift between data and theory. It's almost like the data, or the data that is potentially available, is getting ahead of the theory. Before, we needed a theory first, to go and test hypotheses and collect data in order to test those hypotheses; now the data is here, and we are trying to build, in a sense, the theory, or the guidance, or the prescription, because the data is already here. That's an interesting tension that is pervasive these days in many scientific fields involving human behavior and performance.
Samantha Perry: When I think about some of [crosstalk 01:00:34] that we have, and some of the psychological phenomena that we've measured and metricked with self-reports, I feel like we've taken it for granted that that is how those constructs are measured. But I think some of the theories lend themselves to more in-depth data than we've acknowledged; we just haven't considered that in some of our theoretical development. We laid out what cohesion is, and it has an inherently longitudinal, inherently behavioral component as it emerges over time, but we measure it with snapshots, and that's something that we're comfortable with. Thinking back on the theory, can we not associate it with the data that we're capturing now in a much more realistic way? But it's difficult to make that case in the literature, even though it really does tie to the original theory. I just thought that that was-
Stephen Kozlowski: I would comment but I know Tara has something that she wants to say.
Daniel Serfaty: I would love to hear your comment after Tara’s, Steve, by all means, Tara.
Tara Brown: I have a lot of thoughts on the data-theory balance. My personal experience is that we are a bit ahead of the theory in terms of the data that's available to us now, and what that puts us in danger of, I think, is becoming too atheoretical. I think we're in this interesting tension of having to create theory. But as Sam said, I think there are concepts and conceptual information within the existing theory that we can't lose track of. One of the ways that we handle it, in terms of thinking through these unobtrusive, novel measurement approaches for cohesion and other team states, is really taking a top-down and bottom-up approach: grounding what behaviors or characteristics exist that align with how we conceptualize something like cohesion, and then matching those to the data that's available, to make sure that the indicators and the unobtrusive data that we are pulling into our measurement of cohesion are at least grounded in theory, even if the way that we compile them into the assessment is more data-driven or diverges from what we typically do within the literature.
I think having that grounding within the literature, within the theory, is really important. If we become too atheoretical and say, we have all of this data available from teams, we're going to just throw it into some machine learning algorithms, see what it spits out, and call that cohesion, I think we are in danger of ending up in a place where we can't really explain what we're finding. But the other challenge is that, even when you go through that process of developing theoretically driven indicators and gathering data on them, it's still an extremely rich but complex set of data that, as organizational psychologists, I don't think we have a way of making sense of without bringing in data scientists and folks, like Steve was saying, who can help us think about that data in a way that allows us to think outside of the box analytically.
But I think there's a decision point, and assumption upon assumption, that has to happen when you get that kind of data: how do you aggregate it, not just to the team level, but across time? And what are the assumptions you're making that drive those decisions? I would say it's easy for us to fall back on "that's too complicated," and so that's why I think our field continues to stick with the tried and true. But it's also the fact that we're bringing in these novel methods and novel approaches that are theoretically driven, but at a different level of granularity than we've typically measured these constructs at. Therefore, there's a lot of resistance in the journal and publication outlets: are we really getting at the same construct? Is what we're getting at really cohesion, or is it some behavioral result of cohesion that shouldn't really be called cohesion? I think we open up Pandora's box in a good way, but there are a lot of questions that emerge as soon as you start going down this innovative path.
Stephen Kozlowski: I like to think of myself as a theorist, so theory should reign supreme. But I would also point out that methods constrain theory. Most of the theory in my own field is basically constrained by: you're going to turn your thinking into a hypothesis, with measures of those constructs, and then you're going to use some correlationally based techniques. And the ability to correlate the data is, at base, limiting the way I think about how things work, which is why most of our theories are static and really don't think about how things play out over time.
How does a phenomenon emerge? That's not a correlation that you can examine. You really have to look at the underpinnings, and some different ways of visualizing that data, or that phenomenon, as it manifests. So rather than saying this theory has to lead everything, you have to appreciate where theory comes from. Where did Darwin come up with his theory of evolution? Not sitting in a chair, drinking a scotch or whatever, and coming up with evolution. He observed, he collected a lot of data, and then he tried to make sense of the data.
I agree with Tara, there's a danger of relying too much on machine learning techniques, where we don't know what the machine knows, so we don't know how it came to that conclusion. And of course, the quality of the data becomes really critical. But there's value in having that data, and in using those and other techniques to try to figure out what in fact is going on, to begin to inform theory, and quite frankly, to get theorists thinking more dynamically. Because most of the theories are really static. Even when people think about dynamics, they think: here's the theory at time one, and at time two, and at time three, which is not dynamics. We have complex connections, feedback loops, things of that nature.
I really think that the methods and the data can help push theory to catch up with these techniques, and we're at that point. We're at the point where that needs to be happening. I think it's an exciting time if you're interested in the dynamics of phenomena and systems, because we're now beginning to see this kind of cross-pollination from different disciplines that really helps each other out, in ways that certainly I didn't get when I was trained as an IO psychologist.
Daniel Serfaty: Yeah, I think the three of you make excellent points. This is a debate that is not just for team research, and not even just for psychological research; you see it again and again now in pharmaceutical research and elsewhere, when the data advocates, if not the theorists, say quantity will trump quality. And there is an elegance, at least for those of us who were educated in the classical way, an elegance in theory that you don't have in a massive amount of data. But that tension, as you said, Steve, is very current. We can turn it into a creative tension, and it's a very exciting time to be a scientist, because now you actually have multiple tools at your disposal. You have the data, and you have the theories, and you have the models, and you have the methods. And all of these together can lead to a deeper understanding of teams.
Well, all this discussion about the preponderance of data, and the need for theory to balance those data, leads us to the last portion of this discussion, which is the future. And I'm going to play a little game with the three of you, with your permission. I'm going to challenge you with a little problem, and we can discuss it for a few minutes. Let's imagine, hypothetically (and from Steve's remarks, we know that it's not fully hypothetical), that NASA comes to this team today and says, "Hey, you guys are experts. We are sending a team to Mars in a few years, and trying to bring it back alive to Earth. And we're going to ask you, as experts, questions. How do we compose or form that team? How do we select the particular individuals within that team? How do we train them? How do we keep the team, the cohesion of the team, over a long period of time? What is the worst thing that can happen to us, from a teamwork perspective, over a long period of time?"
Can we answer all these questions now? And if yes, let's start answering. Here you have an unlimited budget, limited time though, to start thinking about that. Let's assume that, for reasons outside the scientific realm, NASA decides that we need to send five people on that first mission. What kinds of questions, if not answers, are you going to start asking about how to compose that team?
Stephen Kozlowski: I'm going to offer an opinion here. I would say probably the biggest challenge here is how to compose the team. Selecting: there are going to be certain skill sets or experience profiles, and NASA has that down; they've been selecting astronauts for a long time. Composing a team, though, of all the questions that people might ask science to answer (here's a pool of folks and we want to allocate them to teams, or we want to build army squads, or organizations want to know who should go together), theory is lacking and the data is lacking, really sorely lacking. It's because people differ on an extraordinarily wide range of things that are potentially important to composing the team. We don't know what they all are, and we can't measure them all, or it's expensive. But more importantly, because of that, there's very little science. The database by which one could inform theory and help build theory out is really not there.
I would actually say, one of the things that my team and I are doing, this is where modeling comes in. We can compose teams on a fairly wide range of characteristics, certainly many more than you can study with real people in the real world. Then we can run simulations and see what happens to those team members. I would just say, it's a work in progress. I want to answer a couple of the others. Almost everything else here is tractable. The other one that's really difficult right now is, how do we keep cohesion over long periods of time? As I said, Sam and Tara were at Michigan State when I think I had just gotten a NASA grant, and we were just getting into doing some research with an engineering group that had built a sensor platform we were using that could track interactions, who you're interacting with.
It's only now, almost a decade later, that we're working with data where we have teams, I'll call them in the wild, but it's a controlled wild, where people are in a mission simulator. And they've spent anywhere from eight to 12 months together, living in a habitat, a facility with limited opportunities to explore the external space, pretending to be on the surface of Mars. We don't have many teams; we had to collaborate with lots of people to get access. But I can tell you what happens to cohesion in these teams over time, and it breaks down. Teams start high: they like each other, they trust each other, they're glad to be there. And basically four to six months into the mission, it varies a little bit for each of the teams, for reasons that vary a bit for each of, again, three teams, not a lot of data, cohesion begins to destabilize.
Daniel Serfaty: Is that because they’re isolated for a long period of time?
Stephen Kozlowski: It’s an immersive experience-
Daniel Serfaty: So they're isolated for a longer period of time?
Stephen Kozlowski: It's an immersive experience. You live, sleep, and work with the same people; you have limited ability to email or communicate with your friends and family outside, because NASA wants to simulate a mission to Mars. So your social world is very small, this team of five or six people, and it begins to destabilize. And it's usually just one or two people who start to feel less cohesive than the others, and then it spreads like a contagion across the rest of the team, and by the end of the mission, y'all just want to get away from each other and go home.
Now, this is eight or 12 months. If we're going to send people to Mars, that's about 33 to 36 months; eight to 12 months is about the time they would be getting there. So, to answer your question, how do we maintain it? Well, we don't have the answer yet, but we have indications, from the data we collect, that we can detect it from the sensor platform. So if you know that things are beginning to go bad, what do you do? I think a lot of it has to do with how you communicate that information to help team members maintain their cohesion, assuming that's something they want to do, because once they fall into conflict, it does not get better.
Daniel Serfaty: So basically you want to use the tools at our disposal now to remotely diagnose the onset, or to have a leading indicator that will tell you the team is about to lose something, and intervene at that point?
Stephen Kozlowski: What we were proposing to NASA was not reporting back to Big Brother and having someone communicate with the team. I mean, you could have a range of interventions, but basically, how do you help team members self-regulate their social cohesion with each other? If you've ever had a conflict or a problem with somebody, you might not know right away that you got somebody angry: you did something, said something, didn't do something, didn't say something that you were supposed to say. So right now there's something going on and you don't know about it, and the longer you don't do something to resolve it, the worse it's going to get. All of us vary in our social perceptiveness. Some people are really good at this, because they monitor a lot and know what to do, and a lot of people, well, they don't monitor or they don't know what to do.
So if you could provide feedback and some guidance, and think about how this could roll out across the team, and make it a self-management tool, not a Big Brother tool: that's what we had proposed as a kind of architecture, an asset. [crosstalk 01:14:26] that latter part, but we do have the sensor platform, and that technology got transferred to NASA, so it's theirs to figure out what they want to do with it. But we have data that shows that, using really very simple algorithms that track how frequently you interact with people and how that changes over time, we can predict social cohesion, and in particular, breakdowns.
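To make that concrete, here is a minimal sketch of what such a "very simple algorithm" might look like: compare a pair's recent interaction frequency to its own running baseline and flag sustained drops as a leading indicator. The counts, window size, and threshold are hypothetical illustrations, not NASA's actual parameters.

```python
# Daily interaction counts for one dyad; a drop well below the dyad's own
# recent baseline is flagged as a possible leading indicator of breakdown.
from statistics import mean, stdev

daily_interactions = [12, 10, 11, 13, 12, 11, 12, 10, 4, 3, 2, 3]

WINDOW = 7          # days used to establish the dyad's normal rhythm
ALERT_SIGMAS = 2.0  # how far below baseline counts as a departure

for day in range(WINDOW, len(daily_interactions)):
    baseline = daily_interactions[day - WINDOW:day]
    mu, sigma = mean(baseline), stdev(baseline)
    today = daily_interactions[day]
    if sigma > 0 and today < mu - ALERT_SIGMAS * sigma:
        print(f"day {day}: {today} interactions vs baseline {mu:.1f} "
              f"-- possible cohesion breakdown in this dyad")
```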
Daniel Serfaty: That's great, and here I was trying to give you a hypothetical problem that you've been working on for years. Let's take the next step. Samantha or Tara, pick up the answer. Let's assume that those sensors are on people and they collect data. The data and the theory collaborate to tell us that something is about to go awry, and they alert the team members, on month seven, somewhere, getting closer maybe to Mars, that things are not going so well with the team. Two questions: how would they know that, hypothetically? And second, if you were there to advise about an intervention that they could do, what would that be?
Samantha Perry: I think it depends on whether the situation is task-based or not. I don't know if we covered this, but this idea of task cohesion and social cohesion: they're complementary in nature, but based on different behaviors. Task cohesion is really focused on what the job is, and social cohesion is how much you want to spend time with people outside the work. We tend to focus on task cohesion in particular when we're at our workplace and in our jobs, but these environments are so interconnected that, should you ignore the social cohesion part in the selection or in the team-building part of establishing these teams, it can be catastrophic, because the team is your social network in addition to your work. There is no work-life balance; it is the same thing. And so I think not recognizing that can be really quite terrible. To answer your question-
Daniel Serfaty: I won’t let you get away with that answer. Let’s assume that you can actually measure the task-based versus [crosstalk 00:01:16:30].
Samantha Perry: Exactly, yeah, I just wanted to highlight that in case our listeners aren’t used to hearing task and social.
Daniel Serfaty: How would you do the hard work of suggesting or provoking an intervention? Even without the big brother model, so that people can self-diagnose and then self-remedy.
Samantha Perry: You can give individuals feedback, like when you have a Fitbit or something, and it says, “You’re only at 8,000 steps and you have an hour left of your day.” It highlights, like, “Hey, you have so much time left to fix this.” In a conflict situation, you might say, “Hey, a couple of days ago, you had this point of conflict. You haven’t talked to that person since. You might want to go talk with that person because you haven’t engaged after that.”
Maybe it's a high-arousal situation: you both had really high blood pressure or something, let's say there was a sensor that captured that. Then, if there was this non-interaction from that time point, and I'm just making this example up off the top of my head, perhaps there's a system that's able to predict: "Hey, we've found in past data that when you have a conflict event and you wait three or four days, that becomes a problem. But here you have a little bit of time to fix this before it becomes a problem." You can give that to an individual, and say three or four or five days go by, maybe the system is able to tell the leader of that small team: "Hey, let's have a team meeting. Maybe let's do a team activity. Let's build some social cohesion again. Let's go have some space drinks."
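Sam's Fitbit-style rule can be stated almost literally in code. Everything below, the names, the dates, and the three-day grace period, is invented purely for illustration.

```python
# If a conflict event between two people is followed by N days of silence,
# nudge them before the rift hardens.
from datetime import date, timedelta

conflict = {"pair": ("alex", "blake"), "on": date(2021, 3, 1)}
last_interaction = {("alex", "blake"): date(2021, 3, 1)}  # nothing since
today = date(2021, 3, 5)

GRACE = timedelta(days=3)  # how long silence can run before a nudge

silent_for = today - max(conflict["on"], last_interaction[conflict["pair"]])
if silent_for > GRACE:
    a, b = conflict["pair"]
    print(f"Nudge to {a}: you and {b} had a conflict on {conflict['on']} "
          f"and haven't talked in {silent_for.days} days -- "
          f"past data suggests this is the window to repair it.")
```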
Daniel Serfaty: So your suggestion is to have some kind of presence, I dare say an artificial intelligence, that is there monitoring, roaming around, knowing the theory about team cohesion, collecting the right data, and then suggesting some solutions. Basically it becomes, not truly a team member, because it doesn't replace the function of a team member, but a kind of rolling attendant that is there trying to help the team. Tara, do you agree, or do you think that's absurd?
Tara Brown: I guess it depends on how you talk about artificial intelligence and what that is, and that’s a whole other discussion that we’ve had. But I think, for me, what is key is collecting the right data that can identify when cohesion might be going in a downward trend, and then providing some sort of feedback via some display, some alert, something at the individual and team level, to make them aware of it. Because I think there’s different levels of intervention that can be used to help guide, as Steve said, this team self-regulatory process.
There might be situations where cohesion is declining and the team is not aware of it. So the simplest intervention might simply be making them aware that their cohesion is declining, and providing them with specific indicators from the data that help them understand what is changing. So, we're seeing less frequent interactions, or there seems to be more negative affect being displayed in communications, or whatever. Letting them know what you're seeing in their interaction patterns that indicates cohesion might be off helps them, at a very initial level, be aware that there's a problem and understand what is indicating that there's a problem.
Beyond that, there are times when they might be very aware that cohesion is going down the tubes, and so it's a different type of intervention: providing them with strategies for how to start repairing it. And frankly, helping them see why it's important for that cohesion to be repaired, and reminding them of the consequences of it going beyond repair; is there a point of no return for that cohesion? So reminding them of their mission: what is their shared mission? How is this getting in the way of accomplishing it? Having them come back around that shared mission and reunite around it, even if there are social frictions, helping them come back around the mission that they signed up to go accomplish.
So I think there are interventions along that spectrum: giving them other strategies for how to engage in conflict resolution. Some people don't know how to resolve conflict. And so, if there is a conflict that is causing the decline in cohesion, I think there are conflict resolution strategies that could be provided, and making them aware of that.
And I think just having that kind of real-time feedback, allowing them to see trends, not always having to push an alert, but providing a system where they can monitor. Very simple: things go from green to yellow to red. "Oh, that's a very simple cue that something needs my attention." And I think it can't go through the Big Brother system down on the ground, because one thing we know about the Mars mission is they're not always going to have communication with the ground, and if they do, it can be significantly delayed. So I don't think that's the right path. I think it's got to be feedback directly to the team, and the whole assessment and feedback and intervention needs to be all around them learning to self-regulate, resolve conflicts, and repair cohesion.
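The traffic-light cue Tara describes reduces to mapping a recent trend onto a status color. A trivial sketch, with arbitrary placeholder thresholds:

```python
# Fit the recent trend in a cohesion indicator (here, a crude average change
# per measurement occasion) and map it to green/yellow/red for the crew.
def cohesion_status(recent_scores):
    diffs = [b - a for a, b in zip(recent_scores, recent_scores[1:])]
    slope = sum(diffs) / len(diffs)
    if slope >= -0.02:
        return "green"                                # stable or improving
    return "yellow" if slope > -0.10 else "red"       # gentle vs steep decline

print(cohesion_status([0.82, 0.81, 0.82, 0.80]))      # green
print(cohesion_status([0.82, 0.70, 0.59, 0.47]))      # red
```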
Stephen Kozlowski: And this is where it’s not just “the team,” because the conflicts are often between a person and the team, or two people, or two on one. Now you’re really having to differentiate, not just individual and collective, but where has the friction developed in the network? One of the things we can see with these data, because we can look at interaction patterns now, basically for 16 hours a day, over months and months and months, is when there are these friction points. Often it’s not the collective; it really starts with a couple of people, and it changes the structure of those interaction networks.
And the danger is when you have a structure that’s more or less persistent, and then you’ve had a conflict that changes the network forever beyond that conflict point. The network does not recover. If you’re not able to detect that that could happen, or intervene quickly after it does, then there’s a change in that social system: it’s got a new attractor, so it has its own equilibrium now. It doesn’t go back to “Hey, we’re all interconnected.” It’s more differentiated. And so you’ve lost the opportunity to fix it.
Daniel Serfaty: It’s almost as if, by then, you have created a new structure; it evolves into a new structure that is not reversible, in a sense. So I like this visual idea. Maybe NASA will have, the same way they will have a status board to tell them how fast they’re going, or how close to Mars they are, or the health of the hydraulic system in the capsule, a board for team health.
Stephen Kozlowski: Dashboard.
Samantha Perry: Or TeamVitals, if you will.
Daniel Serfaty: Yeah, or TeamVitals. But in space, it’s kind of the Fitbit on steroids, in a sense. But joking aside, one thing that you’re pointing out is that just a static snapshot is not going to do it. You’re going to have to have a sense of the dynamics and the evolution, and maybe stop, basically, the team disease, in a sense, before it metastasizes into the entire team body.
Stephen Kozlowski: It’s an instance, too, where you’re going to need a lot of data on that particular team. So we’re not interested in a hundred teams; NASA in particular is interested in this one team for the next three years, and for the three years before that when they’re training, and the three years after that when they’re debriefing and training other people, or what have you. So you’re really looking at the ecology of a single team, and this is where some of the deep data techniques, the artificial intelligence based algorithmic techniques, really have a lot of promise. I mean, one of the things that we observed, again with small numbers of teams, is that some people interact a lot, that’s their normative behavior, and you see that reflected in what those patterns look like within a team. In others, not so much, but that’s normal. So there’s no one-size-fits-all here, no “Gee, they didn’t hit what we think is the mean or the average.” It’s got to be calibrated on that particular set of teams, and those people, how they’re uniquely interacting when they’re doing it well, and then you mark departures from that.
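[Editor’s note: a minimal sketch of the calibration idea Steve describes: establish a baseline of interaction frequency for one particular team, then flag departures from that team’s own norm rather than from a population average. The data, baseline period, and z-score threshold are illustrative assumptions.]

```python
# Hypothetical per-team departure detector; parameters are assumptions.
from statistics import mean, stdev

def departures(daily_interactions, baseline_days=30, z_threshold=2.0):
    """Calibrate on this team's own first `baseline_days` of interaction
    counts, then flag later days that depart sharply from that norm."""
    baseline = daily_interactions[:baseline_days]
    mu, sigma = mean(baseline), stdev(baseline)
    flagged = []
    for day, count in enumerate(daily_interactions[baseline_days:],
                                start=baseline_days):
        z = (count - mu) / sigma
        if abs(z) > z_threshold:
            flagged.append((day, count, round(z, 2)))
    return flagged
```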
Daniel Serfaty: That’s a sense of the future here, Steve, because in a sense it’s a new kind of individual differences, but for teams: understanding that each team is different, each team should be treated differently, and each team should be trained and augmented and taken care of over time differently. That is perhaps the ultimate goal here. It’s really a very exciting, even ambitious perhaps, but exciting view of how to deal with team research in the future.
Stephen Kozlowski: This is a high-value team, right? It’s going to cost bazillions of dollars to send a crew to Mars. So we’re very, very interested in the outcome. It’s worth that investment. And there are going to be other high-value teams in organizations and government where it really, really matters that they’re working together optimally. And so you want to put in all the resources that you can afford to invest to help them function as effectively as possible.
There are other instances where maybe an off-the-rack solution is going to be really helpful and fine, so I think there are some gradations in this, but yeah, I think the future is customizable.
Daniel Serfaty: Maybe that is what my very last question, for which I will require a one-minute answer from each one of you, and we will end with you, Steve, actually, is exactly about. We tackled a few things, and perhaps what is now a super-designer solution for an elite team of astronauts that will go to Mars will, 20 years from now, basically be the popular solution for management teams in corporations or in the army, et cetera. That will be something.
I’d like you to think for a second, given what we know today, given that shift you have pointed to toward theory, and looking at teams over time, and looking at this interplay between the data that we collect and the data we use to optimize the performance of those teams. Close your eyes and open them, and it’s now 20 years from today, the end of 2040. What have we done in the past 20 years that is really transforming our ability to understand, and maybe even help, teams?
Samantha Perry: So the thing that comes to my mind is being able to measure not only the people, but the context, and understand what the components are that are really critical in those contexts. Thinking about the requirements for effectiveness in an astronaut team is entirely different from an executive team. There may be shared components, but the core attributes that a team must embody for it to be successful are likely going to be different, and require different understanding and measurements, in order to track whether or not that team is hitting the milestones of effectiveness that that context requires. And so, whereas our entire conversation has not been about context, I think that’s going to be a critical thing for us to really capture in our measurements, and even in our theory of teams. How do we establish the effectiveness of teams, now that we have so much data, given the different contextual environments in which those data are being captured?
Daniel Serfaty: Thank you, Sam, for that. Yes. I think we talked implicitly about context, in a sense. As a system engineer, I cannot think of work that doesn’t take into account context. I know it’s new for a lot of disciplines, but you cannot just design a controller without understanding the context in which that controller, that decision-making operator, actually operates. You need that.
Samantha Perry: I think a lot of the teams literature attacks it, but not necessarily directly.
Daniel Serfaty: Yes.
Samantha Perry: [crosstalk 01:28:29] bounds the theories based on context, as opposed to articulating the role of context in the theory, and that’s more the distinction I’m trying to make here.
Daniel Serfaty: Which will, again, increase significantly, maybe drastically, the amount of data you have to carry with you in order to truly understand what the team does and why it does it that way. Tara, your prediction?
Tara Brown: I second what Sam just said. And I think one of the important aspects of contextualizing is interpretation. What good and bad look like in these teams’ states and processes might vary significantly based on context. So I think context is important for interpretation, and for prediction as well.
One of the things that I keep thinking, just given the conversation with you and Steve a few minutes ago, is that teams are as unique as the humans that comprise them. And one of the things that we’ve seen in the individual learning and training literature is this move toward personalized, tailored learning and training experiences. And I think, as we gain all this data about teams, and as we will, I think, have environments and people more fully equipped with sensors that provide a more consistent, continuous assessment of them, we will be in a position where the way we train and augment teams becomes very personalized. Whether that is through artificial intelligence, what we call sidekicks or digital assistants, something that comes alongside the team and provides tailored or personalized augmentation depending on what that team needs and what context they’re operating in, or maybe it’s infusing the team with some sort of, maybe at that point, actually intelligent artificial intelligence that takes on some characteristics of a human.
And I think we will be to the point where maybe the AI is actually more intelligent and able to function more as a team member in another 20 years or so. So I think the nature of teams is going to change in terms of bringing more artificial intelligence into it. As well, because of the wealth of data and the assessments we’re going to be able to create from that data, I think there will be much more tailored and personalized training interventions and training opportunities.
Daniel Serfaty: Thank you, Tara, for that vision of high personalization and individualization of teams. Steve, take us home, and give us your 20-year prediction.
Stephen Kozlowski: Well, I’m going to build on both Tara and Sam, because I think these trends have certainly become more salient from COVID, but all it’s done is accelerate them. So what trends do we see? Scientifically, we’ve been studying teams for like three quarters of a century: they’re co-located, they live together for a long time, that’s the science base that we have. But if we look at what’s happening, I’m on multiple teams, there’s a lot of churn in team membership. So really, when I think about teaming of the future, it’s not going to be that I’m on one or two teams, more or less constantly, for long periods of time; I’m part of a reconfigurable network. I’m connecting with lots of different people to perform a variety of different tasks and projects that are in different stages of completion.
I mean, this is organizational life now, and it’s just going to accelerate. So I’m going to go from a team-centric view to more of an individual one: I’m a person who’s got to manage these relationships on these multiple teams. I’m going to need a teamwork coach. I want that artificial intelligence, but not aimed at me on one team; helping me work with a variety of teams. I’m on so many that my ability to maintain all those relationships and remember all the things I need to remember might get somewhat challenged. I mean, we build trust when we’re with people a lot, and together we get to know their unique characteristics, and that won’t completely go away. But if I’m on 50 or 100 teams, I need some help here.
So I see where you’d be looking to have teamwork coaches that are helping the individuals, and talking to each other, so that it’s not an individually based solution, but a more collectively based or dyadically based one. So it’s much more about, not the individual, but the networks that create the various teams that I’m on, and helping me manage changing relationships. Because the way I deal with Tara in a conflict is probably really different from the way I might have to deal with Sam or somebody else.
So understanding a little bit about what are the best styles to use, et cetera, et cetera. So I would see it as this kind of customizable solution, really on steroids. And when I talk about some of the NASA research that we’ve done, or the sensors, there’s invariably somebody who brings up the ethics of all of this. And I just bring out my smartphone and I say, “Do you have location services enabled? Because right now you are providing data, data about where you are and who else is around you, to a variety of entities who are using that data right now to draw inferences about you.” And maybe they’re anonymizing the data, so they don’t care about you per se, but those data are being used to understand the dynamics of group behavior, so they can sell me things mostly.
But I’m a social scientist, I want to be able to use this computational social science to understand human behavior in ways that heretofore have not been possible, and to use it to help people do what they want to do, accomplish what they want to accomplish. So I don’t think it has to be where this is some kind of big brother, but these are tools to help you navigate this much more complex social world that we’re going to inhabit in the future.
Daniel Serfaty: Well, bravo. What an exciting vision, using all this technology and this data for good, as opposed to just for selling or commercial purposes. I think this is the mission of the social scientists of tomorrow. And thank you, Samantha and Tara and Steve, for sharing all your thoughts with us. You really have increased our knowledge in a very meaningful way in our exploration of the magic of teams.
Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS podcast, and tweet us @mindworkspodcast, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Inc, my executive producer is Ms. Debra McNeely, and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.
Daniel Serfaty: Welcome to MINDWORKS, this is your host, Daniel Serfaty. This episode is part four of our groundbreaking five-part series exploring the magic of teams. Over the course of this series we’ve gone from the ABCs of teams to the future of teams, and this week’s episode is on human-AI teams. We will explore one of the most important technological developments of our times, the marriage between Artificial Intelligence and human intelligence. We’ll go deeper into how we can use the best of both human expertise and experience and data-rich Artificial Intelligence, and examine how human intelligence can be augmented by Artificial Intelligence to not only improve work effectiveness, but perhaps even better humankind in general. My two guests today have been deeply immersed in this area, working on a portfolio of interconnected projects exploring the symbiosis of Artificial Intelligence and human intelligence, and even beyond that, laying the foundations for a whole new science of human-AI teaming.
Dr. Jared Freeman is the chief scientist at Aptima, and Dr. Adam Fouse is the director of Aptima’s Performance Augmentation Systems Division. Welcome to MINDWORKS, Jared and Adam. For the benefit of our audience, would you please introduce yourselves, and specifically what made you choose this particular domain, the mind in a sense, the understanding of humans and human performance, as a field of endeavor? Jared, could we start with you?
Jared Freeman: Thanks. So I took an undergraduate degree years and years ago in urban design, where we encountered the phrase “wicked problems,” meant to describe those difficult dynamic systems in which every move you take changes the entire social problem you’re trying to solve. After that, I was a journalist in the architecture field for a while, doing a lot of writing and a lot of design, both of which are ill-defined problems at best, right? There’s no prescription for doing them well; wicked problems at worst. So then I decided to take a doctorate in cognitive psychology, focused on human problem solving and learning. That is, how do people understand complex problems? How do they solve them? How do they learn to do this better over time?
Daniel Serfaty: So do you consider that also a wicked endeavor, in the sense that really understanding how humans solve problems is more complicated than the previous domains that you studied?
Jared Freeman: I certainly consider human-AI teaming to be a wicked problem.
Daniel Serfaty: Well, we will explore that one. Apparently you don’t consider human intelligence by itself to be a wicked problem. That’s good news. I’m obviously saying that in jest, because we know… and Jared knows better than many of us, how complicated the human mind is. Adam, what made you choose this field? Of all the fields that you could have chosen in engineering or psychology, why this particular field?
Adam Fouse: Well, ever since I was a young kid, I was interested in the interaction between people and computers. I remember when I was small and my father brought home an early Macintosh, and he was like, “This is pretty cool stuff. Let’s figure out how to do this even better.” So when I got around to going to undergrad, I did both computer science and cognitive science. I thought you really need to understand how people work [inaudible 00:04:02] this stuff well. And as I progressed in my career, when I went to get my doctorate, I wanted to look at that even more from the cognitive science perspective. I ended up doing that with some folks that were very much looking at it from this idea of distributed cognition: where cognition happens is in the interaction between people and other people, and between people and the things they use, the tools they use, and the technology they use. I was looking at that from more of a traditional human-computer interaction perspective, and that’s led very naturally to thinking about it from the perspective of, how do you bring Artificial Intelligence into that?
Daniel Serfaty: Okay, so you’re both division directors in charge of a few dozen scientists and engineers, but you’re also scientists yourselves. Can you describe for our audience what it is that you do in your day job?
Adam Fouse: I spend a lot of my time as the lead investigator on projects that are looking at this question of human-AI teaming, but thinking, say, five, 10, 15 years down the road. How do we model these things? How can we describe them in some way, using mathematics and computational techniques, to understand how to bring these things together? Can we do that in a way that goes beyond just looking at a specific problem and saying, “Well, maybe we have people do this and we have the machines do this”? Can we try to develop a more principled way of doing that?
Daniel Serfaty: Well, we are throwing words around for our audience. I promise we will put some order in them like fusion and teaming and marriages and all kinds of things when we talk fundamentally about carbon-based intelligence on the one hand and silicon-based intelligence on the other hand. Perhaps Jared, as the chief scientist of a research development and engineering organization, what is it that you do in your day job?
Jared Freeman: I’d say I have two jobs that I try to weave together. The one at the corporate level at Aptima is to envision what science and technology should and will look like in the future, and to manage internal research and development funds and our interactions with a very distinguished board of S&T advisors to get us to that future. Like many of my colleagues, I also do a good deal of technical work. I serve as the principal investigator or as a researcher on several DARPA programs in which Aptima conducts test and evaluation work on AI. And these range from a program concerning development of artificial social intelligence, to a program concerning development of tools for biological and chemical research, to a program concerning detection of deepfakes.
Daniel Serfaty: Wow. That sounds pretty involved. It’s difficult to imagine a person of your experience, Jared, a researcher of your experience, still encountering things that are new or surprising. Have you had, recently or not so recently, a big aha moment as you explore what sound like incredibly futuristic endeavors, social intelligence for machines and things like that? Were there any aha moments in the past 25 years that really surprised you?
Jared Freeman: Ironically, the strongest epiphany I’ve had in the past few months grew out of work that Aptima did in the late 1990s. Aptima was born on a contract to put computational modeling and computational power behind the design and evaluation of organizations. And that meant we had to structure our understanding of what an organization is, right? How do you represent people, their skills, tasks, the sequencing of skills? How do you optimize all of that? How do you understand if your optimization is something people can actually execute in the real world? It finally dawned on me, decades after having engaged in that work that you started, Daniel, that we can use that understanding of teams to help us position AI within teams, and even to run very large scientific enterprises, such as the 200-person organization that each DARPA program is, to make those ventures productive for the scientists involved.
Daniel Serfaty: There’s a tendency, I’m saying that for our audience, for those of us in this field to use a lot of acronyms, so I’ll try to help our audience. DARPA, by the way, is the Defense Advanced Research Projects Agency, for those of you who don’t know, and it is the elite agency really looking at pushing the envelope of science and technology, sometimes for the very long term and sometimes for the short term, fundamentally to avoid what they call technological surprise. And many of the fundamental discoveries of the last 20 or 30 years, or even more, have found their origin at DARPA. That includes, by the way, the internet itself. What about you, Adam? You don’t have the many, many, many years of experience that Jared has, and that’s about as much as I’m going to say about the age of my guests here today, but any aha moments that you can remember?
Adam Fouse: Yeah, so one thing I was thinking about is, you asked me what I do in my day job looking out further, but in fact, in my job as a director, I tend to be looking at projects that are a bit more near term. How can we take things and make them useful in the next couple of years? Earlier on in my Aptima career, there was a project that we were involved in. It was supporting analysts at the Air Force. These analysts are actually very much what you were just describing about DARPA: they’re trying to understand where technology is headed and trying to avoid technological surprise. And so their job is to collect as much information about a particular topic as they can, some new type of airplane, some new type of Artificial Intelligence, and understand where things are going so that we know what to be prepared for.
We were working on this project and trying to understand, what can we offer here that’s better than, say, just Google? Why not just give them Google? Why not just let them search all the information they can? And the answer ended up being that that can be useful for them, but you really had to understand what it was they were trying to do, what their work looked like, and how AI technologies, things like machine learning, could actually fit into that in a good way, in a way that could both help them do their job and help them learn how to do the job better in the future. And that was a really nice instance where it became clear to me that understanding the pairing is sometimes a lot more important than understanding just the base technology itself. That’s what helped to point my career at Aptima more in that direction of looking at this combination of human and computational technologies.
Daniel Serfaty: That’s a wonderful setup for the next area I would like to explore with both of you, which is a fundamental dilemma, in a sense a paradox. We know, and a lot of studies have shown, that automation and robots and AI are going to displace a lot of jobs over the next 10 years. Some people even think that 25 to 30% of the jobs on the planet are going to be disrupted by the introduction of those technologies. In some areas we’re already seeing that. So why is it still important to understand human performance, if humans are going to be replaced, taken over, perhaps, by all these technologies? Or is there something else there? Should we go even deeper? Jared, what do you think?
Jared Freeman: So I think there’s a fallacy lying in there. Every introduction of a new technology begins by changing a job or eliminating one, speeding up the tasks, making task work more accurate. But the really significant technologies, right? Think of the weapons of war, think of medical discovery and invention. They change everything, they change the tasks, they change the tactics by which we string tasks together. They change the strategies by which we choose which missions we’re going to execute. So we have to think larger than simply displacing individual jobs. We have to be able to look at an incredibly dynamic future in which there are new jobs, new missions, and figure out what we can do with that to enrich human life.
Daniel Serfaty: So you’re saying that it’s not really about elimination, but rather transformation?
Jared Freeman: Yes.
Daniel Serfaty: And that transformation forces us to double down, in a sense, on our understanding of human cognition, human performance, because it’s going to be transformed by those very technologies that we introduce. It’s almost like the introduction of the car, for example, or of the airplane: it eliminated a bunch of things, but primarily created new jobs and transformed old jobs into something else. Do you agree, Adam?
Adam Fouse: Completely, I think that it’s not so much about displacement as it’s about transformation. And I think going along with that transformation, this understanding about human performance and understand that in the context of technology is really important to help us avoid bad things and make the good things better. There’s potential when you introduce technology for all sorts of negative unforeseen consequences, and we want to make sure we avoid those. But there’s also potential for really amazing, great things to happen. And we can’t do either of those things, we can’t avoid making bad things happen or ensure that good things happen if we don’t understand what this transformation looks like to what humans are able to do when new forms of technology are introduced.
Daniel Serfaty: Yes. And we are already witnessing several examples of that today; we will explore them in a little while. Like the job of a radiologist, for example, which has changed with the introduction of Artificial Intelligence that can interpret MRI or ultrasound pictures. We are not eliminating the job of the radiologist; it’s just that radiologists have to adapt to that new reality. Hopefully, as a result, patients will get better service. So let me back up for a second, because I think our audience deserves an explanation. Humans have been working with computers for a while. Adam, you said that as a young kid you were already banging on a Macintosh or something like that. Would you explain two things to me: what is human-computer interaction engineering? What is that, and what does it consist of? Then the second question I’d like both of you to explore is, isn’t AI just a special case of this? Just yet another computer, another technology with which we have to interact?
Or is there something different about it? Maybe, Adam, you can start by telling us a little bit about what a human-computer interaction engineer does for a living, and then maybe the two of you can explore whether AI is exceptional.
Adam Fouse: Sure. So a human-computer interaction engineer fundamentally looks at what the interaction between a person and a machine should look like, with “computer” broadly construed, because these days a computer isn’t just a box sitting on a desk; it’s a tiny little box in your pocket, or it’s a refrigerator. Fundamentally, we want to ask, what are ways to make that work well? What are ways that can let us do things we weren’t able to do before? Part of that is user interface design: what should buttons look like? How should things be laid out? But part of that is also, what different types of interaction might you have?
And a great example of that is the touch-based devices that we’re all familiar with these days, smartphones and tablets and things like that. Years before those came out, say five years, a decade before, human-computer interaction engineers were building big touch tables to study how you might do things with touch. The idea of pinch-to-zoom is something that showed up in that research field years before you saw it happen on an iPhone. And that’s trying to understand and invent the future of what those interactions look like.
Daniel Serfaty: And we know what happens when this is not well designed. When that interaction is left to random acts or improvisation, we witness many industrial accidents, sometimes very fatal ones, that happen when that interaction is not well engineered. But Jared, let’s build on this notion that there is a discipline that deals with how we use the computer as a tool, how we engineer the interface so the intent of the human is well understood by the machine and the machine’s actions are well understood by the human. When we introduce AI into that machine, Artificial Intelligence, is there a paradigm shift here, or just an extension?
Jared Freeman: I think there’s a paradigm shift. I think that AI is qualitatively different from most computing systems, and certainly most mechanical systems, that we use now. And it’s different in two ways. First, it’s enormously more complex internally, and second, it has the potential, which is sometimes realized, to change over time, to learn new behaviors from its experience. This has a couple of effects. It makes it harder for humans to predict how AI will react in a given circumstance, because we don’t understand the AI’s reasoning; often we don’t even understand its perception. Additionally, current AI, at least, is quite fragile. We have all seen the examples of putting a piece of duct tape onto a stop sign, and suddenly the AI can’t identify what that object is. Yet in human teams, we put immense value on the reliability of our teammates: that they be competent, right? That they not fail when the stop sign has a piece of duct tape on it, and that their behavior be fairly predictable.
These are exactly the attributes in which there’s weakness in AI, and yet there’s huge potential there, right? Such that it’s really in a different domain from classic computing systems.
Daniel Serfaty: We’ll explore that notion of trust and reliance on a teammate, and certainly the metaphor, sometimes disputed in our field, of the AI as a teammate. Primarily because I want to explore this notion that Artificial Intelligence, unlike other machines that may be very fast or can accomplish many tasks, can actually learn, learn from working with a human, and change as a result. Adam, can you tell me about a recent project in which you are trying to do exactly what Jared is describing, combining human expertise with Artificial Intelligence’s computational power? And any insight you could provide to our audience about what you’re learning so far: what is hard in that new kind of engineering?
Adam Fouse: An interesting example is a project that I get to be a part of. It’s a project for DARPA, the agency that we mentioned earlier, that is looking at trying to identify software vulnerabilities. We’ve all seen all sorts of hacks that happen out in the world: software where there’s some weakness, so you have to update your phone once a week to patch some vulnerability that was found. And it’s a really hard thing to do, to find where those vulnerabilities exist. So what we’re trying to do is ask, are there ways that we can bring together AI-based methods to look for these vulnerabilities with human experts that know a lot about this, but also human novices that may know something about computer programming, or about different aspects of technology, but aren’t hackers?
Daniel Serfaty: Those human experts are what? What kind of expertise do they have? They [inaudible 00:20:27] or what do they do?
Adam Fouse: They spend their life looking for software vulnerabilities. Major companies like Apple or Microsoft will hire companies and say, find vulnerabilities in our software. And so they’re people that know things like, here are common ways that a software engineer might introduce a flaw into a system, and we can look for places where those might occur. Or, here’s a recently discovered vulnerability, and we can look for variations on it. So their job is to keep track of what the common vulnerabilities are, what new ones have appeared, and, from a very practitioner perspective, what tools they can use to look for these things. How can they be looking at things like computer memory addresses to say, is there something here that might let me cause some trouble? Now, that process I just described is very time consuming, and it’s a very specialized skill that isn’t well distributed. And so, as more and more software gets created, less and less of that software has that process applied to it. We need to be able to scale that.
Daniel Serfaty: At that point, can the AI help, or can the AI replace that need for very fast reaction, for very complex insight into those systems?
Adam Fouse: A good way to answer that is to say that at one point, people thought the AI could replace them. And so there was another project by DARPA that was funding an effort to do exactly that: to create fully automated systems to find software vulnerabilities. And that was semi-successful. They were able to create tools that find this kind of thing, but they definitely didn’t reach the point where they could replace what the humans do. But the interesting insight they had in the process of creating those systems, which had to be hands-off, was that if they could just get people in there to help guide things, to provide a little bit of initial guidance to cut off paths that weren’t going to be fruitful, they could be much, much more effective.
In this project we’re trying to figure out, how can we do that? How can we incorporate this insight from experts? How can we find tasks where someone has less experience but has human insight and might be able to look at something? Let’s say they might be able to look at some image output and see whether it looks correct or not, which might be hard for an automated system to do without the right context, but a human will pick that up pretty quickly. And so one of the nice insights from this is, how can we design this human-AI system to bring in input from multiple different people that have different skill levels, and multiple different Artificial Intelligence systems that might have different strengths and weaknesses, and bring that all together?
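[Editor’s note: a minimal sketch of the aggregation idea Adam describes: pool leads on candidate vulnerabilities from sources with different skill levels, experts, novices, and automated tools, weight each source by an assumed reliability, and rank what to examine first. The source types, weights, and code locations are invented for illustration, not taken from the project.]

```python
# Hypothetical triage of vulnerability leads from mixed human/AI sources.
from collections import defaultdict

# Assumed reliability weights per source type (illustrative only).
source_weight = {"expert": 1.0, "novice": 0.4, "fuzzer": 0.7}

reports = [  # (source_type, code_location, reported_confidence 0..1)
    ("fuzzer", "parse_header()", 0.6),
    ("novice", "parse_header()", 0.8),
    ("expert", "render_image()", 0.9),
    ("fuzzer", "copy_buffer()", 0.3),
]

# Sum weighted confidence per location, then rank for human review.
scores = defaultdict(float)
for source, location, confidence in reports:
    scores[location] += source_weight[source] * confidence

for location, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{location}: {score:.2f}")
# render_image(): 0.90, parse_header(): 0.74, copy_buffer(): 0.21
```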
Daniel Serfaty: Thank you, Adam. Jared, this is an example, I think, of what you said earlier, whereby the job of that cyber defender, that vulnerability specialist, has changed from tediously looking at enormous systems for small vulnerabilities to one of a guide: letting the AI look at those tedious, extremely complex systems and guiding the AI here and there. So that’s a job transformation, isn’t it?
Jared Freeman: I think it is. And in some sense, that work is quite similar to work that we’re conducting on the test and evaluation team for another program, in which the goal is to build AI that detects deepfakes. A deepfake is a piece of text, a photo, a video, a piece of audio, a blend of all of them, that is either generated by AI or perhaps elegantly edited with machine help. In the worst case, these may be designed to influence an election or a stock market decision. And so you can look at this challenge, in SemaFor, in two ways. One is simply building machinery which throws up a stoplight: this is a fake, this is not a fake, this is legitimate, right?
Or you can look at it as changing the task, right? Finding some way, as Adam said, to prioritize for an overworked analyst what to look at, but more deeply, giving them the opportunity to infer the intent of those fakes. Maybe even aiding them in making the inference about whether this fake is evil, or comical, from The Onion, or an accident. This is the deep qualitative issue that analysts don’t have the time or energy to address, and it’s what the AI will give them time to do. And the AI must also help with that.
Daniel Serfaty: Well, I hope they will succeed, because certainly society, with this extraordinary amount of information we are asked to absorb today, doesn’t let the user or the consumer of that information easily distinguish between what is real and what is fake. And we know these days that it is a major concern, as you say, Jared. So maybe AI can be used for good here, to try to weed out the bad. Let’s continue on that. You talked about this project earlier and you mentioned the term social intelligence. I would like to know, what is it? Are we trying to make AI aware of society? Aware of the microcosm of people and other AIs that interact with it? What is it?
Jared Freeman: Even a three-year-old human has social intelligence. This is labeled, in the philosophy of science, theory of mind, right? An ability to watch mom enter the room with a pile of clothes and then open the door to the washing machine so she can put them in, inferring that what she wants to do is put those clothes into the machine. The ASIST program out of DARPA aims to imbue AI with a theory of mind, meaning, a little more specifically, an ability to infer what humans know and believe to be true about the world, and to predict what action they might take. Then to deliver guidance, to deliver advice, which humans will listen to because it aligns with their knowledge, which they might comply with more readily or be able to critique more easily.
Daniel Serfaty: So in a sense are you trying to develop empathy in Artificial Intelligence? I mean, is that really what we’re trying to do? So the ability basically not only to infer actions by others, but also to understand the reason why others are taking certain actions?
Jared Freeman: Yes. I think people generally associate empathy with emotion. And certainly AI that can appreciate the emotional state of its human teammates will get farther than AI that doesn’t. But here we need to expand the meaning of empathy a bit, to say that it also denotes understanding the knowledge that others bring with them, the beliefs about the state of the world that others bring with them, and their understanding of what they can and can’t do in the world, right? So there’s a distinct cognitive component, as well as the affective component.
Daniel Serfaty: I certainly want to explore, toward the end of our interview, these kinds of outer limits of AI: social intelligence, emotional intelligence, creative intelligence, things that we attribute to the uniquely human. And hence my next question to you, Adam: wouldn’t it be easier to say, “Okay, let’s look at what humans are best at. Let them do that part. Let’s look at what machines are best at. Let them do that part, and just worry a little bit about some interface in between”? I remember what they call the MABA-MABA approach, the “men are best at, machines are best at” approach to design. Why isn’t that sufficient? Is there something better that we can do by, in a sense, engineering that team?
Adam Fouse: Well, we certainly need to be thinking about more than just [inaudible 00:28:43]. The short answer to your question is, I think that’s a bit reductive, in that just trying to break things up is awfully limiting to what teams of humans and AIs might do in the future. It partitions things in a way that doesn’t let the team adapt to new things, and also doesn’t really take advantage of what some of the real strengths are. That MABA-MABA type of philosophy is a very task-oriented way of thinking about things: what either side, the people or the machines, is doing is just about accomplishing tasks. And going back to Jared’s point about the importance of social intelligence, a lot of the strength of human teams comes from the interaction between the team members, not just, “Well, you’ve got someone that can do this thing and someone that can do that thing, and then each has their own thing, and then we get the end result.”
But they’re going to work together to figure it out. And if we can add AI into that mix of being able to work together to figure it out, then I think there are going to be a lot more opportunities opened up beyond just crunching a bunch of numbers real fast.
Daniel Serfaty: That’s interesting. So you trust our ability, basically, to build those teams the very same way we build work teams, or sports teams for that matter? Not as a collection of individual experts, but as a collection of individual experts that are brought together by some kind of secret or hidden sauce, the teamwork aspects: a particular quarterback can work best with a particular wide receiver in football because they work together well, they can anticipate each other. Is that your vision of what those human-AI teams are eventually going to be like?
Adam Fouse: Down the road, absolutely. In sports, you can have people that are glue guys that are going to bring the team together. You can imagine that same type of thing happening with the right type of AI that’s brought into a team.
Daniel Serfaty: So Jared, one of the big questions that folks are discussing in our field, based on what Adam just told us, is fundamentally this: should we humans, who are still kind of in charge of our world, consider AI as a tool, just like a hammer or a computer or an airplane? Or should we consider AI as a teammate, as another species with which we have to be a good teammate?
Jared Freeman: The answer depends on the AI. There will always be good applications of AI in which the AI is a tool, a wrench, a screwdriver for a particular job. But as AI becomes more socially enabled, and as humans learn to deal with it, I think that AI will become more and more capable of being a teammate. And this means a few things, right? It means that we might have human-AI teams that can collaboratively compose the best team for a job, right? The leader and the AI pick out three good officers, 16 cadets, and a bunch of machinery that can do a job well. It means that we’ll have human-AI teams that can work with each other to orchestrate really complex missions as they’re being executed. And it means that we will have AI helping us to look into the future of a mission, to discover where the risks lie, so that we can plan the present to meet the future well.
Daniel Serfaty: Okay. So that’s a pretty optimistic view of the future of AI, I think. Adam, tool or teammate?
Adam Fouse: I think I would give a very similar answer to Jared’s. When we were talking about empathy, Jared made the comment that we need to think about how we’re defining that, and maybe expand it. And I think how we define a teammate is something that we’re going to need to grapple with. We shouldn’t be afraid of taking a look at how we define that and expanding it, or taking some different takes on it that are broader and encompass different ways that AI might fit into a team, ways that go beyond a tool, but that maybe don’t come with the same presumptions that you would have of a human. It’s not that you interact in the exact same ways you interact with a human, making a virtual human or making an AI teammate. So we need to be unafraid of thinking about what we mean when we say “AI teammate.”
Daniel Serfaty: Okay. I like that, unafraid. That’s good for our audience, who may think, “Okay, are we entering a totally new era where those machines are going to dominate? Where is the center of gravity of the initiative or the decision? We’ve seen enough science fiction movies to scare us.” Okay, Adam, so you have been talking about the need not just to design superior AI with the new techniques of deep learning and natural language understanding, and to have experts interact with the AI, but also to look, in a sense, at how to build an expert team of both sides, each aware of the other’s capabilities, and perhaps even weaknesses, adapting to each other in order to form the best team. Jared, are you aware of a domain, or could you share with us an example, where a well-intentioned system, with automation that is well designed to supplement humans who are well trained to operate it, fails because the two of them are not designed well together? The human side and the AI side.
Will you share that with our audience? And I want you to extrapolate, because I know that you have been very concerned about that, about the measurement aspect. How do we test that those systems are actually going to behave the way we expect them to behave?
Jared Freeman: Let me draw on the most horrific recent event I can think of, and that’s the Boeing 737 Max 8 disasters, multiple plane crashes. There in the Max 8 was a piece of Artificial Intelligence meant to protect the aircraft from, among other things, stalling. And when you look at the news reports from that event, you see that the Max 8 systems read some sensor data, predicted a stall incorrectly, took control of the flight surfaces, and then effectively pitched the aircraft into the earth. 346 people died, if I recall.
Daniel Serfaty: But is that without telling the pilot that it is actually taking over?
Jared Freeman: Right. Yes, that was part of the problem. And so you can imagine part of the solution. Imagine if the 737 Max 8 were able to infer the pilots’ belief that they were in control of the aircraft. The pilots were wrong, the Max 8 had taken control itself, but that was not the pilots’ belief. Imagine if that system could predict that the pilots would apply the manufacturer’s procedures to restore the aircraft to stable flight. Even though those procedures would fail in that circumstance, the AI could then guide them away from the wrong actions. But the AI had neither of those abilities: not the ability to infer the pilots’ current beliefs, nor the ability to predict what the pilots might do next. And so it was in no position to work as every human teammate should, to guide the teammates toward correct understanding and correct behavior.
Daniel Serfaty: Hence your call earlier today for social intelligence, kind of an early form of it, if you wish. For a human team, a pretty sophisticated form of human intelligence, but for AI, still something that is a little beyond the reach of current systems.
Jared Freeman: There are a couple of very basic metrics that fall out of that story. One is simply, can AI infer the knowledge and beliefs of the human? And experimentally, we can fix those, set that knowledge and those beliefs, and then test whether AI can accurately infer them. Can AI predict human behavior at some level of granularity? Imagine, as we’re doing in ASIST, having humans run their little avatars through a search and rescue scenario in a building. Can AI predict where the human will turn next? Which victim the human will try to save next? We can measure that against what humans actually do. If the AI can make those predictions successfully, then it can guide humans to better actions where necessary, and let the humans do what they’re planning where that’s the most efficient, effective course of action.
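[Editor’s note: a minimal sketch of the evaluation Jared outlines: score an AI’s predictions of humans’ next actions against what the humans actually did, yielding a simple top-1 accuracy. The action labels are invented to suggest a search-and-rescue testbed, not taken from the program.]

```python
# Hypothetical scoring of AI predictions against observed human actions.
def prediction_accuracy(predicted, actual):
    """Fraction of steps where the AI's predicted action matched."""
    assert len(predicted) == len(actual)
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

predicted = ["turn_left", "triage_victim", "turn_left", "open_door"]
actual = ["turn_left", "triage_victim", "turn_right", "open_door"]
print(prediction_accuracy(predicted, actual))  # -> 0.75
```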
Daniel Serfaty: Thank you, Jared. That’s a pretty insightful, albeit horrifying, example of what happens when that last piece of the design, the teamwork aspect of human-AI systems, fails. Adam, I know that you’re involved in a leadership capacity on a couple of more futuristic research projects dealing with that, imagining the future of those hybrid teams of AI and humans. Can you give us some insight about what’s hard there? Why is it so hard to design those systems? And perhaps some surprises, in the sense that maybe AI could do something here that humans could not have done before? In a sense, it’s not just to repair something, it’s also to improve upon something.
Adam Fouse: I want to take what Jared said and build upon it just a little bit, because all the things he mentioned are entirely true and needed for good human-AI teaming. But I think one of the things that we also need to think about is not only making sure the AI can understand and infer and predict the human, but thinking about things in the opposite direction as well. It doesn’t necessarily mean that the human can predict everything the AI is going to do, but that there’s some ability for the human to understand what it is the AI is trying to do. And I think in the 737 Max example that Jared was talking about, that was part of it as well. The AI was meant to be magic, was meant to just operate so that the pilot didn’t even know it was happening, so that it would just do its thing, do it perfectly, and the pilot wouldn’t have to worry about it.
You wouldn’t really want a human team to be operating like that, to have no idea what someone else is doing, where it works only as long as that person does it perfectly. That’s not very believable or sustainable. One of the things we’re looking at in some of these projects, where we’re trying to think about the structure and behavior of teams of humans and AI down the road, is what the type of information that goes between a human teammate and an AI teammate looks like. When should that exchange happen? In what form should it happen? How can some rules that govern how that team is set up help to mediate those things, help to make that information flow happen in more efficient ways, ways that let each part of the team know the things it needs to know to be able both to do the job it’s trying to do and to anticipate what the team is going to look like?
What are the decisions the team is going to make? How can they keep those things aligned, so that the team is in a good position to keep working together?
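[Editor’s note: a minimal sketch of the two-way information flow Adam describes: a status message an AI teammate might send, and one simple rule for when to send it, so the human is never unaware that the AI has taken control, as in the 737 Max case above. The message fields and threshold are illustrative assumptions, not any project’s actual design.]

```python
# Hypothetical AI-teammate status message and notification rule.
from dataclasses import dataclass

@dataclass
class TeammateStatus:
    actor: str         # who is acting, e.g. "autopilot"
    action: str        # what it is doing, e.g. "pitch_down"
    rationale: str     # why, in terms the human can check
    confidence: float  # the AI's own confidence, 0..1
    in_control: bool   # whether the AI has taken control

def should_notify(status: TeammateStatus, conf_floor: float = 0.7) -> bool:
    """Speak up whenever the AI takes control or its confidence drops."""
    return status.in_control or status.confidence < conf_floor

msg = TeammateStatus("autopilot", "pitch_down",
                     "stall predicted from sensor data", 0.55, True)
if should_notify(msg):
    print(f"{msg.actor}: {msg.action} ({msg.rationale})")
```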
Daniel Serfaty: So it’s fascinating to me that you mentioned, basically, the arrow going in the other direction. You’re asking, when does the human need to know about the AI? Is the AI able to explain itself to the human, so that the human understands not just what is being done, but perhaps even why the AI is acting a certain way? Which is interesting as we look more at this notion of a multi-species system. Because for humans, it’s difficult to coordinate in a team, generally, when people speak a different language, for example, or when people have drastically different expertise, say an anesthesiologist and a surgeon. They share some expertise, but each one of them goes really deep into what they know how to do, and they have developed, basically, that shared language; they are not the same.
In AI and human combined systems, this goes beyond expertise, doesn’t it? It’s the fact that the AI thinks differently, has a different structure of intelligence, if I may say so, and that makes it even more difficult for the human to infer, just from its actions, basically, what it is thinking. So, as we look at these new hybrid systems, some people call them multi-species systems, as I just mentioned: where do you think AI being introduced into these human systems is going to have the biggest impact? In what domain? Or perhaps the earliest impact; choose to answer the biggest or the earliest, as you prefer. Is that in the military domain? Is that in healthcare? Is that in gaming? Is that even in our day-to-day jobs? What do you think?
Adam Fouse: I think one of the earliest impacts is going to be in the healthcare domain. When you look at the types of decision-making that need to happen there, the decisions that doctors need to make about courses of treatment, about diagnoses, I think there’s a huge opportunity for that to be improved, and I think there are also risks there for that to be changed in unintended ways. And I think that goes back to some of the earliest things we were talking about today on the podcast, of making sure that we understand what that transformation looks like. Because the potential upside is huge, but there are some potential downsides as well that we need to make sure we avoid.
Daniel Serfaty: Okay, Jared. Do you dare to venture a guess here on where it’s going to have the most impact?
Jared Freeman: I’m going to look in the military domains here. And I think there are good applications of AI as tools in those domains, for doing the sorts of tasks that humans already struggle with: detecting signals, tracking potentially enemy aircraft, things of this sort. But when we look to AI as teammates, I think one of the first applications will be in training. Here’s an area in which AI’s occasional deficiencies won’t get anybody killed, and good AI teammates will make a difference. So one of the programs that we run for the Air Force Research Labs is called the challenge, and in that, we are the test and evaluation component for eight different teams of AI modelers from eight outstanding AI development companies.
And we’re looking there to find ways to develop AI that can fly as an adversary in simulations so that our trainee pilots can develop their skills. And this means that these AI need to be what current simulation entities are not. They need to be resilient to the errors that trainees make, they need to be creative in the face of opportunities that trainees open up for them. And they need to have the judgment of what actions to take that will most improve the trainee’s capabilities, not just beat the trainee, right? But train that pilot. That’s a wonderful team interaction that we typically see only with good professors in good classrooms. And here we’re trying to make an AI that can serve part of that professorial role, right? That can be a good playmate in a tough game.
Daniel Serfaty: That’s certainly… this area of education and training is certainly very fertile ground, not only to create all kinds of opportunities for learning for our learners and our students, but also to personalize that learning to the particular student or learner, given their preferences, their level of expertise, where they are on the learning curve. And it is pretty fascinating to me that I have observed, in the past year or so, an interesting acceptance of AIs in people’s lives, whether it’s a trainee or a doctor. It’s interesting; I look at the two ends of the spectrum, the very young and the very old. And it seems like that suspension of disbelief that we all have to perform a little when we work with AI, despite the caution against anthropomorphizing the AI, turning the AI into a human and therefore expecting a human reaction from it, is really more present with the very young. There are some classrooms in Japan where the teacher’s assistant is actually a robot, with whom the young children interact very, very naturally while knowing that it is not a human teacher.
And I wonder, if you put the same AI into a high school or into a college class, whether or not the adults would be more suspicious of it. At the other end of the spectrum, in a lot of assisted living communities, older people have become very attached, talk about emotions, to little AI systems that remind them about their medicine, that are there like pets sometimes, and that provide real comfort at the emotional level. And so it’s interesting, maybe by looking at the two bookends here, to understand what you need to do to have a rich interaction with AI. So let me ask you a very threatening question here, chief scientist, division director, both PhDs in the sciences: can you imagine your own work being transformed, I’m not going to use the term replaced, by AI over the next 10 years? Jared, are we going to have a chief scientist with an AI soon?
Jared Freeman: Not only do I predict that AI will transform my own work, I’ve seen AI transforming our work. We have applied some of the most advanced AI to create an entity we call Charlie, who has served with you and me, Daniel, in a radio interview, who has served with you, Daniel, as a panelist on a panel at a major military training conference, and who has written parts of a proposal. So this ability to use AI for, in this case, the function of brainstorming, of serving as a visionary, is here now. It can be improved, but we’re putting it to use already. I think there are also much more mundane but difficult tasks in which I look forward to competent AI: AI that can help us assemble, from the vast number of partners we have and our large staff, the right team for the job, and negotiate all of their other obligations so that they have the time to do that job. These are management tasks that could be performed so much better, and with much less human pain, by an AI.
Daniel Serfaty: Well, our faithful audience is familiar with Charlie, as Charlie was the subject of the first two podcasts of MINDWORKS. And it is interesting that the person in your position, Jared, thinks that Charlie is moving in on your job, or parts of your job. So it will be interesting to watch the degree to which this notion of orchestration — of assembling disparate sources of information to form a new thought that was not in any one of those sources — is going to continue. That's probably a pretty far-fetched use of AI, but personally I'm not worried too much. And you shouldn't worry about your job yet. How about yours, Adam? Are you in danger?
Adam Fouse: I don’t think there’s any AI that can do what I do Daniel. I have very similar thoughts to Jared. In fact, one of the things I think that surprised me about the evolution of AI over the last couple of years, and even just what we’ve done here at Aptima. Is the role that AI can play in creativity, injecting new ideas into a discussion or two in some thought process. In hindsight, this shouldn’t be that surprising, because one of the things that early successes of AI say like in the chess domain and things like that, right? They can explore a million possibilities in a second. The ability to explore a far more things than a person is able to.
And that helped AI do things like become far better than any human chess player. But that same ability to explore many more things can then be used to pick out things that are novel in some way, that are new, that haven't been talked about yet. That aren't part of the current conversation or the current research effort, the current report, the current proposal. And to bring those things in, and do that without the fear of, "Hey, did I just toss a stupid idea into the discussion?" AI doesn't have that problem.
Daniel Serfaty: Well, I don’t think your job is in any danger of disappearing or being awarded to an AI personally. But I think this notion that both of you bring into your own job, which is a complex managerial and scientific job being augmented by AI. Imagine that each one of us in the near future could have some kind of an AI deputy with whom we can brainstorm, that can organize our information in advance, knows us intimately about our preferences, about our biases, about our strengths and weaknesses. It’s not that much in the future. It’s already happening here and there and that it will be fascinating to observe that over the next few years. But are there some ethical consideration to this particular marriage? Whether it’s in your job or in the example that you gave earlier, the learner and the teacher or the pilot and the automation. Are there ethical consideration there that we should consider, we should worry about and perhaps we should guard, present or anticipate? Who wants to go there because that’s a tough one. Jared, go ahead.
Jared Freeman: So I want to give a trivial example that actually has some deep implications. Let's imagine an Easter egg hunt: we send our small children out, there's a little AI robot in the hunt as well, and the AI robot discovers that the single most effective way to get the most eggs is to knock over all the little kids. This is behavior that we don't want our children to observe, and we certainly don't want them to adopt it. It requires some ethical sense within the AI to choose other strategies to win. So where's the depth here? Let's translate this into a warfare scenario, in which the optimal strategy is to remove the adversary from the game. You can do that in a lot of ways — trap them in an area, bomb them, and so forth. It is well within the ethical bounds of war, and we want AI to have the liberty to take those actions, perhaps of killing others, or at least of entrapping and nullifying others. It needs to understand that that is an ethical option in that domain, and should use it when it absolutely needs to.
Daniel Serfaty: Okay. That’s a pretty sobering perspective because it can happen and those emerging behavior is actually, but the question is that, is it our responsibility as scientists and engineer to engineer ethical rules, almost in an Asimov kind of way, into AI. Or are we expecting that AI will develop those rules internally from observing others behaviors and derive them and exhibit them in some kind of an emergent behavior. Adam, what do you think? Ethical considerations in designing human-AI teams?
Adam Fouse: That last point you brought up, Daniel, is I think the really important one: relying on AI to learn ethical behavior through observation of humans or society — which do not always behave ethically — means we need to be very vigilant about looking for and counteracting things that might unintentionally happen in that setting. And I think we want to have AI that is ethical, and we also want to have the ethical application of AI. We've already seen cases where we train AI models to help with decision-making, but because we exist in a society that has lots of inequality, those models are just encapsulating that inequality. A real danger there, in terms of thinking about this from the human-AI team perspective, is that humans then assume that this AI is objective. It's doing number crunching, and therefore it can't have any biases about race or income levels or other marginalized aspects of society.
It’s just going to encapture those things that already exist. And so I think one of the things that we need to be very careful about is when we are designing AI, to make sure that we look for those things, but then make sure that when we apply that AI, we do it in such a way where there’s processes or structures in place to look for those and counteract those, even when they do exist. Make sure that there’s humans that are involved in those decisions that might be able to see something that isn’t quite right and either have the combined input of the two, we do a better decision, or feedback in to say “This AI can be improved in some way.”
Jared Freeman: I want to follow on to Adam's very good point there. So here's a perfect moment to look at the way that human and AI can collaborate. We know that when AI learns from historic data, it embodies the biases that are in those data. We know that when humans try to write rules for symbolic AI systems, those systems turn out to be quite brittle. And so an alternative, or a complement, to those two is to ensure that AI programs in which ethics matter, such as military programs, first establish a set of ethical principles, bounds of behavior, and use those in test and evaluation of AI that learns its ethics, or whose ethics get built by programmers. There needs to be, at the moment, a human at the top of the stack who has a set of principles, a set of test cases, a way to evaluate AI on its ethics.
Daniel Serfaty: Yes. Well, thank you both for these profound and thoughtful remarks regarding ethics. I think that in an engineer's career, this is probably the period in which philosophy and design are merging the most. Precisely because we are creating these intelligent machines — and we use the word intelligent with a lot of caution — as engineers, as scientists, we need to think very deeply about the way we want those machines to behave. We didn't have that problem so much when we were building bridges or airplanes or cars before, but now it is very important. I believe that all curricula in engineering and in computer science and computer engineering should include ways to think deeply about, and maybe even to design into these systems, these notions of ethical principles. Jared, Adam, it's been fascinating. Thank you so much for your insight, and we'll see you at the next podcast.
Jared Freeman: Thank you so much. It’s been a joy.
Adam Fouse: Thank you.
Daniel Serfaty: Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS podcast, and tweet us at @mindworkspodcst, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely, and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.
Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. This week is part five of our five-part series on the magic of teams. It’s been a very successful series from the feedback we’ve received from the audience. And so I am delighted to conclude that series today with a very special guest.
Professor John Hollenbeck is an old friend of mine, but he is also a University Distinguished Professor at Michigan State University, and the University Professor of Business at the Eli Broad College of Business there. His long-term research focuses on team decision making, self-regulation theories of work motivation, and employee separation and acquisition processes.
What is unique about John is his ability to blend very practical advice, which I'm sure he shares with his students in management and in psychology, with a very deep and critical understanding of theory — of what makes teams work and what could make teams work better. He has been honored by almost every single society in psychology and management, and I am willing to bet that there is not one student in industrial-organizational psychology or management who hasn't studied from one of his books or read one of his papers.
John, welcome to MINDWORKS.
John Hollenbeck: Thanks for having me Daniel. When I got a call from you to do this, I can’t tell you how excited I was. We go back together to 1991. We worked together for three years-
Daniel Serfaty: Oh, my God, [crosstalk 00:01:48].
John Hollenbeck: -and that was while you were only a teenager back then. But that was really kind of my first foray both into grant-funded research and into the A2C2 program, working with you, and you led that. I learned so much from you, watching you organize that group of cats that we really were. You were herding cats in the ring, as you know: you had qualitative people and mathematical modeling people and lab people and field study people, and you just orchestrated that thing so beautifully. Everybody talked to each other. It was a project like nothing I've ever been on before, or seen really since, in terms of the diversity of approaches that you brought to it.
John Hollenbeck: And I know it was tricky. You were really herding a lot of cats there, and I always admired it, because you were a teenager, I was only eight. So I looked up to you as my older brother.
Daniel Serfaty: Well, the third leg of that young, very unique personality is an incredible sense of humor — even making fun of me herding cats in an academic-industrial environment eons ago. Actually, let's go back to that program, because that was an interesting, almost meta program, in which we learned how to organize a team while developing principles of team organization amongst ourselves.
But John, you had a choice of career, graduating from a PhD program a few decades ago. What made you choose this particular domain of focusing on teams? As a field of endeavor, I know you have other focuses, but what about the teams aspect? Why that and not any other domains in management?
John Hollenbeck: Well I’ll be kind of quite honest with you. When I first came out in 1984, my initial program of research was on goal setting and goal commitment at the individual level. Team research is very, very difficult to do, and we’ll talk about multiteam system research a little bit later, which is even harder for me. As an assistant professor, I actually didn’t feel that the teams area was a safe place for me to work. I needed to get tenure, I needed to kind of get that done. Individual studies were just, “Crank those out faster,” and it was a little less complex. So for a young person it was kind of easier for me to get my arms around it, and I went up for promotion early. Once I got promoted, I started working on teams and I never looked back.
But I actually do feel like I needed a little bit of security behind me to go into the teams area. And again, just to put it in perspective, I probably got promoted in '89, and it was really 1990 that we got part of an A2C2 grant. After that, I was doing funded research on teams for the rest of my life.
But I have news to share with you today. We just got news of a $1.6 million grant from ARI this morning, and we’ll talk about how that fits in, and Daniel, it’s still along the same track that we were in 1991, my friend, in terms of kind of the things that we’re studying. But that project with you was really kind of my first foray into teams and my first foray into funded research, and my first foray into the military. I was using words like ‘boat’. They’re like, “No, you should be using words like ‘ship.'” I was using all kinds of offensive language that first couple of years, and [inaudible 00:04:57] helped us survive that first round and the next thing you know, it’s history.
Daniel Serfaty: That’s funny. I mean today, even myself, after working with the military for more than 30 years, I still make some terminology faux pas so to speak. But you learn, and people are pretty tolerant of that.
So you mentioned studying teams is very difficult. Can you unpack that for our audience? Why is that particularly difficult?
John Hollenbeck: Well, for one, and we'll talk about multiteam systems: if I want to do a study at the individual level with statistical power, I need 80 people. If I want the same level of statistical power in a team context, I need 80 times five — I need 400 people to do that study. And that's no small thing, just coming up with that number of research participants, especially in a field context. You've got to be in a context where not only do you have 80 teams, but these 80 teams are comparable — you're not comparing apples and oranges. You can't compare a basketball team or a football team to a software development team; you've got to find 80 teams that are doing similar work, so we can talk about who's performing well and who's performing poorly.
The other thing is that at the team level, there are so many different levels going on. Not only are there individuals, but in many cases, those individuals make multiple decisions, and so the decisions are nested under individuals. The individuals are then nested within the team. And within the team, there are dyads. We just published a paper in JDP on triads, and it turns out triads are a unique, specific thing worthy of their own study. So it's kind of like a microscope: if you dial in at this level, you see individual decisions. If you take it out a level, you see individual people making a bunch of individual decisions. Now you pull it out, and you see there are groups of two people interacting with each other, and then three people… and then there's the team.
And now if you take this to the multiteam system level, where we're working today — okay, now I need 80 multiteam systems. Dude, that's really hard. That's 15 times 80 people, and they've all got to be doing something comparable. It's even hard to do in a lab, just generating that number of subjects. And this relates to the grant that we got today. We're trying to build a national infrastructure for multiteam research, where if you have teams and I have teams and [Debra 00:07:14] has teams — if you have 80, I have 80, and [Debra 00:07:17] has 80 — we can get together and, technologically, we can run a multiteam system.
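[Editor's note: a back-of-the-envelope version of that scaling argument, ours rather than the speakers', assuming five-person teams and the fifteen-person multiteam systems mentioned later in the conversation:]

$$N_{\text{individual study}} = 80, \qquad N_{\text{team study}} = 80 \times 5 = 400 \text{ people}, \qquad N_{\text{MTS study}} = 80 \times 15 = 1{,}200 \text{ people}.$$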
So we’re trying to build this national infrastructure to lower the volume of the people trying to do multiteam system research. But just going from individual stuff to a team by itself is so much more complex, and I will tell you in 1984 as an assistant professor, I wasn’t ready for it. As you can probably attest, I wasn’t ready for it in ’91 either when I started working with you. But 25 years later, I think I’m starting to kind of get it. And so it’s a lot more complex.
Daniel Serfaty: Thank you. And we’ll unpack all these notions of triads and multiteam systems for our audience a little later in the discussion. There is a myth almost, teams are almost mythical, especially in America with this notion of sports teams and the great teams that are more than the sum of their parts, et cetera. Does it introduce another level of complexity, this notion that there is some magic that are happening, because human beings are designed to work with other human beings. Something at that level?
John Hollenbeck: I love the term magic, because I think we were talking about this before, we kind of used the term magic. I do think there’s a magic there because the chemistry… it’s not what’s happening at the individual positions or the individual or the dyads or the triads. It’s all of those things kind of working in parallel so that in many cases things will happen. And it just looks like magic to you because like with a good magic trick, you’re looking at the right hand, not the left hand. You’re looking at the individual, not the team, or you’re looking at the team and not this particular dyad within the team.
And so the magic occurs because you’re looking at the right hand, at the action level. So breaking that apart is fun. But I got to say, there’s two things about the metaphor when you use magic. There’s both good magic and bad. Again, I really believe that there are decisions that are so bad, so irrational, and so illogical, you can only get them out of people in the social context. An individual working alone would never make this mistake, but I can get my MBA students, my executive development students, to make unbelievably stupid mistakes if I put them in a group context and set them up.
I’ll give you one example. This would never happen to an individual. But you’re probably familiar with the cognitive diversity research on framing, and how if you frame an issue in terms of, these are the things that will be lost, you literally get in a very risk-seeking part of somebody’s brain and they become risk-seeking. If you take the exact same data and just flip it around and talk about what you could gain… I mean these are just inverse probabilities. This is the exact same thing. But you just framed it as a gain or loss. If you tell people, “Frame it as a gain,” they become extremely conservative. And now you take that process, which is a individual process. Now you put it into a group context. We’ll take my MBAs. I’ll take four MBAs that are… We do a lot of surveys, so I’ll take four MBAs that I know are really risk-seeking in general, as a pre-disposition, and I’ll take four of my MBAs that are really cautious, as a pre-disposition.
You can set up the framing: "Oh, you're making an irrational decision. Relative to the probabilities, you're being way too cautious, or you're being way too risky." But what happens in a group context is group polarization. That is, if you and I start overconfident — you're 80% confident, I'm 80% confident, [Debra's 00:10:34] 80% confident — you put the five of us in a room together for 20 minutes, and then you come back and ask us how confident we are, it's like 99. You're literally polarized, because, "Wow, Daniel, I didn't think of that. You're right. That's even better than what I thought of." Nobody has any negative information. We just feed off each other and feed off each other. And if you go to my cautious students, they go in just the other direction. They're afraid of everything. They won't get out of bed. "Oh, this is only going to be 20% successful." By the time they're done, there's no chance. It's .01.
Only in a group context can you take people who would be a little bit irrational individually and make them unbelievably irrational — and I have a slide about it. My students are blown away that, number one, this happened, and number two, it was so predictable that the dude's got a slide about it, and the rest of the lecture is built on this error that we all just made and didn't even see coming. That's why I believe it's magic, because I can make this happen over and over and over again, with every executive development group, every MBA group.
We do a thing called a $10 auction. I don’t know if you’re familiar with the $10 auction.
Daniel Serfaty: No, please.
John Hollenbeck: Okay [crosstalk 00:11:46]. You can only get this in a group context. Basically, you put a $10 bill in an envelope and say, “We’re going to have a bidding war for this $10 bill.” Now, in most situations, the key to an auction is figuring out what something’s really worth, but you know this is exactly worth $10, and so this auction has an interesting set of rules.
If you pay the most, you get the $10. If your bid is the second highest, then you pay that bid, but you don't get the $10. Third, fourth, fifth, you're out of it. So I put that $10 in there, and we start. Usually it just sits there for about 15 seconds, and Daniel, I've got execs that are COOs in organizations. Eventually some exec will say, "Well, it's a no-brainer. I'll pay $1 for a $10 bill." Then another guy says, "Two. Three, four, five, six." They get through to seven, and then they start laughing, because they'll look at me and go, "Dr. Hollenbeck, you're a bad man. Seven plus six — you're going to make a profit on this $10. Ha, ha, ha." I get it up to nine, and as always, one of the students finally says, "Wow, this is a really great lesson. Yes, I will pay $10 for a $10 bill."
That guy always says it like he thinks it's over. It ain't over, because you go to the person with nine and say, "I know this seems odd. The decision confronting you now is that you either eat a $9 loss for sure, or take a chance." Notice how I framed that as a loss? You eat a $9 loss, for sure. I just framed it as a loss and put this person in risk-seeking mode. Or take the chance that if you say 11, this knucklehead's not going to say 12 — it's a $10 bill. You know what that guy says, every time? He says, "11." And then you turn to the guy holding the 10 and say, "I know this seems odd, but here's the decision that confronts you now. You either eat a $10 loss, or you say 13 to prevent it." And that guy says 13, and then these two people will go up. If you can get them to 19, they'll often freeze at 19. They'll have a hard time getting to 20, but if you push them over 20, they'll go to 29.
The most I ever got — I was actually doing an executive development at Kellogg, and it's the nicest people you'll ever meet. I got this thing up to 19, and I just wanted to see if I could push it to 20. There was this woman, Sarah, and Sarah just wasn't willing to go to 20, and she was going against this guy, Frank. I remember their names. Her girlfriends said, "Sarah, we're not going to let Frank beat you." They took out their purses and started giving Sarah money. Now, every guy in the audience goes, "Well, that's just bullshit," and they take out their wallets. Daniel, I am watching this room full of people taking out tons of money, so it becomes a battle of the sexes where they fight for this $10 bill. Keith [inaudible 00:14:26], one of my heroes, at the University of Illinois, actually got $1,900 for a $20 bill one time. That is the record.
No individual working alone would ever do this, but put them in a group context, and there's your magic. It's not good magic. It's bad magic, and in an MBA class or an executive development class, you kind of have to teach people escalation of commitment, and why you need to have circuit breakers on certain decision processes. You made the initial investment, but you don't make the re-investment decision, because you're wasted. You're done. Somebody with a cold, hard, calculating heart should make the re-investment decision. There's a bunch of things you can do. [crosstalk 00:15:05]
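[Editor's note: a worked version of that escalation arithmetic — our gloss, not the speaker's. With the leader at $10, the trailing bidder at $9 compares a certain loss with a gamble:]

$$\text{quit now: } -\$9 \qquad \text{vs.} \qquad \text{bid } \$11 \text{ and win: } \$10 - \$11 = -\$1.$$

Each escalation step looks locally rational, which is why the bidding can sail past the $10 value of the prize.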
But I’m not done yet, Dan. I’m going to give you one more, because once I get all these monies from these execs, I don’t want it and so I want to give it back. So we play something called an ultimatum game. The ultimatum game goes like this. There’s two people. You make an offer, and there’s an ultimatum. We don’t negotiate. I can either accept your offer or turn it down, okay? So, it’s an ultimatum. Now, winner of the game is the person, and usually at the end of the $10 auction, I have 30 bucks I want to give away. I want to give this money away, and you make an offer, and the person who wins the money is the person that can take the most value out of that thing but still get the other person to say, “Yes, I accept it.” I will tell you, I usually get it started.
You and I are playing. I go, "There's $30 in there, Daniel. I'll take 29 and leave you one." How are you going to react to that? I'll tell you, all my exec students are like, "No, that's unfair," and then they say no. Then you go down to the next person. "How about 28 and two?" "No." "27 and three?" "No." I will have people literally say no to $12. And then you say to them, "Do you understand that you just violated every rational economic principle in the world? Your choice was between $12 and zero, and you took zero." And the exec will go, "Hell yeah, because the other guy got…" It's like, "No." We call that counting other people's money, rather than making decisions about your own money. You're making decisions about other people's money. There are so many of these things that you know in advance you can set up.
I want to give you one more, and then I'll jump to your questions, because this is a really important one. In my business, have you ever tried to teach the Challenger space shuttle case? It's very difficult. Thank God somebody came up with the Carter Racing case. Carter Racing is a case where basically you walk an MBA team or execs through a situation where they have to decide whether or not they want to race in a NASCAR event. It's the last Sunday of the year, here's been your history, the car's been breaking down, you're finding the people who sponsor you are upset, and they lay out all of the contingencies: if you race and win, yay, this happens. If you race and you're competitive, this happens. If you race and you lose, that's okay. But if you race and the car doesn't finish, because you've been unreliable, that's a disaster.
You walk all of these MBA teams through it, and in the end, I will tell you, Dan, every single one of these teams? They race. Then you have the greatest moment, when you flip the slide and go, "Congratulations, you just lost the Challenger." Then you show the Challenger blowing up. If you try to teach the Challenger case directly, everyone looks at it and goes, "Well, what a bunch of idiots. Didn't they see the O-ring data? Didn't they see what the temperature was? Didn't they see where the wind was coming from?" Carter Racing is the exact same data, but now you have to detect it in advance, instead of explaining it after the fact. For most of my students, when, number one, we tell them, "You just lost the space shuttle," and, number two, they realize that I knew they were going to launch it — now let's talk about decision-making errors in high-stress contexts in the face of previous failures.
All of a sudden, they’re a little bit more open to listening about it, where if you try to teach the Space Challenger as the Space Challenger, it’s like, “No, I would never do that. What a bunch of idiots. Those guys are stupid. What’s wrong with the NASA people? Aren’t they trained?” No, no, no, no. You would do it. That’s kind of the bad magic, that these things are completely predictable. You can only get certain bad things out of a group that you couldn’t get out of an individual. That’s a lot of the fun of it, too.
Daniel Serfaty: Well, thank you for sharing all these stories. They are basically cautionary tales about that magic — maybe the black magic of teams at some point — because they re-emphasize why it's really important to understand team dynamics, and to try to put in place the right structures, the right processes, the right interactions, in order to prevent the kind of groupthink phenomena that you described earlier. Group polarization, or other situations in which people do not optimize their utility functions, but are basically trying to maximize some other function that has to do with the social hierarchy in the team — who is the person who's going to win the auction, for example.
Now, in a lot of the situations where I know our audience will find themselves — work teams — those things may not happen to the extreme that you can orchestrate in your MBA classes with your MBA students, but they do happen all the time. We see very often in meetings that things deteriorate, and when you look at them a posteriori, you say, "Well, the team forgot what they were trying to do. They got into another situation." We know from history that this notion of establishing consensus too fast, for the sake of consensus, is actually dangerous. The Cuban Missile Crisis is a classic example that people talk about, where all these advisors were basically reinforcing each other's mistaken beliefs.
John Hollenbeck: Yeah. I’ve got two things on that, before we leave that, because the Cuban Missile Crisis is kind of an interesting example. I do feel that we team researchers through a lot of predicting after the fact, and we often blame teams for the kind of things that we’re saying here. In many cases where you don’t have the counterfactual evidence. The things that I’ve been talking about, we know what the rational decision was. We know what the counterfactual evidence is, but in so many team contexts, because you didn’t go a particular direction, you don’t even know would have happened had we gone in that direction. It really does kind of promote, and the Cuban Missile Crisis was [inaudible 00:20:36] really got people used to this paradigm where some really smart person would go into the archives of some decision fiasco and then dissect it. Just like the Carter Racing Team, tell you all the things these idiots did wrong. We’ve got to really be careful.
That’s why we need a science. We need a science where you kind of have to make your predictions in advance, and easy to predict the future after it’s happened. Yogi Berra. The future’s hard to predict in advance. I think we’ve really got to check… That’s why scientifically, I kind of believe in quantitative science. I’m always a little leery of qualitative studies where people go in knowing what already happened, or kind of going in there with a particular angle. You’ve got to predict the future in advance, and so what I try to do with my classes is to show that some of this science is so magical. I can predict it in advance. I can build a whole lesson plan around it. That’s how irrational you’re about to be. Again, that’s the fun of it. That’s the magic of it. I love to teach this stuff. I love to research this stuff. I love going to work every single day. I can’t wait to find out what we’re going to screw up next, and then kind of fix it and move on. Yeah, I’m totally fascinated by all of that.
Daniel Serfaty: I stand corrected. I didn't mean that those historical examples… I know that they're taught as paradigms of decision-making mistakes, or of misunderstanding the situation in teams or in groups. My point is that — and I remember we worked on some of those projects in the past — when you do the forensics of something that was disastrous, where lives were lost, as in many examples in the military or in industry, and you interview the folks that were in the middle of that decision-making process, who are now [inaudible 00:22:23] almost accused of, or characterized as, having made a mistake — once you immerse them back in the same situation, they are all pretty adamant that, "Given what I knew at the time, with all the uncertainties at the time about the information, and the time I had to make a decision, I would do exactly the same thing today." [crosstalk 00:22:45] for us to do Monday-morning quarterbacking on some of those critical decisions, that's really what I'm saying.
John Hollenbeck: I agree. And there's another one. We were talking about language in the military, and the one part of the language in the military I've learned is the expression, "It happened on your watch." Now, we're not necessarily saying it was your fault, and so you might want to come back and go, "Yeah, it happened on my watch, but it wasn't my fault, because boom, boom, boom, boom." Then they go, "We didn't say it was your fault. We just said it happened on your watch," which kind of still implies [crosstalk 00:23:16] fault. Just an interesting use of that language. It happened on your watch.
Daniel Serfaty: [crosstalk 00:23:21] the difference between authority and responsibility. We won’t get into that right away. Maybe if we have time toward the end.
So we’ve discussed different aspects of the magic of teams and what sometimes we have to understand, that teams can witness extraordinary performance out of teams that we wouldn’t have predicted, and sometimes it’s the other way around. If you look back at more than 30 years of your own research, but also research in the field on some major concept or major findings that you think, “Wow, that really changed the paradigm. That really changed my understanding about teams.” Can you share a couple of those with the audience?
John Hollenbeck: Yeah, I think there’s two that really kind of jump out at me. Again, we’ve talked about it a little bit. Just the multilevel nature of teams, and how if you’re not looking at the right place, you’d miss everything. If you’re looking at the individuals, but not the dyads. If you’re looking at the dyads, but you don’t understand the triad, you’ve missed the whole thing, because you’re looking in the wrong place.
It’s so hard to be looking at two places at once. The most impossible thing to call in basketball if you’re a referee is whether somebody got fouled on a three point shot or not, because to know whether it’s a three point shot, you have to be looking at their feet. Where were they? To know if they got fouled, you have to be looking at their hand. It’s literally impossible for you as a referee to tell [inaudible 00:24:43] as a basketball, to make that call, because you can’t be looking in both places at once. That’s one of the fascinating thing about teams, that it can be happening at all these different levels, and if you’re not looking at the right place, and you got to have a little bit of flexibility, you got to go in. The plan was to look at this place, but nothing happened there. Let’s look at the dyads. Let’s look at the triads. Let’s look at the team as a whole. Whatever. You kind of need that ability to explore, but right now within the field, there’s kind of a lot of pushback about hypothesizing after the results are known.
That’s considered not cool, that you had [inaudible 00:25:15], and now you’re kind of just sniffing around, exploring. And sure, if you explore 100 things, five are going to be statistically significant at .05. You can see where excess snooping is a bad thing. Not snooping at all is a bad thing, too. How you find the right balance between this is hard data to collect, [inaudible 00:25:37], you’ve got to give us a little room to breathe. We’re not going to be able to predict every single thing in advance. If we could, we wouldn’t be able to publish it, because it’s already such an established part of the knowledge base that their teaching it in MBA programs. You can’t do that now. That’s been done. We know that already. If you’re on the frontier of it, you don’t always start out looking at the right place. You’ve got to have the freedom to be able to look at these different kinds of levels. I would say that was thing number one.
The other thing, and again, it was probably [inaudible 00:26:03].
Daniel Serfaty: Before you go into that, John, just a clarification. I want to hear about the second milestone, so to speak. Is that just a question of where you put the spotlight, or is it also a question of the granularity in a sense that you cannot predict the behavior of material just by looking at one molecule or one atom?
John Hollenbeck: Absolutely.
Daniel Serfaty: What is that? Is that…
John Hollenbeck: Daniel, I just love that example, for two reasons. One, we worked at the Facility for Rare Isotope Beams, which is the newest linear accelerator being built in the United States. The United States builds a new linear accelerator every 20 years; it's an $800,000,000 project, so it's kind of a big deal. I'm working with the project director, who is a nuclear physicist, and we're trying to explain to him the importance of a triad. There are these two people, and you might think these two people have a relationship; you can try to predict what's happening between them from either their individual characteristics or the characteristics of their relationship. But I want to talk about a [Simmelian 00:26:58] triad. A Simmelian triad is one where a third person has a strong connection to both of those people, and it totally changes that relationship. You cannot understand that dyad without understanding this third person.
Now, I say this to Thomas Glasmacher, who's the director of the Facility for Rare Isotope Beams, and his whole face lights up. He goes, "John, John, John, that's just like boron. If you put boron next to hydrogen, nothing. But if into the same thing you put carbon, boom. All of a sudden this thing…" And just to watch his face light up — he immediately, as a scientist, understood that yeah, triads are a different thing. We studied the scientists in his organization and plotted the informal organization chart, and it allowed us to count all the dyads and the triads. If I asked you, "Okay, Daniel, who are the important dyads in your life?" you could probably do it. But a lot of people don't even know the triads that they're a part of, because you may not know that this person is related to some other person.
Yet triadic influences happen all the time. I've got a close colleague who's a department head at Arizona State, and one of my graduate students is an assistant professor at Arizona State. Yay! Now they've got a dyadic relationship: one's a department head, one's a young assistant professor. But they both know me, I talk to both of them every single week, they both had me as a major professor, and they are both tied into the Michigan State mafia. You cannot understand the relationship between those two people unless you know the third-party guys. We're doing a lot of work with third-party guys, and we just have a paper coming out in JAP. That's another one of these things that's kind of invisible to people. They just look at the individuals, the dyads, whatever. Nobody's paying attention to the triads, and yet that could be where the magic occurred. If you're looking at the dyad and it's happening at the triad, you missed the magic.
Again, this is why teams are so much fun, and why it's so complex. You only have so many things you can measure, my friend, in a study. If you measure a bunch of individual-difference variables, that doesn't leave you a lot of room to measure dyads. And if you measure the dyads, and you also want to get stuff at the team level — so many variables, so many levels, and so few degrees of freedom. You have to make really hard calls going into this thing. Where do I really believe the action is? Because if I spend all my degrees of freedom at the wrong level, I'm sharing a bunch of null results. So you've got to build in enough opportunity — measure things at the dyadic or triadic level — so that if something happens there, you at least have a chance to catch it. That's where the challenge is. It never gets boring and it never gets easy. And it never gets predictable.
Daniel Serfaty: But isn’t there some kind, again if we stay even within the realm of research, I think it has direct implications of the way you design work teams, project teams at work if there is a combinator or explosion of dyads and triads as in, the number of people in a team, no matter how they organize it [diarchically 00:29:53] or hierarchically. If you have a team of six people, you have a lot of triads in there and you have even more dyads. And so the question is, is there a way, does the theory, the multi-level theory or any other model, give us a way to watch for those two or three triads that are really important or that can explain most of the performance of the team?
John Hollenbeck: Yeah. We had it in our 1995 paper, which was funded out of the A2C2 program; those were the first papers that actually looked at multiple levels. A lot of people think multi-level analysis came out in 2001, and that's when people really started getting good at it. In 1995, we were doing it, and because we were in a lab context with people randomly assigned to conditions, and everything was [inaudible 00:30:35], we didn't have a lot of pressure on the analytics. But the analytics get super complicated when all of a sudden all these things are correlating together and you've got to find where the variance is.
In our ’95 paper we just said, “Okay, what is the single most important thing at this level? What’s the single most important thing at this level? What’s the single most important at this level? How do those combine to the team level?” That was one of the initial forays into it. What’s the most important thing at the decision level? What’s the most important thing at the individual level? What the most important thing at the dyad? That was a way to kind of keep the number of variables to three, but then [inaudible 00:31:07] you say, “Okay, what’s the five most important things in every one of these levels?” You’ve just run out of degrees of freedom when you have too many variables and not enough research participants. So that’s a challenge.
Daniel Serfaty: Yes, well that’s certainly food for thought for our audience here, because there is a subset in our audience that are researchers in teams, but there is probably most of the people in our audience work in different teams or study in different teams, and they are probably sensitive to that notion of the complexity of the dynamics when you look at two people. But when you look at three, and then you look at multiple threes in a team but [crosstalk 00:31:40]
John Hollenbeck: And it’s not fair, it’s not fair. If you’ve got a real life job, I mean you got enough balls in the air that you got to juggle. You’re not like me, I don’t even have a full time job. I mean, I can study these things at levels, levels, levels, but if you’ve got a full time job, you can’t study… You might not work at the dyadic level up until you been in the team where this is a really good team except for Frank and Sarah. Frank and Sarah, they always go off, every meeting is… Okay, now that’s a guy who figured out that dyads are important. And that, “Hey man, Frank and Sarah are on their own level, but whatever you put these two together on a team…” Okay, that is a person who now all of a sudden sees dyads in a way that they didn’t see before.
For a practitioner, it’s usually when something goes wrong, that this should have been a really good team. Why is this not a good team? Oh, I’ll tell you why. Because Frank and Sarah won’t let us get from point A to point B. Now they see it. And so it’s definitely unfair to ask people that are doing real life jobs to be able to manage all the complexity of this, but this is why we have graduate education. This is why we have executive education. It’s to point people to these kind of specific things you can use. That’s established knowledge base and, as you know as a researcher, half of my life is dedicated to pushing the future knowledge base that many people will be teaching tricks 20 years later from stuff that we did in [inaudible 00:32:59].
Daniel Serfaty: Yes. You mentioned there were two milestones, so to speak, that really redirected or changed the way you were looking at teams. One is that notion of multi-level, and the multi-level theory that you proposed to the world. Was there another one?
John Hollenbeck: Yeah: multiteam systems, which I think we'll talk about more.
Daniel Serfaty: Define them for our audience [crosstalk 00:33:20].
John Hollenbeck: Yes, okay. A multiteam system is three or more teams that work interdependently with each other. Think of the three-alarm fire in the old days: a three-alarm fire meant three different fire companies showed up. You would take the north, I'd take the south, and he'd take the east, and we'd just cross our fingers and hope to God we weren't spraying each other with water. There was a real lack of coordination.
So multiteam systems are teams that have to work together, and they're increasing, for the same historical reasons that organizations are now built around teams more than individuals: they needed greater scope, they needed greater specialization, and teams allowed more scope and more specialization than individuals could provide. You just keep pushing on that frontier, and now there's stuff that the team can't do either. The team doesn't have enough scope or specialization. So now you create the multiteam system.
Once we started working on these — this was late in my career — I started looking back, and I came to the conclusion that we had been building a science of stand-alone teams. That is, teams that work — the five of us — independent of anybody else. We're a problem-solving team, we're a project team, we're this team, but we never interact with any other team. We just have to solve our own problem. If you think about why that is, from a researcher's point of view: we can find 80 teams that might be doing the same task. You can bring 80 teams into the lab.
So we were looking where the light was good, because you can do this. But once we started getting into the multiteam systems, Daniel, it’s like all of these things that we consider best practices turn out to not be best practices if teams are interdependent. I would teach my MBAs, “Your teams need to be empowered. They’ve got to make their own decisions, they got to have the adaptability to break right if [inaudible 00:35:02], if they break left, they’re going to go left. Stay in a pattern. But you got to empower these teams.”
Okay, in a multiteam system, empowerment looks like unpredictability. Like, "Dude, you're never where you're supposed to be. We planned that you were going to go left here, and the next thing I look up, you're going right. What's that?" "I was empowered." That'll go over great: your empowerment is destroying me. One of the things that we frequently learn is that, relative to what we teach our MBAs about empowerment, these multiteam systems need to be a little bit more centralized. Implicit coordination — we teach that in teams. Implicit coordination is great, because me and Daniel and Debra, we've worked together for so long, we don't even have to do a lot of talking, because I know what Daniel's thinking and Debra knows what I'm thinking. You watch us and you see this tremendous coordination without any communication.
In a multiteam system where I don’t know what your team’s doing and why? You’re clearly opaque to me. So the lack of communication makes it really hard for me to coordinate with you because I don’t know what you, Debra, and John are really thinking about because you never really articulate it. When I ask you to articulate it, no offense, but you’re not that good at articulating it, because it’s implicit cohesiveness. We build cohesive teams, baby! That’s our job. In a multiteam system, teams have to sacrifice for other teams, and the more cohesive the little component teams are, the more they love each other. The more they are unwilling to sacrifice for the larger part of the multiteam system.
In teams, open communication structures — everybody should be allowed to [inaudible 00:36:36]. In multiteam systems, man, just look at the number of links. 15 people? 15 times 14 divided by two — that's 105 communication links. We can't have 105 communication links firing. You need a level of communication discipline that we haven't seen. I've got a dozen of these things where, when I look back, we've been building a theory and science of stand-alone teams, and the minute you put a team in an interdependent context, some of that's bad advice. We are trying to rewrite the rules of teamwork for teams that are part of multiteam systems. It's not just me. It's people like [inaudible 00:37:13] Church, Steve [Zakarao 00:37:15], John Epp. There are a lot of people trying to rewrite the rules of teamwork for multiteam systems, and we have a long way to go.
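[Editor's note: the all-pairs arithmetic, spelled out — with $n$ people there are $\binom{n}{2} = \frac{n(n-1)}{2}$ potential links, so:]

$$\binom{15}{2} = \frac{15 \times 14}{2} = 105 \text{ communication links}.$$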
Daniel Serfaty: Thank you for that insight. Indeed, it's not just that it becomes more complex when you look at multiple teams that have an overlap or an interdependency; the very same principles of good teamwork can actually be reversed, as you said, when it comes to working in harmonious multiteam systems. We're all part of that. What I like about multiteam systems — as a systems engineer myself, this notion of system of systems is very appealing. And from a human performance perspective, I think it's fascinating, because you can look at it almost as teams that work with each other with some interdependency in tasks, but also in people.
Which implies that an individual worker, manager, or engineer is part of multiple teams as an individual. If you focus on the individual, her or his behavior has to adapt to those multiple teams, because they migrate between the different corners of that multiteam system. That by itself is interesting in terms of selection, training, and certainly team composition.
John Hollenbeck: And again, just for our listeners, we want to keep a distinction here. People being on multiple teams — that's multiple-team membership. I can be on three different teams, and that definitely creates challenges. But a multiteam system doesn't necessarily have to have that: I'm only on one team, you're only on one team, Debra's only on one team, and we're good. Now, if you combine them — not only do we have the multiteam system, but Debra's also on this other team, and you're on this other team — then we have multiple-team membership embedded within multiteam systems.
Daniel Serfaty: That’s the story of my life as a CEO, John. That’s what I do. [crosstalk 00:39:05].
John Hollenbeck: And again, what a level of complexity. One of the questions we were talking about before is: what's the single most important thing with multiteam systems? I tend to punt on questions like that, because I'm a college professor. But I will say this — whether for teams or multiteam systems, you've got to boil it down. The single most important thing in a team, and this is kind of ironic, is individual accountability. Like, "Daniel, do your own job. Debra, do your own job. John, do your own job." Especially if we each have our own specialization, so that I really can't do your job, or I have to do my job differently because of the way you do your job. When that accountability breaks down, you get this bad magic where these five individuals are so much less than the sum of their parts.
In a multiteam system, we’re kind of seeing the same thing in that the most important thing in a multiteam system is, each team has to focus on their own job. Don’t worry about the multiteam system. Just focus on your own job. You focus on your own job, that’ll put us in a position where maybe we can gain synergy. But if you don’t do your own job, I must. We’re going to watch the Super Bowl in a couple of days, okay? Offenses and defenses need to support each other. The other team’s not going to score. If I’m the defense, and you’re my offense? Guys, go put the ball in the air. Don’t let their defense score, because if you let their defense score we’re not going to… Or if I’m on offense and I can score 30 points? It’s like, “Yo, defense. All you got to do is get a few turnovers.”
So the teams can support each other, but it has to start from doing your own job. If I know my defense is bad, so I'm going to have to score 35 points? All of a sudden we're doing things that we wouldn't normally do as a team, because of those knuckleheads. And then those knuckleheads are doing something because they don't trust us. Do your own job. The single biggest thing is accountability, whether at the team level or the multiteam system level.
Daniel Serfaty: I am personally a believer in that. When we come back from the break, before we jump into your new ARI program, I want to come back to some comments you made about sports teams. But I am a big believer in that. As you know, we used to have the Patriots, the greatest team on Earth, here in New England.
John Hollenbeck: That’s just Tom Brady, I hate to tell you.
Daniel Serfaty: Well, but he was a believer in at least that mantra of the military, which was: do your job. You do your job; we decompose the jobs in such a way that we can put them together as a team.
John Hollenbeck: Exactly.
Daniel Serfaty: Just do your job. That was the mantra for many years.
John Hollenbeck: At the highest level, trust [inaudible 00:41:33] to have a good plan. So just do your own job. If you don't trust that, then all of a sudden you start not doing your own job. If you go into the Patriots' training facility, that expression, "Do your job," is everywhere. Every time you turn a corner, there's a sign that says that. I think there's some real [inaudible 00:41:49]. If I look at organizations right now — you would think that because I'm a team researcher, I want teams everywhere. But I really believe that for a lot of organizations right now, their biggest problem is over-embeddedness. People are part of too many teams. They make a decision that, "Oh, it might be nice if this person was there," because that person may [inaudible 00:42:09].
Or they create a multiteam system: maybe these teams should meet. Now you've just committed 15 people to a meeting because it might be worthwhile for these teams to coordinate. Often that happens after a single mistake where there was a lack of coordination — but 19 times out of 20, these guys were well coordinated. One time out of 20, there was a lack of coordination, and now we have to meet.
The one thing I would tell your audience, the people who are practitioners: be stingy with how you create teams. Be stingy with who's on the team and who's not. The test is not "it would be nice to have that person"; it's "No, this team really can't function without this person or this person's specialty." Because you might think, "What's the harm in putting Debra on this team?" Then, on another team: "What's the harm of putting Debra on this team, too?" "Hey, you know this team over here? Debra might have some interesting views on that." And often it's not the same person — it's three different people who don't even know other people are putting Debra on teams [inaudible 00:43:10] people.
And the next thing you know is Debra’s running around from one meeting to another, and if she’s not taking notes or [inaudible 00:43:17], you shouldn’t be at this meeting. If you ever find yourself in a meeting that when it’s over, you didn’t take notes and you didn’t talk? You shouldn’t have been there, because the biggest problem with over-embeddedness is that it prevents individual accountability. I can’t do my own job, because I’m going from meeting to meeting to meeting to meeting. My job is teaching. I didn’t write a word today, because I went from this to this to this. Or I didn’t get my homework graded, and I’m a teacher, because I was going from this to this to this.
If the single biggest thing that you need is individual accountability, the single best way organizations can support that is not creating over-embeddedness. If somebody tells you, “Dude, I’m on too many teams,” they might be right. So try to avoid that [inaudible 00:44:01].
Daniel Serfaty: I think this is a beautiful, right-on — literally right on — concern that many enterprises have these days. More these days, in the times of COVID, than ever, because precisely the barriers to forming teams have disappeared. It is basically how many people you can get on Zoom. I live that every day by guiding my team leaders, my managers, to be — I like the term — stingy. Don't create a team for everything, and when you create a team, be very stingy about the number of people you're going to bring to that team, because otherwise you hit a fragmentation level that starts having decreasing returns on productivity.
It’s a real, real problem today, and it would help to have guidance from the researchers or leaders in teams research regarding how to form those teams, and how to form them to hit that right middle where there are just enough teams and enough people on the teams, but not too many. Right now it’s empirical. We have some empirical rules, but it would be great to be guided by theory.
John Hollenbeck: Richard Hackman, 5.4. Ideal team size, 5.4. That’s five adults and an eight-year-old. I actually don’t know what that eight-year-old does, but it can’t hurt. And just to reinforce, there’s two things that need to happen at any high-level organization. Yes, you need collaboration. That’s true. But you also need individual concentration. If you’re working on a complex job and you need to be accountable, you often need to be able to shut the door, turn off the phone, turn off your email, because what you’re doing is difficult and it requires concentration.
What I would tell managers, and I’ve seen some organizations [inaudible 00:45:43] budget, that if you just spent collaboration dollars on me, you just put Hollenbeck in a meeting that he didn’t have to go to before. But you owe me some concentration dollars. Where are you giving me time back that I can concentrate on my job? Because left to your own devices, or the uncoordinated devices of 17 different people that put me on 17 different committees, and don’t know it? You just destroyed my ability to do my own job.
So anyway, enough about over-embeddedness. [crosstalk 00:46:11].
Daniel Serfaty: I think that’s right on. I hope the members of our audience will heed that advice. We’ll be back in just a moment. Stick around.
Hello MINDWORKS listeners. This is Daniel Serfaty. Do you love MINDWORKS but don’t have time to listen to an entire episode? Then we have a solution for you: MINDWORKS Minis, curated segments from the MINDWORKS podcast, condensed to under 15 minutes each and designed to work with your busy schedule. You’ll find the Minis, along with full-length episodes, under MINDWORKS on Apple, Spotify, Buzzsprout, or wherever you get your podcasts.
I want to ask you a question, as we talked earlier about the magic of teams. It’s the last question before we move on to futuristic multiteam systems, artificial intelligence, and everything else. You come from a long tradition and a family of coaches, coaches of teams, and you’ve seen a lot, especially in the sports arena, which is the number one metaphor in America when people talk about teams. Immediately people talk about sports teams. They know the great sports teams, the great sports teams of history.
Tell us a little bit about your perspective on that. From your own observation, especially in the sports arena if you decide to choose that, what has made good teams great?
John Hollenbeck: Again, in terms of how I got started and interested in teams, as you said, my family are coaches. My father was a coach, my brother is a coach, my son’s a coach. We’re constantly talking about things like that, because it’s always in the forefront for us, and almost all the discussions, Daniel, are about synergy or process loss. How this team was so much better than you’d think, or how this team was so much worse than you would think. That’s about half the conversations we have about teams.
And then when we try to dissect it after the fact... I do think sports resonates with people, because for a lot of people that’s something they share, it’s very public, and so they’ve seen it. In my MBA class, I always talk about the 2004 US basketball team as an unbelievable team. I actually have their names written down here, so it kind of helps me remember them. Here we go: Tim Duncan, LeBron James, Dwyane Wade, Carmelo Anthony, Allen Iverson, Amar’e Stoudemire, Carlos Boozer. Every name I just mentioned there is in the NBA Hall of Fame. Hall of Fame. They lost three games. They got beat by Puerto Rico, Lithuania, and Argentina.
Somebody joked that the gross national product of Lithuania was smaller than the combined salaries of the US Olympic team. I don’t know if that’s true or not, but that team’s been dissected. And then of course the flip of that is the 1980 US hockey team that won the gold medal, and not a single one of those guys went on to be a pro. Literally. I mean, to really make a living as a pro, let alone the Hall of Fame. And so again, these are just extreme examples of that.
The reason I love this is because anytime I give this example in class, and our MBA program is very, very international. [inaudible 00:49:06], it’s very, very international. I will tell you, somebody will come up and they’ve all got their own story about the 1987 Chinese ping-pong team.
Daniel Serfaty: Oh, from their own country.
John Hollenbeck: From their own country. And they’re like, “Well…” And they’ll talk about it. You know, I’m not a big ping-pong guy. This person’s obviously super into ping-pong. And the 1987 team was so much less than the sum of its parts for reasons I don’t know, but this Chinese person is talking to me and it’s like… Or the 2005 Canadian curling team. I don’t even know where this story’s going, but I know it’s going to be a story of synergy and process loss. “Oh my god, the 2005 Canadian curling team, they shouldn’t have won anything! These guys didn’t know what they were doing.”
And then you put them together… The 2009 Italian [bocce 00:49:46] team, again, I don’t even know where this story’s going. But the minute this person wants to talk about the 2009 bocce team, I know it’s going to go one direction. I just think this is a trans-cultural phenomenon where we all get it. We all get the synergy and process loss thing, and we’ve all got our own favorite example of it. And even people that don’t really follow sports will have an example of it. Another one is weddings. The five maids of honor at this wedding are the most beautiful, wonderful people in the world. But man, this was [inaudible 00:50:20]. Or these five groomsmen are the five greatest guys. This should have been the greatest party of all time, but all we got was a fight in the parking lot.
Everybody’s got a story of individuals that just came together, and man, nothing about them individually would have predicted it. At Michigan State, I always talk to my department head about the perfect meeting. A perfect meeting for Michigan State is: exactly what you thought was going to happen at that meeting, happened at that meeting. No surprises. No dynamics. No synergy, no process loss. It’s just a meeting. Let’s get in there, get out of there, nobody gets hurt, [inaudible 00:50:55].
Anyway, yeah. Sports teams are definitely attractive for that reason. It just really resonates with people.
Daniel Serfaty: Thank you. That’s funny, people describing the best meeting ever. That would be a great discussion [crosstalk 00:51:08]. Let’s turn the dial a little bit toward the future, and maybe it is linked to a new form of multiteam systems. This last year of confinement, Zoom work, distributed operations, has basically created this notion of, and I believe we don’t have a word in the English language for it yet, a network, a team, a multiteam organization. Because we have lowered the barriers to composing work organizations, we have created these new entities. They’re amorphous. Some of them are more structured than others. I wouldn’t call them a team, but I don’t know what to call them.
My son plays video games with folks he’s never met in his life, and he plays them every week. So they meet every week to play together as a team. But maybe it’s not a team. It’s a [inaudible 00:52:05] of social structure that has been enabled by technology. Are we looking at the dawn of a new way by which humans organize to accomplish a goal?
John Hollenbeck: Yeah. Well, Amy Edmondson has actually coined the term ‘teaming’ to get away from the word team. Really, what you see at organizations is teaming. It’s much more a verb, and the word, if you think about it, sounds like ‘teeming’: just teeming with this or teeming with that. Again, these are really kind of unstructured teams. Your audience doesn’t know this, but Daniel and I, another thing that we share is, we both have twins. I went into our basement, and my twin boys were playing Halo with, like you said, a bunch of strangers. People were swearing. It was like, “Who are you playing with that are swearing?” And they were like, “Oh, we’re playing with these guys from Australia.”
So they’re in my basement in East [Lansing 00:52:54] playing Halo with a bunch of guys from Australia. For all I know, they could be a real-life SWAT team. Who knows? But like you said, the barriers are so low now that all of a sudden we’re playing with different people. Yeah, I definitely think that’s part of the group over-embeddedness problem. The fact that it’s all technologically mediated makes it harder, too, because a Zoom meeting has a lot of features that, for evolutionary reasons, are not really good for people. The eye contact’s not good. The head sizes aren’t right. It doesn’t really simulate being in a room with three-dimensional people, and it’s very, very tiring for your human brain to try to process the fact that this is not a normal group situation while it keeps trying to make it like one. It’s extremely fatiguing.
Daniel Serfaty: Why is that tiring?
John Hollenbeck: It’s just not natural. If we’re in a group meeting, all eyes aren’t on me. But if I’m in a Zoom meeting with nine people, I’m looking at nine faces all of which look like they’re staring at me, even when I’m not talking. You all are staring at me, when you’re not. Where if we’re in a real-life conference room, I can see that you’re looking this way, Debra’s looking at [Irash 00:54:02], Dan’s on the internet, these two guys are talking to each other. Again, it’s just not natural and because it’s not natural and we’re trying to make it natural, it becomes tiring.
I do think one thing we’re learning with Zoom meetings [inaudible 00:54:13] is size. Group size is not the number of people, it’s the number of communication links. So a group of five is five times four divided by two: that’s 10 communication links. Okay, you double that: 10 times nine divided by two. That’s 45 communication links. A Zoom meeting with 10 people on it, especially if you follow my rule that if you’re not taking notes and you’re not contributing to the conversation you shouldn’t be there? That’s tough. Ten to 20? Forget it. The communication links just kind of explode.
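For readers who want to check John’s arithmetic, here is a minimal sketch of the pairwise-links formula he is using, n(n − 1)/2; the Python function name is ours, not anything from the episode:

```python
# Pairwise communication links in a fully connected group of n people:
# links = n * (n - 1) / 2
def communication_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 20):
    print(n, "people:", communication_links(n), "links")
# 5 people: 10 links
# 10 people: 45 links
# 20 people: 190 links
```

Doubling the group from five to ten roughly quadruples the links, which is the explosion John describes.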
I do think we’re relearning lessons about team size, and that team size needs to be a lot smaller. I think the other thing we’re learning, yet again, is Richard Hackman, and Amy Edmondson is a student of Richard Hackman, so I think she gets this more than the average person. He was a big fan of boundaries, team boundaries. A team’s got to have a hard boundary, and people can’t just come in and out of a team like it’s Grand Central Station, popping in and popping out, because it’s very dysfunctional. So he was a big fan not just of small teams, N = 5.4, by the way, was his number, but also of teams with really strong boundaries.
So, okay, we may want to let other people in, but not routinely. And I do think that the concept of teaming recognizes that we’ve sacrificed a lot of that, and that we just throw teams together all the time. It takes a long time to build a team. They’ve got to go through stages of development, and we interrupt that every time we shake them up. It’s ironic, because people believe in teams, and that’s why they’re constantly forming them, not recognizing that they’re engaging in very self-defeating behavior: they keep stirring up the teams that they’re trying to build, when they really just need to leave them alone, respect their boundaries, and maintain their boundaries.
Daniel Serfaty: I think this notion of boundaries is important. I have so many questions about that. I want to make sure that I don’t ask all of them. But I think, as you probably have, I’ve seen the notion evolve of maintaining team size for meaningful interaction, and by meaningful I don’t just mean productive interactions, but also meaningful in the sense of, does it enrich me to interact with that person? More and more virtual meetings I go to now will use breakout rooms to make those teams, to basically control team size through the Zoom device by breaking people up into teams of four or five. People have adopted that almost empirically, because they wanted to get some work done and they realized that team size has to be controlled.
John Hollenbeck: I do a lot of online teaching, and one of the first things they teach you in online teaching is that you can’t just go, for my MBA class, me against 40. You must break out into breakout groups, because otherwise it’s so much harder for the students. The students think there are 40 faces looking at them, which is not true. But you go into these breakout rooms... I’m not very good at technology, and when I go to a breakout group it’s like, “Oh man, I hope they come back.” I lost two MBA students; the FBI’s looking for them. They went into a chat room, they never came back, and Daniel, I hate to say it: we still don’t know where those kids are.
Daniel Serfaty: They are lost in cyberspace somewhere.
John Hollenbeck: They’re lost in cyberspace. And so yeah, chat rooms are important but make sure you get every single kid back because otherwise there’s going to be some hard questions asked.
Daniel Serfaty: I want to turn the page on this one, because at the end of the day, I wonder, and it’s really a research question, or a philosophical question even, more than a research question. I wonder if perhaps our generation’s concept of the social structure called teams, which has been essential to our personal development and professional success, is basically disappearing. It’s disappearing because the next generation, your twins, my twins, are much more tolerant of surface-level connection, and being tolerant of that surface-level connection doesn’t make them superficial. That’s just the way they conceive of the connection with that coworker in Finland, or that co-player in Australia.
They can sustain many more of those connections because the connections are not as deep as we think they should be. Therefore, I think this notion of teaming will be interesting for the next generation of researchers, to see whether or not there is actually an age difference between the digital natives, the people that grew up with the network, and the people who did not.
John Hollenbeck: I hope you’re right. I will say that, as an evolutionary psychologist, the brain that you have in your head and the brain that I have in my head were basically developed two million years ago, and they don’t respond overnight to changes in technology. You have a hunter-gatherer brain. So do I. So I worry about it. I will tell you, my son Tim, one of my twins, he thinks he has 10,000 friends. Well, Tim, see if any of your 10,000 friends are going to come help you move this week. I’ll bet you can go through all 10,000 and see if anybody will let you use their pickup truck. He doesn’t have 10,000 friends.
I had a student come in because in one of my classes, I take their technology away. I have a class of freshmen, and it’s kind of like Independence Day when they see the aliens: “Well, if you take away their technology, they’re just like humans.” That’s what I’m learning about freshmen in college, because I’m 63 years old. Even my kids are 31. These are aliens to me. But you must take their technology away, because you’re teaching a class and a kid’s on his phone, he’s on video, and you hear the ESPN jingle going off. So I just shut it down. I go, “For 80 minutes, the 40 of us are just going to be together as individuals, shut off from the outside world, talking to each other.”
One of these students came up to me. He says, “I understand where you’re coming from. I understand we have to concentrate. I understand we have to focus.” He goes, “But you just don’t understand how good my generation is at processing parallel information coming from many different angles.” He goes, “Dr. Hollenberger, you just got to give me a chance.”
I was like, “Son, you had me right up until that Dr. Hollenberger thing, because we’ve been in class for five weeks and my name’s Hollenbeck. I put it up on the board every single time.” And so maybe we’re seeing an evolution in human history, but I’m not seeing it in some of my students quite yet. But anyway.
Daniel Serfaty: Dr. Hollenbeck, you just perhaps proved that precision is not a value in the future. [crosstalk 01:00:32] in family names, that’s less [crosstalk 01:00:35].
John Hollenbeck: Just that big ugly guy that’s up there talking. His name’s not important. You know who I’m talking about.
Daniel Serfaty: We’re talking about the future and future evolution. I know that you want to ask some questions, but more and more we are looking at teams... Again, we have to invent a whole new vocabulary, I think, because we are constrained by our own language. More and more, my colleagues and I are studying teams that are made of different entities. Some of them carbon-based, or human; some of them artificial. They can be robotics, they can be artificial intelligence bots, they can be different kinds of entities. In fact, for the past year we had a new employee at Aptima. Her name is Charlie. She was the object, and the subject, frankly, of the first episode of this podcast series on MINDWORKS.
She’s artificial. She has somewhat of a personality. She helps do all kinds of things now, more and more. She was designed to be a panel member at a conference, but now she’s also a member of proposal teams. She co-wrote a chapter on artificial intelligence with her co-creators. Kind of interesting. And so we are creating these new teams, for the military, for hospitals, even for research and development companies like my own, that are made of different types of intelligence.
Do you think we are entering a new era, or is it just same old, same old, in the sense that whatever we know about teams, even multiteam systems at some point, is going to apply whether or not some of those members are human?
John Hollenbeck: Again, I do think this is kind of a new era. I don’t think a robot or an AI is treated by other humans like a human. It’s not like, “Oh, this is a team, but now this team has a child on it. Or this is a team, but now this team has a genius on it.” I really do think it’s qualitatively different, and I know you guys at Aptima have way more experience than I do. I have one experience in AI. Basically, it was a company that was looking to compose teams, because, as you can imagine, so much of the work is outsourced now. A lot of work is outsourced to individuals.
Yet as organizations are increasingly built around teams, you might want to outsource a whole team. So the idea is, this organization’s going to basically be an outsourcer, and they wanted an AI to learn how to compose really good teams from a bunch of individual outsourcers. Okay, we have all these outsourcers that we can draw from. Who’s the best team from a team chemistry point of view?
What the AI is trying to learn is: what makes teams cohesive? What makes people good performers? And the criteria were things like leader evaluations, or team member evaluations. Do you feel like this team did a good job? Do you feel like this team is cohesive? Did you feel like this person was doing a good job? [inaudible 01:03:23]. And so the AI was trying to learn what goes into getting good evaluations and poor evaluations, but the thing was like a precocious child. It was learning all kinds of things you didn’t mean to teach it.
If there’s bias in the supervisor evaluations, the AI learns the bias. The team is a homogeneous group, and this person’s an outlier, this person’s a token? The AI learns that tokens don’t work. The people that are building this thing, they say, “Oh, it’s so beautiful, because the AI doesn’t have any human biases. It’s just objective…” It has human biases if you teach it! A child’s not born with that. It has to be taught.
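To make that mechanism concrete, here is a hypothetical sketch, with synthetic data and made-up variable names, not the actual system John describes, of how a model fit to biased ratings simply learns the bias:

```python
# Hypothetical sketch: if supervisor ratings carry a penalty against "token"
# members, a model fit to those ratings learns the same penalty.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)                          # true performance signal
token = rng.integers(0, 2, size=n).astype(float)    # 1 = lone demographic outlier
rating = skill - 0.5 * token + rng.normal(scale=0.2, size=n)  # biased label

# Ordinary least squares: rating ~ intercept + skill + token
X = np.column_stack([np.ones(n), skill, token])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
print(coef.round(2))  # ~[0., 1., -0.5]: the -0.5 token penalty is learned,
                      # because the bias was in the training data all along
```

The model is perfectly "objective" about its inputs; the problem is that the penalty was baked into the labels it was trained on.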
That was just my first experience, but we’re increasingly looking at these systems to reduce the number of people. It’s tempting to replace people with AI things, and so I would love to hear what you guys have learned. How do people react to it? Are there a lot of differences? I mean, what’s been your experience?
Daniel Serfaty: That’s a good question. We don’t have enormous experience. This is a nascent field, this notion of so-called human-AI teams. The first thing that we are discovering is that the paradigm of replacement is the wrong way to think about the problem. If you just say, “Oh, we’re going to replace that node on the diagram with an AI,” yes, that can work for some kinds of tasks, but I assume that’s less interesting. The more interesting insertion of AI in teams, as opposed to human-AI teams, is having AI do functions that you and I and team researchers have dreamed about somebody doing. Maybe an AI that is roaming the team, roaming the information system of the team, finding coordination opportunities, and then suggesting them. Or finding collaboration opportunities. Kind of an eye-in-the-sky AI that can reallocate tasks, reallocate information, even suggest new types or new forms or new directions of collaboration that are not happening, in order to, say, optimize a mission.
John Hollenbeck: Almost like matchmaking. These two people should get together. They don’t even know each other, but if they did, wow would they hit it off. Kind of like that?
Daniel Serfaty: Something like that. And why would an AI like that be able to do it? Because it’s an AI that has learned, that has absorbed a lot of data about teams. That’s where my suggestion comes from, and this is just one insight that is emerging right now: most people look at one human, one AI, and how do we optimize a dyad? As you taught us a few minutes ago, dyads are interesting; triads are much more interesting. So focusing on the one-on-one and then generalizing to larger teams is a dangerous thing to do.
We are discovering that with AI as well. The question I have is actually a plea to the team researchers in organizational psychology, in management sciences, et cetera: these essential questions for our future, for the future of work, are too important to be left only to the AI engineers. Because in the example you gave, if AIs are just given free rein to do whatever they want, they’re going to do exactly that. Principles of good work and good collaboration are going to disappear, and who knows the principles of good work and good collaboration? Researchers like you. So I think it’s very important that this field takes the plunge and says, “Okay, we’re going to study that,” because, as I say, it’s too important to be left [inaudible 01:06:57] to a developer.
John Hollenbeck: Let me give you a really good example that reinforces that. We have a paper published on wearable sensors. We do a lot of things with wearable sensors now in terms of collecting data. There was a wearable sensor company that did a lot of sociometric work to kind of capture who was talking to whom, who was physically located where, whatever. They were generating their own measures, and we were asked by the National Science Foundation to look at that and see what the applications were for both business and research.
The thing that we recognized right away was that these engineers were developing an entirely separate science of teams. They would have wearable sensor measures of cohesiveness, and what cohesiveness is would be defined by whatever the wearable sensor was doing. Of course, we couldn’t get it to correlate with any measure of cohesiveness that we have. We would measure groups and use the wearable sensor measure of cohesiveness, and for virtually every construct they had, there was a corresponding construct that existed in the science, and none of them correlated.
You would literally have a science of teams being developed by the wearable sensor people, and a 50-year science of teams in the pages of the Journal of Applied Psychology, and those worlds didn’t come together at all. Your point that you can’t leave it to the engineers? Oh my god, these guys didn’t understand the most basic ideas in psychometrics. Why would they? They would have all these metrics where they would just take a bunch of numbers and multiply them together. One of the equations had pi in it, 3.14159. As far as I could tell, the only reason we were multiplying this thing times pi was because it makes it look scientific. But then that was treated as a separate variable from some other variable, and when we tried to do a factor analysis, the system crashed, because we found out that these two variables were the exact same thing except one was multiplied by pi.
And so they correlated 1.0, and it was like, “Guys, I’m sorry.” They had generated all of these supposedly psychometric constructs without understanding that. It’s not their fault. What they do is hard. You can’t know everything. You can’t know all that stuff and also know 50 years of psychometric theory.
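A minimal sketch with synthetic data (ours, not the sensor company’s) of why that crash happens: a metric that is just another metric times pi correlates 1.0 with it, which makes the correlation matrix singular and un-invertible:

```python
# Synthetic data: a "new" sensor metric that is just an old one times pi.
import numpy as np

rng = np.random.default_rng(0)
cohesion = rng.normal(size=100)    # some sensor-derived team metric
scaled = np.pi * cohesion          # "new" metric: the same thing times 3.14159

print(np.corrcoef(cohesion, scaled)[0, 1])   # 1.0 -- perfectly collinear
corr = np.corrcoef(np.vstack([cohesion, scaled]))
print(np.linalg.det(corr))  # ~0: a singular correlation matrix, which is
                            # exactly what makes a factor analysis fall over
```

Multiplying by a constant adds zero information; the "two" variables are one variable wearing different units.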
Daniel Serfaty: I allow myself to say that sentence, “It’s too important to be left to engineers,” because I am an engineer, as you know. That’s precisely why I’m saying it: the disciplines are complementary, but you cannot substitute one for the other [inaudible 01:09:24]. I think this is probably the most important transformation, more so than the internet, perhaps, of the future of work, of what work is going to mean in the future: this notion of seamless blending of intelligences between the realm of the artificial and the realm of the human.
I think it’s even more important to think of it as an interdisciplinary enterprise, which brings us back to that first project of yours and mine, many years ago, where magic happened. You called it herding cats, and it was herding cats, but when the herd worked, it was beautiful to watch, because you had network theory experts, industrial-organizational psychologists, systems engineers, mathematical modelers, experimental psychologists, eventually all working around the same questions. They had different answers, or different methods, [crosstalk 01:10:20] question.
John Hollenbeck: And it took time. I gave you a lot of credit, because you were kind of the physical leader of that group. But Bill Vaughn, as you know, Bill Vaughn really deserves a lot of credit for that group [crosstalk 01:10:31].
Daniel Serfaty: … because Bill Vaughn, who is no longer with us, was a division leader at the Office of Naval Research who had the vision to bring together multiple universities, multiple experts, top experts in their fields, to study the notion of adaptive organizations and adaptive teams.
John Hollenbeck: And so he brought in the math modelers, the social network people, the [inaudible 01:10:52] theory people, the lab people, and he was patient and gave us time. He recognized in the beginning, it’s just going to be parallel play. Each of these guys is working on the same problems, and there’s not going to be a lot of interaction. It’s just going to be almost like children playing next to each other. But over time, because he was patient, eventually these things fed into each other and we were mathematically modeling some of the theories that we were building.
I just can’t give Bill Vaughn enough credit for that to have both the vision and the patience to let that thing happen, because it took a lot of time for that to happen.
Daniel Serfaty: Yeah, it takes vision, and many of us have been working with the government for a long time, with different agencies within the government. From time to time you have a visionary leader like that, who can sustain the effort and understand that these things take time. In what we’re talking about, this notion of blended-intelligence work and the future of teams in that environment, the paradox is that one side of the equation evolves at such a fast speed. I’m talking about artificial intelligence, deep learning, where knowledge becomes obsolete within 18 months or something like that. That’s the big paradox, the big challenge for this new enterprise, I think: to synchronize ourselves to the fact that when we say artificial intelligence can do X, X can become 10X in the next year.
Therefore that changes the problem, and that changes probably the solutions, too.
John Hollenbeck: And then as you know, Daniel, we know where this goes. Eventually, the artificial intelligence recognizes that the only threat to its existence is the humans. Dude, it’s like you’re building Skynet. Do you even recognize that you’re building Skynet? I mean, this movie always ends the exact same way. How come nobody sees it?
Daniel Serfaty: Maybe then we need researchers like you, and the next generation of researchers that you taught and trained, to prevent us from reaching a Skynet kind of model. But John, I have time for one last question. I want you to just briefly [inaudible 01:12:55] on a prediction about what we just talked about, besides Skynet as a prediction, which [inaudible 01:13:02]. What do work teams and multiteam systems look like in 10 years? What are some of the things that you see?
John Hollenbeck: It’s hard for me to look that far into the future, but I will tell you what we’re doing, what we’re committed to, and what we believe. We do believe that the reason we have so many team-based structures in Western societies is that it’s impossible for businesses to compete on cost. Competing on cost happens in nations where the labor standards are so low, we can’t possibly go there. Therefore we must compete on differentiation, we must compete on speed.
All of the things that pushed us [inaudible 01:13:37] job [inaudible 01:13:38] individual who could work alone, all by himself, without having to talk to other people. If it doesn’t get roboticized, it’s going to be sent offshore so far you’ll never see it again. And so there are just increasing pressures in Western societies to be faster, more creative, more differentiated. You cannot compete on costs. So that’s where we’re going. If that’s where we’re going, then we’re going to be doing multiteam systems, and as I said before, we have a science of stand-alone teams for a really good reason: it’s hard to do research on multiteam systems. That’s a problem.
I am part of that problem, and I will tell you specifically why. In the beginning there was multiteam research, and multiteam systems were like two teams of two, or two teams of three. I will tell you, in the beginning, people that you know, smart people like Ed [Salas 01:14:26], people like Steve Kozlowski, were saying, “There’s no such thing as a multiteam system. These things are just teams.” They were right, because two teams of two is not really a multiteam system. It requires size, and it requires specialization.
We did a program that was sponsored by AFOSR, where we worked with captains and squadron officers. We built one of the largest databases ever on multiteam systems, with 15 or 16 people each. We would go to AFOSR, and I would teach in the morning. We had 450 people. And then we would run 31 teams, 15-person teams, in the afternoon. We would do that for three days, and we did that for seven years. It almost killed me. But this is not the kind of paradigm that the average person can do.
I was at a conference one time, and I literally overheard somebody say, “Oh, Hollenbeck ruined multiteam systems. Because now you can’t do it with two teams of two, or two teams of three. You have to have 80 teams of 15 people, which nobody can do.” He says, “Hollenbeck’s literally put the industry out of business, because he has a paradigm that nobody else can do. He has access to these 50,000 captains or whatever.” And so I did feel a little bad.
We just got a grant today, which I’m very excited [inaudible 01:15:35] about, with ARI. The purpose of the grant is to build a National Infrastructure for Multiteam Research, to lower the bar. This will be housed in the beginning at Michigan State, but later we want to distribute it. The idea is, we will have a system where, if you show up, Daniel, with 80 teams, and Debra shows up with 80 teams, and I show up with 80 teams, which most team researchers can do, we are going to put you into a multiteam system. All of a sudden, people that couldn’t do multiteam system research before can. All this data will become public; all of this data will be in a repository.
Obviously there’s going to be some negotiation, because you have this angle that you want to study, and Debra has this angle, and I have mine. But we will use those angles to try to create, “Oh, here’s a good multiteam system. Daniel and Bill [inaudible 01:16:23], they want to do this. We should put these guys together and we’ll run 80 of those.” [inaudible 01:16:27].
Now, eventually... The first three years, it’s just Michigan State, Penn State, and Arizona State. That’s my colleague Jeff [inaudible 01:16:34] at Arizona State, and Steven Humphrey at Penn State. A lot of teamwork is going on there. In the third year we’re looking to branch out, and we’re going to try to find three other universities that will partner with us. Beyond that, we’d really like to make this distributed. I’m 63, Daniel. This would be my legacy, baby. If we can create a national infrastructure for MTS [inaudible 01:16:52] that I can walk away from, and that thing’s self-regulating and self-sustaining and self-operating? We will have a scientific evidentiary base for multiteam systems that we do not have now, and will not have without this kind of infrastructure.
Again, Greg [Roark 01:17:09] is basically supporting this. I do think he gets it. He gets that this is going to be a legacy that if we build it, they will come. If they come, we’ll have something that we wouldn’t have had four years ago. If I’m responsible for destroying MTS research, perhaps I can be at least partially responsible for lowering the bar and getting people back in it with the help of our friends at ARI.
Daniel Serfaty: I think this is such an exciting prospect. Thank you for sharing that with us. Congratulations, first, on obtaining this grant with your colleagues at the other universities. I think if anybody can do that, John, it’s you, because you have enough reach to the right, to the left, in terms of the different types of research that are out there. This is part, for our audience, of the larger trend in scientific research in which massive collaboration, multi-institution collaboration, will provide the data and bring the data to a much larger, worldwide audience.
We’ll have a podcast in the future about the future of those massive collaborations. But John, thank you so much for sharing all these wonderful stories with us today. Best of luck on that new enterprise. Maybe in a couple of years I’ll have another podcast with you about how you succeeded.
John Hollenbeck: I want to thank you for the lovely gift of this microphone. I’m going to keep it and cherish it. I’m going to put it next to my Aptima mugs that I took from your office several years ago. I love the Aptima mugs. Just kidding, my friend. I’ll mail that one back. I’m just kidding.
Daniel Serfaty: Thank you, John.
Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS podcast, and tweet us @mindworkspodcast, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely, whose name you’ve heard several times today. My audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.