MINDWORKS Season 2 Transcripts

Daniel Serfaty: Welcome to the MINDWORKS Podcast. This is your host, Daniel Serfaty. This week, we will explore the far boundaries of technology with a topic that is familiar to all of the audience, because literature and the film industry are full of those stories. Usually those stories turn tragic, from the Golem of Prague to Frankenstein's monster, to HAL in 2001: A Space Odyssey, to the Terminator. There is always this extraordinary artificially intelligent being that at some point turns against the human, and there is hubris and fear, and folks have this image of uncontrolled, and that's an important word, robots or artificial intelligence. There is a lot of positive in all that, but also a lot of warning signs. And we will explore that with my two guests today. We are lucky, we are really fortunate, to have two people who've been thinking about those issues while building very successful careers in technology.

And my first guest is Dr. William Casebeer who is the Director of Artificial Intelligence and Machine Learning at Riverside Research Open Innovation Center. He has decades of experience leading interdisciplinary teams to create solutions to pressing national security problems at Scientific Systems Company, the Innovation Lab at Beyond Conflict, Lockheed Martin Advanced Technology Lab, and at the Defense Advanced Research Projects Agency or DARPA.

My second guest is Mr. Chad Weiss. And Chad is a senior research engineer at Aptima Inc. So full disclosure, he’s my colleague and I work with him on a weekly basis. Chad focuses primarily on user experience and interaction design. He comes to us from the Ohio State University where he studied philosophy and industrial and systems engineering, focusing on cognitive systems engineering. Now that’s a combination that is going to be very useful for today’s discussion.

So Chad and Bill, we're going to go into exploring unknown territories. And I know you probably have more questions than answers, but asking the right questions is also what's important in this field. Both of you are very accomplished engineers, scientists, and technologists, but today we're going to talk about a dimension that is rarely approached when you study engineering. It's basically the notion of ethics, the notion of doing good as well as doing well, as we design those systems.

And specifically about systems that are intelligent, that are capable of learning, of initiative. And that opens a whole new domain of inquiry. And our audience is very eager to understand, even beyond the Hollywood lore, what the issues really are. So what we're talking about, generally speaking, is really the future of work, of war, of transportation, of medicine, of manufacturing, in which we are blending, basically, different kinds of intelligences, artificial and human. And we are at a moment of a perfect intersection between technology and philosophy. Let's call it by its name, ethics. The ancient Greek philosophers studied ethics way before formal principles of engineering and design. So why now? Why is it important now, at this juncture, to understand and study the ethics of artificial intelligence? Why now? Bill?

Bill Casebeer: I think there are really three reasons why it's so important now that we look at the ethics of technology development. One is that our technologies have advanced to the point that they are having an outsized effect on the world we live in. So if you think about it over the span of evolutionary timescales for human beings, we are now transforming ourselves and our planet in a way that has never been seen before in the history of the earth. And so given the outsized effect that our tools and ourselves are having on our world, now more than ever, it's important that we examine the ethical dimensions of technology.

Second, while we have always used tools, and I think that's the defining hallmark of what it means to be a human, at least in part, that we are really good tool users, we're now reaching a point where our tools can stare back. So the object stares back, as the infamous saying goes. So in the past, I might've been able to use a hammer to drive a nail into the roof, but now, because of advances in artificial intelligence and machine learning, I can actually get some advice from that hammer about how I can better drive the nail in. And that is something that is both qualitatively and quantitatively different about our technologies than ever before.

Third, and finally, given that we are having dramatic impact and that our technologies can talk back, if you will, they're becoming cognitive. There's the possibility of emergent effects. And that's the third reason why I think that we need to think about the ethics of technology development. That is, we may design systems that, because of the way humans and cognitive tools interact, do things that were unintended, that are potentially adverse, or that are potentially helpful in an unexpected way. And that means we can be surprised by our systems, and given their impact and their cognition, that makes it all the more important that we think about those unanticipated consequences of these systems that we're developing. So those are at least three reasons why, but I know Chad probably has some more or can amplify on those.

Chad Weiss: Yeah. So I think it's an interesting question, and perhaps the best answer is: if not now, when? But I also don't see this as a new phenomenon. I think that we have a long history of applying ethics to technology development and to engineering specifically. When I was in grad school, I joined an organization called the Order of the Engineer, which I believe some of my grad school mates found a little bit nerdy at the time, but it fit very well with my sort of worldview. And it's basically taking on the obligation as an engineer to operate with integrity and in fair dealing. And this dates back to, I believe, the 1920s, after a bridge collapse in Canada, when it became readily apparent that engineers have an impact on society.

And that as such, we owe a moral responsibility to the lives that we touch. In the case of AI, I think that the raw power of artificial intelligence, or these computational methods, presents some moral hazards that we need to take very seriously. And when we talk about ethics in AI, one thing I've noticed recently is that you have to be very deliberate and clear about what we're talking about. When we say artificial intelligence, if you read between the lines of many conversations, it becomes readily apparent that people are talking about vastly different things. The AI of today, or what you might call narrow AI, is much different from the way that we hypothesize something like an artificial general intelligence that has intelligence closer to what a human has. These are very different ethical areas, I think. And they both deserve significant consideration.

Daniel Serfaty: Thank you for doing a 360 on this notion, because I think the definitions are important, and those categories, Bill, that you mentioned are very relevant. I think what most people worry about today is your third point. Which is: I can design that hammer, and it may give me advice on how to hit a nail, but can the hammer suddenly take initiatives that are not part of my design specification? The notion of emergent, surprising behavior.

I mean, Hollywood made a lot of movies and a lot of money just based on that very phenomenon of suddenly the robot or the AI refusing to comply with what the human thought should be done. Let's start with an example, perhaps. If you can, pick one example that you're familiar with from the military or from medicine. It can be robotic surgery, or from education, or any domain that you are familiar with, and describe how the use of it can represent an ethical dilemma.

I'm not yet talking about the design principles. We're going to get into that, but more of an ethical dilemma, either for the designers who design those systems or for the operators who use those systems. Could you share one example? I know you have tons of them, but pick one for the audience so that we can situate at least the kind of ethical dilemmas that are represented here. Who wants to start?

Bill Casebeer: I can dive in there, Daniel, and let me point out that Chad and I have a lot of agreement about how the history of technology development has always been shot through with ethical dimensions. And some of my favorite philosophers are the ancient virtue theorists out of Greece, who were even then concerned to think about social and physical technologies and how they impacted the shape of the polis, of the political body.

It's interesting that Chad mentioned the bridge collapse. He might've been referring, correct me if I'm wrong, Chad, to the Tacoma Narrows bridge collapse, where a change in the design of the bridge, where we eliminated trusses, was what actually caused the aeroelastic flutter that led to the bridge oscillating and eventually collapsing. That's the dramatic footage that you can see on YouTube of the collapse of Galloping Gertie.

And so that just highlights that these seemingly mundane engineering decisions we make, such as "I'm going to build a bridge that doesn't have as many trusses," can actually have a direct impact on whether or not the bridge collapses and takes some cars with it. So in a similar fashion, I'll highlight one technology that demonstrates an ethical dilemma, but I do want to note that I don't know that confronting ethical dilemmas is actually the best way to think about the ethics of AI or the ethics of technology. It's a little bit like the saying from, I think it was Justice Holmes, that hard cases make bad law. And so when you lead in with a dilemma, people can immediately kind of throw up their arms and say, "Oh, why are we even talking about the ethics of this? Because there are no clear answers and there's nothing to be done."

When in fact, for the bulk of the decisions we make, there is a relatively straightforward way to design and execute the technology in such a fashion that it accommodates the demands of morality. So let me throw that caveat in there. I don't know that leading with talk of dilemmas is the best way to talk about ethics and AI, just because it immediately gets you into Terminator and Skynet territory. Which is only partially helpful.

Having said that, think about something like the use of semi-autonomous or autonomous unmanned aerial vehicles to prosecute a conflict. So in the last 20 years, we've seen incredible developments in technology that allow us to project power around the globe in a matter of minutes to hours, and where we have radically decreased the amount of risk that the men and women who use those systems have to face as they deliver that force.

So on the one hand, that's ethically praiseworthy because we're putting fewer people at risk as we do what warriors do: try to prevail in conflict. It's also ethically praiseworthy because if those technologies are constructed well, then they may allow us to be yet more discriminate as we prosecute a war. That is, to reliably tell the difference between somebody who's trying to do us harm, and hence is a combatant, and someone who isn't, and is just a person on the battlefield.

And so those are two ethically praiseworthy dimensions of being able to drop a bomb from afar: you put fewer lives at risk, you put fewer warriors at risk, and you potentially become more discriminant, better able to tell the difference between combatants and non-combatants, which morality demands if we are going to be just warriors.

However, the flip side of that is that being far removed from the battlefield has a couple of negative effects. One is that it makes you less sensitive as a human being, potentially, to the damage that you're doing when you wage war. So when you are thousands of miles away from the battlefield, it's a little bit harder for you to see and internalize the suffering that's almost always caused whenever you use force to resolve a conflict. And that can cause a deadening of moral sensibilities, in such a way that some would say we perhaps become more likely to use some of these weapons than we otherwise would if we internalized firsthand the harm that can be done to people when you drop fire from above on them.

Secondly, if we delegate too much authority to these systems, if they're made up of autonomous, semi-autonomous and non-autonomous components, then there's the likelihood that we might miss certain dimensions of decision-making that are spring-loading us to use force when we don't necessarily have to.

So what I mean by that is that there are all kinds of subtle influences on things like deadly force judgment and decision-making that we do as warriors. And let me use a homely example to drive that home. When I was teaching at the Air Force Academy, we had an honor code. The cadets all swear that they will not lie, steal or cheat, nor tolerate amongst the cadet body anyone who does. And you might think that it is a matter of individual judgment to do something that you or I might later regret when it comes to, say, preparing for a test. You might make that fateful decision to cheat on an exam, in a way that ultimately serves no one's interests: neither those who want people who know the subject matter, nor those who want individuals to be people of integrity, who don't cheat or lie.

But it turns out that when you look at the data about what leads cadets, or really any human being, to make a bad decision, a decision they later regret, there are lots of other forces that we need to take into account. And in the case of those students who cheated, oftentimes there were precipitating conditions like a failure to plan, so that they had spent several sleepless nights before the fateful morning when they made a bad decision to cheat on an exam. And so the way to build a system that encourages people to be their best selves is not necessarily to hector or lecture them about the importance of making a decision in the moment about whether or not you're going to cheat on the exam. It is also to kit them out with the skills that they need to be able to plan their time well, so they're not sleepless for several days in a row.

And it also consists in letting them know how the environment might exert influences on them that could cause them to make decisions they would later regret. So we should also be thinking about those kinds of things as we engineer these complicated systems that deal with the use of force at a distance. So I consider that to be a kind of dilemma: technologies that involve autonomous and semi-autonomous components that have upsides, because they put fewer warriors at risk and allow us to be more discriminant, but that also may deaden us to the consequences of a use of force, and might also unintentionally cause us to use force when we would otherwise decide not to, if the system took into account all of the social and psychological dimensions that support that decision.

Daniel Serfaty: Thank you, Bill. I was listening to you very intently, and this is the most sophisticated and clearest explanation of how complex the problem is, both from the designer's perspective as well as from the operator's perspective. It is not just an issue of who has control of what; there are many more contextual variables that one has to take into account when even conceiving of those systems. Chad, do you have an example you want to share with us?

Chad Weiss: Yeah. So first of all, Bill, great answer. I would advise anybody who is going to do a podcast with Bill Casebeer not to follow. The point that you bring up about remote kinetic capabilities is an interesting one. I think Lieutenant Colonel Dave Grossman covers that in his book On Killing, about the history of humans' reluctance to take the lives of other humans. And a key variable in making that a possibility is increasing the distance between the trigger person and the target, if you will. One thing that strikes me in the military context is that what we're talking about today is not new in any way. As we stated, it goes back to ancient Greece. It goes back to Mary Shelley and all of these different cultural acknowledgements of the moral hazards that are presented by our creations.

And the history of technology shows that as much as we like to think that we can control for every eventuality, automation fails. And when automation fails or surprises the user, it fails in ways that are unintuitive. You don't see automation fail along the same lines as humans. It fails in ways that we would never fail. And I think that probably goes vice versa as well.

So something that keeps me up at night is the idea of an AI arms race with military technologies, that there is an incentive to develop increasingly powerful, automated capabilities faster than the adversary. We saw the nuclear arms race, and how that put the world in quite a bit of peril. And what I am a little bit fearful of is the idea that we are moving towards AI superiority at such a pace that we're failing to really consider the implications and temper our developments in such a way that we're building resilient systems.

Bill Casebeer: Yeah, that's a really critical point, Chad, that we need to be able to engineer systems in such a way that they can recover from the unexpected. From the unexpected behavior of both the system that they're part of and unexpected facts about the environment they're operating in. And that's part of the reason why, in the United States, our doctrine presently, and praiseworthily, requires that a soldier be involved in every use of force decision.

Just because we're aware of these unknown unknowns, both in the operation of the system and in the environment it's working in. And so bringing human judgment in there can really help to tamp down the unintended negative consequences of the use of a piece of technology. Now the flip side of that, of course, and I'd be interested in your thoughts on this, Chad, is that as we use autonomy, and I agree with you that there is almost a ratchet, a type of inexorable increase in the use of autonomy on the battlefield because of its effect, you can act more quickly and perhaps deliver a kinetic solution, if you will, to a conflict quicker than you could otherwise. So for that reason, autonomy is going to increase in its use on the battlefield.

What we might want to consider, given that the object stares back, is how we engineer some of that resilience into the autonomous system itself, even if we're not allowing deadly force judgment and decision-making to take place on the autonomy side. And I think that's one reason why we need to think about the construction of something like an artificial conscience. That is, a moral governor that can help some of the parts of these complex and distributed systems consider and think about the ethical dimensions of the role they play in the system.

And I know a lot of people have a negative reaction to that idea that artificial intelligence could itself reason in the moral domain, and perhaps for good Aristotelian or Platonic reasons. For good reasons that stem from the Greek tradition that usually we only think of people as being agents. But it may very well be that as our tools start to stare back, as they become more richly and deeply cognitive, we need to think about how we engineer some of this artificial conscience, the ability to make moral judgments, the ability to act on them, even independently of a human, into the system, so that we can give them the requisite flexibility they need.

Chad Weiss: Yeah, that's a great point. It strikes me that we've really been discussing this from one side, which is what are our ethical responsibilities when developing and using artificial intelligence. There's also a question of not only what our responsibilities are towards the AI that we're developing, if in fact there are any, but what the way that we think about AI says about the human animal.

Bill Casebeer: Yeah, well, that's a really interesting point. Maybe we're spring-loaded to think that, "Oh, a robot can't have a conscience." I think that would be too bad. I think this requires a more exacting analysis of what it means to have a conscience. So we should probably talk about that, which I think of as being something like the capability to reason over and to act on moral judgments. And of course the lurking presence here is to actually give some content to what we mean by the phrase moral judgment. So what is morality? And that's the million dollar question, because we've been around that block for a few thousand years now, and I suspect that Daniel and Chad, both of you could probably give some nice thumbnail sketches of what the domain of morality consists in, but I'll give that a go, because that might set us up for more questions and conversations.

So I think of morality or ethics as really consisting of answers to three questions that we might have. We can think that any judgment or action I might take might have positive and negative consequences. So that's one theory of morality: what it means to be ethical or to be moral is to take actions that have the best consequences, all things considered. And that comes from a classic utilitarian tradition that you can find in the writings of folks like John Stuart Mill, probably the most famous proponent of the utilitarian approach to ethics.

On the other hand, folks like Aristotle and Plato were more concerned to think not just about consequences simply, but also about the character of the agent who was taking the action that produces those consequences. So they were very focused on a character-oriented analysis of ethics and morality. And in particular, they thought that people who had good character, so people like Daniel and Chad, are exemplars of human flourishing, that they are well-functioning, well put together human beings. And so that's a second set of questions we can ask about the morality of technology or of a system. We can ask what its function is, and is it helping people flourish, which is slightly different from the question of what the consequences of enacting the technology are.

And then finally, we can also think about ethics or morality from the perspective of: do we have obligations that we owe to each other, as agents, as people who can make decisions and act on them, that are independent of their consequences, and that are independent of their effect on our flourishing or our character. And those are questions that are generally ones of rights and duties. So maybe I have a right, for instance, not to be treated in certain ways by you, even if it would be good for the world if you treated me in that way, even if it had good consequences.

So that's a third strand or tradition in ethics, called the deontic tradition. That's from a Greek word that means the study of the duties that we have towards each other. And you see this in the writings of somebody like Immanuel Kant, who can be difficult to penetrate, but who really is kind of carrying the torch in the Western tradition for thinking about rights, duties and obligations that we have independent of consequences.

So those three dimensions are dimensions of ethical evaluation: questions about the consequences of our actions, questions about the impact of our actions on our character and on human flourishing, and questions about rights and duties that often revolve around the notion of consent. So I call those things the three Cs: consequence, character, and consent. And if you at least incorporate those three Cs into your questions about the moral dimensions of technology development, you'll get 90% of the way toward uncovering a lot of the ethical territory that people should discuss.

Daniel Serfaty: Thank you, Bill. I'm learning a lot today. I think I should listen to this podcast more often. As an aside, I know that you're a former military officer because you divide everything in three.

Bill Casebeer: Right.

Daniel Serfaty: That's one of the definitions. Thank you for this clarification, I think it's so important. It orders that space a little bit; we understand those dimensions a little better. I've never heard them classified the way you just did, which is very important. I want to take up your notion of an artificial conscience a little later, when we talk about possible approaches and solutions to this enormous, enormous human challenge of the future. I would go back now to challenge you again, Chad. You keep telling us that these are problems we have wrestled with almost since the dawn of humanity, that the ancient Greek philosophers struggled with these issues. But isn't AI per se different? Different qualitatively, not quantitatively, in the sense that it is perhaps the first technology, or technology suite, or technology category, that is capable of learning from its environment.

Doesn't the learning itself put us now in a totally different category? Because when you learn, you absorb, you model, you do all the things that you guys just mentioned, but you also have the ability to act based upon that learning. So does AI represent a paradigm shift here? You're welcome to push back and tell me it is just on the continuum of developing complex technologies. I want to challenge both of you with that notion that we are really witnessing a paradigm shift here.

Chad Weiss: You know, it's interesting, I would push back on that a bit. Certainly the way that AI learns and absorbs information, modern AIs, is different from traditional software methods. But the ability for a tool to learn from the environment, I don't think, is new. I think that if you look at a hammer that you've used for years, the shape of the handle is going to be in some way informed by the shape of your hand, which is certainly a very different kind of learning, if you're willing to call it learning at all. But ultimately I think that what we're seeing with AI is that it is shaping its form, in a sense, in response to the user, to the environment and to the information that it's taking in. So I don't think that it's unique in that regard.

Daniel Serfaty: Okay. I think we can agree to disagree a little bit. This podcast, by the way, for our audience, was prompted by a question that Chad asked me several months ago. Members of the audience probably listened to the first and second podcasts that focused on this artificial intelligence employee, so to speak, called Charlie, at Aptima. And there was a moment in which Charlie was fed a bunch of rap music by different artists, thousands of pieces of rap, and then came up with, that's a she, her own rap song that did not just mimic the rap songs or even the rhythms that she had heard before, but came out with a striking originality, almost.

So the question is: okay, what did Charlie learn? And by that, I mean, and this goes back to a point that Bill mentioned earlier about this notion of emergent behavior, surprising things, did Charlie just mimic and produce some kind of algebraic sum of all the music she had heard and come up with the music? Or did she find a very hidden pattern that is opaque to our human eyes, but that she was able to exploit? That's why I believe that AI is changing things, because we don't know exactly what it learns in those deep learning schemes. We think we do, but from time to time we're surprised; sometimes the surprise is very pleasant and exciting because we have a creative solution, and sometimes it can be terrifying. Do you agree with me or disagree with me? For that matter.
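
As a side note for listeners who want a concrete picture of the memorize-versus-recombine question Daniel raises, below is a minimal sketch in Python, purely illustrative and not how Charlie is actually built, of a tiny bigram language model trained on a few invented lyric lines. Even this toy learner emits lines that appear nowhere in its training data, which hints at why "what exactly did it learn?" is hard to answer for far larger models.

```python
import random
from collections import defaultdict

def train_bigram_model(lines):
    """Count word-to-word transitions in a small corpus of lyric lines."""
    transitions = defaultdict(list)
    for line in lines:
        words = ["<s>"] + line.lower().split() + ["</s>"]
        for current, nxt in zip(words, words[1:]):
            transitions[current].append(nxt)
    return transitions

def generate_line(transitions, max_words=12):
    """Sample a new line by walking the learned transitions."""
    word, output = "<s>", []
    while len(output) < max_words:
        word = random.choice(transitions[word])
        if word == "</s>":
            break
        output.append(word)
    return " ".join(output)

# Hypothetical training corpus; a real system would ingest thousands of songs.
corpus = [
    "I write my rhymes by the light of the moon",
    "I write my code and I ship it by noon",
    "the beat goes on and the crowd starts to move",
    "the code goes live and the team finds its groove",
]

model = train_bigram_model(corpus)
for _ in range(3):
    print(generate_line(model))  # lines that recombine, rather than copy, the corpus
```

A deep learning system is vastly more capable than this toy, but the same question applies: the generated output is neither a verbatim copy of the training data nor something conjured from nothing, which is exactly what makes its behavior hard to predict.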

Chad Weiss: I hope you don't mind if I shirk your question a little bit, because you brought up a couple of things in it that make me a little uneasy, not least of all that I think that my rap was objectively better than Charlie's. It had more soul in it. But in all seriousness, the concept of the artificial intelligence employee is something that gives me pause. It makes me uncomfortable, because this is one of those areas where I think we have to take a step back and ask what it reflects in the human animal.

Because if you look at the facts, Charlie is here at Aptima through no will of her own. Charlie is not paid, and Charlie has no recourse to any perceived abuse, if in fact she can perceive abuse. If Charlie starts to behave in a way that we don't necessarily like, or that's not conducive to our ends, we will just reprogram Charlie. So the question that raises in my mind is: what is it in the human that wants to create something that they can see as an equal and still have control over, still have dominion over? Because the characterization that I just laid out of Charlie doesn't sound like an employee to me; it sounds a little bit more like a slave. And I think there's some discomfort around that, at least in my case.

Daniel Serfaty: Very good point, Chad. That's something that you and I and other folks have been thinking about. Because suddenly we have this, let's call it a being, for lack of a better term. We don't have exactly the vocabulary for it. That is in our midst, that participates in innovation sessions, that writes chapters in books.

And as you said, the anthropomorphization of Charlie is a little disturbing. Not because she's not embodied, or she doesn't have a human shape, but because we use a word like employee. She has an email address, but she does not have all the rights, as you said, and all the respect and consideration and social status that other employees have. So, a tool or a teammate, Bill?

Bill Casebeer: These are great questions. And I think that I come down more like Chad on this topic in general. I don't think there's anything new under the sun in the moral and ethical domain, just because we have several thousand years of human experience dealing with a variety of technologies. And so it's hard to come up with something that is entirely new.

Having said that, I think there is a lot of background that we take as a given when we think about the human being, when we think about ourselves. If I just, from a computational perspective, consider the 10 to the 14th neurons I have in my three-pound universe here atop my spinal cord, and the 10 to the 15th connections between them, and the millions of hours of training, experience and exemplars I will have seen as I sculpt that complicated network so that it becomes Bill Casebeer, there's a lot of that going on too.

I don't know exactly how Charlie works; she may be a more traditional type of AI. But if Charlie learns, if she has some limited exposure in terms of training exemplars and sets, if she has some ability to reason over those training sets to carry out some functions, then I think Charlie might be more akin to something like a parrot. So parrots are pretty darn intelligent. They have language, they can interact with people. Some parrots have jobs. And we don't accord the parrot necessarily full moral agency in the same way that I do a 20 year old human.

But we do think that a parrot probably has a right not to be abused by a human being, or kept without food and water in a cage. And so I don't think it's crazy to think that in the future, even though there's nothing new under the sun, our AIs like Charlie might reach the point where we have to accord them parrot-like status in the domain of moral agency. Which really leads to the question about what makes something worthy of moral respect.

Daniel Serfaty: Yes, the parrot analogy is very good, because I think it reflects more the place where Charlie and her cohort of other AIs, the modern new generation of AI, are standing. And we need to think about that. We'll be back in just a moment, stick around. Hello, MINDWORKS listeners. This is Daniel Serfaty. Do you love MINDWORKS, but don't have time to listen to an entire episode? Then we have a solution for you. MINDWORKS Minis, curated segments from the MINDWORKS podcast condensed to under 15 minutes each and designed to work with your busy schedule. You'll find the Minis along with full-length episodes, under MINDWORKS on Apple, Spotify, BuzzSprout or wherever you get your podcasts.

So artificial intelligence systems, whether they are used in medicine, in education or in defense, are very data hungry. At the end of the day, they are data processing machines that absorb what we call big data, enormous amounts of past data from that field, find interesting patterns, common patterns among those data, and then use the data to advise, to make decisions, to interact, et cetera.

What are some of the ethical considerations we should have as data scientists, for example, when we feed those massive amounts of data to the systems and let them learn with very few constraints on those data? Do we have examples in which the emergent behavior from using those data for action has led to some questions?

Chad Weiss: That's a great question. And there are a lot of issues here. Some of them are very similar to the issues that we face when we are dealing in research on human subjects. Things like: do the humans that you're performing research on benefit directly from the research that you're doing? I've used the phrase moral hazard a few times here, and it's probably good to unpack that. So when I say moral hazard, what I'm referring to is when an entity has an incentive to take on higher risk because they are not the sole holders of that risk; in some sense it's outsourced, or something of that nature.

So some specific examples we have are things like image recognition for the purpose of policing, where we know that, because of the data sets that some of these things are trained on, they tend to be much less accurate when looking at someone who is African-American or, in many cases, women. As a result of being trained on a data set of primarily white males, they are much less accurate when you're looking at some of these other groups.

And there are some very serious implications to that. If you are using something like image recognition to charge someone with a crime, and it turns out that your ability to positively identify from image recognition is significantly lower with certain demographics of people, then you have an issue with fairness and equity. I believe it was Amazon who was developing an AI for hiring, and they found that no matter what they did, they were unable to get the system to stop systematically discriminating against women.

And so I think after something like $50 million of investment, they had to pull the plug on it, because they just could not get this AI to stop being chauvinist, more or less. So I think those are examples where the data sets that we use and the black box nature that you alluded to earlier come into play and present some really sticky ethical areas in this domain.
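
To make the disparate-accuracy concern concrete, here is a minimal sketch of the kind of disaggregated audit one might run on any classifier before deployment. The group labels, records, and the 10-point threshold are invented for illustration, not drawn from the systems Chad describes; the point is simply that a single overall accuracy number can hide very different error rates across groups.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy separately for each demographic group.

    Each record is a (group, true_label, predicted_label) tuple."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical audit records; a real audit would use the deployed model's outputs.
records = [
    ("group_a", "match", "match"), ("group_a", "no_match", "no_match"),
    ("group_a", "match", "match"), ("group_a", "no_match", "no_match"),
    ("group_b", "match", "no_match"), ("group_b", "no_match", "no_match"),
    ("group_b", "match", "match"), ("group_b", "match", "no_match"),
]

rates = accuracy_by_group(records)
print(rates)  # e.g. {'group_a': 1.0, 'group_b': 0.5}

# Flag disparities above a chosen threshold before the system ever reaches deployment.
if max(rates.values()) - min(rates.values()) > 0.1:
    print("Warning: accuracy gap across groups exceeds 10 percentage points")
```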

Daniel Serfaty: These are very good, very good examples. Bill, can you add to those law enforcement and personnel management and hiring examples? Do we have other examples where the data itself is biasing the behavior?

Bill Casebeer: I think we do. One of the uses of artificial intelligence and machine learning both is to enable prediction, and the ethical dimensions of prediction are profound. So you and Chad have both alluded to the possibility that your training data set may, perhaps unintentionally, bias your algorithm so that it makes generalities that it shouldn't be making, stereotypes, classic stereotypes. So I know a professor, Buolamwini, at MIT has done studies about bias and discrimination present in face recognition algorithms that are used in surveillance and policing.

I think that same kind of use of stereotypes can, for example, lead, as it has with human doctors, to medical advice that doesn't work well for certain underprivileged groups or minorities. So if your medical research and experimentation to prove that a certain intervention or treatment works began mostly with white males, then whether or not it will work for the 25 year old female hasn't really been answered yet, and we don't want to over-generalize from that training dataset, as our AIs sometimes can do.

The example that comes to mind for me, like the ones Chad mentioned, is Tay Bot. Tay was an AI chatter bot that was released by Microsoft Corporation back in 2016, and its training dataset was the input that it received on its Twitter account. And so people started to intentionally feed it racist, inflammatory, offensive information, and it learned a lot of those concepts and stereotypes. It started to regurgitate them back in conversation, such that they eventually had to shut it down because of its racist and sexually charged innuendo. So I think that's a risk in policing, for some defense applications, if you're doing security clearances using automated algorithms, if you're determining who is a combatant based on a biased training dataset; for medicine, for job interviews, really for anywhere where prediction is important.

The second thing I would point out, in addition to data sets that can cause bias and discrimination, is that people like Nicholas Carr and Virginia Postrel have pointed out that sometimes you get the best outcomes when you take your native neural network and combine it with the outputs of some of these artificial neural networks. And if we over-rely on these AIs, we may sideline or shirk this very nicely trained pattern detector that has probably a lot more training instances in it than any particular AI, and an ability to generalize across a lot more domains than a lot of AI systems. And so Nick Carr makes the point that one other ethical dimension of prediction is that we can over-rely on our AIs at the expense of our native prediction capabilities. Every day, AI is making people easier to use, as the saying goes.
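
As a rough illustration of Bill's point about combining the "native" network with the artificial one rather than over-relying on either, here is a minimal sketch of a weighted blend of a human estimate and a model estimate. The radiology-style numbers and the weights are invented for illustration, not taken from any real study; the design idea is that large disagreement between the two is itself a signal worth surfacing rather than hiding.

```python
def blended_estimate(human_prob, model_prob, human_weight=0.5):
    """Blend a human's probability estimate with a model's.

    Both inputs are probabilities in [0, 1]; the weight reflects how much
    trust the team places in human judgment for this kind of case.
    """
    model_weight = 1.0 - human_weight
    return human_weight * human_prob + model_weight * model_prob

# Hypothetical radiology-style example: the model has seen far more images,
# but the human brings context the model never had.
human_says = 0.30   # clinician's estimate that a scan shows a lesion
model_says = 0.80   # classifier's estimate on the same scan

print(blended_estimate(human_says, model_says, human_weight=0.4))  # prints 0.6

# A large disagreement is itself a signal worth surfacing for review.
if abs(human_says - model_says) > 0.3:
    print("Disagreement is large: route this case for a second look")
```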

Daniel Serfaty: Yes, well, that's a perfect segue into my next question, which has to do with, as we move towards the future and towards potential solutions to the many very thoughtfully formulated problems that you shared with us today, the major recent development in research, which is to apply the knowledge that we acquired over many years in the science of teams and organizations to understand the psychology and the performance of multiperson systems, and I use that term in particular because now we use it as guidelines for how to structure this relationship you just described in your last example, Bill, by combining basically human intelligence and AI intelligence into some kind of [inaudible 00:42:11] intelligence that will be better than perhaps the sum of its parts, in which each one checks on the other, in a sense.

And as a result, there is some kind of an [inaudible 00:42:20] match that will produce higher levels of performance, maybe safer levels of performance, maybe more ethical levels of performance. We don't know; all these are questions. So could you comment for a second on both the similarities and differences between classical teams that we know, whether they are sports teams, or command and control teams, or medical teams, and those new, we don't have a new word in the English language, we still call them teams, of humans and artificial intelligences blended together? Similarities and differences: what's the same, what's different, what worries you there?

Chad Weiss: This is another interesting area. A lot of this hinges upon our use of language. And this is the curse of really taking to philosophy of language at a young age. There's a question here of what we mean when we say teammate, what we mean even when we say intelligence, because machine intelligence is very different from human intelligence. And I think that if you are unfamiliar with the domain, there may be a tendency to hear artificial intelligence and think that what we're talking about maps directly to what we refer to when we talk about human intelligence. Very different.

Daniel Serfaty: Language is both empowering but also very limiting, Chad. That's true. We don't have that new vocabulary that we need to use, so we use what we know. That's the story of human language, and then eventually that evolves.

Chad Weiss: Thank you.

Bill Casebeer: Language generates mutual intelligibility and understanding. So if you're interacting with an agent that doesn't have language, mutual intelligibility and understanding are really hard to achieve.

Chad Weiss: Yeah. And then when we're talking about teammates, when I use the word teammate, it comes packaged with all of these notions. When I consider a teammate, I'm thinking of someone who has a shared goal, who has a stake in the outcomes. If I have a teammate, there's a level of trust that this teammate, one, doesn't want to fail, that this teammate cares about my perception of them and vice versa, and that this teammate is going to share in not only the rewards of our success, but also the consequences of our failures.

So it's hard for me to conceptualize AI as a strictly defined teammate under those considerations, because I'm not confident that AI has the same sort of stake in the outcomes. Often you hear the question of whether it's ethical to unplug an AI without its consent. And I think that it's very different, because what we're doing there is inherently drawing an analogy to depriving a human of life. You're turning them off, essentially. Turning off an AI is not necessarily the same as a human dying. You can switch it back on, you can copy and duplicate the code that runs the AI. So there's a really interesting comparison between the stakes of a set of potential outcomes for a human and for an AI.

Daniel Serfaty: I like the richness of your perspective on this notion, Bill, especially the ethical dimension of it, but I am very optimistic, because of those very questions that we're asking right now, when we pair a radiologist, for example, with an AI machine that has read millions and millions of MRI pictures and can actually combine that intelligence with that of the expert to reach new levels of expertise. As we think through this problem as engineers, as designers, it makes us understand the human dimension even deeper. What you reflected on right now, Chad, about what it means to be a member of a team and what a teammate means to you, that thinking has been forced on us because we are designing artificial intelligence systems and we don't know what kind of social intelligence to embed in them. So my point is that there is a beautiful kind of going back to really understanding what makes us humans special, unique. What do you think about that?

Bill Casebeer: That's really intriguing, Daniel. I mean, when I think about the similarities and differences between AIs and people on teams, some similarities that we share with our artificial creations are that we oftentimes reason the same way. So I use some of the neural networks I have in my brain to reason about certain topics in the same way that a neural network I construct in software or in hardware reasons. So I can actually duplicate things like heuristics and biases that we see in how people make judgments in silico, if you will. So at least in some cases we do reason in the same way, because we're using the same computational principles to reason.

Secondly, another similarity is that in some cases we reason in a symbolic fashion, and in some cases we reason in a non-symbolic fashion. That is, in some cases we are using language and we're representing the world and intervening on it. And in others, we're using these networks that are designed to help us do biological things, like move our bodies around or react in a certain way emotionally to an event. And those may be non-symbolic. Those might be more basic in computational terms, if you will.

And I think we actually see that in our silicon partners too, depending on how they're constructed. So those are a couple of similarities, but there are some radical differences, as you were just picking up on, Daniel, I think. One is that there is a huge general purpose AI context that is missing. You and Chad are both these wonderful and lively people with these fascinating brains and minds. You've had decades of experience and thousands of training examples and hundreds of practical problems to confront every day. That's all missing, generally, when I engage with any particular artificial intelligence or cognitive tool; it's missing all of that background that we take for granted in human interaction.

And secondly, there's a lot of biology that's just missing here. For us as human beings, our bodies shape our minds and vice versa, such that even right now, even though we're communicating via Zoom, we're using gestures and posture and eye gaze to help make guesses about what the other person is thinking, and to seek positive feedback, and to know that we're doing well as a team. And a lot of that is missing for our AI agents. They're not embodied, so they don't have the same survival imperatives that Chad mentioned earlier. And they also are missing those markers that can help us understand when we're making mistakes as a team, markers that for us human beings have evolved over evolutionary timescales and are very helpful for coordinating activity, like being mad or angry when somebody busts a deadline. So those are all supremely important differences between our artificial agents and us humans.

Daniel Serfaty: So taking off on that, are you particularly worried about this notion of, it's a long verb here, but basically anthropomorphizing those artificial intelligences and robots by giving them names, giving them sometimes a body? The Japanese are very good at actually making robots move and blink and smile like humans, for example, or maybe not quite like humans, and that's the issue. And are we worried about giving them a gender, like Charlie, or other things like that, because it creates an expectation of behavior that is not met? Tell me a little bit about that before I press you about giving us all the solutions to solve all these problems in five minutes or less, but let's explore that first, anthropomorphizing.

Bill Casebeer: I'll start. It's a risk for sure, because of that background of our biology and our good general purpose AI chops as people; we take that for granted and we assume it in the case of these agents. And when we anthropomorphize them, that can lead us to think that we have obligations to them that we actually don't, and that they have capabilities that they don't actually possess. So anthropomorphization can help enable effective team coordination in some cases, but it also presents certain risks if people aren't aware of where the human-like nature of these things stops. Before we think, "Oh, and this is something that rebuts Chad and Bill's assumption that there's nothing new under the sun," I would say we actually have a body of law that thinks about non-human agents, our obligations to them and how we ought to treat them. And that's corporate agency in our legal system.

So we have lots of agents running around now, taking actions that impact all of our lives daily. And we have at least some legal understanding of what obligations we have to them and how we ought to treat them. So IBM, or name your favorite large corporation, isn't composed exclusively of people. It's this interesting agent that's recognized in our law, and that has certain obligations to us, and we have certain obligations to it. Think of Citizens United. All of those things can be used as tools, as we work our way through how we treat corporate entities, to help us maybe figure out how we ought to treat these agents that are both like and unlike us too.

Daniel Serfaty: Thank you. Very good.

Chad Weiss: Yeah. I think I’m of two minds here on the one hand-

Daniel Serfaty: Something an artificial intelligence will never say.

Chad Weiss: On the one hand, as a developer of technologies, and because of my admittedly sometimes kooky approach to collaborative creativity, I think that there is a sense of value in giving the team a new way to think about the technology that they're developing. I often encourage teams to flip their assumptions on their heads, and to change the frame of reference with which they're approaching a problem, because I think this is very valuable for generating novel ideas and remixing old ideas into novel domains.

It's just key to innovation. On the other hand, I think that as shepherds of emerging and powerful technologies, we have to recognize that we have a much different understanding of what's going on under the hood here. And when we are communicating to the general public, or to people who may not have the time or interest to really dive into these esoteric issues that people like Bill and I are sort of driven towards by virtue of our makeup, I think that we have a responsibility to help them understand that this is not exactly human, and that it may do some things that you're not particularly clear on.

My car has some automated or artificial intelligence capabilities. It's not Knight Rider or KITT, if you will. But it's one of those things where, as a driver, if you think of artificial intelligence as like human intelligence that can fill in gaps pretty reliably, you're putting yourself in a great deal of danger. There are spaces, as I'm driving through this area, if I'm driving to the airport, I know there's one spot right before an overpass where the car sees something in front of it and it slams on the brakes. This is very dangerous when you're on the highway. And if you're not thinking of this as something with limited capabilities to recover from errors or misperceptions in the best way possible, you're putting your drivers, your drivers' families, your loved ones at a great deal of risk, as well as other people who have not willingly engaged in taking on the artificial intelligence. There are other drivers on the road, and you're also putting their safety at risk as well, if you're misrepresenting in a way, whether intentionally or unintentionally, the capabilities and the expectations of an AI.

Daniel Serfaty: It's interesting, guys, listening to these examples from inside your car, or in war and combat situations, et cetera. I cannot help but go back to science fiction, because that's really our main frame of reference. Quite often, in discussions even with very serious medical professionals or general officers in the military, they always go back to an example of a scene in a movie, because they want a piece of that, or because that becomes a kind of warning sign. Whether it's about autonomous artificial intelligence or some interesting pairing between the human and the artificial intelligence system, many people cite the Minority Report movie, in which there is that interaction between Tom Cruise, I believe, and the system. Do you have a favorite one, a kind of point of reference when you think about these issues, from the movies? A quick one; you're only entitled to pick one each.

Bill Casebeer: Well, that's tough. So many great examples, ranging from Isaac Asimov and the I, Robot series of stories on through to probably my favorite, which is HAL 9000 from the 2001 movie. So the Heuristically Programmed Algorithmic Computer. And it's my favorite not only because it was built at the University of Illinois, where my son's finishing his PhD in computer science, but also because it highlights both the promise and the peril of these technologies. The promise that these technologies can help us do things we can't do alone as agents, like get to Jupiter. But the peril also that if we don't build them with enough transparency, intelligibility, and potentially with a conscience, they might take actions that we otherwise don't understand, like murdering astronauts en route to the planet. So I think of HAL when I think of AI and its promise and peril.

Daniel Serfaty: It's striking, Bill, isn't it, that the movie was made more than 50 years ago, five zero, and the book even earlier than that. It's pretty amazing the foresight these folks had on the danger of, basically, the decisions the artificial intelligence, HAL in that case, took on itself because it decided that it knew what was best for the collective. That's interesting. Chad, any favorite one?

Chad Weiss: Well, the same one, actually, and for similar reasons, I suppose, but having to do with the Ohio State University. It is, I think, attributable to Dave Woods, who's a professor there. Not only because Dave sees 2001 as seminal in its connection to cognitive systems engineering, but also because of my propensity to say, "I'm sorry, Dave, I'm afraid I can't do that." What I really like about this is that it's not Terminator. I have zero fears about an eventuality where we have the Terminator outcome. The reason Terminator works is because it's great for production value. I don't think autonomous armed rebellion is what we need to worry about here. I think it's a little bit more about the inability, or the imperfection, I guess, with which humans can actually predict the future and foresee all of the potential outcomes.

Daniel Serfaty: So let's go back to that, actually. Because I, as a member of the audience, would like to know: okay, there are a lot of these dilemmas, these near-disasters, these predictions that things are going to turn into a nightmare. There was a recent article in the Washington Post entitled "Can Computer Algorithms Learn to Fight Wars Ethically?" that both praises the capabilities, as you did earlier, Bill, but also warns about unexpected behaviors.

Well, that makes good storytelling, but what can we do as engineers, as scientists, as designers to ensure that the AI of the future, or the AI that is being designed now, will behave according to the design envelope we have engineered it for? You brought up that brilliant idea earlier, this notion of an artificial conscience, maybe a metacognition of sorts on top of the artificial intelligence, that can regulate it, maybe independently of it. What else? What can we do, even practically? What guidance do we have for the engineers in our audience to minimize the occurrence of unpleasant surprises?

Bill Casebeer: It's more than a million dollar question. You've been asking a series of those, Daniel. That's a $10 million question. Like Chad, actually, I'm not worried about Terminators. I'm not worried about Cylons from Battlestar Galactica. I'm more worried about systems that have unintended emergent effects, or miniature HAL 9000s, that is, systems that are designed to reason in one domain that we try to apply in another, and they break as a result.

So in order to prevent that kind of thing from happening, I think there have to be three things. I'm thinking in PowerPoint now, as you mentioned earlier. First, I think better self-knowledge will help us. So it's not necessarily a matter of engineering as such, but rather a matter of engineering for the types of human beings we are. The best way to engineer a hammer that doesn't hit my thumb when I strike a nail is just for me to know that I don't use hammers as well when I'm tired. So maybe I ought to put the hammer down when I'm trying to finish my roof in the middle of the night. So first, better self-knowledge.

Second, better modeling and simulation. So part of validation and verification of the use of technologies is to forecast performance at the ragged edge, if you will. And I think we're only now really getting to the point where we can do that, especially with human machine teams. And so part of what we're doing in my lab at Riverside is working on virtual testbeds that let us put algorithms into partnership with humans in ecologically valid, or somewhat realistic, environments so we can stress test those in the context of use. I think that's very important, better modeling and simulation.

Finally, third, I think we do have to be sensitive to how we build capacities into these machine teammates that let them reason in the moral domain. Not necessarily so they can strike off on their own, but more so they can be mutually intelligible with us as teammates. So they can say, "Hey Bill, I know you told me to take action X, but did you really intend to do that? Because if I take action X, I might unintentionally harm 10 innocent non-combatants in the target radius." And I would say, "Oh, thank you. I'm task saturated as a human right here. I didn't have that context. And I appreciate that you surfaced that." That's why I think it's so important that we design into our AI agents some type of artificial conscience, that is, the ability to know what is relevant from a moral perspective, the ability to have the skill to act on moral judgments, the ability to make the moral judgments themselves, and the ability to communicate with us about what those judgments consist in.

So that framework that I told you about comes from a moral psychologist, a friend who I should acknowledge, Jim Rest, who talks about moral sensitivity, moral judgment, moral motivation, and moral skill, all as being necessary parts of being the kind of creature that can make and act on moral decisions. And so along with Rest and people like Paul and Patricia Churchland, my mentors at the University of California, I think we should think about giving our tools some of those capacities too, so that they can be effective teammates to us human beings.

Daniel Serfaty: Fascinating. That’s super. Chad, you want to add your answered to Bill’s, especially as a, I know how much you care about design, about thoughtful design. You being the evangelist, at least that Aptima about putting design thinking or thoughtful design in everything we do. What guidance do you have for the scientists and the designers and the data scientist and the engineer in our audience about adding to what Bill said to prevent that surprises or minimize the occurrence of them, at least?

Chad Weiss: I can’t tell you how much I wish I could give a clear, satisfying and operational answer to that. What I can give you is, what I see is one of the biggest challenges here. And I think that is we need to pay particular attention to the incentive structures here. We need to convince the developers of technology, because I think that we rely often on external bodies, like government to step in and sort of legislate some of the ethical considerations and certainly in free market capitalism, there is an incentive to operate as well as you can within the confines of the law to maximize your self-interest.

In this arena, government is not going to be there. They're not going to catch up. They move too slowly, technology moves too fast. And so we have a unique responsibility that we may not be as accustomed to taking on when we're talking about these types of technologies. We need to find ways, as leaders within organizations, to incentivize some degree of sober thought, of, I think I've used the phrase with you, tempering our action with wisdom. And consideration of what happens when something that we produce fails, when it has adverse outcomes. And I don't mean to talk only about adverse outcomes, because a huge part of this discussion should be the positive outcomes for humanity, because this is by no means a bleak future. I think that there's a massive amount of potential in artificial intelligence and advanced computing capabilities. But we have to be aware that we bear responsibility here and we should take that with great seriousness, I guess. I don't even know the word for it, but it's critical.

Bill Casebeer: It’s precious. I mean, to foot stomp that Chad, that is a beautiful insight and a significant piece of wisdom. If we could just rely on our character development institutions, our faith traditions, our families, so that we push responsibility for moral decision making down to the individual level. That’s going to be the serious check on making sure that we don’t inadvertently develop a technology that has negative consequences so that we can harvest the upside of having good artificial teammates. All the upsides that Chad just mentioned, such a profound point. I am in debt to you, sir for bringing us to it.

Chad Weiss: Part of the reason that people like me exist, that you have user experience designers, is because there is a tendency when we're developing things to externalize the faults, the blame. Something you're building doesn't work? Maybe we blame the users. They don't understand it. What is it, PEBKAC, the problem exists between keyboard and chair? This is really dangerous when you are talking about something as powerful as AI. And so, knowing that tendency exists, and that UX is as big a field as it is, I think we need special consideration here.

Daniel Serfaty: I really appreciate those words of wisdom, and the idea of just going back to basic human values as a way to think about this problem. And certainly I cannot thank you enough for having shared these insights, really new insights that make us think in different directions. Which brings me to this: many folks in the corporate environment are thinking about adding certain roles, either a chief AI ethicist or a chief ethics officer, or even having subspecialties within engineering around the ethical and societal responsibilities of AI and computing in general. MIT has a new school of computing, for example, in which that particular branch is being emphasized.

But I believe, like you, that we need to go back to first principles as innovators, as inventors, as scientists and engineers, and consider ethics the same way we consider mathematics. When we do our stuff, it's part of what we do. It's not a nice-to-have, it's becoming the must-have. So thank you very much, Bill. Thank you, Chad, for your insights and your thoughts and your thoughtful considerations when talking about these important topics of technology and ethics, and AI and ethics.

Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast and tweet us @mindworkspodcast, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Inc. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.

Daniel Serfaty: Welcome to MINDWORKS. This is your host Daniel Serfaty. Today, we will explore together how science and human performance data are revolutionizing the world of education and training. This is a big task. Over my thirty-plus years of working to optimize and improve human performance in all kinds of mission-critical settings, I had the privilege to work with one of my mentors, Lieutenant General John Cushman, who passed away in 2017. I learned a lot from him and there is one phrase that he used to say that really stands out to me the most as being extremely relevant to today's challenges of optimizing training, optimizing education, and understanding human performance. General Cushman was not just a highly decorated veteran of the Vietnam War, but later in his career, when I was working with him, he was very instrumental in really revolutionizing the way the US military thinks about human performance when it comes to training our top commanders.

This is when he and I worked on a project trying to understand the nature of expertise in command. We were working at Fort Leavenworth—here you had one retired General, General Cushman, training other Generals that were ready to go to the field and teaching them how to do their job—evaluating them, coaching them, and understanding what needed to be done for them to be Field Generals. 

General Cushman used to say two things that stay with me to this day and have been the philosophy behind a lot of the work we have been doing at Aptima. It was a two-part statement. First part: "You cannot improve what you don't measure." Basically, the false claim of, "Oh, things went well," is not good enough if you cannot measure the before and the after of the training intervention. But I think it's the second part of his statement that is most important if you really want to improve human performance. If the first part was, "You can't improve what you don't measure," the second part is, "But you cannot measure what you don't understand."

Our job is to first listen to the field commander, the surgeon, the law enforcement officer, about their experience and their expertise, and then take that and frame it within what we know about human science, so that in the end we understand that out of the many, many things that you could measure during action, during performance in the field, there are only a handful that truly matter. How do you find that handful? So that if you measure each one of these dimensions, these key dimensions, you know when to train your people, when to put people in a situation, when to give them a technology to improve their performance. And you know also whether or not those five or six dimensions are really going to improve. Are you going to move the needle on those key performance dimensions that emerge from that notion of understanding before you measure, and understanding through human expertise in the field and human sciences in the lab? It's not easy to do that. It's actually very hard to do it well, because when we try to identify this handful of dimensions of performance—sometimes in the military they call that "measures of effectiveness" and in the management literature they sometimes call that "KPIs" or "key performance indicators"—those dimensions that really matter, we tend to either ignore a lot of so-called common wisdom or give way too much weight to things that are not truly relevant to the one thing that counts the most, which is learning.

But today we have some very special guests who are going to help us unpack all of this. They are three experts in human performance who are also my colleagues at Aptima. Dr. Shawn Weil is Principal Scientist and Executive Vice President of Business Strategy at Aptima. His expertise is in research areas that include social media, military command and control, advanced training, and communications analysis for organizational improvement. Courtney Dean is a Senior Scientist and Director of Products at Aptima. Courtney specializes in developing and deploying training and performance assessment solutions for individuals and teams in both live and simulated training environments. And thirdly, Evan Oster is a Scientist and Learning Solutions Architect at Aptima, whose current work centers on infusing cognitive science into performance assessment, analytics, and artificial intelligence.

So our three experts are not just academically fit, in terms of having been trained in cognitive science or organizational psychology, but they all have deep experience in implementing those principles in the field, which is really the topic of our discussion today: blending the wisdom that comes from the field together with the science that comes from the lab.

Welcome, Shawn, Courtney, and Evan. I will start by asking my guests to introduce themselves one at a time, and ask them specifically: why did they choose this particular domain, of all the domains of interest they could have chosen, the domain of human performance, in different flavors, by the way? Shawn, good morning.

Shawn Weil: Good morning, Daniel. Thank you so much for having me. You know, it's such an interesting question. I started my career thinking I was going to be a college professor, a psychology professor. I've always been really interested in human performance at an individual level, but I got a little bit tired of doing laboratory experimentation while I was in grad school. And I was adopted really by folks who were doing much more applied research in human performance and the study of human behavior in real environments. I credit David Woods at the Ohio State University for finding me roaming around campus and bringing me in. And it's been a pleasure over the past almost 20 years now to look at human performance in different domains, in a very naturalistic understanding of human interaction and performance.

Daniel Serfaty: Thank you. I understand that you were at the epicenter of a whole movement, cognitive systems engineering, which Professor Woods was leading. And you’re going to tell us more about that later.

Shawn Weil: Absolutely.

Daniel Serfaty: Courtney?

Courtney Dean: So I stumbled into human performance by a rather circuitous route that was very much self-centered. I played golf in college and was looking for opportunities to improve my game, and came across some sports psychology material. This included a course I was able to take over one summer and a series of tapes by Dr. David Cook that told some really great stories about some PGA professionals. And I listened to those tapes, and these were audio cassettes, enough times that I think they started to unspool themselves. I wanted to enter into that field, but some advisors and professors at my university guided me away from that, much like what Shawn just talked about, because they were pointing me towards the option of either being a consultant or being a professor. And at the time, I wasn't particularly interested in going into teaching.

And this consulting notion coincided with me just happening to take an industrial psychology course. And I said, "Well, there's something that's much more applied." And I got into an applied graduate program that focused an awful lot on that. That led me to a public safety selection environment, and I found that to be very interesting. And then the opportunity with Aptima came along that was essentially doing the same thing, human performance, but with the DOD. And having always thought that I was going to be a fighter pilot like my dad, I thought, "Well, heck, that's a great way to get close to that environment without actually having to put my life on the line."

Daniel Serfaty: That’s great. I didn’t know if you were those things you just told the audience. But that straddling between domains such as sports, aviation, military and public safety is very interesting because this is also our theme today. We’re going to explore basically how the power of the scientific approach and understanding what to do with the data we collect for the betterment of human performance crosses actually domains. But then Evan, how did you get to this business?

Evan Oster: So I actually started off in K-12 as a teacher, and I found that to be a very rewarding thing. But a couple of years in, I found that things were becoming the same. You may do the same tasks and jobs, and it's something that over time I wanted to change. And so I was looking for a different challenge. I love the idea of instruction and training, and I've always had a profound respect for the military. So I thought, what's a way that I could still continue doing what I love but also pull in this aspect of the military? We have so much untapped potential as humans. And being able then to draw our focus and attention to something, to be able to accomplish something new that we previously weren't able to do, I find to be really rewarding. So being able to take my background in instruction and training and bring that into the military, I think, is a great blend of these two things that I love.

Daniel Serfaty: Okay. So we have a lab scientist, a golfer, and a teacher. Sounds like the beginning of a wonderful joke, but it is actually not just a joke. It's actually the secret ingredient of this very challenging but extremely rewarding domain of studying human performance. We need that multidisciplinary curiosity and that combination of methods to crack the nut of the difficult problem that capturing human performance represents. So perhaps in one or two sentences each one of you can tell our audience: what do you do? Now we understand your background, but what do you do at Aptima? You can just pick one project or one activity that you think really represents what you do. And I'm going to scramble the order a little bit and start with Courtney. What do you do, Courtney?

Courtney Dean: So at Aptima, my primary focus is most directly human performance; you could just about put that as line one on my resume. I've been focused on developing measures of human performance in training contexts for a variety of domains almost since the first day that I stepped through the doors of the company. This involves sitting down with subject-matter experts in their respective domains and identifying what it is that differentiates somebody who's competent from somebody who's incompetent, or somebody who's excelling in their field versus someone who's not. Breaking that down to the micro level: what are the specific behaviors that we can observe that indicate that somebody possesses the knowledge, skills, and abilities that are necessary to complete said tasks?

And we’ve developed a pretty effective methodology for eliciting that information. And I’ve just run with that methodology to apply it to many different domains and utilize that to both gain an understanding about the domain that we’re focused on there, and then produce a series of metrics that those individuals can then take with them into their training environment and utilize to achieve some goodness on the part of the trainees or the learners in their environment.

Daniel Serfaty: So in a sense, you have a scalpel like a surgeon, which is the method you mentioned. And you are trying to understand, to deconstruct, the nature of mastery or expertise in the domain that you study, whether they are fighter pilots or police officers or surgeons, actually. Is that what you're saying, that you basically are an analyst who decomposes that and says, "These are the ingredients of mastery"?

Courtney Dean: Yeah. I would say that it’s a little bit less elegant than a scalpel. The truth is that it’s a little bit more along the lines of a sledgehammer and some super glue.

Daniel Serfaty: Right. Well, we’ll talk about those tools in a minute. So what is it that you do at Aptima, Evan?

Evan Oster: That’s a great question because when friends asked me that, I have a different answer each time, and it really depends on the day. When I am looking at human performance at a high level, that’s me reviewing and conducting research on training to improve human performance. But more specifically, what that looks like is working with colleagues who are experts in their field in small teams to be able to innovate some solution that satisfies a customer’s need. So that can be improving decision-making, it can be improving the instructors, it can be helping to improve the students. And I think that’s what’s really unique is you can take a look at human performance challenges from multiple different perspectives and multiple different ways. And each time you improve something, it improves the whole.

Daniel Serfaty: Okay. So you’re not an engineer, you’re not a software engineer. Many of the solutions you dream up or Courtney dreams up end up in instantiation software. How do you establish a dialogue with folks that are actually about coding and architecting software systems?

Evan Oster: So that’s also a really good question, something that we face every day. And I think what it comes back to is really in understanding and good communication with one another. Relationships are huge, right? So understanding how different people work, how they view problems, how they see things and valuing those differences. And being able to clear out space for them to work as they can work best. At the same time, having a common framework and lexicon for what it is that need to be accomplished. How many times have you heard someone say the word domain? And that means one thing to one person, something to someone else, something to someone else. And so being able to have that common language and framework to operate from really helps to inform that end goal and form the solution.

Daniel Serfaty: Well, we’ll come back again to this dialogue. But Shawn, I know what you do. You’re the executive vice president for strategy at Aptima, but that’s not the only thing you do. It sounds pretty managerial and executive, but actually you’re still scientists. So how do you bring that science into your job?

Shawn Weil: It’s a really good point, Daniel. I think about this in a number of ways. I wear different hats in the company, and I wear different hats professionally. Because I have a corporate role, it allows me to think about human performance in a systemic way using systems thinking. So construed broadly, that could be looking at human performance of teams and how they communicate or how they interact with artificial intelligence. It could be looking at how we bring together measurement from different modalities, observer-based or systems-based. Or it could be understanding the link between the physiological, the behavioral, and the cognitive, and trying to make sense of that.

But the other hat that I wear, that executive hat is the one where I’m helping our engineers and scientists, both Aptima staff and our partners really understand what the end users’ needs are. There’s something intrinsically wonderful about human performance that satisfies the intellectual curiosity of scientists and engineers. But then you need to figure out how to frame that in a way that’s going to be really beneficial societally, and that requires a different perspective. So it’s a pleasure of mine to be able to take on that role, put that hat on and work with our diverse staff to help them help those customers.

Daniel Serfaty: That’s a very good way to describe this constant stitching of ideas into something that not just the market, it’s too abstract, but the human user, the human learner out there needs. So I’m going to ask you Shawn, think of an instance of an aha moment in your professional life when you suddenly realized something new or how to articulate insight into a scientific fact, a project, something, an aha moment?

Shawn Weil: I’ve had a couple of those aha moments. But there’s a common thread in all of them, and that is messiness and complexity. So when I was first exposed to ideas of human cognition and human performance, human behavior, they were very neatly compartmentalized. You had these kinds of behaviors in these situations, this decision-making in these kinds of environments. This kind of communication patterns with these kinds of people, and you could study them independently. When I was in graduate school, I first got exposed to the complexity of just having two people talk to each other and two people trying to coordinate towards a common goal.

So I think an aha moment came early on in my career at Aptima when I was working on a program about command and control, and we were doing some experimentation. And when you start looking at not two people, but just five people in a controlled environment, the amount of complexity, the multiplicity of ways that you can look at human performance, it's just staggering. So how do you then do something useful, collect something useful? It may not be collecting everything. So the aha moment for me in that scenario was really saying to myself, what questions do we need to answer? And from all of the chaos and all of the complexity, do we know how to zero in on that subset that's really going to give us some insight that's going to help improve the performance of that organization?

Daniel Serfaty: That’s interesting because that’s a dimension that all of us had to face quite early in our carrier, you’re right. This notion of not everything is neatly compartmentalized the way it was in the lab in graduate school. And dealing with these, you call it chaos or complexity or messiness is really a skill that eventually we all need to acquire. Courtney and Evan, any aha moment you want to share with the audience?

Evan Oster: I had one for sure pretty early on. So we were at an Army installation, and we were there to help instructors provide tailored feedback to their students. And that was going to be through an adaptive instructional decision tool. And I was conducting a focus group, and I asked the instructors what made their job challenging. And they started off by complaining about the students. They started to tell all these stories about how lazy the students were, and how they constantly make these common mistakes. And I looked at one of the main instructors there and asked, so why do you think they're doing this? And he paused and thought about it, and he said, "Well, it's because the job is really hard." And he gave the example that if you stacked all of their manuals on top of one another, they'd be over six feet tall, taller than most of the trainees who were there.

And then soon some of the other instructors started talking about other things that were challenging and why they're hard. And really, just from one question, the conversation and the culture shifted from complaining to more of a sense of compassion, where they started to put themselves in the students' shoes and were able to resonate with why it's so difficult. For the instructors, I mean, this is easy; they've had lots of practice and they've done it for years. But really what we were able to do, before we even did any software, any development, is set a culture and a framework for why the instructors are there: they're really there to help.

And from that moment forward, it helped shape what that final solution would look like. That was a big aha moment for me: human performance, similar to what Shawn said, is messy. You look at it from so many different perspectives, and it's not clean-cut and clear. You need to go in and feel your way around and figure out what's going on and see what the best path is moving forward.

Daniel Serfaty: Courtney, you got one of those ahas?

Courtney Dean: I have one, but I actually had to relearn it. I remember watching a retired Army operator taking a knee and bringing the soldier down with him. And they had this conversation, it was very quiet and very reserved, and it was technical. He was pantomiming and gesturing with his hands to mimic the use of a rifle. And there was probably a little bit of engineering and physics associated with the drop of the bullet or the rise of the muzzle, et cetera. I don't know, I wasn't totally privy to the words that were coming out of his mouth. But I watched the body language of this settled, isolated conversation. Before that, I had been on a firing range with drill instructors and new privates, or private recruits, where the screaming was only overshadowed by the sound of muzzle fire. Then we started to see drill instructors taking knees next to young privates who were practically quaking in their boots if they didn't hit the target just right.

And you started to see some change in behavior, because they went from, "I'm trying to avoid this screaming, this punishment," to, "I'm starting to get some context and some understanding and some support. And I understand that maybe my inability to hit the target as accurately as I'm supposed to right now is not a personal flaw that I'm never going to be able to get over." And then I constantly forgot that. And one day I had a friend in town and my newly walking one-and-a-half-year-old son, and we were going sledding. And he didn't have very good mittens on, or children can't keep their mittens on, and snow was creeping in between his hands and his sleeves. And I wanted to put him on this sled, because it's going to be so exciting, and he didn't want to. And he started to get scared and started to cry.

And I found myself trying to force this child onto the sled. You just get on the sled, and everything is going to be wonderful. And then I stopped for a second because I was realizing that wasn’t happening. But in my head, the gears were just barely clicking. What is going on here? And my friend who hadn’t had a child yet took a knee and talked to my son and calmed my son down, and he got the situation back under control. And I looked at that and I thought, “Well, I guess I’m one of those old drill instructors right now, and it’s time to become one of those new drill instructors.”

Daniel Serfaty: These are wonderful stories, all three of the stories that you're telling me. They really remind all of us that human learning, which is the other side of human performance, the ability to improve a skill or to acquire a skill, is extremely complex and messy. But at the same time, there is an intimacy to it, between the instructor and the learner, between the tool and the person reading the data, that we as engineers, as designers, as scientists have to take into account. Thank you for sharing those stories; they illustrate, better than any theory, why our job is both so hard but also so rewarding. So Shawn, looking back, why do you think it's important to be able to measure? We go back to that notion of capturing something, of measuring, whether with methods or tools or sometimes just intuition, to measure humans in order to improve their performance in doing their job. Why is measurement important?

Shawn Weil: I’ve thought about that a lot. Courtney was just talking about his children, and I have two children who are school age. And I think about how they’re measured and where measurement is worthwhile and where measurement actually might be a distraction. The reason why you need to measure is because humans have bias. Humans as they are going about their work and trying to accomplish their goals have only so much vision, they have only so much aperture. Especially in training situations, traditionally in the military at least, what they’ve done is they don’t have professional instructors, per se, they have operators who are expert in those domains. And they watch performance, and they cue into the most salient problems or the most salient successes. And they build their training around those narrow bands, that narrow view effects.

So when you do a more comprehensive performance measurement scheme, when you’re measuring things from different perspectives and different angles, especially when you’re measuring aligned to objectives of what the group is trying to accomplish, what you’re doing is you’re enabling instructors to overcome their biases, to think more holistically about what it is they’re trying to give to their trainees, to their students. And to do it in a way that is more empirically grounded and more grounded in the action they’re trying to perfect or improve.

Daniel Serfaty: So Evan, you’re listening to what Shawn is saying now. Can you think of instances when you’ve seen that actually a particular measurement changed the way people talk or people educated? How can we link right now for our audience our ability to measure in sometime precise detail certain aspect of cognition or behavior or decision-making and eventually turn that into an opportunity to train and therefore to improve learning and eventually to improve people performance on their job?

Evan Oster: Yeah. So on measuring human performance, one of the things that I think is critical is being able to challenge the bias, as Shawn was talking through. With the expectations that might be placed on students by one instructor versus another, you're lacking consistency. And there are nuances to the training that we're trying to get a more objective measure of. So when we're looking at how to measure human performance, being able to get concrete, specific, and objective is really critical in getting the training to be well aligned. One thing that I've seen through some of our efforts is we've had a particular course that was being trained in groups, and the instructors lacked the ability to know, if a mistake was made, was it made by one student or another? Was it due to the team dynamics that were there, was it due to a lack of knowledge? When they were training in that group environment, they weren't measuring at a level where they could distinguish between those nuances and those differences.

When we came in and started measuring at a more granular level, we were able to help them disentangle what was happening, when it was happening, and with whom it was happening. And that way, the instructors were able to tailor what they were doing to specific students, at a specific point, in a specific way.

Daniel Serfaty: So you’re in fact making the point that this so-called objectivity, and that’s a topic probably for another podcast about why the objectivity of measurement is not here to tell the teacher or the instructor that they are wrong, but more to augment what they do naturally as teachers by giving them basically a rationale for intervention, for particular instruction, for focusing on a particular competency of the student. One of my early mentors was General Jack Cushman who passed a couple of years ago. He has an old-fashioned, crusty, three-star general in the army who after he retired was actually training other generals to conduct complex operations and giving them feedback.

And he was always exasperated when people would say, "Oh, we did better this time." And he was always asking them, how do you know? How do you know you did better, other than just a good feeling? And he came up with this quote that I really respect a lot, I use it a lot: you can't improve what you don't measure. How do you know? It's such a simple statement, you can't improve what you don't measure. But it's very profound in the way it's changing not only our military training but also education at all levels.

Courtney Dean: Can I expand on that a little bit, Daniel?

Daniel Serfaty: Of course, Courtney, yes.

Courtney Dean: So, three thoughts that I don't have written down, so I'll try to channel them together coherently. Number one is, I know what right looks like, or, alternatively stated, I'll know it when I see it. Practice does not guarantee improvement. That was on a slide that we had for years. And then finally, feedback. So those things link to each other inextricably. The issue that we had, the bias that Shawn was talking about, that you can't improve what you don't measure, is all about that. I know what right looks like, or I'll know it when I see it. If we don't have something, then a subject-matter expert is left to that bias, or we don't have consistency from one biased subject-matter expert to another.

And if you don’t have any measurement, then that practice can be … And trust me, I know this one because I have years of experience in this one. Practicing the wrong things, you don’t miraculously change those things. There’s that critical element that’s missing, and that’s that third bit, which is feedback. By delivering feedback, we have the potential for a subsequent change. And that’s what training is all about.

Daniel Serfaty: We’ll be back in just a moment, stick around. 

Hello, MINDWORKS listeners. This is Daniel Serfaty. Do you love MINDWORKS, but don't have time to listen to an entire episode? Then we have a solution for you. MINDWORKS Minis, curated segments from the MINDWORKS podcast condensed to under 15 minutes each and designed to work with your busy schedule. You'll find the Minis, along with full-length episodes, under MINDWORKS on Apple, Spotify, Buzzsprout, or wherever you get your podcasts.

Courtney, I’d like you and Shawn to expand basically on that idea, but focusing it more on a particular toolkit that was developed at Aptima early on we call Spotlight, which is a kit that includes basically a lot of the science by articulating, what should we measure, what can be measured, and what is measured. And how we went from basically an academy concept of scales and measurement into a practical tool. I would love for you to tell me a little bit about that. What is unique about it, and what did we learn in applying it to different setting environments? So Shawn, and then Courtney.

Shawn Weil: Yeah, I love the Spotlight story. Unfortunately, I wasn’t there at its inception. I suspect if you listened to some of the previous podcasts and listen to Jean McMillan, she’ll tell you the story of resilience that was the origin of the Spotlight application, which at its core seems like a pretty straightforward concept. But in practice, it’s a lot more sophisticated, especially in comparison to what people tend to do. So Spotlight at its core is an electronic observer measurement tool. It’s a way to provide to observer-instructors the means for comprehensive assessment of live activities. So think about it this way. The way it’s been done for years is you have people in a field environment, maybe they’re doing some maneuver on an actual field, maybe they’re doing some pilot training in simulation environments. And you have experts who are watching them, and they’re writing their comments back of the envelope.

Well, back of the envelope only gets you so far. Those biases that we’ve talked about, the inter-rater differences that creep in, they make it so there’s very limited consistency. So enter Spotlight, essentially what we’ve done is put together a set of measures that are comprehensive and aligned to the activities that the trainees are actually going through, then implemented that in an electronic form factor that then affords a bunch of other things. It affords that feedback that Courtney was describing. It allows for aggregation of data. The measures themselves are designed to encourage what we call inter-rater reliability, essentially consistency from one rater, one expert to another. And we’ve seen that really transform the way that training is done in a number of environments that Courtney and Evan have really been in charge of and really pushed forward over the years.
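Inter-rater reliability, the consistency Shawn describes, can be quantified; Cohen's kappa is one common statistic for it. The sketch below is illustrative only, assuming two observers scored the same set of events on a shared rubric; the function, the rating scale, and the example scores are assumptions for this example and are not drawn from Spotlight itself.

```python
# Illustrative sketch: Cohen's kappa, one common inter-rater reliability statistic.
# Assumes two observers rated the same events on a shared rubric; the example
# scores below are hypothetical and not taken from Spotlight.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for agreement expected by chance."""
    assert rater_a and len(rater_a) == len(rater_b), "need paired, non-empty ratings"
    n = len(rater_a)
    # Observed agreement: fraction of events where both raters gave the same score.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal score frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum(counts_a[s] * counts_b[s] for s in counts_a) / (n * n)
    if p_chance == 1.0:  # both raters used a single identical score throughout
        return 1.0
    return (p_observed - p_chance) / (1 - p_chance)

# Two hypothetical observers scoring five trainee actions on a 1-3 scale.
print(cohens_kappa([3, 2, 3, 1, 2], [3, 2, 2, 1, 2]))  # ~0.69, substantial agreement
```

A kappa near 1 means the raters agree far more often than chance would predict; values drifting toward 0 reflect the kind of inconsistency that structured measures are meant to reduce.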

Daniel Serfaty: Well, thank you for giving us a little bit of history. Indeed, Dr. McMillan, our previous chief scientist, was actually at the origin of developing that tool, I believe originally for the Air Force. But Courtney, in one of your many roles you are actually the manager of this product line called Spotlight, and you've seen dozens of instantiations of Spotlight. You also mentioned the F word here, feedback. Tell me some stories about how you've used Spotlight, recently or previously. Spotlight, for our audience, is after all a tablet application that prompts the trainer or the observers to grade the learner or the team of learners according to certain scales that have been established as being essential to their expertise, to their mastery. So tell us some Spotlight stories, especially when it comes to how people use it to provide feedback.

Courtney Dean: I’ve gotten my shoes dirty on a couple of occasions. I’ve been in the woods of Georgia, I’ve sat on flight lines, I’ve hung out and fields next to, I guess, improvised villages or foreign operations villages. And I’ve been in briefings at two o’clock in the morning that extended until three o’clock in the morning after all of those outdoor activities occurred. And in all of those occasions when instructors had Spotlight with them, their ability to communicate to the learner the delta between what is observed and what is expected. And then to elaborate on that with a picture of how to close that delta is far and beyond what I’ve ever seen when I watched the same activities go down with an instructor with an envelope and a pencil.

Spotlight has these two core components that Shawn talked about, and I'm not going to try to re-describe them because Shawn did an excellent job. You've got the measures, and you've got the application that delivers those measures. And when the measures are in the hands of a competent instructor, they're able to make total sense of the student doing the job that they're supposed to be doing. Why was his arm at the wrong angle? Why did the bullet go offline? Why did the tank not make it to the waypoint at the right time? Whatever the context is, they're able to thread together the story that led to the undesirable outcome. And they can pick spots within that timeline, and they can communicate with that student, "Here's where things deviated slightly, and it led to these consequences. Here's where things deviated slightly again as a result of those consequences."

Suddenly the student goes from, "I failed, and I don't know why," to, "I failed, and it's because I made this fundamental error here," or, "I received this incorrect information here, and I operated on the wrong frame of reference." Those pieces of information are critical for that subsequent change in behavior that I think I've repeated two and three times now. Ultimately, the student is now empowered to become better in the long run.

Daniel Serfaty: Thank you for that vivid description, Courtney. Evan, I think in a sense we see here that what science and the judicious use of data enable us to do is not just to provide the movie of what happened, which would be the description of the scenario, of the vignette, of the way the student went through a chapter of instruction, but also some x-ray or hyper vision, if you wish, of that movie that enables the D word, which in this case is the diagnostic that Courtney is talking about. And perhaps that's what science gives us, the ability to see inside and then eventually say, "Yes, you didn't do that as well as you could, but here's the reason why," so a student is able to close that gap. Can you think, in your own development and use of Spotlight, of environments that are sometimes different from the environments that Courtney described? Can you give us some examples of how it was used that way, to provide that secret insight into human behavior?

Evan Oster: Yeah, there’s a couple instances that I can think of. But one in particular is when it comes to receiving that feedback. So it depends on who your trainee or your student is. And there are times where, like Courtney outlined, you have an instructor doing this back of the envelope notes, they provide the feedback. And that leaves the trainee with an option, they can accept it or they can reject it. And oftentimes when you don’t have that environment in that context and the angles, a student is more prone to reject the feedback or to make an excuse for it or whatever it might be. But when using Spotlight, I’ve seen a number of times where that might be the first response.

And then when the student gets the feedback, and the instructor shows them where they might have gone wrong here or there, they are able to accept it and see, oh, my arm was here, or I did do this. And it's that point, in that context, that's concrete and objective. And then they're able to accept the feedback and use the data and the additional context that the instructor can provide to make a better decision next time.

Daniel Serfaty: Evan, you’ve seen that, what you described right now. I would like our audience to visualize, where are you seeing that, in what context? In what domain, to use a word that you like?

Evan Oster: One context is using that in law enforcement, when they've been doing building searches. So there are very specific ways that you can systematically progress through a building and search, and you need to do that methodically. And in order to conduct that in the right way, there's got to be a lot of non-verbal communication, there have got to be a lot of decision points as a team and as an individual. And you have to be constantly aware of your surroundings. A particular example would be doing what's called lasering your partner, where if the path of essentially what your gun is pointing at were to cross over your partner, then that's putting them at risk. It's a safety concern. An instructor might say, "Hey, I saw you laser your partner." And they could say, "No, I didn't." But when using Spotlight and having that video with the feedback, they can show the distinct point when that happened. And then at that point, they can adjust how they are holding their gun or how they move through a space.

Daniel Serfaty: Okay. That’s a good prompt towards the next segment where I would like to explore how we took a lot of wisdom that we learned from the particular domain, whether it’s fast jet flying or a military operation on the ground to another domain when other skills are more important. But Shawn, why don’t you add a little bit texture to what we just heard from Courtney and Evan before we move to that next step?

Shawn Weil: Absolutely. One of the things that I heard Evan say that really resonated with me is this false dichotomy, this false separation between subjective and objective measurement, and the tendency for people to devalue subjective measurement even if it comes from experts. I'll explain it this way. So in some of the military domains where we work, let's say you have people in an aviation trainer, in this multimillion-dollar F-16 simulator. And you're on the computer, you're doing your work in the simulated environment. So the computer can collect all of this information about your position relative to the other planes and where you're dropping your armaments and all of these things. And people value that quantitative information; maybe they over-rely on it in some sense. Because if what you're actually trying to teach in those environments has to do with teamwork or communication or some other behavior that requires interaction, then the way the computer is doing the measurement isn't the right way to collect that data.

So what’s happened in the past is you have those back of the envelope guys providing some feedback, which often gets devalued because it doesn’t have that quantitative shell around it. But what Evan was just describing in law enforcement and what Courtney was describing in the use of structured measurement is now we put some quantitative armor around subjective opinion. It’s not subjective anymore because we’ve given a lot of meat to the ratings that people get. And we could correlate those ratings with video evidence of what they’re doing. So now the subjective becomes really, really powerful. That’s what Spotlight does, it gives you that powerful view that you couldn’t get otherwise.

Courtney Dean: I would like to say for the record, it's only engineers who think that the only way you can do measurement is objectively.

Daniel Serfaty: Let’s not get into the scientific versus engineer dilemma. But rather I think that, Shawn, thank you for clarifying the difference between subjectivity, objectivity as opposed to qualitative, quantitative. And in a sense you can be very subjective, which is not a bad word because sometimes subjectivity include the very expertise of the teacher, which is born out of decades of experience sometime. And so, yes, it has biases, but it has also some profound wisdom with it. Adding a layer of quantitative data to it certainly strengthen it, do not diminish it. And so we have to make the difference between subjective, objective versus qualitative, quantitative.

As we move towards those complex environments, I get the feeling from a lot of your explanations, from a lot of Courtney's descriptions, that those domains are complex; they are not trivial, they are not mild. In a sense, they all have some time pressure involved in them, and high stakes, and different sources of uncertainty, and multiple objectives, and a complexity that makes the job of acquiring expertise in those domains difficult. And it certainly makes the job of the instructor, of developing that expertise in others, even more difficult.

Having talked about all these levels of complexity in the domains where we are applying our human measurement and training science and technology, I was reminded of an episode early in my career when we were tasked to take our tools, our team training tools, our decision-making training tools that we had developed for warfighting, for improving the war-fighting decision-making of our armed forces, and apply them in a mission that was quite different. And I'm talking about Bosnia, about 20-plus years ago, where our war fighters, who were super expert in their domain, suddenly had to change. Not just suddenly, but on a daily basis, they had to migrate between their war-fighting mode, with very nefarious people wanting to do them harm and wanting to do the population harm, and becoming peacekeepers, and almost, at some level, social workers.

And that migration between those very complex skills of war fighting and the very complex skills of peacekeeping was not trivial at all. We were very challenged. We were dealing with very smart soldiers, very smart war fighters. But it was terribly difficult for them to be able to maintain those two minds at the same time. On the very same day they could have two very different missions, one dealing with war fighting, another one dealing with peacekeeping. And they had to switch the way they were making decisions, they had to switch the way they were assessing the situation, they had to switch the way they were evaluating danger. What you are doing right now is taking all these tools and these smarts and these theories of human performance and these sophisticated measurement tools and bringing them into a different domain, which is the law enforcement domain. Shawn, I will ask you first, how did we decide to make that jump? Was there a particular project that led you to lead this initiative to switch from one domain to another?

Shawn Weil: Yeah. I think you point out some critical issues in the way that people have to be dual-hatted when they're in harm's way. The reason we pivoted in some sense to law enforcement had to do with a program that DARPA was running about 10 years ago called the Strategic Social Interaction Modules program. And I won't get into the details of the program itself. But fundamentally, what it was looking at was exactly as you described: are there ways to help Marines and soldiers, who joined their military services because they want to protect the country, and who have now been put into roles of civil government in some sense? It's a very hard transition to make. So one of the things that we looked at was, well, is there a model of competency for people who are put in dangerous situations but have to manage them the way you would in a civil organization?

So that’s when we started looking at law enforcement as that competency model, if you will. Because police officers in the best of circumstances go from situation to situation, to situation that might be completely different. You don’t know what’s on the other side of the door when you knock on the door when there’s been a call for disturbance. It might be somebody who is sick or somebody with mental illness or somebody who is violent, or somebody who has mental illness, or somebody who doesn’t speak English. You don’t know what’s behind door number one, door number two, and door number three. And we found that the very methods we’ve been talking about in Spotlight, in measure development could be used to measure the capability of those peace officers for both the tactical portions of their jobs and the software skills that they need to do for their jobs to manage difficult situations, dangerous situations.

And it’s had this nice feedback effect because some of those same measures could then be used in a number of military contacts where there are analogous activities that need to be accomplished to achieve the goals of the mission.

Daniel Serfaty: Thank you. Courtney and Evan, I'm going to ask you to give me some examples based on your own experience. And I realize that the depth of your experience is primarily in armed forces situations or military situations. But you have experience in the law enforcement domain in which you had not just to use similar tools and then customize your tools, but also to detect that there are some things that actually are different in terms of the skills that we need to coach these folks in. Courtney, you want to jump in, and then Evan?

Courtney Dean: I want to use what Shawn just talked about and characterize it as ambiguity. We had a situation with a Massachusetts police department that was very interested in getting some support in some of their training. And they had a semi-annual field exercise that was a couple of different traffic scenarios. Each one of those traffic scenarios was designed to present an ambiguous situation to the police officer. So going back to what Shawn's talking about, you don't know what's behind door number one. You could expand on that for hours and talk about all the ambiguous situations that a police officer faces. And it's not about knowing how to put somebody in restraints and put them in the car and then fill out some paperwork. It's about dealing with citizens on an everyday basis, about being an advocate, about being a supporter, about differentiating the bad guy from the good guy, or understanding how to keep the bad guy from becoming a worse guy.

So this ambiguity theme existed in all of these different scenarios. And the one that sticks out in my mind the most is the officer pulls up to two vehicles, and both of the drivers, the occupants of the vehicles, are standing out in the street. And they're agitated, they're screaming at each other. Unknown statements are coming out, because the verbal diarrhea is just anger and frustration. So what's the police officer's first move here? Oh, I'm going to put both of these guys on the ground? That's not going to get us anywhere. I'm going to figure out what's going on, defuse the situation, and try to bring these two out of their hostile environment and back into society, and then decide from there whether something needs to be done further from the law enforcement perspective or send them on their way.

And that is a snap decision that the officer needs to make, because he can't sit in his car and think about it for a little while. These two could come to blows any second now. Got to get out of that vehicle, got to move on that situation in the most effective, and let's call it gentlest, way that they know how, so that they don't add fuel to an already sparking fire.

Daniel Serfaty: That’s a very good example of this notion that we used to call, at least in the military version of that, on the peacekeeping side of things, teaching tactical patience, that tactical patience is that tactical passivity. It mean that patience in taking step to gain control on the situation and diffuse it as opposed to being very quick, say on the proverbial trigger and moving in and trying to stop the situation without understanding first the nature of the situation, which is really something for survival and for mission effectiveness. A lot of the war fighters, the military tradition has been training people to act very fast.

Courtney Dean: And we used Spotlight in that particular case, and the measures that we included were a combination of technical measures as well as measures that dealt with this ambiguity. So we might have had a measure that pertained to the physics of parking the vehicle in the correct way, and that's important. But the essence of this scenario is, is the officer understanding the situation, maintaining order without exacerbating the situation? And those are the measures that are the most important to capture, because those are the ones that help the instructor evaluate or deliver that proper feedback. And those were the measures that were the most contentious afterwards for the officers: "Well, now hold on a second here, what did I do wrong?" Or, "I did it this way." And as Evan talked about before, the video showed, well, here's where you demonstrated some bias towards this guy versus that guy, which agitated this guy, et cetera, et cetera. And those measures really do have a powerful impact, because they can support that understanding and that feedback.

Daniel Serfaty: So obviously, Evan, our audience is going to be very sensitized to current events and the different accusations both for and against police behavior. But sometimes, no matter which side of the political spectrum you stand on, you have to realize that law enforcement officers are humans too, and they have their own cognitive and behavioral challenges facing ambiguity, like the rest of us. The question is, can we, through science and technology, train folks to actually make the best out of those situations, to know that there are alternative ways to control the situation without being very fast on the trigger? Evan, tell us a little bit about your own experience. You've worked quite a bit with another region, not the Massachusetts region like Courtney was describing, but central Florida.

Evan Oster: Yes. That makes me think about the training of how to respond in these situations. We can look at training the types of behaviors we want to see. And it isn't always train it once and you know what to do, right? It's more of a constant and persistent thing. So when it comes to anyone doing their annual refresher training, that's helpful, but it's likely not going to have long-term effects and impacts. And what we can do using science and technology is take performance that is being captured, measure and assess that, and have that be something that happens on a more frequent basis, which over time can help to correct certain behaviors, adjust maybe core decision-making skills, and present other options or ways of handling situations.

And like we’ve been talking about, the other side of the training coin is learning. And so being able to correct where each person is maybe weak in areas and be able to tailor that feedback and over time provide the opportunity for that training to take different forms. And it’s something that’s not just a one and done or once a year, but being able to gather data and provide that feedback in meaningful ways.

Daniel Serfaty: Shawn, you want to add to that?

Shawn Weil: Yes. When Courtney and Evan were talking about ambiguous situations, it reminded me of some work we were doing in the development of Spotlight measures at a police academy on the West Coast. And this was the capstone activity for these police cadets. They went from station to station experiencing something, and they were being measured on how they would perform. Some of the things they were being trained on were very tactical. But one of them I think really exemplified this ambiguity. This is what happened. The police officer would roll up in his cruiser to an apartment building to find that there was a man on the third story holding a child and threatening to drop the child from the third story of this building. And they had to decide how to handle this.

They’re too far away to be physically there. If they tried to approach him physically, the man might drop the child. The child, by the way, was being played by a doll in this situation, so there was nobody who was actually in danger. But in spite of that, the adrenaline, the anguish, the ambiguity of the situation, trying to deal with use of force, and should we shoot this guy, and what should we do, was almost overwhelming for some of these cadets. I saw people burst into tears over the implications of whatever their actions might be. In this particular case, what was happening was there was one moment in the staged scenario where the man put the child down and came to the edge.

And at that point, use of force is warranted, it’s legal by the rules in that state. So as we were thinking about the measures that we could develop to try to capture their performance, there was a set, as Courtney was saying, of the mechanics of police work. And then there was another set that had to do very explicitly with the decision-making process. Sometimes there isn’t a perfect solution, but learning how to make decisions in critical situations is part and parcel of police work in those most critical times. The development of measures for that purpose will go a long way in ensuring that you’re clear-headed and you have a way to break situations down and make decisions that are going to save lives.

Daniel Serfaty: Thank you Shawn for this very vivid description. Indeed, I think that the audience can imagine themselves, what would they do in those cases? And it is not always about what we hear in the popular press about systemic bias or other things like that that may exist. There are some fundamental human behaviors and fears and dealing with ambiguity at work here, in which I think we as human performance experts could help quite a bit. And Courtney, this one is for you because you’ve probably seen more instantiations of the application of tools like Spotlight, but not just Spotlight, in different domains. If I ask you to envision success for Spotlight and associated digital tools in the particular domain that is so much on the mind of the United States’ public right now, the law enforcement domain, and envision success of the use of systematic measurement, the use of science and technology, in one year, in three years, tell me a little bit how that would work.

Courtney Dean: So my associates in my former line of work would take issue with what I say here just a little bit, and I’ll clarify it. The job of a police officer can be decomposed into what we call in my industry a job analysis. And for the most part, that job analysis can relate to different departments, different places. There was a highly litigious environment where I was at before, which is why those folks would take issue with that. But you can essentially break the job down to these tasks and these knowledge, skills, and abilities. What we could see is a library of measures that pertain to those tasks and knowledge, skills, and abilities. And in particular, it’s a usable library that focuses on some of those less kinetic, less technical skills and more of those soft skills that are so essential for avoiding getting into a kinetic situation. Those decision-making skills, the ability to demystify the situation, the ability to deescalate.

And there are a variety of things that can be put into place that will measure an officer’s willingness and then ability to do that, and then the effectiveness of their effort. Those measures could be applied to many, many different situations and scenarios with only maybe a little bit of tweaking here and there. So in the future, we can have Spotlight with that library of measures and a couple of configurable features or functions of Spotlight. And what I mean by that is that sometimes we apply our measures in a video tagging context. So we tag the video that we’re seeing with those behaviors and those ratings.

Sometimes we use a typical survey type of method, sometimes we have a combination of the two. But it ultimately comes down to what is the best method for capturing data that is going to benefit the learner. We have this library of measures that are focused on the things that are so critical for an officer to achieve effectiveness. And we use it in environments like the one that Shawn just described. And we lead these officers towards the desired end state through repetition, feedback, and a deeper understanding of their role within a potentially deadly situation.
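
To make that concrete, here is a minimal, purely illustrative sketch of what a configurable measure library with two capture methods (video tagging and survey ratings) might look like in code. The class and field names are hypothetical assumptions, not Spotlight’s actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class CaptureMethod(Enum):
    VIDEO_TAG = "video_tag"   # rating attached to a timestamp in a video
    SURVEY = "survey"         # rating collected on a form after the scenario


@dataclass
class Measure:
    """One observable behavior in the library, e.g. de-escalation."""
    name: str
    skill_area: str            # e.g. "decision making", "de-escalation"
    scale: tuple = (1, 5)      # rating range used by the observer


@dataclass
class Observation:
    measure: Measure
    rating: int
    method: CaptureMethod
    video_timestamp_s: Optional[float] = None  # only used for VIDEO_TAG captures
    note: str = ""


@dataclass
class MeasureLibrary:
    measures: list = field(default_factory=list)

    def by_skill_area(self, area: str) -> list:
        """Pull the subset of measures relevant to one soft-skill area."""
        return [m for m in self.measures if m.skill_area == area]


# Example: tagging a de-escalation behavior seen 90 seconds into a body-cam clip.
library = MeasureLibrary([Measure("Maintains calm tone", "de-escalation"),
                          Measure("Explains decision to subject", "decision making")])
obs = Observation(library.by_skill_area("de-escalation")[0],
                  rating=4, method=CaptureMethod.VIDEO_TAG, video_timestamp_s=90.0)
```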

Daniel Serfaty: Thank you Courtney for, I would say, an optimistic vision, because that’s our role: to look at the degree to which those methods, especially if they are really future oriented, if they lean forward, the degree to which those tools and the scientific method can help. I know that one of the more pioneering thoughts in this domain was actually promoted by the Air Force. Dr. Wink Bennett of the Air Force Research Lab has promoted the notion of mission essential competencies, for example, which is a catalog almost exactly the way you describe, Courtney. A catalog of those essential skills such that if you could progress on those skills, you would progress on the quality of your mission accomplishment.

And if we could do that through a fresh look into what’s considered police work in its different instantiations, including the ambiguity of the situation, I think we may have a chance. Shawn, what do you think? I’m asking you the same question. Given the current situation with the police departments in this country, what are your hopes and fears about your ability and that of your team to use science and technology to help? And Evan, I’d like you to chime in too about it.

Shawn Weil: I’m really excited about this, Daniel. In spite of the truly tragic circumstances that we’ve seen across the country, there are some positive trends that I believe are going to revolutionize performance, not just in law enforcement, but more generally. So there are several of these. Number one, there’s the ubiquity of measurement. I think back to the start of Spotlight, when the idea of doing performance assessment still had some novelty. Now, we all walk around with cell phones in our pockets that can measure us in half a dozen ways: where we are, who we’re talking to, the accelerometers in these devices. So if you start to extrapolate to law enforcement, where you’re wearing body cams and you’ve got dash cams and you might have some wearable physiological sensors, you might be able to use artificial intelligence to do some of the assessment that’s currently done by expert observers at a much larger scale.

And use machine learning then to start to develop a more comprehensive understanding of what right looks like, even in low frequency situations. If we can, as Evan was describing, change this from a one and done situation to a continuous performance assessment and feedback environment, the sky’s the limit to the ways in which these professions can improve over time. So in 2030, we might be talking about a situation where the performance of our law enforcement officers is continuously refined and aligned with societal expectations for public safety.

Daniel Serfaty: Wow, that’s a very ambitious vision, and I hope it is realized, Shawn. Thank you. Evan, you are back to your roots as a teacher, trying to impart some skills and knowledge to your students. Can you also describe for me a vision over the next 10 years or so of how these tools, not just the scientific tools, but also the technology tools, the ability to capture artificial intelligence, the ability to look through large amounts of data, will enable us to transform learning and teaching and training in the law enforcement domain?

Evan Oster: Certainly. So I always like to start with the end in mind: to what end, and why would we do this? And I think if we’re looking at law enforcement operating as public servants and wanting to improve the safety of our communities and of our country, one of the main driving factors there is policy. So before making any changes to policy, and I think this has been a common thread throughout this whole discussion and conversation, we have to ask: how do we capture data? Is it validated? What types of measures are we looking at? And ultimately, the thing that I think everybody can agree upon is that data has a major influence over how those decisions are made regarding policy. So when we look at collecting this data, then we can start making decisions on how to use it based on what we’re seeing.

In the future, like Shawn said, in 2030, I can see starting to blur the line between training and the field. So as video is collected and data is generated, we could start using AI, computer vision, to automatically start assessing what is being viewed in the video, whether it’s body cam or dash cam, and be able to either prompt, or collect and assess, and provide that feedback as a debrief or hotwash. Then that’s something that you’re continuously able to monitor and do, in addition to providing it back to the department to see where the areas are that really need to be targeted as far as future training goes. So I think we can start getting more of this training on demand and have it be custom tailored to each officer and to each department.
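
As a rough illustration of the kind of pipeline Evan is describing, here is a minimal sketch in which body-cam or dash-cam footage is scored by a hypothetical computer-vision model and rolled up into officer- and department-level feedback. The function names, measures, and scores are placeholder assumptions, not a description of any fielded system.

```python
from collections import defaultdict
from statistics import mean


def score_clip(clip_path: str) -> dict:
    """Placeholder for a computer-vision/ML model that scores one video clip
    against a handful of behavioral measures (0.0 to 1.0 each)."""
    # A real system would run detection and activity-recognition models here.
    return {"de_escalation": 0.7, "communication": 0.8, "tactical_positioning": 0.6}


def hotwash_report(officer_id: str, clip_paths: list) -> dict:
    """Aggregate clip-level scores into a short after-action summary for one officer."""
    per_measure = defaultdict(list)
    for path in clip_paths:
        for measure, score in score_clip(path).items():
            per_measure[measure].append(score)
    summary = {m: mean(scores) for m, scores in per_measure.items()}
    summary["weakest_area"] = min(summary, key=summary.get)
    return {"officer": officer_id, "summary": summary}


def department_rollup(reports: list) -> dict:
    """Average each measure across officers to see what department training should target."""
    per_measure = defaultdict(list)
    for report in reports:
        for measure, score in report["summary"].items():
            if measure != "weakest_area":
                per_measure[measure].append(score)
    return {m: mean(scores) for m, scores in per_measure.items()}


# Example: one officer's shift footage feeding a hotwash, then a department view.
reports = [hotwash_report("officer_17", ["bodycam_001.mp4", "dashcam_044.mp4"])]
print(department_rollup(reports))
```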

Daniel Serfaty: Thank you, Evan, that’s a very ambitious future. But you know, 10 years is an eternity in our world. And Courtney, maybe you can also imagine what the life of a police officer or a police trainee, pick one, could be, a day in the life in 2030, when they have many of the tools that Shawn and Evan are dreaming about. Give us your take on that. Are you optimistic?

Courtney Dean: So I’m going to try to tiptoe around the realm of politics here and avoid getting too far down one path. One thing that I like is a little bit of the dialogue about taking some of the officer’s responsibilities off their plate. Are officers social workers? Are officers therapists? Are officers peacekeepers? Are officers custodians of our slums, of our suburbs, of our downtowns? There may be a few too many roles that have been bestowed upon police officers inadvertently, informally, accidentally, that we could relieve them of and hand back to the professionals who want to do that work and are best suited and trained to do it.

And so maybe in 2030, we see that we have a little bit more democratic funding of our schools, and we have a little bit more judicious allocation of support resources for our underprivileged. And as a result, we have police officers who are not doing a wellness check, for instance, by themselves, but are actually accompanied by a social worker. They’re executing warrants when it’s a known dangerous offender, and not executing them the same way when the danger is potential but unknown or when it’s, for instance, a psychiatric issue.

We know that that person has a psychiatric issue, so we brought the right equipment, the right people into the equation. And the reason that we know that is because people are getting help, and we’re diagnosing and recognizing when folks have a psychiatric issue. And maybe in 2030, when somebody does slip through the cracks, the officers are stepping back and getting the right professional into that environment before things escalate.

Daniel Serfaty: Thank you. And indeed, that vision … And I appreciate you defending yourself against being political; you’re not being political here. You’re basically being a scientist, looking at how the complexity of the job may require, through that analysis that you promoted earlier, a compartmentalization of the different job components, of the expertise that we are asking our police officers to have. And maybe by distributing that responsibility, this is the beginning of a solution. Well, I want to conclude by thanking Shawn and Evan and Courtney for their insights. They’ve dedicated their lives to studying and also improving human performance, through learning and through compassion, basically. Thank you for sharing those insights with us, not only the scientific and the technical, but also the visionary perspective that you shared with our audience.

Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast and tweet us @mindworkspodcast, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Inc. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.

Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. Today, I have two phenomenal guests who are going to tell us stories about the future and also about the present of the much maligned collaboration between humans and robots. As we’ve talked about on past episodes of MINDWORKS, books and films often paint a dystopian picture, from I, Robot to Skynet, while in the real world, human workers are concerned about losing their jobs to robot replacements.

But today, I hope to put some of these fears to rest. We are going to talk with two MIT professionals with advanced degrees, who recently did something very interesting together. They are the characters of a new book, What To Expect When You’re Expecting Robots: The Future of Human-Robot Collaboration. Well, in addition to being the coolest title published this year in any domain, it’s a really important book that I recommend to all of you to read about the reality of introducing robots and artificial intelligence in our daily lives and at work.

So without further ado, I want to introduce my first guest, Professor Julie Shah, the associate dean of social and ethical responsibilities of computing at MIT. She’s a professor of aeronautics and astronautics and the director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. She’s expanding the use of human cognitive models for AI and has translated her work into manufacturing assembly plants, healthcare applications, transportation, and defense.

Professor Shah has been recognized by the National Science Foundation with a Faculty Early Career Development Award and by the MIT Technology Review on its 35 Innovators Under 35 list. That’s a great list to be on because you stay under 35 forever. Her work has received international recognition from, among others, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, and the International Symposium on Robotics. She earned all her degrees in aeronautics and astronautics and in autonomous systems from MIT.

My other guest, and as you will soon learn, Julie’s partner in crime and in books too, is Laura Majors. Laura is an accomplished chief technology officer and author. As CTO at Motional, Laura leads hundreds of engineers in the development of a revolutionary driverless technology system. She began her career as a cognitive engineer at Draper Laboratory, where she combined her engineering and psychology skills to design decision-making support devices for US astronauts and military personnel. After 12 years with the company, she became the division leader for the Information and Cognition Division. During this time, Laura was recognized by the Society of Women Engineers as an emerging leader. She also spent time at Aria Insights specializing in developing highly advanced drones. Laura is also a graduate of the Massachusetts Institute of Technology. Laura and Julie, welcome to MINDWORKS.

Julie Shah: Thank you so much.

Laura Majors: Thank you. It’s great to be here.

Daniel Serfaty: This is, as you know, a domain that is very close to my heart. This is where I spend most of my waking hours, and maybe my sleeping hours, thinking about this notion of collaboration between humans and intelligent machines, and you’ve been at it for a long time. Can you say a few words to introduce yourselves? But specifically, of all the disciplines in engineering you could have chosen at institutions as prestigious as Georgia Tech and MIT, why this domain as a field of endeavor? Julie, you want to tell us first?

Julie Shah: As long as I can remember, from when I was very small, I was interested in airplanes and in rocket ships, and it was always my dream to become an aerospace engineer. I wanted nothing more than to work for NASA. I have a different job today, but that was my initial dream when I went off to college to study aerospace engineering at MIT. When I got into MIT, everybody said, “Oh, what are you going to study there?” I said, “Aerospace engineering,” and everybody would say, “Well, that’s so specialized. How do you know at such a young age that you want to do something so specialized?” And then you get to MIT and you begin a program in aerospace engineering. And the first thing you learn is that it’s a very, very broad degree. Aerospace engineering is a systems discipline. And then everybody begins to ask you, “What are you going to specialize in as an aerospace engineer?”

And the thing that caught me early was control theory. I really enjoyed learning about the autopilots of aircraft, how you make a system independently capable. And then interestingly for my master’s degree, I pursued research in human factors engineering. So you make a system independently capable of flying itself, but it’s never really truly independently capable. It has to be designed to fit the pilot like a puzzle piece. And then that expanded design space of the human-machine system really captivated me.

For my PhD, I went on to specialize in artificial intelligence, planning and scheduling, so moving from lower level control to how do you make a system more capable of acting intelligently and autonomously to make higher level decisions at the task level, but you still have this challenge of how you design that capability to fit the ability of a person to monitor it, to coach it through steps, to catch it when something isn’t going right. And that master’s degree in human factors engineering has really been the center of my interest, putting the human at the center and then designing the technology from there. And so you never truly want to make something that operates independently. It operates within a larger context. And that’s part of the aerospace endeavors, teams of people and complex socio-technical systems coming together to do amazing things. And so that’s how I ended up working in this space.

Daniel Serfaty: That’s great. And indeed, I hear words like humans and intelligence and all kinds of things that usually we don’t learn in traditional classes in engineering. Laura, you ended up in human-robotic collaboration. You took a slightly different path and you are more a leader in industry. How did you get there? Why not just be a good engineer in building bridges or something?

Laura Majors: Yeah. I’d always been interested in robotics and space and found math very easy and beautiful. But I also had this side interest in how people think and human psychology. And so when I went to college, I wasn’t sure which path to go down. I will say my parents were pushing me down the engineering path. I was struggling because I also had this interest in psychology and I thought they were orthogonal. And it wasn’t until my campus tour at Georgia Tech, where someone pointed out a building and said, “That’s the engineering psychology building where they figure out how do you design a cockpit, how do you help a pilot control this complex machine?”

That was really the spark of inspiration for me. Of course, it wasn’t until my junior year after I got through all my core engineering courses that I was able to take a single class in engineering psychology, or it was called, I think, human-computer collaboration at the time. I was fortunate to take that course with Amy Pritchett, who many of you probably know, and I was really interested in that topic. And so I approached her and asked if I could do some research in her lab as an undergrad, and really through that got exposed to what this field was all about. And so I followed in her footsteps going to MIT and humans and automation and really focused on that area for my graduate work.

Also, for me, I always wanted to build things that made a difference. And so seeing products through to the end was really, again, part of my passion. And so I saw that opportunity at Draper to really work on these important critical projects. And then that took me into the commercial world as well as I worked on drones before this and now the opportunity to really figure out how do we build robot cars that are going to work in our world and that are going to blend with human society in a way that’s safe and effective.

Daniel Serfaty: That’s amazing, this fascination that some of us have had, despite the choice of wanting to play with airplanes and other things like that, with the human as maybe the most fascinating, yet the most mysterious, part of the system. One day, somebody needs to do an anthropological study about why some of us decided to migrate into that area and some others did not. But Laura, since you have the microphone, can you tell our audience what you do in your day job? What do you do when you go to work as the CTO of a really high-tech company, looking at driving and other things like… What is it that you do?

Laura Majors: Yeah, so some days I do podcasts, like today. I have a large engineering team, so I have hundreds of engineers. I don’t get to go deep anymore into the hands-on software myself, but I work with my team. So we’re working on everything from the hardware design. I have to worry about the schedule. What are the key dependencies across my hardware teams and my software teams? What’s the system architecture that’s going to enable us to have the right sensors and the right compute to be able to host the right algorithms in making the right decisions?

So how I spend my time is a lot of meetings. I spend time with my leadership. I also do a lot of technical reviews, new architecture designs, new results. Yesterday, I was out at the track riding in our car. I try to get in the car every couple of weeks when we have new software builds so I can see it tangibly myself. I also present to our board frequently. So I have to share with them progress we’ve made, risks that we worry about, challenges that we face and how we’re approaching them. So there’s a lot of preparation for that.

And of course, I’m working with the executive team here with our CEO, with our CFO, our general counsel, our head of HR to make sure that all the pieces are coming together that we need from a technology standpoint to be successful. I have to wear a lot of hats. I would say maybe 70% of those are technical and probably more than you would expect are not technical, but they’re all a part of making sure we have the right team, the right process, the right tools we need to be successful in creating this very complex system.

Daniel Serfaty: You seem to enjoy the role of the CTO, which is really a very coveted role in high-tech. For our audience who doesn’t know, the chief technology officer makes all those connections between the hardware and the software and the business and finance and the different components, and at the same time quite often needs to be the deep engineer or the deep scientist, because you deal with such advanced technology. Julie, what do you do? You have at least three jobs I know of at MIT. You’re associate dean of that new big school of computing, you’re a professor of aero/astro, you work in the lab, you’re managing your own lab. Tell our audience what you do on a typical day, if there is such a thing.

Julie Shah: I’m a professor and researcher and I’m a roboticist. So I run a robotics lab at MIT. When you’re doing your PhD, usually you’re sitting in computer science or AI or other disciplines. Usually, you spend a lot of time sitting very quietly at your desk coding. When I turned over to becoming a professor, the better part of 10 years ago, I described the job as a job where you have half a dozen different jobs that you just juggle around. And that’s a part of the fun of it. So what I do is work on developing new artificial intelligence models and algorithms that are able to model people, that are able to enable systems, whether they be physical robots or computer decision support systems, to plan to work with people.

So, for example, I develop and I deploy collaborative robots that work alongside people to help build planes and build cars in industry. I work on intelligent decision support for nurses and doctors and for fighter pilots. I specialize in how you take the best of what people are able to do, which vastly surpasses the ability of computers and machines in certain dimensions, and how you pair that with computational ability to enhance human work and human well-being.

Daniel Serfaty: But you’re a professor with a million different projects and a lot of students. I assume the people actually performing that work of modeling, as you said, of building, et cetera, are your students or other professors. What’s a typical day at the lab when you’re not in the classroom teaching?

Julie Shah: It’s all of the above. I run a lab of about 15 grad students and post-docs, and many more undergrads engage in our lab as well. I think over the last 10 years, we’ve had over 200 undergrads that have partnered with the grad students and post-docs in our lab, and they primarily do the development of the new models, the testing, bringing people in to work with our new robots and see how and whether the systems work effectively with people. We do have many different types of projects in different domains.

But one of the most exciting things about being a professor is that the job description is to envision a future and be able to show people what’s possible, 10-plus years down the road. So we’re shining a light, as we advance this technology, on what it can do and the pathway to the ways it can transform work a decade-plus down. But I am very driven by more immediate, near-term applications. And so many of my students will embed in industry and hospitals and understand work today to help inspire and drive those new directions.

The key is to develop technologies that are useful across a number of different domains. And so that’s how it all becomes consistent. Whether it’s a robot that’s trying to anticipate what a person will do on an assembly line, or a decision support system anticipating the informational needs of nurses on a hospital floor, many of the aspects of what you need to model about a person are consistent across those. And whether it’s physical materials being offered or informational materials, there’s consistency in how you formulate that planning problem. And so that’s the joy of the job: working in that intellectual, creative space to envision what these new models and algorithms will be and how they can be widely useful.

Daniel Serfaty: I’m glad you mentioned the joy of the work because what comes across when listening to Laura and to you, Julie, is you really enjoy what you’re doing. There is a true joy there, and that’s very good to hear. There is also a duality in what both of you described in the work of today. I suppose both as CTO and as professor, you have to envision the future too. Actually, that’s your job, as you said. And it’s reflected in the book that you just co-wrote, to remind our audience, What To Expect When You’re Expecting Robots: The Future of Human-Robot Collaboration. So my question is, what prompted you to collaborate on this book, other than just having more joy in your work, which is going to make a lot of the people in the audience very jealous? But why this book, Laura? What prompted you to collaborate with Julie on this?

Laura Majors: There was a conference that we were both at where I was asked to talk on some of the commercial early use of robots and some of the challenges there, and some of the things we learned in industrial applications that crossed over and may help in commercial applications. And after giving that talk… It was short, it was, I don’t know, a 15-minute talk, but it was received really well. I think it sparked some discussion. I was at Draper at the time, and so some of my staff and her students were already working on some projects together. But actually, it was only at this conference that she and I met for the first time in person, when we were working just across the street from each other. We both knew of each other very well.

And so when we got back from the conference, we got together over lunch and we were talking and connecting on many topics, but I think that was the moment. Julie was considering writing a book at the time. After this talk, I had started thinking about writing a book, and we both felt like we don’t have time to write a book. But we thought, “Hey, if we do it together, then we can motivate each other like a gym buddy.” And also, we saw that we each had these very different perspectives, from all of the great theoretical work that Julie was working on in the lab, and me from the practical, more industrial, product-oriented work. And so we decided to start pursuing this and we wrote up a proposal and we started working with editors, and a concept came together that became our book.

Daniel Serfaty: Practicing collaboration or writing about collaboration, look at that.

Laura Majors: Yes, and it was a great joy. We kept waiting for the process to get hard and painful and it never was, I think because of that collaboration.

Julie Shah: Everything was good so far, we knew what happened next, and it just continued to be a joy all the way through the end.

Daniel Serfaty: Julie, I assume you have fully endorsed Laura’s version of events here?

Julie Shah: Yeah, exactly right. It’s exactly right. I think the only thing I’d have to add is that we’re bringing very complementary perspectives from industry and academia. But as you can probably also infer just based on the conversation so far, there’s a core, there’s an orientation towards technology development that we share, coming from the human needs perspective and how these systems need to integrate with people and into society. Laura gave this amazing talk. It was a 15-minute talk. I had been drawing out many of the same themes year after year in a course I teach on human supervisory control of automated systems, where I say, “Look at aerospace, look at the new applications coming and the challenges we’re going to sit with.” And afterwards I was like, “You captured everything so perfectly.” And then a great friend and mentor of ours said in passing, “That would be a really great book.”

Daniel Serfaty: You see, both of you are going to be able to retire the day you can design us a robot that can make the same recognition, that can recognize that there is an impedance match between itself and the human it’s supporting. But let’s jump into that, because I want to really dig deeper right now into the topic of human-robot collaboration. And my question, and any one of you can answer, is: humans have been working with machines and computers for a while. Actually, Laura, you said at Georgia Tech you walked into a human-machine interaction class a couple of decades ago, or at least that. So we’ve been teaching that thing. Isn’t human-robot collaboration just a special case of it? And if not, why not? What’s unique here? Is there a paradigm change, and because of what? Any one of you can pick up and answer.

Laura Majors: I remember one of my first projects at Draper was to work on an autonomous flight manager for returning to the moon. I was so surprised to find a paper that was written at the time of Apollo. I think Larry Young, the MIT emeritus professor, was one of the authors. And even back then, they were talking about how much control do you give to the guidance computer versus the astronauts. So you’re right, this discussion and debate goes way back. And how is it different now? I think it’s only gotten harder because machines and robots have become more intelligent, and so they can do more. And so this balance of how do you figure out what it is they should do? How are they going to be designed to be that puzzle piece, as Julie described, to fit with the people that they interact with or interact around?

Julie Shah: I fully agree with that. And maybe the additional thing to add is I don’t think human-robot interaction is a special case or a subset of human-computer interaction. There are different and important factors that arise with embodiment and thinking about interaction with an embodied system. Maybe to give two short examples of this: I’m not a social robotics researcher. I started my career working with industrial robots that work alongside people in factories. They are not social creatures; they don’t have eyes, they’re not cuddly. You don’t look at them and think of them as a person.

But we have this conference in the field, the International Conference on Human-Robot Interaction. And up until lately, when it got too big, it was a single track conference. There’s a foundation of that field that comes from a psychology background. And so in this conference, you’d watch all these different sorts of papers from all different sorts of backgrounds. I remember there was this one paper where they were showing results of differences in behavior when a person would walk by the robot, whether the robot tracked with its head camera as the person walked by, or whether the robot just stared straight ahead as the person walked by. And if the robot tracked the person as the person walked across the room, the person would take this very long and strange arc around the robot.

I just remember looking at that and thinking to myself, “So I’m working on dynamic scheduling.” Like on a car assembly line, every half second matters. A half second will make or break the business case for introducing a robot. I say, “Oh, it’s all about the task.” But if you get these small social cues wrong, if you just say, “Ah, maybe the robot should be social and watch people around it as they’re working,” that person now takes a second or two longer to get where they’re going, and you’ve broken the business case for introducing your robot.

And so these things really matter. You really need to understand these effects, and they show up in other ways too. There is an effect on trust related to the embodiment of a system. So the more anthropomorphic a system is, or if you look at a physical robot versus computer decision support, the embodied system and the more anthropomorphic system can engender inappropriate trust in the system. You might engender a high level of trust, but it might not be appropriate to its capabilities. And so while you want to make a robot that looks more human-like and looks more socially capable, you can actually be undermining the ability of that human-machine team to function by engendering an inappropriate level of trust in it. And so that’s a really important part of your design space, and embodiment brings additional considerations beyond an HCI context.

Daniel Serfaty: So what you’re sending us is a warning: do not… think first before you design a robot or robotic device in a way that looks or sounds or behaves or smells or touches more like a human. It’s not always a good thing.

Julie Shah: Yeah. Every design decision needs to be intentional, with an understanding of the effects of that design decision.

Daniel Serfaty: Now I understand a little more. Is the fact that robots, unlike classical machines of the ’70s, say, have the ability to observe and learn, and as a result of that learning, change, is that also changing the way we design robots today? Or is that something more for the future, this notion of learning in real time?

Julie Shah: So there’s a few uses of machine learning in robotics. One category of uses is that you can’t fully specify the world or the tasks for your robot in advance. And so you want it to be able to learn to fill in those gaps so that it can plan and act. And a key gap that’s hard to specify in advance is, for example, the behavior of people, the various aspects of interacting with a person; a human is like the ultimate uncontrollable entity. And it’s been demonstrated empirically in the lab that when you hard-code the rules for a system to work with a person, or for how it communicates with a person, the team will suffer because of that, versus an approach that’s more adaptable, that’s able to gather data online and update its model for working with a person.

And so this new ability of machine learning, which has really transformed the field over the last 5 to 10 years, certainly changes the way we think about designing robots. It also changes the way we think about deploying them, and it also introduces new critical challenges in testing and validation of the behavior of those systems, new challenges related to safety. You don’t get something for nothing, basically.
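
A small illustration of the kind of online adaptation Julie contrasts with hard-coded rules: instead of assuming a fixed time for a human teammate to finish a step, the robot keeps a running estimate that it updates from observation. This is a toy sketch under that assumption, not the lab’s actual algorithm, and all numbers are made up.

```python
class HumanStepTimeModel:
    """Running estimate of how long a human partner takes on one task step.

    A hard-coded rule would use a single fixed constant; here the estimate
    is updated online with an exponential moving average of observed times.
    """

    def __init__(self, initial_estimate_s: float, learning_rate: float = 0.2):
        self.estimate_s = initial_estimate_s
        self.learning_rate = learning_rate

    def observe(self, measured_time_s: float) -> None:
        """Blend each new observation into the current estimate."""
        self.estimate_s += self.learning_rate * (measured_time_s - self.estimate_s)

    def robot_should_wait(self, elapsed_s: float, slack_s: float = 2.0) -> bool:
        """Keep waiting while the human is still within the expected time plus slack."""
        return elapsed_s < self.estimate_s + slack_s


# Example: the model starts at 10 s and adapts as this particular person
# consistently takes longer than the default assumption.
model = HumanStepTimeModel(initial_estimate_s=10.0)
for observed in [12.0, 13.5, 12.5]:
    model.observe(observed)
print(round(model.estimate_s, 2))   # estimate has drifted up from 10 s to about 11.3 s
```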

Laura Majors: On that point of online learning, machine learning is, I would say, core to most robotic systems today in terms of their development. But online learning and adaptation is something that has to be designed and thought through very carefully, because of this issue that most robotic systems are safety-critical systems. And so you need to go through rigorous testing for any major change before fielding that change as a new software release or software update, for instance. I think some of that online learning and adaptation can also create some unexpected interaction challenges with people. If the system they’re using is changing underneath them, then it can have negative impacts on that effective collaboration.

Daniel Serfaty: Yes, that makes total sense. We’ll get back to this notion of mutual adaptation a little later, but your book is full of beautiful examples, I find them beautiful, of both the current state of affairs and the desired state of affairs, because many people in the field tend to oversell the capability of robots, not because they’re lying, but because they aspire to it and sometimes confuse what is with what could be or will be. You describe different industries in the book, with beautiful examples. Laura, I would like you to take an example, perhaps in the world of transportation in which you live, to show what we have today and what we will have in the future in that particular domain, whether it’s autonomous cars, which everybody obviously is talking about, or any other domain of your choice. And Julie, I’d like you to do the same after that, perhaps in the manufacturing or warehousing domain.

Laura Majors: In our book, we talk a lot about air transportation examples and how some of the innovation we’ve seen in that space can also yield more rapid deployment and improvement for ground transportation robotics. One example that I really love is what’s called TCAS, the traffic collision avoidance system, where the system is able to detect when two aircraft are on a collision course and can recommend an avoidance maneuver. I think the beauty is in combining that system with… There’s air traffic control, which is also monitoring these aircraft, and then there are, of course, the pilots on board. And when you look at air transportation, there have been these layers of automation that have been added, and not just automation within the cockpit, but automation across… I mean, that’s an example of automation across aircraft. That’s really enabled us to reduce those risks where errors can happen, catastrophic errors.

And so I think we see some of that happening in ground robotics as well. If you imagine that TCAS is a little bit like the aircraft talking to each other, then in the future we could imagine robots talking to each other, to negotiate which one goes first at an intersection, or when it’s safe for a robot to cross a crosswalk. When we look into the future, at how we enable robots at scale, it’s that type of capability that we’ll need to make it a safe endeavor.
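
To make the analogy concrete, here is a minimal sketch in the spirit of TCAS-style pairwise conflict detection (closest point of approach), not the actual certified TCAS logic, applied to two ground robots with a simple right-of-way rule. The thresholds and the tie-break rule are illustrative assumptions.

```python
import math


def closest_approach(p1, v1, p2, v2):
    """Return (time_s, distance_m) of the closest point of approach
    for two agents moving at constant velocity in 2D."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]          # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]          # relative velocity
    speed_sq = vx * vx + vy * vy
    t = 0.0 if speed_sq == 0 else max(0.0, -(rx * vx + ry * vy) / speed_sq)
    dx, dy = rx + vx * t, ry + vy * t
    return t, math.hypot(dx, dy)


def resolve_conflict(id1, p1, v1, id2, p2, v2,
                     horizon_s=8.0, min_separation_m=2.0):
    """If the pair is predicted to come too close within the horizon,
    tell one robot to yield (here the higher ID, a purely arbitrary tie-break)."""
    t, dist = closest_approach(p1, v1, p2, v2)
    if t <= horizon_s and dist < min_separation_m:
        return {"conflict": True, "yielding_robot": max(id1, id2)}
    return {"conflict": False, "yielding_robot": None}


# Example: two sidewalk robots converging on the same crosswalk corner.
print(resolve_conflict("R1", (0, 0), (1.0, 0.0),
                       "R2", (8, 8), (0.0, -1.0)))
# -> {'conflict': True, 'yielding_robot': 'R2'}
```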

Daniel Serfaty: So you introduced this notion of a progressive introduction of automation and robotics, not a step function but more of a ramp, in which the system eventually evolves to something like the one that you described. What’s the time horizon for that?

Laura Majors: I think you have to get to a core capability, and then there are improvements beyond that that we learn from things that happen, not necessarily accidents, but near accidents. That’s the way the aviation industry is set up. We have this way of collecting near misses, self-reported incidents that maybe didn’t result in an accident, but could inform a future automation improvement or procedure improvement. If we just look at air transportation as an example, this automation was introduced over decades, really, and so I think that’s maybe one of the misconceptions, that it’s all or nothing. We can get to a robotic capability that can be safe, but maybe has some inefficiencies or certain situations it can’t handle, where it stops and needs to get help from maybe a remote operator. We learn from those situations and we add in additional… Again, some of this automation may not even be onboard the robot. It may be across a network of robots communicating with each other. These types of capabilities, I think, will continue to enhance the effectiveness of robots.

Daniel Serfaty: So the example that Laura just gave us is maybe not mission critical, but lives are at stake when people are flying if you misdirect them. There are situations that people may not think of as dangerous, but that can become dangerous because of the introduction of robots, perhaps. Julie, you’ve worked a lot on understanding even what happens when I press the Buy Now or Order Now button on Amazon, what happens in the chain of events that eventually leads the package to show up on your doorstep the morning after, or other situations in the manufacturing plant in which robots on the assembly lines interact with humans. Can you pick one of those examples and do a similar thing? What do we have today, and what will we have once you’re done working on it?

Julie Shah: Sure. Yeah. In manufacturing, maybe we can take the example of automotive manufacturing, building a car, because most of us probably think of that as a highly automated process. When we imagine a factory where a car is built, we imagine the big robots manipulating the car, building up our car. But actually, in many cases, much of the work is still done manually in building up your car. It’s about half the factory footprint, and half the build schedule is still people mostly doing the final assembly of the car, the challenging work of installing cabling and insulation, very dexterous work.

So the question is, why don’t we have robots in that part of the work? Up until very recently, you needed to be able to cage and structure the task for a robot, and move the robot away from the person and put a physical cage around it for safety, because these are dangerous, fast moving robots. They don’t sense people. And honestly, it’s hard, and a lot of it is manual work. Same thing with building large commercial airplanes. There are little pieces of work that could be done by a robot today, but it’s impractical to carve out those little pieces, take them out, structure them, and then cage a robot around them to do it. It’s just easier to let a person step a little bit to the right and do that task.

But what’s been the game changer over the last few years is the introduction of a new type of robot, a collaborative robot. It’s a robot that you can work right alongside without a cage, relatively safely. So if it bumps into you, it’s not going to permanently harm you in any way. And so what that means is now these systems can be elbow-to-elbow with people on the assembly line. This is a very fast-growing segment of the industrial robotics ecosystem. But what folks, including us as we began to work to deploy these robots a number of years ago, noticed is that just because you have a system that’s safe enough to work with people doesn’t mean it’s smart enough to get the work done and add value, to increase productivity.

And so, just as a concrete example, think of a mobile robot maneuvering around a human associate assembling a part of a car, and the person just steps out of their normal work position for a few moments to talk to someone else. And so the robot that’s moving around just stops. It just stops and waits until there’s a space in front of it for it to continue on to the other side of the line. But everything is on a schedule. So you delay that robot by 10 seconds, the whole line needs to stop because it didn’t get to where it needed to be, and you have a really big problem.

So there’s two key parts of this. One is making these [inaudible 00:31:52] systems smart enough to work with people, looking at people not merely as obstacles, but as entities with intent, being able to model where they’ll be and why. A key part of that is modeling people’s priorities and preferences in doing work. And another part of that is making the robot predictable to a person. So the robot can beep to tell people they need to move out of the way. Well, actually, sometimes people won’t, unless they understand the implication of not doing that. So it can be a more complex challenge than you might initially think as well.
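
A toy sketch of why that 10-second delay matters and why modeling human intent helps: the robot can compare a predicted human-clearance time against the slack left in its schedule and decide whether to wait or replan, instead of blindly stopping. The activity labels, numbers, and function names are illustrative assumptions, not a real line-control system.

```python
def predict_clearance_time_s(person_state: dict) -> float:
    """Crude intent model: someone chatting tends to block the aisle longer
    than someone reaching for a part. A real model would be learned from data."""
    return {"reaching_for_part": 3.0, "chatting": 15.0}.get(person_state["activity"], 8.0)


def decide_action(person_state: dict, schedule_slack_s: float) -> str:
    """Wait only if the human is expected to clear the path within the
    remaining schedule slack; otherwise replan a detour so the line keeps moving."""
    expected_wait = predict_clearance_time_s(person_state)
    return "wait" if expected_wait <= schedule_slack_s else "replan_detour"


# The robot has 10 s of slack before its delay would stop the whole line.
print(decide_action({"activity": "reaching_for_part"}, schedule_slack_s=10.0))  # wait
print(decide_action({"activity": "chatting"}, schedule_slack_s=10.0))           # replan_detour
```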

So the key here is not just to make systems that… The way this translates to the real world is that we increasingly have systems that are getting towards safe enough to maneuver around people. There are still mishaps, like security guard robots that make contact with a person when they shouldn’t, and that’s very problematic. But we’re moving towards a phase in which these robots can be safe enough. Making them safe enough, though, does not mean they’re smart enough to add value and to integrate without causing more disruption than benefit. And that’s the leading edge of what we’re doing in manufacturing, and some of that can very well translate as these robots escape the factory.

Daniel Serfaty: These are two excellent examples that show the fallacy of just partitioning the task space into this is what humans do best, this is what robots do best, let’s design them and let’s hope for the best. I love that at some point in your book you talk about dance, you use the word dance, which I like a lot in this case because, as the saying goes, it takes two to tango. The fact is that in order to be a great tango team, you not only have to be an excellent dancer by yourself. And certainly the traditional roles of the two partners in that dance are different. However, you need to anticipate the moves of your partner to be a great dancer, and in tango it’s particularly difficult.

You write about that, and you take us on a journey toward this notion of harmony, of the collaborative aspect of the behavior. Laura, in your world, is that as important, this notion of a robot having almost an internal mental model of the human behavior, and of the human in the loop having some kind of internal understanding of what the robot is and is not capable of doing?

Laura Majors: Yeah, absolutely. We have people who ride in our cars, who take an autonomous ride as passengers. So they have to understand what the robot is doing and why, and how do I change what it’s doing if I want it to stop earlier, or I want to know why it got so close to a truck, or does it see that motorcycle up ahead? There are also pedestrians who will need to cross in front of robotaxis and need to know, is that car going to stop or not? So our vehicles have to be able to communicate with pedestrians and other human drivers in ways that they can understand. We have a project we call expressive robotics that’s looking at different ways you can communicate with people, and again, using mental models they already have, rather than… You see a lot of research around flashing a bunch of lights or having some display, but is there something that’s more intuitive and natural?

In some of our studies, we discovered that people often use the deceleration of the vehicle as one indicator. So should we start the stop a little more abruptly, and maybe a little earlier, to indicate that we see you and we’re stopping for you? Another cue people use is sound, the screeching of the brakes. So when we stop, should we actually amplify the screeching sound? That’s something that we work on. And then the third class of users, or of people in our integrated system, that we think about are remote operators. So if a car gets stuck, let’s say it comes up to an intersection where the traffic light is out and there’s a traffic cop directing traffic, a remote operator needs to take over control and have some ability to interface with the car and with the situation. It’s definitely an important part of autonomous vehicles.
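
The deceleration cue Laura describes can be illustrated with the basic constant-deceleration relation a = v^2 / (2d): braking earlier but over a shorter distance front-loads the deceleration a pedestrian sees and feels. This is only a physics sketch with made-up comfort limits and parameter names, not Motional’s actual motion planner.

```python
def required_deceleration(speed_mps: float, stopping_distance_m: float) -> float:
    """Constant deceleration needed to stop from a given speed over a given distance:
    a = v^2 / (2 d)."""
    return speed_mps ** 2 / (2.0 * stopping_distance_m)


def expressive_stop(speed_mps: float, distance_to_crosswalk_m: float,
                    comfort_limit_mps2: float = 3.0) -> dict:
    """Pick a stopping point short of the crosswalk so the initial braking is
    noticeably firmer (an intent signal), while staying within a comfort limit."""
    signal_distance = 0.6 * distance_to_crosswalk_m          # stop well before the line
    decel = required_deceleration(speed_mps, signal_distance)
    if decel > comfort_limit_mps2:                            # too harsh: ease off
        signal_distance = speed_mps ** 2 / (2.0 * comfort_limit_mps2)
        decel = comfort_limit_mps2
    return {"stop_in_m": round(signal_distance, 1), "decel_mps2": round(decel, 2)}


# 10 m/s (~36 km/h), crosswalk 30 m ahead: brake at ~2.8 m/s^2 and stop 18 m out,
# instead of coasting to a gentle stop right at the line.
print(expressive_stop(10.0, 30.0))
```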

Daniel Serfaty: That’s interesting because at first you only imagine the proverbial user, but in a large system, or a system of systems the way you describe it, there are all kinds of stakeholders that you have to take into account in the very design of the vehicle itself.

Laura Majors: That’s right. Julie and I, in the book, call this other set of people bystanders. These are people who may not even realize whether a car is a human-driven car or a robot. The car may be far enough away or angled in a way that you can’t see whether there’s a person in the driver’s seat. And so these people don’t necessarily know: what are the goals of that robot? Where’s it going? What is it doing? How does it work? What are its blind spots? And so I think there’s a lot of work there to figure out how you can effectively communicate with those bystanders, who, again, know nothing about your system, and be able to interact in a safe way with them.

Daniel Serfaty: That’s fascinating because it’s almost about amplifying an interaction that you wouldn’t normally make if you were a car, in the sense that because you’ve adapted in a certain way, you have to exaggerate your signals somehow. We’ll be back in just a moment, stick around. Hello, MINDWORKS listeners. This is Daniel Serfaty. Do you love MINDWORKS, but don’t have time to listen to an entire episode? Then we have a solution for you, MINDWORKS Minis, curated segments from the MINDWORKS Podcast, condensed to under 15 minutes each and designed to work with your busy schedule. You’ll find the Minis along with full-length episodes under MINDWORKS on Apple, Spotify, Buzzsprout, or wherever you get your podcasts.

Julie, what do you think are the remaining big scientific or technological hurdles for the next generation of robots, in the sense that, since you’re working with students and working in a lab, you have the luxury of slow experimentation and grading semester after semester, maybe a luxury Laura doesn’t have in her world? If you had some wishes for the next generation of robots, would they be more socially intelligent, more emotionally intelligent, more culturally aware, more creative? What kind of qualities would you like eventually to be able to design into those robots of the future?

Julie Shah: Well, we definitely need the systems to be more human-aware in various ways; treating humans as more than obstacles is a good starting point. And then once you go down that path, what is the right level at which to model a person? What do you need to know about them? And then with collections of people, the norms, the conventions really do become important. So that’s really just at its beginning. Being able to learn norms and conventions from relatively few demonstrations or observations is challenging, or to be able to start with a scaffold and update a model that the system has in a way that doesn’t take thousands or hundreds of thousands or even millions of examples.

And so one of the technical challenges is that as machine learning becomes more critically important to deploying these systems in less structured and more dynamic environments, it’s relatively easy to lose sight of what’s required to make those systems capable. You look at the advances today, systems that are able to play various games like Go, and how they’re able to learn. This requires either collecting vast amounts of labeled data, in which we’re structuring the knowledge of the world for the system through those labels, or a high fidelity emulator to practice in. And our encoding of that emulator never truly mimics the real world, and so you have to work out what translates and what needs to be fixed up relatively quickly.

Many of our advances in perception, for example, are in fields where it’s much easier to collect these vast amounts of data and it’s easier to tailor them for different environments. If you look at what’s required for deploying these systems in terms of understanding the state of the world and being able to project, we don’t have data sets on human behavior. And human behavior changes in ways that are tied to a particular intersection or a particular city when you’re driving or when you’re walking as a pedestrian, and so that transfer problem becomes very important for a safety-critical system operating in these environments as well.

And so our own lab has a robust research program in what I call the small data problem. Everybody’s working in big data and machine learning. If you work with people, you live in a world of small data, and you begin to work very hard to gain the most you can out of whatever type of data it’s easy for people to give. And labels are not easy, but there are other forms of high-level input a person can give to guide the behavior of a system or guide its learning or inference process, paired with small amounts of labeled data.

And so we use techniques like that to be able to back out or infer human mental models, human latent states that affect human behavior. As a very concrete example of that, for a very simple system, imagine a subway going up and down a line. Whether it goes up and down the line in Boston or New York, the behavior of the subway is the same. But in Boston, we say it’s inbound and outbound from some arbitrary point called Park Street in the middle of the line. And in New York, we say uptown and downtown based on when it gets to the end of the line and switches. It’s sort of a two-state latent state that we hold to describe and understand that system.

But as a person who grew up in New Jersey and then moved to Boston, that switch can be very confusing. But if a person is able to give a machine just the change point in their own mental model, even if they can’t use words to describe it, I can say the behavior of the subway switches at this point, when it moves through Park Street. Or the behavior of the subway in my mental model switches at the end of the line, at this point. That’s actually enough for a machine to lock in the two latent states that actually correspond to the human’s mental model of the behavior of that system. And so these are real technical challenges, but ones that we can formulate and address, and we can make these systems vastly more capable with relatively little data and with very parsimoniously gathered human input. And so I think there’s a really bright future here, but it’s about framing the problem the right way.
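
To make the subway example concrete, here is a minimal sketch of how a single human-provided change point can split a stream of observations into two latent states, after which simple per-state statistics can be fit. It is a toy illustration of the idea under those assumptions, not the lab’s actual inference algorithm.

```python
from statistics import mean


def label_latent_states(positions, changepoint):
    """Assign latent state 0 or 1 to each transition, flipping state every
    time the trajectory crosses the human-provided change point.

    positions  : subway position along the line at each timestep (floats)
    changepoint: where the person says the meaning "switches"
                 (e.g. Park Street in the middle of the line)
    """
    labels, state = [], 0
    for prev, curr in zip(positions, positions[1:]):
        crossed = (prev - changepoint) * (curr - changepoint) < 0
        if crossed:
            state = 1 - state          # the rider's mental label flips here
        labels.append(state)
    return labels


# Positions along a line from 0 (one end) to 10 (other end); Park Street at 5.
trajectory = [2, 3, 4, 6, 7, 9, 8, 6, 4, 3]
labels = label_latent_states(trajectory, changepoint=5)

# With latent labels in hand, fit something simple per state, e.g. mean step direction.
deltas = [b - a for a, b in zip(trajectory, trajectory[1:])]
for s in (0, 1):
    state_deltas = [d for d, lab in zip(deltas, labels) if lab == s]
    print(f"state {s}: mean step {mean(state_deltas):+.2f}")
```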

Daniel Serfaty: Laura, in your world, if you had one wish that would simplify, that would create a leap in the systems that you are designing, what particular characteristics, and I am hesitant to call it intelligence, but let's say social, cultural, creative, emotional components of the robot side of the equation would you wish for?

Laura Majors: One way I think about it is, how do we create intelligence that can be predicted by other humans in the environment? And so I think that's really the leap. We talk about this some in our book. Do you have to change the fundamental decision-making of the robot in order for its behavior to be understood and predicted by the people who encounter it? I think that's a really big gap still today. I think back to some of my early work in autonomous systems, talking with pilots in the military who flew around some of the early drones like Predator and others, and they said the behavior of those systems was just so fundamentally different from a human-piloted vehicle that they would avoid the airspace around those vehicles, give them lots of spacing and just get out of town.

And then Julie described, in the manufacturing setting, that these industrial robots were safe and could be side-by-side with people, but weren't smart and weren't contributing as well as they could be. So if we have that on our streets and our sidewalks, these systems that behave in ways we don't understand and that aren't able to add value to the tasks that we're doing every day, whether that's delivering food to us or getting us somewhere safely but quickly, I think that's going to be highly disruptive and a nuisance, and it's not going to solve the real problems that these robots are designed or intended to solve. I think there's an element of predictive intelligence.

Daniel Serfaty: I like that, predictive intelligence. It's been said that in our domain, in the domain of human systems, quite often big leaps, big progress, have unfortunately come after big disasters. The Three Mile Island nuclear accident in the '70s, for example, prompted people to rethink how to design control rooms and human systems. Some accidents with the US Navy prompted the rebirth of the science of teams, and so on. With robots, inevitably, in the news we hear more about robots when they don't work and when there is an accident somewhere. Can you talk about these notions and how perhaps those accidents make us become better designers and better engineers? Laura?

Laura Majors: Yeah. It was a major accident that first led to the creation of the FAA. There was a mid-air collision that occurred prior to that moment in time. Our airspace was mostly controlled by the military. Flying was more recreational; it wasn't as much of a transportation option yet. But there were at least two aircraft that flew into the same cloud over the Grand Canyon. And so they lost visibility, they couldn't see each other, and they had a mid-air collision. And that really sparked this big debate and big discussion around the need for a function like the FAA, and also for major investment in ground infrastructure to be able to safely track aircraft, to see where they are and predict these collision points. That's also when highways in the sky were created to enable more efficient transportation in our skyways in a way that was safe. So we definitely have seen that play out time and time again.

Another really interesting phenomenon is that as you look at the introduction of new technology into the cockpit, such as the glass cockpit or the flight management system, with each introduction of these new generations of capability there was actually a spike in accidents right after the introduction of the technology, before there was a steep drop-off in accidents and an improvement in safety. And so there is this element of: anytime you're trying to do something really new, it's going to change the process, it's going to change the use of the technology. There may be some momentary regression in safety that is then followed by a rapid improvement that is significant. So we have seen this, again, in many other domains. I think that, unfortunately, is a little bit inevitable when you're introducing new complex technology: there will be some unexpected behaviors and unexpected interactions that we didn't predict in our testing, through our certification processes and whatnot.

Daniel Serfaty: So that gives a new meaning to the word disruption. I mean, it does disrupt, but out of the disruption something good comes up. Julie, in your world, do you have examples of that, where the introduction of robotic elements or robotic devices actually caused worrisome accidents that eventually led to improvements?

Julie Shah: I can give you two very different examples, but I think they're useful as two points on a spectrum. There are a few people killed every year by industrial robots, and it makes the news and there's an investigation, much like what we talk about in aviation. So a common theme is that a key contributor to accidents is pilot error. But when you do an investigation and understand all of the different factors that lead to an incident or even a fatality, there is something called a Swiss cheese model: many layers with holes in them have to align for you to get to that point where someone is really set up to make that mistake that results in that accident.

And when we look at industrial robots, when something goes wrong, oftentimes you hear the same refrain, and it'll be with standard industrial robots. So, for example, someone enters a space while it's operating and they're harmed in that process. And then you look at it and you say, "Well, they jimmied the door. They worked around the safety mechanism. So that's their fault, right? That's the person on the factory floor's fault for not following the proper usage of that system."

And you back up one or two steps and you start to ask questions like, "Why did they jimmy that door?" It's because the system didn't function appropriately and they had to be going in and out in order to reset stock for that robot. And why weren't they going through the process of entirely shutting the robot down? Because there's a very time-consuming process for restarting it, and they're on the clock and their productivity is being monitored and assessed. You put all these factors together and you have the perfect storm that is, predictably, with some large N, going to result in people dying from it.

It can't just be fixing it at the training level, or fixing the manual, or putting an extra asterisk in the manual like "don't open the cage while the system is in operation." I think this just points to one of the key themes that we bring up in the book, which is the role of designing across these layers, but also the role and opportunity that intelligence in the systems provides you as an additional layer, not just in execution, but at all the steps along the way. A very different example that comes from the research world is related to trust, inappropriate trust or reliance on robot systems. Miscalibrated trust in automation is something that's been studied for decades in other contexts, in aviation and industrial domains. And you might ask, "Does that end up having relevance as we deploy these systems in everyday environments?"

There's this fascinating study done a few years ago at Georgia Tech, where they looked at the deployment of robots to lead people out of a simulated burning building, so a fire in a building. The alarm was going off, they put smoke in the building, they trained the bystanders in the operation of the robot system in advance, and half the participants observed the robot functioning very well. It could navigate, it could do its job. The other half directly observed the system malfunctioning, going in circles, acting strangely. And then when they put people in that building, even the ones that had observed the robot malfunctioning moments before followed that robot wherever it took them through the building, including when the robot led them to a dark closet with clearly no exit.

And this might sound funny, but it's not funny, because it's consistent with a long history of studies and analyses of accidents in aviation and other domains, of how easy it is to engender trust in a system inappropriately. This is something that's very important in that particular example of a robot leading you through a building, but also think about cars like Teslas, and being able to calibrate a person's understanding of when they need to take over with that vehicle, what it senses about its environment and what it doesn't. And so these are cautionary tales from the past that I think have direct application to many of the systems we're seeing deployed today.

Daniel Serfaty: Sure. I believe the miscalibrated trust problem has the additional complexity of being very sensitive to other factors like culture, like age. I'm not talking about cultures with [inaudible 00:51:05], but even local cultures may trust the machines more, and maybe to a fault overtrust the machines, more so than other populations. I think that creates a huge challenge for the future designers of these systems, because they have to be adapted to factors for which we usually do not design properly.

Maybe on the other side, I don't want to sound too pessimistic about accidents, even though the lesson, as both of you pointed out, is that those accidents, even the ones that sometimes involved the unfortunate loss of life, lead to leaps in technology in a positive way. But if you had to choose a domain right now where this teaming of humans and robots has the most impact, whether economic impact or health impact or by any other measure, what would that be? Healthcare, defense, transportation, the one that has the good story, not the accident story now. Laura, can you think of one?

Laura Majors: I think if you look at defense and security applications, you can find some great examples where robots help by going places we don't want people to go. So if you think of bomb disposal robots, for example, keeping people out of harm's way so that we can investigate, understand what's happening, and disarm without putting a person in harm's way. There are also other defense applications where we're able to have autonomous parachutes that can very precisely land at a specific location to deliver goods, food, to people who need it. There are different drone applications where we can get eyes on a situation, on a fire, to understand hotspots and be able to attack it more precisely.

I think those are some good examples. And that, to me, is one of the reasons why I'm so drawn to autonomous cars: this is a case where many could argue that people are not very good drivers. There are still a lot of accidents on our roadways, and so there's a great opportunity to improve that safety record. And if we look at what happens in air transportation, it's such a fundamentally different safety track record, one that we hope to achieve on our roadways through the introduction of automation and robotics.

Daniel Serfaty: What a wonderful reason to invest in that industry. I hadn't thought about it that way; the social impact and the greater good, not just the convenience aspect, is key. Julie, what's bright on your horizon there? What do you see in robotic applications that are really already making an impact, especially when there is a human-robot collaboration dimension?

Julie Shah: Another one that comes to mind is surgical robotics. We've seen this revolution over the past number of years in the introduction of robots in the operating room, but much like the robots used for dismantling explosive devices, these robots are really being directly controlled at a low level by an expert who's actually sitting physically in the same operating room. And nonetheless, you see great gains from that in some contexts. So, for example, rather than doing a laparoscopic surgery, which you can imagine is like surgery with chopsticks, that's going to be very hard. There's a lot of spatial reasoning you have to do to be able to perform that surgery, and a lot of training required to do it. Some people are more naturally capable of that than others, even with significant amounts of training.

For example, a system like the da Vinci robot gives surgeons their wrists back remotely, so they don't have to do chopstick surgery anymore. And so it actually enables many surgeons not fully trained up in laparoscopic surgery to be able to do a surgery that would have otherwise required fully opening a person up. And so you see great gains in the recovery time of people. Or surgery on the eye: if you can remove the tremors, the very fine tremors that any human has, to allow for surgical precision, that's very important in that field.

One of the commonalities between some of the applications Laura gave in bomb disposal and the surgical application is that these are systems that are leveraging human expertise and guidance; they're not employing substantial amounts of intelligence. But as a random person in the field pointed out to me a number of years ago, what are you doing when you put a surgical robot in the operating room and put the surgeon a little further away in the room? You've put a computer between that surgeon and the patient.

Now, when we put a computer between the pilot and the aircraft, it opened up an entirely new design space for even the type of aircraft we could design and field. For example, aircraft that have gains in fuel efficiency but are inherently unstable, such that a human on their own, without computer support, couldn't even fly them. This is a very exciting avenue forward as we think about these new options of a computer at the interface: how we can leverage machine learning and data, and how we can employ these to amplify human capability in doing work today.

Daniel Serfaty: Work today, that's a key phrase here, work today. I'm going to ask you a question. I don't even know how I would answer if people asked me that question. You are here helping drivers and fighter pilots and surgeons and all these people with robotic devices, changing their lives, in fact, or changing their work. Can you imagine, in your own work as CTO or as professor, a robotic device that could change that work for you in the future? Have you thought about that?

Julie Shah: I have thought about this quite a lot actually, [crosstalk 00:56:32] quite a lot.

Daniel Serfaty: Maybe you thought it may be a fantasy, I don’t know but-

Julie Shah: And with my two- and four-year-old, I spend endless amounts of time picking up and reorganizing toys just to have it all undone again. I think one of the exciting things is framing this problem as a problem of enabling better teamwork between humans and machines, or humans and robots. And Daniel, this goes back to your work from a long while ago, which inspired parts of my PhD, on coordination among pilots and aircraft: effective teamwork behaviors, effective coordination behaviors, are critical in the safety-critical contexts where the team absolutely has to perform to succeed in their tasks, but good teamwork is actually good teamwork anywhere. So you remove the time criticality: if you are an effective teammate, if you can anticipate the information needs of others and offer it before it's requested, if you can mesh your actions with theirs, then that good teamwork translates to other settings.

My husband is actually a surgeon, and when I was working on my PhD, I used to point out to him how he was not an effective teammate. He would not anticipate and adapt, and he still makes fun of me to this day: you're a surgeon, you need to anticipate and adapt. So good teamwork in cooking together in the kitchen, that same ability translates there: being able to hold a high-quality mental model of your partner, understand their priorities and preferences, that translates to many other domains. And so by making our teamwork flawless in these time-critical, safety-critical applications, we're really honing the technology to make these systems even more useful to us in everyday life as well.

Laura Majors: Yeah, and as a CTO, a lot of what we strive to do is data-driven decision-making about our technology: how it's performing, areas where it's not meeting the standards, the simulation, testing at scale. There definitely have been many advances in those areas, but when I think about how robotics and automation could help a CTO be better, I think, "Yeah, there are some parts of my job that you could automate. Could you close the loop on finding problems and identifying teams or subsystems that have gaps that we may not realize until later in the test cycle? Could we learn those things earlier, identify them, and have a dashboard that shows us where there may be lurking problems so we look at them sooner?"
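
As a rough, hypothetical sketch of the kind of dashboard Laura imagines, here is a tiny Python example that flags subsystems whose failure rate in recent test runs exceeds a threshold. The subsystem names, data, threshold, and the flag_subsystems helper are all invented for illustration; this is not any company's actual tooling.

```python
# Hypothetical sketch only: flag subsystems whose recent failure rate in
# testing exceeds a threshold, so a dashboard can surface lurking problems
# earlier in the test cycle. Names, data, and threshold are invented.
from collections import defaultdict

# (subsystem, passed) results from an imagined nightly test run
results = [
    ("perception", True), ("perception", False), ("perception", False),
    ("planning", True), ("planning", True),
    ("controls", True), ("controls", False),
]

def flag_subsystems(test_results, max_failure_rate=0.25):
    """Return subsystems whose failure rate exceeds the chosen threshold."""
    totals, failures = defaultdict(int), defaultdict(int)
    for subsystem, passed in test_results:
        totals[subsystem] += 1
        if not passed:
            failures[subsystem] += 1
    return {
        name: failures[name] / totals[name]
        for name in totals
        if failures[name] / totals[name] > max_failure_rate
    }

print(flag_subsystems(results))  # {'perception': 0.666..., 'controls': 0.5}
```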

Daniel Serfaty: No, I agree. Since last year, I've been instilling in my own company this philosophy of the new generation called eating your own dog food, basically: let's try those things that we are trying to sell to our customers on ourselves first, so that we can feel that pain before the customer does. But that would be an example: let's try to help the CTO, the CEO, with a dashboard and see whether or not we can actually make a difference. I think it's important we understand that at that intimate level. Julie, I know that part of your job as associate dean of the school of computing is to consider, or to worry about, the ethical and societal dimensions of introducing automation, artificial intelligence, and computing and robotic devices into our lives. What are you worried about? What's the worst thing that can happen with introducing these new forms of intelligences, some of them embodied, into our lives?

Julie Shah: There's a lot to worry about, or at least there's a lot I worry about. I was delighted to take this new role as associate dean of social and ethical responsibilities of computing within MIT's new Schwarzman College of Computing. I was predisposed to step into the role because much of my research has been focused around aiming to be intentional about developing computing that augments or enhances human capability rather than replacing it, and thinking about the implications for the future of work: what makes for good work for people? So it's not about deploying robots in factories that replace people or supplant people, but how do we leverage and promote the capabilities of people. That's only one narrow slice of what's important when you talk about social and ethical responsibilities.

But the aspects that worry me are the questions that are not asked at the beginning, and the insight and the expertise, the multidisciplinary perspectives, that are not brought to the conception and design stage of technologies, in large part because we just don't train our students to be able to do that. And so the vision behind what we're aiming to do is to actively weave social, ethical, and policy considerations into the teaching, research, and implementation of computing. A key part of that is to innovate and try to figure out how we embed this different way of thinking, this broadening of the languages our students need to speak, into the bread and butter of their education as engineers.

On the teaching side, our strategy is not to give them a standalone ethics class. Instead, we're working with many, many dozens of faculty across the institute to develop new content as little seeds that we weave into the undergrad courses they're taking in computing, so the major machine learning classes, the early classes in algorithms and inference, and show our students how this is something that's not separate, not an add-on that they think about later to check a box, but something that needs to be incorporated into their practice as engineers.

And so it's applied, almost like a medical ethics type of thing. What is the equivalent of a medical ethics education for a doctor, for a practicing engineer or computer scientist? And by seeding this content through their four years, we essentially make it inescapable for every student that we send out into the world, and we show them, through modeling, through the incredibly inspiring efforts of faculty who at a different stage in their careers also work to bridge these fields, how they can do it too. A key part of this is understanding the modes of inquiry and analysis of other disciplines, and just being able to build a common language to leverage the insights of others beyond your discipline, to even just ask the right questions at the start.

Daniel Serfaty: I think this is phenomenal. By introducing this concept to our engineers and computer scientists today, we're going to create a new generation of folks that are going to, as you say, ask many questions before jumping into coding or jumping into writing equations, and understand the potential consequences or implications of what they're doing. That's great. Rather than worrying like crazy about Skynet and the invasion of the robots, I think it's a much better thing to understand this introduction of new intelligences, in the plural, into our lives and into our work, and to think about it almost like a philosopher or a social scientist would think about it. That's great.

Laura, I want a quick prediction, and then I'm going to ask both of you for some career advice, not mine, perhaps mine too. Laura, can you share with the audience your prediction? You've been in different labs and companies and you're lecturing all over the world about this field. What does human-robot collaboration look like in three years, and maybe in 15 years?

Laura Majors: That's a big question. I know it's a broad question too, because there are robots in many different applications. Yeah, we've seen some really tremendous progress in factory and manufacturing settings and in defense settings. I think the next revolution that's going to happen, and really why we wrote the book the way we did and when we did, is in the consumer space. So we haven't really seen robots take off there. There are minor examples. There's the Roomba, which is a big example, but it performs very limited tasks. We're seeing robot lawnmowers, but I think the next big leap is going to be seeing delivery robots, robotaxis, this type of capability, start to become a reality, not everywhere, but I would say in certain cities. I think it's going to start localized and with a lot of support in terms of mapping and the right infrastructure to make that successful.

I think that's the three-year horizon. In the 10-year horizon, you start to see these things scale and become a little more generalizable and applicable to broader settings, and again, start to be more flexible to changing cities, changing rules, and these types of things that robots struggle with. They do very well with what we program them to do. And so it's us, the designers, who have to learn and evolve and figure out how we program them to be more flexible, and what some of those environmental challenges are that will be especially difficult when we move a robot from one city to another, whether it's a sidewalk robot or a robotaxi.

But after the deployments in a few years, when we start to see these things in operation in many locations, then we'll start to see how we pick that robot up and move it to a new city, and how we can better design it to still perform well around people who have different norms, different behaviors, different expectations of the robot, and also where there are different rules and other kinds of infrastructure changes that may be hard for robots to adapt to without significant technical changes.

Daniel Serfaty: Thank you. That's the future I personally am looking forward to, because I think it will make us change as human beings, as workers, as executives, as passengers, and that change I'm looking forward to. My last question has to do with our audience. Many people in the audience are young people who are either finishing high school or in college, and they hear these two super bright and super successful professional engineering women. And you painted a fascinating domain that many people do not fully understand, that blend of human sciences and engineering sciences. What advice would you have for a young person, man or woman for that matter, maybe just trying to choose which direction to pick in college, or even which college to pick? MIT is not the answer, by the way. Do you mind spending a few seconds each on some career advice? Laura, you want to start?

Laura Majors: Yeah. I think you can't go wrong with following your passion. So find ways, I think, early on to explore and try out some different areas. If you're in high school and you're trying to figure out what college you want to go to, visit, take tours, look at a range of different options so you can really understand the space and see what you really connect to and where your passion lies. If you're in college, do internships, do research in labs, find ways to get exposed to things to see, again, what's going to spark that interest.

My freshman summer in college, I did a civil engineering internship. I thought I was going to build bridges and it didn't click for me. It wasn't interesting. I'm glad I did it, it was an interesting experience, but it wasn't something I wanted to do for the rest of my life. Try things out, explore early. And then if something clicks, pursue it. Once I found the path I wanted to go down, I never looked back. And so I'd say try to find the intersection of where your passion connects with something that will have an impact. And then if you deliver on that, the sky's the limit, more doors will open than you expect and you'll go far.

Daniel Serfaty: Thank you, Laura. Julie, your wisdom?

Julie Shah: One piece of advice I give to my undergrad, or in the past my freshman, advisees is... Just to tell a story, I had one freshman advisee who was trying to think through what classes they should take and actually asked, "What is the one class I take that sets me up to be successful for the rest of my career? I knew what that class was in high school, and I took that class and I got here. So what is that one class I have to take here?" I think the most important thing to know is that once you get to that point, there's no external metric for success that anyone is defining for you anymore. You define your own metric for success, and you can define that any way that you find to be fulfilling, that is fulfilling for you. Actually, only you can define it.

And so, to Laura's point, the critical part of defining that is having very different experiences to explore that space. In our department, we say we're aiming to train creative engineers and innovators. And creative is a really important word there. So where does creativity come from? In the work that I do now, I would say maybe I'm not the traditional roboticist. I approach the work differently, and how I frame the problems is different, and it's because of the very different experiences I've had from other people in computer science or robotics. I did my master's degree in human factors engineering. I started my career in aerospace engineering, which is a systems discipline, which is looking to design a full system.

And so you can't optimize each individual component of a spacecraft, say, and expect your overall spacecraft to be an optimized system. There are trade-offs and pulls and pushes. And so those very different experiences set me up on a trajectory to make very different contributions. I think that's the key aspect of following your passion: what are the different experiences you're going to have that you can bring together to have a unique impact? And that's your path to carve.

Daniel Serfaty: Thank you, Julie, and thank you, Laura, for sharing some of your knowledge, but most importantly, your passion and your wisdom when it comes to all these topics. I remind our audience that Julie and Laura just published a book called What To Expect When You’re Expecting Robots: The Future of Human-Robot Collaboration. It’s by Laura Majors and Julie Shah. I urge you to read the book and make your own impressions with respect to what you can contribute to the field, but also perhaps the choices you’re going to make in your professional life.

Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast and tweet us @mindworkspodcast or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.

Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. This week, we are going to talk about something all of us think about, or perhaps should think about, that some of us even obsess about. And that is the need to understand and improve our fitness, our health, and our wellness. All of us are very much preoccupied with this topic, especially after more than one year of COVID. I recently read that the fitness and wellness market is a $1.5 trillion business. That's trillion with a T. There's been huge growth in this industry, not just because of the increased prevalence of chronic lifestyle-related diseases like diabetes and obesity, but also because of the explosion of new technologies that are entering this domain.

Therefore, it’s not just important to understand how we can maintain our fitness as individuals and as a nation, we need to understand the big business and the new technologies around it. And my panel today is extremely well-positioned to show us the way. Today I have with me, a management expert, an engineer, and a doctor, which sounds like the beginning of a joke. Each of whom brings a unique perspective to understand this business of fitness and wellness, as well as the challenges we face as individuals in this area. 

My first guest is Jessica Lynch. Jess is a former PwC consultant and a Babson College MBA who turned her investigative talents towards figuring out how to achieve fitness goals after being inspired by her own family's unique experience with lifestyle changes. In 2018, Jess founded Wishroute, which enables wellness companies to add human support through text message conversations to their platform, and to provide the personal touch and encouragement that is critical when people are trying to build new habits.

My next guest is Angelica Smith. Angelica is an engineer with more than 12 years of experience performing quality assurance, or QA, and leading test initiatives on software projects. At Aptima, she manages the US Marine Corps physical training platform called FitForce Planner and Mobile App, which allows her to work directly in the field with both fitness trainers and Marines.

And last, but certainly not least, is Dr. Phil Wagner. Phil is the founder and CEO of Sparta Science. He's a physician and a strength coach whose own athletic career as a football and rugby player was cut short by a series of avoidable training injuries. Phil's commitment to data-driven coaching and athlete development began when he was a coach at the University of California, Berkeley and at the University of California, Los Angeles, as well as a rugby coach in New Zealand and Australia.

Phil received his medical degree from the University of Southern California, focusing on biomechanics. His passion for protecting athletes' health and longevity through injury resilience inspired him to found Sparta Science, a Silicon Valley-based company dedicated to helping the world move better. Welcome to MINDWORKS, Jess, Angelica, and Phil. I'm going to start with a question, because after all you come from very different parts of the profession: what made you choose this particular domain, human performance and health and fitness, as a field of endeavor? Jess, you want to tell us that?

Jessica Lynch: Absolutely. Well, thanks for having us. I’m excited for today’s conversation. Personally, I have a deep passion for wellness from my family’s experience with lifestyle change. My brother was diagnosed with juvenile diabetes as a kid, and my mom actually wrote a book about how our family rallied around the disease and changed our lifestyle one little habit at a time. Through that experience and book tour, I got to tour the country, helping other families change their lifestyle in the same way and have just always been so inspired by the power of not feeling alone with change. And the power of breaking changes down into super small achievable steps and how once they’re ingrained as a habit, they don’t take energy to sustain and it’s your new normal. So that’s been a big inspiration for why I entered this space and with what we’re doing with Wishroute. 

Daniel Serfaty: So a personal reason?

Jessica Lynch: Mm-hmm (affirmative). 

Daniel Serfaty: Phil, why are you here? You’re a bonafide medical doctor. 

Phil Wagner: Yeah. I ultimately came to this space, I was an athlete. Had about a dozen different surgeries to the point where the NCAA, which is the governing body for college athletes basically said, “Yeah, you’re done. We’re not allowing you to play anymore.” So I moved to New Zealand to play pro rugby and got injured my first game and really said, “Okay, that’s it.” I’ve got to figure out for other people who want to be active, whether they’re an athlete or someone serving their country or someone that wants to enjoy being active in their everyday life, how can we equip them with the exercises and the information to do so, because there really shouldn’t be a physical limitation to activity. It really only should be your will. And that’s really why Sparta was started and why I’ve come to work in this area. 

Daniel Serfaty: So two really personal reasons. Angelica, you and I work together. I have the privilege of counting you as a colleague, and I've seen you work on different projects, but you seem particularly passionate about this one.

Angelica Smith: I am. I can say I am definitely passionate about this. However, I might be the exception, in that I feel like this industry chose me; I don't know if I necessarily chose it. I was simply fulfilling my obligations as a QA engineer on the software development team, and a few years later I became project manager. That was probably due to my many voluntary trips to the FFI schoolhouse at Quantico, because I live 20 minutes away. I was constantly there every six weeks, delivering briefings and trainings, and so I was unknowingly becoming the face of this thing. But I've enjoyed being in the space ever since.

Daniel Serfaty: Great. So you’re going to become an honorary member of the Marine Corps-

Angelica Smith: Yeah. 

Daniel Serfaty: With that happening. Very good. The three of you came at it from very different angles, and yet it's a domain where, once you're in it, I feel there is more passion than in other technology domains and technology applications. And so it's important for our audience to understand: what do you actually do in your day job today? Jess, what do you do? You're CEO of a startup, you probably don't get much sleep, but what do you do day-to-day?

Jessica Lynch: I try to prioritize the sleep, because if you're trying to create something extraordinary, I always say you have to have extraordinary input to get extraordinary output, and sleep is important for everything we do, including exercise. But I'll give you a little background on what we're doing with Wishroute. We offer a personal yet scalable way for wellness companies to keep customers engaged and successful in adopting their product. Right now, we've all had this experience: you have an idea for something, you download an app, and you never return to it. Most wellness apps lose 90% of customers by day 30, which is painful for the companies and painful for the individuals who have a good intention to lose weight or meditate.

Daniel Serfaty: You say 90%?

Jessica Lynch: 90%, yep. 

Daniel Serfaty: Of people drop off?

Jessica Lynch: Drop off, never return. 25% only open the app once and never come back. And it costs these apps upwards of $5 a user to get them to even download it. And like I said, it stinks for the individual who had the good intention and didn't get the support they needed to follow through and actually build the habit. Behavior change is really hard for us as humans; it's just a difficult thing to do. And app notifications, we know, are automated and don't make us feel accountable. So Wishroute provides the high-touch support in key moments of the user journey that people need, like onboarding or during a free trial. And we personally text with our partners' end users to help them stay motivated and form healthy habits around our partners' programs. So essentially, the accountability companion.

And we use automation and a variety of AI technologies alongside real-life, caring humans to power these personal text message conversations. And this gives our customers the scale they need to support all of their end users in a personal way, which they don't have an affordable way to do right now, and it increases customer success and company success. So on a daily basis, I'm leading my team, I'm refining the product, I'm talking with new companies, and that's a day in the life.

Daniel Serfaty: So that’s really a multi-pronged activity. I mean, the value proposition of your business, but also the fact that you are the CEO and founder. As a CEO and founder, Phil, what do you do in your day job? And tell us something about Sparta Science too. 

Jessica Lynch: What don’t we do, right? As a founder.

Phil Wagner: Yeah, [crosstalk 00:09:29] to Jess's point, that's why I'm laughing. It is more about what you don't do, as opposed to what you are doing, within a startup. So probably just starting with what we do: the company uses a machine learning platform to analyze movement, generally a balance test or a jump test, and from that it generates your injury risk and subsequently an individualized plan on how to reduce that risk, whether it's something that could happen or whether you're going through the rehab process. That's different based on the individual. We serve sports, we serve the military, and we're even growing into senior living facilities.
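
To give a flavor of the scan-to-risk-score pipeline Phil describes, here is a minimal, purely illustrative sketch in Python. The jump-test features, the synthetic data, and the logistic-regression model are assumptions for illustration only; they are not Sparta Science's actual measurements or algorithms.

```python
# Illustrative sketch only, NOT Sparta Science's actual features or model:
# train a simple classifier on jump-test measurements, then turn a new
# athlete's scan into an estimated injury risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-jump features: eccentric force, concentric force, landing force
X = rng.normal(loc=[20.0, 22.0, 45.0], scale=[3.0, 3.0, 8.0], size=(500, 3))

# Synthetic labels: higher landing force and lower eccentric force -> higher injury odds
logits = 0.15 * (X[:, 2] - 45.0) - 0.2 * (X[:, 0] - 20.0)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

model = LogisticRegression().fit(X, y)

new_scan = np.array([[18.0, 21.0, 58.0]])       # one athlete's jump-test result
risk = model.predict_proba(new_scan)[0, 1]      # estimated probability of injury
print(f"Estimated injury risk: {risk:.0%}")     # a plan would be keyed off this score
```

In practice, as Phil notes later in the conversation, the value is less in the score itself than in what it convinces the person to stop doing.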

So my work generally is spent on making sure that that technology is adopted in a way that fits seamlessly into whoever we're serving and their environment, because each one of those environments is different. And to Jess's point, at the end of the day, it still comes down to habits, no matter how good your technology is, no matter how good the data is. There's a great quote we have up at our company, at our facility: innovation is about adoption, not invention. Because with invention, everybody can come up with good ideas; that's not the challenging piece. The challenging piece of innovation is the adoption side. How can we get folks to adopt it as part of their normal workflow and habit structure?

Daniel Serfaty: I’ll get back to that later when we talk about innovating in this field. But yes, and many people learn that lesson the hard way. 

Jessica Lynch: All those different environments where you need someone to adopt it, the tool is going to fit into their life differently. Very interesting.

Daniel Serfaty: Angelica, what do you do in your day job, when you work as a program manager for the Marine Corps and at the same time are responsible for the technology that you are to implement? Can you describe to our audience a day in the life of Angelica?

Angelica Smith: Well, let's see. My duties probably aren't as complex as Jess's and Phil's. In general for the company, like I said before, I lead software quality assurance efforts, so I'm ensuring quality across our software development projects. But on FitForce I'm the project manager, so I'm wearing a lot more hats, with a lot more responsibilities in managing that project, which includes, of course, all the financials, all of the budgets, and managing all of the different resources that are required for the success of this program, such as employees: who's going to be doing what, making sure they have the correct skill set for the software requirements that we're trying to achieve.

Of course, looking at our allocations given that particular budget, and then just managing the day-to-day, making sure my team has the tools they need to be successful in what they're contributing to the program. Right now the FitForce team is a good, solid, maybe seven or eight of us. So at any given moment, I'm shuffling around hours or shuffling around resources and making sure so-and-so has what he needs.

Daniel Serfaty: But when you interact with the users of that fitness app, Marines basically, how do you do that work? I mean, you go to the field, you go literally on the ground?

Angelica Smith: Yeah. So with the Marines at the schoolhouse at Quantico, they have these six-week cohorts; every six weeks they get a group of new Marines who are there training to be these instructors, called Force Fitness Instructors, or FFIs for short. And so about every six weeks I go down there and I train that class and show them how to use FitForce Planner and FitForce Mobile. I help answer questions they might have and help them troubleshoot. We walk through a demo, and I give them a walkthrough of the user guide that I also created with them. And then afterwards, I'll probably sit down with the head instructors, and if they've communicated that they had seen some issues, then we sit down and walk through some of those issues, and we talk about features that they'd like to see in the next version and things like that.

But I oftentimes get to sit down and see how they develop these programs, what goes into that, what's expected of these FFIs, who have a range of knowledge. You have some FFIs who are trained in creating these plans because of whatever their background is. And then you have some FFIs who have no idea; they have no background in physical fitness or musculoskeletal health and things like that. And so they're just coming in as newbies and going through the six-week training to understand how to create these plans. I'm there to make their job easy. I help them go from using this paper-based PT planning process to using a no-kidding 21st-century planning tool.

Daniel Serfaty: Thank you for describing that interaction, because engineers sometimes design things for engineers, not for users. And this is a case in which those users, and we're going to talk about the athletes or even the regular customers who are not professional fitness folks, are the ones using these. And the way people are using those tools, as both Jess and Phil reminded us before, Angelica, is really key. But if people ask us, what's new here? At the end of the day, people have been into fitness for a long time, in different ways, at least in America.

From a technology perspective, your three companies are actually using technology, or science, or psychology to offer something to their customers. What is fundamentally new here in 2021, when we look at those technologies? Phil, can you take us there? You've been looking at this field for a while, but what is really, really new here?

Phil Wagner: The big thing that people are now starting to realize is, what are we going to do with this data? Because I think what's happened over the last five, 10 years is that there have been some great technologies that have collected data and made it presentable in a nice way. But I think we're getting to the point where individuals are looking at it saying, "Yeah, I know I slept seven hours, because if I went to bed at 10:00 and I got up at 5:00, I've spent money on this device that helps me do that math, right?"

And so now I think the challenge for technologies is that they have to deliver back. They're gathering information; now they have to leverage that in an ethical way to say, "Here's how you can use that data to add and change habits." Because just telling me I slept seven hours is no longer enough. And I think the hard part is, we're all limited by time. So you can't just say, "Well, just sleep eight hours. What's the problem?" You can't make the day longer, so what things in your day should you not be doing to allow more time for other things? Technology should be individualizing what you need to strip away, so you can do more of what you need.

Daniel Serfaty: That's very interesting. So you're making the point that just collecting data with the Fitbits of this world is necessary, but certainly not sufficient. We need to transform this data into useful and actionable information.

Phil Wagner: Right. And just saying you've got to sleep eight hours, that's just the lazy insight. We all have that, right?

Jessica Lynch: We have these great recommendations now; we're starting to get better and better at individualized, personalized recommendations for actions people should take. But where we pick up the problem from there is, we're human, and most of us need external accountability to follow through on something and break out of our current habits to form new ones, to take a detour from what we were doing yesterday and now do something different. And we don't feel accountable to things that aren't human and caring, to knowing that someone cares about what you accomplished.

So we're enabling that human heart and care to be delivered in a way that is based in behavioral science and motivational interviewing psychology, to help people feel accountable and actually adopt those changes in an achievable way. So it's really that continuum: I completely agree, follow the data into more personalized recommendations, and then get those personalized recommendations broken down and delivered in a way that actually helps people adopt them.

Daniel Serfaty: It's amazing that it's not just about sets and reps, or [inaudible 00:18:07] prescription, but there is so much thought behind it: as you say, accountability, but also the science of it. Basically it becomes more of a diagnosis and recommendation, dare I say, almost like the medical profession.

Phil Wagner: Absolutely. Yeah. It's pharmacology at the end of the day; it's just that in this case, the medicine is the exercise. You don't go to a physician and just hear, "Go take an antibiotic." It's, "Well, which one? How often? How much? When do I stop? When do I change?" Whereas we've kind of looked at exercise like that, where we've said, "Well, just go walk for 20 minutes." Well, one of the biggest breakthroughs in pharmacology was the extended-release, once-a-day pill, because it reduced the barrier, the threshold, to taking it. So if we say, "Hey, just go walk 20 minutes," and that person says, "I've got 10 minutes a day," they're going to choose option B, which is nothing. So how can we grade it, much like medicine has done with pharmacology?

Daniel Serfaty: Angelica, on this data question, how do you see it in your experience? After all, you're the one actually touching the data with your fingers. How do you see the relationship between us looking at data and making some recommendations for exercises, and the ability of the actual trainers, or trainees for that matter, to take those data, trust them, and actually comply with the recommendations?

Angelica Smith: So one of the things that I'm seeing, at least with my end users, the FFIs, is that they don't seem to be as interested in the data. It's the leadership that wants the data. I think the FFIs were simply looking for a tool that's going to allow them to do their job: create these plans easier and faster, and distribute them more easily. And that's what the mobile application does; it delivers them to the end users. The FFIs are responsible for hundreds and thousands of individual Marines' physical fitness. And so I don't think they're looking at the data as much; it's not as important to them. They just want to get these programs out to these guys and try to reduce injury, whatever that means.

But then you've got the leadership. They want to know how healthy the fleet is, how healthy the unit is, and they're looking at the data to better understand how effective this plan is, in fact. And so you've got these two different groups and what they care about. That's what I gather from being boots on the ground.

Daniel Serfaty: That's a very interesting observation. At the end of the day, the people who do the exercise want to have reasonable recommendations; that's what Phil was saying earlier. It's not so much that your heart rate is X that matters, it's what you do about it. Or maybe, as you say, the leaders, or the managers, or the commanders are more interested in having some statistics about the general health, the general fitness, of their units. These are two different stakeholders in this particular case.

But that's a big lesson: despite the extraordinary enthusiasm for people to measure everything, it remains to be seen whether or not those measurements lead to improvement. You need intermediate steps in between. One is making sense of those data; the other is encouraging people to comply with the recommendations in order to be effective. So Jess, could you describe a day in the life of one of your users? You're the one actually addressing most of us, a much wider population, with your tools and your services. These are not elite athletes or special forces soldiers; they are basically us.

Jessica Lynch: We have worked with some pretty unique groups, including frontline healthcare workers during COVID, which was extremely rewarding and challenging, because they were all being challenged emotionally, physically, mentally. But for a day in the life of working with an individual, our business has two key sides. Most of our business is working with other wellness companies, helping them increase compliance and engagement of their end users. But we also work directly with individuals as a testing ground to develop new programs and insights that we can bring to our partners.

So I'll give a day in the life; it's pretty similar for the end user in both. We help people focus on one healthy habit at a time, trying to break things down into achievable, sustainable steps. So if someone's working on exercise with us, at the beginning of the week they pick a daily goal for each day, one thing. And then each day they're getting a reminder or something inspirational in the morning, maybe a suggested podcast to listen to if their goal is to walk that day, or an inspirational message, like 10 minutes out of your day is 1% of the time, something like that.

And then at night, we check in for accountability, and that's when we're asking about their daily goal: did you go for your walk? And when they text us back, there's always a Wishroute guide to personally respond to them. And we've created a judgment-free coaching methodology that provides positive reinforcement, encouragement, and help to game-plan if it wasn't a good day. It's really important to make people feel comfortable saying, "No, I didn't do it," because that's when we can be most impactful.

And I would say the biggest kind of aha is just every day we hear, “I wasn’t going to do it, but I remembered you’d check in and I wanted to have something positive to report back. So even though I only had 10 minutes and my workout I was supposed to do was 20, I got out and made the most of that 10 minutes and I did a walk and some squats. And there it is.” And so just knowing that someone’s going to check in is incredibly powerful. 

Daniel Serfaty: So it’s an actual person. The Wishroute guide is a person who calls the [crosstalk 00:24:07] and asks them-

Jessica Lynch: [crosstalk 00:24:08] and text message. Yep, so we're texting people in the morning and at night, and whenever they text us, they're getting a personal response back.

Daniel Serfaty: But those sets of recommendations that you provide in the morning, given a particular fitness goal, "This is what we recommend. You watch this, you go walk, you do this exercise." Those are automated, or these are actual… you said they are texts?

Jessica Lynch: Yep. It's a mix of automation and personalized recommendations, which are automated. I mean, it's a set of preset selections and a mix of personalized things based on someone's own goals. So we're not a prescriptive service saying, "You need to do X, Y, Z." We're giving people inspiration to follow through with the goals that they've set, and with our partners, we're helping people follow through with their goals using the partners' content. So if it's a fitness app, we're sharing, "Here's the live workout schedule in the morning," or, "You said your goal was to increase flexibility, here's a yoga workout we think you're going to enjoy." So it's personalized and related to a preset set of messages.
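
Purely as a sketch of the preset-plus-personalized mix Jess describes, here is a tiny hypothetical example in Python. The message library, goal keys, and the morning_message function are illustrative assumptions, not Wishroute's actual content or code, and the human guide's side of the conversation is deliberately left out.

```python
# Hypothetical sketch, NOT Wishroute's actual system: pick a preset morning
# nudge matched to the user's stated goal; the evening check-in reply would
# still come from a human guide.
import random

PRESETS = {  # invented preset message library, keyed by goal
    "walk": [
        "Ten minutes out of your day is about 1% of your time. Lace up!",
        "Here's a podcast idea to take on your walk today.",
    ],
    "flexibility": [
        "You said you wanted to work on flexibility. Here's a yoga session to try.",
    ],
}

def morning_message(user: dict) -> str:
    """Combine a personalized greeting with a goal-matched preset message."""
    options = PRESETS.get(user["goal"], ["Whatever you do today, make it count!"])
    return f"Good morning, {user['name']}! " + random.choice(options)

print(morning_message({"name": "Sam", "goal": "walk"}))
```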

Daniel Serfaty: So personalization is an important theme today, because we are all different: we learn differently, we practice differently. And Phil, I know this is a key part of the technology that you're proposing. It's not only a very precise diagnosis, but it's also very much personalizing the diagnosis for that person. Can you tell us a little bit about how you use technology and machine learning to turn these measurements, these very precise measurements that you're doing with the machine learning platform, into something for the athlete, for the user?

Phil Wagner: There are two key, I guess, case studies we tend to see with folks that we work with, whether that's athletes or fighters, or even in the employer space. It's, "I only have so much time," and a big part of data's role, as we see it, is to convince the user to let go of things. And that could be something that they found to be useful in the past, or that could be something they're clinging to because they're really good at it and want to keep doing it. A good example is, if we think about warfighters, soldiers, CrossFit has really caught on in that group.

Why? Because it's hard and it makes you really sore, and it's really difficult and challenging. The question would be, does that group, which has incredible grit and is strong and explosive, need more of that, or do they need yoga to help promote flexibility and breathing techniques and recovery, or do they really just need to continue to endure more challenging circumstances? So I think technology's role should be to help illuminate some of these things, to teach the individual and say, "Hey, this might be something that's not serving you. You may enjoy it. You're going to have to make a decision. Are you training to serve your country, or are you training to be in the CrossFit competition?"

Both are okay goals, but ultimately you can't serve two masters. And we see this all the time. When we started in sports with football, offensive linemen, they squat for a living; every play they squat to get in their stance. They go in the weight room, and what do they do? They squat all the time. The best way to have an ACL injury is too much squatting, too much quad dominance. You've got to make a choice. Do you want to be an offensive lineman who plays a 20-year NFL career, or do you want to be someone who's really good at squatting and has a short career? You can't do both. And I think data helps educate the individual so they come to that conclusion, because ultimately that's what's going to make it stick.

Daniel Serfaty: There is a lot of that in the military. Angelica knows that; she works a lot with them. This notion of gaming, for example, is very popular in the military because it's like flight simulation, it's like war fighting simulation, and people like to play games. They like it, they're engaged with it. Whether or not that serves the purpose of learning the skills that they need to learn in order to be better war fighters is an open question, [crosstalk 00:28:37] enjoyment and actual skill acquisition. Your example is a wonderful one in that direction, too. Angelica, was there something during the performance of this project where suddenly you had an aha moment? You were going in one direction and, by interacting with the Marine Corps, with the Marines themselves, or with what you call the FFIs, the fitness instructors, you realized that they said something, they behaved in a certain way, that prompted you to change the direction of the project?

Angelica Smith: We used to have a capability within mobile to launch a workout for a group of individuals. So you’d have this roster feature, you could kind of say, “Who’s in this?” And you could check off all of the participants, and then whoever was leading that course could know who took part, who was there in class, and things like that. And so we thought that was a good feature, we thought that would be something they would use, and then it turned out that they didn’t want the feature. So we actually took the feature out, and then it came back around: during COVID we needed to put this group feature back in, and folks were working out individually.

But then as things started to kind of go back to normal, these group workouts were relevant again. And so without the customer coming back and saying, “We want this group feature,” I said, “We need to put that feature back. We dropped the ball when we took it out.” So we did come to the conclusion that even though they didn’t want it, they definitely needed it. It was an aha moment: they don’t really know what they want or need; we knew they needed this and we needed to put it back. 

Daniel Serfaty: Well, it’s a bit related. It’s this notion that the user knows what they want. I think Steve Jobs famously said, “I’m going to tell the user what they want.” And it happens in our field too. We should absolutely respect what our users want, but we should also understand what they need. And sometimes, by gently guiding them from what they want toward what they need, we provide the best service we can. 

Angelica Smith: During some of my face-to-face meetings and trainings with the Marines, I’d get all sorts of requests for crazy features, “We want to see this.” And some of it was kind of gamified, some of it was leaderboards and things like that. But some of the FFIs, I think it was the more experienced FFIs who had a background in health and fitness, did have a better sense of what they wanted and what they needed to perform their jobs better, and not just do the bare minimum of creating these plans, whether they’re effective or not, and then distributing them to the fleet. But then, like I said, you had some other Marines who just had some super wild wishes of things they wanted to see in the app that just didn’t make sense.

Daniel Serfaty: I’m not surprised they wanted to see a leaderboard. That fits Phil’s analogy to CrossFit. They want to see who can bench press the most weight.

Angelica Smith: But that’s dangerous. That is very dangerous, if you tried to do that for Marines. So we steered clear of that.

Jessica Lynch: I have an example of that. In our early testing with individuals, we found people had the best success when they had opted in to their daily goals, when they had set what they were going to do each day. And then as we were working with more individuals, of all different fitness levels, we learned that sometimes the step of setting those goals was too much for someone that week. And so we were skipping the check-ins: if you don’t set your daily goals, we’ll just have you start next week and set new ones. 

And then we started testing pre-setting the goals: we’re still going to check in with you on something, and you report back on anything you did for your fitness that day. And that actually got people more motivated to set their goals, and they were checking in and responding and engaging. So a little bit of a nudge, even when someone doesn’t opt in; it’s got to be a mix. We still find the best success when we get someone to take more action and be proactive in setting what they’d like to do, but our own intervention there has proved quite powerful as well. 

Daniel Serfaty: Yes. Thanks for sharing that. I’m going to change the topic a little bit. Phil, you said something earlier from your own personal experience as an athlete with injury and fitness: it’s not just about maintenance of health. It’s also about recovery from injury, or even injury prevention. Can you tell us a little more about that? How does your system at Sparta help people both in recovery from injury and in prevention? 

Phil Wagner: Yeah. Back to the example of identifying habits that could be improved, I mentioned the offensive lineman who squats as part of his sport and then goes and squats as part of his training. That’s where data can say, “Okay, well, you shouldn’t do that because it increases your risk like this; on the flip side, here’s what you should do instead.” No one likes to hear, “Don’t do this,” and not have an alternative, right? So the data should guide you: “Hey, this isn’t serving you well, here’s something else you can do to satisfy that need.” Almost a craving, if you will. If someone wants to lift heavy weights, “Don’t do this exercise heavy, do this exercise heavy instead.” That’s really where technology can help, to identify but also, most importantly, to suggest an alternative habit to adopt. 

Daniel Serfaty: Jess and Angelica, in that order: this notion of injury. After all, we are giving people advice on their health, on their physical health, on their fitness. How do we deal with the whole dimension of people injuring themselves, sometimes by following our advice or by not following it correctly? 

Jessica Lynch: For us, in the work we do with individuals and partners, it’s about making sure that they are motivated to talk to a professional who can look at their individual body and injuries hands-on, and look at all kinds of wearable data and really get in there. A lot of us ignore pain and symptoms, so we view our role not as the ones to diagnose and give you that game plan of how to rehab or what to do, but to motivate you to go talk to someone you can be in person with and get that more personalized advice, and then we’ll help you follow through with it. 

Angelica Smith: This is a big challenge, and that’s something we are looking at addressing, this whole idea of injury prevention. Within the app, we have opportunities for the Marines who are executing these programs to provide feedback to the creators of these plans, right? You have a creator of the plan who has a set of expectations about how effective the plan should be and how difficult it is. But the unit he’s distributing this plan to could be at all sorts of levels, beginner to advanced, in their workouts and physical physique. And then you have the users who execute these programs and how they felt the plan was; perhaps they thought it was more difficult, more challenging, than the creator did. Well, there’s a huge gap.

And so we put in place an opportunity for the individual Marine to say, “This plan was a lot more difficult. You said it was a five, it was for sure a 10. And oh, by the way, I got injured trying to do this plan that should have been a five but really was a ten.” And so the data that we collect, we’re hoping it’s going to help the FFIs, who are the creators of these PT programs, revise plans appropriately. So as feedback comes in, they can say, “Oh my goodness, out of the 1000 users who launched this program, they thought it was more along the lines of a seven, and we thought it was more along the lines of a five. Let me go ahead and adjust this program for my users.” And so we are looking at trying to fill in those gaps and really help prevent injuries. 
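The feedback loop Angelica describes could look something like the following minimal sketch, assuming difficulty is rated on a 1-10 scale and aggregated per plan; the names, threshold, and data are illustrative assumptions, not FitForce’s actual data model.

```python
# Minimal sketch (hypothetical; not FitForce's real implementation) of flagging
# PT plans whose reported difficulty far exceeds the intended difficulty.

from statistics import mean

def flag_plans_for_revision(plans, feedback, gap_threshold=1.5):
    """
    plans:    {plan_id: intended_difficulty (1-10)}
    feedback: {plan_id: [reported_difficulty, ...]} collected from users in the app
    Returns (plan_id, intended, reported_avg) tuples for plans whose average
    reported difficulty exceeds the intended rating by more than gap_threshold.
    """
    flagged = []
    for plan_id, intended in plans.items():
        reports = feedback.get(plan_id, [])
        if not reports:
            continue
        reported_avg = mean(reports)
        if reported_avg - intended > gap_threshold:
            flagged.append((plan_id, intended, round(reported_avg, 1)))
    return flagged

if __name__ == "__main__":
    plans = {"unit_pt_week1": 5}
    feedback = {"unit_pt_week1": [7, 8, 7, 6, 9]}   # users rated it harder than a 5
    for plan_id, intended, avg in flag_plans_for_revision(plans, feedback):
        print(f"{plan_id}: rated {intended}, reported ~{avg} -> review for injury risk")
```

In a real system the same aggregation would presumably also fold in injury reports, not just difficulty ratings, before a plan is surfaced to its creator for revision.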

Daniel Serfaty: Good, good. I think this is an essential part of what we do. So Phil, I’m going to use your physician training. There is a big trend in medicine right now, which we are actually copying in the general training and education market, called precision medicine. And the precision medicine hypothesis is basically that we’re all different, we all have different genomes, literally. And in the future, medicine will be hyper-individualized. So somebody who has cancer or a heart disease will have a very unique treatment that is totally adapted to their individual makeup.

Obviously we are not at that level, not at the genetic level, yet for education and training, but there is a big trend now, precisely because of the availability of large amounts of data about people and our ability to process it fast and tailor it, toward this notion of, “I’m going to give you a prescription,” translate here, maybe a fitness program, “that is just for you, at that moment in time, for what you want and for what your goals are.” How close are we to that level of individual treatment? In a sense, Jess, with her technology and her services, does that, but with a lot of human intervention in between. How do we do that? Are we close to this precision fitness goal? 

Phil Wagner: Yeah, I think the technology, and a lot of the data sets, are there. The key piece is making sure that the data that is or has been collected is good data. That’s one of the challenges with a lot of wearables: you’re bringing in a lot of data, but how much of it is noise? How can you really sift through and identify what activity was what, so you can create those models of when someone is at risk and how that risk can be reduced? So I think the data is there right now. Now it’s a lot more on the cultural side of things, of how organizations can position it in a mutually beneficial way, because that’s kind of the next part that’s coming, if it hasn’t already, which is, “Okay, all this information is being gathered on me, is that going to be used against me in some way?”

And we’re already seeing it, and people forget that that’s an issue within the military too. The Army is rolling out a new Army Combat Fitness Test, and the scores are hidden from most military leadership. The reason is, there is a complaint and a fear that performance on this new test is going to dictate promotions or not. That’s the cultural piece. When we get into insights within an organization to keep your organization healthy, that’s going to be the next challenge: we need to make sure it’s presented and shown in a way that says, “Hey, we’re analyzing and providing this information as a way to help, not to judge your future prospects within the company.”

Daniel Serfaty: I think this notion of what you do with this data, both the data you feed into the system and the data that is produced as a result of the recommendation, is a huge issue. We’re going to explore it a little later in the podcast today. Any comments, Jess or Angelica, about this notion of precision fitness, in the sense that everybody would have not only their own personal trainer in a box, but a trainer that recognizes your particular circumstances, medical or mental or whatever, at that moment in time? Are we moving toward that?

Jessica Lynch: I think the whole industry is moving in that direction. And it’s really exciting, because everyone needs to find what works for them, and there are a lot of ways to be successful. So tools that open up people’s perspective of what a workout can even be, what it looks like and how it feels, and that get them better results than doing what they thought they should do based on some non-science-based conception they read in a magazine, that opens up a lot of opportunity and will help people feel better. Because unfortunately, most people don’t feel that good, and that stinks. 

Angelica Smith: Yeah. I do think it’s a very exciting time for this. I think we are close, I don’t think we’re that far off. I think this is what executives want, this is what the leadership of the USMC and the Army, this is what the military wants. I think they’re ready for it, and I think it’s going to change the way we train our war fighters, and it’s going to change our expectations of a war fighter, I think for the better. So it’s a very exciting time. 

Daniel Serfaty: We’ll be back in just a moment, stick around. Hello, MINDWORKS listeners, this is Daniel Serfaty. Do you love MINDWORKS, but don’t have time to listen to an entire episode? Then we have a solution for you: MINDWORKS Minis, curated segments from the MINDWORKS podcast condensed to under 15 minutes each and designed to work with your busy schedule. You’ll find the Minis, along with full-length episodes, under MINDWORKS on Apple, Spotify, Buzzsprout, or wherever you get your podcasts.

I believe Phil shared with us a little earlier the large number of wearable sensors and fitness apps in the market. What do you make of it, in the sense of what’s good about them and what their limitations are? But mostly, are people, is America, more fit as a result of all these technologies that people wear around their wrists, around their arms, sometimes around their belts? Are people more fit as a result of the sensors? It’s a big question. Do we know anything about that?

Phil Wagner: No.

Daniel Serfaty: No, they’re not. 

Phil Wagner: No question. 

Daniel Serfaty: Why is that? 

Phil Wagner: Well, if we look at, at least, the areas that we are primarily rooted in, which is sports and military, musculoskeletal injuries are drastically higher. Mental health issues are also higher in terms of diagnoses or symptoms. So we’ve got a higher injury rate and a lower mental health functioning ability. And we’ve also got infinitely more technologies and data sources. So we have more data and more problems. 

Jessica Lynch: Do we know why that is, Phil, do you have an understanding? 

Phil Wagner: It’s hard to say, “Well, I know why.” But my theory is that a lot of information, when it doesn’t present insights, just causes more anxiety. And we’ve seen that with a group we met with in the Air Force. They said they stopped using a tracking sensor from a wearable company that’s very, very prominent. They stopped using it because pilots were so obsessed about getting their sleep that it prevented them from getting sleep. So again, rather than just telling you, “You slept this amount, you need to sleep more,” there needs to be, “Okay, you can only sleep this amount, so here are some tactics, or some other things not related to sleep, to help support and compensate for that lower level of sleep.”

Jessica Lynch: That are achievable. 

Phil Wagner: Yeah. 

Jessica Lynch: Accessible and realistic, [crosstalk 00:44:50]. Yeah.

Phil Wagner: Totally. Totally. To Jess’s point, they’ve got to be realistic. Otherwise it actually has a net negative effect, because if you’re told, “You slept six hours, you need eight to be optimal,” you look at your schedule and you say, “I can’t sleep more, so my life is sub-optimal.” You’re better off without that data. 

Angelica Smith: It’s interesting, because the military is evaluating all of these sensor technologies. Why don’t they have this data, why don’t they have this understanding, and why is there such a huge range of technologies they’re evaluating? Have they not seen these effects? I’d just be interested in understanding the rationale for doing so. 

Daniel Serfaty: It’s the general fallacy that the availability of technology will cure the ills of society. 

Phil Wagner: Absolutely. I mean, the analogy we use all the time is, it would be like someone standing on a scale and assuming that standing on the scale causes you to lose weight. It doesn’t work that way, right?

Jessica Lynch: Information is power. It’s important to know where you are to help inform where you could go. But that’s what’s exciting about this next phase: having enough data sources and data points. We have so many devices collecting these things around us. Did you start turning the lights lower before you went to bed? When was the last time you had caffeine? All these things can help us optimize, on an individual basis, what’s going to help someone get better quality sleep within those six hours that they have. But right now, a lot of these apps are just recommending “20 general things you can consider to get better quality sleep,” and that’s overwhelming and it can be detrimental, because it’s like, “Oh, well, how do I even start?” Or, “I’m not going to do anything.” And then they’re back to where they were. 

Daniel Serfaty: I believe that is a general fallacy. Providing people access to all the information channels, all the news channels, with just the push of a button doesn’t make people more informed. There has been a lot of measurement of that; actually, it makes people more frustrated, as Phil says, that they are not informed enough. If anything, it gives people the illusion of being informed when actually they are not. The same thing with the proliferation of diets, for example, and yet the obesity rate in America keeps growing. Why is that?

So it’s really something to think about. And when you look at the collective solutions represented by the three of you, fitness, nutrition, wellness, health, recovery from injury, where do you see the market going right now? I mean, are we focusing primarily on physical fitness, or on something more holistic, that has many components? Let’s call that wellness for now: emotional, mental, and physical. Can you make a prediction about where the trend is going, where people are going to focus more in the future? Anyone want to pick up that dangerous question? 

Angelica Smith: I think the focus is currently on the physical aspect of health, but I think we are embarking on the idea of total fitness. I see that it’s very immature right now, but I do believe that’s where the industry is headed maybe in another 10 years with prescriptive abilities. So that’s my short response to where I think we are and where I think we’re going. 

Jessica Lynch: I think there’s an exciting new wave of focus on mental and emotional health, meeting people where they are and making it okay to not feel okay, giving people the individual support they need with different apps that now connect you to mental health professionals. But also, these consumer fitness companies like Peloton just created a new series based on mood: “Here’s a workout to do if you feel sad,” or if you feel confident. It’s really interesting, this wave of mental health innovation and investment in the VC community. So it’ll be interesting to see how the emotional side of making change and focusing on your health moves into every other aspect of health.

Daniel Serfaty: I wonder if that’s an invitation for a lot of quasi-science and charlatans to enter the market, because if you’re sad, eat a zucchini, but if you are angry, eat a pepper. People are going to come up with those ideas-

Jessica Lynch: Well, in this case, it’s still just trying to get you to move. So the positive is that no matter how you’re feeling, you can still do something, rather than sitting on the couch and increasing your BMI. I like that aspect of it: do something, no matter how you’re feeling. 

Phil Wagner: There are kind of two populations we all represent on this call. One is people who just need to get up and move, to Jess’s point. And so anything that can help inspire them to do that is helpful, and the science really isn’t necessarily there; that’s more marketing, how do you speak to the individual? The other piece, though, is where that kind of messaging can be dangerous: people who are already active. I mentioned the Air Force; talking with some Air Force pilots, they love Peloton. The challenge becomes that they have low back pain and tight hips from sitting in a compact cockpit, a small cockpit, all day.

If you ask me what’s the worst thing you could do, I would say it would be getting on a bike and riding it hard, because if your back hurts and your hips are tight, you should do zero Peloton, zero biking. And so I think it becomes important for the science, more so for the groups that are already active, to direct them to which activities are okay. Because sometimes they get lumped in with the non-active population, where anything is good. And the Air Force or the athletes might not be the same way. “Well, anything’s good, I got on the bike.” It’s like, “Well, no, in your case it’s not. Everything is not good.”

Jessica Lynch: Differentiation. 

Phil Wagner: Yeah. 

Daniel Serfaty: That’s a very, very insightful remark, Phil. You’re right. Depending on which level you’re at and what your needs are, you should have a prescription that’s particular to you, and not everything goes just because it’s engaging and there is a score in the lower left corner of the screen. I’m also interested in continuing on something that, Jess, I know you mentioned a couple of times today, and I know it’s at the center of the whole value proposition of what your company is doing. It’s this connection, often talked about but seldom implemented, between physical fitness and emotional fitness, or even mental fitness. How do we handle that with technology when we don’t have a psychotherapist on site, in a box, or even an MD in a box? How do we combine these two? 

Jessica Lynch: Well, there’s an opportunity to combine less of that professional one-on-one time with more of a lower-grade touch: you don’t need an MD to ask if you went for your walk that day, or if you followed the diet plan that was specified in your last dietician session. So it’s blending the two. But ultimately we’re human, and we like to feel that positive reinforcement from others. And we like to know that we’re not alone, and that if we had a hard day, it’s not a reflection of our own worth, and we’re still worthy of getting up the next day and investing time in ourselves versus all the other things that we’re doing for everyone else around us.

It’s really difficult for parents; it can feel selfish to do things for themselves when they otherwise could be doing something for their kids. So feeling like by doing something for themselves they’re actually doing something for someone else can be very motivating. And so I think it’s about creating the structure around the prescriptive recommendations to help people follow through, adopt them, and be successful. But we can’t forget that we’re human; we’re going to have good days and bad days. You need to be able to maintain your motivation and feel encouraged to keep going, because there are going to be plenty of bad days. 

Daniel Serfaty: Jess was describing this connection between the physical and the mental for the larger consumer population. I wonder if it also applies at the other end of the curve, with people who are professional athletes. 

Phil Wagner: Yeah. And I actually think more so with trained individuals, because they are closer to their physical capacities, to the point where if someone’s sleeping seven hours a day and they’re very well-trained, we often recommend, “Hey, train one day less a week, take that time and use it to sleep more. That’s a workout in and of itself.” Because if the goal of exercise for you is to perform better at your job, or run faster, or lift more weight, if that’s part of your job, sleep plays just as much of a role as lifting weights.

So you start changing that mindset of a singular goal, optimizing through mood but also the physical, and you can’t separate those out. I think the other place this comes into play more in the future, if we look ahead, is in the war fighting and athlete community. At least for a lot of type A males, speaking for myself, no one’s raising their hand saying, “I don’t feel good mentally.” No one’s saying, “I want help.” They should; we all should be doing that more. But physical exercise can be a leading indicator for that mental health. If individuals are exercising less, are less motivated, or are performing at a lesser level, that could be a more effective leading indicator than surveys in groups where they don’t want to admit there’s a weakness.

Daniel Serfaty: That’s very interesting, that we have to pay attention more, as you say, to the professionals than just to the general population, because of that reluctance. Do you see that in your world too, Angelica, with the tough Marines? Do they think that just going to the gym will fix it, as opposed to meditating, or taking a break, or doing something for their mental health?

Angelica Smith: Yeah. Unfortunately, my experience has shown that what’s important to them is just that physical aspect of health. But like I said previously, I’ve seen certain individuals within the Marines advocate for mental health and other aspects of health, not just the physical training part. So you’ve got people advocating within the military who want to see more of that research, and want to see tools that drive advancements and performance in that area for our war fighters. I don’t think we’re exactly there yet, however.

Daniel Serfaty: Yes. Maybe that’s a population that self-selects, like maybe professional athletes do, for particular reasons; something to think about. Talking about, again, mental health and physical health: as we look back at the past 15 months of COVID, what kind of change did you see in the way people use your solutions and technologies? And did that change the equation in terms of how people look at their fitness, whether it’s physical or mental? Jess, do you want to take that on? 

Jessica Lynch: We saw two big changes. First, more interaction. A lot of people were very isolated, and being able to feel connected with other people during that time made a big difference, especially being able to share some of the hardest things that were going on. We were working with a lot of frontline healthcare workers, and we got reports of, “I had five friends commit suicide this month,” and having a safe space to say it was really powerful. So we saw people wanting to interact more and share more than we had ever seen.

And we also saw people adjusting their definition of what a workout meant, because they had always gone to the gym, or they had always gone to a class, and now those facilities were not available to them. They were in a new location and needed to find new ways to move that worked for them. So we saw a lot of people experimenting and trying a lot of new types of exercise that they hadn’t considered before, which is really joyous, because if you dance or do [inaudible 00:58:05], or lift weights at home with things that you didn’t realize counted as weights, it means you’re doing something.

Again, this is for people who are coming off the couch, not trying to optimize performance after already being at over 80% of what they could achieve. Everyone’s having to redefine what their routine is and wanting more human connection, which I hope is sustained, because ultimately everyone’s going to be more successful if they find what works for them and can adapt to any situation they’re in. More resilience. 

Daniel Serfaty: That’s interesting as a coping mechanism, and the importance of being connected to something or someone that will tell them how they’re doing, or will listen to them. How about on the high end of the training, Phil, Angelica? How did COVID affect the way people use these technologies?

Angelica Smith: I would say, similar to Jess, we saw our numbers skyrocket because gyms were closed and people couldn’t do group exercise. Now, although we saw an increased number of users and an increased number of plans that were created, those plans didn’t include dance or yoga, things like that. But we saw an increase in usage. And then afterwards, as things began to open up, those numbers began to drop; unfortunately they went down quite a bit. But we did see a great amount of usage around COVID for about two months. 

Phil Wagner: From a diagnostic standpoint, movement-wise, we just saw a much more heterogeneous population coming in. Because before, in a given military unit, or a given sports team, or even an employer, people would be doing a program that was set, formalized, given to them, supervised. And people would gravitate to moving a certain way as a group because they were all doing relatively similar things. Then when COVID hit, everybody was dispersed, and folks chose what they wanted to do.

And that ranged from Netflix bingeing as a workout, to running more and using where they were to be even more active. And so what it did is create a much greater need for a diagnostic. Okay, everybody’s coming back into the company; it’s like everybody got deployed and now everybody’s coming back in. So where is everybody at, so we can triage who needs what?

Daniel Serfaty: It’s fascinating to see how many of those adaptations are going to linger way past COVID. I’m really observing that now in all kinds of environments; some things are going to stay, but exactly what, we don’t know. So I have one last question, and then I’m going to ask you for a prediction and a piece of advice. For my last question, I want to go back to the notion of data. There’s a lot of data floating around here, whether it’s data that goes to feed the machine learning algorithms or a recommendation engine, or data that flows to your Wishroute guides.

And this is quite personal data: data about physical fitness, about mental health, or sometimes maybe even medical, some medical detail may be entered there. Is that an issue? When it comes to this data about fitness and nutrition and personal health, et cetera, do we have to worry about HIPAA compliance, or at least some kind of protection? What are the ethical boundaries of what we can do with all this human data floating around?

Jessica Lynch: Ethically, and Phil, I liked your comment earlier that people only want to give you that information if they know it’s not going to be used against them. So as you think about using these tools in organizations, you’re not going to get people to adopt them if they worry that it’s going to cause them to not get promoted, or to be judged or not liked by people around them. So the framing and privacy are extremely important. Internally, we remove any identifiable information about the individual from our staff and processes, so nothing that someone says to us can be tied back to an identifying factor about them; essentially, we anonymize all of the information coming through. That’s one of the key things that we do to protect people’s privacy. 
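A minimal sketch of the kind of de-identification Jessica describes, assuming incoming messages are scrubbed of obvious identifiers and user IDs are replaced with salted hashes before staff see them; the patterns, salt handling, and names are illustrative assumptions, not Wishroute’s actual pipeline.

```python
# Minimal sketch (hypothetical; not Wishroute's real pipeline) of stripping
# identifiers from an incoming message so feedback cannot be tied to a person.

import hashlib
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def pseudonymize_user(user_id: str, salt: str) -> str:
    """Replace the real user id with a salted hash so records can be linked
    internally without exposing who the person is."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def scrub(text: str) -> str:
    """Redact obvious identifiers (emails, phone numbers) from free text."""
    text = EMAIL.sub("[redacted-email]", text)
    text = PHONE.sub("[redacted-phone]", text)
    return text

if __name__ == "__main__":
    record = {
        "user": pseudonymize_user("jane.doe@example.com", salt="rotate-me"),
        "message": scrub("Hit my step goal! Text me at 555-123-4567 if anything changes."),
    }
    print(record)
```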

Daniel Serfaty: But you do that voluntarily, not because there is a regulation that says that you should, is that right? 

Jessica Lynch: We do that voluntarily. Yes. 

Daniel Serfaty: How does it work Angelica, Phil, for your own technologies? How do we protect those data? 

Angelica Smith: I actually had a question for Jess. I was wondering if someone does submit PII or they submit some piece of information they shouldn’t… I mean, I know you’re collecting that, but how does that work? What do you then do with that data? 

Jessica Lynch: There are certain parts of the system where we collect certain data points, and those pieces of information wouldn’t flow through to the key things we’re storing about the individual, because we’re not prompting for anything that would fall into that category. In the daily conversation, which we do ultimately capture, someone could share something like that with us, but we don’t then assign it to their profile or store it in our systems as part of that person’s record, if that makes sense.

Daniel Serfaty: Okay. 

Phil Wagner: Similar to Jess, we’re very careful about de-identifying personal information. We actually have gone out and gotten fully HIPAA compliant and treat all our data with that lens, because even though it’s not required or mandated, we believe that that information is HIPAA information and that it is medical data. It certainly is personal information, but I think we need to start treating it much like we would a medical record. And I think it’s important that that standard is set and communicated, because that’s only going to enhance the trust, so individuals don’t feel it’s going to be leveraged in some way against them. 

Daniel Serfaty: Absolutely, yes. That’s exactly what I was thinking about. In fact, this is one case where the feeling of having data protected may actually make the tool much more effective, because of that trust, as you say. People are pretty concerned; they don’t want that data to end up at some pharmaceutical company that is going to try to push some pills on them because it knows something about their condition, for example. Thank you for that, because I think that’s probably on the minds of many people in the audience: if providing my data and receiving data is going to make me more fit, that’s great, I just want to make sure that somebody is watching what’s going on with that data. And I think you just reassured them that even though the industry is not as regulated as, say, the pharmaceutical industry, people self-regulate right now in anticipation of that. 

Angelica Smith: I’ll go ahead and go on record and say that FitForce was developed in a way that we do not collect any of that information. I should probably make that statement about the PII and HIPAA stuff: we developed it in a way that we do not encounter any of that information, and so we’re not even going down that rabbit hole.

Daniel Serfaty: Okay. But you might one day and then these are good guidance that you have from Phil and Jess about how to deal with that. 

Angelica Smith: Yes. I see that in the near future. 

Daniel Serfaty: Yeah, absolutely. Good. So now I’m going to ask you to put your prophet hat on, and then your guide hat on. Close your eyes, it’s now 2031. Is America more fit as a result of these next-generation technologies? More fit, less injured, feeling better, et cetera. Basically, what we learned is that sensors by themselves do not accomplish this, so is this combination, this industry, going to make America more healthy? Ten years from now, we wake up, and what? How do people use these? They say that prediction is very difficult, especially about the future. So who wants to make that prediction?

Phil Wagner: I can start. In 10 years, I think that, at least from a domestic standpoint, the US will be a fitter, healthier population; in three years, we will be worse than we are today. And I believe that, and maybe it’s just my Silicon Valley perspective, there’s going to be a bubble that will burst over the next few years. For all these technologies and gadgets that are gathering information, their judgment day is coming. And they’re going to be judged in a few different ways, which we’ve covered on this call. One is they’re going to be judged on how they handle that data, the privacy, the HIPAA compliance; we’re seeing it right now with Apple, who’s throwing down the gauntlet. 

The other way they’re going to be judged is insights, and Daniel, you brought this up before: ultimately, all data is going to be shared. So it’s a matter of how comfortable we are with the transaction and what we get back, right? We being the consumer, the patient, the employee. So ultimately those companies are going to be judged on that security and on insights that are actionable for the individual. And then ultimately, I think there’s the science piece, because it’s very easy to create something that measures things, but in order to stand the test of time, it has to be an accurate scientific variable, or at least a valid metric that’s being tracked.

Jessica Lynch: I’m optimistic on two levels. One, it is absolutely amazing what’s possible because of the smartphones in everyone’s hands, the baseline that we’ve established with the tools currently in existence, and the exciting innovations that are coming and being worked on right now to make things more personal, more specific, more human, back to more human. And on the other side of it, I’m optimistic because unhealthy people cost a lot of money, and that’s a big problem that there will be more and more eyes on. So as the US population gets more unhealthy, there are only more resources going toward trying to fix that problem, and more money to be made fixing it. So the mix of the technology innovations that will have happened over the 10 years and the motivation of people to make money and reduce costs, I think, will get us there. 

Daniel Serfaty: That’s excellent. Angelica, you’ve heard two CEOs with two long-term optimistic views, even if in the short term things are going to be messed up. What’s your view?

Angelica Smith: I’m conflicted. I’m essentially an optimist at heart; I want to be optimistic. I feel like the industry will provide the technology and the means, but I don’t know if the population will use it appropriately. A huge part of health is diet, and we struggle with maintaining a good diet, the things that we put in our bodies. Are these applications going to address that? If your budget is $30 to feed a family of six, McDonald’s is up the street and you’re pressed for time, then a home cooked dinner isn’t… the app is recommending you cook a home meal, but you don’t have the time or the money, and so McDonald’s it is. So I think the technology might be there; I don’t know if people will be ready to actually utilize it.

Daniel Serfaty: That’s a very important point. The interaction between these technology solutions for physical fitness and our own society, the socioeconomic dimension of all that, is a good lesson for our listeners to ponder. Talking about our listeners, this is where I’m going to ask you for a piece of advice as a way to wrap up. We have a lot of young folks in the audience who are considering a career, maybe in college or before college, and they are listening to three brilliant people coming from very different perspectives. 

The medical and athletic perspective, the technology and software engineering perspective, and the entrepreneur, management, and expert consultant perspective. And the question is: if they are fascinated by this field, fitness and the technology to help people feel better about themselves and be more healthy, what career advice would you give them? What should they study to get into it and to succeed in it? Angelica, you’re ready to talk; what advice do you have? 

Angelica Smith: I would give some general advice just for them to be bold, get involved as much as they can, whether that means taking a class or several courses, interning with a company that’s doing similar work that interests you. And lastly, I would say to remember, it’s always easier to ask for forgiveness than permission. 

Daniel Serfaty: Jess.

Jessica Lynch: I think having the confidence to be comfortable being uncomfortable is one of the most important things in life. And so, just as Angelica said, go pitch an early-stage startup to be an intern. I hired 10 interns this summer, and there are so many ways to get involved and get that experience. And I think one of my biggest aha moments career-wise, early on, was that it doesn’t matter how much experience someone has; they don’t know everything, they don’t have everything figured out, and we all bring unique perspectives and critical thinking to the table that are valuable. So yeah, be bold, go get some experience. There are so many open doors. Sometimes you just have to knock and ask, and be comfortable being uncomfortable, because that’s going to serve you well your whole career. 

Daniel Serfaty: Thank you. That’s very wise. Phil, for somebody looking for a career path to succeed in that direction, what would you tell them? 

Phil Wagner: Yeah, I think that uncomfortable piece is certainly great advice. Things don’t work out like in the movies. There is a lot of discomfort that’s necessary in the learning process, and you have to recognize that and be okay with it. I think the other thing to add is how important the soft skills are. If you’re going to be involved in health and human optimization, human performance, data is critical, but having good data, good science, is not enough. There has to be an understanding of and an empathy for who’s using it and why they would want to use it. And that has to be really understood, not only initially but as an ongoing process, never losing touch with the fact that there is a human piece to this, and there has to ultimately be some sort of engagement, whether that’s through a wearable or another device. 

Daniel Serfaty: Well, thank you for very wise advice. All three of my guests just gave you some nuggets that I hope you will use as you make choices. These are nuggets that they learned not because they read them in a book, but because they practice them themselves in their daily jobs. 

Thank you for listening, this is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast. And tweet us at @mindworkspodcast or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.

Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. These days, whether we are in our cars, at our desks at work, or at play, we are increasingly surrounded by automation and so-called intelligent devices. Do they help us, or do they actually make our lives more complicated? My two guests today are coming to the rescue. They’re experts in envisioning and designing intelligent cognitive assistants to harness these emerging technologies in order to help us make better decisions, alleviate our workload, and achieve, perhaps, and that is a question, a higher quality of life in the future.

First, Valerie Champagne is retired from the United States Air Force, where she served in the intelligence field, specializing in all-source analysis, collection management, imagery exploitation, and command and control for more than 20 years. After the Air Force, she worked as director of advanced technology for PatchPlus Consulting, when she first partnered, among others, with my second guest. Currently, she’s a lead for Lockheed Martin Advanced Technology Labs’ portfolio in command and control, with a focus on human and machine capabilities to support decision-making.

My second guest is Sylvain Bruni, who I’m honored to count as a colleague at Aptima. He is principal engineer at Aptima and the deputy division director for Performance Augmentation Systems. He’s an expert in human automation collaboration. His current work focuses on the design, development, and deployment of, hear that, digital sidekicks that provide cognitive augmentation to mission critical operators, both in the defense and the healthcare domains. Valerie and Sylvain, welcome to MINDWORKS.

Sylvain Bruni: Thank you for having us.

Daniel Serfaty: Let me ask you first to introduce yourselves and tell us what made you choose this particular domain, a complicated domain, exciting but complicated, as a field of endeavor. Valerie, you could have stayed in the Air Force and been a command and control expert. Sylva, you’re an electrical engineer. You could have been an engineer working on complex electrical systems or different kinds of complex systems at MIT or other places. Why did you go into this human field, Valerie?

Valerie Champagne: For me, it’s all about making a difference for the warfighter. Let me just expand upon that. You’ve heard my background. Having spent most of my time in the Air Force, I’ve typically been on the receiving end of technology. I’ve experienced what we call fly-by-night fielding, I think you’ve probably heard of that, which basically means capabilities were developed and fielded devoid of real operator input or training. And so ultimately, and we used to say this at the unit I was in, these capabilities became very expensive paperweights or doorstops, because they either weren’t appropriately developed for what we needed, or we didn’t know how to use them. And so they weren’t used.

Toward the end of my career in the Air Force, I did have an opportunity to lead an acquisition of emerging technologies for command and control. And for me, that was a eureka moment. That is when I was in a position to connect the developers to the operators, so that what we delivered was relevant to what the operator needed, and the training was there so that they could actually put that technology to use. When I moved on from the Air Force, I pursued work in the emerging tech area because that’s where I really thought you can make a difference for the warfighter, developing and delivering those capabilities that make their life easier and their life better. That’s why I’m in this field.

Daniel Serfaty: Thank you. No, that explains it. I think that’s a theme we’ll explore over the next hour or so, this notion of inadvertently, in an attempt to help the warfighter or the user or the doctor or the surgeon, actually making their lives more complicated. And that happens when engineers design for engineers as opposed to for human beings. Talking about engineers, Sylva, you’re a bonafide engineer. What made you come to this field?

Sylvain Bruni: Interestingly, it’s actually very personal, from a childhood dream of being an astronaut and going to Mars and exploring space. I kept seeing on TV, whether in cartoons or in TV series, all of these space folks going into those advanced spacecraft and exploring. They always had this kind of omniscient automated voice that they could talk to that was doing those incredible things. And to me, I was like, “Well, I wish we could have that. If I’m going to be an astronaut, I want to have that kind of technology to help me out, because certainly I’ll never be able to do everything that the characters in those fictional stories are doing.”

And little by little, learning more about technology and becoming an engineer, what I’ve come to realize is that there are so many different problems that need to be solved for those types of space exploration: propulsion, radiation, that kind of stuff. But the human aspect, the cognitive and behavioral aspect of the technology that interacts with a human and helps do things that the human otherwise cannot do, wasn’t really paid much attention. And so to me, it was, “I want to look at, I want to solve, those problems.” And so, little by little, learning and understanding better what this is about, the field of human factors and cognitive systems engineering, and then getting to actually build some of those technologies, is really what’s driven me to this domain and the passion to actually build those types of digital assistants.

Daniel Serfaty: Thank you both for mentioning that kind of dedication and passion, because I think it is a common denominator among people in our field, whether they come from an engineering angle or from the expert or warfighter angle, in your case, Valerie, to say there is a way we can do better. And for our audience, this is a mystery, because it sounds like, “Oh my God, this is art. This is not science. This is about the human mind.” And yet there is deep science and complex technology that can take that into account. Talking about complex technology for our audience, what do you do today in your day job, Valerie? Can you describe things that you actually do, so that our audience has a more tangible sense of what the research and engineering in this field involves?

Valerie Champagne: I still work in the emerging technology field, for Lockheed Advanced Technology Lab. I focus on command and control, as was stated at the beginning, and specifically on the development of artificial intelligence and machine learning for decision-making. A big area for us is AI explainability and how the human will interact with that AI to enable speed to decision. We’re very focused right now on Joint All-Domain Operations, and there’s a big speed and scale problem with that. We’re focused on the development of AI to enable speed and scale, but also to ensure that the human is able to understand what the machine is providing to them as a potential option or operation.

Daniel Serfaty: We’ll come back to that notion of the human understanding the technology, and perhaps also the reciprocal design, which is technology understanding humans in order to help them best. But for our audience, you said command and control. Most people don’t know what command and control is. Can you tell us what it is? Is it just decision-making in the military, or does it go beyond that?

Valerie Champagne: It does go somewhat beyond that. Ultimately, command and control is having the authority to make decisions. And it consists of strategy development, target development, resource-to-task pairing, and then, of course, it includes the execution of operations, the dynamic replanning that occurs when you are executing operations, and then the assessment of those operations. That’s all part of what we call the air tasking order cycle, and basically what we would consider command and control. And now, for Joint All-Domain Operations, we’re seeing that tasking order cycle expand to all domains.

Daniel Serfaty: That sounds pretty complex to me, in the sense that it’s not just decision-making, like taking one decision and moving on to the next. It’s this notion of the human constantly being in that planning and replanning loop. I can imagine how humans could need assistance from the technology in order to deal with that complexity.

Valerie Champagne: Absolutely. We can talk later about some examples where I wish I had had a cognitive assistant to help me out.

Daniel Serfaty: Yes, of course. Sylva, how about you? You are a principal engineer, which is a really senior engineer, but you are also managing, or co-managing, a division called Performance Augmentation Systems. That sounds like space exploration to me. What do you augment?

Sylvain Bruni: That’s a very good point. On the daily, I’m a program manager for those digital sidekick efforts. The types of augmentations we do are actually pretty far-ranging in terms of the domains that we work in. We cover a number of things, including intelligence analysis, whether that’s for the Army or the Air Force. We cover maintenance and the inspection of maintenance operations for the Navy. We help the Missile Defense Agency with data analysis in their simulation environments, which have a lot of data, the most I’ve ever seen in any domain. We also help other agencies and commercial partners figure out how technology can serve as a way to expand the cognitive capabilities of humans. That means exploiting better the data that they have, in a way that matches what they want, in the time that they want, to actually do the work that they need to do, ultimately yielding the types of outcomes that they want.

For example, if you think about a cognitive assistant in the maintenance environment, one of the big problems that the Navy and other services have is the lack of expert maintainers. Oftentimes there are just not enough people who have the skills and experience to actually do all of the work that is backlogged. What if a cognitive assistant could actually help novice maintainers perform like experts, quickly? We’ve built some technology where a combination of augmented reality and artificial intelligence models basically helps augment what maintainers know how to do, the basic skills in maintenance, so they perform those skills at a much higher level than they have been trained for, because the technology is helping bridge the gap.

For example, concretely, what that means is that if they are wearing augmented reality glasses, they can see information in the context of the operation: highlights on certain parts of a landing gear, for example, the specific instructions that they have to follow, and key advice that other people have told the system before, to say, “Hey, you should stand on this side of the landing gear, because in the next step you’re going to be opening this valve, and if you’re on that other side, you’re going to get sprayed with stuff in your face, and you don’t want that to happen.” All of those things that experts would know, the system could know, the digital sidekick could know, and basically preempt any future problem by delivering the right information at the right time in the right context to a more novice maintainer. Those are the types of things we do in all of the domains I’ve mentioned.
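A minimal sketch of the “right information at the right time in the right context” idea Sylvain describes, assuming expert tips are stored against a task and step and looked up as the maintainer progresses; the task names, steps, and tips are illustrative assumptions, not the actual fielded system.

```python
# Minimal sketch (hypothetical, not Aptima's actual implementation) of a digital
# sidekick surfacing captured expert advice in context, e.g. warning a novice
# maintainer before a step where body position matters.

from dataclasses import dataclass

@dataclass
class Tip:
    task: str          # e.g. "landing_gear_service"
    step: int          # step at which the tip should appear
    text: str          # expert advice captured from prior maintainers

TIP_STORE = [
    Tip("landing_gear_service", 4,
        "Stand on the inboard side: step 5 opens a valve that can spray fluid."),
    Tip("landing_gear_service", 7,
        "Torque to spec here; over-tightening is a common novice error."),
]

def tips_for(task: str, step: int, store=TIP_STORE) -> list[str]:
    """Return the expert tips relevant to the current task and step,
    so the AR display can show them before the maintainer acts."""
    return [t.text for t in store if t.task == task and t.step == step]

if __name__ == "__main__":
    # Maintainer reaches step 4 of the landing gear procedure
    for tip in tips_for("landing_gear_service", 4):
        print("SIDEKICK:", tip)
```

A real sidekick would presumably infer the current task and step from sensed context (object recognition, work orders, voice) rather than an explicit step counter, but the lookup-by-context pattern is the same.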

Daniel Serfaty: That’s interesting, both your answers to my question, because on one hand you are arguing that through technology we can increase the level of expertise at which a maintainer, a commander, a surgeon can perform, which is a pretty daring proposition if you think about it, because sometimes it takes 20 years to develop that expertise. In a sense, you’re talking about accelerating that with a piece of AI, a digital sidekick. But are you ever tempted, rather than to augment, to replace? In a sense, to say, “Why do I need the digital sidekick? Maybe the digital sidekick can be the main actor.” Valerie, are there domains in which we say, “Oh, the heck with it, it’s too complicated for a human. Let me just invent a new device that can do that human’s job”?

Valerie Champagne: My perspective is that the technology is there to support the human, and not the other way around. I think there are certainly types of tasks that could be delegated to a machine to perform. But in that delegation, just like with people you work with, you delegate a task, but you still follow up. You’re still cognizant of what’s going on and making sure it’s being done the way you want it to be done. And so the human, in my opinion, is always at least on the loop, and at times maybe in the loop. There are really three reasons that I jotted down related to this. First, the AI is there to support the human, not the other way around, which is what I just said. And one of the examples I was thinking of was from this weekend, Memorial Day weekend.

I don’t know if you guys traveled, but I traveled to Maine to go up to our lake house. There are a lot of tolls going from where I live up to Maine. It used to be in the past that you’d spend hours upon hours waiting to get through all of these tolls; there are like four or five of them. And now with the E‑ZPass system, and this is just a simple example, they have those overhead cameras and you can just zip right through, so it’s a huge time saver. That’s simple AI, but it really helps to save time for the human [inaudible 00:15:16] AI as a force multiplier in this case.

The second reason would be the human is the ultimate decision maker and I think has qualities that at least for now in AI don’t exist. Things like intuition, judgment, creativity. And so for that reason, you wouldn’t want to take the human out of the equation. And then finally, I do believe that there are some things that just can’t be delegated to AI. These are things where the stakes are very high. For example, the decision to strike a dynamic target. AI can certainly support bringing speed to decision and assisting the human in identifying options that they may not have thought of. But ultimately, the decision to strike requires at the very least a human in the loop.

Daniel Serfaty: I’m going to play devil’s advocate, and I want to hear Sylvain’s answer to this challenge too. Playing devil’s advocate with your first example, of you driving to Maine and not having to stop, saving time, basically adding a few hours to your hard-earned vacation: what happened to that toll worker? He or she doesn’t have a job anymore. This is an example actually of elimination rather than augmentation. I know it made the life of the user better, but is there a trade-off there, in a sense? Just a question to ponder. We’ll come back to it, because many people are predicting gloom and doom, that this is it, the robots are coming to take our jobs, and on and on.

I think you gave two very good examples: one in which the consequences of a decision are so important that you’ve got to keep a human commander on the loop, because at the end of the day the responsibility is there; and another in which we found a technology that totally replaced what used to be a decent-paying job for some. And so, on that continuum, we should talk about this notion of replacement versus augmentation. But I’ll plant the seed; we’ll come back to it later. Sylva, on my question, are you ever tempted in your many projects to just say, “Hey, this is one in which we need to replace that human operator”?

Sylvain Bruni: No, never, because as a human systems engineer, if I were to say that, I would be out of a job. No, more seriously, I would completely support what Valerie said about certain aspects of this… particularly in critical environments where the human is absolutely needed and where technology is nowhere near where it needs to be to be thinking about replacing the human. Valerie mentioned creativity and intuition and judgment. I would add empathy in certain kinds of environments like healthcare. This is a critical aspect of the human contribution that AI will not replace anytime soon.

You also mentioned in the loop and on the loop as the types of relationship the human has with the system. I would add the perspective of the "with the loop" environment, where there are multiple types of loops that exist in the system and the human needs to understand how those work in relationship to one another, things that currently AI models or the types of technology we are devising cannot really do.

Even with the exponential availability and capabilities of AI and automation, there are still those types of roles and responsibilities that the human needs to have, because we can't do it otherwise. If I go back to the example of the maintainer, we don't have robots that are good enough to actually change the switches and the valves and the little pieces in the types of environments where we need them to be. There is a degree of nuance and finesse that the human can bring. And by the way, this is both physical and cognitive: the physical finesse of having your fingers in the engine moving things, but also cognitively understanding the shades of gray in complex environments, which is really critical, going back to the word judgment that Val mentioned.

I think for now, Serf, you're right, there is an entire discourse and argument to be had about the displacement and replacement of jobs. I'm a firm believer that it's not a net negative, that on the contrary, advancing technologies are a net positive in terms of job creation. But you're right, those are different types of jobs for different types of purposes. We always go back to the same quote from Ford: if we had asked people a long time ago what they wanted for transportation, they would have said faster horses. Now we have the car, which did eliminate not only the job of the horse but also, when you had carriages, the job of the carriage driver. But at the same time, there are tons more jobs that were created in the manufacturing world, in the maintenance world, et cetera.

Daniel Serfaty: Now, these are all very wise remarks. Let's dig a little deeper into the core of today's topic, which is really the intelligent cognitive assistant, very loaded words, each one of them. Humans have been interacting with machines for a while, from the basic manipulation of information through windows or through clicks or through the mouse. And so human-computer interaction has been around. What is new? Is there something qualitatively new here in this notion of an intelligent cognitive assistant that is here to help you do your work? Can you unpack that for our audience and tell us what is novel here? Valerie, earlier you mentioned artificial intelligence and explainability. These are, again, very loaded words. Can you just tell our audience whether you think it's just a continuation of regular human-computer interaction design, or whether there is something that is fundamentally novel?

Valerie Champagne: This one is a little tougher for me because I think we are more in the technical space here. But I really think what's novel about the direction we're going is that with the idea of a cognitive assistant, we're not delivering just a black box to the end user; we're delivering a capability that will interact with the operator and serve as a force multiplier for that operator. One of the things Sylvain said that really resonated with me was the idea of taking a novice person. With that cognitive assistant, they're able to get smarter. They're able to be more productive, and that's the idea of being a force multiplier.

A case in point: when I was doing all-source analysis, we had to build these things, and we had to go through what's called message traffic. There were keyword searches in Boolean logic that were very difficult to formulate yourself. And so what would happen is that a senior mentor would always just cut and paste his keyword search over to the new guy so that they would be able to get the right message traffic to do their job. You could imagine that a cognitive assistant could be that senior mentor to a new person, be it a novice or someone new to the job, because in the intelligence field, the second you arrive in theater, you're the expert, even if you've never actually worked that country. It can be a little scary, and having that cognitive assistant would be so helpful.
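[For technically minded listeners, here is a toy illustration of the mentoring pattern Valerie describes, not any real intelligence tool: a small shared library of vetted Boolean queries that a cognitive assistant could offer to a newcomer instead of the senior analyst cutting and pasting them. All names and the query strings are hypothetical.]

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SavedQuery:
    topic: str
    author: str
    boolean_expression: str  # e.g., '("org A" OR "alias B") AND shipment NOT rumor'

class QueryMentor:
    """A vetted query library a cognitive assistant could draw on for new analysts."""

    def __init__(self) -> None:
        self.queries: List[SavedQuery] = []

    def contribute(self, query: SavedQuery) -> None:
        # Senior analysts add queries that have proven useful.
        self.queries.append(query)

    def suggest(self, topic: str) -> List[str]:
        # Return expressions whose topic matches what the newcomer is working on.
        return [q.boolean_expression for q in self.queries if topic.lower() in q.topic.lower()]

mentor = QueryMentor()
mentor.contribute(SavedQuery("country X trafficking", "senior_analyst",
                             '("group A" OR "group B") AND (route OR shipment)'))
print(mentor.suggest("country X"))
```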

Daniel Serfaty: Basically, this notion of helping you when you need it most, and knowing when to help you, is really key to the intelligent part of the cognitive assistant. There's a certain degree here, I sense, of the other side taking some initiative in order to help you. Sylvain, can you expand on that from your perspective? Again, what is new? Valerie made a very good point here. Are there other things our audience needs to know, that we are witnessing some kind of revolution in the way we think of technology to help us?

Sylvain Bruni: I would say so. Traditional HCI design looks at interaction affordances, for example, graphical user interfaces, or tactile interactions, or audio or voice interactions, and sometimes even smell or taste interactions. But all of those have some form of a physical embodiment, the way the interaction between the human and the machine happens. To me, cognitive assistants go a step beyond. I'm going to use a big made-up word here: I consider them to be at the cognosomatic level. That means that they produce interactions that are at the cognitive level, so the cogno, and at the physical totality of the human user, that's the somatic part. They account for both what the human can see, hear, touch, et cetera, but also what they think, what's in the brain of the human in that context, what they want, where they want to go, what their goals and objectives are and how they go about them.

If you think, for example, about Siri or Alexa, those are called assistants. But to me, those are not cognitive assistants, because if you ask them, for example, what time it is, they will both respond very accurately, or if you set up an alarm in advance, they'll be reactive and ping you when you have requested them to alert you. If you type in a search, for example, in their visual interface, they will give you a series of answers, and oftentimes pretty good answers. They're getting better and better. But none of that actually touches the cognitive level. They have no clue why I am asking for a reminder, or why I am asking certain kinds of questions.

The opportunity lost here is that Siri and Alexa could actually provide me better answers, or very different kinds of answers and support, if they knew the reason I was asking those questions. When Val talked about force multipliers, Siri and Alexa could multiply my impact in the world by actually giving me better support, maybe slightly different from what I've asked for, but understanding and contextualizing what I've asked for, for better outcomes. In that sense, the human-computer interaction goes beyond the traditional design, which is quite transactional in nature, to focusing, as you said, on the context and the totality of where we are and where we want to go. Does that make sense?

Daniel Serfaty: It makes sense. It also, I'm sure, sounds a little bit like science fiction for our audience, which is good, because we're getting paid to work on the science fiction. But what does the Siri or Alexa of the future need to know about Sylvain in order to understand the why, to answer why he is asking that question, or even beyond that, to provide you with information before you ask for it, because it understands that you may need that information today? What do they need to know about you?

Sylvain Bruni: That’s a great question. And really, that is at the heart of the research we were doing and the prototype development we generate to identify what are those pieces of information that need to be in that cognitive assistance so it is able to provide that kind of advanced argumentative support. So far, what we are really focusing is context. In a wide encompassing definition of what we mean by context, I can summarize it usually into three buckets. Number one is what is the user trying to accomplish? What are the goals, the objectives, what are the end state? What do they consider to be success for them in the situation they are currently at? Not in a generic way, but for the specific situation.

Bucket number two is more about the processes, the missions, the methods that they want to employ to reach those goals. Think about your own work in everyday life: you always have certain ways of doing things. What are those? How do we know that this is what we're using to accomplish those goals? If the cognitive assistant can understand that, it can provide more granular support at every step of the process, every step of the way, getting to those objectives.

And bucket number three is about the tools and capabilities to accomplish the processes, and what the contributions or the impacts of those tools and processes and capabilities are in reaching the goals. Think of it as the various levers and knobs and things you can parameterize to put your tools and your capabilities actually in the service of reaching a goal. Once a cognitive assistant has an understanding, even a very basic one, of those three types of things, then we can start actually building the AI, the models, the advanced interfaces, where the system will be able to support you at a very precise and helpful level.
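[To make Sylvain's three context buckets a bit more concrete, here is a minimal sketch of how that kind of context might be represented in software. The class and method names (Goal, Process, Tool, UserContext) are hypothetical illustrations, not the actual models used in the research discussed here.]

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Goal:
    """Bucket 1: what the user is trying to accomplish in this specific situation."""
    description: str
    success_criteria: List[str]

@dataclass
class Process:
    """Bucket 2: the methods and workflows the user employs to reach the goals."""
    name: str
    steps: List[str]

@dataclass
class Tool:
    """Bucket 3: capabilities and the 'levers and knobs' that can be parameterized."""
    name: str
    parameters: Dict[str, str]

@dataclass
class UserContext:
    """The combined picture a cognitive assistant would reason over."""
    goals: List[Goal] = field(default_factory=list)
    processes: List[Process] = field(default_factory=list)
    tools: List[Tool] = field(default_factory=list)

    def next_support_opportunity(self) -> str:
        # Trivial placeholder logic; a real assistant would reason over all three buckets.
        if not self.goals:
            return "Ask the user what success looks like right now."
        if self.processes and self.processes[0].steps:
            return f"Offer help with the next step: {self.processes[0].steps[0]}"
        return f"Clarify how the user plans to reach: {self.goals[0].description}"

# Usage sketch with made-up maintenance content:
context = UserContext(
    goals=[Goal("Restore aircraft to mission-capable status", ["landing gear passes inspection"])],
    processes=[Process("Landing gear repair", ["depressurize system", "replace valve", "re-inspect"])],
)
print(context.next_support_opportunity())  # -> "Offer help with the next step: depressurize system"
```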

Valerie Champagne: I just felt like commenting: I think we need to simplify the cognitive assistant so that we can actually get some technology out there to the warfighter that will provide the support, maybe through an incremental development process where, instead of trying to get to that full understanding and reasoning, we just get a cognitive assistant who can help me with some of my mundane tasks. I can think of a time when we were getting ready to brief a general officer, and I'm sitting there trying to plot ranges of ISR capabilities and figure out placement of orbits and tracks instead of focusing on the message that we needed to provide the general.

And I remember thinking when I was sitting there, "Why can't automation do this for me?" If that cognitive assistant could see me do this task three or four times, why can't it then pop in and say, "Hey, let me take this task over for you to free you up so you can go think about something a little bit more important"? I just think if we could start there, maybe it's not quite as cognitive as what Sylvain is talking about, but it's certainly extremely helpful to the end user who's in the military.

Daniel Serfaty: Thank you for that example and that clarification. I think it's okay for the audience to understand that this continuum of sophistication regarding those cognitive assistants is there. Some of it is really researchy, in the sense that we are still exploring, and some of it is actually ready for prime time. We'll be back in just a moment, stick around. Hello, MINDWORKS listeners. This is Daniel Serfaty. Do you love MINDWORKS but don't have time to listen to an entire episode? Then we have a solution for you: MINDWORKS Minis, curated segments from the MINDWORKS podcast condensed to under 15 minutes each and designed to work with your busy schedule. You'll find the Minis, along with full-length episodes, under MINDWORKS on Apple, Spotify, Buzzsprout, or wherever you get your podcasts.

But Valerie, I actually have the mirror image of the question I asked earlier. We asked what a cognitive assistant needs to know about us to perform all these wonderful things we want it to perform, that anticipation and that depth of understanding of my needs as a human, as a user. What about the reciprocal question? In order for us to use those cognitive assistants, or maybe not "use", maybe they would be offended by that verb, but to collaborate with those cognitive assistants the best, what do we need to know about them? You talked earlier about the ability of those cognitive assistants to explain themselves; in terms of AI, you used the term explainability, I believe. Tell us a little bit about how that works. Does a cognitive assistant need to explain what it does in order for the human to use it better?

Valerie Champagne: On AI explainability, I'll just talk briefly. We had a project we were working on with the Air Force Research Lab related to distributed operations, and as part of that project, we developed some AI explainability capabilities. What happened on the project was you had distributed nodes that would bid on tasks, to figure out the trade-offs of who could do what for resource-task pairing. Some of these nodes were connected with comms and some weren't. And so as a user, you could get the printout of, "Okay, here's your allocation, here's your resource-to-task pairing," basically the answer. But as a user, if I want to go in and understand, "Well, why did you pick this resource instead of that resource?", there needed to be this idea of AI explainability.

And so we developed a means to drill down, and there was actual text that would say, "These are the nodes I talked to, and this one couldn't do it for this reason, this one couldn't do it for this reason. So we went with this node's solution." As a user, that was really important, to be able to understand how the machine came to its answer. Now, that's great if you have just a single thing that you're looking at. I think the real difficulty, though, is when you try to scale. And so for the cognitive assistant to be able to render information in a way that highlights where there may be problems, or highlights where I might want to drill down further into that actual text, would be really beneficial, I think.
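[As an illustration only, and not the actual AFRL system Valerie describes, here is a minimal Python sketch of the kind of explanation trace she is talking about: each candidate node either bids on a task or records why it could not, and the final allocation carries that record so an operator can drill down. All names and the cost model are hypothetical.]

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    has_comms: bool
    capacity: int

@dataclass
class Bid:
    node: str
    accepted: bool
    reason: str
    cost: Optional[float] = None

def allocate_with_explanation(task_load: int, nodes: List[Node]) -> dict:
    """Pick the tightest-fitting capable node and keep a human-readable trace of why."""
    bids: List[Bid] = []
    for node in nodes:
        if not node.has_comms:
            bids.append(Bid(node.name, False, "no comms link at allocation time"))
        elif node.capacity < task_load:
            bids.append(Bid(node.name, False, f"capacity {node.capacity} < required {task_load}"))
        else:
            # Toy cost model: leftover capacity (lower means a tighter, cheaper fit).
            bids.append(Bid(node.name, True, "capable and reachable", cost=node.capacity - task_load))

    viable = [b for b in bids if b.accepted]
    chosen = min(viable, key=lambda b: b.cost) if viable else None
    return {
        "assigned_to": chosen.node if chosen else None,
        "explanation": [
            f"{b.node}: {'selected' if chosen and b.node == chosen.node else b.reason}"
            for b in bids
        ],
    }

# Example drill-down an operator might see:
result = allocate_with_explanation(5, [Node("A", False, 9), Node("B", True, 3), Node("C", True, 8)])
print(result["assigned_to"])      # -> C
print(result["explanation"])      # A: no comms..., B: capacity 3 < required 5, C: selected
```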

I think the idea of AI explainability is building trust in the system. As an operator, I want to be able to basically test my cognitive assistant to make sure it's doing what I intended it to do, or maybe it's doing something better, but obviously I want to make sure it's not doing something wrong, because that's going to create a problem for me. I look at the idea of building trust in the system as essential. It comes from AI explainability, and it comes from three basic characteristics that the cognitive assistant needs to bring to my job.

It needs to be timeless to my workflow: I don't want to be jarred every time there's an update that makes things take longer for me. The cognitive assistant has to fit into my workflow. If I have to pull out a checklist for the buttonology every time I have to use this cognitive assistant, I don't want it; it's not intuitive. The cognitive assistant has to be intuitive to me. And if the cognitive assistant makes work for me, then I definitely don't want it. The cognitive assistant has to be productive.

A case in point, and this frustrates me to this day, happened probably 20 years ago. I was working at DIA, the Defense Intelligence Agency. They gave us a new link analysis tool, and it was supposed to make our job so much easier, identifying what the vulnerabilities were in these… I was a drug analyst working these drug trafficking organizations. And so we would spend hours inputting the relationships, the nodes and whatnot, to be able to get the software to work. And ultimately, it would dump our data. All of that time spent entering the data was lost, and that was really painful and nonproductive. For me, I didn't want anything to do with that software. And so for a cognitive assistant to be valuable, I think it has to have those three traits: timeless in my workflow, intuitive, and productive. That spells out TIP; I call it the TIP rule.

Daniel Serfaty: Thank you. This is a great tutorial on understanding what it really takes to engineer those systems so that they are useful. I'm reminded of an old human-machine interaction design acronym, HABA-MABA. Have you ever heard that one? About 20-plus years ago, the principle was "humans are best at", that's HABA, and "machines are best at", that's MABA. This notion that if we can partition the world into two categories, things that machines are good at and things that humans are good at, we can therefore design that world and everybody will be happy.

Obviously, it didn’t happen because… and this is my question to you, is there something else when we design a cognitive assistant today, an intelligent cognitive assistant very much along the lines of what I very just described that it’s not just enough to have a human expert in that and a machine expert in that, we also have to engineer, I’m going to say the word, the team of the human and the assistant together? There is something that needs to be engineered in order for the system to work better. Sylva, what do you think?

Sylvain Bruni: I’m glad you’re bringing this up and I’m having flashback to grad school about HABA-MABA and how this [crosstalk 00:37:35]-

Daniel Serfaty: I’m sorry about causing you that kind of thing. Yes.

Sylvain Bruni: No, but to me, it's a very basic dichotomy of task allocation which served its purpose many, many years ago. But it's honestly very outdated, both with respect to what we know now and with respect to what is available in human-automation collaboration and in technologies such as AI, advanced interfaces, and things like that. And unfortunately, I have to say that this pops up more than we would think in the research and in what people currently do nowadays. That's really to my dismay, because I think it should just be abandoned, and we need to move forward following a very simple principle, which is that the whole is greater than the sum of its parts. We use that in everyday life for many things. It applies here just as well.

And from my perspective, moving beyond HABA-MABA is looking at transitioning from task allocation to role and potentially responsibilities allocation, so a higher level of abstraction in terms of the work being performed. What we are working on and what the field in general is moving towards is this dynamic meshing of who does what when based on the context and the needs that the human has. And when I say who here, it can be one human, multiple humans, it can be one agent, multiple agents. And by agent, that can be algorithms, automated things, automated components in a larger system, it can be robots, things like that.

And to me, it’s important to think more in terms of the roles and the responsibility, which has ties to the outcomes, the products of what we want, and then figure out what the technology and right allocation of which parts of a task are done by which member of the team gives automatically a lot better performance and better system design in general thinking about the acceptability down the road and as you said, Val, the explainability. That type of granularity in how things are allocated enables the better explainability and transparency in the system.

Daniel Serfaty: That’s good, Sylva. Thank you. I think probably in the mind of many members of our audience, those of us old enough to remember that, people are thinking as you’re talking about trust and you’re talking about reliability, they remember that little cartoon character in Microsoft that was on your screen that looked like a paper clip. And that was perhaps an early naive attempt of what an intelligent assistant in terms of organizing your actions on the screen was supposed to do. I’m thinking about the keyword that you pronounced, Valerie, the notion of trust that Clippy was trying to infer from your actions what you needed and what’s wrong at least 50% of the time, which prompted most people because it says, “If Daniel wanted that and Daniel moved that window, et cetera, that means that he intends to do that.” Those connections were not always right.

In fact, they were wrong most of the time. And that became very annoying. It's like that overeager assistant that wants to help but cannot help you at all. People rejected it as a whole. I'm sure it had some good features. But it seems to me that an assistant that is designed to help you but actually goes the other way destroys the level of trust people have, in Clippy and probably in others for years to come. How do we build that trust? How do we build the trust into those systems? Do they need to reach a certain level of reliability that perhaps is beyond the reach of current technology, or not? I want to tackle that notion of building trust in technology.

Valerie Champagne: I’d too like to make a comment about the problem of the Clippy and giving basically false alerts. From an operator perspective, if you receive too many of those alerts, they do become annoying Chicken Little or The Boy Who Cried Wolf. But the real danger here is then you get de-sensitization. And so instead of heeding the warning and getting out and doing your checklist if the adversary is scrambling their aircraft, maybe you do nothing because you’re like, “Oh, that’s happened 50 times and it’s never been accurate. My assistant is wrong here.” That’s one issue.

Another issue is that the alerts can become [inaudible 00:42:24]. And so it can decrease critical thinking, where you don't look beyond the obvious. If you think of, in some part, Pearl Harbor or 9/11, we had some of that going on. And so with cognitive assistants, it's super important that you do build that trust. Ideally for me, if I was going to have a cognitive assistant, I would want to be able to test it. And ideally, the assistant would co-evolve with me, so I would want to be able to test it as we evolve. You don't want to have to go back through the whole testing cycle; it has to be something where I'm able to generate the tasks myself and execute them on the cognitive assistant to see how it performs. That would be my ideal world.

Daniel Serfaty: Thank you very much, Valerie. Sylvain, do you want to chime in on that? And let me add a level of complexity to my own question, based on what Valerie just said. That notion of co-evolution, of a cognitive assistant that continues to learn after the designers have been done with it, for perhaps months or years, by observing what you do, making inferences, and acting accordingly, means that you have in front of you a piece of technology that will behave differently tomorrow than it behaved yesterday, because it learned something in the past 48 hours about you and your habits. How do you trust such a thing? How do you build that trust? Valerie suggests the ability to continuously test and continually understand the direction of that co-evolution. Any other ideas along those lines before we take a break?

Sylvain Bruni: I agree with the suggestion Valerie made. The testing, to me, is just like the training or co-training that humans would do with a team of humans: you are in certain situations, you rehearse things, you work scenarios, you explore, you see how your teammates react, you course-correct as needed. That type of principle, I agree, is probably the easiest for the human to understand, but probably also for the AI and the system design side to account for. And creating those opportunities is certainly a great method from an engineering perspective to build trust and enable the trust to grow and the relationship to grow and get better over time.

I will say that trust, transparency, and explainability, though, have really become buzzwords in the last couple of years, so I very much like the way Valerie has been unpacking what exactly they mean in terms of the engineering aspects that we need to focus on. And I'd add a couple of things to that, going back to one word that was mentioned earlier, which was fit: the fit in the conversation, the back and forth between the two, the intent fit and the role fit between the users and the cognitive assistants. Those, I think, have dimensions or constraints or assumptions that really need to be thought about to enable that trust and a good working relationship to happen.

In some way, it reminds me of the origin of all of this, which is Licklider's symbiosis between humans and machines. I think this is about encompassing the engineering dimensions that enable that type of symbiosis. To address your question about methods to co-learn and co-evolve, apart from training, I would say it's about providing the points of leverage within the system itself so that every operational use creates some form of learning for the human and the system. You mentioned earlier tool versus teammate; to me, this is where it goes beyond the debate of tool versus teammate. There needs to be key learning happening at every interaction. And when you design a system, you have to put that in there.

If I return to the maintenance example, when the novice maintainer is using the cognitive assistant as they are repairing, let's say, landing gear, there needs to be learning on the human side. The human should learn something that they can reuse somewhere else, at another time in the future, with or without the cognitive assistant. But in reverse, the cognitive assistant should also understand what the human is doing and why they're doing it, because it might modify the way the AI is actually modeling the environment, the task, the system over which the human interacts.

The human may be very creative and find another way to replace that little valve number three over there, which is not in the original guidance the cognitive assistant may have learned from, or may not have been something any of the previous human experts demonstrated, so the cognitive assistant would never have seen it before. That new person who has this new way of doing things is injecting new data that can be helpful to the AI model to increase performance over time. All of that needs to be designed and engineered when the system gets created, so that the evolution benefits from operational use as well.
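[A minimal sketch of the co-learning loop Sylvain describes, under the assumption of a simple procedure library (all class and function names hypothetical): when the assistant observes a repair sequence it has never seen for a given component, it stores it as a candidate procedure and flags it for review and model updating rather than discarding it.]

```python
from collections import defaultdict
from typing import Dict, List, Tuple

class ProcedureLibrary:
    """Known repair procedures plus candidates observed from human operators."""

    def __init__(self) -> None:
        self.known: Dict[str, List[Tuple[str, ...]]] = defaultdict(list)
        self.candidates: Dict[str, List[Tuple[str, ...]]] = defaultdict(list)

    def add_known(self, component: str, steps: List[str]) -> None:
        # Procedures the assistant was originally trained or authored with.
        self.known[component].append(tuple(steps))

    def observe(self, component: str, steps: List[str]) -> str:
        """Called as the assistant watches a maintainer work on a component."""
        observed = tuple(steps)
        if observed in self.known[component]:
            return "matches an existing procedure"
        if observed not in self.candidates[component]:
            self.candidates[component].append(observed)
        return "novel procedure captured for review and future model updates"

# Usage sketch:
library = ProcedureLibrary()
library.add_known("valve_3", ["depressurize", "unbolt housing", "swap valve", "re-torque"])
print(library.observe("valve_3", ["depressurize", "swap valve in place", "re-torque"]))
# -> "novel procedure captured for review and future model updates"
```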

Daniel Serfaty: Thank you, Sylvain and Valerie, for these explanations. This is complex. It seems that until we get an intelligent cognitive assistant that can collaborate with a human operator the way a true human intelligent assistant would collaborate with the user, we have to pay special attention to building trust through these learning moments, these co-evolution moments, because without them, we might throw the baby out with the bathwater. Until we reach a certain level of smoothness, for lack of a better term, in that collaboration, we have to pay particular attention as designers to injecting opportunities for collaboration into the design of the system.

We just talked about the complexity, but also the opportunities, of designing those intelligent cognitive assistants, perhaps in a future with some kind of proliferation of them around us, so that it is very natural for us to work with different intelligent cognitive assistants for different parts of what we do. In what particular domains of work, or play, actually, do you believe cognitive assistants, especially intelligent cognitive assistants, will have the strongest impact? Is that at home, in defense, in cyber, in healthcare, in gaming? You're welcome to give examples of what's happening today already, where you've already seen an impact, but also what's coming up. What are the markets, the domains of work, that are ready to welcome those?

Valerie Champagne: I really think there is a high payoff in the defense industry for these cognitive assistants. Like we've said previously, they have an opportunity to be a real force multiplier. I think the current status depends on the field you're in. Sylvain, you've been talking about the work you've been doing in the maintenance field. I know for intelligence, we had prototypes, like the butler we worked on together, but I don't think they're heavily deployed to the field just yet, from what I have seen in recent times. But I think there's real opportunity there.

My sister is a nurse. Before this call, I went ahead and talked to her: "What do you think about a cognitive assistant?" I was asking her about medication. She said that right now, that's already automated for them; narcotics have to be handled separately, but basic medications are automatically distributed, and basically that's an assistant. It gives you the pills to provide to the patient and it automatically inventories them. She feels that they have complete trust in it. It's super great, she loves it, and it's very much integrated into their workflow. She's a nurse on the floor.

Daniel Serfaty: You’re talking about defense healthcare as probably right for that kind of accepting that kind of innovation that’s coming out of the labs right now and maybe it has not been fully fielded. Is that because those domains are particularly complex, Sylva, have particularly variables, or because those domains have grave consequences for errors?

Sylvain Bruni: I would say it's both. To me, defense and healthcare, and I would also add cyber as another domain or market, are where I would see intelligent cognitive assistants having a major role to play. I think that's for a couple of reasons. In those domains, humans need additional support or cognitive augmentation to perform at their best and beyond, and to avoid those types of critical outcomes, namely death. In those domains, if you make a mistake, a human might die, or the wrong human might die in the case of military operations. And that is really a cost that we don't want to bear. Therefore, considering the complexity, the dynamicity, the uncertainty of the environments which apply to defense, cyber, and healthcare, the speed, the repeatability, the potential accuracy that the automation, the AI, can provide, at a speed much greater than the human could, is a necessity to embed.

The question is what type of mechanisms we want to embed, and how. I think that's where the crux is, and where it's getting really difficult to bridge the gap between the research and the actual deployment of a developed version of a cognitive assistant, because you need to select. Like you said earlier, Val, we have those grand ideas about perfect cognitive assistants and what they need and what they could do. But in the reality of engineering and deploying the systems, you need to focus narrowly on something that is accomplishable right away and demonstrate the value before you can increase that.

I will say that in those three domains of defense, healthcare, and cyber, we are witnessing a widening gap between the amount of data that's available and the human's ability to handle those data. It's only getting worse with the advent of 5G, the Internet of Things, edge computing. All of those new technologies basically multiply the amount of data that humans have to handle. And to me, that's how I identify which domains are ready for this kind of technology.

Daniel Serfaty: It’s interesting, so complexity certainly and mission criticality in a sense of the consequences of making certain decisions. How about economics of it, or even the social acceptance aspect? Like I wear my Fitbit or my smartwatch has become socially acceptable. They don’t augment me, they just measure what I do. But having an assistant on my desk… And actually, let me ask you, Valerie and Sylvain. Both of you are highly qualified experts, making decisions all day long about projects and about customers and about collaborations. Are your jobs going to be impacted by intelligent cognitive assistants? Can you imagine a day when you have a cognitive assistant next to you that helps you do your job, eating your own dog food?

Sylvain Bruni: I definitely do.

Daniel Serfaty: You do?

Sylvain Bruni: Yeah, eating our own dog food, I absolutely do. I would say maybe another characteristic of where cognitive assistants could take off as a technology is where there is a huge backlog of work and not enough humans to perform it. I see that in my own job, where I have so much work. I would want to clone myself, but since I can't, maybe an intelligent cognitive assistant can help. I always think of that in terms of two lines of work. There is the overload of work, I have a ton of things that I need to do, but then there is the overhead associated with the work. Sometimes, let's say, I need to write a proposal.

Well, writing a proposal is not just me taking pen and paper, or a text processing system, and typing the proposal. There is a lot of overhead to that. I need to fill out forms, I need to understand what the proposal is going to be about, what the customer wants, what kinds of capabilities we have to offer, all of those types of additional things that need to be done, but, interestingly, they don't provide specific value to the task of writing the proposal itself.

Along those two threads of overhead and overload, I could see an intelligent cognitive assistant helping me out. In proposal writing, filling out those forms for me, using the template, using the processes we currently have that are very well defined, why couldn't I have this cognitive assistant actually do all of those menial tasks so that I can focus, like Valerie mentioned earlier, on those parts that I really need to focus on? Same thing as in your example of briefing a general: you want to spend your cognitive abilities on what's going to make the difference and bring value to the general, not on selecting the right font and the right template and the right colors for the report you want to produce.

And that, to me, applies to almost everything I do: customer management, project management, and even the basic research of keeping up to speed with literature reviews or the content of a conference. In all of those types of things, there is a lot of the actual work I need to perform that could be automated, so my brain focuses only on what my brain can do.

Daniel Serfaty: In a sense, you're imagining a world in which you will have several cognitive assistants that specialize in different things. You'll have one of them who helps you write proposals in the way you just described, another who helps you manage your schedule, another that you can send to listen in on a conference that you don't have time to go to and that can summarize the conference or the talk for you.

Sylvain Bruni: Interestingly, I would say I would want one cognitive assistant to manage an army of other cognitive assistants doing those things, because there is an interplay between all of this. Remember when I was talking about the complexity of the context and what the context means: when I'm writing a proposal, I'm also thinking in the back of my head about the work I'm doing on this other project, and about that other customer the proposal isn't intended for, but who could potentially be interested in the work from this proposal. All of those things are so interrelated that I would want my cognitive assistant to be aware of absolutely everything, to be able to support me in a way that augments everything and is not just siloed. Does that make sense?

Daniel Serfaty: That makes sense. What about you, Valerie? If you had a dream intelligent cognitive assistant, what would it do for you?

Valerie Champagne: I agree with everything Sylvain said. That sounds awesome. I will just add that when I was an executive officer for a general officer for a brief period of time, one of my tasks for him was to come in every morning and review his email. I would get rid of all the stuff that wasn't a priority, and then I would highlight those items that he really needed to look at. I think about my own email; I don't know about you, but as soon as I get off this call, I'm going to have hundreds of… or at least 100 emails in there that I've got to weed through to figure out what's important and what isn't.

That’s a very simple example of how this cognitive assistant could really help us out. I think it’s probably doable now, where it can learn the things that are most important to me, rather than… I know you can input different rules and things into the email system and make it do it for you, but that also takes time and it’s not always intuitive. And so that cognitive assistant that can just make it happen, yeah, that’s what I want.

Daniel Serfaty: It’s interesting. You describe a world where that device, so that collection of devices or that hierarchy of devices, according to Sylvain, he wants to have all army helping him, has quite an intimate knowledge, is developing quite an intimate knowledge about you, about your data, your preferences, but also perhaps your weaknesses and even beyond that. In order to help you very well like a partner or life partner, there is that kind of increased intimacy of knowing about each other. That intimacy is really the result of a lot of data that that assistant is going to have about you, the user. Do we have some ethical issues with that? Are there ethical consideration in the design of those systems to protect those data almost and perhaps beyond the way we protect medical data today through HIPAA compliance or other ways? We’re talking about getting way inside our psyche at this point. How do we design? As engineers, how do we conceive of protecting those data?

Valerie Champagne: I have no idea about the design; I'll leave that to Sylvain. But I will say there is a feel of Big Brother with that cognitive assistant. I've seen some sci-fi movies where the cognitive assistant turns on you and does nefarious things. And so I think security… I mean, we've just seen recently, what, two cyber breaches in the last couple of weeks, one for our oil and one for a meat factory. Imagine if your personal assistant got hacked; that could be pretty scary. We definitely need to build in security, and then whatever else Sylvain says.

Daniel Serfaty: Yes.

Sylvain Bruni: No pressure.

Daniel Serfaty: In a sense, Valerie said this is entirely your responsibility to handle. Have you thought about that as you design those data streams, about ways to protect them? Because, as you said, if I want to know something about you, it would be very costly for me to spy on you, so to speak, but I could easily hack your assistant, your cognitive assistant, who knows almost everything about you.

Sylvain Bruni: This is a valid concern, just like with any technology that's going to be handling data generated by humans. I think there are two aspects to the problem. The first one, which you went into, is the cybersecurity, the integrity-of-the-system aspect of it. Both in the world of cybersecurity and in the healthcare world, there are a number of protocols and methods in engineering to design systems that counter that as much as possible. Obviously, 100% safety of access does not exist; there is always a risk. A secondary part within that realm of cybersecurity and data integrity is the way the data are stored or manipulated, adding layers of security.

Blockchain is one emerging way of ensuring that the data are better protected through distribution, but you could also imagine certain things about separation of data, a "two and two cannot be put together" kind of process. Anonymization and abstraction of information is another method we can think of for that aspect of data security. But there is another, bigger problem to me, which is what the assistant could do or reveal with the data it has beyond its current mandate of supporting the human. Sometimes that latent knowledge can yield opportunities for learning and betterment of the human, we've talked about that a little bit, using the gaps that may be identified by the cognitive assistant as an opportunity for learning.
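[For listeners wondering what "separation of data" and "anonymization" can look like in practice, here is a minimal, hypothetical sketch: personally identifying fields are replaced by a salted, one-way pseudonym, and identity data and behavioral data are stored in separate records so that neither alone reveals much. This is illustrative only, not a prescription for any particular system or compliance regime.]

```python
import hashlib
import secrets

SALT = secrets.token_hex(16)  # in a real system, managed by a key service, not hard-coded

def pseudonym(user_id: str) -> str:
    """Replace a direct identifier with a salted, one-way pseudonym."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:12]

def split_record(record: dict) -> tuple:
    """Separate identity fields from behavioral fields so 'two and two' can't be put together."""
    pid = pseudonym(record["user_id"])
    identity_store = {"pid": pid, "name": record["name"]}          # held by one service
    behavior_store = {"pid": pid, "queries": record["queries"]}    # held by another
    return identity_store, behavior_store

identity, behavior = split_record(
    {"user_id": "u-001", "name": "Alex Doe", "queries": ["landing gear torque spec"]}
)
print(identity)
print(behavior)
```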

That aspect, we were definitely thinking about, and intentionally trying to put in place mechanisms such that when a gap is revealed, it is not about saying, "Oh, you're bad and you suck at your job," but more, "Hey, here's an opportunity for improvement," and the cognitive assistant could trigger a task, an activity, something for the human to learn and bridge that gap. The problem beyond that is when AI becomes a little bit more intelligent and can do things that we can't necessarily anticipate just yet. I do not yet have a good answer for that, but a lot of other people are thinking about those types of issues right from the very beginning, because that's where it needs to be thought of, at the design level.

Currently, the gate is really in the interaction modalities. The cognitive assistants are built for a specific mandate with their interface. All of the latent knowledge that could be used for something else typically would not come out. But who knows? We could have a cognitive assistant say things that are very inappropriate in the language that it uses. That has happened. There are some methods to guard against that, but we're discovering what those types of problems may be as we implement and test those kinds of systems.

Daniel Serfaty: I’m sure Hollywood is going to have a field day with that. I’m waiting for the next big feature when the cognitive assistant goes wrong. It happened before in 2001, one of the first one.

Sylvain Bruni: Correct.

Daniel Serfaty: My last question for you, notwithstanding the science fiction aspect of it, and it's not so much fiction, we're really touching those assistants right now as we speak, and they're becoming more sophisticated, hopefully more helpful, but with the danger of also being nefarious. The question is: imagine a world 10 years from now, so I'll give you enough time to be right or to be wrong, and describe to me a day in the life of a worker, you can pick a doctor, or a nurse, or a command and control operator, or an officer, or a maintenance worker, who lives with the cognitive assistants of the future. How does it work? Can you make a prediction? Are they going to be everywhere? Are they going to be very localized in some specialties? Who wants to start with that wild prediction? Don't worry about predictions about the future; you cannot be wrong.

Sylvain Bruni: My prediction is that in the next few years, we're going to see incremental evolution of the types of assisting capabilities we have mentioned throughout the podcast, i.e., the Siri and Alexa types of things getting better, a bit more clever, having some basic understanding of the environment, and the conversation being more fluid and multimodal. I think that constant improvement is going to happen. However, further down the road, I would see a really big leap happening when data interoperability in various domains is a lot easier and faster, particularly in the consumer world.

I would imagine that in the future, those cognitive assistants will be everyday work companions that we cannot live without, just like the cell phone. Nowadays, we would not be able to survive the world without a cell phone. I think down the road, the same will be true for cognitive assistants, because they will have proven their value in removing all of the menial little things that are annoying every day, about data searches, about data understanding, about data overhead. I would really see that as being the way this concept gets into the hands of everyday people.

Before that, I think research is still needed, and those key critical environments like defense and healthcare will be the drivers of technology development, because those areas, where the cost of a mistake is so high, the demand on human brain power is so high, and resources are currently so limited, will have to have that type of tool or teammate. I don't want to reopen Pandora's box on this one, but that type of support to actually do the work that needs to be done. That's my two cents' prediction for the future.

Daniel Serfaty: Thank you. That’s very brave and exciting, actually, as a future. Valerie, you want to chime in on this one, 10 years from now?

Valerie Champagne: Sure. Mine is going to be a little more grim than Sylvain's, because I'm taking it from the standpoint of an air operations center. Between where the air operations center was technologically back in the early 2000s and where it is now, it has not advanced very far. And so when I look at the capabilities of a cognitive assistant, I just think that in 10 years, it will take a while for it, maybe not to be developed, but to be fielded and tested within the field and integrated fully into the workplace. In 10 years, I think you may get a cognitive assistant that does some of those rudimentary, mundane types of tasks that free up the operators so they have the time to really think. And then once they gain trust in that, I think you could see it leap. But I don't think that will happen in the next 10 years; I think the leap will happen after that.

Daniel Serfaty: Here you have it, audience: both an optimistic and a more cautious prediction about the next 10 years, where artificial intelligence and AI-powered devices, software or hardware, are going to be here to make our lives better, alleviate our workload, and help us make better, wiser decisions. Let's meet again in 10 years and discuss and see where we are at. The MINDWORKS podcast will go on for the rest of the century, so don't worry, we'll be here.

I’m going to ask you before we close advise. Many people in our audience are college students or maybe high school students thinking about going to college or graduate students, so people wanting to change a carrier and they might be fascinated by this new field when, in a sense, you’re fortunate to work on, Valerie and Sylvain, that look at the intersection of psychology and human performance and computer science and artificial intelligence and systems engineering and they are asking themselves, “How can I be like very like Valerie or Sylvain one day?” What’s your career advice for those young professionals or about to be young professionals? You have each a couple of minutes. Who wants to start?

Valerie Champagne: I’ll go first. My background is not computer science. I don’t code. I’m not even a cognitive science folk, although if I was going back for my bachelor’s, that’s what I would study. There is no doubt about it because I loved the work. And so what I would say to folks is I have a bachelor’s in German and I have a couple of masters related to the intelligence field that I got when I was in the military.

And so what I would say to folks is: if studying the sciences isn't directly your passion, you can still work in this space by, first of all, having passion for wanting to deliver relevant capabilities to the end user, and then gaining experience so that you can be a connector. That's how I look at myself. I don't develop, but I do know how to connect, and I also know a good thing when I see it, and I can help connect the developers to the people they need to talk to so that their product is better and, ideally, has a better chance of transition.

Daniel Serfaty: Passion and connection, that’s really your advice, Valerie. Thank you very much. Sylvain, what’s your advice here?

Sylvain Bruni: I have three pieces of advice. The first one from an academic and professional development point of view, I would encourage folks not to overly specialize in a specific field for two reasons. Those fields that you need to understand to do this kind of work, they’re very dynamic. They change all the time. The tools, the knowledge, the capabilities of what’s out there, everything just changes so fast. The technology we are using in 2021 is vastly different from what we were using even like three years ago.

My advice there is learn the basics in systems engineering, human factors, design, AI, software, a little bit of everything here and see how those connect with one another because then you will need the connection between those different fields to be able to work in the area of cognitive augmentation. Number two, it would be for folks to be curious and eager to learn and tackle things that are completely foreign to them. That’s mostly for the field of application.

I started being very passionate about space, little by little that moved to aviation, that moved to defense, that moved to healthcare. When I started, I had no clue about anything related to defense and healthcare. And now, that’s where my professional life is and continues to evolve. And being curious about those things you don’t know is really going to be an advantage to you because then you’re going to start asking the questions that you need to really understand to be able to build a technology that’s not going to fail at certain critical times.

And finally, going back to our previous discussion, I would very much encourage everyone to watch and read science fiction. This to me is my best source of information for the work I do, because, one, I can see what authors have been dreaming as the worst of the worst that could happen. We talked about that. What could go wrong with AI? Well, it turns out there is a huge creative community in the world that is thinking about that all the time and making movies and books and comics out of that. And so just for your own awareness and understanding what could go wrong, that can have an influence on your design work.

But also for the good side, not everything is apocalyptic, and so you have some good movies, you have some good books that will tell you a brighter future permitted by robots and AI and all of that. And those types of capabilities and features, they’re always something you want to aspire to in the work that you do in building them and delivering them to the world. I could go on and on and on about science fiction and how it’s actually useful for everyday engineering and design, but I will encourage people to take a look.

Daniel Serfaty: Here, you heard it, audience. Learn something new, be curious, read science fiction, be passionate and make connections. Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS podcast and tweet us @mindworkspodcst, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima, Inc. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.

 

Daniel Serfaty: Welcome to Mindworks. This is your host, Daniel Serfaty. COVID-19. This has been, and continues to be, the seminal event of our lives. In addition to the obvious threat to our health and safety, it has affected the way we work and the way we perform as humans in very profound ways. In this special two-part edition of Mindworks, we take a deep dive into this topic and share with you insights from experts on the ever-changing nature of work and the workplace.

What is it that we have learned from working through the COVID pandemic that we can take forward to help us reimagine work in the 21st century? My three guests are eminently qualified to discuss and enlighten us on this matter from their very different perspectives.

Karin Sharav-Zalkind is an Israeli-born designer and the founding principal and creative director of NoBox Studio. Her true passion lies in creating meaningful spaces that promote creative company cultures. She does it through the integration of all design forms, with which she creates spaces people love to work and live in. Karin uses her deep expertise in creating experiential interiors to help companies manifest their identities in the workplace. In 2017, Boston Magazine named her talk during Design Week Boston, called The No Name Office, one of the can't-miss events. Currently, Karin is documenting the state of the workplace around the Boston area, observing the effects of the pandemic on these spaces.

Mary Freiman graduated with a master's degree in Human Language Technologies from the Linguistics Department at the University of Arizona, where she also studied linguistics and philosophy. Her career has centered around research on how humans communicate with each other, as well as with technology, and how communication in both directions between humans and technology can support understanding of how to make humans and computers work effectively together. Mary is currently leading the Cognitive Augmentation Technology Capability at Aptima, where she continues to care about humans working well, and also about how technology can support their work and make it good.

My third guest is a returning guest to the Mindworks podcast. She appeared the first time earlier this year, when we talked about teams and distributed teams. Dr. Kara Orvis is a principal scientist and the Vice President of Research and Development at Aptima. As VP, she manages four divisions with more than 100 interdisciplinary personnel across 20 states. Kara herself has worked remotely for more than 10 years. She received her PhD in Industrial Organizational Psychology from George Mason University, and her dissertation was focused on distributed leadership and teamwork. She has published several articles on topics related to the nature of distributed work.

Here we are with very different perspectives, the language and cognitive one, the industrial-organizational one, the design and spatial one, to help us understand a little bit about what COVID-19 has done to our work, but also how it has affected the way we think about the future of work and the future of the workspace.

First, I would like my guests to introduce themselves. And I want to ask you specifically: what made you choose human work, in all its aspects, as a domain of interest? Mary, you're a cognitive scientist; you could have worked with language and babies. Kara, you could have worked giving very expensive advice as a management consultant somewhere in corporate America. And Karin, you could have also thought about designing high-rises and things like that. But you all focus on the human dimension of work, in different aspects. So let's start with you, Karin. Why did you choose this domain? What is particularly interesting there?

Karin Sharav-Zalkind: Well, my parents really wanted me to work in communications because I had really good human skills and I felt that after working in a newspaper for many years in Israel, this was not something that would interest me in the long run, and I wanted something deeper with more impact. And as I started design school, I was dabbling between architecture and design, and then I realized that architecture does not really interest me in the sense that it’s a lot about the space and not enough about the human.

What really drew me in, especially to interior design, was the idea that we're on the inside looking out, and how we feel as humans in these spaces. I think that's very important to remember, because a lot of spaces are very foreboding when you walk in, even walking into big office atriums and everything, and you're feeling very small and insignificant. I think the human experience is what makes the difference in this world between us and animals; that's what sparks our creativity, our connection, our passion. So I think that's what led me to that world.

Daniel Serfaty: Well, I’m certainly happy you’re with us today.

Karin Sharav-Zalkind: Thank you.

Daniel Serfaty: Mary, what about you?

Mary Freiman: I was initially drawn to linguistics in particular because I thought that I wanted to be an English major when I went to college, but then I took a few literature courses and found that they sucked the fun out of literature for me. And I read a book at my sister’s house that she just had laying around called Mother Tongue by Bill Bryson, and it was just such a cool way of looking at English in particular, but language, as having really specific patterns across time and across people, and I was completely fascinated by that.

I went back to school and changed my major to linguistics. It was probably my eighth major, so I’m not sure why that one stuck. But I’ve since college always been fascinated with how people use language as a tool in itself and what it reflects in how minds work, how when we are speaking and we are communicating with each other, that is a reflection of what we understand about the world and what we understand about our relationships with each other.

When we then insert technology into that ecosystem, it’s an odd fit, because technology doesn’t have that same context and understanding; what language is to technology is just not the same thing. And so I’ve always been really fascinated with that, and with trying to figure out how better to describe how language is used by humans and computers when communicating with each other, but also how to make that better, so that humans are getting what they want from technology, which is actually harder than you would think. It seems like we’re frequently disappointed in the skills of computers, especially when you think about talking to Siri or any other device that seems to be talking to you, and you quickly realize that you’re on a totally different page from what Siri has to offer you.

Daniel Serfaty: Kara. [crosstalk].

Kara Orvis: Yes, hi. Good afternoon.

Daniel Serfaty: You’re both a student and the practitioner of distributed work and complex work arrangements. So tell us a little bit how you got to that particular focus. What attracted you to this part of it?

Kara Orvis: I was a psychology major in undergrad at Ohio Wesleyan University, and I thought all psychologists had to be clinical psychologists, and I thought I wanted to work with kids. And I started to learn a little bit about the success rates and some of the difficulty of folks who were working with children, and I’m a pretty positive person, and I really wanted to make an impact and work in an environment that I found I could make an impact. So I thought maybe psychology is not for me. But then somebody had recommended I take this organizational behavior course that was offered in, I think, the economics department or something like that, and it was all about humans in the workplace.

I learned about these people called organizational psychologists and they studied motivation and engagement and things that I was really excited about. So I decided to go to grad school, and then when I was in grad school at [inaudible] University, this was the late nineties, so the internet was just starting, I was beginning to use it, and I was working on this really big virtual team across multiple academic institutions, and it was really, really hard to work with people over the internet and through some of these new collaborative technologies that were coming up and people were starting to use. So I got really interested in how it’s different working through technology than when you’re in a face-to-face team environment.

I also want to mention that I am part of a dual-career family. So my husband and I both work full time. I have been working from home, like you mentioned, Daniel, for over 10 years. I work from a home office. And I have two kids, a sixth and an eighth grader. And one thing that I found really interesting about the other folks on the line here, Mary and Karin, is that you are mothers too, and you are in dual-career families as well. So I think all of us can offer a personal perspective on what it’s been like working in the era of COVID, as well as on our interests and our careers.

Daniel Serfaty: That’s a nice segue into my next question. To what extent did your own individual professional work change during COVID? Can you provide some examples? I mean, as you said, the three of you are indeed parents, mothers, with kids at home, with husbands at home sometimes. How did that work, especially for a person like you, Kara, who has already been working from a distributed home office while your staff is actually all over the country? How did COVID specifically change that?

Kara Orvis: I can tell you the most startling difference, and it was hard. I mean, I’m not going to sugarcoat it. So as a woman who has a career and children, I’ve tried really hard and put a lot of time and effort into setting up my kids’ experiences so that they’re taken care of and they’re engaged in the right activities when they’re not with us. So I’ll stand in line at 3:00 in the morning to get them into the right summer camps. I’ll go to great lengths to make sure they’re taken care of, and that’s so when I’m at home, I have the environment in which I can focus and have my full attention on what I’m doing at my job.

Because of COVID, the kids were taken out of the classroom and put into the home environment 24/7, so I no longer had that carefully curated environment in which I could do my work during those 10 hours that I had to do my work. Now I was fixing lunches, or I was helping with homework, or helping with Zoom calls that were failing, and I had a husband who was also working from home as well. And so an environment that was super peaceful, where everyone would leave and I could go sit down in my office and do my work, all changed completely, and it was constant interruptions all the time.

Daniel Serfaty: Karin, how did it affect yours?

Karin Sharav-Zalkind: It was devastating, I think, for an office designer. And I was at this moment in time where my kids were old enough, I had come back from a “sabbatical” that I had taken for myself because we were building our own home, and I was like, of course I’m going to be the designer of my house, and I’m this great office designer, and two or three companies that I designed for had really succeeded and exploded, and it was a real badge of honor, because I had been piecemealing work ever since I started on my own, and I’ve always worked from home.

There were many, many reasons for that. When I started off, there was the big 2008 crash and nobody was building homes, so I pivoted to office design and I loved it. And then, as I was re-emerging back onto the market and starting again, COVID hit. And the way that you were describing it, Kara, it was just that I never got that moment, my headspace, which is so critical.

At some point, I remember talking to one of the principals in the school and saying, “Look, I’m suffering from ADHD just from managing their Zooms. Either put someone there for 30 minutes or just get them off. I don’t care what it is, but I can’t manage 15 minute increments of Zooms, especially with a first grader that can’t read.” So that was very difficult.

Then there was nurturing a high schooler who was in public school, and the public school completely dropped the ball all last year, and there was no program for them, and a middle schooler who would just get the Zoom rebound, if you’ve ever experienced that feeling of being all Zoomed out, but you’re 11 and you can’t see anybody. And it was really hard.

But I think for me personally, it was very, very devastating. It was actually heartbreaking. Because it was like, I’m fed up. I’m an immigrant, I’m a mother of three kids, I’ve just done this thing and I can’t do this anymore. But I think from those moments, you dig deeper and you redefine for yourself, what does it actually mean to be a designer today? Does it mean that I have to be in an actual physical space? I mean, we were talking about the virtual space versus this, and I’d write about design, I’d think about design. I think there’s deeper meaning to what we do as humans and it’s not just me going in a space and figuring it out and solving that problem. It’s deeper than that. That’s how the project of just documenting the office spaces during COVID arrived and I’m excited about that.

Daniel Serfaty: Mary, how did it affect you?

Mary Freiman: Oh, my goodness. Just as badly as the others. I thought that I would be prepared, because I have also been working from home since 2010, and my husband, since he got a job as a professor, he’s home three days a week normally. And so I thought that we could work in the same office. We’ve been doing that for a long time. But then when you add in a five and a seven year old, or now they’re six and eight, and we’re managing them as well, and now my husband is, yes, working from home as usual, but also teaching from home very loudly, suddenly the space for all of us, going back to the theme of space, has completely changed, and the ability for me to focus and have a quiet moment to actually get work done, it just completely changed.

I rearranged our house. We rearranged our lives to coordinate with another family to have our kids there a couple of days and their kids here a couple of days and then have somebody to help them. A lot of the work that my husband and I did in the past year to make it work was figuring out that suddenly our jobs became a zero sum game where the more that I was working meant he had to work less because we couldn’t do all of the things we normally could do in the same amount of time, and so that was difficult for us to negotiate and to make sure that our kids were doing what they needed to do.

We were figuring out how to coordinate with another family and have somebody watching them and teaching them and all the other things. Meanwhile still having the same job I’ve had, and I would say even busier in the past year than ever before, just as a coincidence at that time in my career.

We were very fortunate in that I do have a partner. We were very fortunate in that we were able to coordinate with another family and that we were able to hire somebody to help, and I know that made us very lucky, but it was still extremely difficult, especially as time went on and things changed. It seemed like things changed every couple of months.

Karin Sharav-Zalkind: I think for me, one of the words that struck me at the beginning, and that was really hard for me, is the word essential. Am I essential? What is my job here in the world? I think for many women it shrunk me down to that very immediate, I’m the mom, I need to do A, B, C, and then you’re like, but I’m this intelligent woman, is this it? It’s a very existential moment within this world of who’s essential, who’s not essential, and so forth. So it’s tough.

Kara Orvis: I was going to say, Mary, just to pick up on something you said, because we’re all women that are talking today.

Mary Freiman: Sorry, Daniel.

Kara Orvis: My husband has a career and he is awesome, wonderful, and we really try to approach our family from a team perspective. But he struggled too. He would tell you the same story that I told you. And like you, Mary, I’m so grateful that I did have a partner to go through that with, so we were able to figure it out. I think about single-parent families, and I don’t even know how they could do that. Or families that can’t afford to hire help, I don’t know how they survived this. So there are definitely folks that are worse off than we were.

Daniel Serfaty: Listening to those confessions, almost, I would say, [inaudible] I really appreciate that, and I think our audience appreciates it, because they lived some pieces of that. They totally recognize themselves in your stories. And not just women, you’re right, Kara. Men also underwent a transformation during COVID. But what strikes me is that it’s not just the work that has been redesigned, it’s not even the workspace that has been redesigned, it’s other structures, like family. The concept of family has been revisited, as all your examples [inaudible] or even multifamily units, as Mary shared with us with the pod idea. Suddenly families had to collaborate in ways that they were perhaps not prepared, not designed, to do. It’s a big redesign. It’s not just the work. And I think that is something we’re just starting to appreciate, the degree to which this thing has truly traumatized the very structures that we live with.

Then go even deeper at the individual level, at the intimate level, as Karin was sharing with us: who am I? Am I essential? Am I a mother? Am I a worker? Am I both? What is it? So I think that not understanding those deep human implications is doing a disservice to all of us who, during our professional lives, are thinking about designing work. That’s what we do for our customers at the end of the day, whether they are soldiers or corporate clients or surgeons; that’s what we do for them.

Before we take a deeper dive into work in the 21st century, how important is it to reflect, to do what you just did for yourself, but also in general, as professionals, to reflect really on the past 16 or 18 months, to really understand the shifting nature of work? For example, had work shifted already, and COVID was just the catalyst that exacerbated a trend? How important is it for us, as the designers of the future, to look back in the rear-view mirror and understand what happened?

Karin Sharav-Zalkind: It’s vital. I have to confess, when I started school way back in the 1990s, the first studio that we had at the time was to build a space for a person who works from home. So we’re talking way back. So as designers, it’s been a question that we’ve been pondering for a long, long time. It’s not something new.

I think what happened here was we got the ultimate lab [inaudible] innovation in terms of work from home. It’s interesting how it’s unraveling. It’s really, really fascinating for me, because at the beginning it was very cyber, futuristic, we’re all on Zoom, we can do this and do that, and as we’re getting closer and closer to a point where we can reacquaint ourselves with spaces again, what does that mean? And how do we reacquaint ourselves? The way that we can reacquaint ourselves is through reflection, and through seeing what worked, what didn’t work, what is something that we can make better for the future. It’s not a gimmick anymore. It’s not something that we just ponder as designers while we make these cool spaces.

I was thinking about it today that more and more in the past 10 years, we tried to make the office feel more like home. We designed all these communal spaces that look like a library or our living room, and now we’re officizing our home spaces to create more thinking pods. And I found this funny image of this guy from the ninth century sitting at a desk and doing clerical work. So we’re basically going back in time. So I don’t know. It’s like we’re in this entrapment moment. So reflection is important.

Daniel Serfaty: Kara, Mary, both in terms of your own personal lives, but also the people you work with, your colleagues, the people you supervise, how important is it really to take the time to think about all that, to reflect on that?

Mary Freiman: I can say from a personal professional perspective what I saw, but I can also mention that we did a study at Aptima looking at what we saw and how that changed over the course of the year, or, in fact, really the first few months. So personally, as I said, I’ve been working remotely for a long time, and I had all of that disruption around me in my home office that made working very different and more difficult. But actually, when it came to working with people at work, having people at home was nice, because the conversations that they would normally have in the hallway, or in passing, or in each other’s offices, which I was never really party to, started to happen online in Zoom chats, or in conference calls, or emails. And so I definitely felt that I was more included in some of the conversations that I may have missed out on previously.

That, I think, has really held over since then. Maybe because I’m bossier or more intrusive about getting into conversations, but it’s nice to at least feel like I’m in the loop more on some things like that. And speaking of loops, we did an internal research study at Aptima looking at how patterns of work changed, how people felt about their effectiveness and their workload, and how they were interacting with each other, right in the first couple of months after the pandemic really hit. Everybody was working from home, and we could very clearly track the increase in emails and chats and meeting frequencies. When we asked employees at Aptima how much more frequent these sorts of emails were, how much more frequent meetings were, and things like that, everybody felt that they were now communicating more through these other channels than they had been before.

But I think part of what was interesting is that most people felt like their effectiveness was actually about the same. So they didn’t feel that their work was overwhelming them with this new way of doing things and they felt largely just as effective.

Similarly, we had mostly positive responses about whether or not they felt their work was flexible, and if their workload had increased. There was an increase in workload noticed at that time, and again, that’s partly just because of some coincidental things happening at the company. So that was really interesting to track, and we were able to track it both through surveys and by looking at our communications and seeing how often meetings were set in calendars, and what was the change in the number of emails sent and the number of messages sent in Zoom chat.

That was really nice to see and to have confirmation that we weren’t crazy, but really in fact communications and how things were happening had changed significantly.

Daniel Serfaty: Thanks for sharing that, Mary. Certainly it would be very interesting to repeat the same research periodically over time, because the initial effect, the isn’t-it-cool-to-be-at-the-virtual-water-cooler-and-talk-to-friends kind of thing, may eventually subside, and to see whether or not it actually still takes more time to achieve the same level of productivity.

That’s really one of the insights here, perhaps: this notion of people optimizing their outcome, or their productivity, or their accomplishments, but in order to do that, they have to work harder. And the nature of that harder work has to do with more communication and coordination [inaudible]. We’ll be back in just a moment, stick around.

Hello, Mindworks listeners. This is Daniel Serfaty. Do you love Mindworks, but don’t have time to listen to an entire episode? Then we have a solution for you, Mindworks Minis, curated segments from the Mindworks podcast condensed to under 15 minutes each and designed to work with your busy schedule. You’ll find the Minis, along with full length episodes, under Mindworks on Apple, Spotify, [inaudible], or wherever you get your podcasts.

Kara, I just want to wrap up that question. Suddenly you were not the only one who was working at home; everybody was like you, everybody you supervised actually worked from home, I think without exception, at least the first month. What insights did you learn from that?

Kara Orvis: I won’t hide the fact that I’m very much pro distributed work. I think it can be done well, just as well as face-to-face work. I think that our organization was largely distributed anyway. I know people who work in an office who mostly work with people who are in another office or another state, working from a home office. So my perspective has always been that our company is largely distributed.

The one thing that I found challenging: I have this mental model of everyone, all 120, 130 people, in my mind, and I know who’s looking after whom, and I used to utilize folks who were on site to check in on people. Walk by an office, see if somebody’s there, are they withdrawing from the organization? Can they stop by and have a chat rather than send an email, because it might be something sensitive, or I need to gauge how somebody is doing?

I lost that because people weren’t seeing each other. It was much more difficult, it was more time consuming, to help take care of anyone. We had to reconfigure that virtual space to make sure that there were enough touchpoints to make sure that everybody was doing okay, especially under the circumstances where there was a lot of ambiguity and stress outside of work, but related to work. So I lost that.

What I was so excited about, like Mary, was that I belong to this virtual crew at the company and no one really understood our experience. I remember somebody saying, “You have to be on camera all the time.” She did not know what it was like to be on camera eight hours a day, and how you needed to use the restroom once in a while, or you needed to stand up and stretch. And it was like all of a sudden everyone got the same perspective of what that kind of work was like.

I was also aware of the stigma that was sometimes associated with remote work, even though there was no clear research or data to support some of the negative beliefs about it. I loved seeing how easy it really was for people in our company to go to a fully remote situation and keep doing the work that we needed to do. So that was really nice.

Daniel Serfaty: Yes. No, I think there are two words that I like very much personally, and I hope we can dig a little deeper into them. One is this notion of empathy, as you say, because right now we’re equalized, we all work from the same situation, or a similar situation, so naturally it’s easier for you to take the perspective of the other. That’s something that perhaps, when we have a hybrid workforce, you cannot really do; you cannot really identify with the person who’s been working from home for the past 10 years. It’s difficult. We can make the intellectual exercise, perhaps, but it’s not the emotional one.

The other word that I think is important here is this notion of intimacy. Many people worry that because we’re not going to be together, we’re going to lose that human contact. And that’s absolutely true in many circumstances. But I think the fact that we were invited into people’s homes, that we had more opportunities to share, made us more intimate with each other at some level. And I wonder whether we will be able to replicate that in the future.

Karin Sharav-Zalkind: Yes and no. I wanted to share: we had a conversation about this with a group of people, and there was a woman of color in that group, and she said some profound things that I think none of us could fully understand. She said, “Look, when I was working at home and my boy would run behind me, I would get different reactions, and I could tell from the faces of people on Zoom that I got a different reaction to my wild child running behind me than a white woman with the same child running behind her would have gotten.” And it came to a point where it was so difficult for her that intimacy was very, very difficult. And she said, “Look, when I go to work, I can bring my whole self physically to the space and leave behind those things of bias that people may read into.”

I think that’s something very important to understand: not all remote work is created equal for everybody. Whether it’s the infrastructure or the gender bias. We’re three women here, but in a lot of male-dominated virtual meetings, women sometimes get swallowed up, just as they would in corporate meetings, but at least in person there’s a physical presence where they can stand up and show themselves, versus Zoom, which is supposedly the great equalizer, or the pigeonhole, as I like to call it.

I think it’s a very important question of equity, and we shouldn’t overlook what happens with gender bias, color bias, race bias, all of these things in these kinds of spaces. They probably have the same problems as you would have in a physical space, but maybe more so, because suddenly people are in your home, or in your closet, depending on where you are. So I just wanted to point that out.

Daniel Serfaty: Thank you, Karin. I think it’s a very important point that I wanted to explore with you as well. I have so many things I need to explore with you, we’re going to need 10 hours. But that particular one has to do with the differential effects that this whole COVID experience had on different subpopulations, whether they are women, or minorities, or maybe people with limitations, older people or younger people. COVID didn’t treat everybody exactly equally, and I think it’s important we explore that if we are to design the work of the future.

I’m going to step back a little bit here and ask you to describe, irrespective of COVID for a second, for our audience, maybe with examples if you can, situations in which things such as the work structure, the work conditions, the organizational arrangements, have been conducive to success in your own professional life. And also examples of the opposite, when sometimes you find yourself in a project, in a company, in a situation in which those context variables, the work structure, the work environment, the organizational arrangements, have actually been conducive to failure, or friction, or difficulties.

It’s important because we’re going to talk about designing work, and we need to understand that these external design variables affect our productivity, but also our failures, our frictions, our workload, our stress. So could you pick a couple of examples like that to share with our audience from your professional environment?

Mary Freiman: Well, one thing that was noticeable to me, especially as somebody who has worked remotely for a long time, is a few months ago, I went on a trip for work where we were going to be having a presentation, and I met several coworkers at the location, and over the course of a day or two ahead of the presentation, we worked together in a conference room that we commandeered at the hotel. And working together and preparing for this presentation in the same space at the same time, really proved valuable, and I thought made it possible for us to have a more natural interaction and a more natural revision process for our presentation over that day or two, so that we could be quiet in each other’s presence and think, and be working on things, and then come back to something that came to mind and come back to having a normal interaction that you can’t do in the same way over Zoom.

In that case, I definitely saw the value in being with people and working toward the same thing at the same time, so that you could have a normal conversation that wasn’t so pressurized like it is on Zoom, where you feel like if you’re quiet for too long, you might as well hang up. That was really good. I’m not saying I want to be in an office, but I do see the value for some occasions.

Daniel Serfaty: No, but that’s very interesting, Mary, because if you project that, if you have a blank sheet on which you need to design, basically, the work organization post-COVID, beyond COVID, the injection of those moments into professional life is important, but they will need to be orchestrated. And that’s really the lesson here: it’s not all the time, but from time to time, you need almost to push the reset button on that and enjoy, as you say, the silences and all the team dynamics that you experienced there. Thank you for sharing that.

Karin, Kara, any experiences, one way or another, in which those external variables actually affected, positively or negatively, the experience?

Karin Sharav-Zalkind: I think because I don’t work with big corporations all the time, I work with smaller companies, so that’s not something that I missed being in or not. But I was part of a conference, and it was the first time in many years as a designer, especially working in the US, that we were able to have speakers from around the world. And that was mind-blowing, that you got speakers from the Far East and from Europe, and it was very, very exciting to be in a space where you don’t have to fly in or pay a lot of money to go to it. It’s just an afternoon you spend at home, and your mind is blown just from meeting different people from different disciplines and different places. So I think that was huge.

Daniel Serfaty: That happened in the past few months?

Karin Sharav-Zalkind: I think it was in November, even, before the big wave of January, and it was super inspiring. November is the darkest month, and it was just a moment where you were kind of, this is not so bad. People are thinking about it and there’s something to look forward to. For me, spiritually, there are humans who are still thinking around here, you’re not alone in this, we’re all in this together, in this shared experience of being apart and making an effort to be part of these “smaller” conferences and so forth. So I think that was a huge moment.

Daniel Serfaty: Absolutely.

Kara Orvis: I find this question a little hard to answer, and I think what’s rattling around in my head is the space and the work activity need to match. So there’s probably some activities, like what Mary described, where getting together, working collaboratively in a synchronous physical space, is going to be good for some activities. But there’s other activities where I know in our organization folks who work in an office have their own office with a door that they can close when they’re working for six hours by themselves writing a book chapter. So I think it’s about the match of the space to the work activity and what you’re trying to achieve that’s more important, and when those two don’t work together, I think that’s where you find challenges.

Daniel Serfaty: We talk about this notion of congruence, of balance, in our human performance engineering: this notion of matching the work structure to the organizational structure, or to the informational structure, or to some other dimension of human structure. And while that congruence was an elegant theoretical concept for a while, I think we realized this year how important that harmony, that matching, is.

How many of us have been in a Zoom meeting where the purpose was to do something like work on a document together, and it was a disaster because that was exactly the wrong work environment for that task? And vice versa, sometimes we all go into a physical meeting, and these are not the only two dimensions, by the way, physical versus virtual, and it becomes very unproductive because one person grabs the microphone and everybody else just checks their email. So understanding better that harmony, or that matching, between what people do and the environment that facilitates that doing is important.

I want maybe to dig a little deeper. Karin, you spent the last few months doing something really interesting, which is going around the greater Boston area and visiting deserted offices, or offices that were very partially populated. Tell us a little bit about that project. What did you see? What did you learn? What did you expect from that study?

Karin Sharav-Zalkind: I came in with zero expectations. As I said, the whole beginning of the pandemic was very devastating for me, especially as an office designer, when everybody’s leaving the office and everybody’s talking about, well, you can design Zoom spaces. I’m like, no, I will die before I do that.

Daniel Serfaty: It’s almost like becoming a cook when everybody’s going on a diet.

Karin Sharav-Zalkind: Exactly. It was bizarre. It was a bizarre sensation. I was like, okay, this is ground zero, and I was miserable and morbid, and it was dark, and it was just sad. And I just sat there and I was like, well, you have to do something with yourself. And one of the things that I said is, I’m going back to basics, the DNA of design. And the first thing that you do as a design thinker, which is really big now in all the business schools, all these design thinking projects.

Basically it’s an exercise of observation, note-taking, making serendipitous connections between different cultures, ideas, poems, history, newspaper articles, podcasts, whatever it is. It’s a melange of things, and then you come out with this beautiful space. And it hit me one day as I was looking at Facebook memories, which were awful, by the way, to look at during the pandemic; it was like the life before, or the before life. There are all these phrases that I can’t even say because they’re intense.

There was this photo of the window of an office space in Boston, and that had been my aha moment of how to solve that space. That window was the key to solving that space. And I was like… It was a beautiful space, and it had been occupied barely a few months before the pandemic broke out. And I was like, I wonder what it looks like now. And that’s what started it.

There was a slew of emails that I sent right and left, and everybody was connecting me to people. And I started just by getting a good camera, my first love is photography, and just going there and just documenting. And first of all, I got a sketchbook. That’s what we do. We got a sketchbook, we put a photo, and we start writing and we start documenting.

I interviewed a lot of people, and my friends compared it to dealing with death: when a person dies, some closets are emptied out immediately, but in others, all the clothes are left there for years, or the room is untouched, and it was like that. Some offices were all neat and tidy, the caution tape over all the kitchen [inaudible], all the six-feet stickers, going up in the elevator for the first time, and the seats were there, but there was a box with all of the people’s stuff, their personal belongings, the knickknacks from their desks.

Then some offices, it was just like Pompeii. There was Halloween candy from the year before, and a newspaper from March 9th, 2020, and a calendar open to March 2020 with “dad’s birthday” written on it, and things like that. And that was even harder. That was very eerie for me, because it was just… There’s something about the neat and tidy that feels like an office in motion, versus the one left behind.

There were a lot of reactions. Some people were, we started this, and we did this, and we did cocktail night. And I don’t know exactly what the end result of it will be, but what I have learned, and this is my hope: there were low-tech companies versus high-tech companies, and urban spaces versus suburban spaces, and I wish we could create more synergy in the workflows of these places, the low-tech manufacturing, what we call essential work, and so forth, versus the high-tech people sitting at home coding. We all need a little bit of this and a little bit of that.

For me, I think the key to moving forward is understanding what worked in one environment versus another. Each approach brings something else to the table, and I think it’s fascinating. There are also other considerations, like the environmental impact of driving into work every day, or, in urban settings, what happens to all of the small businesses around an office when the office people don’t arrive. So I don’t know that there’s a straight-up answer. It was definitely a fascinating process, and I’m just now starting to sift through the photos and figure out what to do with this project next.

Daniel Serfaty: No, beyond the emotion that you must have felt getting into those places, it’s almost like after a nuclear explosion or something, the deserted places. And Hollywood is making a lot of hay out of these almost post-apocalyptic looks. But the question that interests me is the degree to which you will be able to derive some design principles, as you said, as part of that design thinking impetus, a new way of thinking about office design for the future. How do those static pictures drive the design? I’m really curious about how you’re taking all that into the next steps. Maybe you’ll come back to the podcast [crosstalk] and let us know.

Karin Sharav-Zalkind: I think there are two keywords that I continuously have in my head: one is flow and the other is flexibility. And the other piece that I find really fascinating is that, initially, when people go back into the spaces, we have to be very, very careful not to change too much. We’re all in some post-trauma, and there’s something very gut-wrenching about coming to a space that’s been completely redesigned without you there. I think that can really scare people away. And I think rather than focusing on redesigning the space, it’s about adding more places of respite, of reflection, of pause, and acknowledging that there have been some personalized work rhythms that we need to take note of, and not being too quick to return to that same pace. So a lot of flow, flexibility. We don’t have to do this so fast all the time. We can redesign as we go too, and that’s okay. Completely okay.

Daniel Serfaty: That’s really an interesting project. I’m really eager to see some of those pictures at the next expo, I guess. I’m going to show interest.

Thank you for listening to part one of the special two-part series on the future of work and the workplace. You can tweet us @mindworkspodcast or email us at mindworkspodcast@gmail.com.

Mindworks is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Carlos Simmons. To learn more and to find links mentioned in this episode, please visit aptima.com/Mindworks. Thank you.
