When robots stop feeling robotic.

Design This Day | Episode 5

What is a social robot?

Carla Diana joins Teague Futurist Devin Liddell to talk about the difference between a social robot and a chatty robot, how robots may one day surpass us in emotional intelligence, and why you might want to add LED lights to the ears of new robots. Carla is the author of the book My Robot Gets Me, where she explores how technology and social design shape our daily lives. She is also the designer in residence at the Cranbrook Academy of Art in Michigan.

Scene: You walk into the hotel lobby. A check-in kiosk glows softly as you approach. Before you can tap the screen, a sleek, cylindrical robot glides up beside you.

"Welcome back. You stayed in a quiet corner room on your last visit. Would you like the same this time?"

You nod. The robot prints your key. You take the card and slip it in your pocket.

"Traffic on route to dinner is heavier than usual. I've pushed your reservation back and notified the restaurant. Would you like me to book a ride now?"

You select a vehicle from the choices on the robot's LED screen. Just then, another robot walks up beside you. It's wide-set and sturdy.

"I can bring your bags to your room if you'd like."

Normally, you like to carry your own bags, but you don't want to be late. You step back. The robot extends an arm, gripping the suitcase handle. It pauses to adjust its posture to accommodate the new weight. Then it slowly turns toward the elevators.

"Enjoy your evening."

You check your watch. Time to go. As you step outside, a driverless car pulls up. The door unlocks as you approach. This is the future of robotics. Welcome to Design This Day, a podcast about the future, the futures we want, and the people working right now to make those futures real. 

 

Devin Liddell: Before we jump into our conversation, Carla explains how she discovered her passion for social robotics.

Carla Diana: I spent a year as a visiting assistant professor at Georgia Tech, which is such a fabulous research institution. And there I met a professor, her name was Dr. Andrea Thomaz, and she in particular was at the forefront of a field of robotics called social robotics, one I didn't even know existed. And she was looking for an industrial designer to work on a robotics project. And that project was called Simon.

Devin Liddell: Simon was the first robot that showed Carla the potential of this field called social robotics. A social robot is designed to respond to human interactions, not just technical commands, like how you might program a computer or drive a car. Most importantly, it can communicate in a way that feels natural, so you don't need any special technical knowledge to engage with it.

For example, if you wave to a social robot, it may acknowledge the greeting by changing the color of its indicator lights or moving toward you. Not all social robots have human-like features. Simon did have anthropomorphic ears, eyes, a nose, and even arms. But it was how Simon used those features to communicate that really caught Carla's attention.

Carla Diana: Talk about a light bulb moment. I mean, one of the most formative moments for me was the first time that Simon was trotted out in public, which I experienced at the CHI conference, a computer-human interaction conference, in, I want to say, 2012, something like that. And what happened for me then is that I interacted with the robot in the way that it was designed for the first time. I knew this thing really intimately, so to speak. I knew every motor that was in there. It had eyelids, it had these irises that could move.

I like to exaggerate the ears, because there's a thing that animals do where they're expressive with their ears. So Simon has this helmet with these ears that move and light up. And at the conference there was a demo that the PhD students were doing where you would ask Simon, it was a simple exercise, well, simple for humans, to sort colors into their associated bins. So you would give Simon a green object. You would say to Simon, "Where does this go?"

And Simon could say, "I don't know," and lift his hands like this. And then you'd say, "It goes in the green bin," and Simon would parse the word green and from then on associate those pixels with it. The robot itself could put the pixels in front of the camera, its eyes, and then parse the word and map those two things together so that from that moment on, it had a definition of what green would be.

And when it was doing that, if it knew what the color was, the ears, the lights that I had designed to be in the ears, would match the color we were talking about. And it sounds like a pretty straightforward thing, but to me, it gave me goosebumps, because I felt like this exchange was so natural and it knew what I was thinking, right? I was there with this object, going, "Don't you know the color?" And for those ears to light up in that color, it really blew my mind with the way the exchange felt both natural and unnatural. This idea of the social exchange and being so focused on one another as two social beings was really an extraordinary moment.
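What Carla describes is, at its core, a simple grounding loop: binding a spoken color word to the pixels in front of the camera. Below is a minimal sketch of that idea in Python. The class, the hue-averaging, and the tolerance threshold are invented for illustration; this is not Simon's actual implementation.

```python
import colorsys

class ColorGrounding:
    """Toy version of the word-to-pixels mapping described above:
    the robot links a spoken color word to the hue it is looking at."""

    def __init__(self, tolerance=0.05):
        self.known = {}          # mean hue (0..1) -> color word
        self.tolerance = tolerance

    def _mean_hue(self, pixels):
        # pixels: list of (r, g, b) tuples in 0..255
        hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
                for r, g, b in pixels]
        return sum(hues) / len(hues)

    def ask_or_answer(self, pixels):
        """Return the learned word for what the camera sees, or admit ignorance."""
        hue = self._mean_hue(pixels)
        for known_hue, word in self.known.items():
            if abs(hue - known_hue) < self.tolerance:
                return word
        return "I don't know"    # the moment Simon lifts his hands

    def teach(self, pixels, word):
        """Human says 'It goes in the green bin'; bind the word to these pixels."""
        self.known[self._mean_hue(pixels)] = word

robot = ColorGrounding()
green_patch = [(20, 200, 40)] * 16        # hypothetical camera crop
print(robot.ask_or_answer(green_patch))   # -> "I don't know"
robot.teach(green_patch, "green")
print(robot.ask_or_answer(green_patch))   # -> "green"
```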

Devin Liddell: This experience kick-started Carla's career of designing robots to interact with us at a social level. Essentially, she was designing non-human things to interact with us in human-like ways. As we continued to chat, I began to wonder whether we as humans would eventually meet the robots halfway. In other words, would humans and our world ultimately become more robot-like?

Carla Diana: Definitely. And this is something, I wrote an essay for Qualcomm a few years ago about the robot-readable world and how much that's affecting our own culture. And what I mean by that is having, let's say, roads that autonomous vehicles need to be able to navigate. And so all of the signage, even the storefronts, might change. I mean, those are cultural artifacts. A little cafe on your corner might actually change so that it could be camera-readable. Even your home environment might change, with ramps in different places or particular doorknobs.

I began to wonder whether we as humans would eventually meet the robots halfway. In other words, would humans and our world ultimately become more robot-like?

And then there's the question of language, which I think we're already seeing. Certainly the AIs, with large language models, are jumping ahead to meet us at the human point. But I've been feeling like over the years, we change our semantics, we change our language, we change the way that we phrase things, and I think it affects us subtly. I think it becomes part of human language.

Devin Liddell: Yeah, exactly. This may be a strange anecdote, but I'd be curious what you think of it. We as a family signed up really early on to get an Alexa device. And at one point, and my kids were really young at the time, I came around the corner and they were speaking kind of gruffly to her, is the way I would describe it. They were speaking in a tone that I didn't approve of. I was like, "Oh, that's not how you speak to this thing." And of course, as a parent, and a lot of parents can probably empathize with this, you're just kind of making this up as you go along.

And I said, "Hey, hey, you don't talk to Alexa that way." And then they turned to me and they said, "Well, Dad, she's just code." And then of course that put me on the spot. And I said, "Well, but she has a voice, and in this house if you have a voice, we speak to that voice with respect." And of course, they said, "Well, the dog doesn't have a voice," so then I'm really making stuff up on the fly. I said, "Well, that's true. Ludo doesn't have a voice. Sort of, I mean, he can bark. But in this house, if you have a voice and you have a body, then you get even more respect."

Carla Diana: Okay.

Devin Liddell: I'm making this up entirely, totally on the fly, completely making this up. But the reason I bring it up is some of those things you're bringing up along the lines of how language shapes the human experience, but also how these physical entities and our experience of the physical world, like you're mentioning doorknobs and storefronts, those are all big parts of our human experience. And I feel like we're even wrestling with what to call AI-powered robots. I've heard the term physical AI emerging, but it seems like we're still wrestling with what to call these things that have a body, in a way, but also have intelligence.

Carla Diana: Right. I've been working with the Diligent folks for so many years, and that robot really uses AI. So I struggle with the semantics of AI a lot because it's pre-large language models.

Devin Liddell: The AI-powered robot Carla is talking about is called Moxi, created by Diligent Robotics. It is a hospital robot designed to help healthcare workers with tasks like fetching supplies and transporting lab samples. The goal was to give nurses, pharmacists, and other healthcare workers more time to spend with patients. But even though Moxi's main purpose is basically to do the grunt work at hospitals, Carla knew it was important for the robot to be socially intelligent.

Carla Diana: The value proposition is that it's a social robot, even if it's not a chatty robot, I'd say. And the social part came a lot with just knowing how to roam a hallway, knowing how to say "excuse me" if you're too close, or "hello" if you're far enough away, or to move out of the way. And it's such a fast-paced environment. So it's funny to me to think about an AI physical robot when I feel like, well, we've had AI in physical robots. But I feel like what you are talking about is something specific, around maybe a humanoid talking robot.
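The hallway behavior Carla describes is essentially proxemics: choosing a social response based on how close a person is. Here is a toy sketch of that kind of rule; the function name and distance thresholds are assumptions for illustration, not Moxi's published logic.

```python
from typing import Optional

def hallway_utterance(distance_m: float, path_blocked: bool) -> Optional[str]:
    """Map a person's distance to a social response, loosely following
    the hallway behavior described above. Thresholds are invented."""
    if path_blocked and distance_m < 1.0:
        return "Excuse me"   # too close and in the way: ask to pass
    if distance_m < 3.5:
        return "Hello"       # close enough for a friendly greeting
    return None              # out of social range: stay quiet

print(hallway_utterance(0.6, path_blocked=True))    # -> Excuse me
print(hallway_utterance(2.0, path_blocked=False))   # -> Hello
```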

Devin Liddell: Could be. Yeah. I think that's another topic as well: the notion of a social robot, is it unbound by form factor? Meaning, could a future autonomous vehicle, for example, be considered a social robot?

Carla Diana: Yeah, I think absolutely. So in an autonomous vehicle, if you can just show up as a human being and intuitively understand it, I mean, there's so many things with a vehicle that are really challenging. And there's so much to talk about with this, but I don't believe in the chatty robot. It doesn't need to be KITT. Do our listeners know the reference? I don't know.

Devin Liddell: Well, I'll back up. For those of you who don't know, KITT was one of the protagonists in a great TV show called Knight Rider. It was an autonomous vehicle from the 1980s, but I think it emerged from a secret lab, essentially. I forget some of the backstory. But anyway, David Hasselhoff was the human.

Carla Diana: David Hasselhoff.

Devin Liddell: Yeah, the Hoff. The Hoff was the other half of this dynamic duo, along with KITT. Yeah.

There's one particular challenge in vehicles, autonomous vehicles, and it happens with pilots too, it's the handoff problem.

Carla Diana: Yeah. So I go back to KITT as a reference sometimes, because I do think a lot of our expectations of our physical objects are super influenced by sci-fi, for good and for bad. There's one particular challenge in autonomous vehicles, and it happens with pilots too: the handoff problem. If the human needs to intervene, they need to, let's say, grab the wheel and take over and take full control, or if the human is ready to pass control back to the vehicle or aircraft, whatever it is, that's a really sophisticated moment of communication between the human and the machine, and it's super challenging.
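One way to picture the handoff problem is as a small state machine in which control never transfers until the receiving side explicitly confirms it is ready. The sketch below is a hypothetical simplification, not how any production vehicle or aircraft implements it.

```python
from enum import Enum, auto

class Control(Enum):
    MACHINE = auto()
    HANDOFF_PENDING = auto()   # transfer requested, not yet confirmed
    HUMAN = auto()

class HandoffProtocol:
    """Toy control-transfer protocol: either side can request a handoff,
    but control only moves once the receiver explicitly confirms."""

    def __init__(self):
        self.state = Control.MACHINE

    def request_handoff(self):
        if self.state in (Control.MACHINE, Control.HUMAN):
            self.state = Control.HANDOFF_PENDING

    def confirm(self, receiver: Control):
        # e.g. the human grips the wheel, or the vehicle verifies lane lock
        if self.state == Control.HANDOFF_PENDING:
            self.state = receiver

car = HandoffProtocol()
car.request_handoff()        # vehicle asks the driver to take over
car.confirm(Control.HUMAN)   # driver grabs the wheel: transfer completes
print(car.state)             # -> Control.HUMAN
```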

Devin Liddell: Interesting. One of the things I always found interesting, and it's especially true when you look back at the Knight Rider episodes and see the interactions between KITT the vehicle and David Hasselhoff's character, Michael Knight was his name, there was often this strange exchange between the two of them where Michael Knight would essentially encourage KITT to think bigger about its capabilities than it was considering itself. And I always remember this: one of the car's superpowers was a turbo boost, so it could jump, it's really ridiculous, but the car could jump, for example, from one rooftop to another through this turbo boost function.

Carla Diana: Turbo boost.

Devin Liddell: Yeah. And oftentimes, KITT would tell Michael Knight, "I can't make that distance. The distance between these two buildings is too far." And Michael Knight would say, "No, no. Come on, KITT, you can do it. You can do it." And then KITT would acquiesce and do it, which, when I look back now, doesn't make much sense at all. The robot knows what it's capable of doing in a very cold, rational way. But Michael Knight had this kind of American can-do attitude of, come on, if you believe in yourself, KITT, you can do this.

Carla Diana: This is the challenge, frankly, with designing social robots: making sure that there is a set of expectations that are set very clearly, because we have all these sci-fi models in our minds. So we're used to, there's this phrase that roboticists use. It's like, "Where's my robot?" Okay, well, where's Rosie? It should be happening now.

Devin Liddell: As in literally Rosie from the Jetsons?

Carla Diana: Yeah, right. "Where is my robot already," I guess would be the phrase. We all have this expectation that's set by science fiction. And the beauty of those images is that they portray these machines in these really sophisticated ways, modeled after human behavior. But the machines are not humans.

So the autonomous vehicle sees the world through camera vision: this box is indicating a person's body, and this box is indicating a line in the road. It's not the way that we see things or parse things. And so some of the assumptions that we make about the social interaction can be really risky and inaccurate.
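Those "boxes" are the literal output of a perception stack: labeled rectangles with confidence scores, not a socially understood scene. A hypothetical sketch of what a single camera frame looks like to the vehicle (the labels, scores, and coordinates here are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person", "lane_line"
    confidence: float  # 0..1, how sure the model is
    box: tuple         # (x_min, y_min, x_max, y_max) in pixels

# What one frame "means" to the vehicle: no social context, just boxes.
frame = [
    Detection("person", 0.91, (412, 220, 470, 390)),
    Detection("lane_line", 0.87, (0, 480, 640, 500)),
]
for d in frame:
    print(f"{d.label}: {d.confidence:.0%} at {d.box}")
```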

Devin Liddell: While social robots may engage in behaviors that cater to our human sensibilities, they process information entirely differently from us. And yet, social design can lead us to inadvertently project our assumptions about social interactions onto robots. We are biologically hardwired to feel for things that mimic life. This becomes especially tricky with machines that have human features like eyes, arms, and legs. In some ways, it can actually make more sense to design social robots that don't look like humans at all. In fact, some of Carla's favorite robots look like everyday objects.

Carla Diana: So my fascination as a designer, and what I wrote about in the My Robot Gets Me book, is really not about robots at all. It's about this idea that what I experienced with Simon could show up in our everyday products through gesture, through sound, light, and movement, but that they wouldn't necessarily all be robots, that we wouldn't necessarily need one walking, talking thing with arms and blinking eyes. We could take this idea of gesture and communication and build subtle moments into our everyday objects, whether it's the chandelier, whether it's the vehicle, or whether it's this microphone. That's really where my fascination lies: what are those subtle gestures that we can take from the field of social robotics without having to build everything into an everything machine?

What are those subtle gestures that we can take from the field of social robotics without having to build everything into an everything machine?

Devin Liddell: Interesting. I recently finished a project in collaboration with a researcher at MIT named Sheng-Hung Lee, who is with the MIT AgeLab. We were working on a project around what types of AI, both physical and digital-only, people would want as they age in place, so as they're in their homes. And this discussion has prompted something, a conclusion that we came to, which is that there is a place for what we described as a conductor AI. It's basically a systems AI, one that controls otherwise non-smart, non-intelligent things. Wheeled porters are an example we used: if you were alone in your home and you wanted to move your meal from the kitchen to the living room, you could have your conductor AI summon a wheeled porter, which would otherwise not have any AI of its own, to move it over there for you.
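As a sketch of that conductor pattern in software: all the intelligence lives in one coordinating agent, and the porter only executes movement commands. Every class, method, and name below is invented for illustration; none of it comes from the MIT project.

```python
class WheeledPorter:
    """A deliberately non-smart device: it only executes movement commands."""
    def __init__(self, porter_id: str):
        self.porter_id = porter_id
        self.location = "dock"

    def go_to(self, room: str):
        self.location = room
        print(f"[{self.porter_id}] arrived at {self.location}")

class ConductorAI:
    """The 'conductor': all planning lives here, none in the porters."""
    def __init__(self, porters):
        self.porters = {p.porter_id: p for p in porters}

    def summon_porter(self, pickup: str, dropoff: str):
        porter = next(iter(self.porters.values()))  # trivial allocation for the sketch
        porter.go_to(pickup)    # collect the meal in the kitchen
        porter.go_to(dropoff)   # deliver it to the living room

conductor = ConductorAI([WheeledPorter("porter-1")])
conductor.summon_porter("kitchen", "living room")
```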

And it was interesting, because when we asked seniors who are aging in place right now about it, when it came to moving things around or lifting things, there were very few objections to that type of proposition. But if it came into a more intimate sphere, if it helped them with their bodies, if you will, then a lot more objections came up. Like, I'm not sure I really want this machine to help me put my socks on, for example, or get me dressed. And I'm curious, what is it about that? What's been your experience in terms of how people distinguish what kind of help they want and what type of help is too much, or just isn't even appropriate for a non-human thing?

Carla Diana: Yeah, that's a good question. First of all, we don't know what we want, and I hate to be so dismissive of this collective us, but I don't think I would have told you that I'd speak into a phone, into a mobile device, to have it do things, speaking to the device, not to a person. But I do it all the time. In fact, I'm finding myself bored of typing. The more that I do voice-to-text, the less patience ... Oh, God, this typing, right? I don't know if you've experienced that. And I see my son, my nine-year-old, does voice-to-text more than he types.

There was another example in the space of folks with cognitive disabilities, and it's a company called LEQ. Well, the product is called LEQ. What it is, essentially, is a conductor robot, now I'm using your phrase for that, shaped kind of like a lampshade, with a tablet that sits next to it. And when messages come in on the tablet, this lampshade talks to the person and says, "Oh, you have a message," or, "Your doctor called," or, "It's time for you to take your medication," or, "How about doing some exercises? I can coach you."

And so when I first came across it, I thought, I don't get it. And then I looked deeper, watched a lot of the testimonials and interviews, and could see how it would be helpful: there's this cognitive load of having to launch an app and navigate to it, and then dissect the message and go back to the other app, and this conductor essentially does that for you. So I do think there's a lot of potential for that.

Devin Liddell: Right. Yeah, exactly. That use case, I think, was related, an adjacent territory. We called it an advisory, but it was exactly what you described. It was something that would help surface all of the stuff that you had to account for, whether that was medical appointments or oil changes and so forth. But also, to your point, it could potentially bring up things like, "Hey, you haven't seen your friend Carla in a while. It would be good to call her and see if you want to go to the movies soon."

The word robotic suggests something is stiff or awkward, something that doesn't move or behave like a human. But we are heading to a future where robots no longer possess this kind of social clumsiness.

Devin Liddell: The word robotic suggests something is stiff or awkward, something that doesn't move or behave like a human. But we are heading to a future where robots no longer possess this kind of social clumsiness. When interactions feel natural and easy to understand, it can help build trust between humans and robots, but it does raise some interesting questions. For example, what happens if one day robots surpass humans in emotional intelligence?

Carla Diana: I am an optimist and a utopian. And I do think that the AIs, certainly, can surpass us in EQ. But I ultimately think that bumbling, messing up, hurting each other's feelings is part of being human, and it's what endears us to each other, and it's what makes us grow closer. So I think even if this EQ can surpass us, it's not going to replace us in that way.

Devin Liddell: Right. Yeah. Well, it's funny that you talk about bumbling and messing up and hurting each other's feelings. That's also a science fiction trope. Like C-3PO.

Carla Diana: C-3PO.

Devin Liddell: He just talks too much. He doesn't read the room.

Carla Diana: Right, right, right. Yeah.

Devin Liddell: So maybe we'll imbue them with that as well. All right, so we're going to move into a speed round. I am curious, what is your take on the most over-hyped technology?

Carla Diana: There's been a series of products around AIs that record everything in your life, every image, and then distill it back to you. And I can see where there would be some convenience in that, but it feels like obsessive overkill.

Devin Liddell: Obsessive in the curation and recording of things?

Carla Diana: Yeah, or obsessive in the sense of, I need to be able to access every single piece of information that's come across my life.

Devin Liddell: Got it. Okay. Yeah. Interesting.

Carla Diana: Yeah. Maybe if everyone adopted one, then it would become the social norm, and then maybe I'll change my tune on that. Come check in with me in 15 years. Maybe I'll feel like, well, I couldn't keep up.

Devin Liddell: Well, I can understand the appeal of having everything at your fingertips. What about your take on the most under-hyped technology?

Carla Diana: I just saw this designer and artist, Andrew Shea, give a presentation about how he was collecting environmental data from the middle of a field, using some of the sensors that farmers use, but then mapping it to sound and being able to hear it. And then when a storm came in and the ground was moist, the landscape of the sound completely changed. So yeah, I guess I would say that under-hyped is having really sophisticated soundscapes that map to data and allow us to understand the world without it all being spelled out.
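Sonification projects like the one Carla describes usually reduce to a mapping function: scale a sensor reading into an audible parameter such as pitch. A minimal, hypothetical sketch, with all ranges and readings invented:

```python
def to_pitch_hz(value, lo, hi, pitch_lo=220.0, pitch_hi=880.0):
    """Linearly map a sensor reading in [lo, hi] to a frequency in Hz,
    so listeners hear the soundscape shift as conditions change."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))  # clamp to [0, 1]
    return pitch_lo + t * (pitch_hi - pitch_lo)

# Hypothetical field readings: soil moisture rises as a storm rolls in
for moisture in (0.12, 0.35, 0.78):
    print(f"moisture {moisture:.2f} -> {to_pitch_hz(moisture, 0.0, 1.0):.0f} Hz")
```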

Devin Liddell: Right. Yeah. Years ago, we had an issue with rot; we had doors that were leaking, and we didn't know they were leaking. But it made me, from a design standpoint, sort of pine for a future where the home would actually just tell me, "Hey, by the way, there's water coming in under this eave. You're not seeing it, but I just wanted to let you know that it's coming in."

Carla Diana: Right. It'd be just like a high-pitched squeal.

Devin Liddell: Exactly. Some alarm. Okay. Is there a book or a show that's influenced how you are thinking about your work as an innovator?

Carla Diana: All right. This is going to be everybody's example, but it's Black Mirror.

Devin Liddell: Oh, yeah.

Carla Diana: The whole Black Mirror series.

Devin Liddell: It's extraordinary. It's pretty remarkable.

Carla Diana: It's extraordinary. From time to time, I have my graduate students do a seminar where every student picks an episode and we talk about what the cautionary tale is. Because I do really feel like designers have a role to play in really thinking about the implications of technology. And that show Black Mirror, all of the things they're cautioning us about are genuine fears that we should have in our everyday life.

I do really feel like designers have a role to play in really thinking about the implications of technology. 

Devin Liddell: Sure. It doesn't require that much of a suspension of disbelief. The episode about the robot dog-

Carla Diana: Right.

Devin Liddell: That episode's robot dog, the one that never stops hunting this person, looks very similar to existing form factors of those types of devices.

Carla Diana: Yeah, that's probably one of my favorite episodes. But yeah, the fact that so many of those things are possible today, I think is what makes it especially interesting.

Devin Liddell: Very cool. What about an innovation in another industry? Something completely outside of AI, completely outside of robotics, that has inspired you to think differently about the work you're doing?

Carla Diana: I would have to say the food industry and the restaurant experience, how much a great restaurant experience can be like theater.

Devin Liddell: Oh, okay. Sure.

Carla Diana: Where the server may tell you something about the farm that your vegetables came from, and then you get to actually experience it. Or even just the way that the plate is shaped and the timing of when dishes come out and pairing things together. I think a great restaurant can really bring that experience of theater that I think we can bring to product design.

Devin Liddell: That's fascinating. In anticipation of our conversation, I was thinking about my first experiences of robots, my in-person, real-life experiences with robotics. I think it was actually going to these restaurants at an early age. I grew up in Denver, Colorado, and there was one called Showbiz Pizza, which featured a robotic band of bears and other animals. The appeal was that theater you're talking about.

Carla Diana: Yeah. There's a dystopian version of that that my nine-year-old likes. There's a horror game. Do you know about it?

Devin Liddell: I know exactly what you're talking about because my kids were into it as well. Five Nights at Freddy's.

Carla Diana: Five Nights at Freddy's!

Devin Liddell: Right. Carla, thanks so much for being here today. It's been a fantastic conversation.

Carla Diana: Oh, you're welcome. Thank you so much for having me. This has been a blast.

Devin Liddell: That's it for today. Thank you for listening to Design This Day, a podcast by Teague. Subscribe on your favorite podcast app so you don't miss the next episode. We have some really exciting guests coming up. I can't wait to share more with you next time. And if you have a complex problem that needs solving, we'd love to hear from you. Visit us at teague.com or send us an email at hello@teague.com.

Design This Day takes you on a journey to our future world.