Illustration of a robot metaphorically processing external thoughts

Designing robots for humans.

Sushant Vohra & Clint Rule

Sushant Vohra

Industrial Designer

Clint Rule

Creative Director

In recent years, the autonomous technology sector has expanded from the factory floor into our homes and offices, engaging and collaborating with everyday people—not just trained professionals. At Teague, we’ve used research and design to help companies create the next generation of self-driving cars, drones that deliver packages to backyards, robots that entertain and educate children, and more. As autonomous tech continues to leap from the industrial into the everyday, a pivotal question emerges: how can we guide and optimize human-robot interaction in this new age?

Drawing on our past projects, we have identified six key design principles that can help organizations prepare autonomous tech for widespread adoption in consumer and professional spaces.

1. Intent is indicated.

As autonomous tech engages with the day-to-day, it must communicate with us in ways that mitigate the anxiety that can come from uncertain or vague interactions. Where user control decreases, expression of an autonomous robot’s intent must increase. Hardware should display clear, intuitive indicators for both current and future intent—“intendicators”—just like a turn signal, but for a wider, more diverse set of intentions.

As major retailers build their airborne drone delivery operations, customers will expect to have a clear understanding of where a drone intends to leave a package, especially if these spots are dynamically determined by the drone on arrival. An inability to communicate where a package will be dropped may lead to safety concerns on the ground, and when safety standards are jeopardized, we’ll see a corresponding decrease in robot agency. Given this, intendicators will quickly become a competitive edge for autonomous tech companies.
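
To make the idea concrete, here is a minimal sketch in Python of how a delivery drone might sequence its intendicators, announcing each action before performing it. The intent states, light patterns, and projected ground marker are hypothetical illustrations, not any shipping product’s API:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Intent(Enum):
    """High-level intentions a delivery drone may need to signal."""
    APPROACHING = auto()
    SELECTING_DROP_ZONE = auto()
    DESCENDING = auto()
    RELEASING_PACKAGE = auto()
    DEPARTING = auto()


# Hypothetical mapping from intent to outward-facing cues: a light
# pattern, plus (optionally) a marker projected onto the drop spot.
SIGNALS = {
    Intent.APPROACHING: ("slow_white_pulse", None),
    Intent.SELECTING_DROP_ZONE: ("amber_sweep", None),
    Intent.DESCENDING: ("amber_solid", "project_drop_marker"),
    Intent.RELEASING_PACKAGE: ("green_solid", "project_drop_marker"),
    Intent.DEPARTING: ("slow_white_pulse", None),
}


@dataclass
class Intendicator:
    """Announces the drone's *next* action before it happens."""
    lead_time_s: float = 3.0  # signal intent this long before acting

    def announce(self, intent: Intent) -> None:
        lights, marker = SIGNALS[intent]
        print(f"[lights] {lights}")
        if marker:
            # Shows bystanders exactly where the package will land.
            print(f"[projector] {marker}")
        print(f"(holding {self.lead_time_s}s so people below can react)")


if __name__ == "__main__":
    drone = Intendicator()
    for intent in Intent:
        drone.announce(intent)
```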


Agility Robotics’ bipedal robot, Digit, is designed to function alongside warehouse workers. Standing over five feet tall, its humanoid shape helps it navigate the architecture of human spaces. Earlier versions of Digit lacked a head, featuring instead an unadorned lidar sensor—effectively decapitating the form and limiting Digit’s ability to replicate the expressions and facial articulations that serve an essential purpose beyond cosmetics. Digit’s most recent model includes a head with moving, blinking digital eyes, which convey “useful information like changing direction and other actions while at work.” Digit’s eyes are intendicators that improve its ability to navigate the nuances of collaborative environments.

As autonomous vehicles populate our roads, there is an essential need to communicate intent to nearby pedestrians. Teague researched and prototyped approaches to autonomous vehicle intendicators, such as integrated walk signs or traffic signals on every vehicle, to help pedestrians confidently cross a street full of autonomous traffic. If established and maintained by a regulatory body—such as the telltale lights regulated by the US National Highway Traffic Safety Administration—these intendicators would help ease consumer skepticism and pave regulatory pathways for autonomous vehicles in more markets.


2. Privacy is paramount.

In 2014, Ford executive Jim Farley alarmed attendees of the annual CES conference by stating, “We know everyone who breaks the law, we know when you're doing it. We have GPS in your car, so we know what you're doing.” Though he later retracted this statement, he raised an important ethical concern for spatial-mapping technology.

There's a fundamental and paradoxical misalignment between people and robots: the technology’s need for accurate, often continuous spatial monitoring conflicts with people’s desire for privacy. As we invite robots into our spaces, how can we protect consent? Affordances must be made that allow people to renegotiate privacy in relation to autonomous machines.

Physical inputs should be available to control both data capture and retention. Many consumers are accustomed to webcams with indicator lights or physical shutters and will have similar expectations for autonomous technology. For example, when monitoring sensors are not actively being used by an autonomous system, the sensors should automatically be shuttered. And data collected by the system should—ideally—be processed locally and stored on removable media, allowing users the confidence of physical control over personal data.
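
As a sketch of how that shutter behavior might work, the hypothetical wrapper below keeps a sensor’s physical shutter closed unless at least one active task has checked the sensor out; the names and the context-manager pattern are illustrative assumptions, not a particular product’s design:

```python
from contextlib import contextmanager


class ShutteredSensor:
    """Wraps a camera or lidar so its physical shutter stays closed
    unless at least one active task has checked the sensor out."""

    def __init__(self, name: str):
        self.name = name
        self._active_tasks = 0

    def acquire(self) -> None:
        self._active_tasks += 1
        if self._active_tasks == 1:
            print(f"{self.name}: shutter OPEN, indicator light ON")

    def release(self) -> None:
        self._active_tasks -= 1
        if self._active_tasks == 0:
            print(f"{self.name}: shutter CLOSED, indicator light OFF")


@contextmanager
def sensing(sensor: ShutteredSensor):
    """Tasks use this so the shutter can never be left open by accident."""
    sensor.acquire()
    try:
        yield sensor
    finally:
        sensor.release()


if __name__ == "__main__":
    camera = ShutteredSensor("front camera")
    with sensing(camera):
        print("...navigating doorway...")  # sensor in active use
    # Shutter closes automatically here, even if the task errored out.
```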


As we create interfaces for the public sphere, it is important to accommodate users’ need to understand how and when they are seen by others on the network and by network administrators. Users make more informed choices about their use of a product when armed with perspective on their aggregated data. For better transparency, autonomous tech must signify when and how it is sensing. For example, when Teague designed the exterior of an autonomous vehicle, we included lights around its perimeter that would glow when people were detected nearby. Research participants understood this cue to indicate that they were being sensed by the vehicle, informing their understanding of the car’s awareness and, subsequently, their own behavior in its presence.
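
A minimal sketch of that perimeter-light cue, assuming a hypothetical ring of twelve light segments and vehicle-relative bearings coming from a person detector:

```python
NUM_SEGMENTS = 12  # assumed count of light segments around the vehicle


def lit_segments(bearings_deg):
    """Map each detected person's bearing (degrees, vehicle-relative)
    to the nearest perimeter light segment."""
    width = 360 / NUM_SEGMENTS
    return {int((b % 360) // width) for b in bearings_deg}


def render(lit):
    """Simple text rendering: '#' = glowing segment, '.' = dark."""
    return "".join("#" if i in lit else "." for i in range(NUM_SEGMENTS))


if __name__ == "__main__":
    # Two pedestrians detected: one ahead and to the left, one behind.
    print(render(lit_segments([315.0, 170.0])))  # -> .....#....#.
```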

An assortment of artificial intelligence robots collected in a group, with glowing orbs representing their intelligence and ways of communicating with humans.

3. Purpose is presentation.

As ever, the adage applies: form follows function. In robotics, this is not only for the sake of machine performance, but also to benefit the user. For humans to quickly understand what an autonomous robot does, its appearance must emphasize its primary use case. This practice is prevalent in smart objects like Nest thermostats, which use large, legible numbers and knob-like dials to leverage established archetypes and communicate function. As we encounter an increasing diversity of autonomous devices, clarity is essential. When a variety of machines are mingling with pedestrians and whizzing above our heads, knowing what each device does will make integration easier and allow users to take full advantage of new tech.

When it comes to linking purpose and presentation, vehicles might be the most successful autonomous robots thus far. Things become more complex, however, in the case of multi-functional robotic platforms like Hello Robot’s Stretch. Stretch can do many things: play tug with the family dog, vacuum hard-to-reach surfaces, open doors, and grasp a variety of objects. But this versatility can lead to vagueness in structural design. The Stretch robot appears as a movable platform with an extendable limb—its purpose and skillset are impossible to know just by looking at it. This indistinction could leave users confused about Stretch’s behaviors—a potential issue if the robot were trying to interact in a public space. Humanoid robots face similar issues, even causing revulsion and alarm. Effective design incorporates visual cues that are not just decorative, but also functional. Each element plays a part in guiding the user's understanding of what the robot can do, which, in turn, helps the tech fit more comfortably into our world.

A notable example of functional visual cues is the Wii controller. Designed for motion, the game controller communicates its different functions with different attachments; e.g., the steering wheel attachment signals its use for driving games. Branding is another method of communicating purpose: an unmarked robot on the sidewalk might raise questions, but if it bears the FedEx logo, its role becomes immediately clear.

4. Motion is meaningful.

How an autonomous robot moves is just as important as how it looks. In her talk, “Why we have an emotional connection to robots,” Dr. Kate Darling, a Research Scientist at the MIT Media Lab, posits that humans are “biologically hardwired to project intent and life to any movement in our physical space that seems autonomous.” Though we are decades away from creating robots that autonomously “feel,” it is inevitable that we, as social creatures steeped in community, will feel for them. People will treat all sorts of robots as though they are alive, and will expect them to mimic those same gestures of life. Dynamic motion in an autonomous robot can encourage use through affinity. But for many companies developing autonomous robots today, choreography and animatronics are not a priority; they are instead the byproduct of a utilitarian focus on how a machine efficiently completes a task.

While working on an AI assistant robot for children, Teague conducted user research on an early mechanical prototype. We observed that the choreography of the robot’s single-axis articulation, particularly in its head, was notably “unfriendly.” Its movement, though functional, had the precision and artifice of a weapons system dialing in on a target—far too ominous for its intended context. To solve this, Teague designers and engineers arrived at a design that used a six-axis Stewart platform, allowing more fluid, gentle articulation of the entire robot.
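
Much of that difference comes down to the interpolation curve. The sketch below is a simplified one-axis illustration, not Teague’s actual implementation; it shows why an eased move reads as gentle where a linear move reads like lock-on targeting:

```python
def smoothstep(t: float) -> float:
    """Ease-in/ease-out curve: velocity is zero at both ends of the
    move, so the motion reads as gentle rather than mechanical."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)


def head_angle(start_deg: float, target_deg: float, t: float) -> float:
    """Interpolate a head rotation; with smoothstep, the head gathers
    speed gradually instead of snapping toward the target."""
    return start_deg + (target_deg - start_deg) * smoothstep(t)


if __name__ == "__main__":
    # Compare a linear 0-to-60-degree turn against the eased version.
    for step in range(6):
        t = step / 5
        linear = 60.0 * t
        eased = head_angle(0.0, 60.0, t)
        print(f"t={t:.1f}  linear={linear:5.1f}  eased={eased:5.1f}")
```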


Another great example of motion design that encourages companionship is the AI assistant NOMI, made by Chinese car manufacturer NIO. Sitting on the car’s dashboard, NOMI is a ball-shaped device featuring cartoon animations that resemble a face. NOMI emotes and reacts to voice commands in real time, physically turning to address a speaker, squinting, smiling, dancing, and patiently watching drivers as they type on the vehicle screen—like a true companion would. When asked what inspired its motion, the designers said, “We looked to the film and animation industry to see how we could design the whole character, so that the movements are not simply robotic and mechanical but driven by emotions.”

NOMI, NIO’s in-car AI assistant. [Picture by NIO]


5. Configuration is collaboration.

The initial tuning phase of an autonomous robot is make-or-break for users. Especially in personal settings, every user will have a unique set of needs and expectations for their tech. An autonomous robot that fails to accommodate configuration fails to gain trust and establish clear value—for that product, and for autonomous tech broadly. When users configure their autonomous tech, they train these products to best serve them.


Advanced configuration is often best served by a software control center. However, simple hardware interfaces can serve as a complement, allowing configuration that is immediate, specific, and accessible. A simple “like/dislike” button, for example, makes it easy for a human collaborator to give quick feedback on a specific behavior—or, at least, flag a robot for follow-up triage. If Alexa devices included similar hardware, users could train their Alexa without relying on randomly solicited feedback or plumbing the depths of an app to make a simple tweak.
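
A sketch of how such a button might work under the hood: each press is paired with a rolling log of the robot’s recent behaviors, turning a single tap into a labeled event for later training or triage. All names here are hypothetical:

```python
import time
from collections import deque


class FeedbackButton:
    """Pairs a physical like/dislike press with the robot's most recent
    behaviors, so each press becomes a labeled training or triage event."""

    def __init__(self, history_len: int = 10):
        self._recent = deque(maxlen=history_len)  # rolling behavior log
        self.events = []

    def record_behavior(self, behavior: str) -> None:
        self._recent.append((time.time(), behavior))

    def press(self, liked: bool) -> None:
        self.events.append({
            "timestamp": time.time(),
            "signal": "like" if liked else "dislike",
            "context": list(self._recent),  # what the robot was just doing
        })


if __name__ == "__main__":
    button = FeedbackButton()
    button.record_behavior("vacuumed under sofa")
    button.record_behavior("bumped into charging dock")
    button.press(liked=False)  # flags the recent behaviors for triage
    latest = button.events[-1]
    print(latest["signal"], "->", latest["context"][-1][1])
```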

“Teach pendants”—remote controls for programming robots—are common in industrial contexts but strikingly absent from robots introduced elsewhere. While the heft and complexity of the typical teach pendant won’t translate to most consumer contexts, there remains an opportunity to introduce a lightweight, user-friendly variation for consumer-grade autonomous tech.

Introducing more hardware affordances—particularly for configuration—runs against the current trend in consumer products of offloading control to software UI. However, this trend is often not based in user need, but driven by a single-minded focus on cost control or an over-indexing on a “smooth, seamless” aesthetic. Bucking this convention and focusing on user need, the Microsoft Xbox Adaptive Controller enables configuration through a bay of simple headphone-style ports, making it the most accessible controller on the market. Hardware controls can make robots more accessible, which will be necessary for wider adoption.


6. Override is overt.

A key aspect of user adoption of new technology is the ability to exert control. Even when our lives are saturated with autonomous robots, users will always place greater trust in products that provide them with fundamental control.

While most autonomous devices are—ideally—intelligent and self-correcting, they inevitably encounter scenarios beyond their programmed responses. When working with Intel on their autonomous vehicle rideshare technology, Teague’s prototyping and user research revealed a need for an immediate override of all vehicle operation, returning the car to a neutralized state. Boston Dynamics introduced a similar feature to its Spot robot: a motor lockout button ensures that users can completely disengage the robot when necessary.
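
The logic behind such an override is deliberately simple. The sketch below shows the latching pattern in general terms; it is an illustration, not Boston Dynamics’ or Intel’s implementation:

```python
class MotorLockout:
    """Latching override: one press cuts motor power immediately and
    unconditionally; re-arming requires a separate, deliberate reset."""

    def __init__(self):
        self._locked_out = False

    def press(self) -> None:
        self._locked_out = True  # no conditions, no confirmation dialog

    def reset(self) -> None:
        # Deliberately a distinct action, so the robot can never
        # silently re-arm itself after an override.
        self._locked_out = False

    def motors_may_run(self) -> bool:
        return not self._locked_out


if __name__ == "__main__":
    lockout = MotorLockout()
    lockout.press()
    assert not lockout.motors_may_run()  # robot is fully disengaged
```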


While break-glass-in-case-of-emergency overrides should be present on the autonomous robot itself, there is also an opportunity for a hardware peripheral to act as an additional fail-safe. A smart bracelet, for example, might be appropriate for certain professional or prosumer autonomous robots—such as maintenance, agricultural, or healthcare drones. If the bracelet is removed, the paired robot assumes a safe state. Fail-safe bracelets could also detect when the wearer loses consciousness or experiences another emergency, causing the paired robot to either return to a safe state or provide aid.
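
This is the classic dead-man’s-switch pattern. A minimal sketch, assuming a heartbeat radio link between bracelet and robot; the timeout value and method names are hypothetical:

```python
import time


class BraceletFailsafe:
    """Dead-man's-switch pattern: the robot stays active only while it
    keeps receiving heartbeats from the paired bracelet."""

    TIMEOUT_S = 2.0  # assumed heartbeat window

    def __init__(self):
        self._last_heartbeat = time.monotonic()
        self.safe_state = False

    def on_heartbeat(self, wearer_ok: bool = True) -> None:
        self._last_heartbeat = time.monotonic()
        if not wearer_ok:  # e.g., bracelet detects loss of consciousness
            self.enter_safe_state()

    def poll(self) -> None:
        """Called from the robot's control loop every cycle."""
        if time.monotonic() - self._last_heartbeat > self.TIMEOUT_S:
            self.enter_safe_state()  # bracelet removed or out of range

    def enter_safe_state(self) -> None:
        self.safe_state = True  # e.g., stop motion, land, or hold position


if __name__ == "__main__":
    failsafe = BraceletFailsafe()
    failsafe.on_heartbeat()  # wearer present and OK
    failsafe.poll()          # within the window: robot stays active
    time.sleep(2.1)
    failsafe.poll()          # heartbeat lost: robot goes safe
    print("safe_state:", failsafe.safe_state)  # True
```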

Releasing hardware is high stakes for organizations. This is even more true for autonomous robots, which face a torrent of scrutiny and fearmongering. Too often, autonomous machines are developed in a vacuum, without consideration for the breadth of actual human collaboration. As designers, we are constantly engaged with the possibilities and constraints of the real world. At Teague, our experience designing everything from AI assistants for children to space station hardware and software for astronauts means we bring the above principles and many others to life in unique ways. For the next era of robots, this will be critical to mitigate risk, increase uptime, reduce training time, and appropriately integrate robots into our everyday lives.

            Let's talk about your robotics program.

Teague’s in-house experts serve as a critical liaison between designers, engineers, and developers. We produce detailed designs, digital and physical prototypes, and appearance mockups of all related touchpoints for user-focused business solutions. Use the form below to connect with our team.