Trust interactions in autonomous systems.
Matt McElvogue | Vice President, Design
Matt tackles user experience problems at the source and finds creative solutions through forward-thinking strategy, ideation, and creative direction.
User experience, to this point, has been under-considered in autonomous systems. If AI is to continue to modify the landscape of work and play, good UX is as essential as the technology itself; one will not progress without the other. Users face some growing pains as they renegotiate their understanding of actually “using” new tech, transitioning from rote operator to something more nuanced: administrator, overseer, interlocutor. As designers and creators in this space, we need to prioritize and foster a robust, intrinsic feeling of trust between users and the systems they rely on—and innovative, human-centered design is a crucial component to building that trust.
Autonomy without trust will not see the widespread adoption needed for the tech to mature. Without adoption, feedback from user experience will be insufficient or misleading. Furthermore, if new forms of autonomy aren’t designed to win trust, some protocols may fail to take hold completely—even in spite of their worthiness. Users themselves may be at a disadvantage, unable to develop the skills necessary to keep up with rapidly changing economic and social conditions.
As world-changing technology is introduced, designers shoulder the responsibility of preparing people for a paradigm shift.
What does trust look like in autonomous systems? Teague has identified four key pillars for establishing user trust.
1. Safety.
On its face, safety may seem like a relatively straightforward metric. We’re asking a basic question: “Is this system looking out for my well-being?” However, when extrapolated, safety is highly contextual and conditional. The feeling of safety looks different to each individual user and may change from moment to moment.
Autonomous vehicles are a good case study for this, beyond baseline crash protection alone. This is where automation really rubs up against the imprecision of human experience. Safety thresholds vary based on personal identity and background. For example, someone traveling alone at night will have a different safety standard than a group. Varying road conditions may affect one’s sense of security. A passenger may feel unsafe if a designated drop-off zone is too sparsely or too densely crowded. A passenger who has experienced an accident in the past may wish to travel differently than one who has not. How can autonomous systems account for such an array of individual factors?
There can’t be a one-size-fits-all solution to feeling cared for by autonomous systems. Instead, system judgment needs to work in concert with human judgment. We imagine interfaces that are actively influenced by individuality, employing generative, empathetic methods of communication that not only solicit preferences in clever ways, but also interpret and predict user ideals.
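To make that concrete, here is a minimal sketch of how stated preferences might steer a single decision, like choosing a drop-off zone. Everything here (SafetyProfile, DropOffOption, chooseDropOff) is hypothetical and invented for illustration; a production system would weigh far more signals.

```typescript
interface SafetyProfile {
  travelingAlone: boolean;          // solo riders may weigh lighting and crowds differently
  preferredCrowdLevel: "quiet" | "moderate" | "busy";
  maxWalkMeters: number;            // how far from the destination still feels acceptable
}

interface DropOffOption {
  label: string;
  crowdLevel: "quiet" | "moderate" | "busy";
  isWellLit: boolean;
  walkMeters: number;
}

// Score candidates against the rider's stated preferences instead of
// applying one global notion of "safe enough." Assumes a non-empty list.
function chooseDropOff(profile: SafetyProfile, options: DropOffOption[]): DropOffOption {
  const scored = options.map((option) => {
    let score = 0;
    if (option.crowdLevel === profile.preferredCrowdLevel) score += 2;
    if (profile.travelingAlone && option.isWellLit) score += 2;
    if (option.walkMeters <= profile.maxWalkMeters) score += 1;
    return { option, score };
  });
  scored.sort((a, b) => b.score - a.score);
  return scored[0].option;
}
```

The design choice worth noting: the profile is an input to the decision, not an afterthought. Two riders arriving at the same corner can legitimately get different answers.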
2. Comfort.
Unlike the circumstantial, often immediate nature of safety, comfort is rooted in familiarity: approachable, personal interfaces that allow users to develop a rapport with their tech.
Human communication is deeply subjective and multivalent, combining gesture, tone, expression, and context. In conversation, we’re naturally attuned to these subtleties, and we draw meaning through summation.
Imprecision is difficult to systematize, but at this foundational moment, emergent AI has the opportunity to win trust by communicating in recognizable, everyday language. As systems and users converse and “get to know” one another, their interactions could mirror the reassuring elasticity of non-tech communication.
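As a small illustration of what “recognizable, everyday language” can mean in practice, the sketch below renders internal events as conversational messages instead of status codes. The event shapes and the toPlainLanguage helper are invented for this example.

```typescript
type SystemEvent =
  | { kind: "reroute"; reason: "traffic" | "closure"; delayMinutes: number }
  | { kind: "sensor_degraded"; sensor: string };

// Say what changed, why, and what it means for the user, in plain words.
function toPlainLanguage(event: SystemEvent): string {
  switch (event.kind) {
    case "reroute":
      return `I'm taking a different route because of ${
        event.reason === "traffic" ? "heavy traffic" : "a road closure"
      }. It adds about ${event.delayMinutes} minutes.`;
    case "sensor_degraded":
      // Acknowledge uncertainty instead of hiding it behind an error code.
      return `One of my sensors (${event.sensor}) isn't reading clearly, so I'm slowing down to be cautious.`;
  }
}

// "ERR_ROUTE_DELTA_07" tells a passenger nothing; this tells them what's happening:
console.log(toPlainLanguage({ kind: "reroute", reason: "traffic", delayMinutes: 4 }));
```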
Comfort also hinges on transparency. When systems reveal their reasoning, they become more approachable and user-centric. Novel tech that helps to navigate its own novelty feels more like a partner and less like an instrument. Arguably, the chattier an AI becomes, the more kinship (and comfort) we’re likely to feel.
Consider, for example, the future AI that will be used during civilian space flight, where travelers lack highly specialized training. To inspire trust, interfaces will need to cut through the heightened unfamiliarity of everything. If communication is colloquial and clear, passengers can act on directives without feeling anxious, poorly informed, or underprepared.
3. Confidence.
Nascent technology has a lot to prove. Algorithms need to nail their assigned tasks so that when users ask, “Will this system complete this as I would complete it?” the answer is a resounding, “Yes, and (hopefully) better.” When warehouse pickers, for instance, receive optimized workflow directions from an autonomous system, they should believe those instructions are the best, most efficient way to accomplish the task. But confidence isn’t about good tech alone. Even with precise autonomy doing what it’s been created to do, there are psychological elements of UX to consider.
With the increasing ubiquity of Agentive Technology, productive input/output loops are essential. Confidence follows when an AI is able to share how it interprets direction, and what sort of thinking and learning is happening with each subsequent command. If there are choices to be made along the way, the AI could allow users to make selections throughout the process. Lifting the curtain on system processing may allow for fine-tuned and tailored communication, fostering congruence between users and their tech.
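A rough sketch of that loop, assuming a hypothetical warehouse agent: the system states its interpretation, exposes the choices it is weighing, and waits for a selection before acting. AgentStep, interpret, and runWithUserInTheLoop are illustrative names, not any real API.

```typescript
interface AgentStep {
  interpretation: string;  // the system's reading of the user's intent
  options: string[];       // candidate next actions, surfaced for selection
}

// A real system would derive this from its planner; hard-coded here.
function interpret(command: string): AgentStep {
  return {
    interpretation: `I understood "${command}" as a request to restock aisle 7 first.`,
    options: [
      "Restock aisle 7, then aisle 3 (shortest total walk)",
      "Restock aisle 3 first (higher-priority items)",
    ],
  };
}

// Lift the curtain, then defer: the user picks, and the pick is the plan.
async function runWithUserInTheLoop(
  command: string,
  askUser: (step: AgentStep) => Promise<number>, // the interface supplies this
): Promise<string> {
  const step = interpret(command);
  const choice = await askUser(step);
  return step.options[choice];
}
```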
As a counterexample, early consumer versions of OpenAI’s ChatGPT software raised eyebrows with their infamous hallucinations, where relatively innocuous inputs yielded both total falsehoods and general strangeness. ChatGPT’s black-box design means that it does not expose its thinking. By contrast, if a system were able to explain why it was producing undesirable results, users could feasibly retain a level of confidence in the AI’s reasoning capability, even if faulty or imperfect. Armed with this extra knowledge and understanding, users could then, in turn, have better control over outcomes, partnering with systems in greater symbiosis.
4. Control.
In autonomous systems, users retain agency, but relinquish operating power. User control looks different: more managerial and higher-level. This can be an uncomfortable transition. In general, we are unaccustomed to machine reasoning, and especially unaccustomed to deferring to it.
As users navigate this sea change, and as the tech itself matures, retaining elements of override control, with those override options easily accessible on an interface, may bolster trust. In essence, there is a trust benefit to system design that clearly allows for admin interjection.
Designed control can be demonstrated in autonomous vehicles, where passengers maintain the ultimate ability to stop, pull over, change destinations, or cancel automated decisions at any point. Even more specifically: for mobility aids in airports, such as autonomous wheelchairs or carts, interfaces without manual steering could be incredibly sparse. However, passengers may be hesitant to ride in a vehicle without any visible controls. The inclusion of something as rudimentary as a joystick may make a difference, encouraging use through the visual reassurance of override power.
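One way to express that reassurance in software is to make user overrides structurally superior to system plans, not just visually present. The sketch below is a minimal, hypothetical command model; the Command type and nextAction are invented for illustration.

```typescript
type Command =
  | { source: "system"; action: string }
  | { source: "user"; action: "stop" | "pullOver" | "changeDestination" | "undo"; detail?: string };

// User-issued overrides jump the queue, regardless of what the system planned.
function nextAction(queue: Command[]): Command | undefined {
  return queue.find((cmd) => cmd.source === "user") ?? queue[0];
}

// Even if the vehicle has a full itinerary queued, a single tap wins:
const pending: Command[] = [
  { source: "system", action: "proceed to Terminal B" },
  { source: "user", action: "stop" },
];
console.log(nextAction(pending)); // { source: "user", action: "stop" }
```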
In the case of Agentive Technology, control takes new forms: defining input parameters, or choosing between a set of potential conclusions drawn by the system. Midjourney is a good case study for this administrative control. In response to an initial prompt, the system offers users the opportunity to select from a variety of results, improving the odds of a satisfying outcome while also improving system learning.
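That generate-then-select pattern generalizes well beyond image tools. Below is a sketch of the shape of such a loop; generateCandidates, askUser, and recordPreference are placeholders for whatever a given product provides, not Midjourney’s actual API.

```typescript
// Generate several candidates, let the user pick, and treat the pick as a
// learning signal. The user stays the judge; the system stays the proposer.
async function proposeAndSelect<T>(
  prompt: string,
  generateCandidates: (prompt: string, n: number) => Promise<T[]>,
  askUser: (candidates: T[]) => Promise<number>,
  recordPreference: (prompt: string, chosen: T, rejected: T[]) => void,
): Promise<T> {
  const candidates = await generateCandidates(prompt, 4); // e.g. four variations
  const index = await askUser(candidates);
  const chosen = candidates[index];
  // The selection doubles as feedback for future generations.
  recordPreference(prompt, chosen, candidates.filter((_, i) => i !== index));
  return chosen;
}
```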
Engineering humanness through transparent design.
Despite paradigm-shifting advancements “behind the scenes,” the success and longevity of AI has as much to do with its reception as its development. When possible, relationships between people and automation need to resemble real relationships, with malleability and reciprocity. This “humanness” can be engineered through transparent, intelligent design.
Trust interactions are practically simple: small inclusions and deliberate, straightforward language. If we choose to open up the black box, we empower the user with an understanding of how decisions are made, how algorithms function, and how data is used.
Through well-designed trust, users are given the keys to incorporate and shape current and future iterations of autonomy, unlocking its potential while still preserving an essential, immutable human element in its functioning.
Let's work together.
Is your organization interested in implementing AI and autonomy according to human-centered principles? Get in touch with us using the form below to get guidance on designing your product, experience, or interface.