
The role of design in the future of AI.

Matt McElvogue | Vice President

Matt tackles user experience problems at the source and finds creative solutions through forward-thinking strategy, ideation, and creative direction.

We're in an era of rich agentive AI: everything from a Roomba, to Siri, to new generative tools like ChatGPT (built on a Large Language Model, or LLM) and the image generator Midjourney. We ask them to do things on our behalf, they scamper away, get busy, and then come back to say "how's this?" We set a goal, or ask a question, and things happen.

As AI becomes a bigger part of our daily lives, organizations can wield the power of autonomy as part of the products and experiences they create—but the current standard of prompt-based interfaces won't cut it for long.

Users will likely interact with autonomy in more complex or predictive ways without needing to constantly direct the system in a chat feed.

With the wide release of ChatGPT, we've seen that prompt composition isn't self-explanatory or intuitive. For the best results, users have to learn to write prompts well, yet guidance on prompt writing is scarce or minimally documented. And today's chat feeds are simplistic input/output interfaces. As innovation continues, users will likely interact with autonomy in more complex or predictive ways without needing to constantly direct the system in a chat feed. To get a sense of the potential complexity: think of all the ways we communicate and collaborate with others, how we navigate choices, how we engage in healthy conflict during conversation, and how we come to consensus. We should expect AI to fit into our lives on our (human) terms, and we need good design to prepare us for what's next.

Here are some aspects of the human-AI interface that could benefit from design attention:

1. We'll need ways to better communicate with AI.

When it needs clarification, AI will have to get our attention immediately and nudge us into responding. To keep communication ongoing, we'll need ways to set goals for these systems, and then confirm when they have completed a task (or not). We'll want to explore how choices and consequences are presented to us, and we'll have to think of ways we can stay apprised of progress during longer bouts of agency.

Imagine riding in an autonomous vehicle. The AI learns of an accident miles ahead that will delay the route, while the human passenger has no cues that something is awry. A change of route could unsettle the passenger if the alteration isn't clearly and simply communicated by the experience. As a counterexample, think of how frustrating it is when a navigation app insists on a detour and changes your route without explanation. Poorly communicated changes in the world of AI could feel much worse, as users have less direct control. When subjective choices are presented, the system will need to let you know why one option might be better based on your personal preferences (more scenic versus quicker, lower fuel use, etc.).
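To make this tangible, here's a minimal sketch, in TypeScript, of the kind of structured payload a route-change notification might carry so the interface can explain the why, not just the what. All names and fields are hypothetical assumptions, not a real vehicle API:

```typescript
// Hypothetical shape of a route-change notice surfaced to a passenger.
// All names are illustrative; no actual AV platform exposes this API.

type Tradeoff = "more scenic" | "quicker" | "less fuel" | "fewer stops";

interface RouteOption {
  label: string;               // e.g. "Coastal route"
  etaMinutes: number;          // projected arrival for this option
  tradeoffs: Tradeoff[];       // why this option might suit the passenger
  matchesPreferences: boolean; // checked against stored personal preferences
}

interface RouteChangeNotice {
  reason: string;              // plain-language cause: "accident reported 12 miles ahead"
  etaDeltaMinutes: number;     // how much the original arrival time shifts
  options: RouteOption[];      // the subjective choices to present
}

// Render the cause, the consequence, and a recommendation, rather than
// silently rerouting the vehicle.
function summarize(notice: RouteChangeNotice): string {
  const pick =
    notice.options.find((o) => o.matchesPreferences) ?? notice.options[0];
  return (
    `Route change: ${notice.reason}, adding ~${notice.etaDeltaMinutes} min. ` +
    `Suggested: ${pick.label} (${pick.tradeoffs.join(", ")}).`
  );
}
```

The design point is simply that the rationale travels with the decision, so the experience never has to reroute silently.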

2. We'll need to consider how and when we use AI.

We have to consider how AI manifests in our experiences. If AI is playing a significant role, this should be communicated to users so that they aren't misled. Hiding the fact that something is AI-powered could erode trust and dash expectations. Additionally, the balance between AI and human control needs to be well considered: too much automation, and folks could feel disempowered. Alternatively, some brands may want to be more AI-forward, depending on their own principles and style. Microsoft Teams, for example, boasts about its use of AI to transcribe and summarize meetings, while a company like Nike may back away from overusing AI in its creative processes. In the case of chatbots, companies may want to be transparent about the fact that a customer is communicating (and problem-solving) with an algorithm, not a live agent.

Hiding the fact that something is AI-powered could erode trust and dash expectations.

Beyond brand identity, this is also a concern for critical, infrastructural decision-making. Consider a future where power plants and other facilities are run (at least in part) by an AI system. Humans will likely remain involved for redundancy, but if a system is too heavily automated, the humans on hand may become disengaged and unfocused, which could be dangerous in an emergency.


3. We'll need AI to share its thinking, not just the answer.

As AI gets used in commercial work, academia, and research, we'll need systems to share how they arrive at their answers. What were the sources? Is an image appropriately representative? Are there counterpoints to a given answer? Even if this is unrealistic for the current generation of LLMs (which function on vast, unrestricted, conglomerated datasets), it will become increasingly necessary.

Imagine an AI system being used to provide feedback on student essays. When correcting grammar and composition, it will be important for the interface to provide both the grammatical rule behind a suggested change and how the suggestion stacks up against what the student wrote. A conversational chat interface, with its long runs of text, makes it laborious to compare the before and after. Plus, if an interface doesn't surface the grammatical rules it is following, you'll have a hard time trusting it, let alone learning for yourself.
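As a sketch of what that might look like under the hood (hypothetical names, not any real product's API), each suggestion could carry its rule and its before/after as structured data, so the interface can render them side by side instead of burying them in chat prose:

```typescript
// Illustrative shape for a single writing suggestion. The rule and the
// diff travel with the edit, so an interface can show a side-by-side
// comparison rather than a wall of conversational text.

interface WritingSuggestion {
  rule: string;        // the grammatical rule being applied
  rationale: string;   // why the rule applies here, in plain language
  before: string;      // what the student wrote
  after: string;       // the suggested revision
  confidence: number;  // 0..1, so tentative suggestions can be styled differently
}

const suggestion: WritingSuggestion = {
  rule: "Subject-verb agreement",
  rationale: "The singular subject 'the committee' takes a singular verb.",
  before: "The committee have decided.",
  after: "The committee has decided.",
  confidence: 0.95,
};
```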

Another example: autonomous image generation. Say X University is using AI to design and launch a campus-wide campaign and needs imagery that is appropriately inclusive and diverse. A well-designed algorithm could provide a sort of "calorie label" to support its work, detailing the contents of its output and where they came from. This could go a long way toward verifiably representing the student body and building trust in the tech.
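Here's a rough sketch of what such a label could contain, with field names that are purely illustrative (no generator ships exactly this today):

```typescript
// A hypothetical "calorie label" for a generated image: a machine-readable
// summary of what went into the output and where it came from.

interface ProvenanceLabel {
  model: string;                  // which model produced the image
  sourceCategories: string[];     // broad buckets of training/source material
  rightsCleared: boolean;         // whether the source material was licensed
  representationNotes: string[];  // human-readable notes on who is depicted
  generatedAt: string;            // ISO 8601 timestamp
}

const label: ProvenanceLabel = {
  model: "campus-campaign-generator",
  sourceCategories: ["licensed stock photography", "university photo archive"],
  rightsCleared: true,
  representationNotes: [
    "Depicted subjects sampled to match enrolled-student demographics",
  ],
  generatedAt: "2025-01-15T09:30:00Z",
};
```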

4. We'll need better interfaces for setting parameters.

We'll want to define where our agents go hunting for answers, what to look for, what to ignore, how far back to go, etc. Additionally, we'll want to accurately control length, tone, and other specifics when tuning AI output to fit our needs.
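As a sketch of the idea, imagine those parameters gathered into one reviewable object rather than scattered across chat messages. Everything here is an illustrative assumption, not an existing API:

```typescript
// Illustrative parameter surface for a research agent: where to hunt,
// what to ignore, how far back to go, and how to tune the output.

interface SearchScope {
  allowedSources: string[];  // where the agent may go hunting for answers
  ignoredSources: string[];  // what to skip entirely
  lookbackYears: number;     // how far back to go
  maxResults: number;        // cap on gathered material
}

interface OutputTuning {
  lengthWords: { min: number; max: number };        // accurate length control
  tone: "formal" | "conversational" | "technical";  // tone control
  citationsRequired: boolean;                       // force sources to be shown
}

const task: { scope: SearchScope; output: OutputTuning } = {
  scope: {
    allowedSources: ["peer-reviewed journals", "government datasets"],
    ignoredSources: ["social media"],
    lookbackYears: 10,
    maxResults: 25,
  },
  output: {
    lengthWords: { min: 300, max: 500 },
    tone: "technical",
    citationsRequired: true,
  },
};
```

An interface built on something like this lets a user review and adjust the whole brief at a glance, which is far easier to audit than a trail of one-off chat commands.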

In some contexts, there could be a need for AIs to organically break the parameters set for them if they sense human biases.

Imagine an AI system that analyzes health data to assist in diagnosis. A poor interface for defining parameters (how far back to go, which groups to consider for comparison, etc.) could have massive repercussions for data privacy, and could even cause bodily harm through misdiagnosis.

Yet, in some contexts, there could also be a need for AIs to organically break the parameters set for them if they sense human biases, prejudice, or harmful guardrails. This can help prevent misinformation from spreading unchecked in a vacuum.

5. We'll have to ensure it's actually the right tool for the job.

It's intoxicating to think about leveraging the latest tech in our products. But does it actually enhance what we provide to users? Would implementing AI have negative impacts? Are there other technologies better suited to the problems a company, or society, is facing?

Some questions businesses will need to ask internally:

  • Is there access to the right data to train and support an AI model? There are cost implications here, including staffing, data collection, model development, hardware, ongoing server use and maintenance…
  • Can AI provide an advantage over more traditional methods? AI is well suited to tasks that involve pattern recognition, prediction, optimization, and automation. Is that important here?
  • What are the use cases? AI is great for personalization and recommendation features, or for taking care of repetitive tasks. Could an experience benefit from implementing AI in those areas?
  • Would sensitive or regulated data be involved? Would that make the use of AI ethically problematic? Legally impossible? What would the risks of using AI be?
  • Does AI align with business goals? Will it contribute to the success of a product?

There are a lot of "we know we need AI, now what?" conversations happening, and not a lot of clarity beyond that. For businesses trying to integrate AI into their products and experiences, design can provide crucial support and answer pressing, big-picture questions, while also moving us toward richer, more complex interactions with AI agents that will, we hope, blend seamlessly into our daily lives.

Let's work together.

Are you leveraging AI and autonomous systems in your work? Fill out the form below for guidance on designing your product, experience, or interface.