

Matt McElvogue | Vice President, Design

Matt tackles user experience problems at the source and finds creative solutions through forward-thinking strategy, ideation, and creative direction.
If human collaboration is about working together and building trust, then human-machine collaboration is about “orchestration.” That’s the word I keep coming back to. It’s the collaboration of human judgment and machine capability toward a shared outcome.
We think about this every day at Teague.
Before this golden era of AI, much of this showed up in software. We were helping teams build systems on top of powerful APIs, working with developers, and translating algorithmic capability into something usable for those I like to call “normal humans,” or muggles, if you’re a Harry Potter fan.
Now under the umbrella of AI, you’re really just dealing with agents and LLMs working behind the scenes, so a lot of the effort is invisible. This creates a design challenge. If people can’t see what’s happening, how do they understand it? How do they know what the system is about to do?
When technology moves fast, trust becomes everything. Right now, for many people, it feels like they are giving up control, especially when they consider self-driving cars or working next to a robot in a warehouse.
As designers, our job is to figure out the mechanisms that build trust and help people feel oriented. That’s where the idea of the human-in-the-loop becomes important, where a person remains actively involved in reviewing and approving key decisions within an automated system.
We saw this clearly while designing the command and control interface for Overland AI’s off-road autonomous vehicles used in defense environments. These systems operate in complex terrain and high-risk situations, where the people in charge may not always have the capacity to consider every variable. This is why, as autonomy increases, the interface needs to support people at every level of interaction.
When it’s time to make important decisions in mission-critical environments, you need encoding: the deliberate layering of visual, physical, and behavioral signals that reinforce the weight of a decision. Physical cues can help meet this need. Think about the red cover over a switch, a crank instead of a button, or two people turning keys at the same time. These rituals matter because they slow you down just enough to make decisions intentional. That’s human-machine collaboration at its most intense, and human-centered design is the most important piece of this puzzle.

This connects to something we’ve lived with for a long time in product design: the 80/20 rule. Many tools are built to cover 80 percent of use cases very well. The remaining 20 percent often does not make it into the roadmap. But the reality is that everyone has some part of their workflow in that 20 percent.
AI gives us an opportunity to move beyond that constraint. A person can bring their hyper-personal, hyper-contextual information into the interaction, and the system can respond to that in real time.
Instead of designing only for the average user, we can support more individualized ways of working. That has real implications for accessibility and performance.
On my most optimistic days, I think about these systems as tools that give us extra sensory capability. Superpowers, basically.
Manufacturing is another area where collaboration between humans and machines is evolving quickly. A common frustration we hear from frontline workers is that when something goes wrong on the factory floor, fragmented data and systems make it hard to find the right information.
AI is already bridging these gaps. By surfacing the right information at the right time, it expands perception and processing capacity while empowering frontline workers to make faster, more informed decisions.
This is where transparency becomes critical. You cannot have machines and AI produce answers without clarity about how they got there. Workers need to understand the reasoning, and autonomous systems operating in any industry have to communicate clearly.
As automation scales, humans won’t always be able to stay in the loop. They’ll be on the loop, supervising and guiding the systems. That partnership needs to be designed intentionally, wherein systems communicate intent and confidence in ways that support human judgment more than ever before.
Naturally, designers have to think about how and when those explanations appear, so that they augment decision-makers at every stage instead of overwhelming them with information.
Human-centered thinking often enters the process late, shaping interfaces after major technical decisions are already made. That’s a major risk. With AI systems, there’s a case for involving designers earlier, especially around training data and guardrails.
This is why design needs to be upstream. What a system is trained on shapes how it behaves. When designers are involved from the start, they can help surface potential risks and impacts early, embedding human-centric thinking throughout the process.
When I think about the next 25 years, I think about balance.
My hope is that by 2050 we have a more balanced relationship with these tools, wherein we build systems that extend human capability while keeping human judgment at the center.
Human-machine collaboration will continue to evolve. The opportunity is to design that relationship intentionally and make sure the orchestration feels grounded in real needs.
And that’s a future worth working toward.
For a deeper discussion on human-in-the-loop design and the future of hyper-personal human-machine collaboration, listen to Matt McElvogue on the Future of XYZ podcast. This conversation explores how designers can shape AI and technology to extend human capability while keeping judgment, transparency, and trust at the center.
Listen to the full episode: