OpenAI is going all-in on audio. Over the past couple of months, it has reorganized key teams across research, product, and engineering to focus on building smarter, more conversational voice technology. The immediate goal is making ChatGPT sound better, but the work also lays the groundwork for a new kind of device, one that could arrive in 2026 and won’t have a screen at all.
Across the tech industry, there’s growing momentum behind voice as the next big way we interact with machines.
From smart glasses that help you hear better in noisy rooms to cars with built-in conversational assistants, the idea that talking might soon replace tapping and swiping is gaining real traction. And for companies building products or platforms, it’s a shift that could change everything from how software is designed to how users expect to get things done.
Why It Matters: We’re entering an era where voice is becoming the interface. As AI gets better at understanding us and responding in natural conversation, there’s a huge opportunity to rethink how people interact with technology. That shift could unlock new use cases in work, mobility, and everyday life, especially in places where screens get in the way.
- OpenAI Is Restructuring Around Voice: The company is investing heavily in audio, aiming to build models that sound more natural and feel more like talking to a person than a bot. The next version of its voice model will reportedly handle interruptions and overlapping speech, the messy dynamics that make conversation feel genuinely human. That’s a big leap from today’s rigid, turn-by-turn voice assistants.
- A New Kind of Device Is on the Way: OpenAI is working on a screenless personal device that would let people interact entirely through speech. It might take the form of smart glasses or a small, always-on speaker, the kind of thing that lives in the background, ready to help without demanding attention. Think less like a tool you use and more like a presence that’s just there when you need it.
- The Big Tech Players Are Thinking the Same Way: OpenAI isn’t alone. Meta’s smart glasses now use a five-microphone setup to help users hear better in loud environments. Google is turning search results into spoken summaries. Tesla is integrating a voice assistant into its cars so drivers can control everything from directions to temperature just by talking.
- Startups Are Betting on Voice-First Wearables: Smaller companies are racing to build screenless AI devices, from pins that clip to your shirt to rings you can talk to. Some, like the Humane AI Pin, have already run into trouble. But others, including one backed by Pebble’s founder, are gearing up to launch in 2026. These products are trying to build a new category of always-there, voice-first tech.
- Design That Fades Into the Background: A big part of this shift is about reducing how much attention technology demands. Jony Ive, Apple’s former design chief, is helping shape OpenAI’s hardware strategy. His vision is to create tools that are less addictive, more ambient, and better integrated into everyday life. It’s a rethink of how tech should behave, not shouting for attention, just quietly helping when needed.
Go Deeper -> OpenAI bets big on audio as Silicon Valley declares war on screens – TechCrunch
Trusted insights for technology leaders
Our readers are CIOs, CTOs, and senior IT executives who rely on The National CIO Review for smart, curated takes on the trends shaping the enterprise, from GenAI to cybersecurity and beyond.
Subscribe to our four-times-a-week newsletter to keep up with the insights that matter.


