OpenAI is reportedly preparing to roll out its first internally designed AI chips, marking a significant step toward self-sufficiency in computing infrastructure. Co-developed with semiconductor powerhouse Broadcom, the chips will be manufactured in large volumes by 2026.
This initiative comes amid skyrocketing demand for compute power to train and run massive AI models such as GPT-5, which OpenAI plans to support by doubling its compute fleet over the next five months. The $10 billion partnership with Broadcom signals OpenAI’s long-term intention to diversify away from Nvidia, whose GPUs currently dominate the AI hardware market.
OpenAI’s chip production will be solely for internal operations and is not expected to be available commercially.
Reports from last year had already revealed that OpenAI and Broadcom were collaborating on their first custom chip, aiming to strike a balance between escalating infrastructure demands and the need to control rising costs.
Why It Matters: This development represents a significant shift in the AI hardware landscape. As leading AI firms like OpenAI increasingly move toward developing proprietary chips, Nvidia’s central role in powering the AI boom is being challenged.
- OpenAI and Broadcom Finalize $10B Custom Chip Collaboration: OpenAI has reportedly committed to a multi-billion-dollar partnership with Broadcom to develop and mass-produce a custom AI chip tailored to its unique infrastructure needs. The chip has been in development since at least last year and represents one of the largest known efforts by an AI company to transition from commercial off-the-shelf hardware to specialized silicon. Broadcom’s CEO confirmed a major new customer without naming OpenAI directly, but multiple sources have since identified the ChatGPT developer as the client.
- Proprietary Chip Will Be Used Internally, Not Commercialized: Unlike Nvidia’s GPUs or AMD’s AI accelerators, OpenAI’s chip will not be offered on the open market. Instead, the company will deploy the hardware within its own data centers to support increasingly intensive workloads, including training and running next-generation AI models. This mirrors the approach taken by hyperscalers like Google and Amazon, which have built in-house chips (e.g., TPUs and Graviton) to control AI infrastructure and scale performance more efficiently.
- Diversifying Away from Nvidia Amid Supply and Cost Constraints: OpenAI’s move is partly driven by its long-standing reliance on Nvidia, whose high-performance chips have powered most of the recent AI explosion. However, as global demand for GPUs surges, costs have risen and availability has tightened. By designing its own silicon, OpenAI aims to bypass these constraints and create a more predictable supply chain with hardware tailored to its specific needs.
- Broadcom and TSMC to Play Key Roles in Manufacturing Pipeline: While Broadcom is leading the chip design process, the actual fabrication will be carried out by Taiwan Semiconductor Manufacturing Company (TSMC), the world’s leading foundry. This vertical partnership mirrors other tech giants’ efforts to take control over more aspects of their compute stack. TSMC’s role is critical for scaling up chip production to meet OpenAI’s aggressive deployment goals for 2026, particularly as the company doubles its compute footprint in the short term.
- Analyst Confidence in Broadcom’s AI Strategy Grows: Market analysts have expressed increased confidence in Broadcom’s custom chip division, forecasting that its growth could eventually outpace Nvidia’s in certain AI segments. The OpenAI partnership has contributed to a notable rise in Broadcom’s stock price, up nearly 9% in pre-market trading and over 30% for the year. Broadcom now counts four major custom chip clients, signaling an industry-wide pivot toward bespoke AI hardware solutions.
Go Deeper -> OpenAI set to start mass production of its own AI chips with Broadcom – Financial Times
OpenAI to launch its first AI chip in 2026 with Broadcom, FT reports – Reuters


