Revealing the Hidden Economics of Open Models in the AI Era
Frank Nagle | 19 November 2025
Artificial intelligence is reshaping economic systems at a pace we have rarely seen in modern technological history. Every sector—from finance to healthcare to manufacturing—is scrambling to understand how to harness AI safely, efficiently, and competitively. Yet amid the excitement, a crucial part of the story has been missing: what role do open models play in the AI economy, and how much value is left on the table when organizations overlook open alternatives? Both questions deserve a closer look.
In our new working paper, “The Latent Role of Open Models in the AI Economy,” Daniel Yue (Georgia Tech) and I probe these questions using one of the most comprehensive datasets assembled to date on AI model usage, prices, and performance. The findings surprised even us, and they carry major implications for the Linux Foundation community and the global open source ecosystem, to the tune of billions of dollars in potential savings on AI expenditure.
Closed models dominate, but not necessarily because they’re better
The first headline from our research is striking: closed models account for roughly 80% of model usage and 96% of revenue, even though they cost, on average, six times more than competing open models. These figures come from OpenRouter, a widely used API gateway to both open and closed LLMs that captures nearly 1% of global expenditure on LLM APIs. This dominance would be easy to explain if closed models consistently delivered vastly superior technical performance. But that simply isn’t true.
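For readers who want to poke at the pricing gap themselves, here is a minimal sketch that queries OpenRouter’s public model listing and compares average per-token prices. It assumes the `/api/v1/models` endpoint returns a JSON object with a `data` list whose entries carry string-valued `pricing` fields, and it uses a crude name-based heuristic to split open from closed models; that heuristic is illustrative only and is not the classification used in our paper.

```python
# Minimal sketch: compare average per-token prices of open vs. closed models on
# OpenRouter. Assumes the public /api/v1/models endpoint returns {"data": [...]}
# with string-valued per-token prices under "pricing". The name-based open/closed
# heuristic below is illustrative only, not the classification used in the paper.
import requests

OPEN_HINTS = ("llama", "mistral", "mixtral", "qwen", "deepseek", "gemma")  # assumption

resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()
models = resp.json()["data"]

def blended_price(model):
    """Average of prompt and completion price per token (a simplification)."""
    pricing = model.get("pricing", {})
    return (float(pricing.get("prompt", 0)) + float(pricing.get("completion", 0))) / 2

def looks_open(model):
    return any(hint in model.get("id", "").lower() for hint in OPEN_HINTS)

open_prices = [blended_price(m) for m in models if looks_open(m)]
closed_prices = [blended_price(m) for m in models if not looks_open(m)]

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

print(f"avg open price/token:   ${mean(open_prices):.8f}")
print(f"avg closed price/token: ${mean(closed_prices):.8f}")
print(f"closed-to-open ratio:   {mean(closed_prices) / mean(open_prices):.1f}x")
```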
Open models routinely achieve 90% or more of the performance of closed models on widely used benchmarks. In fact, open models now close the performance gap within a few months of each new closed-model release, and that catch-up cycle continues to accelerate.
The result is a market where open models are cheaper, highly capable, and innovating faster, while simultaneously remaining underutilized.
Why aren’t open models used more?
If organizations were making decisions based solely on price and benchmarked capability, open models would command far more market share (and in some narrower contexts they may already be more widely used). But we found the opposite: even when an open model both performs better and costs less, developers continue to choose closed alternatives.
This is not a trivial inefficiency. It is a structural phenomenon with billions of dollars at stake.
Why does this happen? Several forces appear to be at play:
- Switching costs: Teams have optimized workflows around specific model behaviors, and changing models creates friction.
- Brand trust and perceived safety: Many organizations feel more comfortable relying on established, well-funded model providers, even if the evidence doesn’t fully justify the premium.
- Information asymmetries: The market evolves so quickly that many engineering teams aren’t aware of the latest open models, or assume, incorrectly, that open equates to “less safe.”
- Regulatory and liability considerations: Closed model providers may offer contractual assurances that open alternatives cannot. Geopolitical concerns may also come into play, as many powerful open models come out of China.
These dynamics mirror earlier patterns in open source software adoption, familiar territory for Linux Foundation stakeholders. In the 1990s, the refrain was “no one ever got fired for buying Microsoft.” In 2025, it may well be “no one ever got fired for buying OpenAI or Anthropic.”
Quantifying underutilization in the billions
One of the most important contributions of our paper is quantifying the economic stakes of this underutilization. By simulating a counterfactual world where users choose the best observable model—based solely on price and performance—we estimate that the global AI economy could save $20–$48 billion per year.
Our preferred estimate is $24.8 billion in annual unrealized value, based on an extrapolation of Menlo Ventures’ 2025 estimate of the LLM inference market size and our observed underutilization rates.
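To make the counterfactual concrete, the sketch below walks through that kind of calculation with invented numbers standing in for our dataset. The model names, prices, benchmark scores, spend shares, and market size are all hypothetical; the paper itself works from observed request-level usage, prices, and benchmark performance.

```python
# Stylized counterfactual savings calculation. All model names, prices, scores,
# spend shares, and the market-size figure are invented for illustration; the
# paper uses observed request-level usage, prices, and benchmark performance.
MODELS = [
    # (name, is_open, price per million tokens in USD, benchmark score, share of spend)
    ("closed-a", False, 15.00, 0.90, 0.55),
    ("closed-b", False, 10.00, 0.85, 0.30),
    ("open-x",   True,   2.50, 0.88, 0.10),
    ("open-y",   True,   1.50, 0.82, 0.05),
]

TOTAL_SPEND = 30e9  # assumed annual LLM inference spend, USD (illustrative)
TOLERANCE = 0.95    # accept a substitute retaining >= 95% of the incumbent's score

def best_substitute(incumbent):
    """Cheapest model whose benchmark score is within TOLERANCE of the incumbent's."""
    candidates = [m for m in MODELS if m[3] >= TOLERANCE * incumbent[3]]
    return min(candidates, key=lambda m: m[2])

savings = 0.0
for model in MODELS:
    _, _, price, _, share = model
    substitute = best_substitute(model)
    if substitute[2] < price:  # a cheaper, comparably performing model exists
        savings += TOTAL_SPEND * share * (1 - substitute[2] / price)

print(f"Unrealized annual savings (stylized): ${savings / 1e9:.1f}B")
```

Under these made-up inputs the routine reports roughly $21 billion in unrealized savings; the point is the mechanics of the substitution, not the specific figure.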
To put that in context:
- That’s approximately 70% of all spending on LLM inference today.
- It’s more than the annual GDP of entire nations.
- It represents substantial consumer savings that could be captured by organizations, developers, and downstream users.
For Linux Foundation stakeholders (including enterprises considering open model AI adoption, policymakers evaluating market competitiveness, and engineers building tooling atop open ecosystems), this is a critical insight. Open models are not just philosophically important; they are economically indispensable.
Why this matters to the Linux Foundation community
The Linux Foundation sits at the intersection of open technology, community governance, and industry-scale collaboration. Our findings reinforce several themes central to LF’s mission.
1. Open source continues to create massive, underrecognized value

In addition to our main results, we find that if open models disappeared, consumers would need to spend between $350 million and $1.23 billion more than they currently do on LLM inference. Although this is an order of magnitude less than the potential unrealized value, it echoes decades of research showing how open source software quietly creates trillions in economic value through cost reductions, complementarities, and innovation spillovers. Open model AI appears to follow the same pattern, but at a much faster cadence. Further, beyond open models, open source software underlies the entire AI economy. Open source projects like those housed in the LF AI & Data Foundation and the PyTorch Foundation are critical to the creation and use of all AI models, closed and open. A stylized sketch of this removal counterfactual appears below.
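As a companion to the earlier sketch, here is the removal counterfactual in the same stylized form: reassign today’s open-model spending to the cheapest comparably performing closed model and total the extra cost. Again, every number below is invented for illustration.

```python
# Stylized removal counterfactual: if open models vanished, how much more would
# their current users pay? All names, prices, scores, and spend figures below
# are invented; the paper's estimate uses observed usage shares and prices.
CLOSED = [
    ("closed-a", 15.00, 0.90),  # (name, price per million tokens in USD, score)
    ("closed-b", 10.00, 0.85),
]
OPEN_USAGE = [
    ("open-x", 2.50, 0.88, 150e6),  # (name, price, score, current annual spend, USD)
    ("open-y", 1.50, 0.82, 80e6),
]
TOLERANCE = 0.95  # substitute must retain >= 95% of the open model's score

extra = 0.0
for name, price, score, spend in OPEN_USAGE:
    # Cheapest closed model that is "good enough" relative to the open incumbent
    # (fall back to the full closed list if none clears the bar).
    eligible = [c for c in CLOSED if c[2] >= TOLERANCE * score] or CLOSED
    substitute = min(eligible, key=lambda c: c[1])
    volume = spend / price                   # implied volume at today's open price
    extra += volume * substitute[1] - spend  # same volume at the substitute's price

print(f"Extra annual spend without open models (stylized): ${extra / 1e6:.0f}M")
```

The fixed-volume assumption here is a deliberate simplification; in practice, demand would respond to the higher prices.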
2. Today’s underutilization mirrors early open source adoption patterns

Just as enterprises once hesitated to adopt Linux or Apache due to uncertainty or risk aversion, organizations today hesitate to adopt open AI models, even when doing so is rational from a cost–performance standpoint. This reinforces the need for the Linux Foundation’s convening power to educate the market, provide governance assurance, support neutral benchmarking, and build trusted, community-driven infrastructure around open models.
3. The future of AI will be hybrid—and openness will be essential

Closed models will continue to play a crucial role. But open models are emerging as the competitive floor that disciplines pricing, accelerates innovation, and democratizes access. That competitive tension is healthy for the entire ecosystem.
Linux Foundation stakeholders are a diverse community ranging from cloud hyperscalers to regulated industries to academic institutions. Each has an interest in ensuring this open competitive substrate continues to thrive.
Our paper surfaces more questions than it answers—which is exactly what research on nascent markets and technologies should do. We still need a better understanding of:
- how organizations weigh intangible attributes like safety and reliability
- why Chinese labs dominate open model development (and what other countries should do about it)
- what governance structures best support trustworthy deployment of open model AI
- how open models can unlock new innovation layers in the AI stack
But one thing is clear: Open models are playing a much larger role in the AI economy than most realize. And their unrealized potential is enormous. The Linux Foundation community is uniquely positioned to help convert that latent value into realized value through open collaboration, shared standards, and community leadership.
Frank Nagle is an assistant professor at Harvard Business School and the Advising Chief Economist at the Linux Foundation.

About the Author
Frank Nagle is an assistant professor in the Strategy Unit at Harvard Business School and the Advising Chief Economist at the Linux Foundation. Professor Nagle studies how competitors can collaborate on the creation of core technologies, while still competing on the products and services built on top of them – especially in the context of artificial intelligence. His research falls into the broader categories of the future of work, the economics of IT, and digital transformation and considers how technology is weakening firm boundaries. His work utilizes large datasets derived from online social networks, open source software repositories, financial market information, and surveys of enterprise IT usage. Professor Nagle’s work has been published in top academic journals as well as in practitioner-oriented publications like Harvard Business Review, MIT Sloan Management Review, and Brookings Institution TechStream. He has won awards and grants from AOM, NBER, SMS, INFORMS, EURAM, GitHub, the Sloan Foundation, and the Linux Foundation. He is a faculty affiliate of the Digital, Data and Design (D^3) Institute at Harvard, the Managing the Future of Work Project, and the Laboratory for Innovation Science at Harvard (LISH).