Last week, I had the pleasure of standing before a packed room at the Open Source in Finance Forum (OSFF), an event hosted by FINOS (the Fintech Open Source Foundation) and taking place at long last in my hometown of Toronto. The audience was filled with familiar faces from Canadian and global banks and the burgeoning fintech ecosystem, including a few cryptoasset pioneers. I was reminded of my own roots in the financial services sector, which began when I obtained my securities license through our regulator, the Ontario Securities Commission, and of how the intersection of technology and regulation shapes so many of the conversations and activities within FINOS, only now on a global regulatory scale.
At OSFF, it was as clear as ever that while innovation is often framed as a race to ship the fastest code, for buy-side and sell-side institutions it is as much about managing risk, compliance, and the audit trail as it is about creating value. Consequently, the culture and velocity of emerging technology adoption and deployment in the financial services sector are very much shaped by regulation.
Prior to OSFF, this sentiment was the focus of a recent roundtable I had the privilege of joining as part of the Linux Foundation’s San Francisco AI Forum. The regulated industries roundtable, held under the Chatham House Rule and facilitated by FINOS Executive Director Gabriele Columbro, was one of four high-level discussions exploring the pathways and challenges related to accelerating Agentic AI through open source, culminating in a new report, Open Source and the Future of AI.
This session brought together global leaders to tackle a singular, complex question: How do we bring the transformative power of Generative AI into sectors where the cost of failure is not just a bug, but a systemic risk?
The report stemming from these discussions offers insight for leaders operating in the "high-stakes" sectors of finance, healthcare, and energy alike. Here are a few of its key takeaways.
For those of us in financial services, the "black box" nature of proprietary AI is a non-starter. You cannot tell a regulator that a loan was denied or a trade was executed "because the model said so." Standardizing how decisions are classified and explained is just as important in a healthcare context as it is in banking.
The roundtable findings highlighted that in regulated industries, transparency is the primary currency. We found that open source AI models are no longer just a cheaper alternative; they are a strategic necessity for institutions that must be able to inspect and explain a model's reasoning. Because the code, training data, and weights are accessible, institutions can perform the rigorous stress-testing and bias-auditing required by law. At OSFF Toronto, I shared data showing that the opportunity cost of not using open source AI models is high, and that open approaches offer a more secure and auditable path for AI than their closed-system counterparts.
One of the most resonant themes from the San Francisco discussions was the concept of Sovereign AI. For a country like Canada, and for global financial hubs like Toronto, relying entirely on foundational models hosted in a different jurisdiction creates immense geopolitical and operational risk. Moreover, those models were not trained on local language, local culture, or, importantly, local regulations, and their outputs reflect that gap.
Regulated industries need to own their infrastructure. The report underscores that open source enables "selective autonomy." It allows a bank in Toronto to take a global foundational model, tune it on sensitive, local data within its own firewalls, and maintain total control over the output. This isn't just about technology; it’s about ensuring that our financial and social systems are governed by our own values and regulations, not by the terms of service of a foreign provider.
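To make that idea concrete, here is a minimal sketch of what "selective autonomy" can look like in practice: an open-weight model fine-tuned on data that never leaves the institution's own infrastructure. The model name, dataset path, and parameters below are illustrative assumptions on my part (a Hugging Face-style stack using transformers, datasets, and peft), not a toolchain prescribed by the report.

```python
# Illustrative sketch only: parameter-efficient fine-tuning of an open-weight model
# on local data, run entirely inside the institution's own environment.
# Model name, dataset path, and hyperparameters are hypothetical examples.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

BASE_MODEL = "mistralai/Mistral-7B-v0.1"          # any open-weight base model
LOCAL_DATA = "/secure/compliance/filings.jsonl"   # hypothetical on-premises dataset

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = (load_dataset("json", data_files=LOCAL_DATA, split="train")
           .map(tokenize, batched=True, remove_columns=["text"]))

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
# LoRA freezes the base weights and trains only small adapter matrices,
# which keeps the sensitive tuning step cheap and easier to audit.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="/secure/models/adapter",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("/secure/models/adapter")   # the adapter stays behind the firewall
```

The specific libraries matter less than the pattern: the base model is open and inspectable, the sensitive data stays inside the institution's perimeter, and the resulting adapter is an artifact the institution fully owns and can audit.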
Perhaps the most inspiring takeaway is that the "undifferentiated heavy lifting" of AI—the safety frameworks, the regulatory reporting schema, and the data cleaning tools—is being solved through mutualization.
At OSFF Toronto, I pointed to the success of projects within the FINOS community, in particular the AI Governance Framework. By collaborating on the "pre-competitive" layers of AI, including shared approaches to the regulatory risks around AI, firms are reducing their individual risk and speeding up time-to-market for everyone. We are moving from a world of "cautious consumers" to one of "strategic contributors."
For the world to move quickly beyond the experimental phase of generative AI and into the era of industrial-grade deployment, we require not only collaboration on regulatory approaches, but also the robust orchestration and standardized connectivity that only open source can provide. The report highlights the specific tools making this transition possible: Ray is proving essential for scaling the massive compute workloads required for financial modeling, while Goose offers a developer-led framework for building reliable, autonomous agents. Perhaps most critically, the Model Context Protocol (MCP) is emerging as the universal connector that allows these AI systems to securely interface with the complex, siloed data environments typical of global banks. By championing these projects, we aren't just adopting new tech; we are building a modernized financial "ledger" where every AI interaction is transparent, interoperable, and fully compliant.
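As a purely illustrative aside, and not an excerpt from the report, the "scaling compute" piece looks roughly like this with Ray: independent simulation tasks fanned out across whatever cores or nodes a cluster provides. The Monte Carlo calculation below is a toy stand-in for real financial modeling.

```python
# Illustrative sketch: fanning out a toy Monte Carlo risk calculation with Ray.
# The simulation itself is deliberately simplistic; only the scaling pattern matters.
import random
import ray

ray.init()  # connect to a local or existing Ray cluster

@ray.remote
def simulate_pnl(seed: int, n_paths: int = 100_000) -> float:
    """Average simulated one-day P&L over n_paths random scenarios."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) for _ in range(n_paths)) / n_paths

# Launch 32 independent tasks; Ray schedules them across available cores and nodes.
futures = [simulate_pnl.remote(seed) for seed in range(32)]
results = ray.get(futures)

print(f"mean simulated P&L across tasks: {sum(results) / len(results):.6f}")
ray.shutdown()
```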
This transition requires more than just a "human in the loop." According to OSFF attendee, author, and former colleague Michael Casey, it requires humans in the driver’s seat.
As we move forward, the open source community must ensure that these tools empower the human as the ultimate guarantor—the one who defines the intent, sets the guardrails, and provides the final, non-delegable stamp of quality. The future of regulated AI isn't just open; it is a shared architecture of trust where human agency remains the steering force.
Hilary Carter is the SVP of Research at the Linux Foundation. You can find the full report from the San Francisco AI Forum here.