Introducing the Open Governance Network Model

Background

The Linux Foundation has long served as the home for many of the world’s most important open source software projects. We act as the vendor-neutral steward of the collaborative processes that developers engage in to create high-quality, trustworthy code, and we work to build the developer and commercial communities around that code which sponsor and sustain each project. We’ve learned that finding ways for all sorts of companies to benefit from using and contributing back to open source software development is key to each project’s sustainability.

Over the last few years, we have also added a series of projects focused on lightweight open standards efforts — recognizing the critical complementary role that standards play in building the open technology landscape. Linux would not have been relevant if not for POSIX, nor would the Apache HTTPD server have mattered were it not for the HTTP specification. And just as with our open source software projects, commercial participants’ involvement has been critical to driving adoption and sustainability.

On the horizon, we envision another category of collaboration, one which does not have a well-established term to define it, but which we are today calling “Open Governance Networks.” Before describing it, let’s talk about an example.

Consider ICANN, the organization that arose from demands to move the global domain name system (DNS) out of single-vendor control by Network Solutions. With ICANN, DNS became more vendor-neutral, international, and accountable to the Internet community. ICANN evolved to develop and manage the “root” of the domain name system, independent of any company or nation. Its control over the DNS comes primarily through its operating agreement among domain name registrars, which establishes rules for registration, guarantees that your domain names are portable, and provides a uniform dispute resolution policy (the UDRP) for times when a domain name conflicts with an established trademark or causes other issues.

ICANN is not a standards body; they happily use the DNS standards developed at the IETF. Nor do they create software beyond what is incidental to their mission; they may fund some DNS software development, but that is not their core. ICANN is not where all DNS requests go to get resolved to IP addresses, nor even where everyone goes to register a domain name; all of that is pushed out to registrars and distributed name servers. In this way, ICANN is not fully decentralized but practices something you might call “minimum viable centralization.” Its management of the DNS has not been without critics, but by pushing as much of the hard work as possible to the edge and focusing on being a neutral core, they’ve helped the DNS and the Internet achieve a degree of consistency, operational success, and trust that would have been hard to imagine building any other way.

There are similar organizations that interface with open standards and open source software but whose primary function is governance. A prime example is the CA/Browser Forum, which governs the policies for the root certificates underpinning the SSL/TLS web security infrastructure.

Do we need such organizations? Can’t we go completely decentralized? While some cryptocurrency networks claim not to need formal human governance, it’s clear that individuals and organizations within those communities still perform governance roles. Quite a bit of governance can be automated via smart contracts, but repairing the damage when those contracts are exploited, promoting the platform’s adoption to new users, onboarding new organizations, and coordinating hard-fork upgrades still require humans in the mix. This is especially important in environments where competitors must participate in the same network to succeed, but do not trust any one competitor to make the decisions.

Network governance is not a solved problem

Network governance is not just an issue for the technical layers. As one moves up the stack into more domain-specific applications, it turns out there are network governance challenges at those layers as well, and they look very familiar.

Consider a typical distributed application pattern: supply chain traceability, where participants in the network can view, on a distributed database or ledger, the history of an object’s movement from source to destination, and update the network when they send or receive the object. You might be a raw materials supplier, a manufacturer, a distributor, or a retailer. In any case, you have a vested interest not only in trusting the distributed ledger to be an accurate and faithful representation of the truth, but also in knowing that the version you see is the same ledger everyone else sees, that you can write to it fairly, and what happens if things go wrong. Achieving all of these desired characteristics requires network governance!

You may be thinking that none of this is strictly needed if only everyone agreed to use one organization’s centralized database as the system of record. Perhaps that organization is a company like eBay, Amazon, Airbnb, or Uber; or perhaps a non-profit charity or government agency could run the database for us. There are some great examples of shared databases managed by non-profits, such as Wikipedia, run by the Wikimedia Foundation. That model might work for a distributed, crowdsourced encyclopedia, but would it work for a supply chain?

This participation model requires everyone engaging in the application ecosystem to trust that singular institution to perform a very critical role without being hacked, corrupted, or tempted to use its position of power to unfair ends. There is also the matter of trusting that the entity will not become insolvent or otherwise unable to meet the community’s needs. How many Wikipedia entries have been hijacked or subjected to “edit wars” that go on forever? Could a company trust such an approach for its supply chain? Probably not.

Over the last ten years, we’ve seen the development of new tools that allow us to build better distributed data networks without that critical need for a centralized database or institution holding all the keys and all the trust. Most of these new tools use distributed ledger technology (“DLT”, or “blockchain”) to build a single source of truth across a network of cooperating peers, and to embed programmatic functionality as “smart contracts” or “chaincode” across the network.
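
Underneath most DLTs is an append-only, hash-chained log of entries. As a toy illustration of that core integrity mechanism, here is a short from-scratch Python sketch (not Hyperledger code or any production design) in which each new record embeds a hash of the previous entry, so any peer holding a copy of the ledger can detect tampering:

import hashlib
import json

def entry_hash(record: dict, prev: str) -> str:
    # Hash a canonical JSON encoding so every peer computes the same digest.
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger: list, record: dict) -> None:
    # Each entry chains to the previous one; the first chains to a sentinel.
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "prev": prev, "hash": entry_hash(record, prev)})

def verify(ledger: list) -> bool:
    # Recompute every hash; any edit to history breaks the chain.
    prev = "0" * 64
    for e in ledger:
        if e["prev"] != prev or e["hash"] != entry_hash(e["record"], e["prev"]):
            return False
        prev = e["hash"]
    return True

ledger = []
append(ledger, {"item": "pallet-17", "event": "shipped", "by": "supplier-A"})
append(ledger, {"item": "pallet-17", "event": "received", "by": "manufacturer-B"})
print(verify(ledger))            # True
ledger[0]["record"]["by"] = "x"  # Tamper with history...
print(verify(ledger))            # ...and verification fails: False

Real DLT frameworks layer peer-to-peer replication, consensus, and smart contract execution on top of this basic structure; the sketch only shows why a shared ledger is hard to alter quietly.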

The Linux Foundation has been very active in DLT, beginning with the launch of Hyperledger in December 2015. The Trust Over IP Foundation, launched earlier this year, focuses on the application of self-sovereign identity, which in most deployments uses a DLT as the underlying utility network.

Because these efforts have focused on software, they have left the development, deployment, and management of the resulting DLT networks to others. Hundreds of such networks built on Hyperledger’s family of protocol frameworks have launched, some of which (like the Food Trust Network) have grown to hundreds of participating organizations. Many of these networks were never intended to extend beyond an initial set of stakeholders, and they are seeing very successful outcomes.

However, many of these networks need a critical mass of industry participants and have had difficulty reaching it. A frequently cited reason is the lack of clear, vendor-neutral governance of the network. No business wants to place its data, or the data it depends upon, in the hands of a competitor, and many are wary even of non-competitors if the arrangement locks down competition or creates a dependency on a single market participant. What if that company does poorly and decides to exit the business segment? At the same time, most applications need a large percentage of a given market on board to be worthwhile, so addressing these business, risk, and political objections to the network’s structure is just as important as ensuring the software works as advertised.

In many ways, this resembles the evolution of successful open source projects, where developers working at a particular company realize that just posting their source code to a public repository isn’t sufficient. Nor is simply putting their development processes online and saying “patches welcome.”

To take an open source project to the point where it becomes the reference solution for the problem being solved and can be trusted for mission-critical purposes, you need to show that its governance and sustainability are not dependent upon a single vendor, corporate largesse, or charity. That usually means the project seeks a neutral home at a place like the Linux Foundation, which provides not just that neutrality but also competent stewardship of the community and commercial ecosystem.

Announcing LF Open Governance Networks

To address this need, today, we are announcing that the Linux Foundation is adding “Open Governance Networks” to the types of projects we host. We have several such projects in development that will be announced before the end of the year. These projects will operate very similarly to the Linux Foundation’s open source software projects, but with some additional key functions. Their core activities will include:

  • Hosting a technical steering committee that specifies the software and standards used to build the network, monitors the network’s health, and coordinates upgrades, configurations, and critical bug fixes
  • Hosting a policy and legal committee that specifies a network operating agreement organizations must sign before connecting their nodes to the network
  • Running an identity system on the network, so that participants can trust other participants are who they say they are, monitor the network’s health, and take corrective action if required
  • Building out a set of vendors that can be hired to deploy peers-as-a-service on behalf of members, while still allowing members’ technical staff to run their own peers if preferred
  • Convening a Governing Board composed of sponsoring members that oversees the budget and priorities
  • Advocating for the network’s adoption by the relevant industry, including engaging regulators and secondary users who don’t run their own peers
  • Potentially managing an open “app store” approach to offering vetted, reusable, deployable smart contracts and add-on apps for network users

These projects will be sustained through membership dues set by each project’s Governing Board, which will be kept to what’s needed for self-sufficiency. Some projects may also choose to establish transaction fees to compensate the operators of peers if usage patterns suggest that would be beneficial. Projects will have complete autonomy regarding technical and software choices; there are no requirements to use other Linux Foundation technologies.

To ensure that these efforts live up to the word “open” and the Linux Foundation’s pedigree, the vast majority of technical activity on these projects, including development of all the code and configurations required to run the network’s core software, will be done publicly. The source code and documentation will be published under suitable open source licenses, allowing for public engagement in the development process and leading to better long-term trust among participants, better code quality, and more successful outcomes. Hopefully, it will also mean less “bike-shedding” and thrash, better visibility into progress and activity, and an exit strategy should the cooperative effort hit a snag.

Depending on the industry it serves, the ledger itself might or might not be public; it may contain information authorized for sharing only among the parties on the network, or be constrained by GDPR or other regulatory compliance requirements. That said, we will certainly encourage long-term approaches that do not treat the ledger data as sensitive. In any case, an organization must be a member of the network to run peers, and membership will likely be required to see the ledger, and certainly to write to it or participate in consensus.

Across these Open Governance Network projects, Linux Foundation personnel will provide shared operational, project management, marketing, and other logistical support. These staff will be well-versed in the platform issues, and in the unique legal and operational issues that arise, no matter which specific technology is chosen.

These networks will create substantial commercial opportunity:

  • For software companies building DLT-based applications, this will help you focus on the truly value-delivering apps on top of a shared network, rather than on the mechanics of forming these networks.
  • For systems integrators, DLT integration with back-office databases and ERP systems is expected to grow into billions of dollars in annual activity.
  • For end-user organizations, automating thankless, non-differentiating, perhaps even regulation-mandated functions could result in huge cost savings and resource optimization.

For organizations acting as governing bodies on such networks today, we can help you evolve those projects to reach an even wider audience while taking the low-margin, often politically challenging grunt work of managing the network off your hands.

And for developers who have worried that such “private” permissioned networks would lead to cul-de-sacs of wasted software effort and lost opportunity, having the Linux Foundation’s bedrock of open source principles and collaboration techniques behind the development of these networks should help ensure success.

We also recognize that not every network should be under this model. We expect a diversity of long-term sustainable approaches and encourage each network to find the model that works for it. Let’s talk to see whether this one would be appropriate for yours.

LF Governance Networks will enable our communities to establish their own Open Governance Networks and will provide an entity to process agreements and collect transaction fees. This new entity is a Delaware nonprofit nonstock corporation that will maximize utility rather than profit. Through agreements with the Linux Foundation, LF Governance Networks will be available to Open Governance Networks hosted at the Linux Foundation.

If you’re interested in learning more about hosting an Open Governance Network at the Linux Foundation, please contact us at governancenetworks@linuxfoundation.org.

Thanks!

Brian

The one-millionth commit: The search for the lucky Linux kernel contributor

This week has been “a week of millions” for the Linux Foundation, with our announcement that over 1 million people have taken our free Introduction to Linux course. As part of the research for our recently published 2020 Linux Kernel History Report, the Kernel Project itself determined that it had surpassed one million code commits. Here is how we established the identity of this lucky Kernel Project contributor. 

Methodology:

The historical BitKeeper repository (converted to Git) contains 63,428 commits, so the one-millionth commit overall is the 936,572nd commit in Linus Torvalds’ Git repository (1,000,000 - 63,428 = 936,572). We then looked for the first merge at which Linus’ repo reached at least 936,572 commits.

At commit 92c59e126b21fd212195358a0d296e787e444087, the repo had 936,456 commits, 116 shy of the target:

> git checkout 92c59e126b21fd212195358a0d296e787e444087

> git log --oneline | wc

 936456 7483489 62991540

(wc reports lines, words, and characters, so the first figure is the commit count.)


The next merge, 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc, passed that target with 937,105 commits:

> git checkout 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc

> git log --oneline | wc

 937105 7489456 63037625

So at merge 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc, Linus’ repo passed the one-million mark, at precisely 1,000,533 commits including the BitKeeper era (937,105 + 63,428):

commit 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc (HEAD)

Merge: 92c59e126b21 f510ca05271b

Author: Linus Torvalds <torvalds@linux-foundation.org>

Date:   Mon Aug 3 19:19:34 2020 -0700


Merge tag 'arm-dt-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

At this point, we can simply list the 936,572nd commit in the log. Since git log lists commits newest first, the last 936,572 lines of its output are the oldest 936,572 commits, and the first line of that window is commit number 936,572:

> git log --oneline | tail -n 936572 | head -n 1

85b23fbc7d88 x86/cpufeatures: Add enumeration for SERIALIZE instruction

And the author is…

> git log -1 85b23fbc7d88

commit 85b23fbc7d88f8c6e3951721802d7845bc39663d

Author: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>

Date:   Sun Jul 26 21:31:29 2020 -0700

    x86/cpufeatures: Add enumeration for SERIALIZE instruction
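
If you want to reproduce the lookup yourself, here is a small illustrative Python sketch; it assumes a local clone of Linus’ tree checked out at the merge above, with git on your PATH. The shell pipeline shown earlier remains the authoritative method; this merely wraps the same idea:

import subprocess

# 1,000,000 total commits minus 63,428 BitKeeper-era commits = 936,572.
TARGET = 1_000_000 - 63_428

# "git rev-list --reverse" prints commit hashes oldest-first, so index
# TARGET - 1 holds the TARGET-th commit in this repository's history.
hashes = subprocess.run(
    ["git", "rev-list", "--reverse", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

print(f"{len(hashes):,} commits in this checkout")
print(f"commit number {TARGET:,} is {hashes[TARGET - 1]}")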

Ricardo’s momentous commit to the Kernel was to add enumeration support for the SERIALIZE instruction, supported in Intel’s forthcoming Sapphire Rapids and Alder Lake microarchitectures for their 10-nanometer server and workstation chips. Ricardo is a software engineer who has been working on Linux feature support for Intel’s microprocessors for 12 years as part of the company’s CPU enabling team.

For more about Intel Corporation’s Ricardo Neri, the one-millionth Linux Kernel code committer, please read and watch our interview, conducted by Swapnil Bhartiya on Linux.com.

The Linux Foundation would like to reiterate its statements and analysis of the application of US Export Control regulations to public, open collaboration projects (for example, open source software, open standards, open hardware, and open data) and the importance of open collaboration in the successful, global development of the world’s most important technologies.

Today’s announcement of prohibited transactions by the Department of Commerce regarding WeChat and TikTok in the United States confirms our initial impact analysis for open source collaboration. Nothing in the orders prevents or impacts our communities’ ability to openly collaborate with two valued members of our open source ecosystem, Tencent and ByteDance. From around the world, our members and participants engage in open collaboration because it is open and transparent, and those participants are clear that they desire to continue collaborating with their peers around the world.

As a reminder, we would like to point anyone with questions to our prior blog post on US export regulations, which links to our more detailed analysis of the topic. Both are available in English and Simplified Chinese for the convenience of our audiences.

The ACRN™ Open Source Hypervisor for IoT Development Announces ACRN v2.0 and Functional Safety Certification Concept Approval

New hybrid-mode architecture expands the scope of the project to include industrial IoT and edge device use cases, delivers new flexibility in resource sharing across virtual machines and new levels of real-time and functional safety

How Laird Connectivity leverages Zephyr RTOS to create social distancing trackers

Laird Connectivity’s Sentrius™ BT710 wearable tracker/multi-sensor, which is based on Zephyr RTOS, is a great way to automate and simplify social distancing and contact tracing.

OpenAPI Initiative Welcomes Postman as Newest Member

Postman joins 35 current members on the fast-growing initiative that includes Atlassian, Google, Microsoft, Red Hat, and Bloomberg

LF Edge’s Fledge project announces release 1.8, which integrates with industry leaders like Google, Nokia, OSIsoft, ZEDEDA and Dianomic to enable open industrial edge software with AI/ML and public cloud integration

Expanded community includes integrations and contributions from Google, Nokia, Flir, OSIsoft, Nexcom, RoviSys, Advantech, Wago, Zededa and Dianomic

LF Edge’s Akraino Project Release 3 Now Available, Unifying Open Source Blueprints Across MEC, AI, Cloud, and Telecom Edge

6 New R3 Blueprints (total of 20) covering use cases across Telco, Enterprise, IoT, Cloud and more

[New White Paper] Sharpening the Edge: Overview of the LF Edge Taxonomy & Framework

This original, collaborative community-driven white paper details the new LF Edge taxonomy with the goal of clarifying market confusion by breaking the continuum down based on inherent technical and logistical tradeoffs rather than using ambiguous terms.

ONAP’s 6th Release, ‘Frankfurt,’ Available Now – Most Comprehensive, Secure and Collaborative Software to Accelerate 5G Deployments

Rich feature set including End-to-end 5G network slicing, security and deployment-ready automation anchored in Frankfurt

[New Guide] 5G Networking: An Introduction

Download this paper for an exploration of the business opportunities in 5G, the role of open source, Linux Foundation projects, and how to participate.

Data Plane Development Kit (DPDK) Publishes Defining White Paper

Produced by Avid Think and Converge! Network Digest with DPDK community support, the paper outlines the critical role DPDK plays in the evolution of networking infrastructure while dispelling a number of myths and misconceptions about the technology.

Virtual LFN Developer & Testing Forum: June 2020 Report

See the quick highlights from the June event and the LFN workstreams in motion.

Facebook’s Long History of Open Source Investments Deepens with Platinum-level Linux Foundation Membership

From its efforts to reshape computing through open source to its aggressive push to increase internet connectivity around the world, Facebook is a leader in open innovation. Perhaps more important today than ever, Facebook’s focus on democratizing access to technology enhances opportunity and scale for individuals and businesses alike. That’s why we’re so excited to announce the company is joining the Linux Foundation at the highest level.

Facebook’s sponsorship of open innovation through the Linux Foundation will help support the largest shared technology investment in history: the world’s 100+ leading open source projects, representing an estimated $16B in development costs, whose communities the Linux Foundation supports through governance, events, and education. The company is already the lead contributor to many Linux Foundation-hosted projects, such as Presto, GraphQL, Osquery, and ONNX, and has been an active participant in Linux kernel development, employing key developers and maintainers across major kernel subsystems.

In addition to these efforts, Facebook has a long history of leveraging open source to unlock the potential of open innovation:

  • Through Facebook Connectivity and the open source Telecom Infra Project (TIP), Facebook hopes to bring fast, reliable internet to those without it. Facebook’s Magma open source project allows telecom operators to easily deploy mobile networks in hard-to-reach areas, reducing the costs of building and maintaining telecom networks. Together, Facebook Connectivity and TIP have created hundreds of billions of dollars of value through open source collaboration.
  • Facebook created a unique dataset of over 100,000 videos and launched the Deepfake Detection Challenge in order to accelerate development of new ways to detect deepfake videos. This open, collaborative effort will help the industry and society at large meet the challenge presented by deepfake technology and help everyone better assess the legitimacy of content they see online.
  • Facebook’s Data for Good program enables geographic data to be shared with the aim of addressing some of the world’s greatest humanitarian issues, including COVID-19.
  • Facebook also leads the industry in open hardware, having founded the Open Compute Project (OCP), which uses open source to enable the creation of efficient, flexible, and scalable hardware designs for data centers.
  • By creating and sustaining an open source ecosystem around PyTorch, Facebook also accelerates the pace at which data scientists and developers can leverage the power of artificial intelligence and machine learning in computer vision, natural language processing, and other disciplines.
  • Facebook’s React.js library powers some of the world’s most popular websites and has become the standard for frontend web development due to its simplicity and flexibility.
  • By working with GitHub to sponsor the first-ever remote open source fellowship run by Major League Hacking, Facebook also hopes to set a trend of empowering a new generation of diverse open source contributors.

Facebook’s commitment to the open source community can be seen in both its multi-million dollar investments and its genuine passion for technology development. It is this combination that makes the company an incredible supporter of the open source developer community.

As a Platinum member of the Linux Foundation, Facebook’s Kathy Kam joins the LF board. Kathy is head of Open Source at Facebook where she manages the Open Source Engineering, Developer Advocacy, and Open Source Program Management teams. Kathy is a 20-year engineering, product management, and developer relations leader previously with Google and Microsoft.

The Linux Foundation would like to reiterate its statements and analysis of the application of US Export Control regulations to public, open collaboration projects (e.g. open source software, open standards, open hardware, and open data) and the importance of open collaboration in the successful, global development of the world’s most important technologies. At this time, we have no information to believe recent Executive Orders regarding WeChat and TikTok will impact our analysis for open source collaboration. Our members and other participants in our project communities, which span many countries, are clear that they desire to continue collaborating with their peers around the world.

As a reminder, we would like to point anyone with questions to our prior blog post on US export regulations, which also links to our more detailed analysis of the topic. Both are available in English and Simplified Chinese for the convenience of our audiences.

Healthcare industry proof of concept successfully uses SPDX as a software bill of materials format for medical devices

Overview

Software Package Data Exchange (SPDX) is an open standard for communicating software bill of materials (SBOM) information that supports accurate identification of software components, explicit mapping of the relationships between components, and the association of security and licensing information with each component. The SPDX format has recently been submitted by the Linux Foundation and the Joint Development Foundation to ISO/IEC JTC 1 for approval as an international standard.

A group of thirteen healthcare industry organizations, composed of five medical device manufacturers and eight healthcare delivery organizations (hospital systems), recently participated in the first-ever proof of concept (POC) of the SPDX standard for healthcare use.

This blog post summarizes the results of that initial trial.

Why do we care about SBOMs and the medical device industry?

A Software Bill of Materials (SBOM) is a nested inventory, or list of ingredients, of the software components used in creating a device or system. Such an inventory is especially critical in the medical device industry and within healthcare delivery organizations for adequately understanding the operational and cyber risks that software components carry in from their originating supply chain.

Some cyber risks come from using components with known vulnerabilities. This problem is so widespread in the software industry that “Using Components with Known Vulnerabilities” appears in the Top 10 Web Application Security Risks from the Open Web Application Security Project (OWASP). Known vulnerabilities are especially concerning in medical devices, since exploiting them could lead to loss of life or maiming. One-time reviews don’t help, because these vulnerabilities are typically discovered after the component has been developed and incorporated. What is needed instead is ongoing visibility into the components of a medical device, similar to how food ingredients are made visible on a label.

A measured path towards using SBOMs in the medical device industry

In June 2018, the National Telecommunications and Information Administration (NTIA) engaged stakeholders across multiple industries to discuss software transparency and to participate in a limited proof of concept (POC) to determine if SBOMs can be successfully produced by medical device manufacturers and consumed by healthcare delivery organizations. That initial POC was successfully concluded in the early fall of 2019. 

Despite the limited scope, the NTIA POC results demonstrated that industry-agnostic standard formats can be leveraged by the healthcare vertical and that industry-specific formats are unnecessary. 

Next, the participants in the NTIA POC explored whether a standardized SBOM format could be used for sharing information between medical device manufacturers and healthcare delivery organizations. For this next phase, the NTIA stakeholders engaged the Linux Foundation’s SPDX community to work with the NTIA Healthcare working group. The goal was to demonstrate through a proof of concept whether the open source SPDX SBOM format would be suitable for healthcare and medical device industry uses. The first phase of that trial was conducted in early 2020.

Objectives of the 2020 POC

The stated goal of this 2020 proof of concept (POC) was to prove, for the medical device and healthcare industry, the viability of the framing document that the NTIA SBOM Working Group (to which the Linux Foundation contributed) produced during the earlier POC.

This NTIA framing document defines specific baseline data elements, or fields, that should be used to identify software components in any SBOM format, each of which maps to a corresponding field in SPDX:

NTIA Baseline         SPDX
Supplier Name         (3.5) PackageSupplier:
Component Name        (3.1) PackageName:
Unique Identifier     (3.2) SPDXID:
Version String        (3.3) PackageVersion:
Component Hash        (3.10) PackageChecksum:
Relationship          (7.1) Relationship: CONTAINS
Author Name           (2.8) Creator:
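
To make the mapping concrete, here is a minimal, hypothetical fragment of an SPDX tag:value document exercising those package fields; the package name, version, supplier, and checksum values are all invented for illustration:

PackageName: openssl
SPDXID: SPDXRef-Package-openssl
PackageVersion: 1.1.1g
PackageSupplier: Organization: ExampleSoft Inc.
# Checksum value shortened for illustration
PackageChecksum: SHA256: d2a84f4b8b650937...
Relationship: SPDXRef-Device CONTAINS SPDXRef-Package-openssl

Here SPDXRef-Device stands in for the SPDXID of the device’s own package entry, and the Creator: field lives in the document creation section rather than alongside the package.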

The 2020 POC conducted by the NTIA working group had a stated objective: determine whether SBOMs generated by medical device manufacturers (MDMs) using SPDX could be ingested into the SIEM (Security Information and Event Management) solutions operated by the participating healthcare delivery organizations (HDOs).

The MDMs in this POC were Abbott, Medtronic, Philips, Siemens, and Thermo Fisher. The HDOs were Cedars-Sinai, Christiana Care, Mayo Clinic, Cleveland Clinic, Johns Hopkins, New York-Presbyterian, Partners/Mass General, and Sutter Health.

Execution and implementation of the SPDX SBOMs

  • The participating HDOs provided an inventory of the deployed medical devices in use within their organizations.
  • A best-effort approach was used to determine software identity, because the names that software packages are known by are ambiguous and can be misinterpreted.
  • An example SPDX document was created, along with a guidance document for the MDMs to follow, for use with the medical devices identified in the HDO inventory exercise.
  • The MDMs produced 17 distinct SPDX-based SBOMs manually and with generator tooling.
  • The SBOMs were delivered via secure transfer using enterprise Box accounts, simulating delivery via secure customer portals offered by each MDM.

Consumption of the SBOMs in the SPDX POC

As a result of the 2020 POC, all participating HDOs successfully ingested the SPDX SBOM into their respective SIEM solutions, immediately making the data searchable to identify security vulnerabilities across a fleet of products. This information can also be converted into a human-readable, tabular format for other data analysis systems.

Multiple HDOs are already collaborating with vendor partners to explore direct ingestion into medical device asset/risk management solutions as part of their device procurement. One HDO is working with a vendor partner to explore direct ingestion into a healthcare Vendor Risk Management (VRM) solution, and another has developed a “How-To Guide” focused on how to correctly parse out the Packages fields using regular expressions (regex).
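
As a rough sketch of the kind of parsing such a guide describes (our own illustration, not the HDO’s actual guide; the sample data reuses the hypothetical values from the fragment shown earlier), a single regular expression can pull the baseline fields out of a tag:value SBOM and flatten them into tabular rows:

import re

# A few tag:value lines of the kind shown earlier (values are made up).
sbom_text = """\
PackageName: openssl
PackageVersion: 1.1.1g
PackageSupplier: Organization: ExampleSoft Inc.
PackageName: zlib
PackageVersion: 1.2.11
PackageSupplier: Organization: ExampleSoft Inc.
"""

# One pattern for three baseline fields; tag:value lines read "Tag: value".
FIELDS = ("PackageName", "PackageVersion", "PackageSupplier")
pattern = re.compile(rf"^\s*({'|'.join(FIELDS)}):\s*(.+)$", re.MULTILINE)

# Group matches into one row per package; each PackageName starts a new row.
packages, current = [], {}
for tag, value in pattern.findall(sbom_text):
    if tag == "PackageName" and current:
        packages.append(current)
        current = {}
    current[tag] = value
if current:
    packages.append(current)

# Emit a tab-separated view, the kind of flat data a SIEM can index.
for pkg in packages:
    print(pkg.get("PackageName", ""), pkg.get("PackageVersion", ""),
          pkg.get("PackageSupplier", ""), sep="\t")

A production pipeline would use a full SPDX parser and handle many more fields, but this shows why the tag:value format lends itself to the regex-based extraction the guide focuses on.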

As a positive indicator of SPDX’s suitability for use with asset management systems, two HDOs have begun configuring their internal tracking systems to track software dependencies and subcomponents. Additionally, multiple HDOs are collaborating with vendor partners to manage devices in medical device asset/risk management solutions throughout each device’s life, allowing for periodic updates and an audit trail.

Ongoing considerations for SPDX-based SBOMs for medical devices in healthcare organizations

Risk management, vulnerability management, and legal considerations are ongoing at the participating HDOs related to the use of SPDX-based SBOMs.

Risk management

All of the responding HDOs are exploring vulnerability identification both upon procurement (i.e., SIEM ingestion of the initial SBOM) and on an ongoing basis (i.e., SIEM, CMDB/CMMS, VRM). The participating HDOs intend to explore mitigation-plan and compensating-control exercises to identify vulnerable components, measure exploitability, implement risk reduction techniques, and document this data alongside the SBOM.

The SPDX community intends to learn from these exercises and to improve future versions of the SPDX specification to include whatever additional information proves necessary to manage risk effectively.

Vulnerability management at HDOs

One HDO is already working with its Biomed team to manually perform vulnerability management on information extracted from SBOM data.

Another is working with its Vulnerability Management team to evaluate SBOM data correlated against credentialed and non-credentialed scans of the same device, which may prove useful in information audit use cases. A third HDO is working with its Vulnerability Management team on leveraging SBOM data to supplement regular scanning results.

Participating HDOs have been developing SBOM product security language to add cybersecurity safeguards to the contract documentation.

Conclusion

The original POC was able to validate the conclusions of the NTIA Working Group that proprietary SBOM formats specific to healthcare industry verticals are not needed. This 2020 POC showed that the SPDX standard could be used as an open format for SBOMs for use by healthcare industry providers. Additionally, the ability to import the SPDX format into SIEM solutions will help HDOs adequately understand the operational and cyber risks of medical device software components from their originating supply chain. 

There is work ahead to improve automation of SPDX-based SBOMs, including the automated identification of software components and determining which component vulnerabilities are exploitable in a given system. Participating HDOs intend to perform compensating control exercises to identify and implement risk reduction techniques building on this information. HDOs are also evaluating how SPDX can support other improvements to vulnerability management. In summary, this POC showed that SPDX could be an essential part of addressing today’s operational and cyber risks.