“Almost always, great new ideas don’t emerge from within a single person or function, but at the intersection of functions or people that have never met before.” — Clayton M. Christensen
As the pace of technology and innovation continues to accelerate, we’re seeing more security issues emerge that have a wider and wider scope of impact. At the RSA Conference in April 2015, Amit Yoran opened his keynote with the statement, “We stand in the dark ages of Information Security. Things are getting worse not better.” He then challenged the audience to rethink how security is being done. In one of his closing thoughts, he pointed out:
“Threat intelligence is available to us, let’s leverage it in machine readable format for increased speed and agility. It should be operationalized into our environment, into your security operations and tailored to meet your organization’s needs. Align it with your organization’s assets and interests so the analysts can quickly respond and identify those threats which might matter most to the organization.”
One of the key reasons the pace of technology and innovation continues to accelerate is the pervasive use of open source software. Because of the licenses they choose, open source projects can build directly on one another’s work. This has spurred collaboration and a tremendous rate of innovation; as a result, however, the foundations and critical infrastructure are continually shifting and changing. Tracking these core and foundational pieces is a challenge. The Linux kernel is one core package that is easily identified; many others, however, play roles that are not as clear until something breaks. Layered on top of this are hidden dependencies between packages and version-specific behaviors. Another challenge is identifying developers who can fix security flaws in, and maintain, software projects created in the past. The Core Infrastructure Initiative (CII) program at the Linux Foundation was designed to identify core projects and improve the transparency of the health of open source projects, but there are still problems to overcome as new technologies emerge.
Software and information security has become its own specialized field, with its own language and processes, as well as documented best practices for identifying problems, finding fixes, and designing strategies to improve security. NIST’s National Vulnerability Database, which tracks Common Vulnerabilities and Exposures (CVEs), provides a key piece of infrastructure for coordinating existing efforts and linking vulnerabilities to specific products through Common Platform Enumerations (CPEs). Unfortunately, motivated people are always looking for ways to exploit bugs and take advantage of gaps between the open source components that make up products.
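To make the CVE-to-product link concrete, here is a minimal sketch of how CPE identifiers can tie a vulnerability to the components in a product inventory. The matching logic is deliberately simplified (real CPE matching, defined by NIST, handles ranges and richer wildcard semantics), and the inventory entries and vulnerable CPE below are made-up examples, not real NVD data.

```python
# Sketch: matching a CPE 2.3 identifier against a product inventory.
# Simplified matching; real CPE name matching is defined by NIST and
# is more involved. All CPE strings here are illustrative.

def parse_cpe23(cpe: str) -> dict:
    """Split a CPE 2.3 formatted string into its 13 named components."""
    fields = ["prefix", "cpe_version", "part", "vendor", "product",
              "version", "update", "edition", "language",
              "sw_edition", "target_sw", "target_hw", "other"]
    return dict(zip(fields, cpe.split(":")))

def matches(vuln_cpe: str, inventory_cpe: str) -> bool:
    """A component matches if every concrete field agrees; '*' is a wildcard."""
    a, b = parse_cpe23(vuln_cpe), parse_cpe23(inventory_cpe)
    return all(a[k] in ("*", b[k]) or b[k] == "*" for k in a)

# Hypothetical product inventory expressed as CPE 2.3 strings.
inventory = [
    "cpe:2.3:a:openssl:openssl:1.0.1f:*:*:*:*:*:*:*",
    "cpe:2.3:a:apache:httpd:2.4.10:*:*:*:*:*:*:*",
]

# A CPE as it might appear in a (hypothetical) CVE entry.
vulnerable = "cpe:2.3:a:openssl:openssl:1.0.1f:*:*:*:*:*:*:*"

affected = [c for c in inventory if matches(vulnerable, c)]
```

Because the identifiers are machine readable, this kind of scan can run automatically across an entire product portfolio every time a new CVE is published.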
Proactive identification and avoidance of security issues at the product level is going to be needed. But let’s face it, there’s always going to be a bug that slips through and needs to be fixed once products are deployed in the field. A clear understanding of the provenance of ALL the software that makes up a product will be key to rapidly assessing what needs to be fixed and by whom. Consumer products today include software applications, the underlying operating system those applications run on, and the firmware that interfaces the system software to the hardware; each of these can also be influenced by the software used to build specific instances.
In manufacturing, supply chain management for safety-critical devices (such as cars and medical equipment) has many processes in place so that every hardware component can be traced back to its original source efficiently. When problems occur, the faulty component can be isolated and a remediation process (such as a recall) put into place. The trend of shifting increasing amounts of functionality from hardware to software allows innovation to occur at a rapid pace, but it also creates challenges for accurate supply chain tracking.
Today, the processes for tracking software origins to this level have not been standardized effectively across the industry with a common language that identifies dependencies and vulnerabilities to the needed level of detail. Most of the key information is contained in the build options used when a binary is created, but what gets tracked is usually the binary itself (as a product). Reconstructing which specific version of the sources was used, determining the software dependencies (linked libraries, etc.), and knowing which compiler did the build can be difficult once a problem is identified in a product and people are scrambling to find a solution quickly. Joshua Corman’s talk at the Øredev conference in November 2014 provided some compelling examples to illustrate the argument that, from a security perspective, it’s time for a software supply chain.
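The reconstruction problem largely disappears if provenance is captured at build time rather than reverse-engineered afterward. The sketch below records the pieces mentioned above (source hashes, compiler, build flags, linked libraries) alongside a binary; the manifest layout and field names are illustrative, not any existing standard.

```python
# Sketch: recording minimal build provenance at build time, so the
# exact sources, toolchain, and dependencies can be recovered later.
# The manifest layout and all names here are illustrative assumptions.

import hashlib

def sha256_of(path: str) -> str:
    """Content hash that pins the exact source version used in the build."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_manifest(source_files, compiler, flags, libraries):
    return {
        "sources": {p: sha256_of(p) for p in source_files},
        "compiler": compiler,          # e.g. "gcc 4.9.2"
        "flags": flags,                # exact build options used
        "linked_libraries": libraries, # dependency names and versions
    }

# Demo with a throwaway source file:
with open("hello.c", "w") as f:
    f.write("int main(void) { return 0; }\n")

manifest = build_manifest(
    ["hello.c"], "gcc 4.9.2", ["-O2"], ["libc 2.19"],
)
```

Shipping such a manifest with every binary means that when a CVE lands, the question “does this product contain the affected component?” becomes a lookup rather than an investigation.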
Open source license compliance faces a similar set of problems in keeping up with the rate of change. Gartner estimates that “By 2016, at least 95 percent of IT organizations will leverage nontrivial elements of open source technology in their mission-critical IT portfolios, and fewer than 50 percent of organizations will have implemented an effective strategy for procuring and managing open source.” All too often, the developers creating an application, service, or library are building on work done by others.
They may be unaware of the licenses and their obligations, and just want to get the functionality working by the project deadline. Agile development cycles, for example, focus on the next goal and fuel rapid innovation. Accurately tracking licenses and investigating whether they are compatible with the use case can become an afterthought, if it happens at all.
These three areas (security, product manufacturing, and license compliance) share a common problem: the processes in place today are not keeping up with the rate of change. Some partial solutions are now emerging, based on the recognition made six years ago by the SPDX team that licensing and copyright information needs to be tracked at the software file level. Being able to clearly articulate the relationship between sources and their binaries is important for drawing the connection. Information about the aggregation of software that makes up a release, the patches applied after the initial release, and so forth is an important part of the solution as well.
Making it easy to share accurate licensing and security information through the supply chain needs to be the goal. Ensuring that the information is accurate and can be collected automatically will be necessary to keep up with the rate of change. SPDX 2.0 is an open standard developed by teams from the legal, business, and technical communities to help with this supply chain and license tracking problem. However, it has been missing a clean way to link into the rich language and infrastructure that is important for tracking security. Once a security problem is identified in a software component, tracking the scope of impact must be automated so that all products that may contain the component (even indirectly) can be notified. To help with this, SPDX 2.1 is looking at adding links to NIST’s CPEs and other emerging security and software assurance standards, permitting accurate mining of that information as well.
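As a sketch of what such a link could look like, the snippet below emits a fragment in the SPDX tag-value style that ties a package’s license information to a CPE identifier in a single record. The external-reference line reflects the kind of security link under discussion for SPDX 2.1; the exact field names and the package details are illustrative assumptions, not final spec.

```python
# Sketch: an SPDX-style tag-value fragment linking a package to a CPE,
# so license data and security identifiers travel together through the
# supply chain. Field names and package details are illustrative.

def spdx_package_fragment(name, version, license_id, cpe):
    lines = [
        f"PackageName: {name}",
        f"PackageVersion: {version}",
        f"PackageLicenseConcluded: {license_id}",
        # Security link in the style proposed for SPDX 2.1:
        f"ExternalRef: SECURITY cpe23Type {cpe}",
    ]
    return "\n".join(lines)

fragment = spdx_package_fragment(
    "openssl", "1.0.1f", "OpenSSL",
    "cpe:2.3:a:openssl:openssl:1.0.1f:*:*:*:*:*:*:*",
)
```

With both kinds of identifier in one machine-readable document, the same record that answers “what license obligations do we have?” can also answer “which of our products are exposed to this CVE?”.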
If you have thoughts on how to help make this automatable tracking of security, licensing, and copyright information available to the supply chain, ideas are most welcome. We’ll be holding a Supply Chain Mini-Summit in Dublin on Oct. 8th, and those interested in exploring this further are welcome to attend.