Companies in diverse industries are increasingly building applications designed to run in the cloud at a massive, distributed scale. That means they are also seeking talent with experience deploying and managing such cloud native applications using containers in microservices architectures.

Kubernetes has quickly become the most popular container orchestration tool, according to The New Stack, and demand for IT practitioners skilled in Kubernetes has surged along with it, making it a hot new area for career development. Apprenda, which runs its PaaS on top of Kubernetes, reported a spike in Kubernetes job postings this summer, and the need is only growing.

To meet this demand, The Linux Foundation and the Cloud Native Computing Foundation today announced they have partnered to provide training and certification for Kubernetes.  

The Linux Foundation will offer training through a free, massive open online course (MOOC) on edX as well as a self-paced, online course. The MOOC will cover the introductory concepts and skills involved, while the online course will teach the more advanced skills needed to create and configure a real-world working Kubernetes cluster.

The training course will be available soon, and the MOOC and certification program are expected in 2017. Pre-registration for the course is open now at the discounted price of $99 (regularly $199) for a limited time. Sign up here to pre-register.

The course curriculum will also be open source and available on GitHub, Dan Kohn, CNCF Executive Director, said in his keynote today at CloudNativeCon in Seattle.

Certification will be offered by Kubernetes Managed Service Providers (KMSP) trained and certified by the CNCF. Nine companies with experience helping enterprises successfully adopt Kubernetes are committing engineers to participate in a CNCF working group that will develop the certification requirements. These early supporters include Apprenda, Canonical, Cisco, Container Solutions, CoreOS, Deis, Huawei, LiveWyer, and Samsung SDS. The companies are also interested in becoming certified KMSPs once the program is available next year.

Kubernetes is a software platform that makes it easier for developers to run containerized applications across diverse cloud infrastructures — from public cloud providers to on-premises clouds and bare metal. Core functions include scheduling, service discovery, remote storage, autoscaling, and load balancing.
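To make the idea of those core functions concrete, here is a heavily simplified conceptual sketch, not the real Kubernetes API or scheduler: all names are made up for illustration. The heart of an orchestrator is a reconcile loop that compares desired state with observed state and takes actions to close the gap.

```python
# Conceptual sketch only: NOT the Kubernetes API. It illustrates the
# reconcile-loop pattern at the heart of container orchestration.

def reconcile(desired_replicas, running, nodes):
    """Return (node, action) pairs that move `running` toward `desired_replicas`.

    running: dict of node name -> replica count currently on that node
    nodes:   list of node names available for scheduling
    """
    actions = []
    total = sum(running.get(n, 0) for n in nodes)
    while total < desired_replicas:
        # Naive scheduler: place each new replica on the least-loaded node.
        target = min(nodes, key=lambda n: running.get(n, 0))
        running[target] = running.get(target, 0) + 1
        actions.append((target, "start"))
        total += 1
    while total > desired_replicas:
        # Scale down from the most-loaded node first.
        target = max(nodes, key=lambda n: running.get(n, 0))
        running[target] = running.get(target, 0) - 1
        actions.append((target, "stop"))
        total -= 1
    return actions
```

A real orchestrator layers service discovery, storage, and load balancing on top of this loop, but reconciliation is the common thread behind scheduling and autoscaling.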

Google originally engineered the software to manage containers on its Borg infrastructure, but open sourced the project in 2014 and donated it earlier this year to the Cloud Native Computing Foundation at The Linux Foundation. It is now one of the most active open source projects on GitHub and has been one of the fastest growing projects of all time with a diverse community of contributors.

“Kubernetes has the opportunity to become the new cloud platform,” said Sam Ghods, a co-founder and Services Architect at Box, in his keynote at CloudNativeCon. “We have the opportunity to do what AWS did for infrastructure but this time in an open, universal, community-driven way.”

With more than 170 user groups worldwide, it’s already easy to hire people who are experts in Kubernetes, said Chen Goldberg, director of engineering for the Container Engine and Kubernetes team at Google, in her keynote at CloudNativeCon.

The training and certification from CNCF and The Linux Foundation will go even further to help develop the pool of Kubernetes talent worldwide.

Pre-register now for the online, self-paced Kubernetes Fundamentals course from The Linux Foundation and pay only $99 ($100 off registration)!

This week in Linux and open source news, Facebook announces new networking hardware technology, The Linux Foundation’s board expands with three new additions, and more! Get up to speed on the latest headlines with this weekly digest:



Facebook wants to change the data center hardware market and is one step closer after releasing its new Voyager device. Learn more in Jonathan Vanian’s latest Fortune article.

1) Facebook announces creation of a new type of hardware to be used to send data “quickly across long distances and multiple data centers.”

Facebook Just Created Some Fancy New Networking Technology – Fortune

2) The Linux Foundation welcomes Erica Brescia (Bitnami), Jeff Garzik (Bloq), and Nithya A. Ruff (Western Digital) to its board of directors.

The Linux Foundation Adds Three New Members to Board of Directors – EconoTimes

3) Amazon Web Services “is becoming more open to private and hybrid cloud scenarios.”

AWS Reveals On-Premises Linux Test Environment – Computer Business Review

4) Ubuntu Core 16 for IoT features Linux self-patching.

Ubuntu Core Snaps Door Shut on Linux’s New Dirty COWs – The Register

5) New Linux Foundation course on edX focuses on “three basic principles of DevOps.”

Linux Foundation Launches Online DevOps Course in Move to Increase Experience – RCR Wireless News

The Linux Foundation’s Hadoop project, ODPi, and Enterprise Strategy Group (ESG) are teaming up on November 7 for a can’t-miss webinar for Chief Data Officers and their Big Data teams.



Join ESG analyst Nik Rouda and ODPi Director John Mertic for “Taking the Complexity out of Hadoop and Big Data” to learn:

  1. How ODPi pulls complexity out of Hadoop, freeing enterprises and their vendors to innovate in the application space

  2. How CDOs and app vendors port apps easily across cloud, on-prem, and Hadoop distros. Nik reveals ESG’s latest research on where enterprises are deploying net new Hadoop installs across on-premises, public, private, and hybrid cloud

  3. What big data industry leaders are focusing on in the coming months

Removing Complexity

As ESG’s Nik Rouda observes, “Hadoop is not one thing, but rather a collection of critical and complementary components. At its core are MapReduce for distributed analytics jobs processing, YARN to manage cluster resources, and the HDFS file system. Beyond those elements, Hadoop has proven to be marvelously adaptable to different data management tasks. Unfortunately, too much variety in the core makes it harder for stakeholders (and in particular, their developers) to expand their Hadoop-enhancing capabilities.”
The ODPi Compliant certification program ensures greater simplicity and predictability for everyone downstream of Hadoop Core – SIs, app vendors, and end users.

Application Portability

ESG reveals their latest findings on how enterprises are deploying Hadoop, and you may be surprised at the percentage moving to the cloud. Find out who’s deploying on-premises (dedicated and shared), who’s using pre-configured on-prem infrastructure, and what percentage is moving to private, public, and hybrid cloud.

Where Industry Leaders are Headed

ESG interviewed leaders like Capgemini, VMware, and more as part of this ODPi research – let their thinking light your way as you develop your Hadoop and Big Data strategy.

Reserve your spot for this informative webinar. 

As a bonus, all registrants will receive a free copy of Nik’s latest Big Data report.

Open source careers may be even more in demand and rewarding in Europe than the rest of the world, according to new data from the 2016 Open Source Jobs Report released today by The Linux Foundation and Dice. European open source pros are more confident in the job market, get more incentives from employers, and more calls from recruiters than their counterparts worldwide, according to the data.

The full report, released earlier this year, analyzed trends for open source careers and the motivations of professionals in the industry. Now, the data have been broken down to focus specifically on responses from more than 1,000 open source professionals in Europe, and how they compare to respondents from around the world.

“European technology professionals, government organizations and corporations have long embraced open source,” said Jim Zemlin, executive director at The Linux Foundation, in a press release. “The impressive levels of adoption of and respect for open source clearly have translated into more demand for qualified open source professionals, providing strong opportunities for developers, DevOps professionals, and others.”

Europeans are more confident than their global counterparts in the open source job market, according to the data. Sixty percent of open source pros in Europe believe it would be fairly or very easy to find a new position this year, as opposed to only 50 percent elsewhere in the world.

Employers in Europe are also offering more incentives to hold onto staff. Forty percent of European open source professionals report that in the past year they have received a raise, 27 percent report improved work-life balance, and 24 percent report more flexible schedules. This compares to 31 percent globally reporting raises, and 20 percent globally reporting either a better work-life balance or more flexible work schedules. Overall, only 26 percent of Europeans stated their employer had offered them no new incentives this year, compared to 33 percent globally.

And recruiters are more active in seeking open source talent in Europe. Fifty percent of Europeans reported receiving more than 10 calls from recruiters in the six months prior to the survey, while only 22 percent of respondents worldwide reported that many calls. And while 27 percent of respondents worldwide received no calls from recruiters, only five percent of Europeans said the same.

Application development and DevOps skills are in high demand in Europe, as in the rest of the world. App development, however, was in higher demand in Europe, with 23 percent of European open source professionals reporting it as the most in-demand skill, compared with 11 percent of professionals elsewhere. DevOps was the most in-demand skill worldwide, at 13 percent, but second among Europeans at 12 percent.

Regardless of where they live in the world, however, all open source professionals said they enjoy working on interesting projects more than anything. Thirty-four percent in Europe, compared with 31 percent globally, agreed this was the best thing about their jobs. However, while respondents around the world said the next best things were working with cutting-edge technology (18 percent) and collaboration with a global community (17 percent), European professionals selected job opportunities second at 17 percent, followed by both cutting-edge technologies and collaboration tied at 16 percent each. Five percent of European respondents said money and perks were the best part of their job, more than double the two percent who chose this response worldwide.

For more information about the worldwide open source jobs market, download the free 2016 Open Source Jobs Report.


Leading open source technologists from Cloudera, Hortonworks, Uber, Red Hat, and more are set to speak at Apache: Big Data and ApacheCon Europe, taking place Nov. 14-18 in Seville, Spain. The Linux Foundation today announced keynote speakers and sessions for the co-located events.

Apache: Big Data Europe, Nov. 14-16, gathers the Apache projects, people, and technologies working in Big Data, ubiquitous computing, data engineering, and data science to educate, collaborate, and connect in a completely project-neutral environment. It is the only event that brings together the full suite of Big Data open source projects, including Apache Hadoop, Cassandra, CouchDB, Spark, and more.

The event will feature more than 100 sessions covering the issues, technologies, techniques, and best practices that are shaping the data ecosystem across a wide range of industries including finance, business, manufacturing, government and academia, media, energy, and retail.

Keynote speakers at Apache: Big Data include:

  • Mayank Bansal, Senior Engineer, Big Data, Uber

  • Stephan Ewen, CTO, Data Artisans

  • Alan Gates, Co-Founder, Hortonworks

  • John Mertic, Director, Program Management, ODPi

  • Sean Owen, Director of Data Science, Cloudera

View the full Apache Big Data schedule.

Registration for Apache: Big Data Europe is discounted to $499 through October 3. Register Now! Those interested in also attending ApacheCon can add that to their Apache: Big Data registration for only $399. Diversity and needs-based scholarship applications are also being accepted. Apply now for a scholarship.


ApacheCon, Nov. 16-18, is the annual conference of The Apache Software Foundation and brings together the Apache and open source community to learn about and collaborate on the technologies and projects driving the future of open source, web technologies and cloud computing.

The event will contain tracks and mini-summits dedicated to specific Apache projects organized by their respective communities. In addition, ApacheCon Europe will host complimentary tracks, including Apache Incubator/Innovation, Future of Web, and Community, as well as hackathons, lightning talks, and BarCampApache.

Session highlights include:

  • Building a Container Solution on Top of Apache CloudStack – Paul Angus, VP Technology & Cloud Architect, ShapeBlue

  • Practical Trademark Law For FOSS Projects – Shane Curcuru, VP Brand Management, The Apache Software Foundation

  • Building Inclusive Communities – Jan Lehnardt, Vice President, Apache CouchDB

  • Building Apache HTTP Server; from Development to Deployment – William Rowe, Jr., Staff Engineer, Pivotal

  • If You Build It, They Won’t Come – Ruth Suehle, Community Marketing Manager, Red Hat

View the full lineup of ApacheCon sessions.

Registration for ApacheCon is discounted to $499 through Oct. 3. Register Now! Or Apply for diversity and needs-based scholarships. Those interested in also attending Apache: Big Data can add on that event for an additional $399.

Organizations use open source software to gain competitive advantage in many ways: to speed up software delivery, to save money on development, to stay flexible, and to stay on the leading edge of technology.

But using open source software, and especially integrating and redistributing it in products and services, carries with it added complexity and risk. Code coming in from multiple sources, under different licenses and with varied quality and maturity levels, can expose organizations to issues with security, integration, support and management — not to mention legal action — if the code is not properly managed.

That’s why companies that successfully leverage open source for business advantage have established programs to manage their open source development processes.

“When open source is business critical, it predicates the use of professional open source management,” said Bill Weinberg, senior director and analyst of open source strategy at The Linux Foundation. “You need a clear management strategy that aligns with your business goals. And you need efficient processes to ensure that compliance does not discourage participation.”

Professional open source management requires a clear strategy, driven by your organization’s business objectives. It includes well-defined policies and a set of efficient processes that help an organization deliver consistent results with open source software. Below are the seven dimensions of a good corporate open source policy and processes, provided by Weinberg and Greg Olson, senior director of open source consulting services at The Linux Foundation.

Want to learn more about professional open source management? Watch a free replay of Bill Weinberg and Greg Olson’s recent webinar, “Open Source Professional Management – When Open Source Becomes Mission-Critical.” Watch Now.

7 dimensions of open source management

1. Discovery – Provide guidance for developers on how to find and evaluate open source software for use in their work.

2. Review and Approval – A checkpoint to review architectural compatibility, code quality and maturity, known bugs and security vulnerabilities, availability of required support, and license compatibility.

3. Procurement practices – Review and approval for code that enters through commercial procurement, rather than downloading from the Internet.

4. Code management and maintenance – Ensures that open source is reliably tracked and archived, and that it is supported and maintained at a level appropriate for each application.

5. Community interaction – Clear guidelines for developers who interact with outside community members, and an approval process for contributions to open source communities.

6. Compliance program – Ensures that OSS elements subject to license requirements are identified and that those requirements are met.

7. Executive oversight – Important for long-term success. Executives should review OSS management operations, participate in and approve open source management policy, and approve policy exceptions and significant contributions to community projects. Legal executives should review all new OSS licenses and any licensing policy exceptions.

The first cloud native-focused event hosted by The Cloud Native Computing Foundation will gather leading technologists from open source cloud native communities in Toronto on Aug. 25, 2016, to further the education and advancement of cloud native computing.

Co-located with LinuxCon and ContainerCon North America, CloudNativeDay will feature talks from IBM, 451 Research, CoreOS, Red Hat, Cisco and more. For a sneak peek at the event’s speakers and their presentations, read on.

For readers only: get 20% off your CloudNativeDay tickets with code CND16LNXCM. Register now.

Scaling Containers from Sandbox to Production

There is an industry IT renaissance occurring as we speak around cloud, data, and mobile technology, and it’s driven by open source code, community, and culture.

IBM’s VP Cloud Architecture & Technology, Dr. Angel Diaz, opens up CloudNativeDay with a keynote on “Scaling Containers from Sandbox to Production,” where he will discuss how the digital disruption in today’s market is largely driven by containers and other open technologies. With a container-centric approach, developers are able to quickly stand up containers, iterate, and change their architectures. Dr. Diaz will provide insight on how enterprises are able to transform the way they grow, maintain, and rapidly expand container and microservice-based applications across multiple clouds. Dr. Diaz will also discuss the role of CNCF in creating a new set of common container management technologies informed by technical merit and end user value.

Real-World Examples of Containers and Microservices Architectures

Containers and microservices, two of the fastest-growing trends in technology, are enabling DevOps. With rapid growth comes rapid confusion. Who is using the technology? How did they build their architectures? What is the ROI of the technology?

Having real-world examples of how leading-edge companies are building containers and microservices architectures will help answer these burning questions. Donnie Berkholz, Research Director of 451 Research’s Development, DevOps, & IT Ops channel, will provide these examples in his talk, “Cloud Native in the Enterprise: Real-World Data on Container and Microservice Adoption.”

Berkholz’s current research is steeped in the latest innovative technologies employed for software development and software life cycle management to drive business growth. His research will shape this session exploring the state of cloud-native prerequisites in the enterprise, the container ecosystem including current adoption, and data on companies moving to cloud-native platforms.

When Security and Cloud Native Collide

In one world, the cloud native approach is redefining how applications are architected, throwing many traditional assumptions out of the window. In the other world, traditional security teams ensure projects in the enterprise meet a rigid set of security rules in order to proceed. What happens when these two worlds collide?

Apprenda Senior Director Joseph Jacks, Box Site Reliability Engineer Michael Ansel, and Tigera Founder and CEO Christopher Liljenstolpe join forces to discuss “Whither Security in a Cloud-Native World?”

This panel will dive into how applications will be secured, who will define security policies, and how these policies will be enforced across hybrid environments – both private and public clouds, and both traditional bare metal/VM and cloud-native, containerized workloads.

Peek Inside The Cloud Foundry Service Broker API

Services are integral to the success of a platform. For Cloud Foundry, the ability to connect to and manage services is a crucial piece of its platform.

Abby Kearns, VP of industry strategy for Cloud Foundry Foundation, will discuss why they created a cross-foundation working group with The Cloud Native Computing Foundation to determine how the Cloud Foundry Service Broker API can be opened up and leveraged as an industry-standard specification for connecting services to platforms.

In her presentation, “How Cloud Foundry Foundation & Cloud Native Computing Foundation Are Collaborating to Make the Cloud Foundry Service Broker API the Industry Standard,” Kearns will share the latest progress on a proof of concept that allows services to write against a single API, and be accessible to a variety of platforms.

Innovative Open Source Strategies Key to Cloud Native in the Enterprise

As IT spending on cloud services reaches $114 billion this year and grows to $216 billion in the year 2020 (according to a report released by Gartner), cloud-native apps are becoming commonplace across enterprises of all sizes.

Enterprises are investing in people and process to enable cloud native technologies. Adoption of collaborative and innovative open source technologies has become a key factor in their success, according to Chris Wright, Vice President and Chief Technologist of Red Hat.

Wright’s closing keynote at CloudNativeDay, “Bringing Cloud Native Innovations into the Enterprise,” will discuss the open source strategies and organizations driving this success. After more than a decade serving as a Linux kernel developer working on security and virtualization, Wright understands the importance of ensuring industry collaboration on common code bases, standardized APIs, and interoperability across multiple open hybrid clouds.


Read more on CloudNativeDay. Save 20% when using code CND16LNXCM and register now.


The Xen Project’s code contributions have grown more than 10 percent each year. Although growth is extremely healthy for the project as a whole, it brings growing pains. For the Xen Project, it led to issues with its code review process: maintainers believed that their review workload had increased, and a number of vendors claimed that it took significantly longer for contributions to be upstreamed than in the past.

The project developed some basic scripts that correlated development list traffic with Git commits, which confirmed that it was indeed taking longer for patches to be committed. The project then ran a number of surveys to identify possible root causes for the slowdown. Unfortunately, many of the observations made by community members contradicted each other and were thus not actionable. To solve this problem, the Xen Project worked with Bitergia, a company that focuses on analyzing community software development processes, to better understand and address the issues at hand.
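To give a flavor of what correlating list traffic with Git commits involves, here is a hypothetical, heavily simplified Python sketch. The function names and the subject-matching heuristic are illustrative only, not the Xen Project's actual scripts; real patch tracking has to handle series, versions, and threading.

```python
import re

def normalize(subject):
    """Strip list tags, patch markers, and Re: prefixes so subjects compare."""
    s = subject.lower()
    s = re.sub(r"\[[^\]]*\]", "", s)          # drop [Xen-devel], [PATCH v2 3/7]
    s = re.sub(r"^(re:\s*)+", "", s.strip())  # drop leading Re:
    return re.sub(r"\s+", " ", s).strip()

def match_reviews_to_commits(review_subjects, commit_subjects):
    """Link mailing-list review threads to Git commits by normalized subject."""
    commits = {normalize(c): c for c in commit_subjects}
    matched, unmatched = {}, []
    for r in review_subjects:
        key = normalize(r)
        if key in commits:
            matched[r] = commits[key]
        else:
            unmatched.append(r)  # e.g. patches cross-posted from other projects
    return matched, unmatched
```

Unmatched reviews are exactly the kind of residue the interview below discusses: patches reviewed on the list that never landed in the tree.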

We recently sat down with Lars Kurth, who is the chairperson for the Xen Project, to discuss the overall growth of the Xen Project community as well as how the community was able to improve its code review process through software development analytics.

Like many FOSS projects, the Xen Project uses a mailing list-based code review process, and its experience could be a good blueprint for projects that find themselves in the same predicament.

Why has there been so much growth in the Xen Project community?

Lars Kurth: The Xen Project hypervisor powers some of the biggest cloud computing companies in the world, including Alibaba’s Aliyun Cloud Services, Amazon Web Services, IBM Softlayer, Tencent, Rackspace and Oracle (to name a few).

It is also being increasingly used in new market segments such as the automotive industry, embedded, mobile, and IoT. It is a platform of innovation that is consistently being updated to fit the new needs of computing, with commits coming from developers across the world. We’ve experienced incredible community growth of 100 percent in the last five years. A lot of this growth has come from new geographic locations — most of it from China and Ukraine.

How did the project notice that there might be an issue, and how did people respond?

Lars Kurth: In mid-2014, maintainers started to notice that their review workload had increased. At the same time, some contributors noticed that it took longer to get their changes upstreamed. We first developed some basic scripts to prove that the total elapsed time from first code review to commit had indeed increased. I then ran a number of surveys to form a working thesis on the root causes.
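The metric behind those scripts, elapsed time from first review to commit, is simple to state. A minimal sketch (with made-up dates, not actual Xen Project data) might look like this:

```python
from datetime import datetime
from statistics import median

def time_to_merge_days(first_review, committed):
    """Days between the first review email and the commit (ISO date strings)."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(committed, fmt) - datetime.strptime(first_review, fmt)).days

def median_time_to_merge(series):
    """series: list of (first_review_date, commit_date) pairs for patch series."""
    return median(time_to_merge_days(a, b) for a, b in series)
```

A median (rather than a mean) is the more robust summary here, since a few very complex series can take far longer than typical patches.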

In terms of response, there were a lot of differing opinions on what exactly was causing the process to slow down. Some thought that we did not have enough maintainers, some thought we did not have enough committers, others felt that the maintainers were not coordinating reviews well enough, while others felt that newcomers wrote lower quality code or there could be cultural and language issues.

Community members made a lot of assumptions based on their own worst experiences, without facts to support them. There were so many contradictions among the group that we couldn’t identify a clear root cause for what we saw.

What were some of your initial ideas on how to improve this, and why did you eventually choose to work with Bitergia for open analytics of the review process?

Lars Kurth: We first took a step back and looked at some things we could do that made sense without a ton of data. For example, I developed a training course for new contributors. I then did a road tour (primarily to Asia) to build personal relationships with new contributors and to deliver the new training.

In the year before, we started experimenting with design and architecture reviews for complex features. We decided to encourage these more without being overly prescriptive. We highlighted positive examples in the training material.

I also kicked off a number of surveys around our governance, to see whether we had scalability issues. Unfortunately, we didn’t have any data to support this, and as expected, different community members had different views. We did change our release cadence from 9-12 months to 6 months, to make it less painful for contributors if a feature missed a release.

It became increasingly clear that to make true progress, we would need reliable data, and to get that we needed to work with a software development analytics specialist. I had watched Bitergia for a while and made a proposal to the Xen Project Advisory Board to fund development of metrics collection tools for our code review process.

How did you collect the data (including what tools you used) to get what you needed from the mailing list and Git repositories?

Lars Kurth: We used existing tools such as MLStats and CVSAnalY to collect mailing list and Git data. The challenge was to identify the different stages of a code review in the database generated by MLStats and to link it to the Git activity database generated by CVSAnalY. After that step, we ended up with a combined code review database and ran statistical analysis over it. Quite a bit of plumbing and filtering had to be developed from scratch for that to take place.

Were there any challenges that you experienced along the way?

Lars Kurth: First we had to develop a reasonably accurate model of the code review process. This was rather challenging, as e-mail is essentially unstructured. Also, I had to act as a bridge between Bitergia, which implemented the tools, and the community. This took a significant portion of time; however, without spending that time, it is quite likely the project would have failed.

To de-risk the project, we designed it in two phases: the first phase focused on statistical analysis that allowed us to test some theories; the second phase focused on improving the accuracy of the tools and making the data accessible to community stakeholders.

What were your initial results from the analysis?

Lars Kurth: There were three key areas that we found were causing the slow down:

  • Huge growth in comment activity from 2013 to 2015

  • The time it took to merge patches (“time to merge”) increased significantly from 2012 to the first half of 2014. From the second half of 2014, however, time to merge moved back to its long-term average. This was a strong indicator that the measures we took actually had an effect.

  • Complex patches were taking significantly longer to merge than small patches. As it turns out, a significant number of new features were actually rather complex. At the same time, the demands on the project to deliver better quality and security had also raised the bar for what could be accepted.

How did the community respond to your data? How did you use it to help you make decisions about what was best to improve the process?

Lars Kurth: Most people were receptive to the data, but some were concerned that we were only able to match 60 percent of the code reviews to Git commits. For the statistical analysis, this was a big enough sample.

Further investigation showed that the main factor behind this low match rate was cross-posting of patches across FOSS communities. For example, some QEMU and Linux patches were cross-posted for review on the Xen Project mailing lists, but the code did not end up in Xen. Once this was understood, a few key people in the community started to see the potential value of the new tools.

This is where stage two of the project came in. We defined a set of use cases and supporting data that broadly covered three areas:

  • Community use cases to encourage desired behavior: metrics such as real review contributions (not just ACKed-by and Reviewed-by flags), comparing review activity against contributions.

  • Performance use cases that would allow us to spot issues early: these would let us filter time-related metrics by a number of different criteria, such as the complexity of a patch series.

  • Backlog use cases to optimize process and focus: the intention here was to give contributors and maintainers tools to see which reviews are active, nearly complete, complete, or stale.

How have you made improvements based on your findings, and what have been the end results for you?

Lars Kurth: We had to iterate on the use cases, the data supporting them, and how the data is shown. I expect that process will continue as more community members use the tools. For example, we realized that the code review process dashboard developed as part of the project is also useful for vendors to estimate how long it will take to get something upstreamed, based on past performance.

Overall, I am very excited about this project, and although the initial contract with Bitergia has ended, we have an Outreachy intern working with Bitergia and me on the tools over the summer.

How can this analysis support other projects with similar code review processes?

Lars Kurth: I believe that projects like the Linux kernel and others that use e-mail based code review processes and Git should be able to use and build on our work. Hopefully, we will be able to create a basis for collaboration that helps different projects become more efficient and ultimately improve what we build.


Co-authored by Dr. David A. Wheeler

Everybody loves getting badges. Fitbit badges, Stack Overflow badges, Boy Scout merit badges, and even LEED certification are just a few examples that come to mind. A recent 538 article, “Even psychologists love badges,” publicized the value of a badge.


[Image: CII badge, Core Infrastructure Initiative Best Practices]

GitHub now has a number of specific badges for things like test coverage and dependency management, so for many developers they’re desirable. IBM has a slew of certifications for security, analytics, cloud and mobility, Watson Health and more. 

Recently, The Linux Foundation joined the trend with the Core Infrastructure Initiative (CII) Best Practices Badges Program.

The free, self-service Best Practices Badges Program was designed with the open source community. It provides criteria and an automated assessment tool for open source projects to demonstrate that they are following security best practices.

It’s a perfect fit for CII, which comprises technology companies, security experts, and developers, all of whom are committed to working collaboratively to identify and fund critical open source projects in need of assistance. The badging project is an attempt to “raise all boats” in security by encouraging projects to follow best practices for OSS development.  We believe projects that follow best practices are more likely to be healthy and produce secure software. 

Here’s more background on the program and some of the questions we’ve recently been asked.

Q: Why badges?

A: We believe badges encourage projects to follow best practices and, we hope, produce better results. The badges will:

  • Help new projects learn what those practices are (think training in disguise).

  • Help users know which projects are following best practices (so users can prefer such projects).  

  • Act as a signal to users. If a project has achieved a number of badges, it will inspire a certain level of confidence among users that the project is being actively maintained and is more likely to consistently produce good results.

Q: Who gets a badge?  Is this for individuals, projects, sites?

A: The CII best practices badge is for a project, not for an individual.  When you’re selecting OSS, you’re picking the project, knowing that some of the project members may change over time.

Q: Can you tell us a little about the “BadgeApp” web application that implements this?

A: “BadgeApp” is a simple web application that implements the criteria as a fill-in form.  It’s OSS, with an MIT license.  All the required libraries are OSS and legal to add; we check this using license_finder.

Our overall approach is to proactively counter mistakes.  Mistakes happen, so we use a variety of tools, an automated test suite, and other processes to counter them.  For example, we use rubocop to lint our Ruby, and ESLint to lint our JavaScript.  The test suite currently has 94% statement coverage with over 3000 checked assertions, and our project has a rule that statement coverage must stay at least 90%.
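A coverage floor like this is easy to enforce mechanically in CI. A purely illustrative sketch (the real BadgeApp enforces its rule with Ruby tooling; the function name and interface below are invented for illustration):

```python
# Illustrative CI gate: fail the build when statement coverage drops
# below a required minimum. Not the actual BadgeApp mechanism.
def coverage_gate(covered, total, minimum=90.0):
    """Return the coverage percentage, or exit with an error if it is
    below the required minimum."""
    pct = 100.0 * covered / total
    if pct < minimum:
        raise SystemExit(f"coverage {pct:.1f}% is below the required {minimum:.0f}%")
    return pct

print(coverage_gate(94, 100))  # 94.0
```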

Please contribute!  See our file for more.

Q: What projects already have a badge?

A: Examples of OSS projects that have achieved the badge include the Linux kernel, curl, GitLab, OpenBlox, OpenSSL, Node.js, and Zephyr.  We specifically reached out to both smaller projects, like curl, and bigger projects, like the Linux kernel, to make sure that our criteria made sense for many different kinds of projects. The badge is designed to handle both front-end and infrastructure projects.

Q: Can you tell us more about the badging process itself? What does it cost?

A: It doesn’t cost anything to get a badge.  Filling out the web form takes about an hour.  It’s primarily self-assertion, and the advantage of self-assertion systems is that they can scale up.

There are known problems with self-assertion, and we try to counter them.  For example, we do perform some automation, and, in some cases, the automation will override unjustified claims.  Most importantly, the project’s answers and justifications are completely public, so if someone gives false information, we can correct it and, if necessary, revoke the badge.

Q: How were the criteria created?

A: We developed the criteria, and the web application that implements them, as an open source software project.  The application is under the MIT license; the criteria are dual-licensed under MIT and CC-BY version 3.0 or later.  David A. Wheeler is the project lead, but the work is based on comments from many people.

The criteria were primarily based on reviewing a lot of existing documents about what OSS projects should do.  A good example is Karl Fogel’s book Producing Open Source Software, which has lots of good advice. We also preferred to add a criterion only if we could find at least one project that didn’t follow it.  After all, if every project follows a practice without exception, it’d be a waste of time to ask whether your project does too. We also worked to make sure that our own web application would get its own badge, which helped steer us away from impractical criteria.

Q: Does the project have to be on GitHub?

A: We intentionally don’t require or forbid any particular services or programming languages.  A lot of people use GitHub, and in those cases we fill in some of the form based on data we extract from GitHub, but you do not have to use GitHub.

Q: What does my project need to do to get a badge?

A: Currently there are 66 criteria, and each criterion falls into one of three categories: MUST, SHOULD, or SUGGESTED. The MUST (and MUST NOT) criteria are required; 42/66 criteria are MUSTs.  The SHOULD (and SHOULD NOT) criteria can sometimes validly be skipped; 10/66 criteria are SHOULDs.  The SUGGESTED criteria have common valid reasons not to do them, but we want projects to at least consider them; 14/66 are SUGGESTED.  People don’t like admitting that they don’t do something, so we think that listing criteria as SUGGESTED is helpful because it will nudge people to do them.

To earn a badge, all MUST criteria must be met, all SHOULD criteria must be met OR be unmet with justification, and all SUGGESTED criteria must be explicitly marked as met OR unmet (since we want projects to at least actively consider them). You can include justification text in markdown format with almost every criterion. In a few cases, we require URLs in the justification, so that people can learn more about how the project meets the criteria.

We gamify this – as you fill in the form you can see a progress bar go from 0% to 100%.  When you get to 100%, you’ve passed!
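The pass rules just described boil down to a simple per-criterion check. A hedged sketch, assuming a made-up answer format; the real BadgeApp is a Rails application with its own schema, and the criterion names and statuses below are illustrative:

```python
# Illustrative sketch of the badge pass/progress rules:
#   MUST      -> must be met
#   SHOULD    -> met, or unmet with a justification
#   SUGGESTED -> must be explicitly answered (met or unmet)
def badge_status(answers):
    """answers: {criterion: (category, status, justification)}.
    Returns (progress_percent, passed)."""
    done = 0
    for category, status, justification in answers.values():
        if category == "MUST":
            ok = status == "met"
        elif category == "SHOULD":
            ok = status == "met" or (status == "unmet" and bool(justification))
        else:  # SUGGESTED
            ok = status in ("met", "unmet")
        done += ok
    progress = round(100 * done / len(answers))
    return progress, progress == 100

answers = {
    "floss_license":   ("MUST", "met", ""),
    "sites_https":     ("MUST", "met", ""),
    "report_tracker":  ("SHOULD", "unmet", "We use a mailing list instead."),
    "warnings_strict": ("SUGGESTED", "unmet", ""),
}
print(badge_status(answers))  # (100, True)
```

Note how the unmet SHOULD still counts because it carries a justification, and the unmet SUGGESTED counts because it was explicitly answered; this mirrors the 0%-to-100% progress bar described above.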

Q: What are the criteria?

A: We’ve grouped the criteria into 6 groups: basics, change control, reporting, quality, security, and analysis.  Each group has a tab in the form.  Here are a few examples of the criteria:


Basics

The software MUST be released as FLOSS. [floss_license]

Change Control

The project MUST have a version-controlled source repository that is publicly readable and has a URL.

Reporting

The project MUST publish the process for reporting vulnerabilities on the project site.

Quality

If the software requires building for use, the project MUST provide a working build system that can automatically rebuild the software from source code.

The project MUST have at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project).

Security

At least one of the primary developers MUST know of common kinds of errors that lead to vulnerabilities in this kind of software, as well as at least one method to counter or mitigate each of them.

Analysis

At least one static code analysis tool MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language.

Q: Are these criteria set in stone for all time?

A: The badge criteria were created as an open source process, and we expect that the list will evolve over time to include new aspects of software development. The criteria themselves are hosted on GitHub, and we actively encourage the security community to get involved in developing them. We expect that over time some of the criteria marked as SUGGESTED will become SHOULD, some SHOULDs will become MUSTs, and new criteria will be added.


Q: What is the benefit to a project for filling out the form?  Is this just a paperwork exercise? Does it add any real value?

A: It’s not just a paperwork exercise; it adds value.

Project members want their project to produce good results.  Following best practices can help you produce good results – but how do you know that you’re following best practices?  When you’re busy getting specific tasks done, it’s easy to forget to do important things, especially if you don’t have a list to check against.

The process of filling out the form can help your project see if you’re following best practices, or forgetting to do something.  We’ve had several cases during our alpha stage where projects tried to fill out the form, found they were missing something, and went back to change their project.  For example, one project didn’t explain how to report vulnerabilities – but they agreed that they should.  So either a project finds out that they’re following best practices – and know that they are – or will realize they’re missing something important, so the project can then fix it.

There’s also a benefit to potential users.  Users want to use projects that are going to produce good work and be around for a while.  Users can use badges like this “best practices” badge to help them separate well-run projects from poorly-run projects.

Q: Does the Best Practices Badge compete with existing maturity models or anything else that already exists?

A: The Best Practices Badge is the first program specifically focused on criteria for an individual OSS project. It is free and extremely easy to apply for, in part because it uses an interactive web application that tries to automatically fill in information where it can.  

This differs from maturity models, which tend to focus on activities done by entire companies.

The BSIMM (pronounced “bee simm”) is short for Building Security In Maturity Model. It is targeted at companies, typically large ones, not at individual OSS projects.

OpenSAMM, or just SAMM, is the Software Assurance Maturity Model. Like BSIMM, it really focuses on organizations, not specific OSS projects, and on identifying activities that would occur within those organizations.

Q: Do the project’s websites have to support HTTPS?

A: Yes, projects have to support HTTPS to get a badge. Our criterion sites_https now says: “The project sites (website, repository, and download URLs) MUST support HTTPS using TLS. You can get free certificates from Let’s Encrypt.” HTTPS doesn’t counter all attacks, of course, but it counters a lot of them quickly, so it’s worth requiring.   At one time HTTPS imposed a significant performance cost, but modern CPUs and algorithms have basically eliminated that.  It’s time to use HTTPS and TLS.
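As a small illustration of the sites_https idea, here is a hedged sketch that normalizes project URLs to HTTPS and checks a list of sites. The function names are invented for illustration, and a real check would also fetch each HTTPS URL and verify that the TLS connection actually succeeds:

```python
from urllib.parse import urlsplit, urlunsplit

# Illustrative only: rewrite a project URL to the HTTPS form the
# sites_https criterion requires.
def to_https(url):
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

def sites_https(urls):
    """True if every listed project site (website, repository,
    download URL) already uses HTTPS."""
    return all(urlsplit(u).scheme == "https" for u in urls)

print(to_https("http://example.org/project"))  # https://example.org/project
print(sites_https(["https://example.org", "http://downloads.example.org"]))  # False
```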

Q: How do I get started or get involved?

A: If you’re involved in an OSS project, please go get your badge from here:

If you want to help improve the criteria or application, you can see our GitHub repo:

We expect that there will need to be improvements over time, and we’d love your help.

But again, the key is, if you’re involved in an OSS project, please go get your badge:

Dr. David A. Wheeler is an expert on developing secure software and on open source software. His works include Software Inspection: An Industry Best Practice, Ada 95: The Lovelace Tutorial, Secure Programming for Linux and Unix HOWTO, Fully Countering Trusting Trust through Diverse Double-Compiling (DDC), Why Open Source Software / Free Software (OSS/FS)? Look at the Numbers!, and How to Evaluate OSS/FS Programs. Dr. Wheeler has a PhD in Information Technology, a Master’s in Computer Science, a certificate in Information Security, and a B.S. in Electronics Engineering, all from George Mason University (GMU). He lives in Northern Virginia.

Emily Ratliff is Sr. Director of Infrastructure Security at The Linux Foundation, where she sets the direction for all CII endeavors, including managing membership growth, grant proposals and funding, and CII tools and services. She brings a wealth of Linux, systems and cloud security experience, and has contributed to the first Common Criteria evaluation of Linux, gaining an in-depth understanding of the risk involved when adding an open source package to a system. She has worked with open standards groups, including the Trusted Computing Group and GlobalPlatform, and has been a Certified Information Systems Security Professional since 2004.


By Benjamin VanEvery

I ran into several folks this past week at OSCON who expressed a keen interest in creating a dedicated role for Open Source at their respective companies. So what was stopping them? One simple thing: every single one of them was struggling to define exactly what that role means. Instinctively we all have a feeling of what an employee dedicated to Open Source might do, but when it comes time to write it down or try to convince payroll, it can be challenging. Below I have included a starting point for a job description of what a dedicated Open Source manager might do. If you are in this boat, I’d highly recommend that you also check out the slides from our talk at OSCON this year, as well as the many blog posts we’ve published about why our respective companies run Open Source.

Also, on top of reusing what is below, we are collecting open source office job descriptions from across the industry on GitHub that you can learn from.

The Job Posting Template

Side note: if you use this template, try running it through analysis first.

The Mission

Our open source effort is currently led by a multi-functional group of engineers, and we are looking for a motivated, visionary individual to lead this effort and take Company Open Source to the next level.

In this role, you’ll work with our Engineering (Dev & Ops), Legal, Security, Business Ops, and Public Relations teams to help define what Open Source at Company means and build our open source community. Your day to day responsibilities will alternate between programming and several forms of program management. This is an exciting opportunity to work with all levels of the organization and leave a lasting impact here and on the engineering community at large.

A good match might have…

  • 8 years’ experience coding in or leading software engineering environments
  • Experience working on at least one successful and widely recognized open source project
  • Excellent communication and organizational skills
  • Familiarity with GitHub and open source CI tooling (Travis CI, Coveralls, etc.)
  • Understanding of open source licenses
  • Experience and familiarity with multiple programming languages
  • Real passion for quality and continuous improvement

Some things you might find yourself doing

  • You will lead and streamline all aspects of the outgoing open source process, encompassing everything from people processes to tooling and automation.
  • You will own and handle our open source presence and reputation on GitHub and beyond.
  • You will steer involvement in and recognition of the open source program internally.
  • You will work alongside product and business leadership to integrate Open Source goals with company goals, working overall to build an Open Source mentality into our DNA.
  • You will build awareness of Company Open Source externally and increase overall involvement in the open source community.
  • You will establish Company as an actively contributing member of industry-leading Open Source initiatives. This involves taking an active part in TODO Group initiatives.
  • You will run our process for evaluating incoming open source code for use in our product.

This article originally appeared at TODO.