This week in Linux and open source news, R3 has made its blockchain platform’s code public, a newly identified vulnerability threatens Android phones, and more! Keep your finger on the pulse of OSS with this weekly digest.

1) Corda’s code will be contributed to the Hyperledger Project.

R3 Blockchain Code Goes Open Source– Banking Technology

2) Rowhammer attack targets an Android phone’s dynamic random access memory.

Elegant Physics (and Some Down and Dirty Linux Tricks) Threaten Android Phones– WIRED

3) “SUSE announces plans for server and storage versions of Linux supporting 64-bit ARM SoCs.”

SUSE Preps Linux for ARM Servers– EE Times

4) Dirty COW: A nine-year-old bug in the Linux kernel has been recently revealed.

“Dirty COW” Is The Most Dangerous Linux Privilege-Escalation Bug Ever, Experts Say– FOSSbytes

5) “The same internal, deep learning tools that Microsoft engineers used to build its human-like speech recognition engine, as well as consumer products like Skype Translator and Cortana, are now available for public use.”

Microsoft makes its deep learning tools available to all– Engadget

Let’s Encrypt was awarded a grant from The Ford Foundation as part of its efforts to financially support its growing operations. This is the first grant that has been awarded to the young nonprofit, a Linux Foundation project which provides free, automated and open SSL certificates to more than 13 million fully-qualified domain names (FQDNs). 

The grant will help Let’s Encrypt make several improvements, including increased capacity to issue and manage certificates. It also covers costs of work recently done to add support for Internationalized Domain Name certificates. 

“The people and organizations that Ford Foundation serves often find themselves on the short end of the stick when fighting for change using systems we take for granted, like the Internet,” Michael Brennan, Internet Freedom Program Officer at Ford Foundation, said. “Initiatives like Let’s Encrypt help ensure that all people have the opportunity to leverage the Internet as a force for change.”

We talked with Brennan and Josh Aas, Executive Director of Let’s Encrypt, about what this grant means for the organization.

What is it about Let’s Encrypt that is attractive to The Ford Foundation?

Michael Brennan: The Ford Foundation believes that all people, especially those who are most marginalized and excluded, should have equal access to an open Internet, and enjoy legal, technical, and regulatory protections that promote transparency, equality, privacy, free expression, and access to knowledge. A system for acquiring digital certificates to enable HTTPS for websites is a fundamental piece of infrastructure towards this goal. As a free, automated and open certificate authority, Let’s Encrypt is a model for how the Web can be more accessible and open to all.

What is the problem that Let’s Encrypt is trying to solve?

Josh Aas: As the Web becomes more central to our everyday lives, more of our personal identities are revealed through unencrypted communications. The job of Let’s Encrypt is to help those who have not encrypted their communications, especially those who face a financial or technical barrier to doing so. Let’s Encrypt offers free domain validation (DV) certificates to people in every country in a highly automated way. Over 90% of the certificates we issue go to domains that were previously unencrypted or not otherwise using publicly trusted certificates.

How does Let’s Encrypt further the goals of The Ford Foundation?

Michael Brennan: We think a lot about the digital infrastructure needs of the open Web. This is a massive area of exploration with numerous challenges, so how and where can the Ford Foundation make a meaningful impact? One of the ways we believe we can help is by supporting initiatives that broadly scale access to security and help introduce those efforts to civil society organizations fighting for social justice. Let’s Encrypt fits perfectly into this goal by both serving critical Web security needs of civil society organizations and doing so in a way that is massively scalable.

From your perspective at The Ford Foundation, what population of people is Let’s Encrypt serving?

Michael Brennan: The Internet Freedom team recently took a trip to visit the Ford Foundation office in Johannesburg, South Africa. While we were there we met with a number of organizations leveraging the Internet to promote social justice. One of the organizations we met was building a tool to serve the needs of local communities. They were thrilled to hear we were supporting Let’s Encrypt because, prior to its existence, they could only afford to secure their production server, not their development or testing servers.

Let’s Encrypt is changing security on the Web on a massive scale, so it can be easy to overlook small victories like this. The people and organizations that Ford Foundation serves often find themselves on the short end of the stick when fighting for change using systems we take for granted, like the Internet. Initiatives like Let’s Encrypt help ensure that all people have the opportunity to leverage the Internet as a force for change.

What can Let’s Encrypt users expect as a result of this grant?

Josh Aas: We will make several improvements through this grant, including our recently added support for Internationalized Domain Name certificates. We will also use these funds to increase capacity to keep up with the growing number of certificates we issue and manage.

What other fundraising initiatives are you pursuing?

Josh Aas: We run a pretty financially lean operation — next year, we expect to be managing certificates covering well over 20 million domains at an operating cost of $2.9M. We have funding agreements in place with a number of sponsors, including Cisco, Akamai, OVH, Mozilla, Google Chrome, and Facebook. Some of those agreements are multi-year. These agreements provide a strong financial foundation, but we will continue to seek new corporate sponsors and grant partners in order to meet our goals. We will also be running a crowdfunding campaign in November so individuals can contribute.

How can people financially support Let’s Encrypt today?

Josh Aas: We accept donations through PayPal. Any companies interested in sponsoring us can email us. Financial support is critical to our ability to operate, so we appreciate contributions of any size.

How can developers and website admins get started with Let’s Encrypt?

Josh Aas: It’s designed to be pretty easy. In order to get a certificate, users need to demonstrate control over their domain. With Let’s Encrypt, you do this using software that uses the ACME protocol, which typically runs on your web host.

We have a Getting Started page with easy-to-follow instructions that should work for most people.

We have an active community forum that is very responsive in answering questions that come up during the install process.
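As a sketch of what getting started typically looks like, here is a run with certbot, the most widely used ACME client (the exact install commands vary by distribution and web server, and the domain below is a placeholder):

```shell
# Install certbot and its nginx plugin (Debian/Ubuntu shown; other distros differ)
sudo apt-get install certbot python3-certbot-nginx

# Prove control of the domain via the ACME challenge, then obtain
# and install a certificate, letting certbot configure nginx for you
sudo certbot --nginx -d example.com

# Let's Encrypt certificates are short-lived by design; renewal is
# meant to be automated, and this verifies the renewal setup works
sudo certbot renew --dry-run
```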

“Dirty COW” is a serious Linux kernel vulnerability that was recently discovered to have been lurking in the code for more than nine years. It is pretty much guaranteed that if you’re using any version of Linux or Android released in the past decade, you’re vulnerable. But what is this vulnerability, exactly, and how does it work? To understand this, it’s helpful to illustrate it using a popular tourist scam.

The con

Have you ever played the game of shells? It’s traditionally played with a pea and three walnut shells — hence the name — and it is found on touristy street corners all over the world. Besides the shells themselves, it also involves the gullible “mark” (that’s you), the con artist (that’s the person moving the shells), and, invariably, one or several scammer’s assistants in the crowd pretending to be fellow tourists. At first, the accomplices “bait” the crowd by winning many bets in a row, so you assume the game is pretty easy to win — after all, you can clearly see the ball move from shell to shell, and it’s always revealed right where you thought it would be.

So, you step forward, win a few rounds, and then decide to go for a big bet, usually goaded by the scammers. At just the right time you’re momentarily distracted by the scammer’s assistants, causing you to look away for a mere fraction of a second — but that’s enough for the scammer to palm the pea or quickly move it to another shell. When you call your bet, the empty shell is triumphantly revealed and you walk away relieved of your cash.

The race

In computing terms, you just experienced a “race condition.” You saw the ball go under a specific shell (checked for required condition), and therefore that’s the one you pointed at (performed the action). However, unknown to you, between the check and the action the situation has changed, causing the initial condition to no longer be true. In real life, you were probably only out of a couple of hundred bucks, but in computing world race conditions can lead to truly bad outcomes.

Race conditions are usually solved by requiring that the check and the action are performed as part of an atomic transaction, locking the state of the system so that the initial condition cannot be modified until the action is completed. Think of it as putting your foot on the shell right after you see the pea go under it — to prevent the scammer from palming or moving it while you are distracted (though I don’t suggest you try this unless you’re ready to get into a fistfight with the scammer and their accomplices).
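In code, the usual remedy looks like the following minimal Python sketch (a toy illustration of the locking idea, not the kernel's actual implementation; the Account class and its names are invented for this example):

```python
import threading

class Account:
    """Toy stand-in for a shared resource guarded by check-then-act logic."""
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

    def withdraw_atomic(self, amount):
        # Holding the lock makes the check and the action one atomic step:
        # no other thread can change the balance between them.
        with self.lock:
            if self.balance >= amount:   # check the condition...
                self.balance -= amount   # ...then act on it

def hammer(account, n_threads=4, n_ops=10000):
    """Have several threads race to withdraw 1 unit n_ops times each."""
    def worker():
        for _ in range(n_ops):
            account.withdraw_atomic(1)
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

acct = Account(20000)
hammer(acct)  # 40,000 attempts against a balance of 20,000
print(acct.balance)  # 0 -- the lock keeps the balance from ever going negative
```

Without the lock, two threads can both pass the balance check before either subtracts, which is exactly the pea vanishing while you look away.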


Unfortunately, one such race condition was recently discovered in the part of the Linux kernel that is responsible for memory mapping. Linux uses the “Copy on Write” (COW) approach to reduce unnecessary duplication of memory objects. If you are a programmer, imagine you have the following code:

a = 'COW'

b = a

Even though there are two variables here, they both point at the same memory object — since there is no need to take up twice the amount of RAM for two identical values. Next, the OS will wait until the value of the duplicate object is actually modified:

b += ' Dirty'

At this point, Linux will do the following (I’m simplifying for clarity):

  1. allocate memory for the new, modified version of the object

  2. read the original contents of the object being duplicated (‘COW’)

  3. perform any required changes to it (append ‘ Dirty’)

  4. write modified contents into the newly allocated area of memory

Unfortunately, there is a race condition between step 2 and step 4 that can trick the memory mapper into writing the modified contents into the original memory range instead of the newly allocated area, so that instead of modifying memory belonging to “b” we end up modifying “a”.
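The intended, non-buggy behavior is easy to see by analogy in Python, where assignment likewise shares a single object until a write forces a fresh allocation (an analogy only: Python string rebinding is not the kernel's page-level COW mechanism):

```python
a = 'COW'
b = a
print(a is b)   # True: both names refer to the same object, nothing copied yet

b += ' Dirty'   # the write triggers a new allocation for b
print(a)        # COW -- the original object is untouched
print(b)        # COW Dirty

# Dirty COW is as if this went wrong: the write landed on the original
# object, so 'a' would silently read 'COW Dirty' as well.
```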

The paydirt

Just like any other POSIX system, Linux implements “Discretionary Access Controls” (DAC), which rely on a framework of users and groups to grant or deny access to various parts of the OS. The granted permission can be read-only or read-write. For example, as a non-privileged user you should be able to read “/bin/bash” in order to start a shell session when you log in, but not write to it. Only a privileged account (e.g. “root”) should be able to modify this file — otherwise any malicious user could replace the bash binary with a modified version that, for example, logs all passwords or starts up a backdoor.
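You can inspect these DAC permissions directly (the output shown is typical; the exact size and date will vary by system):

```shell
$ ls -l /bin/bash
-rwxr-xr-x 1 root root 1113504 Jun  6  2019 /bin/bash
# owner (root) may read, write, and execute; the group and everyone
# else may only read and execute -- no one but root can modify it
```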

The race condition described above allows an attacker to bypass this permissions framework by tricking the COW mechanism into modifying the original read-only objects instead of their copies. In other words, a carefully crafted attack can indeed allow an unprivileged user to replace “/bin/bash” with a malicious version. This vulnerability has been assigned both a boring name (“CVE-2016-5195”) and the now-customary branded name of “Dirty COW.”

The really bad news is that this race condition has been present in the kernel for over 9 years, which is a very long time when it comes to computing. It is pretty much guaranteed that if you’re using any version of Linux or Android released in the past decade, you’re vulnerable.

The fix

Triggering this exploit is not as trivial as running a simple “cp” operation and putting any kind of modified binary in place. That said, given enough time and perseverance, we should assume that attackers will come up with cookie-cutter exploits that will allow them to elevate privileges (i.e. “become root”) on any unpatched system where they are able to freely execute arbitrary code. It is imperative that all Linux systems are patched as soon as possible — and a full reboot will be required, unless you have some kind of live patching solution available to you (if you don’t already know whether you can live-patch, then you probably cannot, as it’s not a widely used technology yet).

There is a fix available in the upstream kernel, and, at the time of writing this article, the distributions are starting to release updated packages. You should be closely monitoring your distribution’s release alerts and apply any outstanding kernel errata as soon as it becomes available. The same applies to any Android devices you may have.

If you cannot update and reboot your system right away, there are some mitigation mechanisms available to you while you wait (see this Red Hat Bugzilla entry for more details). It is important to note that the STAP method will only mitigate against known proof of concept exploits and is not generic enough to be considered a good long-term fix. Unfortunately, “Dirty COW” is not the kind of bug that can be prevented (much) by SELinux, AppArmor or any other RBAC solution, nor is it mitigated by PaX/GrSecurity hardening patches.

The takeaway

As I said earlier, in order to exploit the “Dirty COW” bug, the attacker must first be able to execute arbitrary code on the system. This, in itself, is bad enough — even if an attacker is not able to gain immediate root-level privilege, being able to execute arbitrary code gives them a massive foothold on your infrastructure and allows them a pivot point to reach your internal networks.

In fact, you should always assume that there are bad bugs lurking in the kernel that we do not yet know about (but the attackers do). Kees Cook, in his blog about security bug lifetimes, points out that vulnerabilities are usually fixed long after they are first introduced — many of them lurking in the code for years. Really bad bugs of the caliber of “Dirty COW” are worth hundreds of thousands of dollars on the black market, and you should always assume that an attacker who is able to execute arbitrary code on your systems will eventually be able to escalate their privileges and gain root access. Efforts like the “Kernel Self Protection Project” can help reduce the impact of some of these lurking bugs, but not all of them — for example, race conditions are particularly tricky to guard against and can be devastating in their scope of impact.

Therefore, any mitigation for the “Dirty COW” and other privilege escalation bugs should really be considered a part of a comprehensive defense-in-depth strategy that would work to keep attackers as far away as possible from being able to execute arbitrary code on your systems. Before they even get close to the kernel stack, the attackers should have to first defeat your network firewalls, your intrusion prevention systems, your web filters, and the RBAC protections around your daemons.

Taken altogether, these technologies will provide your systems with a great deal of herd immunity to ensure that no single exploit like the “Dirty COW” can bring your whole infrastructure to its tipping point.

Learn more about how to secure Linux systems through The Linux Foundation’s online, self-paced course Linux Security Fundamentals.

This week in open source and Linux news, IBM’s CEO explains the importance of blockchain at SWIFT’s Sibos conference, and more! Get up to speed with this handy, weekly digest: 

1) IBM CEO says blockchain initiatives, like The Linux Foundation’s Hyperledger Project, play a “key role” in the company’s revenue.

IBM’s Ginni Rometty Tells Bankers Not To Rest On Their Digital Laurels– Forbes

2) “The problem with open source standards isn’t that they’re boring; it’s that they’re largely the same as the proprietary standards that preceded them,” writes Matt Asay.

Open Source Is Not to Blame For a Lack of Industry Standards– TechRepublic

3) The Linux Foundation’s new OpenStack MOOC is offered for free via edX.

The Linux Foundation and edX Roll Out a Free OpenStack Course– OStatic

4) “The Linux Foundation’s OPNFV project claims its third platform release targets accelerating development of NFV apps and services.”

OPNFV Colorado Platform Bolsters Open Source NFV Efforts– RCRWireless

5) Linux Foundation sysadmin weighs in on why Linux kernel security needs a “total rethink.”

Unsafe at Any Clock Speed: Linux Kernel Security Needs a Rethink

1) The White House released federal source code policy, requiring agencies to release 20% of new code they commission as open source. 

Open Source Won. So, Now What?– WIRED

2) A flaw in the Transmission Control Protocol poses a threat to Internet users, whether they use Linux directly or not.

Use the Internet? This Linux Flaw Could Open You Up to Attack– PCWorld

3) A new Trojan targets Linux servers, exploiting machines running the Redis NoSQL database to use them for bitcoin mining.

Linux.Lady Trojan Turns Linux Servers into Bitcoin Miners– The Inquirer

4) “Will Microsoft’s acquisition of LinkedIn slow down the social networking company’s cadence of open-sourcing core technology for developers?”

LinkedIn: Open-Sourcing Under the Microsoft Regime– eWeek

5) Today’s CEOs increasingly have impressive technical backgrounds and open source is more valuable than ever. 

2046 is the Last Year Your CEO Has a Business Major– VentureBeat

1) The Linux Foundation’s Automotive Grade Linux project announces release of Unified Code Base 2.0.

Open-Source Linux a Step Closer to Automotive Use– CNet

2) Though the use of third-party code in enterprise software projects continues to grow, that code still often contains known flaws.

Enterprise Software Developers Continue to Use Flawed Code in Apps– ComputerWorld

3) Anyone using a Chromebook or Chrome on Linux can now make one-to-one and group voice calls from Skype for Web.

Linux Users Can Now Make Skype Calls From the Web in Chrome– TechCrunch

4) AT&T to release virtualisation automation software, amounting to over eight million lines of code.

AT&T’s ECOMP Code to Land Soon at Linux Foundation– The Register

5) New IBM innovation center to deliver tech pilots based on blockchain for finance and trade.

IBM to Open Blockchain Innovation Centre in Singapore– ZDNet

Co-authored by Dr. David A. Wheeler

Everybody loves getting badges.  Fitbit badges, Stack Overflow badges, Boy Scout merit badges, and even LEED certification are just a few examples that come to mind.  A recent 538 article, “Even psychologists love badges,” publicized the value of a badge.



GitHub now has a number of specific badges for things like test coverage and dependency management, so for many developers they’re desirable. IBM has a slew of certifications for security, analytics, cloud and mobility, Watson Health and more. 

Recently, The Linux Foundation joined the trend with the Core Infrastructure Initiative (CII) Best Practices Badges Program.

The free, self-service Best Practices Badges Program was designed with the open source community. It provides criteria and an automated assessment tool for open source projects to demonstrate that they are following security best practices.

It’s a perfect fit for CII, which is comprised of technology companies, security experts and developers, all of whom are committed to working collaboratively to identify and fund critical open source projects in need of assistance. The badging project is an attempt to “raise all boats” in security, by encouraging projects to follow best practices for OSS development.  We believe projects that follow best practices are more likely to be healthy and produce secure software. 

Here’s more background on the program and some of the questions we’ve recently been asked.

Q: Why badges?

A: We believe badges encourage projects to follow best practices, to hopefully produce better results. The badges will:

  • Help new projects learn what those practices are (think training in disguise).

  • Help users know which projects are following best practices (so users can prefer such projects).  

  • Act as a signal to users. If a project has achieved a number of badges, it will inspire a certain level of confidence among users that the project is being actively maintained and is more likely to consistently produce good results.

Q: Who gets a badge?  Is this for individuals, projects, sites?

A: The CII best practices badge is for a project, not for an individual.  When you’re selecting OSS, you’re picking the project, knowing that some of the project members may change over time.

Q: Can you tell us a little about the “BadgeApp” web application that implements this?

A: “BadgeApp” is a simple web application that implements the criteria (a fill-in form).  It’s OSS, with an MIT license.  All the required libraries are OSS & legal to add; we check this using license_finder.

Our overall approach is that we proactively counter mistakes.  Mistakes happen, so we use a variety of tools, an automated test suite, and other processes to counter them.  For example, we use rubocop to lint our Ruby, and ESLint to lint our JavaScript.  The test suite currently has 94% statement coverage with over 3,000 checked assertions, and our project has a rule that coverage must stay at or above 90%.

Please contribute!  See our file for more.

Q: What projects already have a badge?

A: Examples of OSS projects that have achieved the badge include the Linux kernel, curl, GitLab, OpenBlox, OpenSSL, Node.js, and Zephyr.  We specifically reached out to both smaller projects, like curl, and bigger projects, like the Linux kernel, to make sure that our criteria made sense for many different kinds of projects. It’s designed to handle both front-end and infrastructure projects.

Q: Can you tell us more about the badging process itself? What does it cost?

A: It doesn’t cost anything to get a badge.  Filling out the web form takes about an hour.  It’s primarily self-assertion, and the advantage of self-assertion systems is that they can scale up.

There are known problems with self-assertion, and we try to counter their problems.  For example, we do perform some automation, and, in some cases, the automation will override unjustified claims.  Most importantly, the project’s answers and justifications are completely public, so if someone gives false information, we can fix it and thus revoke the badge.

Q: How were the criteria created?

A: We developed the criteria, and the web application that implements them, as an open source software project.  The application is under the MIT license; the criteria are dual-licensed under MIT and CC-BY version 3.0 or later.  David A. Wheeler is the project lead, but the work is based on comments from many people.

The criteria were primarily based on reviewing a lot of existing documents about what OSS projects should do.  A good example is Karl Fogel’s book Producing Open Source Software, which has lots of good advice. We also preferred to add criteria if we could find at least one project that didn’t follow it.  After all, if everyone does it without exception, it’d be a waste of time to ask if your project does it too. We also worked to make sure that our own web application would get its own badge, which helped steer us away from impractical criteria.

Q: Does the project have to be on GitHub?

A: We intentionally don’t require or forbid any particular services or programming languages.  A lot of people use GitHub, and in those cases we fill in some of the form based on data we extract from GitHub, but you do not have to use GitHub.

Q: What does my project need to do to get a badge?

A: Currently there are 66 criteria, and each criterion is in one of three categories: MUST, SHOULD, or SUGGESTED. The MUST (including MUST NOT) criteria are required; 42/66 criteria are MUSTs.  The SHOULD (NOT) criteria are sometimes valid to not do; 10/66 criteria are SHOULDs.  The SUGGESTED criteria have common valid reasons to not do them, but we want projects to at least consider them; 14/66 are SUGGESTED.  People don’t like admitting that they don’t do something, so we think that having criteria listed as SUGGESTED is helpful because they’ll nudge people to do them.

To earn a badge, all MUST criteria must be met, all SHOULD criteria must be met OR be unmet with justification, and all SUGGESTED criteria must be explicitly marked as met OR unmet (since we want projects to at least actively consider them). You can include justification text in markdown format with almost every criterion. In a few cases, we require URLs in the justification, so that people can learn more about how the project meets the criteria.
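That passing rule can be summarized in a short Python sketch (a paraphrase of the logic just described; the data layout and field names are invented for illustration, not BadgeApp’s actual schema):

```python
def passes(criteria):
    """Return True if a project earns the badge under the rules above.

    Each criterion is a dict with 'level' (MUST/SHOULD/SUGGESTED),
    'status' ('Met' or 'Unmet'), and an optional 'justification' string.
    """
    for c in criteria:
        if c['level'] == 'MUST':
            # MUST (and MUST NOT) criteria are required outright.
            if c['status'] != 'Met':
                return False
        elif c['level'] == 'SHOULD':
            # SHOULD criteria may be unmet, but only with a justification.
            if c['status'] != 'Met' and not c.get('justification'):
                return False
        else:  # SUGGESTED
            # SUGGESTED criteria need only be explicitly considered.
            if c['status'] not in ('Met', 'Unmet'):
                return False
    return True

example = [
    {'level': 'MUST', 'status': 'Met'},
    {'level': 'SHOULD', 'status': 'Unmet',
     'justification': 'Not applicable to our release process'},
    {'level': 'SUGGESTED', 'status': 'Unmet'},
]
print(passes(example))  # True
```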

We gamify this – as you fill in the form you can see a progress bar go from 0% to 100%.  When you get to 100%, you’ve passed!

Q: What are the criteria?

A: We’ve grouped the criteria into 6 groups: basics, change control, reporting, quality, security, and analysis.  Each group has a tab in the form.  Here are a few examples of the criteria:


Basics

The software MUST be released as FLOSS. [floss_license]

Change Control

The project MUST have a version-controlled source repository that is publicly readable and has a URL.


Reporting

The project MUST publish the process for reporting vulnerabilities on the project site.


Quality

If the software requires building for use, the project MUST provide a working build system that can automatically rebuild the software from source code.

The project MUST have at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project).


Security

At least one of the primary developers MUST know of common kinds of errors that lead to vulnerabilities in this kind of software, as well as at least one method to counter or mitigate each of them.


Analysis

At least one static code analysis tool MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language.

Q: Are these criteria set in stone for all time?

A: The badge criteria were created as an open source process, and we expect that the list will evolve over time to include new aspects of software development. The criteria themselves are hosted on GitHub, and we actively encourage the security community to get involved in developing them. We expect that over time some of the criteria marked as SUGGESTED will become SHOULD, some SHOULDs will become MUSTs, and new criteria will be added.


Q: What is the benefit to a project for filling out the form?  Is this just a paperwork exercise? Does it add any real value?

A: It’s not just a paperwork exercise; it adds value.

Project members want their project to produce good results.  Following best practices can help you produce good results – but how do you know that you’re following best practices?  When you’re busy getting specific tasks done, it’s easy to forget to do important things, especially if you don’t have a list to check against.

The process of filling out the form can help your project see if you’re following best practices, or forgetting to do something.  We’ve had several cases during our alpha stage where projects tried to fill out the form, found they were missing something, and went back to change their project.  For example, one project didn’t explain how to report vulnerabilities – but they agreed that they should.  So either a project finds out that they’re following best practices – and know that they are – or will realize they’re missing something important, so the project can then fix it.

There’s also a benefit to potential users.  Users want to use projects that are going to produce good work and be around for a while.  Users can use badges like this “best practices” badge to help them separate well-run projects from poorly-run projects.

Q: Does the Best Practices Badge compete with existing maturity models or anything else that already exists?

A: The Best Practices Badge is the first program specifically focused on criteria for an individual OSS project. It is free and extremely easy to apply for, in part because it uses an interactive web application that tries to automatically fill in information where it can.  

This is much different than maturity models, which tend to be focused on activities done by entire companies.

The BSIMM (pronounced “bee simm”) is short for Building Security In Maturity Model. It is targeted at companies, typically large ones, not at OSS projects.

OpenSAMM, or just SAMM, is the Software Assurance Maturity Model. Like BSIMM, it really focuses on organizations, not specific OSS projects, and on identifying activities that would occur within those organizations.

Q: Do the project’s websites have to support HTTPS?

A: Yes, projects have to support HTTPS to get a badge. Our criterion sites_https now says: “The project sites (website, repository, and download URLs) MUST support HTTPS using TLS. You can get free certificates from Let’s Encrypt.” HTTPS doesn’t counter all attacks, of course, but it counters a lot of them quickly, so it’s worth requiring.   At one time HTTPS imposed a significant performance cost, but modern CPUs and algorithms have basically eliminated that.  It’s time to use HTTPS and TLS.

Q: How do I get started or get involved?

A: If you’re involved in an OSS project, please go get your badge from here:

If you want to help improve the criteria or application, you can see our GitHub repo:

We expect that there will need to be improvements over time, and we’d love your help.

But again, the key is, if you’re involved in an OSS project, please go get your badge:

Dr. David A. Wheeler is an expert on developing secure software and on open source software. His works include Software Inspection: An Industry Best Practice, Ada 95: The Lovelace Tutorial, Secure Programming for Linux and Unix HOWTO, Fully Countering Trusting Trust through Diverse Double-Compiling (DDC), Why Open Source Software / Free Software (OSS/FS)? Look at the Numbers!, and How to Evaluate OSS/FS Programs. Dr. Wheeler has a PhD in Information Technology, a Master’s in Computer Science, a certificate in Information Security, and a B.S. in Electronics Engineering, all from George Mason University (GMU). He lives in Northern Virginia.

Emily Ratliff is Sr. Director of Infrastructure Security at The Linux Foundation, where she sets the direction for all CII endeavors, including managing membership growth, grant proposals and funding, and CII tools and services. She brings a wealth of Linux, systems and cloud security experience, and has contributed to the first Common Criteria evaluation of Linux, gaining an in-depth understanding of the risk involved when adding an open source package to a system. She has worked with open standards groups, including the Trusted Computing Group and GlobalPlatform, and has been a Certified Information Systems Security Professional since 2004.