
A guest blog post by Mike Goodwin.

What is threat modeling?

Application threat modeling is a structured approach to identifying ways that an adversary might try to attack an application and then designing mitigations to prevent, detect or reduce the impact of those attacks. The description of an application’s threat model is identified as one of the criteria for the Linux CII Best Practices Silver badge.

Why threat modeling?

It is well established that defense-in-depth is a key principle for network security and the same is true for application security. But although most application developers will intuitively understand this as a concept, it can be hard to put it into practice. After many years and sleepless nights, worrying and fretting about application security, one thing I have learned is that threat modeling is an exceptionally powerful technique for building defense-in-depth into an application design. This is what first attracted me to threat modeling. It is also great for identifying security flaws at design time where they are cheap and easy to correct. These kinds of flaws are often subtle and hard to detect by traditional testing approaches, especially if they are buried in the innards of your application.

Three stages of threat modeling

There are several ways of doing threat modeling ranging from formal methodologies with nice acronyms (e.g. PASTA) through card games (e.g. OWASP Cornucopia) to informal whiteboard sessions. Generally though, the technique has three core stages:

Decompose your application – This is almost always done using some kind of diagram. I have seen successful threat modeling done using many types of diagrams from UML sequence diagrams to informal architecture sketches. Whatever format you choose, it is important that the diagram shows how different internal components of your application and external users/systems interact to deliver its functionality. My preferred type of diagram is a Data Flow Diagram with trust boundaries.

Identify threats – In this stage, the threat modeling team ask questions about the component parts of the application and (very importantly) the interactions or data flows between them to guess how someone might try to attack it. The answers to these questions are the threats. Typical questions and resulting threats are:

Question: What assumptions is this process making about incoming data? What if they are wrong?
Threat: An attacker could send a request pretending to be another person and access that person’s data.

Question: What could an attacker do to this message queue?
Threat: An attacker could place a poison message on the queue, causing the receiving process to crash.

Question: Where might an attacker tamper with the data in the application?
Threat: An attacker could modify an account number in the database to divert payment to their own account.

Design mitigations – Once some threats have been identified the team designs ways to block, avoid or minimize the threats. Some threats may have more than one mitigation. Some mitigations might be preventative and some might be detective. The team could choose to accept some low-risk threats without mitigations. Of course, some mitigations imply design changes, so the threat model diagram might have to be revisited.

Threat: An attacker could send a request pretending to be another person and access that person’s data.
Mitigation: Identify the requestor using a session cookie and apply authorization logic.

Threat: An attacker could place a poison message on the queue, causing the receiving process to crash.
Mitigation: Digitally sign messages on the queue and validate their signatures before processing.
Mitigation: Maintain a retry count on each message and discard it after three retries.

Threat: An attacker could modify an account number in the database to divert payment to their own account.
Mitigation (preventative): Restrict access to the database using a firewall.
Mitigation (detective): Log all changes to bank account numbers and audit the changes.

OWASP Threat Dragon

Threat modeling can be usefully done with a pen, whiteboard and one or more security-aware people who understand how their application is built, and this is MUCH better than not threat modeling at all. However, to do it effectively with multiple people and multiple project iterations you need a tool. Commercial tools are available, and Microsoft provides a free tool for Windows only, but established, free, open-source, cross-platform tools are non-existent. OWASP Threat Dragon aims to fill this gap. The aims of the project are:

  • Great UX – Using Threat Dragon should be simple, engaging and fun
  • A powerful threat/mitigation rule engine – This will lower the barrier to entry for teams and encourage non-specialists to contribute
  • Integration with other development lifecycle tools – This will ensure that models slot easily into the developer workflows and remain relevant as the project evolves
  • To always be free, open-source (like all OWASP projects) and cross-platform. The full source code is available on GitHub

The tool comes in two variants:

End-user documentation is available for both variants and, most importantly, it has a cute logo called Cupcakes…

Threat Dragon is an OWASP Incubator Project – so it is still early stage but it can already support effective threat modeling. The near-term roadmap for the tool is to:

  • Achieve a Linux CII Best Practices badge for the project
  • Implement the threat/mitigation rule engine
  • Continue to evolve the usability of the tool based on real-world feedback from users
  • Establish a sustainable hosting model for the web application

If you want to harden your application designs you should definitely give threat modeling a try. If you want a tool to help you, try OWASP Threat Dragon! All feedback, comments, issue reports and pull requests are very welcome.

About the author: Mike Goodwin is a full-time security professional at the Sage Group where he leads the team responsible for product security. Most of his spare time is spent working on Threat Dragon or co-leading his local OWASP chapter.

This article originally appeared on the Core Infrastructure Initiative website.

Since its inception the CII has considered network time, and implementations of the Network Time Protocol, to be “core infrastructure.” Correctly synchronising clocks is critical both to the smooth functioning of many services and to the effectiveness of numerous security protocols; as a result most computers run some sort of clock synchronization software and most of those computers implement either the Network Time Protocol (NTP, RFC 5905) or the closely related but slimmed down Simple Network Time Protocol (SNTP, RFC 4330).

There are several different implementations of NTP and SNTP, including both open source and proprietary versions. For many years the canonical open source implementation has been ntpd, which was started by David Mills and is now developed by Harlan Stenn at the Network Time Foundation. Parts of the ntpd code date back at least 25 years and the developers pride themselves in having the most complete implementation of the protocol and having a wide set of supported platforms. Over the years forks of the ntpd code have been made, including the NTPSec project that seeks to remove much of the complexity of the ntpd code base, at the expense of completeness of the more esoteric NTP features and breadth of platform support. Others have reimplemented NTP from scratch and one of the more complete open source alternatives is Chrony, originally written by Richard Curnow and currently maintained by Miroslav Lichvar.

The CII recently sponsored a security audit of the Chrony code, carried out by the security firm Cure53 (here is the report). In recent years, the CII has also provided financial support to both the ntpd project and the NTPSec project. Cure53 carried out security audits of both ntpd and NTPSec earlier this year and Mozilla Foundation’s Secure Open Source (SOS) project funded those two audits. SOS also assisted the CII with the execution of the Chrony audit.

Since the CII has offered support to all three projects and since all three were reviewed by the same firm, close together in time, we thought it would be useful to present a direct comparison of their results.

ntpd

Full report PDF

The ntpd code base is the largest and most complex of the three and it carries a lot of legacy code. As a result, unsurprisingly, it fared the worst of the three in security testing with the report listing 1 Critical, 2 High, 1 Medium and 8 Low severity issues along with 2 Informational comments. It should be noted that these issues were largely addressed in the 4.2.8p10 release back in March 2017. That said, the commentary in the report is informative, with the testers writing:

“The general outcome of this project is rooted in the fact that the code has been left to grow organically and had aged somewhat unattended over the years. The overall structure has thus become very intricate, while also yielding a conviction that different styles and approaches were used and subsequently altered. The seemingly uncontrolled inclusion of variant code via header files and complete external projects engenders a particular problem. Most likely, it makes the continuous development much more difficult than necessary.”

As a result, it seems quite likely that there are more lurking issues and that it will be difficult for the authors to avoid introducing new security issues in the future without some substantial refactoring of the code.

As mentioned above, ntpd is the most complete implementation of NTP and as a result is the most complex. Complexity is the enemy of security and that shows up in this report.

NTPSec

Full report PDF

As mentioned previously, the NTPSec project started as a fork of ntpd with the specific aim of cleaning up a lot of the complexity in ntpd, even if that meant throwing out some of the less-used features. The NTPSec project is still in its early days; the team has not yet made a version 1.0 release, but has already thrown out nearly 75% of the code from ntpd and refactored many other parts. Still, the security audit earlier this year yielded 3 High, 1 Medium and 3 Low severity issues as well as raising 1 Informational matter. The testers’ comments were again telling:

“On the one hand, much cruft has been removed successfully, yet, on the other hand, the code shared between the two software projects bears tremendous similarities. The NTPsec project is still relatively young and a major release has not yet occurred, so the expectations are high for much more being done beforehand in terms of improvements. It must be mentioned, however, that the regression bug described in NTP-01-015 is particularly worrisome and raises concerns about the quality of the actions undertaken.

In sum, one can clearly discern the direction of the project and pinpoint the maintainers’ focus on simplifying and streamlining the code base. While the state of security is evidently not optimal, there is a definite room for growth, code stability and overall security improvement as long as more time and efforts are invested into the matter prior to the official release of NTPsec.”

The NTPSec project has made some significant technical progress, but there is more work to do before the developers get to an official release. Even then, the history of the code may well haunt them for some time to come.

Chrony

Full report PDF

Unlike NTPSec, Chrony is not derived from the ntpd code but was implemented from scratch. It implements both client and server modes of the full NTPv4 protocol (as opposed to the simplified SNTP protocol), including operating as a Stratum 1 reference server, and was specifically designed to handle difficult conditions such as intermittent network connections, heavily congested networks and systems that do not run continuously (like laptops) or which run on a virtual machine. The development is currently supported by Red Hat Software and it is now the default NTP implementation on their distributions.

In the 20+ years that I’ve worked in the security industry I’ve read many security audits. The audit that the CII sponsored for Chrony was the first time that I’d used Cure53, and I had not seen any previous reports from them, so when I received the report on Chrony I was very surprised. So surprised that I stopped to email people who had worked with Cure53 to question their competence. When they assured me that the team was highly skilled and capable, I was astounded. Chrony withstood three skilled security testers for 11 days of solid testing and the result was just 2 Low severity issues (both of which have since been fixed). The test report stated:

“The overwhelmingly positive result of this security assignment performed by three Cure53 testers can be clearly inferred from a marginal number and low-risk nature of the findings amassed in this report. Withstanding eleven full days of on-remote testing in August of 2017 means that Chrony is robust, strong, and developed with security in mind. The software boasts sound design and is secure across all tested areas. It is quite safe to assume that untested software in the Chrony family is of a similarly exceptional quality. In general, the software proved to be well-structured and marked by the right abstractions at the appropriate locations. While the functional scope of the software is quite wide, the actual implementation is surprisingly elegant and of a minimal and just necessary complexity. In sum, the Chrony NTP software stands solid and can be seen as trustworthy.”

The head of Cure53, Dr. Mario Heiderich, indicated that it was very rare for the firm to produce a report with so few issues and that he was surprised that the software was so strong.

Of course just because the software is strong does not mean that it is invulnerable to attack, let alone free from bugs. What it does mean however is that Chrony is well designed, well implemented, well tested and benefits from the hindsight of decades of NTP implementation by others without bearing the burden of legacy code.

Conclusions

From a security standpoint (and here at the CII we are security people), Chrony was the clear winner between these three NTP implementations. Chrony does not have all of the bells and whistles that ntpd does, and it doesn’t implement every single option listed in the NTP specification, but for the vast majority of users this will not matter. If all you need is an NTP client or server (with or without reference clock), which is all that most people need, then its security benefits most likely outweigh any missing features.

Acknowledgements

The security audit on Chrony was funded by the CII but the Mozilla SOS project handled many of the logistics of getting the audit done and we are very grateful to Gervase Markham for his assistance. Mozilla SOS funded the audits of ntpd and NTPSec. All three audits were performed by Cure53.

This article originally appeared on the Core Infrastructure Initiative (CII) website.

There has been some public discussion in the last week regarding the decision by Open Source Security Inc. and the creators of the Grsecurity® patches for the Linux kernel to cease making these patches freely available to users who are not paid subscribers to their service. While we at the Core Infrastructure Initiative (CII) would have preferred them to keep these patches freely available, the decision is absolutely theirs to make.

From the point of view of the CII, we would much rather have security capabilities such as those offered by Grsecurity® in the main upstream kernel rather than available as a patch that needs to be applied by the user. That said, we fully understand that there is a lot of work involved in upstreaming extensive patches such as these and we will not criticise the Grsecurity® team for not doing so. Instead we will continue to support work to make the kernel as secure as possible.

CII exists to support work improving the security of critical open source components. In a Linux system a flaw in the kernel can open up the opportunity for security problems in any or all the components – so it is in some sense the most critical component we have. Unsurprisingly, we have always been keen to support work that will make this more secure and plan to do even more going forward.

Over the past few years the CII has been funding the Kernel Self Protection Project, the aim of which is to ensure that the kernel fails safely rather than just running safely. Many of the threads of this project were ported from the GPL-licensed code created by the PaX and Grsecurity® teams while others were inspired by some of their design work. This is exactly the way that open source development can both nurture and spread innovation. Below is a list of some of the kernel security projects that the CII has supported.

One of the larger kernel security projects that the CII has supported was the work performed by Emese Révfy on the plugin infrastructure for gcc. This architecture enables security improvements to be delivered in a modular way, and Emese also worked on the constify, latent_entropy, structleak and initify plugins.

  • Constify automatically applies const to structures which consist of function pointer members.

  • The Latent Entropy plugin mitigates the problem of the kernel having too little entropy during and after boot for generating crypto keys. This plugin mixes random values into the latent_entropy global variable in functions marked by the __latent_entropy attribute. The value of this global variable is added to the kernel entropy pool to increase the entropy.

  • The Structleak plugin zero-initializes any structures that contain a __user attribute. This can prevent some classes of information exposure. For example, the exposure of siginfo in CVE-2013-2141 would have been blocked by this plugin (see the sketch after this list).

  • Initify extends the kernel mechanism to free up code and data memory that is only used during kernel or module initialization. This plugin will teach the compiler to find more such code and data that can be freed after initialization, thereby reducing memory usage. It also moves string constants used in initialization into their own sections so they can also be freed.
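To make the plugin descriptions above a little more concrete, here is a small illustrative sketch of the kind of code the latent_entropy and structleak plugins act on. The attribute names are the real ones used by the plugins, but the function and structure names are invented for this example and do not come from the kernel source:

    #include <linux/compiler.h>
    #include <linux/types.h>

    /* A request header that carries a __user pointer. Because the structure
     * contains a __user member, the Structleak plugin forces local instances
     * of it to be zero-initialized, so uninitialized fields or padding cannot
     * leak back out to userspace. */
    struct demo_request {
            u32 flags;
            void __user *buf;
            size_t len;
    };

    /* Functions marked __latent_entropy have random values mixed into the
     * latent_entropy global variable; that variable is later added to the
     * kernel entropy pool to increase the entropy available during boot. */
    static void __latent_entropy demo_early_init(void)
    {
            struct demo_request req;    /* zero-initialized by Structleak */

            /* ... illustrative only ... */
            (void)req;
    }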

Another, current project that the CII is supporting is the work by David Windsor on HARDENED_ATOMIC and HARDENED_USERCOPY.

HARDENED_ATOMIC is a kernel self-protection mechanism that greatly helps with the prevention of use-after-free bugs. It is based on work done by Kees Cook and the PaX Team. David has been adding new data types for reference counts and statistics so that these do not need to use the main atomic_t type.
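The idea is easier to picture with a short sketch. This is not David’s patch set; it is a minimal, hypothetical example using the refcount_t API (refcount_inc, refcount_dec_and_test) that mainline adopted for exactly this purpose, with an invented object type:

    #include <linux/refcount.h>
    #include <linux/slab.h>

    /* A hypothetical object whose lifetime is managed by a reference count. */
    struct demo_obj {
            refcount_t refs;    /* a dedicated refcount type, not a raw atomic_t */
            /* ... payload ... */
    };

    static struct demo_obj *demo_obj_get(struct demo_obj *obj)
    {
            /* refcount_inc() warns and saturates on overflow instead of wrapping,
             * which is what makes use-after-free via a reference count overflow
             * much harder to trigger than with a plain atomic_t counter. */
            refcount_inc(&obj->refs);
            return obj;
    }

    static void demo_obj_put(struct demo_obj *obj)
    {
            /* Free the object only when the last reference is dropped. */
            if (refcount_dec_and_test(&obj->refs))
                    kfree(obj);
    }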

The overall hardened usercopy feature is extensive and has many sub-components. The main part David is working on is called slab cache whitelisting. Basically, hardened usercopy adds checks to the Linux kernel to make sure that whenever data is copied to or from userspace, buffer overflows do not occur. It does this by verifying the size of the source and destination buffers and the location of those buffers in memory, among other checks.

By default, hardened usercopy denies copying to or from kernel slabs unless they are explicitly marked as being allowed to be copied. Slabs are areas of memory that hold frequently used kernel objects. Because these objects are allocated and freed many times, the kernel takes one from a slab rather than calling the allocator each time it needs a new object, and returns it to the appropriate slab rather than freeing it. The work David is doing adds the ability to mark slabs as “copyable”; this is called “whitelisting” a slab, and a sketch of the idea follows below.
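As a rough sketch of what whitelisting looks like in practice, the example below uses the kmem_cache_create_usercopy() interface that this line of work fed into mainline; the cache, structure and field names are invented for illustration. Only the declared region of each object may then be copied to or from userspace:

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/slab.h>
    #include <linux/stddef.h>

    /* Hypothetical object: only the 'name' field is ever copied to/from userspace. */
    struct demo_session {
            unsigned long internal_state;    /* must never reach userspace */
            char name[32];                   /* the whitelisted region */
    };

    static struct kmem_cache *demo_cache;

    static int __init demo_cache_init(void)
    {
            /* The useroffset/usersize arguments whitelist just the 'name' field;
             * hardened usercopy will reject any copy that touches another part
             * of an object allocated from this cache. */
            demo_cache = kmem_cache_create_usercopy("demo_session",
                            sizeof(struct demo_session), 0, SLAB_HWCACHE_ALIGN,
                            offsetof(struct demo_session, name),
                            sizeof(((struct demo_session *)0)->name),
                            NULL);
            return demo_cache ? 0 : -ENOMEM;
    }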

We also have two new projects starting, where we are working with a senior member of the kernel security team mentoring a younger developer. The first of these projects is under Julia Lawall, who is based at the Université Pierre-et-Marie-Curie in Paris and who is mentoring Bhumika Goyal, an Indian student who will travel to Paris for the three months of the project. Bhumika will be working on ‘constification’ – systematically ensuring that those values that should not change are defined as constants.
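Constification typically means taking a structure of function pointers that is set up once and never changed afterwards and declaring the instance const, so the compiler places it in read-only memory where an attacker with a kernel write primitive cannot redirect the pointers. Here is a small, invented example of the pattern (the same idea the constify gcc plugin mentioned earlier automates), not code from the actual project:

    #include <linux/fs.h>
    #include <linux/module.h>

    static int demo_open(struct inode *inode, struct file *file)
    {
            return 0;
    }

    /* Without the const, this operations table would live in writable memory.
     * Declared const, the instance is placed in a read-only section and its
     * function pointers cannot be overwritten at runtime. */
    static const struct file_operations demo_fops = {
            .owner = THIS_MODULE,
            .open  = demo_open,
    };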

The second project is under Peter Senna Tschudin, who is based in Switzerland and is mentoring Gustavo Silva, from Mexico, who will be working on the issues found by running the Coverity static analysis tool over the kernel. Running a tool like Coverity over a very large body of code like the Linux kernel produces a very large number of results. Many of these results may be false positives and many of the others will be very similar to each other. Peter and Gustavo intend to use the Semantic Patch Language (SmPL) to write patches that fix whole classes of issues detected by Coverity, in order to work through the long list more rapidly. The goal is to get the kernel source to a state where the static analysis scan yields very few warnings; then any new code that introduces a warning will stand out prominently, making the results of future analysis much more valuable.

The Kernel Self Protection Project keeps a list of projects that they believe would be beneficial to the security of the kernel. The team has been working through this list and if you are interested in helping to make the Linux kernel more secure then we encourage you to get involved. Sign up to the mailing lists, get involved in the discussions and if you are up for it then write some code. If you have specific security projects that you want to work on and you need some support in order to be able to do so then do get in touch with the CII. Supporting this sort of work is our job and we are standing by for your call!

Co-authored by Dr. David A. Wheeler

Everybody loves getting badges.  Fitbit badges, Stack Overflow badges, Boy Scout merit badges, and even LEED certification are just a few examples that come to mind.  A recent 538 article, “Even psychologists love badges,” publicized the value of a badge.

[Image: Core Infrastructure Initiative Best Practices badge]

GitHub now has a number of specific badges for things like test coverage and dependency management, so for many developers they’re desirable. IBM has a slew of certifications for security, analytics, cloud and mobility, Watson Health and more. 

Recently, The Linux Foundation joined the trend with the Core Infrastructure Initiative (CII) Best Practices Badges Program.

The free, self-service Best Practices Badges Program was designed with the open source community. It provides criteria and an automated assessment tool for open source projects to demonstrate that they are following security best practices.

It’s a perfect fit for CII, which is comprised of technology companies, security experts and developers, all of whom are committed to working collaboratively to identify and fund critical open source projects in need of assistance. The badging project is an attempt to “raise all boats” in security, by encouraging projects to follow best practices for OSS development.  We believe projects that follow best practices are more likely to be healthy and produce secure software. 

Here’s more background on the program and some of the questions we’ve recently been asked.

Q: Why badges?

A: We believe badges encourage projects to follow best practices, to hopefully produce better results. The badges will:

  • Help new projects learn what those practices are (think training in disguise).

  • Help users know which projects are following best practices (so users can prefer such projects).  

  • Act as a signal to users. If a project has achieved a number of badges, it will inspire a certain level of confidence among users that the project is being actively maintained and is more likely to consistently produce good results.

Q: Who gets a badge?  Is this for individuals, projects, sites?

A: The CII best practices badge is for a project, not for an individual.  When you’re selecting OSS, you’re picking the project, knowing that some of the project members may change over time.

Q: Can you tell us a little about the “BadgeApp” web application that implements this?

A: “BadgeApp” is a simple web application that implements the criteria as a form you fill in.  It’s OSS, with an MIT license.  All the required libraries are OSS & legal to add; we check this using license_finder.

Our overall approach is that we proactively counter mistakes.  Mistakes happen, so we use a variety of tools, an automated test suite, and other processes to counter them.  For example, we use rubocop to lint our Ruby and ESLint to lint our JavaScript.  The test suite currently has 94% statement coverage with over 3000 checked assertions, and our project has a rule that statement coverage must stay at or above 90%.

Please contribute!  See our CONTRIBUTING.md file for more.

Q: What projects already have a badge?
A: Examples of OSS projects that have achieved the badge include the Linux kernel, Curl, GitLab, OpenBlox, OpenSSL, Node.js, and Zephyr.  We specifically reached out to both smaller projects, like curl, and bigger projects, like the Linux kernel, to make sure that our criteria made sense for many different kinds of projects. It’s designed to handle both front-end and infrastructure projects.

Q: Can you tell us more about the badging process itself? What does it cost?

A: It doesn’t cost anything to get a badge.  Filling out the web form takes about an hour.  It’s primarily self-assertion, and the advantage of self-assertion systems is that they can scale up.

There are known problems with self-assertion, and we try to counter those problems.  For example, we do perform some automation, and, in some cases, the automation will override unjustified claims.  Most importantly, the project’s answers and justifications are completely public, so if someone gives false information, we can fix it and thus revoke the badge.

Q: How were the criteria created?

A: We developed the criteria, and the web application that implements them, as an open source software project.  The application is under the MIT license; the criteria are dual-licensed under MIT and CC-BY version 3.0 or later.  David A. Wheeler is the project lead, but the work is based on comments from many people.

The criteria were primarily based on reviewing a lot of existing documents about what OSS projects should do.  A good example is Karl Fogel’s book Producing Open Source Software, which has lots of good advice. We also preferred to add a criterion only if we could find at least one project that didn’t follow it.  After all, if everyone does something without exception, it’d be a waste of time to ask whether your project does it too. We also worked to make sure that our own web application would earn its own badge, which helped steer us away from impractical criteria.

Q: Does the project have to be on GitHub?

A: We intentionally don’t require or forbid any particular services or programming languages.  A lot of people use GitHub, and in those cases we fill in some of the form based on data we extract from GitHub, but you do not have to use GitHub.

Q: What does my project need to do to get a badge?

A: Currently there are 66 criteria, and each criterion is in one of three categories: MUST, SHOULD, or SUGGESTED. The MUST (including MUST NOT) criteria are required, and 42/66 criteria are MUSTs.  The SHOULD (and SHOULD NOT) criteria are sometimes valid to skip; 10/66 criteria are SHOULDs.  The SUGGESTED criteria have common valid reasons not to do them, but we want projects to at least consider them; 14/66 are SUGGESTED.  People don’t like admitting that they don’t do something, so we think that listing criteria as SUGGESTED is helpful because it will nudge people to do them.

To earn a badge, all MUST criteria must be met, all SHOULD criteria must be met OR be unmet with justification, and all SUGGESTED criteria must be explicitly marked as met OR unmet (since we want projects to at least actively consider them). You can include justification text in markdown format with almost every criterion. In a few cases, we require URLs in the justification, so that people can learn more about how the project meets the criteria.

We gamify this – as you fill in the form you can see a progress bar go from 0% to 100%.  When you get to 100%, you’ve passed!

Q: What are the criteria?

A: We’ve grouped the criteria into 6 groups: basics, change control, reporting, quality, security, and analysis.  Each group has a tab in the form.  Here are a few examples of the criteria:

Basics

The software MUST be released as FLOSS. [floss_license]

Change Control

The project MUST have a version-controlled source repository that is publicly readable and has a URL.

Reporting

The project MUST publish the process for reporting vulnerabilities on the project site.

Quality

If the software requires building for use, the project MUST provide a working build system that can automatically rebuild the software from source code.

The project MUST have at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project).

Security

At least one of the primary developers MUST know of common kinds of errors that lead to vulnerabilities in this kind of software, as well as at least one method to counter or mitigate each of them.

Analysis

At least one static code analysis tool MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language.

Q: Are these criteria set in stone for all time?

A: The badge criteria were created as an open source process, and we expect that the list will evolve over time to include new aspects of software development. The criteria themselves are hosted on GitHub, and we actively encourage the security community to get involved in developing them. We expect that over time some of the criteria marked as SUGGESTED will become SHOULD, some SHOULDs will become MUSTs, and new criteria will be added.

 

Q: What is the benefit to a project for filling out the form?  Is this just a paperwork exercise? Does it add any real value?

A: It’s not just a paperwork exercise; it adds value.

Project members want their project to produce good results.  Following best practices can help you produce good results – but how do you know that you’re following best practices?  When you’re busy getting specific tasks done, it’s easy to forget to do important things, especially if you don’t have a list to check against.

The process of filling out the form can help your project see whether you’re following best practices or forgetting to do something.  We’ve had several cases during our alpha stage where projects tried to fill out the form, found they were missing something, and went back to change their project.  For example, one project didn’t explain how to report vulnerabilities – but they agreed that they should.  So a project either finds out that it is following best practices – and knows that it is – or realizes it’s missing something important and can then fix it.

There’s also a benefit to potential users.  Users want to use projects that are going to produce good work and be around for a while.  Users can use badges like this “best practices” badge to help them separate well-run projects from poorly-run projects.

Q: Does the Best Practices Badge compete with existing maturity models or anything else that already exists?

A: The Best Practices Badge is the first program specifically focused on criteria for an individual OSS project. It is free and extremely easy to apply for, in part because it uses an interactive web application that tries to automatically fill in information where it can.  

This is much different than maturity models, which tend to be focused on activities done by entire companies.

The BSIMM (pronounced “bee simm”) is short for Building Security In Maturity Model. It is targeted at companies, typically large ones, not at OSS projects.

OpenSAMM, or just SAMM, is the Software Assurance Maturity Model. Like BSIMM, it focuses on organizations rather than specific OSS projects, and on identifying activities that would occur within those organizations.

Q: Do the project’s websites have to support HTTPS?

A: Yes, projects have to support HTTPS to get a badge. Our criterion sites_https now says: “The project sites (website, repository, and download URLs) MUST support HTTPS using TLS. You can get free certificates from Let’s Encrypt.” HTTPS doesn’t counter all attacks, of course, but it counters a lot of them quickly, so it’s worth requiring.   At one time HTTPS imposed a significant performance cost, but modern CPUs and algorithms have basically eliminated that.  It’s time to use HTTPS and TLS.

Q: How do I get started or get involved?

A: If you’re involved in an OSS project, please go get your badge from here:

https://bestpractices.coreinfrastructure.org/

If you want to help improve the criteria or application, you can see our GitHub repo:

https://github.com/linuxfoundation/cii-best-practices-badge

We expect that there will need to be improvements over time, and we’d love your help.

But again, the key is, if you’re involved in an OSS project, please go get your badge:

https://bestpractices.coreinfrastructure.org/

Dr. David A. Wheeler is an expert on developing secure software and on open source software. His works include Software Inspection: An Industry Best Practice, Ada 95: The Lovelace Tutorial, Secure Programming for Linux and Unix HOWTO, Fully Countering Trusting Trust through Diverse Double-Compiling (DDC), Why Open Source Software / Free Software (OSS/FS)? Look at the Numbers!, and How to Evaluate OSS/FS Programs. Dr. Wheeler has a PhD in Information Technology, a Master’s in Computer Science, a certificate in Information Security, and a B.S. in Electronics Engineering, all from George Mason University (GMU). He lives in Northern Virginia.

Emily Ratliff is Sr. Director of Infrastructure Security at The Linux Foundation, where she sets the direction for all CII endeavors, including managing membership growth, grant proposals and funding, and CII tools and services. She brings a wealth of Linux, systems and cloud security experience, and has contributed to the first Common Criteria evaluation of Linux, gaining an in-depth understanding of the risk involved when adding an open source package to a system. She has worked with open standards groups, including the Trusted Computing Group and GlobalPlatform, and has been a Certified Information Systems Security Professional since 2004.

 

1) The Linux Foundation Jobs Report (published this week) shows open source programming and DevOps skills to be in demand among hiring managers.

Linux Foundation: Open Source Programming and DevOps Jobs Plentiful – The VAR Guy

2) The Core Infrastructure Initiative’s new badge program underscores CII’s mission to improve the security of open-source projects.

Linux Foundation Launches Badge Program to Boost Open Source Security – ZDNet

3) In a short new video, Linus Torvalds explains why it’s smart to choose an open source career.

Watch Why Linus Torvalds Says Linux is the Best Option for Career Building – TechWorm

4) Bryan Lunduke has noticed the decline in Mac users at tech conferences over the years.

Where Have all the MacBooks Gone at Linux Conferences? – NetworkWorld

5) “IoT developers seem to favor open source because ‘it’s free as in freedom,'” writes Matt Asay.

Open Source Near Ubiquitous in IoT, Report Finds – ReadWrite