

The Linux Foundation’s free online guide Participating in Open Source Communities can help organizations successfully navigate open source waters.

As companies in and out of the technology industry move to advance their open source programs, they are rapidly learning about the value of participating in open source communities. Organizations are using open source code to build their own commercial products and services, which drives home the strategic value of contributing back to projects.

However, diving in and participating without an understanding of projects and their communities can lead to frustration and other unfortunate outcomes. Approaching open source contributions without a strategy can tarnish a company’s reputation in the open source community and incur legal risks.

The Linux Foundation’s free online guide Participating in Open Source Communities can help organizations successfully navigate these open source waters. The detailed guide covers what it means to contribute to open source as an organization and what it means to be a good corporate citizen. It explains how open source projects are structured, how to contribute, why it’s important to devote internal developer resources to participation, as well as why it’s important to create a strategy for open source participation and management.

One of the most important first steps is to rally leadership behind your community participation strategy. “Support from leadership and acknowledgement that open source is a business critical part of your strategy is so important,” said Nithya Ruff, Senior Director, Open Source Practice at Comcast. “You should really understand the company’s objectives and how to enable them in your open source strategy.”

Building relationships is a good strategy

The guide also notes that building relationships at events can make a difference, and that including community members early and often is a good strategy. “Some organizations make the mistake of developing big chunks of code in house and then dumping them into the open source project, which is almost never seen as a positive way to engage with the community,” the guide notes. “The reality is that open source projects can be complex, and what seems like an obvious change might have far reaching side effects in other parts of the project.”

Through the guide, you can also learn how to navigate issues of influence in community participation. It can be challenging for organizations to understand how influence is earned within open source projects. “Just because your organization is a big deal, doesn’t mean that you should expect to be treated like one without earning the respect of the open source community,” the guide advises.

The Participating in Open Source Communities guide can help you with these strategies and more, and it explores how to weave community focus into your open source initiatives. It is one of a new collection of free guides from The Linux Foundation and The TODO Group that provide essential information for any organization running an open source program. The guides are available now to help you run an open source program office where open source is supported, shared, and leveraged. With such an office, organizations can efficiently establish and execute on their open source strategies.

These guides were produced based on expertise from open source leaders. Check out the guides and stay tuned for our continuing coverage.

Don’t miss the previous articles in the series:

How to Create an Open Source Program

Tools for Managing Open Source Programs

Measuring Your Open Source Program’s Success

Effective Strategies for Recruiting Open Source Developers


At the upcoming APIStrat conference in Portland, Taylor Barnett will explore various documentation design principles and discuss best practices.

Taylor Barnett, a Community Engineer at Keen IO, says practice and constant iteration are key to writing good documentation. At the upcoming API Strategy & Practice Conference 2017, Oct. 31 – Nov. 2 in Portland, OR, Barnett will explain the different types of docs and describe some best practices.

In her talk — Things I Wish People Told Me About Writing Docs — Barnett will look at how people consume documentation and discuss tools and tactics to enable other team members to write documentation.  Barnett explains more in this edited interview.

The Linux Foundation: What led you to this talk? Have you encountered projects with bad documentation?

Taylor Barnett: For the last year, my teammate, Maggie Jan, and I have been leading work to improve the developer content and documentation experience at Keen IO. It’s no secret that developers love excellent documentation, but many API companies aren’t always equipped with the resources to make that happen. As a result, we all come across a lot of bad documentation when trying to use developer tools and APIs.

The Linux Foundation: Often, there is a team of documentation writers and there are developers who wrote that piece of software; both are experts in their own fields, but they need a lot of collaboration to create usable docs. How can that collaboration be improved?

Barnett: In large companies, this can definitely be true, although in many companies documentation is still owned by various teams. The need for more collaboration still applies, though. One way to improve collaboration is bringing docs into the product development process early on. If you wait until everything is done and going to be released soon, people writing documentation are going to feel left out of the process and like an afterthought. If people working on the product development collaborate early on, not only does the product become better, but so does the documentation. People who are writing documentation usually spend some time figuring out the API or tool they are writing about, so they only get better when they can work with the people doing product development early on. Also, they can give great feedback from a user’s perspective much earlier in the process.

Another way to improve collaboration is to bring more people into the documentation review process. We do most of our documentation reviews in GitHub. It’s great to not only have someone in the role of an editor review it but also people from the Engineering or Product teams. It increases the number of eyes on the docs and helps make them better.

The Linux Foundation: How should developers approach documentation?

Barnett: Most developers are pretty familiar with the idea of Test Driven Development (TDD), but how familiar are they with Documentation Driven Development (DDD)? The flow for DDD is:

  1. Write or update documentation,
  2. Get feedback on that documentation,
  3. Write a failing test according to that documentation (TDD),
  4. Write code to pass the failing test,
  5. Repeat.

It can be an excellent way for developers to save a lot of time and prevent spending too much time on poorly designed features. As Isaac Schlueter, co-founder of npm, says about Documentation Driven Development, writing clear prose is an “effective way to increase productivity by reducing both the frequency and cost of mistakes.” Our brains can only hold so much information at once. In computer terms, our working memory size is pretty small. Writing down some of the information we are thinking about is a way to “off-load significant chunks of thought with hardly any data-loss,” while allowing us to think slower and more carefully.
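To illustrate the flow Barnett outlines, here is a minimal sketch assuming pytest as the test runner and a hypothetical slugify() helper (an invented example, not Keen IO code):

    # Step 1: write the documentation first. It describes a slugify()
    # helper that does not exist yet:
    #
    #     slugify("Hello, World!") -> "hello-world"
    #     Lowercases a title and converts it to a URL-safe slug.
    #
    # (Step 2 is getting feedback on that documentation from a reviewer.)

    import re

    # Step 3: write a failing test against the documented behavior.
    def test_slugify_matches_documented_behavior():
        assert slugify("Hello, World!") == "hello-world"

    # Step 4: write just enough code to make the test pass, then repeat.
    def slugify(title):
        # Replace runs of non-alphanumerics with single hyphens.
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")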

At Keen IO, for example, we recently split our JavaScript library into three different modules. This decision was inspired by the documentation we were maintaining. We had tried to streamline the docs, but there was just too much to cover in an attention-constrained world, and many important details and features were hidden in the noise. Had we written all of the documentation sooner, we might have made this decision sooner.

Also, as a developer who writes docs myself, I know constant iteration and practice are important. Your first version of the docs isn’t going to be great, but by focusing on writing clear prose, it will get better with time. Finally, having another person who is not familiar with the product step through the documentation to review it is essential.

The Linux Foundation: If developers are writing documentation for other developers, how can they really think as the users?

Barnett: I used to think that developers are the best people to write docs for other developers because they are one of them. While I still believe this is partially true, some developers also assume a lot of knowledge. If it has been a while since a developer has done something, the “curse of knowledge” can set in: the more you know, the more you forget what it was like before. That’s why I like to talk about empathic documentation.

You need to empathize with the user on the other end. Don’t assume they know how to do something; provide resources to fill in the steps that might seem “easy” to you. Also, hearing that something is “easy” or “simple” when it is not working on the user’s end is the worst feeling. It makes your users doubt themselves, feel frustrated, and experience a bunch of other negative emotions. Always try to remember to be empathetic!

The Linux Foundation: What’s the importance of tools in creating documentation?

Barnett: Very important! Earlier I mentioned using GitHub for reviews. I would also recommend having some continuous integration testing in place for your documentation site, if you aren’t using a service like ReadMe or Apiary, to make sure you don’t break it. A related question is: do you build your own thing or use a service? Tools can be helpful, but they might not always be the best fit. You have to find a balance based on your current resources. Lastly, I would recommend checking out Anne Gentle’s book, Docs Like Code. She brings up tools a lot in the book.
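As an illustration of the kind of CI check Barnett describes, here is a hedged sketch that crawls a docs page and fails the build on broken links (docs.example.com is a placeholder host, and a real site would also need to handle relative links and pagination):

    # Minimal link check for a documentation site, suitable as a CI step.
    import sys
    import requests
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(v for k, v in attrs if k == "href" and v)

    page = requests.get("https://docs.example.com/", timeout=10)
    collector = LinkCollector()
    collector.feed(page.text)

    broken = []
    for url in collector.links:
        if not url.startswith("http"):
            continue  # this sketch skips anchors and relative links
        if requests.head(url, timeout=10, allow_redirects=True).status_code >= 400:
            broken.append(url)

    if broken:
        print("Broken links:", broken)
        sys.exit(1)  # non-zero exit fails the CI build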

The Linux Foundation: Who should attend your session?

Barnett: Everyone! Just kidding (kind of). This session is for anyone in a developer-facing role (developer relations, evangelists, advocates, marketers, etc.), anyone on a Product team for a developer-focused product or platform, and any developer or engineer who wants to write better docs.

The Linux Foundation: What is the main takeaway from your talk?

Barnett: Anyone can write docs, and with some practice, iteration, and work on different documentation writing skills, anyone can write better docs.

Learn more in Taylor Barnett’s talk at the APIStrat conference coming up Oct. 31 – Nov. 2 in Portland, Oregon.

The 2017 Linux Kernel Report illustrates the kernel development process and highlights the work of some of the dedicated developers creating the largest collaborative project in the history of computing.

Roughly 15,600 developers from more than 1,400 companies have contributed to the Linux kernel since 2005, when the adoption of Git made detailed tracking possible, according to the 2017 Linux Kernel Development Report released at the Linux Kernel Summit in Prague.

This report — co-authored by Jonathan Corbet, Linux kernel developer and editor of LWN.net, and Greg Kroah-Hartman, Linux kernel maintainer and Linux Foundation fellow — illustrates the kernel development process and highlights the work of some of the dedicated developers who are creating the largest collaborative project in the history of computing.

Jens Axboe, Linux block maintainer and software engineer at Facebook, contributes to the kernel because he enjoys the work. “It’s challenging and fun, plus there’s a personal gratification knowing that your code is running on billions of devices,” he said.

The 2017 report covers development work completed through Linux kernel 4.13, with an emphasis on releases 4.8 to 4.13. During this reporting period, an average of 8.5 changes per hour were accepted into the kernel; this is a significant increase from the 7.8 changes seen in the previous report.

Here are other highlights from the report:

  • Since the last report, more than 4,300 developers from over 500 companies have contributed to the kernel.
  • 1,670 of these developers contributed for the first time — about a third of contributors.
  • The most popular area for new developers to make their first patch is the “staging tree,” which is a place for device drivers that are not yet ready for inclusion in the kernel proper.
  • The top 10 organizations sponsoring Linux kernel development since the last report are Intel, Red Hat, Linaro, IBM, Samsung, SUSE, Google, AMD, Renesas, and Mellanox.

Kernel developer Julia Lawall, Senior Researcher at Inria, works on the Coccinelle tool that’s used to find bugs in the Linux kernel. She contributes to the kernel for many reasons, including “the potential impact, the challenge of understanding a huge code base of low-level code, and the chance to interact with a community with a very high level of technical skill.”

You can learn more about the Linux kernel development process and read more developer profiles in the full report. Download the 2017 Linux Kernel Development Report now.


Learn tricks, shortcuts, and key lessons learned in creating a Developer Experience team, at APIStrat.

Many companies that provide an API also include SDKs. At SendGrid, such SDKs send several billion emails monthly through SendGrid’s Web API. Recently, SendGrid re-built its seven open source SDKs (Python, PHP, C#, Ruby, Node.js, Java, and Go) to support 233 API endpoints, a process I’ll describe in my upcoming talk at APIStrat in Portland.

Fortunately, when we started this undertaking, Matt Bernier had just launched our Developer Experience team, covering our open source documentation and libraries. I joined the team as the first Developer Experience Engineer, with a charter to manage the open source libraries in order to ensure a fast and painless integration with every API SendGrid produces.

Our first task on the Developer Engineering side was to update all of the core SendGrid SDKs, across all seven programming languages, to support the newly released third version of the SendGrid Web API and its hundreds of endpoints. At the time, our SDKs only supported the email sending endpoint for version 2 of the API, so this was a major task for one person. Based on our velocity, we calculated that it would take about 8 years to hand code every single endpoint into each library.

This effort involved automated integration test creation and execution with a Swagger/OAI powered mock API server, documentation, code, examples, CLAs, backlogs, and sending out swag. Along the way, we also gained some insights on what should not be automated — like HTTP clients.
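To give a flavor of that OpenAPI-driven automation, here is a sketch (under assumptions, not SendGrid’s actual tooling; api_spec.json is a placeholder filename) that walks a Swagger/OAI document and enumerates every endpoint needing generated code, docs, and tests:

    # Enumerate endpoints from a Swagger/OpenAPI spec so that SDK code,
    # examples, and tests can be generated per endpoint instead of by hand.
    import json

    with open("api_spec.json") as f:
        spec = json.load(f)

    endpoints = []
    for path, operations in spec.get("paths", {}).items():
        for method in ("get", "post", "put", "patch", "delete"):
            if method in operations:
                endpoints.append((method.upper(), path))

    print(f"{len(endpoints)} endpoints to generate code and tests for")
    for method, path in endpoints:
        print(method, path)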

In my talk at APIStrat, I am going to share some tricks, automations, shortcuts, and key lessons that I learned on our journey to creating a Developer Experience team:

  • We will walk through what we automated and why, including how we leveraged OpenAPI and StopLight.io to automate SDK documentation, code, examples, and tests.
  • Then we’ll dive into how we used CLA-Assistant.io to automate CLA signing and management along with Kotis’ API to automate sending and managing swag for our contributors.
  • We’ll explore how these changes were received by our community, how we adapted to their feedback and prioritized with the RICE framework.

If you’re interested in attending, please take a moment to register and sign up for my talk. I hope to see you there!

By Fatih Degirmenci, Yolanda Robla Mota, Markos Chandras

The OPNFV Community will soon issue its fifth release, OPNFV Euphrates. Over the past four releases, the community has introduced different components from upstream projects, integrated them to compose different flavors of the stack, and put them through extensive testing to help establish a reference platform for Network Functions Virtualization (NFV). While doing this work, the OPNFV community strictly followed its founding principle: Upstream First. Bugs found or features identified as missing are implemented directly into upstream code; OPNFV has carried very little in its own source code repositories, reflecting the project’s true upstream nature. This was achieved by the use of stable release components from the upstream communities. In addition to the technical aspects of the work, OPNFV established good relationships with these upstream communities, such as OpenStack, OpenDaylight, FD.io, and others.

Building on previous experience working on integrating and testing different components of the stack, Euphrates brings applied learnings in Continuous Delivery (CD) and DevOps principles and practices into the fray, via the Cross Community Continuous Integration (XCI) initiative.  Read below for a quick summary about what it is, where we are now, what we are releasing as part of Euphrates, and a sneak peek into the future.

Upstream Development Model
The current development and release model employed by OPNFV provides value to the OPNFV community itself and the upstream communities it works with, but it is limited by its dependence on stable versions of upstream components. This essentially limits the speed at which new development and bugfixes can be contributed to upstream projects. It results in losing the essence of CI (finding issues, providing fast and tailored feedback) and means that developers who contribute to upstream projects might not see results for several months, after everyone has moved on to the next item in their roadmap. The notion of constantly playing “catch up” with upstream projects is not sustainable.

In order for OPNFV to achieve true CI, we need to ensure that upstream communities implement a CD approach. One way to make this happen is to enable patch-level testing and consumption of components from the master branches of upstream communities, allowing for more timely feedback when it matters most. The XCI initiative establishes new feedback loops across communities and, with supporting tooling, makes it possible to:

  • shorten the time it takes to introduce new features
  • make it easier to identify and fix bugs
  • ease the effort to develop, integrate, and test the reference platform
  • establish additional feedback loops within OPNFV, towards the users and between the communities OPNFV works with
  • provide additional testing from a production-like environment
  • increase real-time visibility

Apart from providing feedback to upstream communities, we strive to frequently provide working software to our users, allowing them to be part of the feedback loop. This ensures that while OPNFV pushes upstream communities to CD, the platform itself also moves in the same direction.

Helping Developers Develop by Supporting Source-Based Deployments
One of the most important aspects of XCI is to ensure developers do what they do best: develop. XCI achieves this by supporting source-based deployments. This means that developers can patch the source on their workstations and get their patch deployed quickly, cutting the feedback time from months to hours (or even minutes). The approach employed by XCI ensures that nothing comes between developers and the source code; developers can even override whatever is provided by XCI to make the deployment fit their needs. Additionally, users benefit as well, since they can adjust what they get from XCI to further fit their needs. This is also important for patch-level testing and feedback.

Choice
What we have summarized so far are firsts for OPNFV, and perhaps firsts for the entire open source ecosystem: bringing multiple open source components together from master. But we have a few other firsts provided by XCI as part of the Euphrates release, such as:

  • multiple deployment flavors ranging from all-in-one to full-blown HA deployment
  • multi-distro support: Ubuntu, CentOS, and openSUSE
  • extended CI pipelines for all projects that choose to take part in XCI

This is another focus area of XCI: giving choice. We believe that if we offer choices to developers and users, they will leverage these options to invent new things or use them in new and different ways. XCI empowers the community by removing barriers and constraints and providing freedom of choice.

XCI utilizes tools such as Bifrost and OpenStack Ansible directly from upstream; what XCI does is use these tools in a way that enables CI.

Join the Party
Are we done yet? Of course not. We are working on bringing even more components together and are reaching out to additional communities, such as ONAP and Kubernetes.

If you would like to be part of this, check the documentation and try using the XCI Sandbox to bring up a mini OPNFV cluster on your laptop. You can find XCI developers in the #opnfv-pharos channel on Freenode; while you are there, join us to make things even better.

Finally, we would like to thank everyone who has participated in the development of XCI, reviewed our patches, listened to our ideas, provided hardware resources, motivated us in different ways, and, most importantly, encouraged us. What we have now is just the beginning, and we are on our way to changing things.

Heading to Open Source Summit Europe? Don’t miss Fatih’s presentation, “Bringing Open Source Communities Together: Cross-Community CI,” Monday, October 23, 14:20 – 15:00.

Learn more about XCI by reading the Solutions Brief or watching the video, and signing up for this XCI-based webinar on November 29th.

This article originally appeared on the OPNFV website.

“Recruiting Open Source Developers” is a free online guide to help organizations looking to attract new developers or build internal talent.

Experienced open source developers are in short supply. To attract top talent, companies often have to do more than hire a recruiter or place an ad on a popular job site. However, if you are running an open source program at your organization, the program itself can be leveraged as a very effective recruiting tool. That is precisely where the new, free online guide Recruiting Open Source Developers comes in. It can help any organization in recruiting developers, or building internal talent, through nurturing an open source culture, contributing to open source communities, and showcasing the utility of new open source projects.

Why does your organization need a recruiting strategy? One reason is that the growing shortage of skilled developers is well documented. According to a recent Cloud Foundry report, there are a quarter-million job openings for software developers in the U.S. alone and half a million unfilled jobs that require tech skills. The report also forecasts that the number of unfillable developer jobs will reach one million within the next decade.

Appeal to motivation

That’s a problem, but there are solutions. Effective recruitment appeals to developer motivation. If you understand what attracts developers to work for you, on your open source projects, and in open source in general, you can structure your recruitment strategies in a way that appeals to them. As the Recruiting Open Source Developers guide notes, developers want three things: rewards, respect, and purpose.

The guide explains that your recruitment strategy can benefit greatly if you initially hire people who are leaders in open source. “Domain expertise and leadership in open source can sometimes take quite a long time at established companies,” said Guy Martin, Director of Open at Autodesk. “You need to put training together and start working with people in the company to begin to groom them for that kind of leadership. But, sometimes initially you’ve got to bootstrap by hiring people who are already leaders in those communities.”

Train internal talent

Another key strategy that the guide covers is training internal talent to advance open source projects and communities. “You will want to spend time training developers who show an interest or eagerness in contributing to open source,” the guide notes. “It pays to cultivate this next level of developers and include them in the open source decision-making process. Developers gain respect and recognition through their technical contributions to open source projects and their leadership in open source communities.”

In addition, it makes a lot of sense to set up internal systems for tracking the value of contributions to open source. The goal is to foster pride in contributions and emphasize that your organization cares about open source.  “You can’t throw a stone more than five feet in the cloud and not hit something that’s in open source,” said Guy Martin. “We absolutely have to have open source talent in the company to drive what we’re trying to do moving forward.”

Startups, including those in stealth mode, can apply these strategies as well. They can have developers work on public open source projects to establish their influence and showcase it for prospective talent. Developers have choices in open source, so the goal is to make your organization an attractive place for that talent to apply.

Within the guide, Ibrahim Haddad (@IbrahimAtLinux) recommends the following strategies for advancing recruitment:

  1. Hire key developers and maintainers from the open source projects that are important to you.
  2. Allow your developers working on products to spend a certain percentage of their time contributing upstream.
  3. Set up a mentorship program where senior and more experienced developers guide junior, less experienced ones.
  4. Develop and offer both technical and open source methodology training to your developers.
  5. Participate in open source events. Send your developers and support them in presenting their work.
  6. Provide proper IT infrastructure that will allow your developers to communicate and work with the global open source community without any challenges.
  7. Set up an internal system to track the contributions of your developers and measure their impact.
  8. Internally, plan on contributing and focus on areas that are useful to more than one business unit/product line.

The Recruiting Open Source Developers guide can help you with all these strategies and more, and it explores how to weave open source itself into your strategies. It is one of a new collection of free guides from The Linux Foundation and The TODO Group that are all extremely valuable for any organization running an open source program. The guides are available now to help you run an open source program office where open source is supported, shared, and leveraged. With such an office, organizations can establish and execute on their open source strategies efficiently, with clear terms.

These guides were produced based on expertise from open source leaders. Check out the guides and stay tuned for our continuing coverage.

Also, don’t miss the previous articles in the series: How to Create an Open Source Program; Tools for Managing Open Source Programs; and Measuring Your Open Source Program’s Success.


Learn the basics of using REST APIs at the upcoming APIStrat conference.

APIs are becoming very popular and are a must-know for every type of developer. But what is an API? API stands for Application Programming Interface. It is a way to get one software application to talk to another software application. In this article, I’ll go over the basics of what APIs are and why to use them.

Nom Nom Nom! I happened to be snacking on chips while trying to think of a name for my REST API talk coming up at APIStrat in Portland. Similarly, the act of consuming or using a REST API means to eat it all up. In context, it means to eat it, swallow it, and digest it — leaving any others in the pile exposed. Sounds yummy, right?

It seems that every application out there is hungry for an API. Let’s look at Yelp, for example. By itself, Yelp wouldn’t have the functionality you’d expect. In order to search nearby restaurants or locations, it needs a mapping API; it uses Google’s. With that, you can locate the nearest places and get directions to them. APIs allow you to integrate one tool into another tool to give it more functionality. Without the ability to make these types of integrations, you can say goodbye to the majority of the apps you use!
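To make this concrete, here is a minimal sketch of one application talking to another over a REST API (the endpoint and response fields are placeholders, not a real Google or Yelp API):

    # A basic REST API call: ask a hypothetical places service for
    # restaurants near a latitude/longitude and read the JSON response.
    import requests

    response = requests.get(
        "https://api.example.com/v1/places",  # placeholder endpoint
        params={"lat": 45.52, "lng": -122.68, "type": "restaurant"},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors instead of hiding them

    for place in response.json().get("results", []):
        print(place["name"], place["address"])  # placeholder fields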

So why are APIs so important? Most companies today have several different software applications they need to use, including sales, accounting, CRM, project management systems, etc. Having all of this software work together is increasingly important, both for financial reasons and to make work processes flow more easily. Companies can also create their own tools on top of other APIs to enhance their own software, making their customers happier and giving them the tools they need.

API Basics

Back in 2000, the very first API came from eBay. Since then, APIs have increased exponentially. In 2016, more than 50 million API requests were made, and there are 30,000 available APIs out there. From 2015 to 2016, the number of APIs doubled from 15,000 to 30,000!

In my talk, I will be covering API basics, how to make API requests, how APIs are made, and much more. I will show you how to use Postman to test REST API calls, so that you will leave with the skills to make REST calls against any API. This talk is designed for any audience level. If you are brand new to programming, that’s fine. If you are an experienced programmer who currently uses APIs but wants to go back to basics to understand the breakdown of how APIs work, that is fine, too!

If you want to learn more, be sure to check out my other talk at APIStrat:  “Chatbots are the Future: Let’s Build One!” In this talk, I will go over how to build a working Chatbot using the Cisco Spark API, which is a collaboration API for chat (messages), calling, and video. You don’t need to install or download anything to prepare. I will cover everything in the presentation, and it is designed for everyone to follow along. I guarantee you will have a working chatbot by the end of the presentation.

You can learn more at the upcoming APIStrat conference.

Testing is especially important in modern distributed software systems. Learn more at the upcoming APIStrat conference.

As developers, we often hear that tests are important. Automated testing minimizes the number of bugs released to production, helps prevent regression, improves code quality, supplements documentation, and makes code reviews easier. In short, tests save businesses money by increasing system uptime and keeping developers working on new features instead of fighting fires. While software testing has been around for about as long as software has, I would argue that testing is especially important (and unfortunately more challenging) in modern distributed software systems.

“Distributed software” refers to any software that is not run entirely on one computer. Almost all web applications are distributed software, as they rely on applications on other servers (e.g., remote data stores, internal REST APIs, third-party APIs, content delivery networks), and most mobile and desktop applications are as well. Distributed software presents new challenges and requires a thoughtful approach to testing. This list includes just some of the reasons that testing is crucial for distributed software:

1. Third Party APIs Can Change Unexpectedly

We would like to think that every REST API we use will adhere to some form of versioning, but this doesn’t always happen. Sometimes APIs break when a maintainer fixes a bug, sometimes breaking changes are overlooked, and sometimes the API just isn’t mature or stable yet. With more companies releasing public APIs, we’re bound to see the number of accidentally breaking releases rise, and tests are a great way to prevent those breaking changes from affecting our applications.
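One lightweight defense is a contract test that runs in CI and fails loudly when a third-party response changes shape. A sketch, assuming pytest and a placeholder endpoint:

    # Contract test: assert that a third-party API still returns the
    # fields our application depends on. api.example.com is a placeholder.
    import requests

    def test_user_endpoint_contract():
        response = requests.get("https://api.example.com/v2/users/42", timeout=10)
        assert response.status_code == 200
        body = response.json()
        # Fail the build if the fields we rely on disappear or change type.
        assert isinstance(body["id"], int)
        assert isinstance(body["email"], str)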

2. Internal API Changes Can Affect Your App in Unexpected Ways

Even more commonly, breaking API changes come from within our own organization. For the past few years, I’ve been working with startups where the business requirements change almost as fast as we can implement them, so our internal APIs are rarely stable and the documentation sometimes gets outdated. Slowing down, improving communication between team members, and writing tests for our internal APIs has helped.

3. Remotely Distributed Open Source Packages are More Popular Than Ever

78% of companies are now running on some form of open source software. This has made developing software dramatically faster and easier, but blindly relying on open source packages has bitten plenty of developers as well (see the left-pad incident of 2016). Once again, we hope that open source packages use semantic versioning, but it’s impossible to guarantee this. Testing the boundaries between packages and our software is one way to help improve reliability.

4. Network Connections Aren’t Perfect

In many server-to-server cases, network connections are pretty reliable, but when you start serving up data to a browser or mobile client via an API, it gets much harder to guarantee a connection. In either case, you should have a plan for failure: Does your app break? Throw an error? Retry gracefully? Adding tests that simulate a bad network connection can be a huge help in minimizing poor user experiences or data loss.
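Such a failure can be simulated with Python’s unittest.mock, here forcing a timeout against a hypothetical fetch_profile() wrapper that is expected to degrade gracefully:

    # Simulate a network failure and assert the app degrades gracefully
    # instead of crashing.
    from unittest import mock
    import requests

    def fetch_profile(user_id):
        try:
            r = requests.get(f"https://api.example.com/users/{user_id}", timeout=5)
            return r.json()
        except requests.exceptions.Timeout:
            return None  # graceful fallback; a real app might retry or cache

    def test_profile_fetch_survives_timeout():
        with mock.patch("requests.get", side_effect=requests.exceptions.Timeout):
            assert fetch_profile(42) is None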

5. Siloed Teams Can Lead to Communication Gaps

One of the advantages of distributed systems is that a team can be assigned to each component. This allows each team to become an expert on just one part of the system, enabling software organizations to scale as we see at Amazon. The downside to these siloed teams is that communication becomes more difficult, but a good test suite, thorough documentation, and self-documenting APIs can help minimize these gaps.

How Do We Test Distributed Systems?

Distributed software has become more popular as the cost of cloud computing has gone down and network connections have become more reliable. While distributed systems offer unique advantages for scaling and cost savings, they introduce new challenges for testing.

Borrowing from some of Martin Fowler’s ideas on testing microservices and my own experience building layered test plans, I’ll be presenting a strategy for testing distributed systems at this year’s API Strategy & Practice Conference. If you’re interested in learning more about the topic of testing distributed software, or you have questions, you can find me at the conference, or anytime on Twitter.

Learn more at the API Strategy & Practice Conference.


Measuring Your Open Source Program’s Success is a free guide to help any organization learn exactly how their open source program is driving business value.

Open source programs are proliferating within organizations of all types, and if yours is up and running, you may have arrived at the point where you want to measure the program’s success. Many open source program managers are required to demonstrate the ROI of their programs, but even if there is no such requirement, understanding the metrics that apply to your program can help optimize it. That is where the free Measuring Your Open Source Program’s Success guide comes in. It can help any organization measure program success and can help program managers articulate exactly how their programs are driving business value.

Once you know how to measure your program’s success, publicizing the results — including the good, the bad, and the ugly — increases your program’s transparency, accountability, and credibility in open source communities. To see this in action, check out example open source report cards from Facebook and Google.

Facebook’s open source program office periodically posts the month-over-month results from its open source projects internally and sends an executive report to management. “Reports are just a good way to raise awareness,” said Christine Abernathy, Open Source Developer Advocate at Facebook. “Even though Facebook places a high value on open source (as an organization), it’s still always a good thing to market yourself internally all the time and show your value.”

Existing tools can help you measure program success. You can begin by setting up the right tools for collecting data and make sure the data sources are clean and in a format that everyone can understand. Many organizations create a dashboard of metrics for their open source programs, to track all of the data in one place and provide project snapshots that can help assess progress at a glance. (See our guide on Tools for Managing Open Source Programs.)

Key metrics for measuring open source program success

There are countless ways to measure success and track progress for open source programs. Project health isn’t the only thing to track, but it is important. “How do you actually get the smartest people in the world working at your company?” asks Chris Aniszczyk, Executive Director of the Open Container Initiative and COO of the Cloud Native Computing Foundation (and former head of open source programs at Twitter). “Well, you open source stuff and then you convince them to contribute to your projects.”

It helps to be able to quantify project health. GitHub’s guide on open source metrics gives a great overview of what project maintainers should pay attention to. Some key project metrics to track are listed below; a short sketch of how a couple of them might be collected follows the list:

  • Number of contributors (and ratio of internal to external contributors)
  • Number of pull requests submitted, opened and accepted (and time remaining open)
  • Number of issues submitted (and length of time remaining open)
  • Number of commits per contributor (internal and external)
  • Number of external adopters
  • Number of projects created or contributed to (program wide)
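As a minimal sketch of collecting two of these numbers with GitHub’s REST API (example-org/example-repo is a placeholder; unauthenticated requests are rate-limited, and real use would page through results):

    # Pull two basic health metrics for a repository from the GitHub API:
    # contributor count and currently open pull requests.
    import requests

    REPO = "example-org/example-repo"  # placeholder repository
    BASE = f"https://api.github.com/repos/{REPO}"

    contributors = requests.get(f"{BASE}/contributors",
                                params={"per_page": 100}, timeout=10).json()
    open_prs = requests.get(f"{BASE}/pulls",
                            params={"state": "open", "per_page": 100}, timeout=10).json()

    print(f"{REPO}: {len(contributors)} contributors (first page), "
          f"{len(open_prs)} open pull requests (first page)")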

Other metrics include popularity and awareness, influence, and program costs. As you delve into these metrics, you can concretely report everything from diversity of contributors to your projects to the number of followers you have across channels.

The Measuring Your Open Source Program’s Success guide can help you with all these initiatives and more, and it explores how to set program goals and measure whether or not they are being met. It is one of a new collection of free guides from The Linux Foundation and The TODO Group that are all extremely valuable for any organization running an open source program. The guides are available now to help you run an open source program office where open source is supported, shared, and leveraged. With such an office, organizations can establish and execute on their open source strategies efficiently, with clear terms.

You can read more in previous articles, on How to Create an Open Source Program and Tools for Managing Open Source Programs. We encourage you to check out all the guides and stay tuned for more coverage of them.

Join the Apache Mesos community in Prague for town halls, MesosCon university, and a full-day hackathon.

Get the latest on Apache Mesos with Ben Hindman, Co-Creator of Apache Mesos, at MesosCon Europe taking place October 25-27, 2017 in Prague, Czech Republic. At the conference, you’ll hear insights by industry experts deploying Mesos clusters, learn about containerization and security in Mesos, and more.

This annual conference brings together users and developers to share and learn about the Mesos project and its growing ecosystem. The conference features two days of sessions focused on the Apache Mesos Core and related technologies, as well as a one-day hackathon, town halls, and MesosCon University.  

Highlights include:

  • SMACK in the Enterprise keynote panel: Hear how the SMACK stack is impacting the data analytics landscape at large enterprises. Panelists will be announced soon.
  • MesosCon University: Tutorial-style sessions will offer hands-on learning for building a stateful service, operating your cluster, or bootstrapping a secure Mesos cluster.
  • Town Halls: A community gathering to discuss pressing needs and issues. The town halls will begin at 7:00pm after the onsite reception on Thursday, and will include drinks and appetizers sponsored by Mesosphere. Have a town hall you think we should run? Reach out to events@linuxfoundation.org.
  • Hackathon: Come and work on new Mesos features, new demos, new documentation, and win great prizes! The Hackathon will take place on Wednesday, October 25, and is included with your conference registration.  

View the full schedule of sessions and activities here.

Get a preview of what to expect at MesosCon Europe. Watch videos from MesosCon North America 2017 here.

Register now and use discount code MCEULDC17 to save $25 off your pass to MesosCon Europe.