Join us Wednesday, September 26, 2018, at 9:00 a.m. Pacific for an introductory webinar on deploying Hyperledger Fabric.

Deploying a multi-component system like Hyperledger Fabric to production is challenging. Join us Wednesday, September 26, 2018, at 9:00 a.m. Pacific for an introductory webinar presented by Alejandro (Sasha) Vicente Grabovetsky and Nicola Paoli of AID:Tech.

Why should you care?

Hyperledger Fabric is rather awesome, but deploying a distributed network has been known to cause headaches, and even migraines. In this talk, we will not provide a guillotine that gets rid of these headaches forever, but we will walk you through some tools that can help you deploy a functioning, production-ready Hyperledger Fabric network on a Kubernetes cluster.

Who should attend?

Ideally, you are a Dev, an Ops, or a DevOps engineer interested in learning more about how to deploy Hyperledger Fabric to Kubernetes.

You might know a little bit about Hyperledger Fabric, Docker containers, and Kubernetes. We assume limited knowledge and will do our best to explain and demystify all the components along the way.

What will we talk about?

In this webinar, we will lower the barrier to entry so that you can deploy your very own Hyperledger Fabric network onto Kubernetes. So what are these technologies?

Hyperledger Fabric is a permissioned framework (unlike the permissionless Ethereum network) that allows you to create consortium blockchain networks, where one or more organisations share an immutable ledger of records and smart contracts (called “chaincode” in Hyperledger Fabric).

Kubernetes is a platform for deploying microservice applications (i.e. containerised applications, typically using Docker) on a cluster, such that the applications:

  • use fewer resources than dedicated (bare metal or virtual) machines for each component,
  • are self-healing, so that failed containers are restarted, and
  • are configured in a declarative rather than procedural way, making them robust.
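As a concrete illustration of that declarative style, here is a minimal Kubernetes manifest; the names and image are illustrative, not taken from the webinar. You describe the desired state, and Kubernetes converges to it:

```yaml
# Minimal, illustrative Deployment manifest: declare the desired state
# (two replicas of a container) and Kubernetes keeps it that way.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # hypothetical name
spec:
  replicas: 2              # self-healing: failed pods are replaced automatically
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: nginx:1.25  # any containerised application
```

Applying this with `kubectl apply -f deployment.yaml`, rather than imperatively starting containers by hand, is what makes the configuration declarative and easy to version-control.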

We do this by using a set of Helm Charts. Rather than using a monolithic Helm Chart for the whole deployment, we use separate charts for each Hyperledger Fabric component, namely the Certificate Authority, Peer, CouchDB and Orderer. We demonstrate how to get these charts working together to provide a unified blockchain system.
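As a rough sketch of what that looks like in practice, the commands below install one chart per component using Helm 2 syntax (current in 2018). The `hlf-*` chart names and values files are illustrative assumptions, not an exact transcript of the presenters' setup:

```shell
# Illustrative only: one Helm chart per Fabric component.
# Chart names and values files are assumptions; adapt to your own repo/config.

# Certificate Authority -- issues and manages identities for the network
helm install stable/hlf-ca --name org1-ca -f ca-values.yaml

# CouchDB -- world-state database backing the peer
helm install stable/hlf-couchdb --name org1-couchdb -f couchdb-values.yaml

# Peer -- endorses transactions and maintains the ledger for the organisation
helm install stable/hlf-peer --name org1-peer -f peer-values.yaml

# Orderer -- orders endorsed transactions into blocks
helm install stable/hlf-ord --name orderer -f orderer-values.yaml
```

Wiring the components together (CA certificates into the peer, peer to orderer) is done through each chart's values files, which is the part the webinar demonstrates.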

Along the way, we will explain the different concepts you need to understand your Hyperledger Fabric network:

  • What is a Certificate Authority?
  • Why is the network split across Orderers and Peers?
  • And what are CouchDB and Apache Kafka doing in all of this?

We’ll also point you toward other resources you can use to expand your understanding of how Hyperledger Fabric works, including:

  • the official EdX course and our upcoming chapter on Composer,
  • Sasha’s own course on Hyperledger Fabric and Composer, and
  • the Helm Charts (Kubernetes packages) we created to make our own lives easier.

When and where?

The webinar will run on Wednesday, September 26, from 9 to 10 a.m. PDT.

What are you waiting for? Register here!

About the presenters

Sasha and Nicola work at AID:Tech, where they develop blockchain solutions that leverage a microservice architecture and the Hyperledger Fabric and Composer frameworks to provide digital identities and to transparently trace charitable donations and remittances as digital assets are exchanged.


A recent webinar, Get Involved: How to Get Started with Hyperledger Projects, focuses particularly on making Hyperledger projects more approachable.

Few technology trends have as much momentum as blockchain — which is now impacting industries from banking to healthcare. The Linux Foundation’s Hyperledger Project is helping drive this momentum as well as providing leadership around this complex technology, and many people are interested in getting involved. In fact, Hyperledger nearly doubled its membership in 2017 and recently added Deutsche Bank as a new member.  

A recent webinar, Get Involved: How to Get Started with Hyperledger Projects, focuses particularly on making Hyperledger projects more approachable. The free webinar is now available online and is hosted by David Boswell, Director of Ecosystem at Hyperledger and Tracy Kuhrt, Community Architect.

Hyperledger Fabric, Sawtooth, and Iroha

Hyperledger currently consists of 10 open source projects, seven that are in incubation and three that have graduated to active status.  “The three active projects are Hyperledger Fabric, Hyperledger Sawtooth, and Hyperledger Iroha,” said Boswell.

Fabric is a platform for distributed ledger solutions, underpinned by a modular architecture. “One of the major features that Hyperledger Fabric has is a concept called channels,” said Boswell. “Channels are a private sub-network of communication between two or more specific network members for the purpose of conducting private and confidential transactions.”

According to the website, Hyperledger Iroha is designed to be easy to incorporate into infrastructural projects requiring distributed ledger technology. It features simple construction, with emphasis on mobile application development.

Hyperledger Sawtooth is a modular platform for building, deploying, and running distributed ledgers, and you can find out more about it in this post.  One of the main attractions Sawtooth offers is “dynamic consensus.”

“This allows you to change the consensus mechanism that’s being used on the fly via a transaction, and this transaction, like other transactions, gets stored on the blockchain,” said Boswell. “With Hyperledger Sawtooth, there are ways to explicitly let the network know that you are making changes to the same piece of information across multiple transactions. By being able to provide this explicit knowledge, users are able to update the same piece of information within the same block.”

Sawtooth can also facilitate smart contracts. “You can write your smart contract in a number of different languages, including C++, JavaScript, Go, Java, and Python,” said Boswell. Demonstrations and resources for Sawtooth are available here.

How to contribute to Hyperledger projects

In the webinar, Kuhrt and Boswell explain how you can contribute to Hyperledger projects. “All of our working groups are open to anyone that wants to participate, including the training and education working group,” said Kuhrt. “This particular working group meets on a biweekly basis and is currently working to determine where it can have the greatest impact. I think this is really a great place to get in at the start of something happening.”

What are the first steps if you want to make actual project contributions? “The first step is to explore the contributing guide for a project,” said Kuhrt. “All open source projects have a document at the root of their source directory called contributing, and these guides are really to help you find information about how you’d file a bug, what kind of coding standards are followed by the project, where to find the code, where to look for issues that you might start working with, and requirements for pull requests.”

Now is a great time to learn about Hyperledger and blockchain technology, and you can find out more in the next webinar coming up May 31:

Blockchain and the enterprise. But what about security?

Date: Thursday, May 31, 2018
Time: 10:00 AM Pacific Daylight Time

This talk will leave you with an understanding of how blockchain does, and does not, change the security requirements for your enterprise. Sign up now!

Submit to Speak at Hyperledger Global Forum

Hyperledger Global Forum will offer the unique opportunity for more than 1,200 users and contributors of Hyperledger projects from across the globe to meet, align, plan, and hack together in-person. Share your expertise and speak at Hyperledger Global Forum! We are accepting proposals through Sunday, July 1, 2018. Submit Now >>

Open Mainframe

To learn more about open source and mainframe, join us May 15 at 1:00 pm ET for a webinar led by Open Mainframe Project members Steven Dickens of IBM, Len Santalucia of Vicom Infinity, and Mike Riggs of The Supreme Court of Virginia.

When I mention the word “mainframe” to someone, the natural response is colored by a view of an architecture of days gone by — perhaps even invoking a memory of the Epcot Spaceship Earth ride. This is the heritage of mainframe, but it is certainly not its present state.

From the days of the System/360 in the mid 1960s through to the modern mainframe of the z14, the systems have been designed along four guiding principles of security, availability, performance, and scalability. This is exactly why mainframes are entrenched in the industries where those principles are top level requirements — think banking, insurance, healthcare, transportation, government, and retail. You can’t go a single day without being impacted by a mainframe — whether that’s getting a paycheck, shopping in a store, going to the doctor, or taking a trip.

What is often a surprise to people is how massive open source is on mainframe. Ninety percent of mainframe customers leverage Linux on their mainframe, with broad support across all the top Linux distributions along with a growing number of community distributions. Key open source applications such as MongoDB, Hyperledger, Docker, and PostgreSQL thrive on the architecture and are actively used in production. And DevOps culture is strong on mainframe, with tools such as Chef, Kubernetes, and OpenStack used for managing mainframe infrastructure alongside cloud and distributed infrastructure.

Learn more

You can learn more about open source and mainframe, both the history along with the current and future states of open source on mainframe, in our upcoming presentation. Join us May 15 at 1:00pm ET for a session led by Open Mainframe Project members Steven Dickens of IBM, Len Santalucia of Vicom Infinity, and Mike Riggs of The Supreme Court of Virginia.

In the meantime, check out our podcast series “I Am A Mainframer” on both iTunes and Stitcher to learn more about the people who work with mainframes and what they see as the future of the mainframe.

Community manager and author Jono Bacon will provide tips for building and managing open source communities in a free webinar on Monday, July 24 at 9:30am Pacific.

In this webinar, Bacon will answer questions about community strategy and share an in-depth look at this exciting new conference held in conjunction with this year’s Open Source Summit North America, happening Sept. 11-14 in Los Angeles.

The Open Community Conference provides presentations, panels, and Birds-of-a-Feather sessions with practical guidance for building and engaging productive communities and is an ideal place to learn how to evolve your community strategy. The webinar will provide event details as well as highlights from the conference schedule, which includes such talks as:

  • Building Open Source Project Infrastructures – Elizabeth K. Joseph, Mesosphere

  • Scaling Open Source – Lessons Learned at the Apache Software Foundation – Phil Steitz, Apache Software Foundation

  • Why I Forked My Own Project and My Own Company – Frank Karlitschek, ownCloud

  • So You Have a Code of Conduct… Now What? – Sarah Sharp, Otter Tech

  • Fora, Q&A, Mailing Lists, Chat…Oh My! – Jeremy Garcia, Datadog

Also, if you post questions on Twitter with the #AskJono hashtag about community strategy, leadership, open source, or the conference, you’ll get a chance to win a free ticket to the event (including all the sessions, networking events, and more).

Join us July 24, 2017 at 9:30am Pacific to learn more about community strategy from Jono Bacon. Sign Up Now »

2016 was a pivotal year for Apache Hadoop, a year in which enterprises across a variety of industries moved the technology out of PoCs and the lab and into production. Look no further than AtScale’s latest Big Data Maturity survey, in which 73 percent of respondents report running Hadoop in production.

ODPi recently ran a series of its own Twitter polls and found that 41 percent of respondents do not use Hadoop in production, while 41 percent said they do. This split may partly be due to the fact that the concept of “production” Hadoop can be misleading. For instance, pilot deployments and enterprise-wide deployments are both considered “production,” but they are vastly different in terms of DataOps, as Table 1 below illustrates.


Table 1: DataOps Considerations from Lab to Enterprise-wide Production.

As businesses move Apache Hadoop and Big Data out of proofs of concept (PoCs) and into enterprise-wide production, hybrid deployments are the norm and several important considerations must be addressed.

Dive into this topic further on June 28 in a free webinar with John Mertic, Director of ODPi at The Linux Foundation, and Tamara Dull, Director of Emerging Technologies at SAS Institute.

The webinar will discuss ODPi’s recent 2017 Preview: The Year of Enterprise-wide Production Hadoop and explore DataOps at scale, along with the considerations businesses need to make as they move Apache Hadoop and Big Data out of proofs of concept (PoCs) and into enterprise-wide production and hybrid deployments.

Register for the webinar here.

As a sneak peek to the webinar, we sat down with Mertic to learn a little more about production Hadoop needs.

Why is it that the deployment and management techniques that work in limited production may not scale when you go enterprise-wide?

IT policies kick in as you move from Mode 2 IT — which tends to focus on fast-moving, experimental projects such as Hadoop deployments — to Mode 1 IT — which controls stable, enterprise-wide deployments of software. Mode 1 IT has to consider not only enterprise security and access requirements, but also data regulations that impact how a tool is used. On top of that, cost and efficiency come into play, as Mode 1 IT is cost conscious.

What are some of the step-change DataOps requirements that come when you take Hadoop into enterprise-wide production? 

Integrating into Mode 1 IT’s existing toolset is the biggest requirement. Mode 1 IT doesn’t want to manage tools it’s not familiar with, nor those it doesn’t feel it can integrate into the management tools the enterprise is already using. The more uniformly Hadoop fits into existing DevOps patterns, the more successful it will be.

Register for the webinar now.

On Friday, April 28, The Linux Foundation will continue its new series of Twitter chats with leaders at the organization. This monthly activity, entitled #AskLF, gives the open source community a chance to ask The Linux Foundation’s upper management questions about the organization’s strategies and offerings.

#AskLF aims to increase access to the bright minds and community organizers within The Linux Foundation. While there are many opportunities to interact with staff at Linux Foundation global events, which bring together over 25,000 open source influencers, a live Twitter Q&A will give participants a direct line of communication to the designated hosts.

The second host (following Arpit Joshipura’s chat last month) will be Clyde Seepersad, the General Manager of Training and Certification since 2013. His #AskLF session will take place in the midst of many new training initiatives at the organization, including a new Inclusive Speaker Orientation and a Kubernetes Fundamentals course. @linuxfoundation followers are encouraged to ask Seepersad questions related to Linux Foundation courses, certifications, job prospects in the open source industry, and recent training developments.

Sample questions might include:

  • I’m new to open source but I want to work in the industry. How can a Linux Foundation Certification help me?

  • What are The Linux Foundation Training team’s support offerings like?

  • How will a Linux Foundation certification give me an advantage over other candidates with competitors’ certifications?

Here’s how you can participate in #AskLF:

  • Follow @linuxfoundation on Twitter: Hosts will take over The Linux Foundation’s account during the session.

  • Save the date: April 28, 2017 at 10 a.m. PT.

  • Use the hashtag #AskLF: To ask Clyde your questions while he hosts. Click here to spread the news of #AskLF with your Twitter community.

  • Be a n00b!: If you’ve been considering beginning an open source training journey, don’t be afraid to ask Clyde basic questions about The Linux Foundation’s methods, recommendations, or subjects covered. No inquiry is too basic!

More dates and details for future #AskLF sessions to come! We’ll see you on Twitter, April 28th at 10 a.m. PT.

More information on Linux Foundation Training can be found on the training blog.

Hear Clyde’s thoughts on why Linux Foundation certifications give you a competitive advantage in this on-demand webinar:

No More Excuses: Why You Need to Get Certified Now

*note: unlike Reddit-style AMAs, #AskLF is not focused on general topics that might pertain to the host’s personal life. To participate, please focus your questions on open source training and Clyde Seepersad’s career.

As part of its goal to cultivate more diverse thoughts and opinions in open source, the April Women in Open Source webinar will discuss why publishing your own research, technical work and industry commentary is a smart move for your career and incredibly beneficial to the industry at large.

In this webinar, learn how to get started, good topics to write about, and how to contribute to magazines, journals, and new publishing platforms like Medium. “Why and How To Publish Your Work and Opinions” will be held Thursday, April 27, 2017, at 9 a.m. Pacific Time.

Designed to share both inspirational ideas and practical tips the community can immediately put into action, the webinar will provide examples of women in open source who have successfully published their technical work and viewpoints, as well as identify influential publications to target. So mark your calendars!

Register today for this free webinar, brought to you by Women in Open Source.

As community manager and an editor, Rikki helps grow and oversee a community of moderators, contributors, and participants. The site attracts more than 1 million pageviews each month, with articles contributed by the open source community and community moderators.

Libby oversees content strategy for The Linux Foundation, including the site and its newsletter, and manages a team of freelance writers and editors. In addition, she writes and edits content for the site.

For news on future Women in Open Source events and initiatives, join the Women in Open Source email list and Slack channel. Please send a request to join via email.

If you operate within the open source galaxy or the tech industry in general, you’ve likely run across the phrase “cloud-native” with increasing frequency — and you may be wondering what all the buzz is about.

Cloud-native refers to the model in which applications are built expressly for and run exclusively in the cloud — rather than designed and run on-prem, as enterprises historically have done. Cloud computing architecture, which leans heavily on open source code, promises on-demand computing power at lower cost, with no need to spend excessively on data center equipment, staffing and upkeep. Creating cloud-native applications and services is the natural next step for developers accustomed to working entirely in the cloud.

But enterprise-level cloud-native applications require a platform like Cloud Foundry to get up and running in the cloud. Platforms drastically reduce the resource drains associated with “snowflake” infrastructure, and in fact, they automate and integrate the concepts of continuous delivery, microservices, containers and more, to make deploying an application as easy and fast as possible — in any cloud you want, meaning you can operate in a truly multi-cloud environment.

On March 29 at 11 a.m. Pacific, join Pivotal’s Bridget Kromhout and Michael Coté for a free webinar that will take a deep dive into why cloud-native is the wave of the future and get answers to questions like:

  • What is the cloud-native approach? How will it benefit your software product team?

  • How does cloud-native enable cloud application platforms like Cloud Foundry to standardize production, accelerate cycles and create a multi-cloud environment?

  • Which companies are cloud-native? What lessons can we take from their new model?

Join Cloud Foundry and The Linux Foundation for “Better Software Through Cloud Platforms Like Cloud Foundry” on Wednesday, March 29, 2017 at 11:00am Pacific. Register Now! >>

Women in Open Source will kick off a webinar series that will discuss cultivating more diverse viewpoints and voices in open source, including both inspirational ideas and practical tips the community can immediately put into action. The first webinar, “From Abstract to Presentation: How To Develop a Winning Speaking Submission” will be held Thursday, March 9, 2017, at 8 a.m. Pacific Time.

Register today for this free webinar, brought to you by Women in Open Source.

In this webinar, Deb Nicholson, FOSS policy and community advocate, will discuss how to write a winning abstract for a CFP to become a speaker. From picking interesting topics and writing a compelling proposal to the best style and format and how to get the biggest audience once chosen, Deb will summarize the most important factors to consider. And she’ll spend time answering your questions. So mark your calendars and join us!

Deb is community outreach director for the Open Invention Network, the largest patent non-aggression community in history, which supports freedom of action in Linux as a key element of open source software. She has won the O’Reilly Open Source Award, one of the most recognized awards in the FLOSS world, for her work on GNU MediaGoblin and OpenHatch.

For news on future Women in Open Source events and initiatives, join the Women in Open Source email list and Slack channel. Please send a request to join via email.