
The Linux Foundation’s new Kubernetes training course is now available for developers and system administrators who want to learn container orchestration using this popular open source tool.

Kubernetes is quickly becoming the de facto standard for operating containerized applications at scale in the data center. As its popularity surges, so does demand for IT practitioners skilled in Kubernetes.

“Kubernetes is rapidly maturing in development tests and trials and within production settings, where its use has nearly tripled in the last eight months,” said Dan Kohn, executive director of the Cloud Native Computing Foundation.

Kubernetes Fundamentals (LFS258) is a self-paced, online course that teaches students how to use Kubernetes to manage their application infrastructure. Topics covered include:

  • Kubernetes architecture

  • Deployment

  • How to access the cluster

  • Tips and tricks

  • ConfigMaps, and more.

The course was developed by The Linux Foundation and the Cloud Native Computing Foundation, home of the Kubernetes open source project, so developers and admins will learn the technology straight from the source.

Students will learn the fundamentals needed to understand Kubernetes and quickly get up to speed to start building distributed applications that are scalable, fault-tolerant, and simple to manage.

The course distills key principles, such as pods, deployments, ReplicaSets, and services, and gives students enough information to start using Kubernetes on their own. And it’s designed to work with a wide range of Linux distributions, so students will be able to apply the concepts they learn regardless of their distribution of choice.
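
To make those primitives concrete, here is a minimal sketch, separate from the course material, that uses the official Kubernetes Python client to create a Deployment whose ReplicaSet keeps three nginx pods running. The names, image, and replica count are illustrative choices, not anything prescribed by LFS258:

    # Minimal sketch using the Kubernetes Python client (pip install kubernetes).
    # Assumes a reachable cluster with credentials in ~/.kube/config.
    from kubernetes import client, config

    config.load_kube_config()  # load cluster credentials

    # A pod template with a single nginx container.
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[
            client.V1Container(
                name="web",
                image="nginx:1.11",
                ports=[client.V1ContainerPort(container_port=80)],
            )
        ]),
    )

    # A Deployment asks for three replicas; its ReplicaSet keeps them running.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=template,
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment)

A service would then give those pods a stable network identity; services, ConfigMaps, and cluster access are all covered in the course itself.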

LFS258 also will help prepare those planning to take the Kubernetes certification exam, which will launch later this year. Updates specifically designed to assist with exam preparation are planned for the course ahead of the certification launch.

The course, which has been available for pre-registration since November, can be started immediately. The $199 course fee provides unlimited access to all course content and labs for one year. Sign up now!

Chris Aniszczyk is The Linux Foundation’s Vice President of Developer Relations and Programs, where he serves as the Executive Director of the Open Container Initiative and COO of the Cloud Native Computing Foundation.

As we kick off 2017 and look ahead to the coming year, I want to take some time to reflect back on what the Open Container Initiative (OCI) community accomplished in 2016 and how far we’ve come in a short time since we were founded as a Linux Foundation project a little over a year ago.

The community has been busy working toward our mission to create open industry standards around container formats and runtime! Last year the project saw 3,000+ commits from 128 different authors across 36 different organizations. With the addition of the Image Format specification project, we expanded our initial scope beyond just the runtime specification. Our membership grew to nearly 50 members with the addition of Anchore, ContainerShip, EasyStack, and Replicated, which bring even more diverse perspectives to the community. We also added new developer tools projects, runtime-tools and image-tools, which serve as repositories for conformance testing tools and have been instrumental in gearing up for the upcoming v1.0 release.


We’ve also recently created a new project within OCI called go-digest (which was donated and migrated from docker/go-digest). It provides a strong hash-identity implementation in Go and serves as a common digest package to be used across the container ecosystem.

In terms of early adoption, we have seen Docker support the OCI technology in its container runtime (libcontainer) and contribute it to the OCI project. Additionally, Docker has committed to adopting OCI technology in its latest containerd announcement. The Cloud Foundry community has been an early consumer of OCI by embedding runc via Garden as the cornerstone of its container runtime technology. The Kubernetes project is incubating a new Container Runtime Interface (CRI) that adopts OCI components via implementations like CRI-O and rktlet. The rkt community is adopting OCI technology already and is planning to leverage the reference OCI container runtime runc in 2017. The Apache Mesos community is currently building out support for the OCI image specification.

Speaking of the v1.0 release, we are getting close to launch! The milestone release of the OCI Runtime and Image Format Specifications version 1.0 will be available this first quarter of 2017, drawing the industry that much closer to standardization and true portability. To that end, we’ll be launching an official OCI Certification program once the v1.0 release is out. With OCI certification, folks can be confident that OCI-certified solutions meet a high set of criteria for agile, interoperable container technology.

We’ll be looking into the possibility of adding more projects in the coming year, and we hope to showcase even more demonstrations of the specs in action under different scenarios. We’ll be onsite at several industry events, so please be on the lookout and check out our events page for details.

There is still much work to be done!  The success of our community depends on a wide array of contributions from all across the industry; the door is always open, so please come join us in shaping the future of container technology! In particular, if you’re interested in contributing to the technology, we recommend joining the OCI developer community which is open to everyone. If you’re building products on OCI technology, we recommend joining as a member and participating in the upcoming certification program.

Want to learn more about container standards? Watch the free replay of The Linux Foundation webinar, “Container Standards on the Horizon.” Watch now!

This blog originally appeared on the OCI website.

LinuxCon, ContainerCon, and CloudOpen will be held in China this year for the first time, The Linux Foundation announced this week.

After the success of other Linux Foundation events in the country, including MesosCon Asia and Cloud Foundry Summit Asia, The Linux Foundation decided to offer its flagship LinuxCon, ContainerCon and CloudOpen events in China as well, said Linux Foundation Executive Director Jim Zemlin.

“Chinese developers and businesses have strongly embraced open source and are contributing significant amounts of code to a wide variety of projects,” Zemlin said. “We have heard the call to bring more open source events to China.”

The flagship event, also known as LC3, will be held June 19-20, 2017 at the China National Convention Center in Beijing. As in previous years, the event will also be held in North America and Europe, this year under a new name: Open Source Summit.

LC3 will cover many of the hottest topics in open source, including open networking, blockchain, compliance issues, and the business and professionalization of open source.

Attendees will have access to the content of all three events with one registration. Activities will include 70+ educational sessions, keynotes from industry leaders, an exhibit hall for demonstrations and networking, hackathons, social events, and more.

  • LinuxCon is where the leading maintainers, developers and project leads in the Linux community and from around the world gather together for updates, education, collaboration and problem-solving to further the Linux ecosystem.

  • ContainerCon is the place to learn how to best take advantage of container technology, which is revolutionizing the way we automate, deploy and scale workloads; from hardware virtualization to storage to software defined networking, containers are helping to drive a cloud native approach.

  • CloudOpen gathers top professionals to discuss cloud platforms, automation and management tools, DevOps, virtualization, software-defined networking, storage and filesystems, Big Data tools and platforms, open source best practices, and much more.

The conference is designed to enable attendees to collaborate, share information and learn about the newest and most interesting open source technologies, including Linux, containers, cloud technologies, networking, microservices and more. It also provides insight into how to navigate and lead in the open source community.

Speaking proposals are being accepted through March 18. Submit your proposal now!

Registration for the event will be open in the coming weeks.

The Cloud Native Computing Foundation is taking to the road February 7-9 in Portland, Seattle, and Vancouver to offer end users, developers, students, and other community members the chance to learn from experts at Red Hat, Apprenda, and CNCF how to use Kubernetes and other cloud native technologies in production. Sponsored by Intel and Tigera, the first-ever Cloud Native/Kubernetes 101 Roadshow: Pacific Northwest will introduce key concepts, resources, and opportunities for learning more about cloud native computing.

The CNCF roadshow series focuses on meeting with and catering to those using cloud native technologies in development, but not yet in production.

Each roadshow will be held from 2-5pm, with the full agenda including presentations from the following speakers:

Dan Kohn, Executive Director of the Cloud Native Computing Foundation.  Dan will discuss:

  • What is cloud native computing — orchestrated containers as part of a microservices architecture — and why are so many cloud users moving to it instead of virtual machines

  • An overview of the CNCF projects — Kubernetes, Prometheus, OpenTracing and Fluentd — and how we as a community are building maps through previously uncharted territory

  • A discussion of top resources for learning more, including Kubernetes the Hard Way, Kubernetes bootcamp, and CloudNativeCon/KubeCon and training and certification opportunities

Brian Gracely, Director of Product Strategy at Red Hat. Brian will discuss:

  • Real-world use of Kubernetes in production today at Amadeus, LeShop, Produban/Santander & FICO

  • Why contributing to CNCF-hosted projects should matter to you

  • How cross-community collaboration is the key to the success of the future of Cloud Native

Isaac Arias, Technology Executive, Digital Business Builder, and Passionate Entrepreneur at Apprenda. Isaac will discuss:

  • Brief history of machine abstractions: from VMs to Containers

  • Why containers are not enough: the case for container orchestration

  • From Borg to Kubernetes: the power of declarative orchestration

  • Kubernetes concepts and principles and what it takes to be Cloud Native

By the end of this event, attendees will understand how cloud users are implementing cloud native computing — orchestrated containers as part of a microservices architecture — instead of virtual machines. Real-world Kubernetes use cases at Amadeus, LeShop, Produban/Santander, and FICO will be presented. A detailed walk-through of the Prometheus (monitoring), OpenTracing (tracing standard), and Fluentd (logging) projects and each level of the stack will also be provided.

Each city is limited in space, so sign up now! Use the code MEETUP50 to receive 50% off registration!


With emerging technology, there is often a perception that old is not good: an older tool could lack the features and performance the business requires. Cloud technology changes so quickly that it is fair to ask: do we still need something like Swift, which predates OpenStack itself?

To answer this question, we must understand Swift’s unique architecture. Only with Swift can we harness the power of the BLOB.  

A central concept in Swift is the Binary Large OBject (BLOB). Instead of block storage, data is divided into some number of binary streams. Any file, of any format, can be reduced to a series of ones and zeros, a process sometimes referred to as serialization. Start at the first bit of a file and count ones and zeros until you have a chunk of the desired size, a megabyte or even five gigabytes; this becomes an object. The next run of bits becomes another object, and so on until there is no more file to divide. These objects can be stored locally or sent to a Swift proxy server. The proxy server will send the object to a series of storage servers, where memcached will accept the object at memory speeds, a definite advantage in the days before inexpensive solid state drives.
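
As a rough illustration of that serialization step, here is a short Python sketch that splits a file into fixed-size binary objects. The 1 MiB segment size and the naming scheme are arbitrary choices for the example, not Swift defaults:

    import os

    SEGMENT_SIZE = 1024 * 1024  # 1 MiB per object; Swift objects can reach 5 GB

    def split_into_objects(path):
        """Yield (object_name, data) pairs, one per segment of the file."""
        with open(path, "rb") as source:
            index = 0
            while True:
                chunk = source.read(SEGMENT_SIZE)
                if not chunk:
                    break  # no more file to divide into objects
                yield ("%s/%08d" % (os.path.basename(path), index), chunk)
                index += 1

    # Each segment could be stored locally or sent to a Swift proxy server.
    for name, data in split_into_objects("backup.tar"):
        print(name, len(data))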

These independent objects can be placed anywhere, as long as they can be brought back together in the same order, which is what Swift does on our behalf through services. Swift uses three services to track the blobs, where they are stored, and who owns them:  

  • Object Servers

  • Container Servers

  • Account Servers

These services can be deployed on the same system, or individually across several systems. This allows the Swift cluster to scale and meet the changing needs of the storage. The three services are independent of one another and distribute their data among the available nodes. This distribution has led to the use of the term “ring services.” The distribution among the object, container, and account rings is not round-robin, as the name might imply. Instead, Swift uses an algorithm that combines a partition index derived from the object’s hash with per-device weights to determine which nodes should store the object and its replicas.
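
Here is a much-simplified sketch of that placement idea, assuming a hash-based partition index and a crude stand-in for weighted device assignment; the real Swift ring builder also handles replicas, zones, and rebalancing:

    import hashlib

    PART_POWER = 8  # 2**8 = 256 partitions; production rings use many more
    DEVICES = ["node1/sdb", "node2/sdb", "node3/sdb", "node4/sdb"]

    def partition_for(account, container, obj):
        """Map an object path to a partition via the top bits of its MD5 hash."""
        path = "/%s/%s/%s" % (account, container, obj)
        digest = int.from_bytes(hashlib.md5(path.encode()).digest(), "big")
        return digest >> (128 - PART_POWER)

    def device_for(partition):
        # Stand-in for Swift's weighted assignment of partitions to devices.
        return DEVICES[partition % len(DEVICES)]

    part = partition_for("AUTH_demo", "photos", "cat.jpg")
    print("partition %d -> %s" % (part, device_for(part)))

Because placement is computed from the hash rather than negotiated, any proxy server can locate an object without consulting a central index.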

The Object Servers are responsible for storing the actual blobs. Each object is stored as a file, while its metadata is stored in extended attributes (xattrs). As long as the local filesystem supports xattrs, you should be able to use it for local storage. Each node could use its own filesystem; there is no need for the entire cluster to be the same.
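
For example, on a Linux filesystem with xattr support, per-object metadata can be attached directly to the stored file. This sketch uses Python’s os interface against a file assumed to already exist; the attribute names are illustrative, not Swift’s actual on-disk keys:

    import os

    path = "/srv/node/sdb/objects/demo.data"  # an already-stored object file

    # Attach metadata to the object file itself via extended attributes.
    os.setxattr(path, b"user.content-type", b"image/jpeg")
    os.setxattr(path, b"user.etag", b"d41d8cd98f00b204e9800998ecf8427e")

    print(os.getxattr(path, b"user.content-type"))  # b'image/jpeg'
    print(os.listxattr(path))                       # all attributes on the file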

The objects are stored relative to a container. The Container Server keeps a database of which objects are in which containers. It also maintains a total number of objects and how much storage each container is using.

The third of the “ring services” tracks container ownership and is maintained by the Account Server.  

While the most common deployment of Swift runs all three services on each new node, this can easily be changed as necessary. Some services may be more active than others, and the node resource demands can differ per ring as well. The flexibility of Swift means we can change our cluster to meet the storage demands for size or speed as necessary. We can deploy more Object Servers without needing to spend resources on additional Account Servers.

Swift’s architecture frees us from the common constraints often found with NAS systems. We can store any data, anywhere we want, on whichever hardware we want. There is no vendor lock-in. Rackspace developed a forward-thinking solution to cloud storage, and as an open source tool it has revolutionized enterprise storage.

I discuss Swift in more detail in my recent Linux Foundation webinar on OpenStack: Exploring Object Storage with Ceph and Swift.

Watch the full webinar on demand now (login required).

To help you better understand containers, container security, and the role they can play in your enterprise, The Linux Foundation recently produced a free webinar hosted by John Kinsella, Founder and CTO of Layered Insight. Kinsella covered several topics, including container orchestration, the security advantages and disadvantages of containers and microservices, and some common security concerns, such as image and host security, vulnerability management, and container isolation.

In case you missed the webinar, you can still watch it online. In this article, Kinsella answers some of the follow-up questions we received.

John Kinsella, Founder and CTO of Layered Insight

Question 1: If security is so important, why are some organizations moving to containers before having a security story in place?

Kinsella: Some groups are used to adopting technology earlier. In some cases, the application is low-risk and security isn’t a concern. Other organizations have strong information security practices and are comfortable evaluating the new tech, determining risks, and establishing controls on how to mitigate those risks.

In plain talk, they know their applications well enough that they understand what is sensitive. They studied the container environment to learn what risks an attacker might be able to leverage, and then they avoided those risks either through configuration, writing custom tools, or finding vendors to help them with the problem. Basically, they had that “security story” already.

Question 2: Are containers (whether Docker, LXC, or rkt) really ready for production today? If you had the choice, would you run all production now on containers or wait 12-18 months?

Kinsella: I personally know of companies who have been running Docker in production for over two years! Other container formats that have been around longer have also been used in production for many years. I think the container technology itself is stable. If I were adopting containers today, my concern would be around security, storage, and orchestration of containers. There’s a big difference between running Docker containers on a laptop versus running a containerized application in production. So, it comes down to an organization’s appetite for risk and early adoption. I’m sure there are companies out there still not using virtual machines…

We’re running containers in production, but not every company (definitely not every startup!) has people with 20 years of information security experience.

Question 3: We currently have five applications running across two Amazon availability zones, purely in EC2 instances. How should we go about moving those to containers?

Kinsella: The first step would be to consider if the applications should be “containerized.” Usually people consider the top benefits of containers to be quick deployment of new features into production, easy portability of applications between data centers/providers, and quick scalability of an application or microservice. If one or more of those seems beneficial to your application, then next would be to consider security. If the application processes highly sensitive information or your organization has a very low appetite for risk, it might be best to wait a while longer while early adopters forge ahead and learn the best ways to use the technology. What I’d suggest for the next 6 months is to have your developers work with containers in development and staging so they can start to get a feel for the technology while the organization builds out policies and procedures for using containers safely in production.

Early adopter? Then let’s get going! There are two views on how to adopt containers, depending on how swashbuckling you are: some folks say start with the easiest components to move to containers and learn as you migrate components over. The alternative is to figure out what would be most difficult to move, plan out that migration in detail, and then apply the lessons from that work to make all the other migrations easier. The latter is probably the best way but requires a larger investment of effort up front.

Question 4: What do you mean by anomaly detection for containers?

Kinsella: “Anomaly detection” is a phrase we throw around in the information security industry to refer to technology that has an expectation of what an application (or server) should be doing, and then responds somehow (alerting or taking action) when it determines something is amiss. When this is done at a network or OS level, there’s so many things happening simultaneously that it can be difficult to accurately determine what is legitimate versus malicious, resulting in what are called “false positives.”

One “best practice” for container computing is to run a single process within the container. From an anomaly detection point of view, this is neat because the signal-to-noise ratio is much better. What types of anomalies are being monitored for? They could be network or file related, or maybe even which actions or OS calls the process is attempting to execute. We can focus specifically on what each container should be doing and keep it within a much narrower boundary for what we consider anomalous behavior.
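
As a toy illustration of how narrow that boundary can be, the sketch below, using the Docker SDK for Python, flags any process inside a container other than the single one expected. The container name and expected command are hypothetical, and real products watch network, file, and syscall activity as well:

    import docker  # the Docker SDK for Python (pip install docker)

    # Hypothetical policy: each container should run exactly one known command.
    EXPECTED = {"web-frontend": "nginx"}

    client = docker.from_env()
    for name, expected_cmd in EXPECTED.items():
        top = client.containers.get(name).top()
        cmd_column = top["Titles"].index("CMD")
        for process in top["Processes"]:
            if expected_cmd not in process[cmd_column]:
                print("anomaly in %s: unexpected process %r"
                      % (name, process[cmd_column]))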

Question 5: How could one go and set up containers in a home lab? Any tips? Would like to have a simpler answer for some of my colleagues. I’m fairly new to it myself so I can’t give a simple answer.

Kinsella: Step one: Make sure your lab machines are running a patched, modern OS (released within the last 12 months).

Step two: Head over to http://training.docker.com/self-paced-training and follow their self-paced training. You’ll be running containers within the hour! I’m sure lxd, rkt, etc. have some form of training, but so far Docker has done the best job of making this technology easy for new users to adopt.

Question 6: You mentioned using Alpine Linux. How does musl compare with glibc?

Kinsella: musl is pretty cool! I’ve glanced over the source — it’s so much cleaner than glibc! As a modern rewrite, it probably doesn’t have 100 percent compatibility with glibc, which has support for many CPU architectures and operating systems. I haven’t run into any troubles with it yet, personally, but my use is still minimal. Definitely looking to change that!

Question 7: Are you familiar with OpenVZ? If so, what would you think could be the biggest concern while running an environment with multiple nodes with hundreds of containers?

Kinsella: Definitely — OpenVZ has been around for quite a while. Historically, the question was “Which is more secure — Xen/KVM or OpenVZ?” and the answer was always Xen/KVM, as they provide each guest VM with hardware-virtualized resources. That said, there have been very few security vulnerabilities discovered in OpenVZ over its lifetime.

Compared to other forms of containers, I’d put OpenVZ at a similar level of risk. As it’s older, its codebase should be more mature, with fewer bugs. On the other hand, since Docker is so popular, more people will be trying to compromise it, so the chance of someone finding a vulnerability is higher. A little bit of security-through-obscurity, there. In general, though, I’d go through a similar process of understanding the technology and what is exposed and susceptible to compromise. For both, the most common vector will probably be compromising an app in a container, then trying to burrow through the “walls” of the container. What that means is you’re really trying to defend against local kernel-level exploits: keep up to date and be aware of new vulnerability announcements for software that you use.

John Kinsella is the Founder and CTO of Layered Insight, a container security startup based in San Francisco, California. His nearly 20-year background includes security and network consulting, software development, and datacenter operations. John is on the board of directors for the Silicon Valley chapter of the Cloud Security Alliance and has long been active in open source projects, most recently as a contributor, PMC member, and security team member for Apache CloudStack.

Check out all the upcoming webinars from The Linux Foundation.

This was an exciting year for webinars at The Linux Foundation! Our topics ranged from network hardware virtualization to Microsoft Azure to container security and open source automotive, and members of the community tuned in from almost every corner of the globe. The following are the top 5 Linux Foundation webinars of 2016:

  1. Getting Started with OpenStack

  2. No More Excuses: Why you Need to Get Certified Now

  3. Getting Started with Raspberry Pi

  4. Hyperledger: Blockchain Technologies for Business

  5. Security Top 5: How to keep hackers from eating your Linux machine

Curious to watch all the past webinars in our library? You can access all of our webinars for free by registering on our on-demand portal. On subsequent visits, click “Already Registered” and use your email address to access all of the on-demand sessions.


Getting Started with OpenStack

Original Air Date: February 25, 2016

Cloud computing software represents a shift in the enterprise production environment from a collection of closed, proprietary software to open source software. OpenStack has become the leader in cloud software, supported and used by small and large companies alike. In this session, guest speaker Tim Serewicz addressed the most common OpenStack questions and concerns, including:

  • I think I need it but where do I even start?

  • What are the problems that OpenStack solves?

  • History & Growth of OpenStack: Where’s it been and where is it going?

  • What are the hurdles?

  • What are the sore points?

  • Why is it worth the effort?

Watch Replay >>


No More Excuses: Why you Need to Get Certified Now

Original Air Date: June 9, 2016

According to the 2016 Open Source Jobs Report, 76% of open source professionals believe that certifications are useful for their careers. This webinar session focused on tips, tactics, and practical advice to help professionals build the confidence to take the leap: to commit to, schedule, and pass their next certification exam. The session covered:

  • How certifications can help you reach your career goals

  • Which certification is right for you: Linux Foundation Certified SysAdmin or Certified Engineer?

  • Strategies to thoroughly prepare for the exam

  • How to avoid common exam mistakes

  • The ins and outs of the performance certification process to boost your exam confidence

  • And more…

Watch Replay >>


Getting Started with Raspberry Pi

Original Air Date: December 14, 2016

Maybe you bought a Raspberry Pi a year or two ago and never got around to using it. Or you built something interesting once, but now there’s a new Pi and new add-ons, and you want to know if they could make your project even better? The Raspberry Pi has grown from its original purpose as a teaching tool to become the tiny computer of choice for many makers, allowing those with varied Linux and hardware experience to have a fully functional computer the size of a credit card powering their ideas. Regardless of your level of Pi experience, this session with guest speaker Ruth Suehle offered some great tricks for getting the most out of the Raspberry Pi and showcased dozens of great projects to get you inspired.

Watch Replay >>


Hyperledger: Blockchain Technologies for Business

Original Air Date: December 1, 2016

Curious about the foundations of distributed ledger technologies, smart contracts, and other components that comprise the modern blockchain technology stack? In this session, guest speaker Dan O’Prey from Digital Asset provided an overview of the Hyperledger Project at The Linux Foundation, the main use cases and requirements for the technology in commercial applications, as well as an overview of the history and projects under the Hyperledger umbrella and how you can get involved.

Watch Replay >>


Security Top 5: How to keep hackers from eating your Linux machine

Original Air Date: November 15, 2016

There is nothing a hacker likes more than a tasty Linux machine available on the Internet. In this session, a professional pentester talked through the tactics, tools, and methods that hackers use to invade your space. Learn the five easiest ways to keep them out, and how to know if they have made it in. The majority of the session focused on answering audience questions from both advanced security professionals and those just starting out in security.

Watch Replay >>

Don’t forget to view our upcoming webinar calendar to participate in our upcoming live webinars with top open source experts.

In previous articles, we’ve discussed four notable trends in cloud computing and how the rise of microservices and the public cloud has led to a whole new class of open source cloud computing projects. These projects leverage the elasticity of the public cloud and enable applications designed and built to run on it.

Early on in cloud computing, there was a migration of existing applications to Amazon Web Services, Google, and Microsoft’s Azure. Virtually any app that ran on hardware in private data centers could be virtualized and deployed to the cloud. Now with a mature cloud market, more applications are being written and deployed directly to the cloud and are often referred to as being cloud native.

Here we’ll explore three emerging cloud technologies and mention a few key projects in each area. For a more in-depth explanation and to see a full list of all the projects across six broad categories, download our free 2016 Guide to the Open Cloud report.  

Cloud Native Applications

While there is no textbook definition, the term cloud native, in its simplest sense, describes applications that have been designed to run on modern distributed systems environments capable of scaling to tens of thousands of nodes. The old mantra, “No one ever got fired for buying IBM (or Microsoft),” has given way to a new slogan: “No one is going to get fired for moving to the cloud.” Rather than looking for hard and fast qualifiers for cloud native, we need to look at the design patterns that are being applied to this evolving breed of applications.

In the pre-cloud days, we saw virtualization take hold: entire operating systems became portable inside virtual machines, so a machine could move from server to server based on its compatibility with hypervisors like VMware, KVM, or Xen Project. In recent years, the level of abstraction has moved up to the application layer, where applications are container-based and run in portable units that are easily moved from server to server regardless of hypervisor, thanks to container technologies like Docker and the CoreOS-sponsored rkt (pronounced rocket).

Containers

A more recent addition in the cloud era is the rise of the container, most notably Docker and rkt. These application hosts are an evolution of previous innovations including Linux control groups (cgroups) and LXC, and an even further abstraction to make applications more portable. This allows them to be moved from development environments to production without the need for reconfiguration.
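
A quick sketch of that portability, assuming the Docker SDK for Python and a local Docker daemon; the same image runs unchanged on a laptop or a production host, with the image name chosen purely for illustration:

    import docker  # pip install docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Pull (if needed) and run the same image that would run anywhere else.
    output = client.containers.run("alpine:3.4",
                                   ["echo", "hello from a container"])
    print(output.decode())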

Applications are now deployed to containers either from registries or through continuous deployment systems, with the underlying hosts provisioned and configured using tools like Ansible, Puppet, or Chef.

Finally, to scale out these applications, schedulers such as Kubernetes, Docker Swarm, Mesos, and Diego coordinate the containers across machines and nodes.

Unikernels

Another emerging technology that bears some similarity to containers is the unikernel. A unikernel is a pared-down operating system combined with a single application into a unikernel application, typically run within a virtual machine. Unikernels are sometimes called library operating systems because they include libraries that enable applications to use hardware and network protocols, in combination with a set of policies for access control and isolation of the network layer. There were such systems in the 1990s, including Exokernel and Nemesis, but popular unikernels today include MirageOS and OSv. Because unikernel applications can be used independently and deployed across diverse environments, they can provide highly specialized, isolated services and are increasingly used for developing applications in a microservices architecture.

In the series that follows, we’ll dive into each category of open source cloud technology and list the most useful, influential, and promising open source projects with which IT managers and practitioners can build, manage, and monitor their current and future mission-critical cloud resources.  

Learn more about trends in open source cloud computing and see the full list of the top open source cloud computing projects. Download The Linux Foundation’s Guide to the Open Cloud report today!

Read the other articles in this series:

4 Notable Trends in Open Source Cloud Computing

Trends in the Open Source Cloud: A Shift to Microservices and the Public Cloud

Why the Open Source Cloud Is Important

 

The Linux Foundation today released its third annual “Guide to the Open Cloud” report on current trends and open source projects in cloud computing.

The report aggregates and analyzes industry research to provide insights on how trends in containers, microservices, and more shape cloud computing today. It also defines the open source cloud and cloud native computing and discusses why the open cloud is important to just about every industry.

“From banking and finance to automotive and healthcare, companies are facing the reality that they’re now in the technology business. In this new reality, cloud strategies can make or break an organization’s market success. And successful cloud strategies are built on Linux and open source software,” according to the report.

A list of 75 projects at the end of the report serves as a directory for IT managers and practitioners looking to build, manage, and monitor their cloud resources. These are the projects to know about, try out, and contribute to in order to ensure your business stays competitive in the cloud.

The projects are organized into key categories of cloud infrastructure including IaaS, PaaS, virtualization, containers, cloud operating systems, DevOps, configuration management, logging and monitoring, software-defined networking (SDN), software-defined storage, and networking for containers.

New this year is the addition of a section on container management and automation tools, which is a hot area for development as companies race to fill the growing need to manage highly distributed, cloud-native applications. Traditional DevOps CI/CD tools have also been collected in a separate category, though functionality can overlap.

These additions reflect a movement toward the use of public cloud services and microservices architectures, which is changing the nature of open source cloud computing.

“A whole new class of open source cloud computing projects has now begun to leverage the elasticity of the public cloud and enable applications designed and built to run on it,” according to the report.

To learn more about current trends in cloud computing and to see a full list of the most useful, influential, and promising open source cloud projects, download the report now.

Watch open source leaders, entrepreneurs, visionaries, and educators speak live on Aug. 22-24, 2016, at LinuxCon and ContainerCon North America in Toronto.  The Linux Foundation will provide live streaming video of all the event’s keynotes for those who can’t attend.

Sign up for the free streaming video.

The keynote speakers will focus on the technologies and trends having the biggest impact on open source development today, including containers, networking and IoT, as well as hardware, cloud applications, and the Linux kernel. See the full schedule.

Linus Torvalds, Linux and Git creator and Linux Foundation fellow, will be on stage on Wednesday at 9 a.m. Eastern in a Q&A with Dirk Hohndel, chief open source officer at VMware.

Brian Behlendorf, founder of The Apache Software Foundation, will also give a keynote on Wednesday in his new role as executive director at the Hyperledger Project.

Joe Beda, entrepreneur in residence at venture capital firm Accel Partners, will speak at 5:15 p.m. Eastern on Tuesday. Beda, the lead architect of Google Compute Engine who also helped launch Kubernetes, has carte blanche from Accel to come up with new business ideas and will eventually launch his own startup (possibly around Kubernetes).

Other keynote speakers include:

  • Abhishek Chauhan, vice president and CTO at Citrix

  • Cory Doctorow, science fiction author, activist, journalist and blogger

  • Dr. Margaret Heffernan, entrepreneur, management expert and author of five books including “BEYOND MEASURE: The Big Impact of Small Changes”

  • Dr. Ainissa Ramirez, science and STEM education evangelist and author of “Save our Science”

  • Jim Whitehurst, president and CEO of Red Hat

  • Jim Zemlin, executive director at The Linux Foundation
     

Sign up for free streaming video and follow along on Twitter with the hashtag #linuxcon.

Can’t catch the live stream next week? You can still register and receive recordings of the keynotes after the conference ends.