Posts

Alibaba Cloud, AWS, Cloud Foundry, Docker, Google, IBM, Rancher Labs and more support promotion of the ecosystem’s most widely adopted container runtime

SAN FRANCISCO, Calif., February 28, 2019 – The Cloud Native Computing Foundation® (CNCF®), which sustains open source technologies like Kubernetes® and Prometheus™, today announced that containerd is the fifth project to graduate, following Kubernetes, Prometheus, Envoy, and CoreDNS. To move from the maturity level of incubation to graduation, projects must demonstrate thriving adoption, diversity, a formal governance process, and a strong commitment to community sustainability and inclusivity.

“After being accepted into CNCF nearly two years ago, containerd continues to see significant momentum – showcasing the demand for foundational container technologies,” said Chris Aniszczyk, CTO of the Cloud Native Computing Foundation. “A lot of work and collaboration from the community went into the development and testing of a stable, core container runtime. The community worked hard to broaden its maintainer and adoption base, on top of going through an external security audit, so I’m thrilled to see the project graduate.”

Born at Docker in 2014, containerd started out as a lower-layer runtime manager for the Docker engine. Following its acceptance into CNCF in March 2017, containerd has become an industry-standard container runtime focused on simplicity, robustness, and portability, with its widest usage and adoption as the layer between the Docker engine and the OCI runc executor.

“When Docker contributed containerd to the community, our goal was to share a robust and extensible runtime that millions of users and tens of thousands of organizations have already standardized on as part of Docker Engine,” said Michael Crosby, containerd maintainer and Docker engineer. “It is rewarding to see increased adoption and further innovation with containerd over the past year as we expanded the scope to address the needs of modern container platforms like the Docker platform and the Kubernetes ecosystem. As adoption of containerd continues to grow, we look forward to continued collaboration across the ecosystem to push our industry forward.”

“The IBM Cloud Kubernetes Service (IKS) is focused on providing an awesome managed Kubernetes experience for our customers. To achieve this, we are always looking at streamlining our architecture and operational posture in IKS,” said Dan Berg, Distinguished Engineer, IBM Cloud Kubernetes Service. “Moving to containerd has helped to simplify the Kubernetes architecture that we configure and manage on behalf of customers. By adopting containerd as our container engine, we have removed a layer from the architecture, which has both improved operations and increased service performance for our customers.”

containerd has had a variety of maintainers and reviewers since its inception, with 14 committers, 4,406 commits and 166 contributors currently from companies including Alibaba, Cruise Automation, Docker, Facebook, Google, Huawei, IBM, Microsoft, NTT, Tesla, and many more. containerd project statistics, contributor stats, and more can be found on DevStats.

“Since its inception, Alibaba has been using containerd, and we are thrilled to see the project hit this milestone. containerd is playing a critical role as an open, reliable and common foundation of container runtimes. At Alibaba Cloud, we take advantage of the simplicity, robustness and extensibility of containerd in Alibaba Cloud Kubernetes Service and Serverless Kubernetes,” said Li Yi, Senior Staff Engineer, Alibaba Cloud. “The Alibaba team will continue our commitment to the community to drive innovation forward.”

To officially graduate from incubating status, the project also adopted the CNCF Code of Conduct, executed an independent security audit and defined its own governance structure to grow the community. Additionally, containerd also had to earn (and maintain) a Core Infrastructure Initiative Best Practices Badge. Completed on September 1, 2018, the CII badge shows an ongoing commitment to code quality and security best practices.

containerd Background

  • containerd is an industry-standard container runtime with an emphasis on simplicity, robustness and portability. containerd is available as a daemon for Linux and Windows.
  • containerd manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage to network attachments and beyond.
  • For downloads, documentation, and how to get involved, visit https://github.com/containerd/containerd.

About Cloud Native Computing Foundation

Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization. The Cloud Native Computing Foundation (CNCF) hosts critical components of cloud native software stacks, including Kubernetes and Prometheus. CNCF serves as the neutral home for collaboration and brings together the industry’s top developers, end users and vendors – including the world’s largest public cloud and enterprise software companies as well as dozens of innovative startups. CNCF is part of The Linux Foundation, a nonprofit organization. For more information about CNCF, please visit www.cncf.io.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

Media Contact

Natasha Woods

The Linux Foundation

nwoods@linuxfoundation.org

Today, we’re pleased to announce that containerd (pronounced Con-Tay-Ner-D), an industry-standard runtime for building container solutions, has reached its 1.0 milestone. From Docker’s announcement in December of last year that it was spinning out its core runtime to its donation to the CNCF in March 2017, the containerd project has experienced significant growth and progress over the past 12 months. Within both the Docker and Kubernetes communities, there has been a significant uptick in investment, with contributions from independents and CNCF member companies alike, including Docker, Google, NTT, IBM, Microsoft, AWS, ZTE, Huawei and ZJU.

Similarly, the maintainers have been working to add key functionality to containerd. The initial containerd donation included methods for:

  • transferring container images,
  • container execution and supervision,
  • low-level local storage and network interfaces and
  • the ability to work on Linux, Windows and other platforms.

Additional work has been done to add:

  • a complete storage and distribution system that supports both OCI and Docker image formats,
  • a robust events system, and
  • a more sophisticated snapshot model to manage container filesystems.

These changes helped the team build out a smaller interface for the snapshotters while still fulfilling the requirements of higher-level components like a builder. They also reduce the amount of code needed, making the project much easier to maintain in the long run.

The containerd 1.0 milestone comes after several months in alpha and beta status, which allowed the team to implement many performance improvements: the creation of a stress-testing system and improvements in garbage collection and shim memory usage.

“In 2017, key functionality has been added to containerd to address the needs of modern container platforms like Docker and orchestration systems like Kubernetes,” said Michael Crosby, Maintainer for containerd and engineer at Docker. “Since our announcement in December, we have been progressing the design of the project with the goal of making it easily embeddable in higher level systems to provide core container capabilities. We will continue to work with the community to create a runtime that’s lightweight yet powerful, balancing new functionality with the desire for code that is easy to support and maintain.”

containerd is already being used by Kubernetes for its cri-containerd project, which enables users to run Kubernetes clusters using containerd as the underlying runtime. containerd is also an essential upstream component of the Docker platform and is currently used by millions of end users. There is also strong alignment with other CNCF projects: containerd exposes an API using gRPC and exposes metrics in the Prometheus format. containerd also fully leverages the Open Container Initiative (OCI) runtime and image format specifications, as well as the OCI reference implementation (runc), and will pursue OCI certification when it is available.
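For a quick, hands-on illustration, containerd ships with a small CLI client, ctr, that talks to this gRPC API directly. The following is a sketch against a recent containerd installation (the subcommand syntax has changed since the 1.0 era), not the exact commands from this release:

    # Pull an image and run a throwaway container through containerd's gRPC API
    sudo ctr images pull docker.io/library/alpine:latest
    sudo ctr run --rm -t docker.io/library/alpine:latest demo sh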

Notable containerd facts and figures:

  • 1922 GitHub stars, 401 forks
  • 108 contributors
  • 8 maintainers from independents and member companies alike, including Docker, Google, IBM, ZTE and ZJU.
  • 2949+ commits, 26 releases

Availability and Resources

To participate in containerd: https://github.com/docker/containerd/.

Meet us at KubeCon

Learn more about containerd at KubeCon by attending Justin Cormack’s LinuxKit & Kubernetes talk at the Austin Docker Meetup, Patrick Chanezon’s Moby session, Phil Estes’ session, or the containerd salon.

This week was a busy one for open source enterprise wins! Read the latest installment of our weekly digest to stay on the cutting edge of OSS business beats.

1) The Linux Foundation’s Dronecode project receives accolades for the creator of its PX4 project; Lorenz Meier has been recognized by MIT Technology Review in its annual list of Innovators Under 35.

Dronecode’s Meier Named to MIT Technology Review’s Prestigious List – Unmanned Aerial Online

2) “New round places company’s raised cash at more than $250m as the container application market value soars to $2.7bn.”

From Startup To An Open Source Giant. Docker Valuation Hits $1.3B Amid Fresh Funding Round – Data Economy

3) “One of the keys to Ubuntu’s success has been heavy optimization of the standard Linux kernel for cloud computing environments.”

Cloud-Optimized Linux: Inside Ubuntu’s Edge in AWS Cloud Computing – Silicon Angle

4) Microsoft announced the purchase of a startup called Cycle Computing for an “undisclosed sum.” While it doesn’t have the name recognition of some of its peers, the startup plays a pivotal role in cloud computing today.

Microsoft Just Made a Brilliant Acquisition in Cloud Wars Against Amazon, Google – Business Insider

5) The open source content management system Ghost was initially released without frills or fanfare. After 2,600 commits, the 1.0 version is ready to tackle the blogging giants.

Ghost, the Open Source Blogging System, is Ready For Prime Time – TechCrunch

In this preview of Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered Docker installation, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Volumes.

This final article in the series looks at Docker Compose, a tool you can use to create multi-container applications with just one command. If you are using Docker for Mac or Windows, or if you installed the Docker Toolbox, then Docker Compose is available by default. If not, you can download it manually.

To try out WordPress, for example, let’s create a folder called wordpress and, in that folder, create a file called docker-compose.yaml. We will be exposing the wordpress container on port 8000 of the host system.
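As a rough sketch of what that file can look like (modeled on the official WordPress quickstart; the db service, credentials, and image tags here are illustrative assumptions, not the exact file from the course), written as a shell heredoc:

    mkdir wordpress && cd wordpress
    cat > docker-compose.yaml <<'EOF'
    version: '2'
    services:
      db:
        image: mysql:5.7              # illustrative; any WordPress-compatible DB works
        environment:
          MYSQL_ROOT_PASSWORD: wordpress
          MYSQL_DATABASE: wordpress
      wordpress:
        image: wordpress:latest
        depends_on:
          - db
        ports:
          - "8000:80"                 # host port 8000 -> container port 80
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_PASSWORD: wordpress
    EOF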

When we start an application with Docker Compose, it creates a user-defined network and attaches the application’s containers to it. The containers communicate over that network. Because we have configured Docker Machine to connect to our dockerhost, Docker Compose will also use that connection.

Now, with the docker-compose up command, we can deploy the application. With the docker-compose ps command, we can list the containers created by Docker Compose, and with docker-compose down, we can stop and remove the containers. This also removes the network associated with the application. To additionally delete the associated volumes, we need to pass the -v option to the docker-compose down command.
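Put together, the lifecycle described above looks like this when run from the wordpress folder containing the compose file:

    docker-compose up -d     # create the network and start the containers in the background
    docker-compose ps        # list the containers managed by this compose project
    docker-compose down      # stop and remove the containers and the network
    docker-compose down -v   # same, but also remove the associated volumes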

Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

This series previews the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we installed Docker, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Hub.

In this article, we’ll talk a bit about Docker Volumes and networking. To create a volume, we use the docker volume create command, and to list the volumes, we use the docker volume list command.
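For example, with an illustrative volume name of myvol:

    docker volume create myvol
    docker volume list            # 'docker volume ls' is the equivalent short form
    docker volume inspect myvol   # show the volume's driver and mountpoint on the host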

To mount the volume inside a container, we need to use the -v option with the docker container run command. For example, we can mount the myvol volume inside the container at the /data location. After moving into the /data folder, we create two files there.

Next, we come out of the container and create a new container from the busybox image, mounting the same myvol volume. The files that we created in the earlier container are available under /data. This way, we can share content between containers using volumes. You can watch both of the videos below for details.
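A condensed, non-interactive sketch of that flow, using the volume and image names above:

    # First container: mount myvol at /data and create two files there
    docker container run --rm -v myvol:/data busybox sh -c 'touch /data/file1 /data/file2'

    # Second container: the same volume still holds the files created above
    docker container run --rm -v myvol:/data busybox ls /data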

To review Docker networking, we first create a container from the nginx image. With the docker container inspect command, we can get the container’s IP address, but that IP address is assigned by the docker0 bridge and is not accessible from the external world.
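For instance (the container name web is an illustrative assumption):

    docker container run -d --name web nginx
    docker container inspect --format '{{ .NetworkSettings.IPAddress }}' web   # an address on the docker0 bridge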

To access the container from the external world, we need to do port mapping between the host port and the container port. So, with the -p option added to the docker container run command, we can map a host port to a container port. For example, we can map port 8080 of the host system to port 80 of the container.

Once the port is mapped, we can access the container from the external world by connecting to the dockerhost on port 8080.
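Continuing the sketch above, and assuming the Docker Machine VM from the earlier articles is named dockerhost:

    docker container rm -f web                            # remove the earlier, unmapped container
    docker container run -d --name web -p 8080:80 nginx   # map host port 8080 to container port 80
    curl http://$(docker-machine ip dockerhost):8080      # the response comes from nginx in the container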

Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

In this series previewing the self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered installing Docker, introduced Docker Machine, and some basic commands for performing Docker container and image operations. In the three sample videos below, we’ll take a look at Dockerfiles and Docker Hub.

Docker can build an image by reading the build instructions from a file that’s generally referred to as a Dockerfile. So, first, check your connectivity with the “dockerhost” and then create a folder called nginx. In that folder, we create a file called dockerfile, and in the dockerfile, we use different instructions, like FROM, RUN, EXPOSE, and CMD.
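A minimal sketch of such a file, written here with a shell heredoc; the exact instructions used in the course videos will differ:

    mkdir nginx && cd nginx
    cat > dockerfile <<'EOF'
    FROM ubuntu:16.04
    RUN apt-get update && apt-get install -y nginx
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]
    EOF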

To build an image, we’ll need to use the docker build command. With the -t option, we can specify the image name, and with a “.” at the end, we are asking Docker to look in the current folder to find the dockerfile and build the image.
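For example (docker build looks for a file named Dockerfile by default, so we point it at our lowercase dockerfile with the -f option; the image name my-nginx is an illustrative assumption):

    docker build -t my-nginx -f dockerfile .
    docker image ls my-nginx    # confirm the freshly built image is present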

On the Docker Hub, we also see the repositories — for example, for nginx, redis, and busybox. For a given repository, you can have different tags, which define the individual images. On the repository page, we can also see the corresponding Dockerfile from which an image is created — for example, you can see the Dockerfile of the nginx image.

If you don’t have an account on Docker Hub, I recommend creating one at this time. After logging in, you can see all the repositories we’ve created. Note that the repository name is prefixed with our username.

To push an image to Docker Hub, make sure that the image name is prefixed with the username used to log into the Docker Hub account. With the docker image push command, we can push the image to the Docker Registry, which by default is the Docker Hub.
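As a sketch, with <username> standing in for your Docker Hub username and my-nginx for the image built above:

    docker login                                           # authenticate against Docker Hub
    docker image tag my-nginx <username>/my-nginx:latest   # prefix the image name with your username
    docker image push <username>/my-nginx:latest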

Docker Hub has a feature called automated builds, which can trigger a build on Docker Hub as soon as you commit code to your GitHub repository. On GitHub, we have a repository called docker-automated-build, which contains a Dockerfile from which the image will be created. In a real-world example, we would have our application code alongside the Dockerfile.

To create the automated build, we first need to log into our Docker Hub account and then link our GitHub account with Docker Hub. Once the GitHub account is linked, we click on “Create” and then on “Create Automated Build.”

Next, we provide a short description and then click on “Create.” Then, we select the GitHub repository that we want to link with this Docker Hub automated build procedure. Now, we can go to our GitHub repository and change something there. As soon as we commit the change, a Docker build process will start on our Docker Hub account.

Our image build is currently queued and will be scheduled eventually, after which our image will be created. Once the build completes, anybody will be able to download the image.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

To help you better understand containers, container security, and the role they can play in your enterprise, The Linux Foundation recently produced a free webinar hosted by John Kinsella, Founder and CTO of Layered Insight. Kinsella covered several topics, including container orchestration, the security advantages and disadvantages of containers and microservices, and some common security concerns, such as image and host security, vulnerability management, and container isolation.

In case you missed the webinar, you can still watch it online. In this article, Kinsella answers some of the follow-up questions we received.

John Kinsella, Founder and CTO of Layered Insight

Question 1: If security is so important, why are some organizations moving to containers before having a security story in place?

Kinsella: Some groups are used to adopting technology earlier. In some cases, the application is low-risk and security isn’t a concern. Other organizations have strong information security practices and are comfortable evaluating the new tech, determining risks, and establishing controls on how to mitigate those risks.

In plain talk, they know their applications well enough that they understand what is sensitive. They studied the container environment to learn what risks an attacker might be able to leverage, and then they avoided those risks either through configuration, writing custom tools, or finding vendors to help them with the problem. Basically, they had that “security story” already.

Question 2: Are containers (whether Docker, LXC, or rkt) really ready for production today? If you had the choice, would you run all production now on containers or wait 12-18 months?

Kinsella: I personally know of companies who have been running Docker in production for over two years! Other container formats that have been around longer have also been used in production for many years. I think the container technology itself is stable. If I were adopting containers today, my concern would be around security, storage, and orchestration of containers. There’s a big difference between running Docker containers on a laptop versus running a containerized application in production. So, it comes down to an organization’s appetite for risk and early adoption. I’m sure there are companies out there still not using virtual machines…

We’re running containers in production, but not every company (definitely not every startup!) has people with 20 years of information security experience.

Question 3: We currently have five applications running across two Amazon availability zones, purely in EC2 instances. How should we go about moving those to containers?

Kinsella: The first step would be to consider if the applications should be “containerized.” Usually people consider the top benefits of containers to be quick deployment of new features into production, easy portability of applications between data centers/providers, and quick scalability of an application or microservice. If one or more of those seems beneficial to your application, then next would be to consider security. If the application processes highly sensitive information or your organization has a very low appetite for risk, it might be best to wait a while longer while early adopters forge ahead and learn the best ways to use the technology. What I’d suggest for the next 6 months is to have your developers work with containers in development and staging so they can start to get a feel for the technology while the organization builds out policies and procedures for using containers safely in production.

Early adopter? Then let’s get going! There are two views on how to adopt containers, depending on how swashbuckling you are: some folks say start with the easiest components to move to containers and learn as you migrate components over. The alternative is to figure out what would be most difficult to move, plan out that migration in detail, and then take the learnings from that work to make all the other migrations easier. The latter is probably the best way but requires a larger investment of effort up front.

Question 4: What do you mean by anomaly detection for containers?

Kinsella: “Anomaly detection” is a phrase we throw around in the information security industry to refer to technology that has an expectation of what an application (or server) should be doing, and then responds somehow (alerting or taking action) when it determines something is amiss. When this is done at a network or OS level, there’s so many things happening simultaneously that it can be difficult to accurately determine what is legitimate versus malicious, resulting in what are called “false positives.”

One “best practice” for container computing is to run a single process within the container. From a security point of view, this is neat because, from an anomaly detection standpoint, the signal-to-noise ratio is much better. What types of anomalies are being monitored for? They could be network or file related, or maybe even what actions or OS calls the process is attempting to execute. We can focus specifically on what each container should be doing and keep it within a much narrower boundary for what we consider anomalous behavior.

Question 5: How could one go and set up containers in a home lab? Any tips? Would like to have a simpler answer for some of my colleagues. I’m fairly new to it myself so I can’t give a simple answer.

Kinsella: Step one: Make sure your lab machines are running a patched, modern OS (released within the last 12 months).

Step two: Head over to http://training.docker.com/self-paced-training and follow their self-paced training. You’ll be running containers within the hour! I’m sure lxd, rkt, etc. have some form of training, but so far Docker has done the best job of making this technology easy for new users to adopt.

Question 6: You mentioned using Alpine Linux. How does musl compare with glibc?

Kinsella: musl is pretty cool! I’ve glanced over the source — it’s so much cleaner than glibc! As a modern rewrite, it probably doesn’t have 100 percent compatibility with glibc, which has support for many CPU architectures and operating systems. I haven’t run into any troubles with it yet, personally, but my use is still minimal. Definitely looking to change that!

Question 7: Are you familiar with OpenVZ? If so, what would you think could be the biggest concern while running an environment with multiple nodes with hundreds of containers?

Kinsella: Definitely — OpenVZ has been around for quite a while. Historically, the question was “Which is more secure — Xen/KVM or OpenVZ?” and the answer was always Xen/KVM, as they provide each guest VM with hardware-virtualized resources. That said, there have been very few security vulnerabilities discovered in OpenVZ over its lifetime.

Compared to other forms of containers, I’d put OpenVZ at a similar level of risk. As it’s older, its codebase should be more mature with fewer bugs. On the other hand, since Docker is so popular, more people will be trying to compromise it, so the chance of finding a vulnerability is higher. A little bit of security-through-obscurity, there. In general, though, I’d go through a similar process of understanding the technology and what is exposed and susceptible to compromise. For both, the most common vector will probably be compromising an app in a container, then trying to burrow through the “walls” of the container. What that means is you’re really trying to defend against local kernel-level exploits: keep up to date and be aware of new vulnerability announcements for software that you use.

John Kinsella is the Founder and CTO of Layered Insight, a container security startup based in San Francisco, California. His nearly 20-year background includes security and network consulting, software development, and datacenter operations. John is on the board of directors for the Silicon Valley chapter of the Cloud Security Alliance and has long been active in open source projects, including recently as a contributor and a member of the PMC and security team for Apache CloudStack.

Check out all the upcoming webinars from The Linux Foundation.

Watch open source leaders, entrepreneurs, developers, and IT operations experts speak live next week, Oct. 4-6, 2016, at LinuxCon and ContainerCon Europe in Berlin. The Linux Foundation will provide live streaming video of all the event’s keynotes for those who can’t attend.

Sign up for the free streaming video.

The keynote speakers will focus on the technologies and trends having the biggest impact on open source development today, including containers, networking and IoT, as well as hardware, cloud applications, and the Linux kernel. See the full agenda of keynotes.

Tune into free live video streaming at 9 a.m. CET each day to watch keynotes with:

  • Jilayne Lovejoy, Principal Open Source Counsel, ARM

  • Solomon Hykes, Founder, CTO and Chief Product Officer, Docker

  • Brian Behlendorf, Executive Director, Hyperledger Project

  • Christopher Schlaeger, Director, Kernel and Operating Systems, Amazon Development Center Germany

  • Dan Kohn, Executive Director, Cloud Native Computing Foundation

  • Brandon Philips, CTO, CoreOS

  • Many more

Can’t catch the live stream next week? Don’t worry—if you register now, we’ll send out the recordings of keynotes after the conference ends!

You can also follow along on Twitter with the hashtag #linuxcon. Share the live streaming of keynotes with your friends and colleagues!