
In this preview of the Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered Docker installation, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Volumes.

This final article in the series looks at Docker Compose, a tool you can use to create multi-container applications with just one command. If you are using Docker for Mac or Windows, or have installed Docker Toolbox, then Docker Compose is already available. If not, you can download it manually.

To try out WordPress, for example, let’s create a folder called wordpress and, in that folder, create a file called docker-compose.yaml. We will be exposing the wordpress container on port 8000 of the host system.
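
A minimal sketch of such a Compose file is shown below. The db service, the mysql:5.7 image, and the passwords are illustrative assumptions rather than values from the course; only the wordpress service and the 8000:80 port mapping come from the text above.

```bash
mkdir -p wordpress && cd wordpress

# Write a two-service Compose file: a MySQL database and WordPress,
# with WordPress published on port 8000 of the host.
cat > docker-compose.yaml <<'EOF'
version: '2'

services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example      # illustrative password
      MYSQL_DATABASE: wordpress
    volumes:
      - db_data:/var/lib/mysql          # named volume for the database files

  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    ports:
      - "8000:80"                       # host port 8000 -> container port 80
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: example

volumes:
  db_data:
EOF
```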

When we start an application with Docker Compose, it creates a user-defined network and attaches the application’s containers to it. The containers communicate with each other over that network. Because we have configured Docker Machine to connect to our dockerhost, Docker Compose will use that host as well.

Now, with the docker-compose up command, we can deploy the application. With the docker-compose ps command, we can list the containers created by Docker Compose, and with docker-compose down, we can stop and remove them. This also removes the network associated with the application. To delete the associated volumes as well, we need to pass the -v option to the docker-compose down command.
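
A quick sketch of that workflow, run from the wordpress folder created above:

```bash
docker-compose up -d     # create the network, volumes, and containers in the background
docker-compose ps        # list the containers managed by this Compose file
docker-compose down      # stop and remove the containers and the network
docker-compose down -v   # additionally remove the volumes declared in the file
```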

 Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

This series previews the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we installed Docker, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Hub.

In this article, we’ll talk a bit about Docker Volumes and networking. To create a volume, we use the docker volume create command. And, to list the volumes, we use the docker volume list command.

To mount the volume inside a container, we need to use the -v option with the docker container run command. For example, we can mount the myvol volume inside the container at the /data location. After moving into the /data folder, we create two files there.

Next, we exit the container and create a new container from the busybox image, mounting the same myvol volume. The files we created in the earlier container are available under /data. This way, we can share content between containers using volumes. You can watch both of the videos below for details.
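
The whole flow looks roughly like this (busybox is used for both containers here, and the file names are illustrative choices):

```bash
# Create a named volume and list the volumes on the dockerhost.
docker volume create myvol
docker volume ls

# Mount myvol at /data inside a container and create two files there.
docker container run -it -v myvol:/data busybox sh
#   / # cd /data
#   /data # touch file1 file2
#   /data # exit

# A second container mounting the same volume sees the same files.
docker container run --rm -v myvol:/data busybox ls /data
```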

To review Docker networking, we first create a container from the nginx image. With the docker container inspect command, we can get the container’s IP address, but that address is assigned by the docker0 bridge and is not accessible from the external world.

To access the container from the external world, we need to map a host port to the container port. So, with the -p option added to the docker container run command, we can map a host port to a container port. For example, we can map port 8080 of the host system to port 80 of the container.

Once the port is mapped, we can reach the container by accessing the dockerhost on port 8080.
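
For example (the container names and the curl check are illustrative additions):

```bash
# Run nginx and read its IP address on the docker0 bridge; this address
# is reachable from the dockerhost itself but not from outside.
docker container run -d --name web nginx
docker container inspect --format '{{ .NetworkSettings.IPAddress }}' web

# Map host port 8080 to container port 80 so the outside world can reach nginx.
docker container run -d --name web-mapped -p 8080:80 nginx

# From outside, reach the container through the dockerhost's address.
curl http://<dockerhost-ip>:8080
```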

 Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

In this series previewing the self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered installing Docker, introduced Docker Machine, and reviewed some basic commands for performing Docker container and image operations. In the three sample videos below, we’ll take a look at Dockerfiles and Docker Hub.

Docker can build an image by reading the build instructions from a file that’s generally referred to as a Dockerfile. So, first, check your connectivity with the “dockerhost” and then create a folder called nginx. In that folder, create a file called Dockerfile, in which we use different instructions, like FROM, RUN, EXPOSE, and CMD.

To build an image, we use the docker build command. With the -t option, we can specify the image name, and with the “.” at the end, we tell Docker to use the current folder as the build context and look there for the Dockerfile.
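
A minimal sketch of such a Dockerfile and build follows; the base image, the package commands, and the mynginx tag are illustrative assumptions, and the file shown in the course video may differ:

```bash
mkdir -p nginx && cd nginx

# Write a Dockerfile that uses the FROM, RUN, EXPOSE, and CMD instructions.
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF

# Build the image; -t names it, and "." makes the current folder the build context.
docker build -t mynginx .
```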

On Docker Hub, we also see repositories — for example, for nginx, redis, and busybox. A given repository can have different tags, which identify the individual images. On the repository page, we can also see the Dockerfile from which an image is created — for example, you can see the Dockerfile of the nginx image.

If you don’t have an account on Docker Hub, I recommend creating one at this time. After logging in, you can see all the repositories we’ve created. Note that the repository name is prefixed with our username.

To push an image to Docker Hub, make sure that the image name is prefixed with the username used to log into the Docker Hub account. With the docker image push command, we can push the image to the Docker Registry, which by default is Docker Hub.
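
In practice, that looks something like the following sketch, where mynginx and <username> are placeholders rather than names from the course:

```bash
# Log in, tag the image with your Docker Hub username as the prefix, and push it.
docker login
docker image tag mynginx <username>/mynginx:latest
docker image push <username>/mynginx:latest
```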

Docker Hub has a feature called automated builds, which can trigger a build on Docker Hub as soon as you commit code to your GitHub repository. On GitHub, we have a repository called docker-automated-build, which contains the Dockerfile from which the image will be created. In a real-world example, we would have our application code alongside the Dockerfile.

To create the automated build, we need to first log into our Docker Hub account and then link our GitHub account with Docker Hub. Once the GitHub account is linked, we click on “Create” and then on “Create Automated Build.”

Next, we provide a short description and then click on “Create.” Then, we select the GitHub repository that we want to link with this Docker Hub automated build. Now, we can go to our GitHub repository and change something there. As soon as we commit the change, a Docker build will start on our Docker Hub account.

Our image build is currently queued; it will be scheduled eventually, and the image will be created. After that, anybody will be able to download it.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

In this series, we’re sharing a preview of the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we looked at installing Docker and setting up your environment, and we introduced Docker Machine. Now we’ll take a look at some basic commands for performing Docker container and image operations. Watch the videos below for more details.

To do container operations, we’ll first connect to our “dockerhost” with Docker Machine. Once connected, we can start a container in interactive mode and explore the processes inside it.

For example, the “docker container ls” command lists the running containers. With the “docker container inspect” command, we can inspect an individual container. Or, with the “docker container exec” command, we can fork a new process inside an already running container and do some operations. We can use the “docker container stop” command to stop a container and then remove a stopped container using the “docker container rm” command.
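
A compact sketch of those operations (the busybox and nginx images and the container name “web” are illustrative choices):

```bash
docker container run -it busybox sh        # start a container interactively (exit to leave)
docker container run -d --name web nginx   # start another container in the background
docker container ls                        # list running containers
docker container inspect web               # inspect an individual container
docker container exec web ls /etc/nginx    # fork a new process inside the running container
docker container stop web                  # stop the container
docker container rm web                    # remove the stopped container
```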

To do Docker image operations, again, we first make sure we are connected to our “dockerhost” with Docker Machine, so that all the Docker commands are executed on the “dockerhost” running on the DigitalOcean cloud.

The basic commands you need here are similar to above. With the “docker image ls” command, we can list the images available on our “dockerhost”. Using the “docker image pull” command, we can pull an image from our Docker Registry. And, we can remove an image from the “dockerhost” using the “docker image rm” command.
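
For example (alpine is just an illustrative image name):

```bash
docker image ls            # list the images available on the dockerhost
docker image pull alpine   # pull an image from the registry (Docker Hub by default)
docker image rm alpine     # remove the image from the dockerhost
```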

Want to learn more? Access all the free sample chapter videos now! 

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

In this series, we’re taking a preview look at the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation.

In the first article, we talked about installing Docker and setting up your environment. You’ll need Docker installed to work along with the examples, so be sure to get that out of the way first. The first video below provides a quick overview of terms and concepts you’ll learn.

In this part, we’ll describe how to get started with Docker Machine.

Or, access all the sample videos now!

Docker has a client-server architecture, in which the Client sends commands to the Docker Host, which runs the Docker Daemon. The Client and the Docker Host can be on the same machine, or the Client can communicate with any Docker Host running anywhere, as long as it can reach and access the Docker Daemon.

The Docker Client and the Docker Daemon communicate over REST APIs, even on the same system. Docker Machine is one tool that can help you manage Docker Daemons running on different systems from your local workstation.

If you are using Docker for Mac or Windows, or have installed Docker Toolbox, then Docker Machine will be available on your workstation automatically. With Docker Machine, we will deploy an instance on DigitalOcean and install Docker on it. For that, we first create an API key on DigitalOcean, with which we can programmatically deploy an instance there.

After getting the token, we export it in an environment variable called “DO_TOKEN”, which we then use on the “docker-machine” command line, where we specify the “digitalocean” driver and create an instance called “dockerhost”.

Docker Machine will then create an instance on DigitalOcean, install Docker on it, and configure secure access between the Docker Daemon running on the “dockerhost” and the Docker Client on our workstation. Next, we can use the “docker-machine env” command with our new host, “dockerhost”, to find the parameters with which the Docker Client can connect to the remote Docker Daemon.

With the “eval” command, you can export all the environment variables for your “dockerhost” into your shell. After you export the environment variables, the Docker Client on your workstation will connect directly to the DigitalOcean instance and run the commands there. The videos below provide additional details.
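
Put together, the workflow looks roughly like this (the token value is a placeholder you would replace with your own DigitalOcean API token):

```bash
# Export the DigitalOcean API token and create a Docker host called "dockerhost".
export DO_TOKEN=<your-digitalocean-api-token>
docker-machine create --driver digitalocean \
    --digitalocean-access-token "$DO_TOKEN" dockerhost

# Print the connection parameters for the remote Docker Daemon,
# then export them into the current shell.
docker-machine env dockerhost
eval $(docker-machine env dockerhost)

# From now on, the local Docker Client talks to the Daemon on DigitalOcean.
docker version
```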

In the next article, we will look at some Docker container operations.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Watch the sample videos here for more details:

Want to learn more? Access all the free sample chapter videos now!

Containers are becoming the de facto approach for deploying applications, because they are easy to use and cost-effective. With containers, you can significantly cut down the time to go to market if the entire team responsible for the application lifecycle is involved — whether they are developers, Quality Assurance engineers, or Ops engineers.

The new Containers for Developers and Quality Assurance (LFS254) self-paced course from The Linux Foundation is designed for developers and Quality Assurance engineers who are interested in learning the workflow of an application with Docker. In this self-paced course, we will quickly review some Docker basics, including installation, and then, with the help of a sample application, we will walk through the lifecycle of that application with Docker.

The online course is presented almost entirely on video and some of the topics covered in this course preview include:

  • Overview and Installation

  • Docker Machine

  • Docker Container and Image Operations

  • Dockerfiles and Docker Hub

  • Docker Volumes and Networking

  • Docker Compose

Access a free sample chapter

In the course, we focus on creating an end-to-end workflow for our application — from development to production. We’ll use Docker as our primary container environment and Jenkins as our primary CI/CD tool. All of the Docker hosts used in this course will be deployed on the cloud (DigitalOcean).

Install Docker

You’ll need to have Docker installed in order to work along with the course materials. All of Docker’s free products come under the Docker Community Edition. They’re offered in two variants: edge and stable. All of the enterprise and production-ready products come under the Docker Enterprise Edition umbrella.

And, you can download all the Docker products from the Docker Store. For this course, we will be using the Community Edition, so click on “GET DOCKER CE” to proceed further. If you select “Linux” in the “Operating Systems” section, you’ll see that Docker is available for all the major Linux distributions, like CentOS, Ubuntu, and Fedora. It’s also available for Mac and Windows.
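
On a Linux machine, one common shortcut is Docker’s convenience script; this is only a sketch of that route, while the course itself walks through the Docker Store pages:

```bash
# Download and run Docker's convenience install script, then verify the install.
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
docker version
```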

This preview series is intended to give you a sample of the course format and quality of the content, which is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Watch the sample videos to learn more:  

Want to learn more? Access all the free sample chapter videos now!

This article series previews the new Containers Fundamentals training course from The Linux Foundation, which is designed for those who are new to container technologies. In previous excerpts, we talked about what containers are and what they’re not and explained a little of their history. In this last post of the series, we will look at the building blocks for containers, specifically, namespaces, control groups, and UnionFS.

Namespaces are a feature of the Linux kernel that isolates and virtualizes system resources for a process, so that each process gets its own set of resources, like its own IP address, hostname, etc. System resources that can be virtualized are: mount [mnt], process ID [PID], network [net], Interprocess Communication [IPC], hostname [UTS], and user IDs [user].

Using the namespace feature of the Linux kernel, we can isolate one process from another. To the kernel, a container is nothing but a process, so we isolate each container using different namespaces.
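
As a small illustration (not from the course itself), the unshare utility from util-linux can place a shell in its own UTS namespace, so a hostname change inside it does not affect the host:

```bash
# The hostname change is visible only inside the new UTS namespace.
sudo unshare --uts --fork bash -c 'hostname demo-container; hostname'
hostname   # the host's hostname is unchanged
```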

Another important feature that enables containerization is control groups (cgroups). With control groups, we can limit, account for, and isolate resource usage, like CPU, memory, disk, and network. And, with UnionFS, we can transparently overlay two or more directories and implement a layered approach for containers.
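
Two quick illustrations of these building blocks (the limits, directory names, and image are assumptions for the sketch): Docker exposes control groups through resource flags, and the kernel’s overlay filesystem is one UnionFS implementation.

```bash
# Control groups: cap a container at 256 MB of RAM and half a CPU core.
docker container run -d --memory 256m --cpus 0.5 --name limited nginx

# UnionFS: overlay a writable "upper" directory on a read-only "lower" one.
mkdir -p /tmp/lower /tmp/upper /tmp/work /tmp/merged
sudo mount -t overlay overlay \
    -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work /tmp/merged
```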

You can get more details in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

In previous excerpts of the new, self-paced Containers Fundamentals course from The Linux Foundation, we discussed what containers are and are not. Here, we’ll take a brief look at the history of containers, which includes chroot, FreeBSD jails, Solaris zones, and systemd-nspawn. 

Chroot was first introduced in 1979, during development of Seventh Edition Unix (also called Version 7), and was added to BSD in 1982. In 2000, FreeBSD extended chroot to FreeBSD Jails. Then, in the early 2000s, Solaris introduced the concept of zones, which virtualized the operating system services.

With chroot, you can change the apparent root directory for the currently running process and its children. After a chroot, subsequent commands run with respect to the new root (/). Chroot confines processes only at the filesystem level; they still share other resources, like users, hostname, IP address, etc. FreeBSD Jails extended the chroot model by also virtualizing users, the network subsystem, and more.
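
A minimal chroot sketch (assuming a statically linked busybox is installed on the host; the directory name is arbitrary):

```bash
# Build a tiny root filesystem with just a busybox shell, then chroot into it;
# inside that shell, "/" refers to /tmp/newroot.
mkdir -p /tmp/newroot/bin
cp "$(which busybox)" /tmp/newroot/bin/
sudo chroot /tmp/newroot /bin/busybox sh
```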

systemd-nspawn has not been around as long as chroot and Jails, but it can be used to create containers, which are then managed by systemd. On modern Linux operating systems, systemd is used as the init system to bootstrap user space and subsequently manage all processes.
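
For example, a minimal systemd-nspawn container might be created like this (assuming debootstrap is installed; the directory name is an illustrative choice):

```bash
# Bootstrap a minimal Debian tree, then run a shell inside it as a container.
sudo debootstrap stable /var/lib/machines/demo
sudo systemd-nspawn -D /var/lib/machines/demo

# Or boot it with its own init so it is managed by systemd and machinectl.
sudo systemd-nspawn -bD /var/lib/machines/demo
```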

This training course, presented mainly in video format, is aimed at those who are new to containers and covers the basics of container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more.

You can learn more in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!

This series provides a preview of the new, self-paced Containers Fundamentals course from The Linux Foundation, which is designed for those who are new to container technologies. The course covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In the first excerpt, we defined what containers are, and in this installment, we’ll explain a bit further. You can also sign up to access all the free sample chapter videos now.

Note that containers are not lightweight VMs. Both provide isolation and run applications, but the underlying technologies are completely different, as are the processes for managing them.

VMs are created on top of a hypervisor, which is installed on the host operating system. Containers run directly on the host operating system, without any guest OS of their own. The host operating system provides isolation and allocates resources to the individual containers.

Once you become familiar with containers and would like to deploy them in production, you might ask, “Where should I deploy my containers — on VMs, bare metal, in the cloud?” From the container’s perspective, it does not matter, as containers can run anywhere. But in reality, many variables affect the decision, such as cost, performance, security, current skill set, and so on.

Find out more in these sample course videos below, taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!

In this series, we’ll provide a preview of the new Containers Fundamentals (LFS253) course from The Linux Foundation. The course is designed for those who are new to container technologies, and it covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In this installment, we start from the basics. You can also sign up to access all the free sample chapter videos now.

What Are Containers?

In today’s world, developers, quality assurance engineers, and everyone involved in the application lifecycle are listening to customer feedback and striving to implement the requested features as soon as possible.

Containers are an application-centric way to deliver high-performing, scalable applications on the infrastructure of your choice by bundling the application code, the application runtime, and the libraries.

Additionally, using containers with microservices makes a lot of sense, because you can do rapid development and deployment with confidence. With containers, you can also build an immutable infrastructure for your deployments. If something goes wrong with the new changes, you can simply return to the previously known working state.

This self-paced course — taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook — is provided almost entirely in video format. This video from chapter 1 gives an overview of containers.

Want to learn more? Access all the free sample chapter videos now!