
This series previews the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we installed Docker, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Hub.

In this article, we’ll talk a bit about Docker Volumes and networking. To create a volume, we use the docker volume create command, and to list volumes, we use the docker volume list command.
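
For example, creating and then listing the myvol volume used in the demo below might look like this:

    # Create a named volume, then list all volumes
    $ docker volume create myvol
    $ docker volume ls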

To mount the volume inside a container, we need to use the -v option with the docker container run command. For example, we can mount the myvol volume inside the container at the /data location. After moving into the /data folder, we create two files there.

Next, we exit the container and create a new container from the busybox image, this time mounting the same myvol volume. The files that we created in the earlier container are available under /data. This way, we can share content between containers using volumes. You can watch both of the videos below for details.
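
As a minimal sketch of that flow (the file names file1 and file2 are just placeholders):

    # First container: mount myvol at /data and create two files there
    $ docker container run -it -v myvol:/data busybox sh
    / # touch /data/file1 /data/file2
    / # exit

    # Second container: the same volume already contains those files
    $ docker container run --rm -v myvol:/data busybox ls /data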

To review Docker networking, we first create a container from the nginx image. With the docker container inspect command, we can get the container’s IP address, but that IP address is assigned by the docker0 bridge and is not accessible from the external world.
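
For instance (the container name web is just an illustration):

    # Run nginx and read the IP address assigned by the docker0 bridge
    $ docker container run -d --name web nginx
    $ docker container inspect web --format '{{ .NetworkSettings.IPAddress }}'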

To access the container from the external world, we need to do port mapping between the host port and the container port. So, with the -p option added to the docker container run command, we can map the host port with the container port. For example, we can map Port 8080 of the host system with Port 80 of the container.

Once the port is mapped, we can access the container from the external world by connecting to the dockerhost on Port 8080.
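
A quick sketch of the mapping and the external access (replace <dockerhost-IP> with your host’s address):

    # Map host Port 8080 to container Port 80
    $ docker container run -d -p 8080:80 nginx
    # From any machine that can reach the dockerhost:
    $ curl http://<dockerhost-IP>:8080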

Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

In this series previewing the self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered installing Docker, introduced Docker Machine, and reviewed some basic commands for performing Docker container and image operations. In the three sample videos below, we’ll take a look at Dockerfiles and Docker Hub.

Docker can build an image by reading the build instructions from a file that’s generally referred to as a Dockerfile. So, first, check your connectivity with the “dockerhost” and then create a folder called nginx. In that folder, we create a file called Dockerfile, in which we use different instructions, like FROM, RUN, EXPOSE, and CMD.
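
The exact file used in the course is shown in the videos; as a rough sketch, a Dockerfile using those four instructions could look like this:

    # Start from the official nginx image
    FROM nginx
    # Replace the default index page
    RUN echo "Hello from a custom image" > /usr/share/nginx/html/index.html
    # Document the port the web server listens on
    EXPOSE 80
    # Run nginx in the foreground
    CMD ["nginx", "-g", "daemon off;"]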

To build an image, we’ll need to use the docker build command. With the -t option, we can specify the image name, and with the “.” at the end, we ask Docker to look in the current folder for the Dockerfile and then build the image.
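
For example (nginx-custom is a placeholder image name):

    # Build an image from the Dockerfile in the current folder
    $ docker build -t nginx-custom .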

On Docker Hub, we also see the repositories, for example, for nginx, redis, and busybox. A given repository can have different tags, which identify individual images. On the repository page, we can also see the respective Dockerfile from which an image is created; for example, you can see the Dockerfile of the nginx image.

If you don’t have an account on Docker Hub, I recommend creating one at this time. After logging in, you can see all the repositories we’ve created. Note that the repository name is prefixed with our username.

To push an image to Docker Hub, make sure that the image name is prefixed with the username used to log into the Docker Hub account. With the docker image push command, we can push the image to the Docker Registry, which would, by default, go to the Docker Hub.  
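
A sketch of the push workflow, with <username> standing in for your Docker Hub username:

    # Tag the image with your Docker Hub username, log in, and push
    $ docker image tag nginx-custom <username>/nginx-custom
    $ docker login
    $ docker image push <username>/nginx-custom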

Docker Hub has a feature called automated builds, which can trigger a build on Docker Hub as soon as you commit code to your GitHub repository. On GitHub, we have a repository called docker-automated-build, which contains the Dockerfile from which the image will be created. In a real-world example, the repository would also contain our application code alongside the Dockerfile.

To create the automated build, we need to first log into our Docker Hub account and then link our GitHub account with Docker Hub. Once the GitHub account is linked, we click on “Create” and then on “Create Automated Build.”

Next, we provide a short description and then click on “Create.” Then, we select the GitHub repository that we want to link with this Docker Hub automated build. Now, we can go to our GitHub repository and change something there. As soon as we commit the change, a Docker build process will start on our Docker Hub account.

The image build is queued at first; once it is scheduled and completes, the image will be available for anybody to download.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

In this series, we’re taking a preview look at the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation.

In the first article, we talked about installing Docker and setting up your environment. You’ll need Docker installed to work along with the examples, so be sure to get that out of the way first. The first video below provides a quick overview of terms and concepts you’ll learn.

In this part, we’ll describe how to get started with Docker Machine.

Or, access all the sample videos now!

Docker has a client-server architecture, in which the Client sends commands to the Docker Host, which runs the Docker Daemon. The Client and the Docker Host can be on the same machine, or the Client can communicate with any Docker Host running anywhere, as long as it can reach and access the Docker Daemon.

The Docker Client and the Docker Daemon communicate over REST APIs, even on the same system. One tool that can help you manage Docker Daemons running on different systems from your local workstation is Docker Machine.
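
You can see this REST API directly. On a Linux host where the Daemon listens on the default Unix socket, a version query might look like this (assuming a curl build with Unix-socket support):

    # Query the local Docker Daemon over its REST API
    $ curl --unix-socket /var/run/docker.sock http://localhost/version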

If you are using Docker for Mac or Windows, or have installed Docker Toolbox, then Docker Machine is already available on your workstation. With Docker Machine, we will deploy an instance on DigitalOcean and install Docker on it. For that, we first create an API token on DigitalOcean, with which we can programmatically deploy the instance.

After getting the token, we export it in an environment variable called “DO_TOKEN”, which we then use on the “docker-machine” command line with the “digitalocean” driver to create an instance called “dockerhost”.
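
Put together, the commands look roughly like this (substitute your own token):

    # Export the DigitalOcean API token and create a Docker host on DigitalOcean
    $ export DO_TOKEN=<your-DigitalOcean-token>
    $ docker-machine create --driver digitalocean \
        --digitalocean-access-token $DO_TOKEN dockerhost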

Docker Machine will then create an instance on DigitalOcean, install Docker on it, and configure secure access between the Docker Daemon running on the “dockerhost” and the Docker Client on your workstation. Next, you can use the “docker-machine env” command with the new host, “dockerhost”, to find the parameters with which your Docker Client can connect to the remote Docker Daemon.

With the “eval” command, you can export all the environment variables for your “dockerhost” into your shell. After you export them, the Docker Client on your workstation will connect directly to the DigitalOcean instance and run the commands there. The videos below provide additional details.
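
For example:

    # Show the connection parameters for the remote Daemon
    $ docker-machine env dockerhost
    # Export them into the current shell
    $ eval $(docker-machine env dockerhost)
    # Docker commands now run against the DigitalOcean instance
    $ docker version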

In the next article, we will look at some Docker container operations.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Watch the sample videos here for more details:

Want to learn more? Access all the free sample chapter videos now!

Containers are becoming the de facto approach for deploying applications because they are easy to use and cost-effective. With containers, you can significantly cut down your time to market if the entire team responsible for the application lifecycle is involved, whether they are developers, Quality Assurance engineers, or Ops engineers.

The new Containers for Developers and Quality Assurance (LFS254) self-paced course from The Linux Foundation is designed for developers and Quality Assurance engineers who are interested in learning the workflow of an application with Docker. In this self-paced course, we will quickly review some Docker basics, including installation, and then, with the help of a sample application, we will walk through the lifecycle of that application with Docker.

The online course is presented almost entirely on video, and some of the topics covered in this course preview include:

  • Overview and Installation

  • Docker Machine

  • Docker Container and Image Operations

  • Dockerfiles and Docker Hub

  • Docker Volumes and Networking

  • Docker Compose

Access a free sample chapter

In the course, we focus on creating an end-to-end workflow for our application — from development to production. We’ll use Docker as our primary container environment and Jenkins as our primary CI/CD tool. All of the Docker hosts used in this course will be deployed on the cloud (DigitalOcean).

Install Docker

You’ll need to have Docker installed in order to work along with the course materials. All of Docker’s free products come under the Docker Community Edition. They’re offered in two variants: edge and stable. All of the enterprise and production-ready products come under the Docker Enterprise Edition umbrella.

You can download all of the Docker products from the Docker Store. For this course, we will be using the Community Edition, so click on “GET DOCKER CE” to proceed. If you select “Linux” in the “Operating Systems” section, you’ll see that Docker is available on all the major Linux distributions, like CentOS, Ubuntu, and Fedora. It’s also available for Mac and Windows.
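
On a Linux machine, one common route is Docker’s convenience script (a sketch; follow the Docker Store instructions for your own platform):

    # Download and run Docker's installation script, then verify
    $ curl -fsSL https://get.docker.com -o get-docker.sh
    $ sh get-docker.sh
    $ docker version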

This preview series is intended to give you a sample of the course format and quality of the content, which is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Watch the sample videos to learn more:  

Want to learn more? Access all the free sample chapter videos now!

This series provides a preview of the new, self-paced Containers Fundamentals course from The Linux Foundation, which is designed for those who are new to container technologies. The course covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In the first excerpt, we defined what containers are, and in this installment, we’ll explain a bit further. You can also sign up to access all the free sample chapter videos now.

Note that containers are not lightweight VMs. Both provide isolation and run applications, but the underlying technologies are completely different, and so is the process of managing them.

VMs are created on top of a hypervisor, which is installed on the host operating system. Containers run directly on the host operating system, without any guest OS of their own. The host operating system provides isolation and allocates resources to the individual containers.

Once you become familiar with containers and would like to deploy them in production, you might ask, “Where should I deploy my containers: on VMs, on bare metal, or in the cloud?” From the container’s perspective, it does not matter, as it can run anywhere. In reality, though, many variables affect the decision, such as cost, performance, security, and the team’s current skill set.

Find out more in these sample course videos below, taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!