
This series previews the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we installed Docker, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Hub.

In this article, we’ll talk a bit about Docker Volumes and networking. To create a volume, we use the docker volume create command. And, to list the volumes, we use the docker volume ls command.
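For example, creating a volume named myvol (the name used in this article) and then listing it looks like this:

$ docker volume create myvol

$ docker volume ls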

To mount the volume inside a container, we need to use the -v option with the docker container run command. For example, we can mount the myvol volume inside the container at the /data location. After moving into the /data folder, we create two files there.
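A minimal sketch of those steps (assuming a busybox container, as in the next step; the file names are only illustrative):

$ docker container run -it -v myvol:/data busybox sh

/ # cd /data

/data # touch file1.txt file2.txt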

Next, we come out of the container and create a new container from the busybox image, mounting the same myvol volume. The files that we created in the earlier container are available under /data. This way, we can share content between containers using volumes. You can watch both of the videos below for details.
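As a quick illustration of this second step (same assumptions as above):

$ docker container run -it -v myvol:/data busybox sh

/ # ls /data

file1.txt  file2.txt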

To review Docker networking, we first create a container from the nginx image. With the docker container inspect command, we can get the container’s IP address, but that IP address is assigned by the docker0 bridge and is not accessible from the external world.
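For example (a sketch; the container name web and the IP address shown are illustrative):

$ docker container run -d --name web nginx

$ docker container inspect -f '{{ .NetworkSettings.IPAddress }}' web

172.17.0.2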

To access the container from the external world, we need to do port mapping between the host port and the container port. So, with the -p option added to the docker container run command, we can map a host port to a container port. For example, we can map Port 8080 of the host system to Port 80 of the container.
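For example:

$ docker container run -d -p 8080:80 nginx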

Once the port is mapped, we can access the container from the external world by connecting to the dockerhost on Port 8080.
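For example (a sketch; dockerhost stands for your Docker host’s name or IP address):

$ curl http://dockerhost:8080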

 Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

In this series previewing the self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered installing Docker, introduced Docker Machine, and reviewed some basic commands for performing Docker container and image operations. In the three sample videos below, we’ll take a look at Dockerfiles and Docker Hub.

Docker can build an image by reading the build instructions from a file that’s generally referred to as a Dockerfile. So, first, check your connectivity with the “dockerhost” and then create a folder called nginx. In that folder, we create a file called Dockerfile, in which we use different instructions, like FROM, RUN, EXPOSE, and CMD.
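A minimal sketch of such a Dockerfile (the actual file in the course may differ; this one installs and runs nginx on an Ubuntu base image):

FROM ubuntu
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]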

To build an image, we’ll need to use the docker build command. With the -t option, we can specify the image name, and with a “.” at the end, we are requesting Docker to look in the current folder for the Dockerfile and then build the image.
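For example (the image name nginx-custom is an assumption):

$ docker build -t nginx-custom .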

On Docker Hub, we also see repositories for images such as nginx, redis, and busybox. For a given repository, you can have different tags, which identify the individual images. On the repository page, we can also see the corresponding Dockerfile from which an image is created; for example, you can see the Dockerfile of the nginx image.

If you don’t have an account on Docker Hub, I recommend creating one at this time. After logging in, you can see all the repositories we’ve created. Note that the repository name is prefixed with our username.

To push an image to Docker Hub, make sure that the image name is prefixed with the username used to log into the Docker Hub account. With the docker image push command, we can push the image to a Docker registry; by default, it goes to Docker Hub.
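A sketch of that workflow (replace <username> with your Docker Hub username; nginx-custom is the assumed image name from above):

$ docker login

$ docker image tag nginx-custom <username>/nginx-custom

$ docker image push <username>/nginx-custom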

Docker Hub has a feature called automated builds, which can trigger a build on Docker Hub as soon as you push code to your GitHub repository. On GitHub, we have a repository called docker-automated-build, which contains the Dockerfile from which the image will be created. In a real-world example, the repository would contain our application code along with its Dockerfile.

To create the automated build, we need to first log into our Docker Hub account and then link our GitHub account with Docker Hub. Once the GitHub account is linked, we click on “Create” and then on “Create Automated Build.”

Next, we provide a short description and then click on “Create.” Then, we select the GitHub repository that we want to link with this Docker Hub automated build. Now, we can go to our GitHub repository and change something there. As soon as we commit the change, a Docker build process starts on our Docker Hub account.

Our image build is initially queued; it will be scheduled eventually, and the image will be created. After that, anybody will be able to download the image.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

In this series, we’re sharing a preview of the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we looked at installing Docker and setting up your environment, and we introduced Docker Machine. Now we’ll take a look at some basic commands for performing Docker container and image operations. Watch the videos below for more details.

To do container operations, we’ll first connect to our “dockerhost” with Docker Machine. Once connected, we can start a container in interactive mode and explore the processes inside it.
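For example (a sketch; dockerhost is the machine name used throughout this series):

$ eval $(docker-machine env dockerhost)

$ docker container run -it busybox sh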

For example, the “docker container ls” command lists the running containers. With the “docker container inspect” command, we can inspect an individual container. Or, with the “docker container exec” command, we can fork a new process inside an already running container and do some operations. We can use the “docker container stop” command to stop a container and then remove a stopped container using the “docker container rm” command.
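A quick sketch of that sequence (the container name mycontainer is only illustrative):

$ docker container ls

$ docker container inspect mycontainer

$ docker container exec -it mycontainer sh

$ docker container stop mycontainer

$ docker container rm mycontainer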

To do Docker image operations, again, we first make sure we are connected to our “dockerhost” with Docker Machine, so that all the Docker commands are executed on the “dockerhost” running on the DigitalOcean cloud.

The basic commands you need here are similar to above. With the “docker image ls” command, we can list the images available on our “dockerhost”. Using the “docker image pull” command, we can pull an image from our Docker Registry. And, we can remove an image from the “dockerhost” using the “docker image rm” command.
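For example (the alpine image is used here only as an illustration):

$ docker image ls

$ docker image pull alpine

$ docker image rm alpine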

Want to learn more? Access all the free sample chapter videos now! 

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

This week in open source and Linux news, Toyota’s 2018 Camry will feature an Automotive Grade Linux (AGL) infotainment system, older Raspberry Pi devices risk malware infection if they aren’t updated, and more. Read on!

1) Toyota has adopted the Automotive Grade Linux (AGL) platform for its infotainment systems. The 2018 Toyota Camry will be their first vehicle to have it preinstalled.

Toyota Moves to Automotive Grade Linux for Infotainment – BlackBerry Hits Back – IoTNews

2) Older Raspberry Pi devices may be more vulnerable to the malware if they haven’t been updated in a while.

Linux Malware Enslaves Raspberry Pi to Mine Cryptocurrency – ZDNet

3) Toyota has decided not to offer Apple CarPlay or Android Auto, favoring a Linux system instead. What will this mean for proprietary software fans?

Toyota owners to get Linux system instead of Apple CarPlay, Android Auto. Hooray? – The Car Connection

4) “[Red Hat Summit & OpenStack Summit] brought unique open source perspectives as a business and as a community.”

Red Hat Summit And OpenStack Summit: Two Weeks Of Open Source Software In Boston – Forbes

5) Eric S. Raymond has brought back Colossal Cave Adventure as an open source program.

One of the First Computer Games Is Born Again in Open Source – ZDNet

Containers are becoming the de facto approach for deploying applications, because they are easy to use and cost-effective. With containers, you can significantly cut down the time to go to market if the entire team responsible for the application lifecycle is involved — whether they are developers, Quality Assurance engineers, or Ops engineers.

The new Containers for Developers and Quality Assurance (LFS254) self-paced course from The Linux Foundation is designed for developers and Quality Assurance engineers who are interested in learning the workflow of an application with Docker. In this self-paced course, we will quickly review some Docker basics, including installation, and then, with the help of a sample application, we will walk through the lifecycle of that application with Docker.

The online course is presented almost entirely on video and some of the topics covered in this course preview include:

  • Overview and Installation

  • Docker Machine

  • Docker Container and Image Operations

  • Dockerfiles and Docker Hub

  • Docker Volumes and Networking

  • Docker Compose

Access a free sample chapter

In the course, we focus on creating an end-to-end workflow for our application — from development to production. We’ll use Docker as our primary container environment and Jenkins as our primary CI/CD tool. All of the Docker hosts used in this course will be deployed on the cloud (DigitalOcean).

Install Docker

You’ll need to have Docker installed in order to work along with the course materials. All of Docker’s free products come under the Docker Community Edition. They’re offered in two variants: edge and stable. All of the enterprise and production-ready products come under the Docker Enterprise Edition umbrella.

You can download all the Docker products from the Docker Store. For this course, we will be using the Community Edition, so click on “GET DOCKER CE” to proceed. If you select “Linux” in the “Operating Systems” section, you’ll see that Docker is available on all the major Linux distributions, like CentOS, Ubuntu, Fedora, and so on. It’s also available for Mac and Windows.

This preview series is intended to give you a sample of the course format and quality of the content, which is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Watch the sample videos to learn more:  

Want to learn more? Access all the free sample chapter videos now!

This article series previews the new Containers Fundamentals training course from The Linux Foundation, which is designed for those who are new to container technologies. In previous excerpts, we talked about what containers are and what they’re not and explained a little of their history. In this last post of the series, we will look at the building blocks for containers, specifically, namespaces, control groups, and UnionFS.

Namespaces are a feature of the Linux kernel that isolates and virtualizes system resources for a process, so that each process gets its own view of resources like its IP address, hostname, etc. The system resources that can be virtualized are: mount (mnt), process ID (pid), network (net), interprocess communication (ipc), hostname (uts), and users (user).

Using the namespace feature of the Linux kernel, we can isolate one process from another. The container is nothing but a process for the kernel, so we isolate each container using different namespaces.
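As a rough illustration (a sketch using the unshare utility from util-linux, which exposes these same kernel namespaces; it is not a command from the course):

$ sudo unshare --uts --pid --fork --mount-proc sh

Inside that shell, changing the hostname with the hostname command does not affect the host, and ps shows only the processes started within the new PID namespace.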

Another important feature that enables containerization is control groups. With control groups, we can limit, account for, and isolate resource usage, such as CPU, memory, disk I/O, and network. And, with UnionFS, we can transparently overlay two or more directories and implement a layered approach for containers.
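A hedged illustration of both features, using the cgroup v1 interface that was current at the time of this course (the demo name and the /tmp directories are assumptions, and the overlay directories must already exist):

$ sudo mkdir /sys/fs/cgroup/memory/demo

$ echo $((100*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes

$ echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs

$ sudo mount -t overlay overlay -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work /tmp/merged

The first three commands cap the current shell and its children at 100 MB of memory; the mount command overlays /tmp/upper on top of /tmp/lower and presents the merged view at /tmp/merged.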

You can get more details in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

In previous excerpts of the new, self-paced Containers Fundamentals course from The Linux Foundation, we discussed what containers are and are not. Here, we’ll take a brief look at the history of containers, which includes chroot, FreeBSD jails, Solaris zones, and systemd-nspawn. 

Chroot was first introduced in 1979, during development of Seventh Edition Unix (also called Version 7), and was added to BSD in 1982. In 2000, FreeBSD extended chroot to FreeBSD Jails. Then, in the early 2000s, Solaris introduced the concept of zones, which virtualized the operating system services.

With chroot, you can change the apparent root directory for the currently running process and its children. After configuring chroot, subsequent commands run with respect to the new root (/). Chroot isolates processes only at the filesystem level, though; they still share resources like users, hostname, and IP address. FreeBSD Jails extended the chroot model by also virtualizing users, the network subsystem, etc.
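For example (a sketch, assuming /srv/newroot already contains a minimal root filesystem, such as one built with debootstrap):

$ sudo chroot /srv/newroot /bin/sh

Every path the resulting shell and its children resolve is relative to /srv/newroot.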

systemd-nspawn has not been around as long as chroot and Jails, but it can be used to create containers that are managed by systemd. On modern Linux operating systems, systemd is used as the init system to bootstrap user space and manage all subsequent processes.
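For example (a sketch, reusing the assumed /srv/newroot filesystem from above):

$ sudo systemd-nspawn -D /srv/newroot

Unlike plain chroot, the resulting container gets its own process tree and is registered with systemd, so it can be inspected with tools such as machinectl.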

This training course, presented mainly in video format, is aimed at those who are new to containers and covers the basics of container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more.

You can learn more in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!

In this series, we’ll provide a preview of the new Containers Fundamentals (LFS253) course from The Linux Foundation. The course is designed for those who are new to container technologies, and it covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In this installment, we start from the basics. You can also sign up to access all the free sample chapter videos now.

What Are Containers?

In today’s world, developers, quality assurance engineers, and everyone involved in the application lifecycle are listening to customer feedback and striving to implement the requested features as soon as possible.

Containers are an application-centric way to deliver high-performing, scalable applications on the infrastructure of your choice by bundling the application code, the application runtime, and the libraries.

Additionally, using containers with microservices makes a lot of sense, because you can do rapid development and deployment with confidence. With containers, you can also build an immutable infrastructure in which every deployment is recorded. If something goes wrong with new changes, you can simply return to the previously known working state.

This self-paced course — taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook — is provided almost entirely in video format. This video from chapter 1 gives an overview of containers.

Want to learn more? Access all the free sample chapter videos now!

Hundreds of companies are now using the open source Prometheus monitoring solution in production, across industries ranging from telecommunications and cloud providers to video streaming and databases.

In advance of CloudNativeCon + KubeCon Europe 2017 to be held March 29-30 in Berlin, we talked to Brian Brazil, the founder of Robust Perception and one of the core developers of the Prometheus project, who will be giving a keynote on Prometheus at CloudNativeCon. Make sure to catch the full Prometheus track at the conference.

Linux.com: What makes monitoring more challenging in a Cloud Native environment?

Brian Brazil: Traditional monitoring tools come from a time when environments were static and machines and services were individually managed. By contrast, a Cloud Native environment is highly automated and dynamic, which requires a more sophisticated approach.

With a traditional setup there were a relatively small number of services, each with their own machine. Monitoring was on machine metrics such as CPU usage and free memory, which were the best way available to alert on user-facing issues. In a Cloud Native world, where many different services not only share machines, but the way in which they’re sharing them is in constant flux, such an approach is not scalable.

For example, with a mixed workload of user-facing and batch jobs, high CPU usage merely indicates that you’re getting good value for money out of your resources. It doesn’t necessarily indicate anything about the end-user experience. Thus, metrics like latency, failure ratios, and processing times from services spread across machines must be aggregated and then used for graphs and alerts.

In the same way that the move was made from manual management of machines and services to tools like Chef and now Kubernetes, we must make a similar transition in the monitoring space.

Linux.com: What are the advantages of Prometheus?

Brian Brazil: Prometheus was created with a dynamic cloud environment in mind. It has integrations with systems such as Kubernetes and EC2 that keep it up to date with what type of containers are running where, which is essential with the rate of change in a modern environment.

Prometheus client libraries allow you to instrument your applications for the metrics and KPIs that matter in your system. For third-party applications such as Cassandra, HAProxy, or MySQL, there’s a variety of exporters to expose their useful metrics.

The data Prometheus collects is enriched by labels. Labels are arbitrary key-value pairs that can be used, for example, to distinguish the development cluster from the production environment, or to break a metric out by HTTP endpoint.

The PromQL query language allows for aggregation based on these labels, calculation of 95th percentile latencies per container, service, or datacenter, forecasting, and any other math you’d care to do. What’s more: if you can graph it, you can alert on it. This gives you the power to alert on what really matters to you and your users, and helps eliminate those late-night alerts for non-issues.
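For example, a 95th percentile request latency per service in PromQL might look like this (a sketch; the metric name http_request_duration_seconds_bucket follows Prometheus naming conventions but is an assumption, as is the service label):

histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le, service))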

Linux.com: Are there things that catch new users off guard?

Brian Brazil: One common misunderstanding is the type of monitoring system that Prometheus is, and where it fits as part of your overall monitoring strategy.

Prometheus is metrics based, meaning it is designed to efficiently deal with numbers, such as how many HTTP requests you’ve served and their latency. What Prometheus is not is an event logging system, and it is thus not suitable for tracking the details of each individual HTTP request. By having both a metrics solution and an event logging solution (such as the ELK stack), you’ll cover a good range in terms of breadth and depth. Neither is sufficient on its own, due to the different engineering tradeoffs each must make.

Linux.com: What has the response to Prometheus been?

Brian Brazil: From its humble beginnings in 2012, when Prometheus had just two developers working on it part time, today in 2017 hundreds of developers have contributed to the Prometheus project itself. In addition, a rich ecosystem has grown up around it, with over 150 third-party integrations, and those are just the ones we know of.

There are hundreds of companies using Prometheus in production across all industries from telecommunications to cloud providers, video streaming to databases and startups to Fortune 500s. Since announcing 1.0 last year, the growth in users and the ecosystem has only accelerated.

Linux.com: Are there any talks in particular to watch out for at CloudNativeCon + KubeCon Europe?

Brian Brazil: For those who are used to more static environments, or just trying to reduce pager noise, Alerting in Cloud Native Environments by Fabian Reinartz of CoreOS is essential. If you’re already running Prometheus in a rapidly growing system, SoundCloud’s Björn Rabenstein, who wrote the current storage system, will cover what you need to know in Configuring Prometheus for High Performance.

For those on the development side, there’s a workshop on Prometheus Instrumentation that’ll take you from instrumenting your code all the way through visualising the results. My own talk on Counting in Prometheus is a deep dive into the deceptively simple sounding question of counting how many requests there were in the past hour, and how it really works in various monitoring systems.

Not everything is cloud native; Prometheus: The Unsung Heroes is a user story of how Prometheus can monitor infrastructure such as load balancers via SNMP. Finally, in Integrating Long-Term Storage with Prometheus, Julius Volz looks at the plans for one of our most sought-after pieces of future functionality.

All talks will be recorded, so if you aren’t lucky enough to attend in person, you can watch the talks later online.

CloudNativeCon + KubeCon Europe is almost sold out! Register now to secure your seat.

Start exploring Linux Security Fundamentals by downloading the free sample chapter today.

In last week’s tutorial, we tried out tcpdump and wireshark, two of the most useful tools for troubleshooting what is happening as network traffic is transmitted and received on the system.

nmap is another essential tool for troubleshooting and discovering information about the network and the services available in an environment. It is an active tool (in contrast to tcpdump and wireshark): it sends packets to remote systems in order to determine which applications are running and which services are offered by those systems.

Be sure to inform the network security team as well as obtain written permission from the owners and admins of the systems which you will be scanning with the nmap tool. In many environments, active scanning is considered an intrusion attempt.

The information gleaned from running nmap can provide clues as to whether or not a firewall is active in between your system and the target. nmap also indicates what the target operating system might be, based on fingerprints of the replies received from the target systems. Banners from remote services that are running may also be displayed by the nmap utility.

Set up your system

Access to the Linux Foundation’s lab environment is only possible for those enrolled in the course. However, we’ve created a standalone lab for this tutorial series that runs on any single machine or virtual machine and does not need the full lab setup. The best results are obtained by using “bridging” rather than “NAT” in your virtualization manager. Consult the documentation for your virtualization type (e.g., Oracle VirtualBox, VMware Workstation, and others) to verify or alter the networking connection type.

Start the exercise

First, let’s install nmap on your Linux machine.

For Red Hat, Fedora, and CentOS machines:

$ sudo yum install nmap
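For SUSE machines:

$ sudo zypper install nmap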

For Debian and Ubuntu machines:

$ sudo apt-get install nmap  

Next, explore the nmap man page.

$ man nmap

For the best results, run nmap as root or use sudo with the nmap command.

Now, we will run nmap on the localhost:

# nmap localhost 

Increase the information nmap acquires:

# nmap -sS -Pn -sV -O localhost

By adding the -A option to the nmap program, we can see the OS fingerprint detection capabilities of nmap:

# nmap -A localhost

A common usage for nmap is to perform a network ping scan; basically, ping all possible IP addresses in a subnet range in order to discover what IP addresses are currently in use. This is also sometimes referred to as network discovery.

# nmap -sP 192.168.0.0/24

Another interesting nmap command finds all the active IP addresses on a locally attached network:

# nmap -T4 -sP 192.168.0.0/24 1>/dev/null && grep -v "00:00:00:00:00:00" /proc/net/arp

Addressing for nmap is very flexible: DNS names, IP addresses, and IP ranges are all acceptable. Consult the man page for additional details.

We cover more uses for this tool later in the course. For now, have fun exploring the tool!

This concludes our six-part series on Linux Security Fundamentals. Download the entire sample chapter for the course or re-visit previous tutorials in this series, below.

Stay one step ahead of malicious hackers with The Linux Foundation’s Linux Security Fundamentals course. Download a sample chapter today!

Read the other articles in the series:

Linux Security Threats: The 7 Classes of Attackers

Linux Security Threats: Attack Sources and Types of Attacks

Linux Security Fundamentals Part 3: Risk Assessment / Trade-offs and Business Considerations

Linux Security Fundamentals: Estimating the Cost of a Cyber Attack

Linux Security Fundamentals Part 5: Introduction to tcpdump and wireshark