Posts

In this preview of the Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered Docker installation, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Volumes.

This final article in the series looks at Docker Compose, a tool you can use to create multi-container applications with just one command. If you are using Docker for Mac or Windows, or if you have installed the Docker Toolbox, then Docker Compose is available by default. If not, you can download it manually.

To try out WordPress, for example, let’s create a folder called wordpress and, in that folder, create a file called docker-compose.yaml. We will be exposing the wordpress container on port 8000 of the host system.
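For reference, here is a minimal sketch of such a docker-compose.yaml. The MySQL backing service, the passwords, and the volume name are illustrative choices, not necessarily the course’s exact file:

```yaml
version: "2"

services:
  wordpress:
    image: wordpress
    ports:
      - "8000:80"        # expose the wordpress container on port 8000 of the host
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: example   # illustrative password
    depends_on:
      - db

  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example     # must match WORDPRESS_DB_PASSWORD above
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data:
```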

When we start an application with Docker Compose, Compose creates a user-defined network and attaches the application’s containers to it; the containers communicate over that network. Because we have configured Docker Machine to connect to our dockerhost, Docker Compose will use that connection as well.

Now, with the docker-compose up command, we can deploy the application. With the docker-compose ps command, we can list the containers created by Docker Compose, and with docker-compose down, we can stop and remove them. This also removes the network associated with the application. To additionally delete the associated volumes, we need to pass the -v option to docker-compose down.
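As a quick sketch, run from the wordpress folder created above:

```bash
docker-compose up -d    # create the app's network and start the containers
docker network ls       # the user-defined network created by Compose appears here
docker-compose ps       # list the containers managed by Compose
docker-compose down     # stop and remove the containers and the network
docker-compose down -v  # as above, but also remove the associated volumes
```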

Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

This series previews the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we installed Docker, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Hub.

In this article, we’ll talk a bit about Docker Volumes and networking. To create a volume, we use the docker volume create command, and to list the volumes, we use the docker volume ls command.

To mount the volume inside a container, we need to use the -v option with the docker container run command. For example, we can mount the myvol volume inside the container at the /data location. After moving into the /data folder, we create two files there.

Next, we come out of the container and create a new container from the busybox image, mounting the same myvol volume. The files that we created in the earlier container are available under /data. In this way, we can share content between containers using volumes, as sketched below. You can also watch both of the videos for details.
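Here is a minimal sketch of that volume workflow; the file names are illustrative:

```bash
docker volume create myvol                 # create a named volume
docker volume ls                           # list volumes

# Mount myvol at /data, then create two files there:
docker container run -it -v myvol:/data busybox sh
# (inside the container) cd /data && touch file1 file2 && exit

# A second container mounting the same volume sees the same files:
docker container run --rm -v myvol:/data busybox ls /data
```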

To review Docker networking, we first create a container from the nginx image. With the docker container inspect command, we can get the container’s IP address, but that address is assigned by the default docker0 bridge and is not reachable from the outside world.
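For example, assuming a container named web (the name is illustrative):

```bash
docker container run -d --name web nginx
docker container inspect --format '{{ .NetworkSettings.IPAddress }}' web
```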

To access the container from the outside world, we need to map a host port to the container’s port. With the -p option added to the docker container run command, we can create this mapping. For example, we can map port 8080 of the host system to port 80 of the container.

Once the port is mapped, we can reach the container from outside by connecting to the dockerhost on port 8080.
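A sketch of the mapping, reusing the dockerhost created with Docker Machine earlier in this series:

```bash
# Map port 8080 of the host to port 80 of the container:
docker container run -d --name web2 -p 8080:80 nginx

# Reach the container through the dockerhost's IP on port 8080:
curl http://$(docker-machine ip dockerhost):8080
```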

Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

In this series, we’re taking a preview look at the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation.

In the first article, we talked about installing Docker and setting up your environment. You’ll need Docker installed to work along with the examples, so be sure to get that out of the way first. The first video below provides a quick overview of terms and concepts you’ll learn.

In this part, we’ll describe how to get started with Docker Machine.

Or, access all the sample videos now!

Docker has a client-server architecture, in which the Client sends commands to the Docker Host, which runs the Docker Daemon. Both the Client and the Docker Host can be on the same machine, or the Client can communicate with any Docker Host running anywhere, as long as it can reach the Docker Daemon.

The Docker Client and the Docker Daemon communicate over a REST API, even when they are on the same system. One tool that can help you manage Docker Daemons running on different systems from your local workstation is Docker Machine.
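As an aside, you can see that REST API in action on a Linux host where the daemon listens on the default Unix socket:

```bash
# Ask the local Docker Daemon for its version over the REST API:
curl --unix-socket /var/run/docker.sock http://localhost/version
```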

If you are using Docker for Mac or Windows, or if you install the Docker Toolbox, then Docker Machine will be available on your workstation automatically. With Docker Machine, we will deploy an instance on DigitalOcean and install Docker on it. For that, we first create an API token from DigitalOcean, with which we can programmatically deploy an instance.

After getting the token, we export it in an environment variable called “DO_TOKEN”, which we then use on the “docker-machine” command line with the “digitalocean” driver to create an instance called “dockerhost”.
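In outline, that looks like the following (the token value is yours to fill in):

```bash
# Token generated from the DigitalOcean control panel:
export DO_TOKEN=<your-digitalocean-api-token>

# Create a DigitalOcean instance called dockerhost and install Docker on it:
docker-machine create \
  --driver digitalocean \
  --digitalocean-access-token "$DO_TOKEN" \
  dockerhost
```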

Docker Machine will then create an instance on DigitalOcean, install Docker on it, and configure secure access between the Docker Daemon running on the “dockerhost” and the client on our workstation. Next, we can use the “docker-machine env” command with the new host, “dockerhost”, to find the parameters with which the Docker Client can connect to the remote Docker Daemon.

With the “eval” command, you can export all the environment variables for “dockerhost” into your shell. After you export them, the Docker Client on your workstation will connect directly to the DigitalOcean instance and run commands there, as sketched below. The videos provide additional details.
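In a shell, that looks like this:

```bash
docker-machine env dockerhost          # show the variables the Docker Client needs
eval $(docker-machine env dockerhost)  # export them into the current shell

# Commands in this shell now run against the remote Docker Daemon:
docker version
```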

In the next article, we will look at some Docker container operations.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Watch the sample videos here for more details:

Want to learn more? Access all the free sample chapter videos now!

This series provides a preview of the new, self-paced Containers Fundamentals course from The Linux Foundation, which is designed for those who are new to container technologies. The course covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In the first excerpt, we defined what containers are, and in this installment, we’ll explain a bit further. You can also sign up to access all the free sample chapter videos now.

Note that containers are not lightweight VMs. Both provide isolation and run applications, but the underlying technologies are completely different, and so is the process of managing them.

VMs are created on top of a hypervisor, which is installed on the host operating system. Containers run directly on the host operating system, without any guest OS of their own. The host operating system provides isolation and allocates resources to the individual containers.

Once you become familiar with containers and would like to deploy them in production, you might ask, “Where should I deploy my containers — on VMs, bare metal, in the cloud?” From the container’s perspective, it does not matter, as containers can run anywhere. But in reality, many variables affect the decision, such as cost, performance, security, current skill set, and so on.

Find out more in these sample course videos below, taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!

In this series, we’ll provide a preview of the new Containers Fundamentals (LFS253) course from The Linux Foundation. The course is designed for those who are new to container technologies, and it covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In this installment, we start from the basics. You can also sign up to access all the free sample chapter videos now.

What Are Containers?

In today’s world, developers, quality assurance engineers, and everyone involved in the application lifecycle are listening to customer feedback and striving to implement the requested features as soon as possible.

Containers are an application-centric way to deliver high-performing, scalable applications on the infrastructure of your choice by bundling the application code, the application runtime, and the libraries.

Additionally, using containers with microservices makes a lot of sense, because you can do rapid development and deployment with confidence. With containers, you can also make each deployment a recorded step toward an immutable infrastructure. If something goes wrong with the new changes, you can simply return to the last known working state.

This self-paced course — taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook — is provided almost entirely in video format. This video from chapter 1 gives an overview of containers.

Want to learn more? Access all the free sample chapter videos now!

The Linux Foundation’s new Kubernetes training course is now available for developers and system administrators who want to learn container orchestration using this popular open source tool.

Kubernetes is quickly becoming the de facto standard to operate containerized applications at scale in the data center. As its popularity surges, so does demand for IT practitioners skilled in Kubernetes.

“Kubernetes is rapidly maturing in development tests and trials and within production settings, where its use has nearly tripled in the last eight months,” said Dan Kohn, executive director of the Cloud Native Computing Foundation.

Kubernetes Fundamentals (LFS258) is a self-paced, online course that teaches students how to use Kubernetes to manage their application infrastructure. Topics covered include:

  • Kubernetes architecture

  • Deployment

  • How to access the cluster

  • Tips and tricks

  • ConfigMaps, and more.

The course was developed by The Linux Foundation and the Cloud Native Computing Foundation, home of the Kubernetes open source project, so developers and admins will learn the technology straight from the source.

Students will learn the fundamentals needed to understand Kubernetes and get up to speed quickly to start building distributed applications that are scalable, fault-tolerant, and simple to manage.

The course distills key principles, such as Pods, Deployments, ReplicaSets, and Services, and gives students enough information to start using Kubernetes on their own. It’s also designed to work with a wide range of Linux distributions, so students will be able to apply the concepts they learn regardless of their distribution of choice.

LFS258 will also help prepare those planning to take the Kubernetes certification exam, which will launch later this year. Updates to the course are planned ahead of the exam’s launch and will be specifically designed to assist with exam preparation.

The course, which has been available for pre-registration since November, can now be started immediately. The $199 course fee provides unlimited access to all course content and labs for one year. Sign up now!

Dice and The Linux Foundation recently released an updated Open Source Jobs Report that examines trends in open source recruiting and job seeking. The report clearly shows that open source professionals are in demand and that those with open source experience have a strong advantage when seeking jobs in the tech industry. Additionally, 87 percent of hiring managers say it’s hard to find open source talent.

The Linux Foundation offers many training courses to help you take advantage of these growing job opportunities. The courses range from basic to advanced and offer essential open source knowledge that you can learn at your own pace or through instructor-led classes.

This article looks at some of the available training courses and other resources that can provide the skills needed to stay competitive in this hot open source job market.  

Networking Courses            

The Open Source Jobs Report highlighted networking as a leading emergent technology — with 21 percent of hiring managers saying that networking has the biggest impact on open source hiring. To build these required networking skills, here are some courses to consider.

Essentials of System Administration

This introductory course will teach you how to administer, configure, and upgrade Linux systems. You’ll learn all the tools and concepts necessary to efficiently build and manage a production Linux infrastructure, including networking, file system management, system monitoring, and performance tuning. This comprehensive, online, self-paced course also forms the basis for the Linux Foundation Certified System Administrator skillset.

Advanced Linux System Administration and Networking

The need for sysadmins with advanced administration and networking skills has never been greater. This course is designed for system administrators and IT professionals who need to gain hands-on knowledge of Linux network configuration and services, as well as related topics such as basic security and performance.

Software Defined Networking with OpenDaylight

Software Defined Networking (SDN) is a rapidly emerging technology that abstracts networking infrastructure away from the actual physical equipment. This course is designed for experienced network administrators who are either migrating to or already using SDN and/or OpenDaylight, and it provides an overview of the principles and methods upon which this technology is built.

Cloud Courses

Cloud technology experience is even more sought after than networking skills — with 51 percent of hiring managers stating that knowledge of OpenStack and CloudStack has a big impact on open source hiring decisions.

Introduction to Cloud Infrastructure Technologies

As companies increasingly rely on cloud products and services, it can be overwhelming to keep up with all the technologies that are available. This free, self-paced course will give you a fundamental understanding of today’s top open source cloud technology options.

Essentials of OpenStack Administration

OpenStack adoption is expanding rapidly, and there is high demand for individuals with experience managing this cloud platform. This instructor-led course will teach you everything you need to know to create and manage private and public clouds with OpenStack.

OpenStack Administration Fundamentals

This online, self-paced course will teach you what you need to know to administer private and public clouds with OpenStack. This course is also excellent preparation for the Certified OpenStack Administrator exam from the OpenStack Foundation.

Open Source Licensing and Compliance

A good working knowledge of open source licensing and compliance is critical when contributing to open source projects or integrating open source software into other projects. The Compliance Basics for Developers course teaches software developers why copyrights and licenses are important and explains how to add this information appropriately. This course also provides an overview of the various types of licenses to consider.    

Along with these — and many other — training courses, The Linux Foundation also offers free webinars and ebooks on various topics that can help you get started building your career in open source.

 


The Linux Foundation has launched a new self-paced, online course to help senior Linux sysadmins prepare for the advanced Linux Foundation Certified Engineer (LFCE) exam.

The Linux Networking and Administration (LFS211) course gives students access to 40-50 hours of coursework and more than 50 hands-on labs — practical experience that translates to real-world situations. Students who complete the course will come away with the knowledge and skills necessary to succeed as a senior Linux sysadmin and to pass the LFCE exam, which is included in the cost of the course.

The LFCE exam builds on the domains and competencies tested in the Linux Foundation Certified System Administrator (LFCS) exam. Sysadmins who pass the LFCE exam have a wider range and greater depth of skill than those certified at the LFCS level. Linux Foundation Certified Engineers are responsible for the design and implementation of system architecture and serve as subject matter experts and mentors for the next generation of system administration professionals.

Advance your career

With the tremendous growth in open source adoption across technology sectors, it is more important than ever for IT professionals to be proficient in Linux. Every major cloud platform, including OpenStack and Microsoft Azure, is now based on or runs on Linux. The type of training provided in this new course confers the knowledge and skills necessary to manage these systems.

Certification also carries an opportunity for career advancement, as more recruiters and employers seek certified job candidates and often verify job candidates’ skills with certification exams.

The 2016 Open Source Jobs Report, produced by The Linux Foundation and Dice, finds that 51 percent of hiring managers say hiring certified professionals is a priority for them, and 47 percent of open source professionals plan to take at least one certification exam this year.

Certifications are increasingly becoming the best way for professionals to differentiate from other job candidates and to demonstrate their ability to perform critical technical functions.

“More individuals and more employers are seeing the tremendous value in certifications, but it can be time-consuming and cost-prohibitive to prepare for them,” said Clyde Seepersad, Linux Foundation General Manager for Training. “The Linux Foundation strives to increase accessibility to quality training and certification for anyone, and offering advanced system administration training and certification that can be accessed anytime, anywhere, for a lower price than the industry standard helps to achieve that.”

Register now for LFS211 at the introductory price of $349, which includes one year of course access and a voucher to take the LFCE certification exam with one free retake. For more information on Linux Foundation training and certification programs, visit http://training.linuxfoundation.org.

 

With some studies showing that the majority of private cloud deployments are on OpenStack, it’s essential that today’s sysadmins and DevOps pros are proficient in this important technology. That’s why today at OpenStack Summit in Austin, the OpenStack Foundation announced the availability of the Certified OpenStack Administrator (COA) exam. Developed in partnership with The Linux Foundation, the exam is performance-based and available anytime, anywhere. It will enable professionals to demonstrate their OpenStack skills, employers to be confident that new hires are ready to roll, and existing employees to stay up to date on the latest advancements.

The Linux Foundation offers an OpenStack Administration Fundamentals course, which serves as preparation for the new certification. Starting today, that course is available bundled with the COA exam, enabling students to learn the skills they need to work as an OpenStack administrator and to get the certification to prove it. A distinctive feature of the course is that it provides each learner with a live OpenStack lab environment that can be rebooted at any time (to reduce the pain of troubleshooting what went wrong). Customers have access to the course and the lab environment for a full 12 months after purchase.

Like the exam, the course is available anytime, anywhere. It is online and self-paced, meaning students can learn on their own schedule, can skip sections they are already familiar with, and retake ones in which they need more preparation. Making this type of training and certification more accessible should help meet the growing demand for qualified OpenStack talent and provide ongoing career opportunities well into the future.