
Actor and online entrepreneur Joseph Gordon-Levitt will be speaking at Open Source Summit North America — Sept. 11-14 in Los Angeles, CA — about his experiences with collaborative technologies.

Gordon-Levitt, the founder and director of HITRECORD — an online production company that makes art collaboratively with more than half a million artists of all kinds — will share his views on the evolution of the Internet as a collaborative medium and offer some key technological lessons learned since the company’s launch.

Other new additions to the keynote lineup are:

  • Wim Coekaerts, Senior Vice President, Linux and Virtualization Engineering, Oracle

  • Chris Wright, Vice President & Chief Technologist, Office of Technology at Red Hat

Previously announced speakers include:

  • Linus Torvalds, Creator of Linux and Git, in conversation with Jim Zemlin, Executive Director of The Linux Foundation

  • Tanmay Bakshi, a 13-year-old Algorithm-ist and Cognitive Developer, Author and TEDx Speaker

  • Bindi Belanger, Executive Program Director, Ticketmaster

  • Christine Corbett Moran, NSF Astronomy and Astrophysics Postdoctoral Fellow, Caltech

  • Dan Lyons, FORTUNE Columnist and Bestselling Author of “Disrupted: My Misadventure in the Startup Bubble”

  • Jono Bacon, Community Manager, Author, Podcaster

  • Nir Eyal, Behavioral Designer and Bestselling Author of “Hooked: How to Build Habit Forming Products”

  • Ross Mauri, General Manager, IBM z Systems & LinuxONE, IBM

  • Zeynep Tufekci, Professor, New York Times Writer, Author and Technosociologist

The full lineup of Open Source Summit North America speakers and 200+ sessions can be viewed here.

Register by July 30th and save $150! Linux.com readers receive a special discount. Use LINUXRD5 to save an additional $47.

Do you use or contribute to open source technologies? Or, are you responsible for hiring open source professionals? If so, please take a minute to complete a short open source jobs survey from Dice and The Linux Foundation and make your voice heard.

During the past decade, open source development has experienced a massive shift, becoming a mainstay of the IT industry. Flexibility in accommodating new technologies and adapting to a changing market make open source software vital to modern companies, which are increasingly investing in open source talent.

To gather more information about the changing landscape and opportunities for developers, administrators, managers, and other open source professionals, Dice and The Linux Foundation have partnered to produce two open source jobs surveys — designed specifically for hiring managers and industry professionals.

Take the Hiring Managers Survey

Take the Professionals/Candidates Survey 

As a token of our appreciation, $2,000 in Amazon gift cards will be awarded to survey respondents selected at random after the closing date. Complete a survey for a chance to win one of ten $100 gift cards or one of two $500 gift cards.

The survey results will be compiled into the 2017 Open Source Jobs Report. This annual report evaluates the state of the job market for open source professionals and examines what hiring managers are looking for and what motivates employees in the industry. You can download the 2016 Open Source Jobs Report for free.  

Survey responses must be received by Thursday, July 27, at 12:00 pm Eastern time.

In this preview of the Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered Docker installation, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Volumes.

This final article in the series looks at Docker Compose, a tool for creating multi-container applications with a single command. If you are using Docker for Mac or Windows, or you installed the Docker Toolbox, then Docker Compose is available by default; if not, you can download it manually.

To try out WordPress, for example, let’s create a folder called wordpress and, in that folder, create a file called docker-compose.yaml. We will be exposing the wordpress container on port 8000 of the host system.
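
As a concrete sketch, a minimal docker-compose.yaml for this setup might look like the following. The MySQL backend, service names, and credentials here are illustrative assumptions, not the exact file from the course:

    version: '2'

    services:
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example    # placeholder credentials
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: wordpress
        volumes:
          - db_data:/var/lib/mysql    # named volume for the database

      wordpress:
        image: wordpress:latest
        depends_on:
          - db
        ports:
          - "8000:80"    # expose the container's port 80 on host port 8000
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: wordpress

    volumes:
      db_data: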

When we start an application with Docker Compose, it creates a user-defined network and attaches the application’s containers to it; the containers communicate with each other over that network. Because we have configured Docker Machine to connect to our dockerhost, Docker Compose will use that host as well.

Now, with the docker-compose up command, we can deploy the application. With the docker-compose ps command, we can list the containers created by Docker Compose, and with docker-compose down, we can stop and remove them. This also removes the network associated with the application. To delete the associated volumes as well, we need to pass the -v option with the docker-compose down command.
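
Putting those commands together, the lifecycle might look like this sketch (running docker-compose up in detached mode with -d is an assumption about typical usage):

    # Start the application in the background
    docker-compose up -d

    # List the containers created by Docker Compose
    docker-compose ps

    # Stop and remove the containers, along with the application's network
    docker-compose down

    # Additionally remove the named volumes declared in the compose file
    docker-compose down -v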

Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

This series previews the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we installed Docker, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Hub.

In this article, we’ll talk a bit about Docker Volumes and networking. To create a volume, we use the docker volume create command, and to list volumes, we use the docker volume list command.

To mount a volume inside a container, we use the -v option with the docker container run command. For example, we can mount the myvol volume inside the container at the /data location, move into the /data folder, and create two files there.

Next, we exit the container and create a new container from the busybox image, mounting the same myvol volume. The files we created in the earlier container are available under /data, so this is how we can share content between containers using volumes. You can watch both of the videos below for details.
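
A minimal sketch of that volume workflow (the file names are illustrative) might look like this:

    # Create a volume and list the volumes on the host
    docker volume create myvol
    docker volume list

    # Mount myvol at /data inside a busybox container and create two files
    docker container run -it -v myvol:/data busybox sh
    / # cd /data
    /data # touch file1 file2
    /data # exit

    # A second container mounting the same volume sees both files
    docker container run --rm -v myvol:/data busybox ls /data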

To review Docker networking, we first create a container from the nginx image. With the docker container inspect command, we can get the container’s IP address, but that address is assigned on the docker0 bridge and is not accessible from the external world.

To access the container from the external world, we need to map a port on the host to a port on the container. With the -p option added to the docker container run command, we can do that mapping; for example, we can map port 8080 of the host system to port 80 of the container.

Once the port is mapped, we can reach the container by accessing the dockerhost on port 8080.
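
As a rough sketch (the container name and the dockerhost address placeholder are illustrative):

    # Run nginx and inspect its IP address on the docker0 bridge;
    # this address is not reachable from outside the host
    docker container run -d --name web nginx
    docker container inspect --format '{{.NetworkSettings.IPAddress}}' web

    # Map host port 8080 to container port 80
    docker container run -d -p 8080:80 nginx

    # The container is now reachable through the dockerhost on port 8080
    curl http://<dockerhost-ip>:8080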

Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Check out the session highlights for the new Diversity Empowerment Summit (DES), which will take place Sept. 14, 2017, in Los Angeles as part of Open Source Summit North America.

Featured sessions and speakers for DES include:

  • Chaos Theory + Civil Liberties = 21st Century Corporate Practices – Kate Ertmann, GO

  • Open Your Arms to Open Source – Solutions to Bring in Social Innovation to All Walks of Life All Over the World – Arpana Durgaprasad, IBM

  • You’re Not a *Real* Software Engineer – Amy Chen, Rancher Labs

  • CO.LAB: A Collaborative, Mobile Learning Experience – John Adams, Red Hat

Open Source Summit North America will also feature other diversity and inclusion activities.

Note that registration for DES is included in Open Source Summit registration fees at no additional cost. Anyone in open source who wants to learn more about furthering diversity and inclusion in the community, as well as in the broader technology industry, is encouraged to attend.

Onsite resources to increase accessibility to the event include:

  • Nursing room

  • Complimentary child care

  • Wheelchair & medical equipment rental from One Stop Mobility

  • Quiet room where conversation and interaction are not allowed

  • Communication stickers to indicate an attendee’s requested level of interaction

  • Non-binary restrooms

  • Strictly enforced Code of Conduct

The full lineup of Open Source Summit North America sessions, including those at DES, features more than 200 sessions covering everything from Cloud and Containers, to Security and Networking, to Linux and Kernel Development. Register now and save $150!

In this series previewing the self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered installing Docker, introduced Docker Machine, and reviewed some basic commands for performing Docker container and image operations. In the three sample videos below, we’ll take a look at Dockerfiles and Docker Hub.

Docker can build an image by reading the build instructions from a file generally referred to as a Dockerfile. First, check your connectivity with the “dockerhost,” and then create a folder called nginx. In that folder, create a file called Dockerfile that uses instructions such as FROM, RUN, EXPOSE, and CMD.

To build an image, we use the docker build command. With the -t option, we can specify the image name, and with a “.” at the end, we tell Docker to use the current folder as the build context and look there for the Dockerfile.
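
For illustration, a Dockerfile along these lines might look like the following; the ubuntu base image and the nginx install step are assumptions, not the exact file from the course:

    FROM ubuntu
    RUN apt-get update && apt-get install -y nginx
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

From inside the nginx folder, the build command would then be:

    # -t names the image; "." makes the current folder the build context
    docker build -t mynginx:1.0 .

Here mynginx:1.0 is an arbitrary name:tag pair chosen for the example.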

On Docker Hub, we can also see repositories — for example, for nginx, redis, and busybox. A given repository can have different tags, each of which defines an individual image. On the repository page, we can also see the Dockerfile from which an image is created — for example, the Dockerfile of the nginx image.

If you don’t have an account on Docker Hub, I recommend creating one now. After logging in, you can see all the repositories you’ve created; note that each repository name is prefixed with your username.

To push an image to Docker Hub, make sure the image name is prefixed with the username used to log into your Docker Hub account. With the docker image push command, we can push the image to a Docker registry, which by default is Docker Hub.
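
A sketch of that workflow, with <username> standing in for your Docker Hub username:

    # Log in, then tag the image with your username prefix and push it
    docker login
    docker image tag mynginx:1.0 <username>/nginx:1.0
    docker image push <username>/nginx:1.0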

Docker Hub has a feature called automated builds, which can trigger a build on Docker Hub as soon as you commit code to your GitHub repository. On GitHub, we have a repository called docker-automated-build that contains a Dockerfile, from which the image will be created. In a real-world example, we would have our application code alongside the Dockerfile.

To create the automated build, we first log into our Docker Hub account and then link our GitHub account with Docker Hub. Once the GitHub account is linked, we click on “Create” and then on “Create Automated Build.”

Next, we provide a short description and click on “Create.” Then, we select the GitHub repository that we want to link with this automated build. Now, we can go to our GitHub repository and change something there; as soon as we commit the change, a Docker build process starts on our Docker Hub account.

The image build is queued and will be scheduled eventually; once it completes, anybody will be able to download the image.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

At the recent Open Networking Summit, the SDN/NFV community convened in Santa Clara to share, learn, collaborate, and network around one of the most pervasive industry transformations of our time.

This year’s theme at ONS was “Harmonize, Harness, and Consume,” representing a significant turning point as network operators spanning telecommunications, cable, enterprise, cloud, and the research community renew their efforts to redefine the network architecture.

Widespread new technology adoption takes years to succeed, and requires close collaboration among those producing network technology and those consuming it. Traditionally, standards development organizations (SDOs) have played a critical role in offering a forum for discussion and debate, and well-established processes for systematically standardizing and verifying new technologies.

The introduction of largely software-based (vs. hardware) functionality necessitates a rethinking of the conventional technology adoption lifecycle. In a software-driven world, it is infeasible to define complex reference architectures and software platforms a priori; a more iterative approach is required. As a result, the industry has increasingly turned to open source communities for implementation expertise and feedback.

In this new world order, closer collaboration among the SDOs, industry groups, and open source projects is needed to capitalize upon each constituent’s strengths:

  • SDOs provide operational expertise and well-defined processes for technology definition, standardization, and validation
  • Industry groups offer innovative partnerships between network operators and their vendors to establish open reference architectures that are guiding the future of the industry
  • Open source projects provide technology development expertise and infrastructure that are guided by end-user use cases, priorities, and requirements

Traditionally, each of these groups has operated relatively autonomously, liaising formally and informally primarily for knowledge sharing.

Moving ahead, close coordination is essential to better align individual organizations’ objectives, priorities, and plans. SDN/NFV are far too pervasive for any single group to own or drive; the goal, therefore, is to capitalize on the unique strengths of each to accelerate technology adoption.

It is in the spirit of such harmonization that The Linux Foundation is pleased to unveil an industry-wide call to action to achieve this goal.

As a first step, we are issuing a white paper, “Harmonizing Open Source and Standards in the Telecom World,” to outline the key concepts and to invite unprecedented collaboration among the SDOs, open source projects, and industry groups, each of which plays a vital role in establishing the sustainable ecosystem that is essential for success.

The introduction of The Linux Foundation’s Open Network Automation Platform (ONAP) is a tangible step toward harmonization: it not only merges the OPEN-O and open source ECOMP communities, but also establishes a platform that, by its nature as an orchestration and automation platform, must inherently integrate with a diverse set of standards, open source projects, and reference architectures.

We invite all in the community to participate in the process, in a neutral environment where the incentive for all is to work together rather than pursue separate paths.

Join us to usher in a new era of collaboration and convergence to reshape the future.

Download the Whitepaper

Open source is the new normal for startups and large enterprises looking to stay competitive in the digital economy. That means that open source is now also a viable long-term career path.

“It is important to start thinking about the career road map, and the pathway that you can take and how Linux and open source in general can help you meet your career goals,” said Clyde Seepersad, general manager of training at The Linux Foundation, in a recent webinar.

Certification is one clear path with real career benefits. Forty-four percent of hiring managers in our recent 2016 Open Source Jobs Report said they’re more likely to hire certified candidates. And 76 percent of open source pros surveyed believe certifications lead to a career boost.

The Linux Foundation Certified System Administrator (LFCS) and Certified Engineer (LFCE) exams are great opportunities for sysadmins to polish and prove their skills. The exams are available online to anyone in the world at any time. They’re also performance-based: you work within a Linux server terminal, overseen by a proctor. Because the format is not multiple choice, even seasoned pros will need some preparation to avoid common mistakes and complete the exam within the time limit.

To help you prepare for the certification exam, and a long and successful sysadmin career, we’ve gathered some tips, below, from Linux Foundation certified sysadmins who have completed the LFCS or LFCE exams.

1. Practice

“Experience is key. Spin up a VM, take a fresh snapshot of it and go to work applying all the requirements of the exam in practice. When you feel you have satisfied all the exam topics thoroughly, apply that fresh snapshot to revert changes and begin again until it is second nature. Also, feel comfortable with man pages; they are your best friend when Google is not an option.”

Chris Van Horn, Linux Foundation Certified System Administrator (LFCS) and a “Debian guy.”

2. Give it time

“The best preparation is your experience. If you feel that you have enough experience with the topics required by the exam, you can give it a try. Otherwise, you have to work hard to get those skills.

Don’t think that in a short time you can learn everything.”

Dashamir Hoxha, LFCS, an Ubuntu user and open source contributor.

3. Learn how to use man pages

“If you haven’t already, get familiar with the man pages. Know what they are and how to use them efficiently.

No matter how much you study, you can’t learn everything, and if you could, you wouldn’t retain it all anyway. The man pages will fill in the gaps.”

William Brawner, LFCS, an Arch Linux user who plans to take the LFCE exam next.

4. Understand the material, don’t just memorize it

“Forget recipes; it’s not about memorization. Understand what you are doing by reading some books and documentation that give you a deep background on the tasks you’ll perform in the exam and in real life.

Imagine real problems and try to solve them.”

Francisco Tsao, LFCE, self-professed Debian fanboy and Fedora contributor.

5. The boring stuff is still important

“Do not rely on one book only! Study and practice…even the stuff that you find mundane.

A portion of the tasks are boring, but you cannot avoid them.”

George Doumas, LFCS, and a fan of Scientific Linux, openSUSE, and Linux Mint.

6. Follow the instructions

“For experienced professionals, I recommend that they prepare the environment for the exam, and follow the instructions. It’s not a difficult exam if you work daily with Linux.

On the other hand, for newcomers, apart from having a look at open/free resources, I just encourage them to set up a Linux environment at home and get their hands dirty!!”

Jorge Tudela Gonzalez de Riancho, LFCS, Debian user and Raspberry Pi enthusiast.

7. Have fun!

“Make sure you love what you are doing, and do not forget to have fun, to experiment, and then to do it all over again and again, and make sure you learn something new each time.”

Gabriel Canepa, LFCS, Red Hat Enterprise Linux admin and technical writer.

Sign up to receive one free Linux tutorial each week for 22 weeks from Linux Foundation Training. Sign Up Now »