
LF Networking became a catalyst for the telecom industry by creating an umbrella project under which various players can contribute and enrich the technologies involved.

The telecom industry is at the heart of the fourth industrial revolution. Whether it’s connected IoT devices or mobile entertainment, the modern economy runs on the Internet.

However, the backbone of networking has been running on legacy technologies. Some telecom companies are well over a century old, and they have massive infrastructure that needs to be modernized.

The great news is that this industry is already at the forefront of emerging technologies. Companies such as AT&T, Verizon, China Mobile, Deutsche Telekom, and others have embraced open source technologies to move faster into the future. And LF Networking is at the heart of this transformation.

“2018 has been a fantastic year,” said Arpit Joshipura, General Manager of Networking at The Linux Foundation, speaking at Open Source Summit in Vancouver last fall. “We have seen a 140-year-old telecom industry move from proprietary and legacy technologies to open source technologies with LF Networking.”

LF Networking now has more than 100 members, which together represent roughly 70% of the world’s mobile subscribers. These members are actively participating in software development at LF Networking: they are collaborating on existing projects, contributing their own in-house code to the foundation, and releasing it as open source.

For example, AT&T contributed its own work on virtual networks to the Linux Foundation as ONAP. The project is now being used in production by other companies, and AT&T in return is benefitting from the work its competitors are doing to improve the code base.

“Over $500 million worth of software innovation, in terms of value, has been created in the open source community,” said Joshipura. “We can now safely say that the telecom industry is going to use open source that is based out of Linux Foundation to build their next generation networks.”

Telecom Transformation

What’s incredible about this transformation within the telecom industry is that unlike other industries where developers drive the change, here top leadership has advocated for change all the way down.

LF Networking became a catalyst to help the industry by creating an umbrella project under which various players can gather, contribute, and enrich the technologies involved.

The primary focus of LF Networking at the moment is to see more and more of these technologies in production. “But our next goal is to see how networking enables what we call cross-project collaboration, cross-industry collaboration, cross-community collaboration. How does blockchain impact telcos, how can telcos go cloud-native with Kubernetes… and so on,” said Joshipura.

One of the most promising areas for the networking community is edge computing, as seen in the recent creation of the new LF Edge umbrella project. There is a lot of innovation happening in the space — 5G, autonomous driving, and so on.  “Our focus is on figuring out how do these projects come together and collaborate so that there’s more value to our end users, to our members,” he said.

The Linux Foundation has a wide range of projects, many of which are building code individually. Joshipura wants these projects to collaborate closely. “We have the concept of VNFs (Virtual Network Functions). How do we make them cloud-native? We created a project called CNF (Cloud Native Network Functions), but we need to work with the ONAP community, networking community, and Kubernetes community to solve some of the problems that the networking community is facing,” he said.

With its current momentum and community support, LF Networking is on track to lead the way.

Watch the complete video of Joshipura’s talk below.

In this preview of the Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered Docker installation, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Volumes.

This final article in the series looks at Docker Compose, a tool for creating multi-container applications with just one command. If you are using Docker for Mac or Windows, or if you have installed Docker Toolbox, then Docker Compose will be available by default. If not, you can download it manually.

To try out WordPress, for example, let’s create a folder called wordpress and, in that folder, a file called docker-compose.yaml. We will be exposing the wordpress container on port 8000 of the host system.
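A minimal sketch of such a file, written from the shell (the service layout, image tags, and credentials below are illustrative placeholders rather than the exact file used in the course):

```bash
mkdir wordpress && cd wordpress

# Illustrative compose file; the database credentials are placeholders.
cat > docker-compose.yaml <<'EOF'
version: "2"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    ports:
      - "8000:80"   # host port 8000 -> container port 80
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
EOF
```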

When we start an application with Docker Compose, it creates a user-defined network and attaches the application’s containers to it; the containers communicate over that network. Because we have configured Docker Machine to connect to our dockerhost, Docker Compose will use that host as well.

Now, with the docker-compose up command, we can deploy the application. With the docker-compose ps command, we can list the containers created by Docker Compose, and with docker-compose down, we can stop and remove them. This also removes the network associated with the application. To delete the associated volumes as well, we need to pass the -v option to docker-compose down.
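The basic lifecycle, run from the directory containing docker-compose.yaml, looks roughly like this (the -d flag, which runs the application in the background, is optional):

```bash
docker-compose up -d    # create the network and containers, start the app
docker-compose ps       # list the containers Compose created
docker-compose down     # stop and remove the containers and the network
docker-compose down -v  # as above, but also remove the associated volumes
```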

 Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

This series previews the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we installed Docker, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Hub.

In this article, we’ll talk a bit about Docker Volumes and networking. To create a volume, we use the docker volume create command, and to list volumes, we use the docker volume ls command.
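For example, creating and listing a volume named myvol (the name is arbitrary; the next step mounts it into a container):

```bash
docker volume create myvol   # create a named volume
docker volume ls             # list the volumes on the Docker host
```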

To mount the volume inside a container, we need to use the -v option with the docker container run command. For example, we can mount the myvol volume inside the container at the /data location. After moving into the /data folder, we create two files there.

Next, we come out of the container and create a new container from the busybox image, mounting the same myvol volume. The files we created in the earlier container are available under /data. In this way, we can share content between containers using volumes. You can watch both of the videos below for details.
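A condensed, non-interactive sketch of that workflow (the course demo works interactively inside the containers; busybox is used for both containers here to keep the example small):

```bash
# First container: mount myvol at /data and create two files there
docker container run --rm -v myvol:/data busybox sh -c 'touch /data/file1 /data/file2'

# Second container: mount the same volume and list /data
docker container run --rm -v myvol:/data busybox ls /data
# file1
# file2
```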

To review Docker networking, we first create a container from the nginx image. With the docker container inspect command, we can get the container’s IP address, but that address is assigned on the docker0 bridge and is not accessible from the external world.

To access the container from the external world, we need to do port mapping between the host port and the container port. So, with the -p option added to the docker container run command, we can map the host port with the container port. For example, we can map Port 8080 of the host system with Port 80 of the container.

Once the port is mapped, we can access the container from the external world by connecting to the dockerhost on Port 8080.
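Put together, the steps look roughly like this (the container name web is arbitrary, and the last command assumes the dockerhost machine created earlier with Docker Machine):

```bash
# Run nginx with no port mapping and read its bridge-network IP address
docker container run -d --name web nginx
docker container inspect --format '{{ .NetworkSettings.IPAddress }}' web

# That docker0 address is not reachable from outside the host, so map
# host Port 8080 to container Port 80 instead
docker container rm -f web
docker container run -d --name web -p 8080:80 nginx

# Access the container through the dockerhost on Port 8080
curl http://$(docker-machine ip dockerhost):8080
```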

 Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

In this series, we’re taking a preview look at the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation.

In the first article, we talked about installing Docker and setting up your environment. You’ll need Docker installed to work along with the examples, so be sure to get that out of the way first. The first video below provides a quick overview of terms and concepts you’ll learn.

In this part, we’ll describe how to get started with Docker Machine.

Or, access all the sample videos now!

Docker has a client-server architecture, in which the Client sends commands to the Docker Host, which runs the Docker Daemon. Both the Client and the Docker Host can be on the same machine, or the Client can communicate with any Docker Host running anywhere, as long as it can reach and access the Docker Daemon.

The Docker Client and the Docker Daemon communicate over a REST API, even on the same system. One tool that can help you manage Docker Daemons running on different systems from your local workstation is Docker Machine.
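As a quick illustration that the docker CLI is just one client of that REST API, you can query the API directly; this sketch assumes a local daemon listening on the default Unix socket and a curl build that supports --unix-socket:

```bash
# Ask the daemon for its version over the Engine API
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers, the API call behind "docker ps"
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```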

If you are using Docker for Mac or Windows, or if you install Docker Toolbox, then Docker Machine will be available on your workstation automatically. With Docker Machine, we will deploy an instance on DigitalOcean and install Docker on it. For that, we first create an API key from DigitalOcean, which lets us programmatically deploy an instance there.

After getting the token, we export it in an environment variable called “DO_TOKEN”, which we then use on the “docker-machine” command line with the “digitalocean” driver to create an instance called “dockerhost”.
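The commands look roughly like this (the token value is a placeholder for your own DigitalOcean API key):

```bash
# Placeholder: substitute the API key generated in the DigitalOcean console
export DO_TOKEN="your-digitalocean-api-token"

# Create a DigitalOcean droplet named dockerhost and install Docker on it
docker-machine create \
  --driver digitalocean \
  --digitalocean-access-token "$DO_TOKEN" \
  dockerhost
```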

Docker Machine will then create an instance on DigitalOcean, install Docker on it, and configure secure access between the Docker Daemon running on “dockerhost” and the client on our workstation. Next, you can use the “docker-machine env” command with the newly created host, “dockerhost”, to find the parameters with which your Docker Client can connect to the remote Docker Daemon.

With the “eval” command, you can export all the environment variables for “dockerhost” into your shell. After you export the environment variables, the Docker Client on your workstation will connect directly to the DigitalOcean instance and run the commands there. The videos below provide additional details.
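A sketch of those two steps, assuming the dockerhost machine created above:

```bash
docker-machine env dockerhost            # print the DOCKER_* variables for dockerhost
eval "$(docker-machine env dockerhost)"  # export them into the current shell

# The local Docker Client now talks to the daemon on the DigitalOcean instance
docker info
```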

In the next article, we will look at some Docker container operations.

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Watch the sample videos here for more details:

Want to learn more? Access all the free sample chapter videos now!

In this series, we’ll provide a preview of the new Containers Fundamentals (LFS253) course from The Linux Foundation. The course is designed for those who are new to container technologies, and it covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In this installment, we start from the basics. You can also sign up to access all the free sample chapter videos now.

What Are Containers?

In today’s world, developers, quality assurance engineers, and everyone involved in the application lifecycle are listening to customer feedback and striving to implement the requested features as soon as possible.

Containers are an application-centric way to deliver high-performing, scalable applications on the infrastructure of your choice by bundling the application code, the application runtime, and the libraries.

Additionally, using containers with microservices makes a lot of sense, because you can do rapid development and deployment with confidence. Containers also let you treat each deployment as immutable infrastructure: if something goes wrong with the new changes, you can simply return to the previously known working state.

This self-paced course — taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook — is provided almost entirely in video format. This video from chapter 1 gives an overview of containers.

Want to learn more? Access all the free sample chapter videos now!

In 2017, The Linux Foundation’s Embedded Linux Conference marks its 12th year as the premier vendor-neutral technical conference for companies and developers using Linux in embedded products.

Now co-located with OpenIoT Summit, ELC promises to be the best place for embedded and application developers, product vendors, kernel and systems developers, as well as systems architects and firmware developers to learn, share, and advance the technical work required for embedded Linux and IoT.

In anticipation of this year’s North America event, to be held Feb. 21-23 in Portland, Oregon, we rounded up the top videos from the 2016 ELC and OpenIoT Summit. Register now with the discount code LINUXRD5 for 5% off the registration price. Save over $150 by registering before January 15, 2017.

1. Home Assistant: The Python Approach to Home Automation

Several home automation platforms support Python as an extension, but if you’re a real Python fiend, you’ll probably want Home Assistant, which places the programming language front and center. Paulus Schoutsen created Home Assistant in 2013 “as a simple script to turn on the lights when the sun was setting,” as he told attendees of his recent Embedded Linux Conference and OpenIoT Summit presentation, “Automating your Home with Home Assistant: Python’s Answer to the Internet of Things.”

Schoutsen, who works as a senior software engineer for AppFolio in San Diego, has attracted 20 active contributors to the project. Home Assistant is now fairly mature, with updates every two weeks and support for more than 240 different smart devices and services. The open source (MIT license) software runs on anything that can run Python 3 — from desktop PCs to a Raspberry Pi, and counts thousands of users around the world.

2. Linus Torvalds Talks IoT, Smart Devices, Security Concerns, and More

Linus Torvalds, the creator and lead overseer of the Linux kernel, and “the reason we are all here,” in the words of his interviewer, Intel Chief Linux and Open Source Technologist Dirk Hohndel, was upbeat about the state of Linux in embedded and Internet of Things applications. Torvalds’ very presence signaled that embedded Linux, which has often been overshadowed by Linux desktop, server, and cloud technologies, has come of age.

“Maybe you won’t see Linux at the IoT leaf nodes, but anytime you have a hub, you will need it,” Torvalds told Hohndel. “You need smart devices especially if you have 23 [IoT standards]. If you have all these stupid devices that don’t necessarily run Linux, and they all talk with slightly different standards, you will need a lot of smart devices. We will never have one completely open standard, one ring to rule them all, but you will have three or four major protocols, and then all these smart hubs that translate.”

3. Taming the Chaos of Modern Caches

It turns out that software — and computer education curricula — have not always kept up with new developments in hardware, ARM Ltd. kernel developer Mark Rutland said in his presentation “Stale Data, or How We (Mis-)manage Modern Caches.”

“Cache behavior is surprisingly complex, and caches behave in subtly different ways across SoCs,” Rutland told the ELC audience. “It’s very easy to misunderstand the rules of how caches work and be lulled into a false sense of security.”

4. IoTivity 2.0: What’s in Store?

Speaking shortly after the release of Open Connectivity Foundation (OCF)’s IoTivity 1.1, Vijay Kesavan, a Senior Member of Technical Staff in the Communication and Devices Group at Intel Corp, told the ELC audience about plans to support new platforms and IoT ecosystems in v2.0. He also explained how the OCF is exploring usage profiles beyond home automation in domains like automotive and industrial.

5. A Linux Kernel Wizard’s Adventures in Embedded Hardware

Sometimes the best tutorials come not from experts, but from proficient newcomers who are up to date on the latest entry-level technologies and can remember what it’s like to be a newbie. It also helps if, like Grant Likely, the teacher is a major figure in embedded Linux who understands how hardware is ignited by software.

At the Embedded Linux Conference, Likely — who is a Linux kernel engineer and maintainer of the Linux Device Tree subsystem used by many embedded systems — described his embedded hardware journey in a presentation called “Hardware Design for Linux Engineers” — or as he put it, “explaining stuff I only learned six months ago.”

Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the registration price. Save over $150 by registering before January 15, 2017.

Read More:

10 Great Moments from Linux Foundation 2016 Events

Top 7 Videos from ApacheCon and Apache Big Data 2016

It’s been two years since The Linux Foundation forged a partnership with the Apache Software Foundation to become the producer of their official ASF events. This year, ApacheCon and Apache Big Data continued to grow and gain momentum as the place to share knowledge, ideas, best practices and creativity with the rest of the Apache open source community.

As 2016 draws to a close, we looked back at some of the highlights from ApacheCon and Apache Big Data and collected the 7 videos from our most-read articles about the events in 2016.

These videos help highlight the good work the open source community accomplished for and with Apache projects this year. We hope they inspire you to participate in the community and present your work again at ApacheCon and Apache Big Data, May 16-18, 2017 in Miami. The deadline to submit proposals is February 11!

Submit an ApacheCon Proposal      

Submit an Apache: Big Data Proposal

1. IBM’s Wager on Open Source Is Still Paying Off

When IBM got involved with the Linux open source project in 1998, they were betting that giving their code and time to the community would be a worthwhile investment. Now, 18 years later, IBM is more involved than ever, with more than 62,000 employees trained and expected to contribute to open source projects, according to Todd Moore, Vice President of Open Technology at IBM, speaking at ApacheCon in Vancouver in May.

“It became apparent that open source could be the de facto standards we needed to be the engine to go out and drive things,” Moore said in his keynote. “[The contributions] were bets; we didn’t know how this was going to come out, and we didn’t know if open source would grow, we knew there would be roadblocks and things we’d have to overcome along the way, but it had promise. We thought this would be the way of the future.”

Moore reiterated IBM’s commitment to open source, highlighting projects born at IBM’s developerWorks Open (dWOpen), such as SystemML, Toree, and Quarks, and now in the Apache Incubator.

Read our coverage of Moore’s presentation, and watch the full video below.

2. Open Source is a Positive-Sum Game, Sam Ramji, Cloud Foundry

As open source software matures and is used by more and more major corporations, it is becoming clear that the enterprise software game has changed. Sam Ramji, CEO of the Cloud Foundry Foundation, believes that open source software is a positive sum game, as reflected in his ApacheCon keynote.

Invoking his love of game theory, Ramji stated emphatically that open source software is a positive-sum game, where the more contributors there are to the common good, the more good there is for everyone. This idea is the opposite of a zero-sum game, where if someone benefits or wins, then another person must suffer, or lose.

Read the full coverage and watch the video below.

3. Apache Milagro: A New Security System for the Future of the Web

With 25 billion new devices set to hit the Internet by 2025, the need for a better worldwide cryptosystem for securing information is paramount. That’s why the Apache Milagro project is currently incubating at the Apache Software Foundation. It’s a collaboration between MIRACL and Nippon Telegraph and Telephone (NTT), and Brian Spector, MIRACL CEO and Co-Founder, discussed the project in his keynote at ApacheCon in May.

Spector said the project was born in a bar on the back of a napkin after a brainstorm about how one would rebuild Internet security from the ground up. That sounds like a lot of work, but Spector believes it’s absolutely necessary: the future of the Web is going to be very different from the past.

Read the full article and watch the video below.

4. Netflix Uses Open Source Tools for Global Content Expansion

“We measured, we learned, we innovated, and we grew.”

Brian Sullivan, Director of Streaming Data Engineering & Analytics at Netflix, recited this recipe for the streaming video giant’s success several times during his keynote address at the Apache Big Data conference in Vancouver in May. It was this mantra, combined with an open source toolkit, that took the stand-alone streaming product from a tiny test launch in Canada to making Netflix a global presence.

Read a summary of the presentation and watch the video, below.

5. Spark 2.0 Is Faster, Easier for App Development, and Tackles Streaming Data

It only makes sense that as the community of Spark contributors got bigger, the project would get even more ambitious. Spark 2.0 came out with three robust new features, according to Ion Stoica, co-founder of Databricks.

“Spark 2.0 is about taking what has worked and what we have learned from the users and making it even better,” Stoica said.

Read our coverage of the keynote and watch the full presentation, below.

6. IBM Uses Apache Spark Across Its Products to Help Enterprise Customers

IBM is invested in Spark’s machine-learning capabilities and is contributing back to the project with its work on SystemML, which helps create iterative machine-learning algorithms. The company offers Spark-as-a-service in the cloud, and it’s building it into the next iteration of the Watson analytics platform. Basically anywhere it can, IBM is harnessing the efficient power of Apache Spark.

“We at IBM … have noted the power of Spark, and the other big data technologies that are coming in [from the Apache Software Foundation],” said Luciano Resende, an architect at IBM’s Spark Technology Center.

Read the full article and watch the presentation, below.

7. How eBay Uses Apache Software to Reach Its Big Data Goals

eBay’s ecommerce platform creates a huge amount of data. It has more than 800 million active listings, with 8.8 million new listings each week. There are 162 million active buyers, and 25 million sellers.

“The data is the most important asset that we have,” said Seshu Adunuthula, eBay’s head of analytics infrastructure, during a keynote at Apache Big Data in Vancouver in May. “We don’t have inventory like other ecommerce platforms, what we’re doing is connecting buyers and sellers, and data plays an integral role into how we go about doing this.”

About five years ago, eBay made the conscious choice to go all-in with open source software to build its big data platform and to contribute back to the projects as the platform took shape.

Read the full article about eBay and watch the presentation, below.

Share your knowledge and best practices on the technologies and projects driving the future of open source. Submit a speaking proposal for ApacheCon and Apache Big Data today!

Submit an ApacheCon Proposal      

Submit an Apache: Big Data Proposal

Not interested in speaking but want to attend? Linux.com readers can register now for ApacheCon or Apache: Big Data with the discount code, LINUXRD5, for 5% off the registration price.

Read More:

10 Great Moments from Linux Foundation 2016 Events

Individuals start open source projects because it matters to them. Whether they are motivated by passion, interest, necessity, curiosity, or fame, the individuals who start projects want to build better software. Do better work. Have an impact. See their code in the world’s best technology and products.

Because open source today makes up an ever increasing footprint in technology infrastructure and products, we have a responsibility to these individuals and the community and industry at large to support this work and build practices and processes that sustain the world’s greatest shared technologies for the long term.

Part of this work is a shift in thinking, moving away from old world open source questions to new world open source questions: from questions like “Is everything really free, and what is an OSS license?” to questions like “How does my employer integrate OSS into the product development process?” and “Are adequate resources committed to maintaining this project?” Open source projects today must meet the level of sophistication companies expect and in which they’re investing their futures.

We can together help ensure this by focusing on new world open source questions and creating a bigger tent, one that includes everyone: business managers, users, and developers across gender, race, and economic class. One that brings open source strategy, tools, training, compliance, and more to everyone. We must invest in the open source professional and focus on open source readiness that supports innovative research and development.

This focus is already resulting in big tent outcomes. Outcomes that together we are making possible. Here are just a few.

You can learn more about how The Linux Foundation is working to support open source for decades to come in Jim Zemlin’s complete keynote video from The Linux Foundation Collaboration Summit, below.

And view all 13 keynote videos from Collaboration Summit, held March 29-31 in Lake Tahoe, California.