Posts

open source networking

Arpit Joshipura, Networking General Manager at The Linux Foundation, discussed open source networking trends at Open Source Summit Europe.

Ever since the birth of local area networks, open source tools and components have driven faster and more capable network technologies forward. At the recent Open Source Summit event in Europe, Arpit Joshipura, Networking General Manager at The Linux Foundation, discussed his vision of open source networks and how they are being driven by full automation.

“Networking is cool again,” he said, opening his keynote address with observations on software-defined networks, virtualization, and more. Joshipura is no stranger to network trends. He has led major technology deployments across enterprises, carriers, and cloud architectures, and has been a steady proponent of open source.

“This is an extremely important time for our industry,” he said. “There are more than 23 million open source developers, and we are in an environment where everyone is asking for faster and more reliable services.”

Transforming telecom

As an example of transformative change that is now underway, Joshipura pointed to the telecom industry. “For the past 137 years, we saw proprietary solutions,” he said. “But in the past several years, disaggregation has arrived, where hardware is separated from software. If you are a hardware engineer, you build things like software developers do, with APIs and reusable modules. In the telecom industry, all of this is helping to scale networking deployments in brand new, automated ways.”

Joshipura especially emphasized that automating cloud, network and IoT services will be imperative going forward. He noted that enterprise data centers are working with software-defined networking models, but stressed that too much fragmented and disjointed manual tooling is required to optimize modern networks.

Automating services

“In a 5G world, it is mandatory that we automate services,” he said. “You can’t have an IoT device sitting on the phone and waiting for a service.” In order to automate network services, Joshipura foresees data rates increasing by 100x over the next several years, bandwidth increasing by 10x, and latencies decreasing to one-fifth of what we tolerate now.

The Linux Foundation hosts several open source projects that are key to driving networking automation. For example, Joshipura noted EdgeX Foundry and its work on IoT automation, and Cloud Foundry’s work with cloud-native applications and platforms. He also pointed to broad classes of open source networking tools driving automation, including:

  • Application layer/app server technologies
  • Network data analytics
  • Orchestration and management
  • Cloud and virtual management
  • Network control
  • Operating systems
  • IO abstraction & data path tools
  • Disaggregated hardware

Tools and platforms

Joshipura also discussed emerging, open network automation tools. In particular, he described ONAP (Open Network Automation Platform), a Linux Foundation project that provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions. The platform enables software, network, IT, and cloud providers and developers to rapidly automate new services and support complete lifecycle management. Joshipura noted that ONAP is ushering in faster services on demand, including 4G, 5G, and business/enterprise solutions.

“ONAP is one of the fastest growing networking projects at The Linux Foundation,” he said, pointing to companies working with ONAP ranging from AT&T to VMware.

Additionally, Joshipura highlighted OPNFV, a project that facilitates the development and evolution of NFV components across open source ecosystems. Through system level integration, deployment and testing, OPNFV creates a reference NFV platform to accelerate the transformation of enterprise and service provider networks. He noted that OPNFV now offers container support and that organizations are leveraging it in conjunction with Kubernetes and OpenStack.

To learn more about the open source tools and trends that are driving network automation, watch Joshipura’s entire keynote address below:

Additionally, registration is open for the Open Networking Summit North America. Taking place March 26-29 in Los Angeles, it’s the industry’s premier open networking event, bringing together enterprises, carriers, and cloud service providers across the ecosystem to share learnings, highlight innovation, and discuss the future of Open Source Networking.

Learn more and register now!

OPNFV

The OPNFV project provides users with open source technology they can use and tailor for their purposes; learn how to get involved.

Over the past several weeks, we have been discussing the Understanding OPNFV book (see links to previous articles below). In this last article in the series, we will look at why you should care about the project and how you can get involved.

OPNFV provides both tangible and intangible benefits to end users. Tangible benefits are those that directly impact business metrics, whereas the intangibles speed up the overall NFV transformation journey but are harder to measure. Because the OPNFV project focuses primarily on integrating and testing upstream projects and adding carrier-grade features to them, these benefits can be difficult to see.

To understand this more clearly, let’s go back to the era before OPNFV. Open source projects do not, as a matter of routine, perform integration and testing with other open source projects. So, the burden of taking multiple disparate projects and making the stack work for NFV fell primarily on Communications Service Providers (CSPs), although in some cases vendors shouldered part of the burden. It made little sense for each CSP or vendor to repeat the same integration and testing work independently.

Furthermore, upstream communities are often horizontal in their approach and do not investigate or prioritize requirements for a particular industry vertical. In other words, there was no person or entity driving carrier-grade features in many of these same upstream projects. OPNFV was created to fill these gaps.

Tangible and Intangible Benefits

With this background, OPNFV benefits become more clear. Chapter 10 of the book breaks down the tangible and intangible benefits further. Tangible benefits to CSPs include:

  • Faster rollout of new network services
  • Vendor-agnostic platform to onboard and certify VNFs
  • Stack based on best-in-class open source components
  • Reduced vendor lock-in
  • Ability to drive relevant features in upstream projects

Additionally, the OPNFV community operates using DevOps principles and is organized into small, independent and distributed teams. In doing so, OPNFV embodies many of the same practices used by the web giants. CSPs can gain valuable insight into people and process changes required for NFV transformation by engaging with OPNFV. These intangible benefits include insights into:

  • Organizational changes
  • Process changes
  • Technology changes
  • Skillset acquisition

OPNFV is useful not only for CSPs, however; it also provides benefits to vendors (technology providers) and individuals. Vendors can benefit from interoperability testing (directly if their products are open source, or indirectly through private testing or plugfests), and gain insights into carrier-grade requirements and industry needs. Individuals can improve their skills by gaining broad exposure to open source NFV. Additionally, users can learn how to organize their teams and retool their processes for successful NFV transformation.

The primary objective of the OPNFV project is to provide users with open source technology they can use and tailor for their purposes, and the Understanding OPNFV book covers the various aspects to help you get started with and get the most out of OPNFV. The last section of the book also explains how you might get involved with OPNFV and provides links to additional OPNFV resources.

Want to learn more? You can download the Understanding OPNFV ebook in PDF (in English or Chinese), or order a printed version on Amazon. Or you can check out the previous blogs in this series.

This article series previews the new Containers Fundamentals training course from The Linux Foundation, which is designed for those who are new to container technologies. In previous excerpts, we talked about what containers are and what they’re not and explained a little of their history. In this last post of the series, we will look at the building blocks for containers, specifically, namespaces, control groups, and UnionFS.

Namespaces are a feature of the Linux kernel that isolate and virtualize system resources for a process, so that each process gets its own view of resources, such as its own IP address, hostname, and so on. The system resources that can be virtualized are: mount points [mnt], process IDs [PID], networking [net], Interprocess Communication [IPC], hostnames [UTS], and users [user IDs].

Using the namespace feature of the Linux kernel, we can isolate one process from another. To the kernel, a container is nothing but a process, so we isolate each container by placing it in its own set of namespaces.
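
As a concrete illustration, here is a minimal sketch in Go of running a process inside new UTS, PID, and mount namespaces. It is Linux-only, must be run as root, and is purely illustrative rather than a real container runtime; the hostname value "container" is just an example.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "child" {
            // We are now inside the new namespaces: changing the hostname
            // here is invisible to the rest of the system.
            if err := syscall.Sethostname([]byte("container")); err != nil {
                fmt.Println("sethostname:", err)
                os.Exit(1)
            }
            hostname, _ := os.Hostname()
            fmt.Println("child hostname:", hostname)
            fmt.Println("child PID:", os.Getpid()) // PID 1 in the new PID namespace
            return
        }

        // Parent: re-execute this binary as "child" inside new UTS, PID,
        // and mount namespaces.
        cmd := exec.Command("/proc/self/exe", "child")
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("error:", err)
        }
    }

The child reports a different hostname and sees itself as PID 1, while the host is unaffected.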

Another important feature that enables containerization is control groups (cgroups). With control groups, we can limit, account for, and isolate resource usage such as CPU, memory, disk I/O, and network. And, with UnionFS, we can transparently overlay two or more directories and implement a layered approach for containers.
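
Continuing the sketch above, the snippet below shows the idea behind control groups: create a group, write a limit, and add a process to it. It assumes a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory (newer distributions often use cgroup v2, where the file names differ), requires root, and the group name "demo" and the 64 MB limit are arbitrary example values.

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "strconv"
    )

    func main() {
        // Create a new cgroup called "demo" under the memory controller
        // (cgroup v1 layout assumed).
        cg := "/sys/fs/cgroup/memory/demo"
        if err := os.MkdirAll(cg, 0755); err != nil {
            log.Fatal(err)
        }

        // Limit every process in this cgroup to 64 MB of memory.
        limit := []byte(strconv.Itoa(64 * 1024 * 1024))
        if err := os.WriteFile(filepath.Join(cg, "memory.limit_in_bytes"), limit, 0644); err != nil {
            log.Fatal(err)
        }

        // Add the current process to the cgroup; its children inherit the limit.
        pid := []byte(strconv.Itoa(os.Getpid()))
        if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0644); err != nil {
            log.Fatal(err)
        }

        log.Println("this process and its children are now limited to 64 MB of memory")
    }
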

You can get more details in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

In previous excerpts of the new, self-paced Containers Fundamentals course from The Linux Foundation, we discussed what containers are and are not. Here, we’ll take a brief look at the history of containers, which includes chroot, FreeBSD jails, Solaris zones, and systemd-nspawn. 

Chroot was first introduced in 1979, during development of Seventh Edition Unix (also called Version 7), and was added to BSD in 1982. In 2000, FreeBSD extended chroot to FreeBSD Jails. Then, in the early 2000s, Solaris introduced the concept of zones, which virtualized the operating system services.

With chroot, you can change the apparent root directory for the currently running process and its children. Once chroot is configured, subsequent commands run with respect to the new root (/). Chroot confines processes only at the filesystem level, however; they still share other resources, such as users, hostname, and IP address. FreeBSD Jails extended the chroot model by also virtualizing users, the network subsystem, and more.
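
For illustration, here is a minimal Go sketch of the chroot idea: change the apparent root, then run a command relative to it. It must be run as root, and the directory /srv/minimal-rootfs is a hypothetical, pre-populated root filesystem that would need to contain the /bin/sh binary being executed.

    package main

    import (
        "log"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Hypothetical, pre-populated root filesystem; it must contain /bin/sh.
        newRoot := "/srv/minimal-rootfs"

        // Change the apparent root directory for this process and its children.
        if err := syscall.Chroot(newRoot); err != nil {
            log.Fatalf("chroot failed: %v", err)
        }
        // Move into the new root so that relative paths behave as expected.
        if err := os.Chdir("/"); err != nil {
            log.Fatalf("chdir failed: %v", err)
        }

        // Subsequent commands run relative to the new root (/); the process
        // still shares users, hostname, IP address, etc. with the host.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("running shell failed: %v", err)
        }
    }
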

systemd-nspawn has not been around as long as chroot and Jails, but it can be used to create containers, which are then managed by systemd. On modern Linux operating systems, systemd is used as the init system to bootstrap user space and manage all subsequent processes.

This training course, presented mainly in video format, is aimed at those who are new to containers and covers the basics of container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more.

You can learn more in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!

This series provides a preview of the new, self-paced Containers Fundamentals course from The Linux Foundation, which is designed for those who are new to container technologies. The course covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In the first excerpt, we defined what containers are, and in this installment, we’ll explain a bit further. You can also sign up to access all the free sample chapter videos now.

Note that containers are not lightweight VMs. Both provide isolation and run applications, but the underlying technologies are completely different, as are the processes for managing them.

VMs are created on top of a hypervisor, which is installed on the host operating system. Containers run directly on the host operating system, without any guest OS of their own. The host operating system provides isolation and allocates resources to the individual containers.

Once you become familiar with containers and would like to deploy them in production, you might ask, “Where should I deploy my containers — on VMs, bare metal, in the cloud?” From the container’s perspective, it does not matter, as containers can run anywhere. In reality, though, many variables affect the decision, such as cost, performance, security, current skill set, and so on.

Find out more in these sample course videos below, taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook:

Want to learn more? Access all the free sample chapter videos now!

In this series, we’ll provide a preview of the new Containers Fundamentals (LFS253) course from The Linux Foundation. The course is designed for those who are new to container technologies, and it covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In this installment, we start from the basics. You can also sign up to access all the free sample chapter videos now.

What Are Containers?

In today’s world, developers, quality assurance engineers, and everyone involved in the application lifecycle are listening to customer feedback and striving to implement the requested features as soon as possible.

Containers are an application-centric way to deliver high-performing, scalable applications on the infrastructure of your choice by bundling the application code, the application runtime, and the libraries.

Additionally, using containers with microservices makes a lot of sense, because you can do rapid development and deployment with confidence. With containers, you can also record a deployment by building an immutable infrastructure. If something goes wrong with the new changes, you can simply return to the previously known working state.

This self-paced course — taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook — is provided almost entirely in video format. This video from chapter 1 gives an overview of containers.

Want to learn more? Access all the free sample chapter videos now!

The Linux Foundation has announced keynote speakers and session highlights for Open Networking Summit, to be held April 3-6, 2017 in Santa Clara, CA.

ONS promises to be the largest, most comprehensive and most innovative networking and orchestration event of the year. The event brings enterprises, carriers, and cloud service providers together with the networking ecosystem to share learnings, highlight innovation and discuss the future of open source networking.

Speakers and attendees at Open Networking Summit represent the best and brightest in next-generation open source networking and orchestration technologies.

ONS keynote speakers

Martin Casado, a general partner at the venture capital firm Andreessen Horowitz and co-founder of Nicira (acquired by VMware in 2012) will give a keynote on the future of networking. (See our Q&A with Casado for a sneak preview.)

Other keynote speakers include:

  • John Donovan, Chief Strategy Officer and Group President, AT&T Technology and Operations, with Andre Fuetsch, President of AT&T Labs and Chief Technology Officer at AT&T

  • Justin Dustzadeh, VP, Head of Global Infrastructure Network Services, Visa

  • Dr. Hossein Eslambolchi, Technical Advisor to Facebook, Chairman & CEO, 2020 Venture Partners

  • Albert Greenberg, Corporate Vice President Azure Networking, Microsoft

  • Rashesh Jethi, SVP Engineering at Amadeus IT Group SA, the world’s leading online travel platform

  • Sandra Rivera, Vice President Datacenter Group, General Manager, Network Platforms Group, Intel Corporation

  • Amin Vahdat, Google Fellow and Technical Lead for Networking, Google

ONS session speakers

Summit sessions will cover the full scope of open networking across enterprise, cloud and service providers. Topics that will be explored at the event include container networking, software-defined data centers, cloud-native application development, security, network automation, microservices architecture, orchestration, SDN, NFV and so much more. Look forward to over 75 tutorials, workshops, and sessions led by networking innovators.

Session highlights include:

  • Accelerated SDN in Azure, Daniel Firestone, Microsoft

  • Troubleshooting for Intent-based Networking, Joon-Myung Kang, Hewlett Packard Labs

  • Beyond Micro-Services Architecture, Larry Peterson, Open Networking Lab

  • Combining AI and IoT. New Industrial Revolution in our houses and in the Universe, Karina Popova, LINK Mobility

  • Rethinking NFV: Where have we gone wrong, and how can we get it right?, Scott Shenker, UC Berkeley

View the full schedule with many more sessions across six tracks.

Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the registration price. Register to attend by February 19 and save more than $800 over late registration pricing.

Start exploring Essentials of OpenStack Administration by downloading the free sample chapter today.

In part 1 of this series, we defined cloud computing and discussed different cloud services models and the needs of users and platform providers. This time we’ll discuss some of the challenges that conventional data centers face and why automation and virtualization, alone, cannot fully address these challenges. Part 3 will cover the fundamental components of clouds and existing cloud solutions.

For more on the basic tenets of cloud computing and a high-level look at OpenStack architecture, download the full sample chapter from The Linux Foundation’s online Essentials of OpenStack Administration course.

Conventional Data Centers

Conventional data centers are known for having a lot of hardware that is, by current standards at least, grossly underutilized. In addition to that, all that hardware (and the software that runs on it) is usually managed with relatively little automation.

Even though many things happen automatically these days (configuration deployment systems such as Puppet and Chef help here), the overall level of automation is typically not very high.

With conventional data centers, it is very hard to find the right balance between capacity and utilization. This is complicated by the fact that many workloads do not fully utilize a modern server: for instance, some may use a lot of CPU but little memory, or a lot of disk I/O but little CPU. Still, data centers want enough capacity to handle spikes in load, but don’t want the cost of idle hardware.

Whatever the case, it is clear that modern data centers require a lot of physical space, power, and cooling. The more efficient they run, the better for all parties involved.

Figure 1: In a conventional data center some servers may use a lot of CPU but little memory (MEM), or a lot of disk IO but little CPU.

A conventional data center may face several challenges to efficiency. Often there are silos, or divisions of duties among teams: a systems team that handles the ongoing maintenance of operating systems, a hardware team that does the physical and plant maintenance, database and network teams, and perhaps storage and backup teams as well. While this allows for specialization in a particular area, the efficiency of producing a new instance to meet customer requirements is often low.

A conventional data center also tends to grow organically rather than through well-planned change. If it’s 2 a.m. and something needs doing, a person from that particular team may make whatever changes they think are necessary. Without proper documentation, the other teams are unaware of those changes, and figuring them out later takes a lot of time, energy, and resources, which further lowers efficiency.

Manual Intervention

One of the problems arises when a data center needs to expand: new hardware is ordered, and, once it arrives, it’s installed and provisioned manually. Hardware is likely specialized, making it expensive. Provisioning processes are manual and, in turn, costly, slow, and inflexible.

You might ask, “What is so bad about manual provisioning?” Think about it: network integration, monitoring, setting up high availability, billing… There is a lot to do, and some of it is not simple. These are things that are not hard to automate, yet until recently this was hardly ever done.

Automation frameworks such as Puppet, Chef, Juju, Crowbar, or Ansible can take care of and automate a fair amount of the work in modern data centers. However, even though these frameworks exist, there are many tasks in a data center that they cannot do, or do not do well.

Virtualization

A platform provider needs automation, flexibility, efficiency, and speed, all at low cost. We have automation tools, so what is the missing piece? Virtualization!

Virtualization is not a new thing. It has been around for years, and many people have been using it extensively. Virtualization comes with the huge advantage of isolating the hardware from the software being used. Modern server hardware can be used much more efficiently when combined with virtualization. Also, virtualization allows for a much higher level of automation than standard IT setups do.

Figure 2: Virtualization flexibility.

Virtualization and Automation

For instance, deploying a new system in a virtualized environment is fairly easy, because all it takes is creating a new Virtual Machine (VM). This helps us plan better when buying new hardware, preparing it, and integrating it into the platform provider’s data center. Typical virtualization environments such as VMware, KVM on Linux, or Microsoft Hyper-V are good examples.

Yet, the situation is not ideal, because in standard virtualized environments, many things still need to be done by hand.

Customers will typically not be able to create new VMs on their own; they need to wait for the provider to do it for them. The infrastructure provider will first create storage (such as Ceph, SAN, or iSCSI LUN), attach it to the VM, and then perform OS installation and basic configuration.

In other words, standard virtualization is not enough to fulfill either providers’ or their customers’ needs. Enter cloud computing!

In Part 3 of this series, we’ll contrast what we’ve learned about conventional, un-automated infrastructure offerings with what happens in the cloud.

Read the other articles in this series: 

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Help shape the future of open networking! The Linux Foundation is now seeking business and technical leaders to speak at Open Networking Summit 2017.

On April 3-6 in Santa Clara, CA, ONS will gather more than 2,000 executives, developers and network architects to discuss innovations in networking and orchestration. It is the only event that brings together the business and technical leaders across carriers and cloud service providers, vendors, start-ups and investors, and open source and open standards projects in software-defined networking (SDN) and network functions virtualization (NFV).

Submit a talk to speak in one of our five new tracks for 2017 and share your vision and expertise. The deadline for submissions is Jan. 21, 2017.

The theme this year is “Open Networking: Harmonize, Harness and Consume.” Tracks and suggested topics include:

General Interest Track

  • State of Union on Open Source Projects (Technical updates and latest roadmaps)

  • Programmable Open Hardware including Silicon & White Boxes + Open Forwarding Innovations/Interfaces

  • Security in a Software Defined World

Enterprise DevOps/Technical Track

  • Software Defined Data Center Learnings including networking interactions with Software Defined Storage

  • Cloud Networking, End to End Solution Stacks – Hypervisor Based

  • Container Networking

Enterprise Business/Architecture Track

  • ROI on Use Cases

  • Automation – network and beyond

  • Analytics

  • NFV for Enterprise (vPE)

Carriers DevOps/Technical Track

  • NFV use Cases – VNFs

  • Scale & Performance of VNFs

  • Next Gen Orchestration OSS/BSS & FCAPS models

Carriers Business/Architecture Track

  • SDN/NFV learnings

  • ROI on Use Cases

  • Architecture Learnings from Cloud

See the full list of potential topics on the ONS website.

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register by February 19 to save over $850.