Posts

Software-Defined Infrastructure

“The magic of the Uber app today is powered by a highly distributed software architecture,” says Uber’s Justin Dustzadeh.

The only way for Uber to deliver the required level of network performance and availability is through software and automation, said Justin Dustzadeh, head of global network and software platform at Uber. The ride-sharing company relies heavily on software to automate its infrastructure and thoroughly tests not only its software but also the test environment itself, Dustzadeh said, speaking at the recent Open Networking Summit.

“Our approach … is to create a test environment that can not only provide the capabilities needed to do the traditional software test cycles — such as feature testing, regression testing, integration testing — but also enables us to deploy and use the tested software to provision, monitor and configure the test environment itself,” he told the audience.

To give an idea of just how vast Uber’s network is: the company, which started in 2010, logged over five billion trips in 2016, and about 15 million rides now occur every day in over 600 cities across 78 countries, he said.

Software architecture

“The magic of the Uber app today is powered by a highly distributed software architecture that relies on a fault-tolerant and highly available infrastructure,” Dustzadeh said. To fully achieve the benefits of software-based automation, he said, they always strive to use open standards-based technologies and avoid dependency on a single vendor across the entire infrastructure stack.

At Uber, a key enabler is building real-time or near real-time visibility into the infrastructure state, then leveraging that information and augmenting it with additional insights from analytics and machine learning, Dustzadeh said. IT can then push the desired state of the infrastructure through programmatic interfaces.
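
Conceptually, that loop — observe the live state, compare it with the desired state, push corrections through a programmatic interface — can be sketched in a few lines. The Python below is purely illustrative; every name and interface is hypothetical, not Uber’s actual platform:

```python
# Minimal sketch of a desired-state reconciliation loop (hypothetical names;
# not Uber's actual platform). Observe live state, diff against the desired
# state, and push corrections through a programmatic interface.
from dataclasses import dataclass

@dataclass(frozen=True)
class LinkState:
    admin_up: bool
    mtu: int

DESIRED = {"sw1:eth0": LinkState(admin_up=True, mtu=9000)}

def observe() -> dict:
    """Stand-in for streaming-telemetry collection."""
    return {"sw1:eth0": LinkState(admin_up=False, mtu=1500)}

def push(link: str, want: LinkState) -> None:
    """Stand-in for a programmatic configuration interface."""
    print(f"converging {link} -> {want}")

def reconcile() -> None:
    live = observe()
    for link, want in DESIRED.items():
        if live.get(link) != want:   # state drifted from intent
            push(link, want)         # drive it back to the desired state

reconcile()
```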

In terms of real-life examples, he said they use software to automate many areas, from delivering forecasting models to doing capacity planning, provisioning infrastructure and managing all the changes that IT performs. Additionally, software is used to automate detecting incidents and for mitigating and remediating when things fail.

Automation

“For provisioning across our server and network environments we leverage a number of homegrown software platforms to automate and orchestrate the entire provisioning process,” in areas such as auto-discovery, Dustzadeh said. On the network side, for example, IT pushes intelligence to the devices to enable a distributed self-discovery model and zero-touch provisioning, he noted. This includes auto-validation of the state of the hardware, for example, to prevent bad devices from going into production, he added.
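
That validation gate reduces to a simple pattern: run hardware checks at discovery time and refuse to promote any device that fails one. The sketch below is a hedged illustration; the checks, fields, and thresholds are invented, not Uber’s actual criteria:

```python
# Hypothetical sketch of an auto-validation gate in zero-touch provisioning:
# a newly discovered device must pass hardware checks before it is promoted
# into production. All checks and fields are invented for illustration.
APPROVED_FIRMWARE = {"4.2.1", "4.2.2"}

def validate_hardware(device: dict) -> list:
    failures = []
    if device.get("psu_count", 0) < 2:
        failures.append("missing redundant power supply")
    if device.get("fan_status") != "ok":
        failures.append("fan fault")
    if device.get("firmware") not in APPROVED_FIRMWARE:
        failures.append("unapproved firmware %s" % device.get("firmware"))
    return failures

def admit_to_production(device: dict) -> bool:
    failures = validate_hardware(device)
    if failures:
        print(device["name"], "held back:", failures)
        return False
    print(device["name"], "admitted to production")
    return True

admit_to_production({"name": "tor-12", "psu_count": 2,
                     "fan_status": "ok", "firmware": "4.2.1"})
```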

Uber’s IT group utilizes a distributed and highly available platform for auto-detection, he said. On the network side, they do both active and passive monitoring, leveraging streaming telemetry. This gives the team near real-time visibility into the state of the network, including network reachability, network latency, packet loss, and link utilization, he said.

Auto-mitigation and auto-remediation are other areas where Uber heavily leverages software to improve its operational efficiencies, he said. “So when hardware fails, not only do we have to ensure that the issue is mitigated quickly before it becomes a service impacting incident, we also automate the back-end workflows to automatically generate troubleshooting and/or RMA tickets.”

If necessary, he said, they can also run auto-diagnostic and auto-remediation tests and perform failure prediction, for example, by monitoring specific metrics or by running specific playbooks.
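
The workflow he describes — mitigate first, then automate the back-end ticketing — might look roughly like the following. This is a sketch with invented function names, not Uber’s tooling:

```python
# Hypothetical sketch of auto-mitigation plus automated back-end workflows:
# drain traffic first, run diagnostics, then open the appropriate ticket.
def drain_traffic(device: str) -> None:
    print(f"draining traffic away from {device}")   # mitigation step

def run_diagnostics(device: str) -> dict:
    # Stand-in for auto-diagnostic tests / playbook runs.
    return {"replace": True, "log": "PSU voltage out of range"}

def open_ticket(kind: str, device: str, details: dict) -> None:
    print(f"opened {kind} ticket for {device}: {details['log']}")

def on_hardware_failure(device: str) -> None:
    drain_traffic(device)                 # mitigate before service impact
    report = run_diagnostics(device)
    kind = "RMA" if report["replace"] else "troubleshooting"
    open_ticket(kind, device, report)     # automated back-end workflow

on_hardware_failure("spine-3")
```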

Resiliency

Uber views its network as a key enabler of its business, Dustzadeh said. “Such network resiliency with the focus on deterministic failure behavior is one of our top design principles. Operational efficiency is also a key objective, meaning that the network has to be simple to build and also be flexible and cost effective.”

On the backbone side and in the WAN space, Uber is moving away from static and long-term contract models toward a more flexible approach, preferably SDN-controlled, on-demand spectrum-as-a-service, he said. “We are also exploring ideas and future models where regional and long-haul bandwidth could be more on demand and usage based like cloud services where carriers would serve as spectrum brokers.”

On the data center side, in addition to the software-defined capabilities Dustzadeh outlined, the company is also looking into server OEMs and a modular rack design to support multiple server types (for example, across compute, storage, and AI/machine learning with GPUs and FPGAs), he said. They are also looking at network disaggregation in the data center.

“There is a great opportunity, especially in the data center space, to look into the disaggregated model to separate network hardware and network software,” he said. This could enable a much faster pace of innovation and faster development of new features, he noted.

Watch the complete presentation below:

software defined networking

Wendy Cartee, Nick McKeown, Guru Parulkar, and Chris Wright discuss the first 10 years of software defined networking at Open Networking Summit North America.

In 2008, if you wanted to build a network, you had to build it from the same switch and router equipment that everyone else had, according to Nick McKeown, co-founder of Barefoot Networks, speaking as part of a panel of networking experts at Open Networking Summit North America.

Equipment was closed, proprietary, and vertically integrated with features already baked in, McKeown noted. And, “network management was a dirty word. If you wanted to manage a network of switches, you had to write your own scripts over a lousy, cruddy CLI, and everybody had their own way of doing it in order to try to make their network different from everybody else’s.”

All this changed when Stanford University Ph.D. student Martin Casado had the bold idea to rebuild the Stanford network out of custom-built switches and access points, he said.

Separate Planes

“Martin just simply showed that if you lift the control up and out of the switches, up into servers, you could replace the 2,000 CPUs with one CPU centrally managed, and it would perform exactly how you wanted and could be administered by about 10 people instead of 200. And you could implement the policies of a large institution directly in one place, centrally administered,” said McKeown.

That led to the birth of the Clean Slate Program and, shortly afterward, Kate Greene of MIT Technology Review coined the term Software Defined Networking (SDN), he said.

“What seemed like a very simple idea, to just separate the control plane from the forwarding plane, define a protocol that is OpenFlow, and enable the research community to build new capabilities and functionality on top of that control plane … caught the attention of the research community and made it very, very easy for them to innovate,” said Guru Parulkar, executive director of the Open Networking Foundation.

On the heels of that came the idea of slicing a production network using OpenFlow and a simple piece of software, he said. In one slice you could run a production network, and in another slice you could run an experimental network and show the new capabilities.
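
The essence of both ideas can be shown schematically. In the toy Python below, dumb switches hold match-action tables, a separate controller installs the rules, and a slice tag keeps production and experimental rules side by side. This is a schematic of the concept, not the OpenFlow wire protocol:

```python
# Toy illustration of control/forwarding separation and slicing. A central
# controller installs match-action rules into forwarding-only switches; the
# "slice" field keeps production and experimental rules apart. Schematic
# only -- not the actual OpenFlow protocol.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    slice: str       # "production" or "experiment"
    match_dst: str   # destination prefix to match
    action: str      # forwarding action

@dataclass
class Switch:
    """Forwarding plane: applies whatever rules it is given."""
    table: list = field(default_factory=list)

    def install(self, rule: Rule) -> None:
        self.table.append(rule)

class Controller:
    """Control plane, lifted out of the switches onto servers."""
    def __init__(self, switches):
        self.switches = switches

    def push(self, rule: Rule) -> None:
        for sw in self.switches:
            sw.install(rule)

sw = Switch()
ctl = Controller([sw])
ctl.push(Rule("production", "10.0.0.0/8", "forward:port1"))
ctl.push(Rule("experiment", "10.9.0.0/16", "forward:port7"))
print(sw.table)
```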

The notion of segregating the control plane from the data plane, together with the intersection of open source and SDN, brought about a whole new way of doing networking as the field became open, noted moderator Wendy Cartee, senior director of marketing, Cloud Native Applications, at VMware.

“Building all of this new virtualization technology and bringing it into enterprises and to the world at large, created a need for a type of network programmability” that was happening at the same time as the research, noted Chris Wright, vice president and CTO at Red Hat. That brought about open source tools like Open vSwitch, “so we could build a type of network topology that we needed in virtualization.”

Confluence of Events

In the beginning, there was much hype about SDN, disaggregation, and OpenFlow, Wright said. But, he continued, it’s not about a particular tool or a protocol, “it’s about a concept, and the concept is about programmability of the network, and open source is a great way to help develop skills and advance the industry with a lot of collaborative effort.”

There was a confluence of events: taking some core tenets from research, creating open source projects for people to collaborate around and solve real engineering problems for themselves, Wright said. “To me it’s a little bit of the virtualization, a little bit of academic research coming together at just the right time and then accelerated with open source code that we can collaborate on.”

Today, many service providers are deploying CORD (Central Office Re-architected as a Datacenter) because operators want to rebuild the network edge because 5G is coming, Parulkar observed.

“Many operators want to [offer] gigabit-plus broadband access to their residential customers,” he said. “The central offices are very old and so building the new network edge is almost mandatory.” Ideally, they want to do it with new software defined networking, open source, disaggregation, and white boxes, he added.

The Next 10 Years

Looking ahead, the networking community “risks a bit of fragmentation as we will go off in different directions,” said McKeown. So, he said, it’s important to find a balance, and the common interest is in creating production-quality software from ODL, ONOS, CORD, and P4.

The overall picture is that “we’re trying to build next-generation networks,’’ said Wright. “What’s challenging for us as a broad industry is finding the best-of-breed ways to do that … so that we don’t create fragmentation. Part of that fragmentation is a lack of interoperability, but part of that fragmentation is just focus.”

There is still a way to go to realize the full potential of SDN, said Parulkar. But in 10 years’ time, predicted Wright, “SDN20 will be really an open source movement. I think SDN is about unlocking the potential of the network in the context of applications and users, not just the operators trying to connect … two different, separate end points.”

Wright suggested that audience members change their mindset and grow their skills, “because many of the operational practices that we see today in networks don’t translate into a software world where things move rapidly. We [need to] look at being able to make small, consistent, incremental changes rather than big-bang rollout changes. Getting involved and really being open to new techniques, new tools and new technologies … is how, together, we can create the next generation. The new Internet.”

Vint Cerf

Vint Cerf, a “Father of the Internet,” spoke at the recent Open Networking Summit. Watch the complete presentation below.

The secret behind the Internet Protocol is that it has no idea what it’s carrying – it’s just a bag of bits going from point A to point B. So said Vint Cerf, vice president and chief internet evangelist at Google, speaking at the recent Open Networking Summit.

Cerf, who is generally acknowledged as a “Father of the Internet,” said that one of the objectives of the Internet project, which was turned on in 1983, was to explore the implications of open networking, including “open source, open standards and the process by which the standards were developed, open protocol architectures, which allowed for new protocols to be invented and inserted into this layered architecture.” This was important, he said, because people who wanted to do new things with the network were not constrained to its original design but could add functionality.

Open Access

When he and Bob Kahn (co-creator of the TCP/IP protocols) were doing the original design, Cerf said, they hoped that this approach would lead to a kind of organic growth of the Internet, which is exactly what has been seen.

They also envisioned another kind of openness, that of open access to the resources of the network, where people were free both to access information or services and to inject their own information into the system. Cerf said they hoped that, by lowering the barriers to access this technology, they would open the floodgates for the sharing of content, and, again, that is exactly what happened.

There is, however, a side effect of reducing these barriers, which Cerf said we are living through today: the proliferation of fake news, malware, and other malicious content. It has also created a set of interesting socioeconomic problems, one of which is dealing with content in a way that allows you to decide which content to accept and which to reject, Cerf said. “This practice is called critical thinking, and we don’t do enough of it. It’s hard work, and it’s the price we pay for the open environment that we have collectively created.”

Internet Architecture

Cerf then shifted gears to talk about the properties of Internet design. “One of the most interesting things about the Internet architecture is the layering structure and the tremendous amount of attention being paid to interfaces between the layers,” he noted. There are two kinds: vertical interfaces and the end-to-end interactions that take place. Adoption of standardized protocols essentially creates a kind of interoperability among various components in the system, he said.

“One interesting factor in the early Internet design is that each of the networks that made up the Internet, the mobile packet radio net, the packet satellite net, and the ARPANET, were very different inside,” with different addressing structures, data rates and latencies. Cerf said when he and Bob Kahn were trying to figure out how to make this look uniform, they concluded that “we should not try to change the networks themselves to know anything about the Internet.”

Instead, Cerf said, they decided the hosts would create Internet packets to say where things were supposed to go. They had the hosts take the Internet packets (which Cerf likened to postcards) and put them inside an envelope, which the network would understand how to route. The postcard inside the envelope would be routed through the networks and would eventually reach a gateway or destination host; there, the envelope would be opened and the postcard would be sent up a layer of protocol to the recipient or put into a new envelope and sent on.

“This encapsulation and decapsulation isolated the networks from each other, but the standard, the IP layer in particular, created compatibility, and it made these networks effectively interoperable, even though you couldn’t directly connect them together,” Cerf explained. Every time an interface or a boundary was created, the byproduct was “an opportunity for standardization, for the possibility of creating compatibility and interoperability among the components.”
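
Cerf’s postcard analogy translates almost directly into code. The sketch below uses invented frame formats purely for illustration: the IP “postcard” is wrapped in one network’s “envelope,” opened at a gateway, and re-wrapped for the next network, arriving unchanged:

```python
# Toy rendering of the postcard-in-an-envelope analogy. The "envelope" is a
# network-specific frame; the "postcard" (IP packet) passes through untouched.
# Frame formats are invented for illustration, not real headers.
def encapsulate(postcard: bytes, network: str) -> bytes:
    return f"[{network}-envelope]".encode() + postcard

def decapsulate(frame: bytes, network: str) -> bytes:
    return frame.removeprefix(f"[{network}-envelope]".encode())

ip_packet = b"src=A dst=B payload=hello"        # just a bag of bits
frame = encapsulate(ip_packet, "packet-radio")  # enters the first network
postcard = decapsulate(frame, "packet-radio")   # opened at the gateway
frame = encapsulate(postcard, "arpanet")        # re-wrapped for the next net
assert decapsulate(frame, "arpanet") == ip_packet  # arrives intact
```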

Now, routers can be disaggregated, such as in the example of creating a data plane and a control plane that are distinct and separate and then creating interfaces to those functions. Once we standardize those things, Cerf said, devices that exhibit the same interfaces can be used in a mix. He said we should “be looking now to other ways in which disaggregation and interface creation creates opportunities for us to build equipment” that can be deployed in a variety of ways.

Cerf said he likes the types of switches being built today – bare hardware with switching capabilities inside – that don’t do anything until they are told what to do. “I have to admit to you that when I heard the term ‘software-defined network,’ my first reaction was ‘It’s a buzzword, it’s marketing,’ it’s always been about software.”

But, he continued, “I think that was an unfair and too shallow assessment.” His main interest in basic switching engines is that “they don’t do anything until we tell them what to do with the packets.”

Adopting Standards

Being able to describe the functionality of the switching system and how it should treat packets, if standardized, creates an opportunity to mix different switching systems in a common network, he said. As a result, “I think as you explore the possibilities of open networking and switching platforms, basic hardware switching platforms, you are creating some new opportunities for standardization.”

Some people feel that standards are stifling and rigid, Cerf noted. He said he could imagine situations where an over-dependence on standards creates an inability to move on, but standards also create commonality. “In some sense, by adopting standards, you avoid the need for hundreds, if not thousands of bilateral agreements of how you will make things work.”

In the early days, as the Internet Engineering Task Force (IETF) was formed, Cerf said one of the philosophies they tried to adopt “was not to do the same thing” two or three different ways.

Deep Knowledge

Openness of design allows for deep knowledge of how things work, Cerf said, which creates a lot of educated engineers and will be very helpful going forward. The ability to describe the functionality of a switching device, for example, “removes ambiguity from the functionality of the system. If you can literally compile the same program to run on multiple platforms, then you will have unambiguously described the functionality of each of those devices.”

This creates a uniformity that is very helpful when you’re trying to build a large and growing and complex system, Cerf said.

“There’s lots of competition in this field right now, and I think that’s healthy, but I hope that those of you who are feeling these competitive juices also keep in mind that by finding standards that create this commonality, that you will actually enrich the environment in which you’re selling into. You’ll be able to make products and services that will scale better than they might otherwise.”

Hear more insights from Vint Cerf in the complete presentation below:

Open networking summit

Submit your proposal to speak at Open Networking Summit Europe, happening September 25-27 in Amsterdam.

Open Networking Summit Europe (ONS EU) is the first combined Technical and Business Networking conference for Carriers, Cloud and Enterprises in Europe. The call for proposals for ONS EU 2018 is now open, and we invite you to share your expertise.

Based on feedback we received at Open Networking Summit North America 2018, our restructured agenda will include project-based technical sessions as well.

Share your knowledge with over 700 architects, developers, and thought leaders paving the future of network integration, acceleration and deployment. Proposals are due Sunday, June 24, 2018.

Suggested Topics:

Networking Futures: Share innovative ideas and submissions that will disrupt and change the landscape of networking, as well as networking enabled markets, in the next 3-5 years. Submissions can be for Enterprise IT, Service Providers or Cloud Markets.

Network General Sessions: Common business, architecture, process or people issues that are important to move the Networking agenda forward in the next 1-2 years.

(Technical) Service Provider & Cloud Networking: We want to hear what you have to say about the containerization of service provider workloads, multi-cloud, 5G, fog, and edge access cloud networking.

(Business & Architecture) Service Provider & Cloud Networking: We’re seeking proposals on software-defined packet-optical, mobile edge computing, 4G video/CDN, 5G networking, and incorporating legacy systems (legacy enterprise workload migration, role of networking in cloud migration, and interworking of carrier OSS/BSS/FCAPS systems).

Submit a Talk >>

Get Inspired!

Watch presentations from Open Networking Summit North America 2018

SD-WAN

Shunmin Zhu, Head of Alibaba Cloud Network Services, offers insights on the future of Software Defined Networking (SDN) and the emerging SD-WAN technology.

The 2018 Open Networking Summit is rapidly approaching. In anticipation of this event, we spoke to Shunmin Zhu, Head of Alibaba Cloud Network Services, to get more insights on two of the hot topics that will be discussed at the event: the future of Software Defined Networking (SDN) and the emerging SD-WAN technology.

“SDN is a network design approach beyond just a technology protocol. The core idea is decoupling the forwarding plane from the control plane and management plane. In this way, network switches and routers only focus on packet forwarding,” said Zhu.

“The forwarding policies and rules are centrally managed by a controller. From a cloud service provider’s perspective, SDN enables customers to manage their private networks in a more intelligent manner through APIs.”

Shunmin Zhu, Head of Alibaba Cloud Network Services

This newfound approach to networks, which were previously thought to be nearly unfathomable black boxes, brings welcome transparency and flexibility. And that naturally leads to more innovation, such as SD-WAN and Hybrid-WAN.

Zhu shared more information on both of those cutting-edge developments later in this interview. Here is what he had to say about how all these things come together to shape the future of networking.

Linux.com: Please tell us a little more about SDN for the benefit of readers who may not be familiar with it.

Shunmin Zhu: Today, cloud services make it very convenient for a user to buy a virtual machine, set up the VM, change the configurations at any time, and choose the most suitable billing method. SDN offers the flexibility of using network products the same way as using a VM. Such a degree of flexibility was not seen in networks before the advent of SDN.

Previously, it was unlikely that a user could divide a cloud network into several private subnets. In the SDN era, however, with a VPC (Virtual Private Cloud), users are able to customize their cloud networks by choosing private subnets and dividing them further. In short, SDN puts the power of cloud network self-management into the hands of users.
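
The subnet carving Zhu describes is easy to make concrete with Python’s standard ipaddress module. This is a generic illustration, not Alibaba’s VPC API:

```python
# Carving a VPC address block into private subnets, using only the Python
# standard library. Generic illustration, not a specific provider's API.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")    # the VPC's address space
subnets = list(vpc.subnets(new_prefix=24))   # divide it into /24 subnets
print(subnets[0], subnets[1], len(subnets))  # 10.0.0.0/24 10.0.1.0/24 256
```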

Linux.com: What were the drivers behind the development of SDN? What are the drivers spurring its adoption now?

Zhu: Traditional networks, prior to SDN, found it hard to support the rapid development of business applications. The past few decades witnessed fast growth in the computing industry, but far less innovation was seen in the networking sector. With emerging trends such as cloud computing and virtualization, organizations need their networks to become as flexible as cloud computing and storage resources in order to respond to IT and business requirements. Meanwhile, the hardware, operating system, and network applications of the traditional network are tightly coupled and not accessible to outsiders. The three components are usually controlled by the same OEM, so any innovation or update is heavily dependent on the device OEMs.

The shortcomings of the traditional network are apparent from a user’s perspective. First and foremost is the speed of delivery. Network capacity extension usually takes several months, and even a simple network configuration could take several days, which is hard for customers to accept today.

From the perspective of an Internet Service Provider (ISP), the traditional network could hardly satisfy the needs of its customers. Additionally, heterogeneous network devices from multiple vendors complicate network management. There is little that ISPs could do to improve the situation, as the network functions are controlled by the device OEMs. Users’ and carriers’ urgent need has made SDN popular. To a large extent, SDN overcomes the heterogeneity of the physical network devices and opens up network functions via APIs. Business applications can call APIs to turn on network services on demand, which is revolutionary in the network industry.

Linux.com: What are the business benefits overall?

Zhu: The benefits of SDN are twofold. On the one hand, it helps to reduce cost, increase productivity, and reuse the network resources. SDN makes the use of networking products and services very easy and flexible. It gives users the option to pay by usage or by duration. The cost reduction and productivity boost empowers the users to invest more time and money into core business and application innovations. SDN also increases the reuse of the overall network resources in an organization.

On the other hand, SDN brings new innovations and business opportunities to the networking industry. SDN technology is fundamentally reshaping networking toward a more open and prosperous ecosystem. Traditionally, only a few network device manufacturers and ISPs were the major players in the networking industry. With the arrival of SDN, more participants are encouraged to create new networking applications and services, generating tons of new business opportunities.

Linux.com: Why is SDN gaining in popularity now?

Zhu: SDN is gaining momentum because it brings revolutionary changes and tremendous business value to the networking industry. The rise of cloud computing is another factor that accelerates the adoption of SDN. The cloud computing network offers the perfect usage scenario for SDN to quickly land as a real-world application. The vast scale, large scope, and various needs of the cloud network pose a big challenge to the traditional network. SDN technology works very well with cloud computing in terms of elasticity. SDN virtualizes the underlay physical network to provide richer and more customized services to the vast number of cloud computing users.

Linux.com: What are future trends in SDN and the emerging SD-WAN technology?

Zhu: First of all, I think SDN will be adopted in more networking usage scenarios. Most future networks will be designed according to SDN principles. In addition to cloud computing data centers, WANs, carrier networks, campus networks, and even wireless networks will increasingly embrace SDN.

Secondly, network infrastructure based on SDN will further combine the power of hardware and software. By definition, SDN is software-defined networking, so the technology seems to lean toward the software side. On the flip side, SDN cannot do without the physical network devices upon which it builds the virtual network, and the difficulty of improving performance is another disadvantage of a pure software-based solution. In my vision, SDN technology will evolve toward a tighter combination with hardware.

The more powerful next-generation network will be built upon mutually reinforcing software and hardware. Some cloud service providers have already started to use SmartNICs as a core component of their SDN solutions for a performance boost.

The next trend is the rapid development of SDN-based network applications. SDN helps build an open industry environment. It’s a good time for technology companies to start businesses around innovative network applications such as network monitoring, network analytics, cyber security and NFV (Network Function Virtualization).

SD-WAN is the application of SDN technology in the wide area network (WAN) space. Generally speaking, a WAN is a communications network that connects multiple remote local area networks (LANs) separated by tens to thousands of miles. For example, a corporate WAN may connect the networks of its headquarters, branch offices, and cloud service providers. Traditional WAN solutions, such as MPLS, can be expensive and require a long period before service provisioning. Wireless networks, on the other hand, fall short in bandwidth capacity and stability. The invention of SD-WAN fixes these problems to a large extent.

For instance, a company can build its corporate WAN by connecting branch offices to the headquarters via a virtual dedicated line and the internet, also known as a Hybrid-WAN solution. The internet link brings convenience to network connections between the branches and headquarters, while the virtual dedicated line guarantees the quality of the network service. The Hybrid-WAN solution balances cost, efficiency, and quality in creating a corporate WAN. Other benefits of SD-WAN include SLAs, QoS, and application-aware routing rules – key applications are tagged and prioritized in network communication for better performance. With these benefits, SD-WAN is getting increasing attention and popularity.
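
Application-aware routing in a Hybrid-WAN boils down to a small per-flow policy decision. A minimal sketch follows; the tags and policy are invented for illustration:

```python
# Minimal sketch of application-aware path selection in a Hybrid-WAN:
# tagged priority applications use the dedicated line; everything else,
# or any traffic when the line is down, takes the internet path.
PRIORITY_APPS = {"voip", "erp"}   # hypothetical tags for key applications

def pick_path(app: str, dedicated_line_up: bool) -> str:
    if app in PRIORITY_APPS and dedicated_line_up:
        return "dedicated-line"   # quality-guaranteed path
    return "internet"             # cheap, best-effort path

print(pick_path("voip", dedicated_line_up=True))    # dedicated-line
print(pick_path("backup", dedicated_line_up=True))  # internet
print(pick_path("voip", dedicated_line_up=False))   # falls back to internet
```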

Linux.com: What kind of user experience do you think is expected regarding SDN products and services?

Zhu: There are three things that are most important to the SDN user experience. First is simplicity. Networking technologies and products sometimes strike users as overcomplicated and hard to manage. SDN network products should be radically simplified; even a user with limited knowledge of networking should be able to use and configure them.

Second is intelligence. SDN network products should be smart enough to identify incidents and fix the issues themselves. This will minimize the impact on the customer’s business and reduce management costs.

Third is transparency. The network is the underlying infrastructure for all applications, and the lack of transparency sometimes makes users feel that their network is a black box. A successful SDN product should give more transparency to network administrators and other network users.

This article was sponsored by Alibaba and written by Linux.com.

Sign up to get the latest updates on ONS NA 2018!

open source networking

Arpit Joshipura, Networking General Manager at The Linux Foundation, discussed open source networking trends at Open Source Summit Europe.

Ever since the birth of local area networks, open source tools and components have driven faster and more capable network technologies forward. At the recent Open Source Summit event in Europe, Arpit Joshipura, Networking General Manager at The Linux Foundation, discussed his vision of open source networks and how they are being driven by full automation.

“Networking is cool again,” he said, opening his keynote address with observations on software-defined networks, virtualization, and more. Joshipura is no stranger to network trends. He has led major technology deployments across enterprises, carriers, and cloud architectures, and has been a steady proponent of open source.

“This is an extremely important time for our industry,” he said. “There are more than 23 million open source developers, and we are in an environment where everyone is asking for faster and more reliable services.”

Transforming telecom

As an example of transformative change that is now underway, Joshipura pointed to the telecom industry. “For the past 137 years, we saw proprietary solutions,” he said. “But in the past several years, disaggregation has arrived, where hardware is separated from software. If you are a hardware engineer you build things like software developers do, with APIs and reusable modules.  In the telecom industry, all of this is helping to scale networking deployments in brand new, automated ways.”

Joshipura especially emphasized that automating cloud, network and IoT services will be imperative going forward. He noted that enterprise data centers are working with software-defined networking models, but stressed that too much fragmented and disjointed manual tooling is required to optimize modern networks.

Automating services

“In a 5G world, it is mandatory that we automate services,” he said. “You can’t have an IoT device sitting on the phone and waiting for a service.” In order to automate network services, Joshipura foresees data rates increasing by 100x over the next several years, bandwidth increasing by 10x, and latencies decreasing to one-fifth of what we tolerate now.

The Linux Foundation hosts several open source projects that are key to driving networking automation. For example, Joshipura noted EdgeX Foundry and its work on IoT automation, and Cloud Foundry’s work with cloud-native applications and platforms. He also pointed to broad classes of open source networking tools driving automation, including:

  • Application layer/app server technologies
  • Network data analytics
  • Orchestration and management
  • Cloud and virtual management
  • Network control
  • Operating systems
  • IO abstraction & data path tools
  • Disaggregated hardware

Tools and platforms

Joshipura also discussed emerging, open network automation tools. In particular, he described ONAP (Open Network Automation Platform), a Linux Foundation project that provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management. Joshipura noted that ONAP is ushering in faster services on demand, including 4G, 5G and business/enterprise solutions.

“ONAP is one of the fastest growing networking projects at The Linux Foundation,” he said, pointing to companies working with ONAP ranging from AT&T to VMware.

Additionally, Joshipura highlighted OPNFV, a project that facilitates the development and evolution of NFV components across open source ecosystems. Through system level integration, deployment and testing, OPNFV creates a reference NFV platform to accelerate the transformation of enterprise and service provider networks. He noted that OPNFV now offers container support and that organizations are leveraging it in conjunction with Kubernetes and OpenStack.

To learn more about the open source tools and trends that are driving network automation, watch Joshipura’s entire keynote address below:

Additionally, registration is open for the Open Networking Summit North America. Taking place March 26-29 in Los Angeles, it’s the industry’s premier open networking event, bringing together enterprises, carriers, and cloud service providers across the ecosystem to share learnings, highlight innovation, and discuss the future of open source networking.

Learn more and register now!

Open Networking Summit

Speak at the largest open networking and orchestration event of 2018.

The Linux Foundation has just opened the Open Networking Summit North America (ONS NA) 2018 Call for Proposals, and we invite you to share your expertise with over 2,000 technical and business leaders in the networking ecosystem. Proposals are due by 11:59pm PT on Jan. 14, 2018.

Over 2,000 attendees are expected to attend ONS North America 2018, taking place March 26-29 in Los Angeles, including technical and business leaders across enterprise, service providers, and cloud providers. ONS North America is the only event of its kind, bringing networking and orchestration innovations together with a focus on the convergence of business (CIO/CTO/Architects) and technical (DevOps) communities.

Sign up to get the latest updates on ONS NA 2018!

Open Networking Summit NA conference tracks will include the following topical areas:

Track 1: (General Interest) Networking Futures in IoT, AI, and Network Learning. Including discussions on the progress in standards and open source interworking to drive the industry forward. We’re also seeking topics on networking as it relates to Kubernetes, cloud native, network automation, containers, microservices, and the network’s role in connected cars and connected things.

Track 2: (General Interest) Networking Business and Architecture. We’re looking for proposals on how to effectively evaluate the total cost of ownership of hybrid (public/private, SDN/NFV + traditional, proprietary/open source) environments, including acquisition strategies and good cost models for open source solutions. We’re also interested in case studies of open source business models for solution providers.

Track 3: (Technical) Service Provider & Cloud Networking. We want to hear what you have to say about the containerization of service provider workloads, multi-cloud, 5G, fog, and edge access cloud networking.

Track 4: (Business & Architecture) Service Provider & Cloud Networking. We’re seeking proposals on software-defined packet-optical, mobile edge computing, 4G video/CDN, 5G networking, and incorporating legacy systems (legacy enterprise workload migration, role of networking in cloud migration, and interworking of carrier OSS/BSS/FCAPS systems).

Track 5: (Technical) Enterprise IT & DevOps. Share your experience on scale and performance in SDN deployments, expanding container networking, maintaining stability in migration, networking needs of a hybrid cloud/virtualized environment, and figuring out the roadmap from a cost perspective.

Track 6: (Business and Architecture) Enterprise IT (CXO/IT Architects). Do you have use cases to share on IoT and networking from the retail, transportation, utility, healthcare or government sectors? We’re looking for proposals on cost modeling for hybrid environments, automation (network and beyond), analytics, security and risk management/modeling with ML, and NFV for the enterprise.

View here for more details on suggested topics, and submit your proposal before the January 14 deadline.

Get inspired! Watch presentations from ONS 2017.

See all keynotes from ONS 2017.

Not submitting but planning to attend? Register by Feb. 11 and save $800!

Arpit Joshipura, GM of Networking and Orchestration at the Linux Foundation, shares his 2018 predictions for the networking industry.

1. 2015’s buzzwords are 2018’s course curriculum.

SDN, NFV, VNF, containers, microservices — the hype crested in 2016 and receded in 2017. But don’t mistake quiet for inactivity; solution providers and users alike have been hard at work re-architecting and maturing solutions for key networking challenges. And now that these projects are nearing production, these topics are our most requested areas for training.

2. Open Source networking is crossing the chasm – from POCs to Production.

The ability for users and developers to work side by side in open source has helped projects mature quickly — and vendors to rapidly deliver highly relevant solutions to their customers. For example:

3. Top networking vendors are embracing a shift in their business models…

  • Hardware-centric to software-centric: value-add from rapid customization
  • Proprietary development to open-source, shared development
  • Co-development with end users, reducing time to deployment from 2 years to 6 months

4. Industry-wide adoption of 1-2 Network Automation platforms will enable unprecedented mass customization.

The need to integrate multiple platforms, taking into account each of their unique feature sets and limitations, has traditionally been a massive barrier to rapid service delivery.

In 2018, mature abstractions and standardized processes will enable user organizations to rapidly onboard and orchestrate a diverse set of best-of-breed VNFs and PNFs as needed.

5. Advances in cloud and carrier networking are driving skills and purchasing shifts in the enterprise.

The ease and ubiquity of public cloud for simple workloads has reset end user expectations for Enterprise IT. The carrier space has driven maturity of open networking solutions and processes. Enterprise IT departments are now at a crossroads:

  • How many and which of their workloads and processes do they want to outsource?
  • How can they effectively support those workloads remaining in-house with the same ease and speed users expect?
  • What skills will IT staff need, and how will they get them?

Which brings us to….

6. Prediction #1 will also lead off our Predictions list for 2019.

This article originally appeared on the ONAP website.

ONAP

“Bell has been engaged in the ONAP journey from day one and committed to get it to production to demonstrate its value,” said Tamer Shenouda, Director of Network Transformation for Bell.

Bell, Canada’s largest communications company, is the first in the world to deploy the open source version of the Open Network Automation Platform (ONAP) in production. Bell has built the capability to automate its data center tenant network provisioning on top of the ONAP Platform, providing its operations teams with a new tool to improve efficiency and time to market. This is the first step in using ONAP as a common platform across Bell’s networks on its journey towards a multi-partner DevOps model.

As part of the company’s Network 3.0 transformation initiative, Bell and its partners used Agile delivery to launch a minimum viable product with the platform and will continue to adapt it to ensure that it best supports the needs of Bell customers. This significant development sends a clear message to the industry that ONAP is ready and usable, and that carriers don’t need to implement all ONAP components from day one to start production. Bell has also leveraged the capabilities of ONAP Operations Manager to simplify deployments, drastically reduce footprint and enable continuous delivery.

“Bell has been engaged in the ONAP journey from day one and committed to get it to production to demonstrate its value,” said Tamer Shenouda, Director of Network Transformation for Bell. “This demonstration will encourage other partners to take a similar incremental approach in delivery and operations of the platform, and we look forward to other telecoms launching ONAP to production.”

ONAP is a Linux Foundation project that unites two major open networking and orchestration projects – Open Source ECOMP and the Open Orchestrator Project (OPEN-O). ONAP brings together top global carriers and vendors, using shared knowledge to build a unified architecture that allows any network operator to automate, design, orchestrate and manage services and virtual functions.

“We’re very proud to be the first member of the ONAP Project to demonstrate the viability of the platform live on our network,” said Petri Lyytikainen, Bell’s Vice President, Network Strategy, Services and Management. “The evolution of our advanced software-defined networks will enable us to respond even faster to the unique needs of our customers.” 

Bell is a founding Platinum Member of ONAP. Platinum members include: Amdocs, AT&T, China Mobile, China Telecom, Cisco, Cloudify, Ericsson, Huawei, IBM, Intel, Jio, Nokia, Orange, Tech Mahindra, Türk Telekom, VMware, Vodafone, and ZTE.