
Learn about the principles required to achieve a successful industry pivot to open source.

Linux and open source have changed the computer industry (among many others) forever. Today, there are tens of millions of open source projects. A valid question is “Why?” How can it possibly make sense to hire developers who work on code that is given away for free to anyone who cares to take it? I know of many answers to this question, but for the communities that I work in, I’ve come to recognize the following as the common thread.

An Industry Pivot

Software has become the most important component in many industries, and it is needed in very large quantities. When an entire industry needs to make a technology “pivot,” it often does as much of that as possible in software. For example, the telecommunications industry must make such a pivot in order to support 5G, the next generation of mobile networking. Not only will 5G increase bandwidth and throughput, but it will enable an entirely new set of services, including autonomous cars and billions of Internet-connected sensors and other devices (aka IoT). To do that, telecom operators need to entirely redo their networks, distributing millions of compute and storage instances very, very close to those devices and users.

Given the drastically changing usage of the network, operators need to be able to deploy, move, and/or tear down services near-instantaneously, running them on those far-flung compute resources and routing the network traffic to and through those service applications in a fully automated fashion. That’s a tremendous amount of software. In the “old” model of complete competition, each vendor would build its solution to this customer need from the ground up and sell it to its telecom operator customers. It would take forever, cost a huge amount of money, and the customers would be nearly assured that one vendor’s system wouldn’t interoperate with another vendor’s solution. The market demands solutions that don’t take that long or cost that much, and if the pieces don’t work together, their value to the customer is much less.

So, instead, all the members of the telecom industry, both vendors and customers, are collaborating to build a large portion of the foundational platform software together, just once. Then, each vendor and operator will take that foundation of code, add whatever functionality they feel differentiates them for their customers, test it, harden it, and turn it into a full solution. This way, everyone gets to a solution much more quickly and with much less expense than would otherwise be possible. The mutual benefit is obvious. But how can they work together? How can they ensure that each participant in this community gets what they need to be successful? These companies have never worked together before. Worse yet, they are fierce, lifelong competitors whose only prior goal was putting one another out of business.

A Level Playing Field

This is what my team does at The Linux Foundation. We create and maintain that level playing field. We are both referee and janitor. We teach what we have learned from the long-term success of the Linux project, among others. Stay tuned for more blog posts detailing those principles and my experiences living those principles both as a participant in open source projects and as the referee.

So, the task at hand is bringing dozens of very large, fierce competitors, both vendors and customers, together and seeding the development effort with several million lines of code that usually come from only one or two companies. That has never been done before. The set of projects under the Linux Foundation Networking umbrella is one large experiment in corporate collaborative development. Take ONAP as an example; its successful outcome is not assured in any way. Don’t get me wrong: the project has had an excellent start, with three releases under its belt, and in general things are going very well. However, there is much work to do and many ways for this community, and the organizations behind it, to become more efficient and reach our end goal faster. Again, such a huge industry pivot has never before been carried out as an open source collaboration. To get there, we are applying the principles of fairness, technical excellence, and transparency that are the cornerstone of truly collaborative open source development ecosystems. As such, I am optimistic that we will succeed.

This industry-wide technology pivot is not isolated to the telecom sector; we are seeing it in many others. My goal in writing these articles on open source collaborative development principles, best practices, and experiences is to better explain to those new to this model how it works, why these principles are in place, and what to expect when things are working well and when they are not. There are a variety of non-obvious behaviors that organizational leaders need to adopt and instill in their workforce to be successful in one of these open source efforts. I hope these articles will give you the tools and insight to help you facilitate this culture shift within your organization.

Top minds in API development and strategy, social justice in tech, and conscious coding bring a robust set of ideas to the keynote stage

SAN FRANCISCO, September 6, 2018 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and the OpenAPI Initiative, a Linux Foundation project created to advance API technology, today announced the full schedule for APIStrat 2018, taking place September 24-26 in Nashville, Tennessee.

The API Strategy & Practice Conference, known as APIStrat, is a conference focused on the future of the API economy. The ninth edition of the conference will bring together everyone – from the API curious to today’s leaders – to discuss opportunities and challenges in the API space. The event covers 13 different topic areas in the API economy, including microservices, APIs as products, API portals, API design, GraphQL and friends, API usability, and more.

Keynotes for the event include leading API voices from across the space as well as conversations that are important to the wider tech sector. Keynotes include:

  • Cristiano Betta, Senior Developer Advocate at Box, discussing A Live API
  • Virginia Eubanks, Associate Professor of Political Science at the University at Albany, SUNY, discussing Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor
  • James Higginbotham, Executive API Consultant at LaunchAny, discussing Lessons in Transforming the Enterprise to an API Platform
  • Kate O’Neill, author of Pixels and Place and lead at KO Insights, discussing Tech Humanism: Integration, Automation, and the Future of the Human Experience
  • Jenn Schiffer, Community Engineer at Glitch, discussing Putting Your Best “Hello World” Forward
  • Steven Willmott, Senior Director and head of API Infrastructure at Red Hat, discussing APIs meet Enterprise: Surfing the wave between Chaos and Innovation

Along with panels, sessions and keynotes, APIStrat hosts hands-on workshops, including:

  • Taming Your API from Sachin Agarwal, Principal Product Manager at LaunchDarkly
  • Usable APIs at Scale with Protocol Buffers and gRPC from Tim Burks, Staff Software Engineer at Google
  • A Tour of Mobile API Projection from Skip Hovsmith, VP of Growth at CriticalBlue
  • Practical SecDevOps for APIs from Isabelle Mauny, CTO at 42Crunch
  • Turning External Services to Internal APIs from Chris Phillips, SWAT Integration Architect at IBM
  • Secure API Development from Krishan Veer, Technical Leader and Security evangelist at Cisco DevNet

The full lineup of sessions can be viewed here. The event also offers a nursing room, complimentary childcare onsite (pre-registration is requested by September 7), a quiet room and non-binary restrooms.

Registration is discounted to $599 through September 14. Additional academic discounts are available as well; details are available on the event registration page. If you have an interest in becoming a diversity partner for this event, please email apistratevents@linuxfoundation.org.

Members of the media interested in attending can email Dan Brown at dbrown@linuxfoundation.org to request a complimentary press pass.

APIStrat is made possible by Platinum Sponsors Red Hat and WSO2; Silver Sponsor Oracle; Bronze Sponsors 42Crunch, API Fortress, Authlete, Postman, SmartBear and Stoplight; and Break Sponsor Capital One DevExchange.

Sponsorship opportunities are still available. More information here.


Transparency, openness and collaboration will never go out of fashion, says HackerOne’s Mårten Mickos.

Mårten Mickos has been around the open source world for a long time. He has seen the early days when open source was not taken very seriously, but now he is heading HackerOne, a company that’s building a massive community of white hat hackers to help companies create secure systems. Security and open source might seem like different worlds, but Mickos sees strong influences from one to the other.

Mårten Mickos, CEO of HackerOne

Today, open source has become the de facto software development model, but it has not always been that way. “In 2001, when I joined MySQL as its CEO, people didn’t believe in open source. It looked cute, like a toy. We looked like a small startup. They didn’t have the courage to follow us, but slowly and surely it started growing,” said Mickos.

Now the question is not who is using open source but who is not using it. 

Open source impact

Many people may see the benefits of open source from a technological perspective, but open source has had a deeper impact on people, culture, and our society.

“One of the greatest benefits of open source is that it has created a model where smart people who disagree with each other can collaborate with each other. It’s easy to collaborate if we agree, but open source enables collaboration even when people disagree,” Mickos said. “That is the true beauty of this model.”

A common myth about open source is that it survives on altruism and selfless work by some community members. That might have been true in the beginning, but it’s not true anymore. “It’s not dependent on any charity. It’s not dependent on altruism. It’s not dependent on friendship. It’s not dependent on being kind. I mean, hopefully we are kind and friends, but it’s not dependent on it,” said Mickos. “It’s so smartly built that even as we are yelling and screaming at each other, we can still get work done.”

Open source is powerful but that doesn’t mean it will survive without effort. Like any other component of our civilization, it takes work. “We have to educate everybody, like any civilization needs to keep educating the population on what’s important. You educate them about history, language, mathematics, and other things. We have to do that and the new generation will completely get it,” he said.

Open source and security

Open source is known for being more secure than proprietary technology, but there is no magic there either. Just openness and hard work. “It’s more secure than closed source because you are developing it in the open. Your code is subject to the scrutiny of everybody, and I think it has been scientifically shown to be correct,” he said.

Another factor that contributes to the security of open source is the fact that the community is not afraid of talking about its problems. “It also means we know about all the problems in open source. You might think there are a lot of problems, a lot of serious problems, but as a percentage of the total number of lines of code, I would argue that open source is much more secure than closed source because when there is a vulnerability or a weakness in open source software, everybody will know about it. On the contrary, if there is something like that in closed source, it is kept secret and not fixed,” he said.

Mickos thinks the security industry can learn something from open source. “It can learn how to better collaborate on vital initiatives,” he said.

Conclusion

Today, our world is powered by open source. New technologies are arriving and new business models are evolving; yet proprietary software will persist.

When asked if our future will be powered by open source, Mickos replied, “Transparency, openness and collaboration will never go out of fashion. It’s also true that every now and then, evolution will go backwards; it will be less open, less collaborative. But open source is an unstoppable force. It will come back and break those models and bring back collaboration, openness and sharing.”

Mickos concluded with these words, “I don’t think we can change it because we are humans and our evolution has made us such. Every now and then, there will be self-centered people driven by their own desire, driving us in a different direction so they can be in power, but then we come back. We are bigger in numbers, we never give up and it is the most productive way to build and sustain a society. That’s what we’re here on this planet to do.”


The latest Open Source Guide for the Enterprise from The TODO Group provides practical advice for building leadership in open source projects and communities.

Contributing code is just one aspect of creating a successful open source project. The open source culture is fundamentally collaborative, and active involvement in shaping a project’s direction is equally important. The path toward leadership is not always straightforward, however, so the latest Open Source Guide for the Enterprise from The TODO Group provides practical advice for building leadership in open source projects and communities.  

Being a good leader and earning trust within a community takes time and effort, and this free guide discusses various aspects of leadership within a project, including matters of governance, compliance, and culture. Building Leadership in an Open Source Community, featuring contributions from Gil Yehuda of Oath and Guy Martin of Autodesk, looks at how decisions are made, how to attract talent, when to join vs. when to create an open source project, and it offers specific approaches to becoming a good leader in open source communities.

Leadership Mindset

According to the guide, the open source leadership mindset involves:

  • Influence, not control
  • Transparency as a means of crowd-sourcing solutions, not as exposure
  • Leading, not herding

Building leadership can happen at all levels — from managers to developers to volunteers. Developers, for example, are often highly motivated to contribute to open source projects that matter to them and to build their reputations within the community. According to the guide, “open source is so hotly in demand that developers actively seek opportunities to develop or hone their open source chops.”

Guy Martin, Director of Open at Autodesk, says that when interviewing developers, he is frequently asked how the company will help the developer build his or her own open source brand.

Increase Visibility

“Raising your own company’s visibility in its open source work can thus also help recruit developers. Some companies even offer open source training to add to the appeal. Presenting the company’s open source projects at conferences and contributing code in communities are the best ways to raise your company’s visibility. Asking your developers to network with other developers and invite them aboard also tends to work well,” the guide states.

Read the complete guide to Building Leadership in an Open Source Community online now. And see the list of all Open Source Guides for the Enterprise to learn more. The information contained in these guides is based on years of experience and best practices from industry leaders. They are developed by The TODO Group in collaboration with The Linux Foundation and the larger open source community.


Vint Cerf, a “Father of the Internet,” spoke at the recent Open Networking Summit. Watch the complete presentation below.

The secret behind the Internet protocol is that it has no idea what it’s carrying – it’s just a bag of bits going from point A to point B. So said Vint Cerf, vice president and chief internet evangelist at Google, speaking at the recent Open Networking Summit.

Cerf, who is generally acknowledged as a “Father of the Internet,” said that one of the objectives of this project, which was turned on in 1983, was to explore the implications of open networking, including “open source, open standards and the process for which the standards were developed, open protocol architectures, which allowed for new protocols to be invented and inserted into this layered architecture.” This was important, he said, because people who wanted to do new things with the network were not constrained to its original design but could add functionality.

Open Access

When he and Bob Kahn (co-creator of the TCP/IP protocol) were doing the original design, Cerf said, they hoped that this approach would lead to a kind of organic growth of the Internet, which is exactly what has been seen.

They also envisioned another kind of openness, that of open access to the resources of the network, where people were free both to access information or services and to inject their own information into the system. Cerf said they hoped that, by lowering the barriers to access this technology, they would open the floodgates for the sharing of content, and, again, that is exactly what happened.

There is, however, a side effect of reducing these barriers, which, Cerf said, we are living through today: the proliferation of fake news, malware, and other malicious content. It has also created a set of interesting socioeconomic problems, one of which is dealing with content in a way that allows you to decide which content to accept and which to reject, Cerf said. “This practice is called critical thinking, and we don’t do enough of it. It’s hard work, and it’s the price we pay for the open environment that we have collectively created.”

Internet Architecture

Cerf then shifted gears to talk about the properties of Internet design. “One of the most interesting things about the Internet architecture is the layering structure and the tremendous amount of attention being paid to interfaces between the layers,” he noted. There are two kinds: vertical interfaces and the end-to-end interactions that take place. Adoption of standardized protocols essentially creates a kind of interoperability among various components in the system, he said.

“One interesting factor in the early Internet design is that each of the networks that made up the Internet, the mobile packet radio net, the packet satellite net, and the ARPANET, were very different inside,” with different addressing structures, data rates and latencies. Cerf said when he and Bob Kahn were trying to figure out how to make this look uniform, they concluded that “we should not try to change the networks themselves to know anything about the Internet.”

Instead, Cerf said, they decided the hosts would create Internet packets to say where things were supposed to go. They had the hosts take the Internet packets (which Cerf likened to postcards) and put them inside an envelope, which the network would understand how to route. The postcard inside the envelope would be routed through the networks and would eventually reach a gateway or destination host; there, the envelope would be opened and the postcard would be sent up a layer of protocol to the recipient or put into a new envelope and sent on.

“This encapsulation and decapsulation isolated the networks from each other, but the standard, the IP layer in particular, created compatibility, and it made these networks effectively interoperable, even though you couldn’t directly connect them together,” Cerf explained. Every time an interface or a boundary was created, the byproduct was “an opportunity for standardization, for the possibility of creating compatibility and interoperability among the components.”
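Cerf’s postcard-and-envelope analogy can be sketched in a few lines of Python. This is purely illustrative; the classes and field names below are invented for this article and are not real protocol definitions.

```python
# Illustrative sketch of encapsulation/decapsulation (invented names,
# not real packet formats).

from dataclasses import dataclass

@dataclass
class InternetPacket:
    """The 'postcard': says where things are supposed to go."""
    src_host: str
    dst_host: str
    payload: bytes

@dataclass
class NetworkFrame:
    """The 'envelope': the only thing an individual network knows how to route."""
    next_hop: str
    inner: InternetPacket

def encapsulate(packet, next_hop):
    """Put the postcard inside an envelope the local network can route."""
    return NetworkFrame(next_hop=next_hop, inner=packet)

def at_gateway(frame, next_hop=None):
    """Open the envelope; re-wrap for the next network or deliver locally."""
    packet = frame.inner  # decapsulation: the network never reads the postcard
    if next_hop is None:
        return packet     # destination reached: hand the packet up the stack
    return encapsulate(packet, next_hop)  # new envelope, same postcard

pkt = InternetPacket("host-a", "host-z", b"hello")
frame = encapsulate(pkt, "gateway-1")   # the host wraps the packet
frame = at_gateway(frame, "gateway-2")  # crosses into a second network
delivered = at_gateway(frame)           # unwrapped at the destination host
print(delivered.payload)                # b'hello', untouched by any network
```

Because each network only ever handles the envelope, the networks stay isolated from one another while the common inner layer makes them interoperable, which is exactly the point Cerf makes above.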

Now, routers can be disaggregated, such as in the example of creating a data plane and a control plane that are distinct and separate and then creating interfaces to those functions. Once we standardize those things, Cerf said, devices that exhibit the same interfaces can be used in a mix. He said we should “be looking now to other ways in which disaggregation and interface creation creates opportunities for us to build equipment” that can be deployed in a variety of ways.

Cerf said he likes the types of switches being built today – bare hardware with switching capabilities inside – that don’t do anything until they are told what to do. “I have to admit to you that when I heard the term ‘software-defined network,’ my first reaction was ‘It’s a buzzword, it’s marketing; it’s always been about software.’”

But, he continued, “I think that was an unfair and too shallow assessment.” His main interest in basic switching engines is that “they don’t do anything until we tell them what to do with the packets.”

Adopting Standards

Being able to describe the functionality of the switching system and how it should treat packets, if standardized, creates an opportunity to mix different switching systems in a common network, he said. As a result, “I think as you explore the possibilities of open networking and switching platforms, basic hardware switching platforms, you are creating some new opportunities for standardization.”

Some people feel that standards are stifling and rigid, Cerf noted. He said he could imagine situations where an over-dependence on standards creates an inability to move on, but standards also create commonality. “In some sense, by adopting standards, you avoid the need for hundreds, if not thousands of bilateral agreements of how you will make things work.”

In the early days, as the Internet Engineering Task Force (IETF) was formed, Cerf said one of the philosophies they tried to adopt “was not to do the same thing” two or three different ways.

Deep Knowledge

Openness of design allows for deep knowledge of how things work, Cerf said, which creates a lot of educated engineers and will be very helpful going forward. The ability to describe the functionality of a switching device, for example, “removes ambiguity from the functionality of the system. If you can literally compile the same program to run on multiple platforms, then you will have unambiguously described the functionality of each of those devices.”

This creates a uniformity that is very helpful when you’re trying to build a large, growing, and complex system, Cerf said.

“There’s lots of competition in this field right now, and I think that’s healthy, but I hope that those of you who are feeling these competitive juices also keep in mind that by finding standards that create this commonality, that you will actually enrich the environment in which you’re selling into. You’ll be able to make products and services that will scale better than they might otherwise.”

Hear more insights from Vint Cerf in the complete presentation below:


Shunmin Zhu, Head of Alibaba Cloud Network Services, offers insights on the future of Software Defined Networking (SDN) and the emerging SD-WAN technology.

The 2018 Open Networking Summit is rapidly approaching. In anticipation of this event, we spoke to Shunmin Zhu, Head of Alibaba Cloud Network Services, to get more insights on two of the hot topics that will be discussed at the event: the future of Software Defined Networking (SDN) and the emerging SD-WAN technology.

“SDN is a network design approach beyond just a technology protocol. The core idea is decoupling the forwarding plane from the control plane and management plane. In this way, network switches and routers only focus on packet forwarding,” said Zhu.

“The forwarding policies and rules are centrally managed by a controller. From a cloud service provider’s perspective, SDN enables customers to manage their private networks in a more intelligent manner through API.”


Shunmin Zhu, Head of Alibaba Cloud Network Services
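To illustrate the decoupling Zhu describes, here is a minimal, hypothetical sketch in Python: a central controller decides forwarding policy once and pushes it down to switches, which do nothing but match and forward. All class and parameter names are invented; real SDN deployments use protocols such as OpenFlow between these layers rather than anything like this toy API.

```python
# A toy model of the SDN control/forwarding split (all classes hypothetical;
# real controllers and switches speak protocols such as OpenFlow).

class Switch:
    """Forwarding plane: matches traffic against its flow table, nothing more."""

    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination prefix -> output port

    def forward(self, dst_prefix):
        port = self.flow_table.get(dst_prefix, "drop")
        print(f"{self.name}: {dst_prefix} -> {port}")


class Controller:
    """Control plane: policy is decided centrally and pushed to every switch."""

    def __init__(self):
        self.switches = []

    def attach(self, switch):
        self.switches.append(switch)

    def install_rule(self, dst_prefix, out_port):
        # One call reconfigures the whole network; no box-by-box CLI work.
        for sw in self.switches:
            sw.flow_table[dst_prefix] = out_port


ctrl = Controller()
sw1, sw2 = Switch("sw1"), Switch("sw2")
ctrl.attach(sw1)
ctrl.attach(sw2)

ctrl.install_rule("10.0.1.0/24", "port-3")  # the customer-facing API call
sw1.forward("10.0.1.0/24")                  # sw1: 10.0.1.0/24 -> port-3
sw2.forward("10.0.1.0/24")                  # sw2: 10.0.1.0/24 -> port-3
```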

This new approach to networks, previously regarded as nearly unfathomable black boxes, brings welcome transparency and flexibility. And that naturally leads to more innovation, such as SD-WAN and Hybrid-WAN.

Zhu shared more information on both of those cutting-edge developments later in this interview. Here is what he had to say about how all these things come together to shape the future of networking.

Linux.com:  Please tell us a little more about SDN for the benefit of readers who may not be familiar with it.

Shunmin Zhu: Today, cloud services make it very convenient for a user to buy a virtual machine, set up the VM, change the configurations at any time, and choose the most suitable billing method. SDN offers the flexibility of using network products the same way as using a VM. Such a degree of flexibility was not seen in networks before the advent of SDN.

Before, a user could hardly divide a cloud network into several private subnets. In the SDN era, however, with VPC (Virtual Private Cloud), users are able to customize their cloud networks by choosing the private subnets and dividing them further. In short, SDN puts the power of cloud network self-management into the hands of users.
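As a concrete illustration of that kind of self-service subdivision, Python’s standard ipaddress module can carve a VPC’s address block into private subnets. The CIDR blocks below are made up for the example; this only models the addressing math, not any particular cloud provider’s API.

```python
# Carving a hypothetical VPC address block into private subnets,
# using only Python's standard library.

import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")  # address space of the whole VPC

# Divide the VPC into four /18 subnets, then split one of those further,
# mirroring the self-service subdivision Zhu describes.
subnets = list(vpc.subnets(new_prefix=18))
print([str(s) for s in subnets])
# ['10.0.0.0/18', '10.0.64.0/18', '10.0.128.0/18', '10.0.192.0/18']

smaller = list(subnets[0].subnets(new_prefix=20))
print([str(s) for s in smaller])
# ['10.0.0.0/20', '10.0.16.0/20', '10.0.32.0/20', '10.0.48.0/20']
```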

Linux.com: What were the drivers behind the development of SDN? What are the drivers spurring its adoption now?

Zhu: Traditional networks prior to SDN found it hard to support the rapid development of business applications. The past few decades witnessed fast growth in the computing industry, but not much innovation was seen in the networking sector. With emerging trends such as cloud computing and virtualization, organizations need their networks to become as flexible as cloud computing and storage resources in order to respond to IT and business requirements. Meanwhile, the hardware, operating system, and network applications of a traditional network are tightly coupled and not accessible to an outsider. The three components are usually controlled by the same OEM. Any innovation or update is thus heavily dependent on the device OEMs.

The shortcomings of the traditional network are apparent from a user’s perspective. First and foremost is the speed of delivery. Network capacity extension usually takes several months, and even a simple network configuration could take several days, which is hard for customers to accept today.

From the perspective of an Internet Service Provider (ISP), the traditional network could hardly satisfy the needs of their customers. Additionally, heterogeneous network devices from multiple vendors complicate network management. There is little that ISPs can do to improve the situation, as the network functions are controlled by the device OEMs. Users’ and carriers’ urgent need for SDN has made this technology popular. To a large extent, SDN overcomes the heterogeneity of the physical network devices and opens up network functions via APIs. Business applications can call APIs to turn on network services on demand, which is revolutionary in the network industry.

Linux.com: What are the business benefits overall?

Zhu: The benefits of SDN are twofold. On the one hand, it helps to reduce cost, increase productivity, and reuse the network resources. SDN makes the use of networking products and services very easy and flexible. It gives users the option to pay by usage or by duration. The cost reduction and productivity boost empowers the users to invest more time and money into core business and application innovations. SDN also increases the reuse of the overall network resources in an organization.

On the other hand, SDN brings new innovations and business opportunities to the networking industry. SDN technology is fundamentally reshaping networking toward a more open and prosperous ecosystem. Traditionally, only a few network device manufacturers and ISPs were the major players in the networking industry. With the arrival of SDN, more participants are encouraged to create new networking applications and services, generating tons of new business opportunities.

Linux.com: Why is SDN gaining in popularity now?

Zhu: SDN is gaining momentum because it brings revolutionary changes and tremendous business value to the networking industry. The rise of cloud computing is another factor that accelerates the adoption of SDN. The cloud computing network offers the perfect usage scenario for SDN to quickly land as a real-world application. The vast scale, large scope, and various needs of the cloud network pose a big challenge to the traditional network. SDN technology works very well with cloud computing in terms of elasticity. SDN virtualizes the underlay physical network to provide richer and more customized services to the vast number of cloud computing users.

Linux.com: What are future trends in SDN and the emerging SD-WAN technology?

Zhu: First of all, I think SDN will be adopted in more networking usage scenarios. Most future networks will be designed according to SDN principles. In addition to cloud computing data centers, WAN, carrier networks, campus networks, and even wireless networks will increasingly embrace the adoption of SDN.

Secondly, network infrastructure based on SDN will further combine the power of hardware and software. By definition, SDN is software-defined networking, so the technology may seem to lean toward the software side. On the flip side, however, SDN cannot do without the physical network devices upon which it builds the virtual network. The difficulty of improving performance is another disadvantage of a pure software-based solution. In my vision, SDN technology will evolve toward a tighter combination with hardware.

The more powerful next-generation network will be built upon mutually reinforcing software and hardware. Some cloud service providers have already started to use SmartNICs as a core component in their SDN solutions for a performance boost.

The next trend is the rapid development of SDN-based network applications. SDN helps build an open industry environment. It’s a good time for technology companies to start businesses around innovative network applications such as network monitoring, network analytics, cyber security and NFV (Network Function Virtualization).

SD-WAN is the application of SDN technology in the wide area network (WAN) space. Generally speaking, a WAN is a communications network that connects multiple remote local area networks (LANs) separated by tens to thousands of miles. For example, a corporate WAN may connect the networks of its headquarters, branch offices, and cloud service providers. Traditional WAN solutions, such as MPLS, can be expensive and require a long period before service provisioning. Wireless networks, on the other hand, fall short in bandwidth capacity and stability. The invention of SD-WAN fixes these problems to a large extent.

For instance, a company can build its corporate WAN by connecting branch offices to headquarters via both a virtual dedicated line and the Internet, also known as a Hybrid-WAN solution. The Internet link brings convenience to network connections between the branches and headquarters, while the virtual dedicated line guarantees the quality of the network service. The Hybrid-WAN solution balances cost, efficiency, and quality in creating a corporate WAN. Other benefits of SD-WAN include SLA, QoS, and application-aware routing rules – key applications are tagged and prioritized in network communication for better performance. With these benefits, SD-WAN is getting increasing attention and popularity.
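As a rough illustration of application-aware routing in a Hybrid-WAN, the sketch below tags applications and maps them to WAN links by policy. The policy table and link metrics are invented for this article; real SD-WAN products make this decision per flow in the data plane, informed by live link measurements.

```python
# A hedged sketch of application-aware path selection in a Hybrid-WAN
# (link names, metrics, and the policy table are all hypothetical).

LINKS = {
    "dedicated-line": {"latency_ms": 20, "cost": "high"},  # guaranteed quality
    "internet":       {"latency_ms": 55, "cost": "low"},   # cheap, best-effort
}

# Key applications are tagged and prioritized, as described above.
POLICY = {
    "voip":   "dedicated-line",  # latency-sensitive -> quality path
    "erp":    "dedicated-line",
    "backup": "internet",        # bulk traffic -> cheap path
}

def select_path(app_tag):
    """Return the WAN link for a tagged application; default to the Internet."""
    return POLICY.get(app_tag, "internet")

for app in ("voip", "backup", "web"):
    link = select_path(app)
    print(f"{app:>6} -> {link} ({LINKS[link]['latency_ms']} ms)")
```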

Linux.com: What kind of user experience do you think is expected regarding SDN products and services?

Zhu: There are three things that are most important to the SDN user experience. First is simplicity. Networking technologies and products sometimes strike users as overcomplicated and hard to manage. SDN network products should be radically simplified: even a user with limited knowledge of networking should be able to use and configure the product.

Second is intelligence. SDN network products should be smart enough to identify incidents and fix issues by themselves. This will minimize the impact on the customer’s business and reduce management costs.

Third is transparency. The network is the underlying infrastructure for all applications, and the lack of transparency sometimes makes users feel that their network is a black box. A successful SDN product should give more transparency to network administrators and other network users.

This article was sponsored by Alibaba and written by Linux.com.

Sign up to get the latest updates on ONS NA 2018!


The industry is taking open networking to next level; learn more from Dell EMC’s Jeff Baher in this interview.

Ahead of the much anticipated 2018 Open Networking Summit, we spoke to Jeff Baher, director, Dell EMC Networking and Service Provider Solutions, about what lies ahead for open networking in the data center and beyond.

“For all that time that the client server world was gaining steam in decoupling hardware and software, networking was always in its own almost mainframe-like world, where the hardware and software were inextricably tied,” Baher explained. “Fast forward to today and there exists a critical need to usher networking into the modern world, like its server brethren, where independent decisions are made around hardware and software functions and services modules are assembled and invoked.”

Jeff Baher, director, Dell EMC Networking and Service Provider Solutions

Indeed, the decoupling is well on its way, as is the expected rise of independent open network software vendors, such as Cumulus, Big Switch, IP Infusion, and Pluribus, as well as Dell EMC’s OS10 Open Edition, all of which are shaping a rapidly evolving ecosystem. Baher describes the progress in the industry thus far as Open Networking ‘1.0’, successfully proving out the model of decoupling networking hardware and software. With this, the industry is forging ahead, taking open networking to the next level.

Here are the insights Baher shared with us about where open networking is headed.

Linux.com: You refer to an industry shift around open networking, tell us about the shift that Dell EMC is talking about at ONS this year.

Jeff Baher: Well, to date we and our partners have been working hard to prove out the viability of the basic premise of open networking: disaggregating, or decoupling, networking hardware and software to drive an increase in customer choice and capability. This first phase, or, as we say, Open Networking 1.0, is four years in the making, and I would say it has been a resounding success, as evidenced by some of the pioneering Tier 1 service provider deployments we’ve enabled. There is a clear-cut market fit here, as we’ve witnessed both significant innovation and investment. And the industry is not standing still as it moves quickly to its 2.0 version. In this next version, the focus is shifting from decoupling the basic elements of hardware and software to disaggregating the software stack itself.

Disaggregating the software stack involves exposing both the silicon and system software for adaptation and abstraction. This level of disaggregation also assumes a decoupling of the network application (i.e., routing or switching) from the platform operating system (the software that makes lights blink and fans spin). In this manner, with all the software functional elements exposed and disaggregated, independent software decisions can be made, and development communities can form around flexible software composition, assembly, and delivery models.

Linux.com: Why do people want this level of disaggregation?

Baher: Ultimately, it’s about more control, choice, and velocity. With traditional networking systems, there’s typically a lot of code that isn’t necessarily always used. By moving to this new model predicated on disaggregated software elements, users can scale back that unused code and run a highly optimized network operating system (NOS) and applications, allowing them to get peak performance with increased security. And this can all be done independent of the underlying silicon, allowing users to make independent decisions around silicon technology and software adaptation.

All of this, of course, is geared toward a fairly savvy network department, most likely with a large-scale operation to contend with. The vast majority of IT shops won’t want to “crack the hood” of the network stack and disaggregate pieces; instead, they will look for pre-packaged offerings derived from these larger “early adopter” experiences. For the larger early adopters, however, there can be a virtually immediate payback from customizing the networking stack, making any operational or technical hurdles well worth it. These early adopters typically already live in a disaggregated world and hence will feel comfortable mixing and matching hardware, OS layers, and protocols to optimize their network infrastructure. A Tier 1 service provider deployment analysis by ACG Research estimates the gains realized with a disaggregated approach at 47% lower TCO and three times the service agility for new services, at less than a third of the cost to enable them.

And it is worth noting the prominent role that open source technologies play in disaggregating the networking software stack. In fact, many would contend that open source technologies are foundational and critical to how this happens. This adds in a community aspect to innovation, arguably accelerating its pace along the way. Which brings us back full circle to why people want this level of disaggregation – to have more control over how networking software is architected and written, and how networks operate.

Linux.com: How does the disaggregation of the networking stack help fuel innovation in other areas, for example edge computing and IoT?

Baher: Edge computing is interesting as it really is the confluence of compute and networking. For some, it may look like a distributed data center, a few large hyperscale data centers with spokes out to the edge for IoT, 5G and other services. Each edge element is different in capability, form factor, software footprint and operating models. And when viewed through a compute lens, it will be assumed to be inherently a disaggregated, distributed element (with compute, networking and storage capabilities). In other words, hardware elements that are open, standards-based and without any software dependencies. And software for the IoT, 5G and enterprise edge that is also open and disaggregated such that it can be right-sized and optimized for that specific edge task. So if anything, I would say a disaggregated “composite” networking stack is a critical first step for enabling the next-generation edge.

We’re seeing this with mobile operators as they look to NFV solutions for the 5G and IoT edge. We’re also seeing this at the enterprise edge, in particular with universal CPE (uCPE) solutions. Unlike previous generations, where the enterprise edge meant a proprietary piece of hardware and monolithic software, it is now rapidly transforming into a compute-oriented open model where select networking functions are chosen as needed. All of this is made possible by disaggregating the networking functions and applications from the underlying operating system. Not so big a deal from a server-minded vantage point; monumental if you come from “networking land.” Exciting times once again in the world of open networking!

This article was sponsored by Dell EMC and written by Linux.com.

Sign up to get the latest updates on ONS NA 2018!

Submit your proposal to speak at OS Summit Japan before the March 18th deadline.

Open Source Summit Japan and Automotive Linux Summit 2018 are once again co-located and will be held June 20-22 at the Tokyo Conference Center Ariake in Tokyo. Both events offer participants the opportunity to learn about the latest projects, technologies, and developments taking place across the open source ecosystem, and specifically in the Automotive Linux arena.

The deadline to submit a proposal is just 3 weeks away on Sunday, March 18, 2018. Don’t miss the opportunity to educate and influence hundreds of technologists and open source professionals by speaking at one of these events.

Tracks for Open Source Summit Japan include:

  • Cloud Native Apps/Serverless/Microservices
  • Infrastructure and Automation (Cloud/Cloud Native/DevOps)
  • Artificial Intelligence and Data Analytics
  • Linux Systems
  • Networking and Orchestration
  • Blockchain
  • Open Source Leadership, Compliance, Strategy and Governance

View a list of suggested topics and submit your proposal now

Suggested topics for Automotive Linux Summit include:

  • Connected Car, Vehicle-to-Vehicle (V2V), Vehicle-to-Cloud (V2C)
  • Security And Privacy
  • In-Vehicle Infotainment (IVI) & Advanced Driver Assistance Systems (ADAS)
  • Augmented Reality, Heads-Up Display
  • Delivering Live Content And Updates To Vehicles In Motion
  • Legal Issues
  • Functional Safety And Open Source Software
  • W3C for Automotive
  • Non-AGL Technical Projects (e.g. Smart Roads, Self-Driving Vehicles, CarPlay, Android Auto)

View a full list of suggested topics and submit your proposal now

Get inspired! Watch presentations from Automotive Linux Summit & Open Source Summit Japan 2017

Watch all keynotes from Open Source Summit Japan >>

Watch all keynotes from ALS >>

Want to see your name on the list this year? Submit your proposal before the March 18 deadline.

Planning to attend? Register now to save $175 before early bird pricing ends!

Linux Foundation members and LF Project members receive an additional 20% discount off current registration pricing, and academic and non-profit discounts are available as well. Email events@linuxfoundation.org for discount information.

Applications for diversity and needs-based scholarships are also being accepted. Click here for information on eligibility and how to apply.

Sign up to get the latest updates on Open Source Summit Japan!

 

Strong growth follows second milestone launch of production ready blockchain framework, Hyperledger Sawtooth 1.0

SAN FRANCISCO – February 27, 2018 – Hyperledger, an open source collaborative effort created to advance cross-industry blockchain technologies, announced today that 11 new organizations have joined the project. As a multi-project, multi-stakeholder effort, Hyperledger incubates nine business blockchain and distributed ledger technologies, including Hyperledger Fabric, Hyperledger Iroha, Hyperledger Indy, Hyperledger Burrow, Hyperledger Quilt, and Hyperledger Sawtooth.

“It’s very gratifying to see the momentum behind Hyperledger continue in 2018, two years after the project first started,” said Brian Behlendorf, Executive Director, Hyperledger. “The community’s development efforts have led us to release two production-ready frameworks and we’ve grown to more than 200 members in that time. Members add a great amount of value to our ecosystem and I look forward to contributions by this new set of organizations as more production deployments take shape later this year.”

Hyperledger aims to enable organizations to build robust, industry-specific applications, platforms and hardware systems to support their individual business transactions by creating enterprise-grade, open source distributed ledger frameworks and code bases. It is a global collaboration of more than 200 organizations including leaders in finance, banking, IoT, supply chain, manufacturing and technology. The latest general members are: 8Common, ArcBlock, Data Deposit Box, FORFIRM, ForgeRock, Inspur, Nexiot, ~sedna GmbH and Smart Block Laboratory.

Hyperledger supports an open community that values contributions and participation from various entities. As such, pre-approved non-profits, open source projects and government entities can join Hyperledger at no cost as Associate members. Associate members joining this month include: Peking University and ShareIT.io.

New member quotes:

8Common

“8common is very excited to join the Hyperledger family and we look forward to contributing our experience to design, develop and operationalise blockchain solutions,” said Nic Lim, Executive Chairman, 8common Limited. “Our core business, expense8, a leading government and large enterprise platform for credit card, travel and expense management in Australia, is well positioned to leverage blockchain to future proof our platform and collaborate with fellow Hyperledger members to deliver new platforms.”

ArcBlock

“Joining ​Hyperledger is a significant step for ArcBlock in making our platform enterprise-ready and accelerating our customers’ journey to production,” said Robert Mao, Founder and CEO, ArcBlock, Inc. “Our commitment to ​Hyperledger will help enterprise-grade blockchain technology adoption and will enable clients to innovate in most demanding applications through our innovative ArcBlock platform.”

Data Deposit Box

“We are very pleased to join the Hyperledger community and look forward to collaborating in our goal to deliver innovative blockchain solutions for our partners and clients worldwide,” said Tim Jewell, CEO, Data Deposit Box. “It is important to be part of such a diverse and talented community as we innovate at such a rapid pace. Our partners and clients are depending on applications to use common interfaces so they can select between blockchain service providers without the need for implementation changes. It took many years for S3 to become widely accepted as a standard interface for storage. We hope to help in the evolution of a simple blockchain service (SBS) and we know Hyperledger will be at the core.”

FORFIRM

“We are honored to become members of Hyperledger,” said Gaspare Corona, Blockchain Leader, FORFIRM. “Having actively worked with Hyperledger for some time, this decision is reflective of our desire to take our involvement to the next level and become a participant and contributor to the foundation. FORFIRM has developed a European network of blockchain specialists, working with clients across multiple industries to explore this disruptive technology and deliver it now.”

ForgeRock

“ForgeRock is delighted to join Hyperledger,” said Lasse Andresen, Chief Technology Officer, Co-founder, ForgeRock. “Hyperledger’s many strengths, such as performance, scalability, a modular architecture, and strong cryptography features provide many synergies with the ForgeRock Identity Platform’s goal to ‘connect, protect and respect’ — building trusted digital relationships with customers and the mobile devices and smart things with which they interact.”

Inspur

“As a revolutionary technology, blockchain will bring great changes to various industries and we’re excited to be part of Hyperledger,” said Mr. XiaoXue, Vice President, Inspur. “Currently, based on blockchain, Inspur is building a quality improvement ecosystem of multi-participation, connectivity and co-governance and sharing to facilitate implementation of a ‘Quality China’ Strategy.”

Nexiot

“Nexiot provides an end-to-end logistics platform that brings transparency and accountability to the supply chain. We are proud to join the Hyperledger community and see this as a significant step towards delivering blockchain enterprise-ready solutions for the logistics industry,” said Tzvetan Horozov, CTO, Nexiot. “Our technology enables a unique set of use-cases by providing real-time asset visibility, smart events processing and advanced data augmentation. We look forward to working with the diverse Hyperledger community and business partners to advance industrial process automation, technology innovation and maximize ROI.”

~sedna GmbH

“We are thrilled to become members of Hyperledger and The Linux Foundation,” said Rolf Maurer and Guido Matzer, Founders of ~sedna GmbH. “The integration of our solutions into hybrid corporate networks of things and creating stable and secure managed services for multichannel content distribution and playout including lifecycle management with p2p solutions for distributed locations is key to our future work. We are looking forward to collaborating with other members and contributing to the Hyperledger community.”

Smart Block Laboratory

“We’re thrilled to join Hyperledger and are proud to be part of this effort to create an open standard for distributed ledger technology,” said Pavel Lvov, Founder and CEO, Smart Block Laboratory. “We at Smart Block Laboratory, as a business partner of IBM, believe blockchain is the next evolution in how data will be stored and shared and we’re looking forward to working with this diverse community. Hyperledger membership will definitely provide us with the opportunity to onrush our newest CRYPTOENTER blockchain-based payment system and IoTNet platform, which is powered by Hyperledger Fabric blockchain technology.”

The call for papers is also now open for the inaugural Hyperledger Global Forum, taking place later this year, December 12-15 in Basel, Switzerland. Submit a talk today: https://events.linuxfoundation.org/events/hyperledger-global-forum-2018/program/cfp/

About Hyperledger

Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration including leaders in finance, banking, Internet of Things, supply chains, manufacturing and technology. The Linux Foundation hosts Hyperledger. To learn more, visit: https://www.hyperledger.org/.

When it comes to launching an open source project, free information abounds online, on topics ranging from picking the right license to building a community. But what about when an organization needs to shutter or move away from an unneeded project? There are many complexities to handling this situation correctly, and many companies with successful open source programs plan for the end of a project even before launching one. Now, The Linux Foundation has published a free online guide for the enterprise examining the various considerations: Winding Down an Open Source Project.

“By shutting down a project gracefully or by transitioning it to others who can continue the work, your enterprise can responsibly oversee the life cycle of the effort,” the guide notes. “In this way, you can also set proper expectations for users, ensure that long-term project code dependencies are supported, and preserve your company’s reputation within the open source community as a responsible participant.”

The free guide includes sound advice on the following topics:

  1. Life cycle planning for your open source project
  2. What does a dead open source project look like?
  3. Why plan for the end of a project, before you even launch it?
  4. Deciding when to end or pull out of a project
  5. How to end an open source project

You’ll also find direct advice from open source experts in the guide. Contributors include: Guy Martin, Director of Open at Autodesk; David A. Wheeler of the Core Infrastructure Initiative (CII); Jared Smith, Open Source Community Manager, Capital One; Christine Abernathy, Open Source Developer Advocate, Facebook; and Chris Aniszczyk, COO of the Cloud Native Computing Foundation.

“When you’re starting your project, you’re trying to get people to trust you and allay their fears about joining the project and using your code,” David Wheeler notes, in the guide. “Later, if you say, ‘Hey, this project’s going to go away soon,’ that is not going to help with trust. Instead, you should say you’re going to do your best to make it work out if it will ever be ended, and that you promise not to just drop users. Tell them you’ll let them know what is happening at each step. Give them time to transition, and work on ways to help with the transition. That can be very helpful.”

“It doesn’t happen all the time, but in the past with one of our projects we moved it over to a different company,” Abernathy noted, in discussing Facebook’s practices. “We don’t have any hard and fast rules about doing this. Typically, we’ll just move it to a different organization. When it comes to moving within groups, we sort of shop around internally and find out whether it is still being used by someone. With our open source projects, we strive toward internal adoption. So, it might be used by an entirely different team. If they are willing to maintain it, then we move it to a different team, and that’s very easy. That just means changing a label somewhere where it says who’s the owner.”

Are you interested in more good advice? Check out the free online guide today. Additionally, The Linux Foundation and The TODO Group (Talk Openly Develop Openly) have published an entire collection of enterprise guides to assist in developing open source programs and understanding how best to work with open source tools. The guides are available for free, and they cover everything from How to Create an Open Source Program to Starting an Open Source Project.