Posts

linux kernel development

Part of the ongoing Linux development work involves hardening the kernel against attack.

Security is paramount these days for any computer system, including those running on Linux. Thus, part of the ongoing Linux development work involves hardening the kernel against attack, according to the recent Linux Kernel Development Report.

This work, according to report authors Jonathan Corbet and Greg Kroah-Hartman, involves the addition of several new technologies, many of which have their origin in the grsecurity and PaX patch sets. “New hardening features include virtually mapped kernel stacks, the use of the GCC plugin mechanism for structure-layout randomization, the hardened usercopy mechanism, and a new reference-count mechanism that detects and defuses reference-count overflows. Each of these features makes the kernel more resistant to attack,” the report states.
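
As a rough kernel-side sketch (not taken from the report), the fragment below shows the pattern behind that last mechanism: a hypothetical object switches its reference counter from a plain atomic_t to the refcount_t API, so an overflowed count saturates with a warning instead of wrapping to zero and opening the door to a use-after-free.

```c
/* Rough sketch of the refcount_t pattern. The object type is hypothetical;
 * refcount_inc() and refcount_dec_and_test() are the kernel's overflow-aware
 * replacements for the equivalent atomic_t operations. */
#include <linux/refcount.h>
#include <linux/slab.h>

struct my_obj {
    refcount_t refs;    /* was: atomic_t refs; */
    /* ... payload ... */
};

static void my_obj_get(struct my_obj *obj)
{
    /* Saturates (and warns) on overflow instead of silently wrapping. */
    refcount_inc(&obj->refs);
}

static void my_obj_put(struct my_obj *obj)
{
    /* Returns true only when the last reference is dropped. */
    if (refcount_dec_and_test(&obj->refs))
        kfree(obj);
}
```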

Linux kernel

Kees Cook

In this series, we are highlighting some of the hard-working developers who contribute to the Linux kernel. Here, Kees Cook, Software Engineer at Google, answers a few questions about his work.

Linux Foundation: What role do you play in the community and what subsystem(s) do you work on?

Kees Cook: Recently, I organized the Kernel Self-Protection Project (KSPP), which has helped focus lots of other developers to work together to harden the kernel against attack. I’m also the maintainer of seccomp, pstore, LKDTM, and gcc-plugin subsystems, and a co-maintainer of sysctl.
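
Seccomp, the first of those subsystems, lets a process restrict which system calls it may make. Purely as an illustration (this sketch is not part of the interview), the small userspace program below enters seccomp strict mode, after which only read(), write(), _exit(), and sigreturn() remain available and any other system call terminates the process.

```c
/* Illustration of seccomp strict mode: after the prctl() call, only read(),
 * write(), _exit() and sigreturn() are permitted; any other system call
 * kills the process with SIGKILL. */
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>

int main(void)
{
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0) {
        perror("prctl(PR_SET_SECCOMP)");
        return 1;
    }

    /* write() is still allowed under strict mode... */
    write(STDOUT_FILENO, "sandboxed\n", 10);

    /* ...but almost any other call would be fatal, so terminate with the
     * raw exit syscall rather than returning through libc. */
    syscall(SYS_exit, 0);
}
```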

Linux Foundation: What have you been working on this year?

Cook: I’ve been focused on KSPP work. I’ve assisted many other developers by helping port, develop, test, and shepherd things like hardened usercopy, gcc plugins, KASLR improvements, PAN emulation, refcount_t conversion, and stack protector improvements.

Linux Foundation: What do you think the kernel community needs to work on in the upcoming year?

Cook: I think we’ve got a lot of work ahead in standardizing the definitions of syscalls (to help run-time checkers), and continuing to identify and eliminate error-prone code patterns (to avoid common flaws). Doing these kinds of tree-wide changes continues to be quite a challenge for contributors because the kernel development model tends to focus on per-subsystem development.

Linux Foundation: Why do you contribute to the Linux kernel?

Cook: I’ve always loved working with low-level software, close to the hardware boundary. I love the challenges it presents. Additionally, since Linux is used in all corners of the world, it’s hard to find a better project to contribute to that has such an impact on so many people’s lives.

You can learn more about the Linux kernel development process and read more developer profiles in the full report. Download the 2017 Linux Kernel Development Report now.

openchain

OpenChain makes open source compliance more predictable, understandable, and efficient for all participants in the software supply chain.

Communities form in open source all the time to address challenges. The majority of these communities are based around code, but others cover topics as diverse as design or governance. The OpenChain Project is a great example of the latter. What began three years ago as a conversation about reducing overlap, confusion, and wasted resources with respect to open source compliance is now poised to become an industry standard.

The idea to develop an overarching standard to describe what organizations could and should do to address open source compliance efficiently gained momentum until the formal project was born. The basic idea was simple: identify key recommended processes for effective open source management. The goal was equally clear: reduce bottlenecks and risk when using third-party code to make open source license compliance simple and consistent across the supply chain. The key was to pull things together in a manner that balanced comprehensiveness, broad applicability, and real-world usability.

Main Pillars of the Project

The OpenChain Project has three pillars supported by dedicated work teams. The OpenChain Specification defines a core set of requirements every quality compliance program must satisfy. OpenChain Conformance allows organizations to display their adherence to these requirements. The OpenChain Curriculum provides the educational foundation for open source processes and solutions, while meeting a key requirement of the OpenChain Specification. The result is that open source license compliance becomes more predictable, understandable, and efficient for all participants in the software supply chain.

Reasons to Engage

The OpenChain Project is designed to be useful and adoptable for all types of entities in the supply chain. As such, it is important to distill its value proposition for various potential partners. Our volunteer community created a list of five practical reasons to engage:

  1. OpenChain makes free and open source software (FOSS) more accessible to your developers. OpenChain provides a framework for shared, compliant use of FOSS. Conforming companies create an environment that supports use of FOSS internally and sharing of FOSS with partners.
  2. OpenChain reduces overall compliance effort, saving time and legal and engineering resources. OpenChain allows companies in a supply chain to work together toward FOSS compliance and provides a consistent standard to which all must perform. By contrast, in a typical supply chain, each member of the chain has to perform FOSS compliance for software of others in the chain, wasting time and resources in a duplication of effort.
  3. OpenChain may be adapted to your existing systems. OpenChain allows you to choose your own processes to meet its requirements. OpenChain provides resources that help you design new processes from the ground up, or you may choose to use the systems you have in place.
  4. OpenChain helps your business teams work together toward a common goal. OpenChain provides a blueprint for your legal, engineering, and business teams to work together toward FOSS compliance.
  5. OpenChain allows you to conform to a stable, community-backed specification. When you adopt OpenChain, you conform to a stable specification that is widely backed by industry and community participants. OpenChain was developed in an open, collaborative process, with contributors from a wide range of industries across Asia, Europe and North America. OpenChain is being formally adopted by a growing number of both small and larger companies.

Today, the OpenChain Project is addressing its goals and moving towards wider market adoption with the support of 14 Platinum members: Adobe, Arm, Cisco, Comcast, GitHub, Harman, Hitachi, HPE, Qualcomm, Siemens, Sony, Toyota, Western Digital, and Wind River. The project also has a broad community of volunteers helping to make open source compliance easier for a multitude of market segments. As we move into 2018, the OpenChain Project is well positioned for adoption by Tier 1, Tier 2, and Tier 3 suppliers in multiple sectors, ranging from embedded to mobile to automotive to enterprise to infrastructure.

Entities of all sizes are welcome to participate in the OpenChain Project. Everyone is welcome and encouraged to join our mailing list at:

https://lists.linuxfoundation.org/mailman/listinfo/openchain

You can also send private email to the Project Director, Shane Coughlan, at coughlan@linux.com.

Launching a project and then rallying community support can be complicated, but the new guide to Starting an Open Source Project can help.

Increasingly, as open source programs become more pervasive at organizations of all sizes, tech and DevOps workers are choosing to or being asked to launch their own open source projects. From Google to Netflix to Facebook, companies are also releasing their open source creations to the community. It’s become common for open source projects to start from scratch internally, after which they benefit from collaboration involving external developers.

Launching a project and then rallying community support can be more complicated than you think, however. A little up-front work can help things go smoothly, and that’s exactly where the new guide to Starting an Open Source Project comes in.

This free guide was created to help organizations already versed in open source learn how to start their own open source projects. It starts at the beginning of the process, including deciding what to open source, and moves on to budget and legal considerations, and more. The road to creating an open source project may be foreign, but major companies, from Google to Facebook, have opened up resources and provided guidance. In fact, Google has an extensive online destination dedicated to open source best practices and how to open source projects.

“No matter how many smart people we hire inside the company, there’s always smarter people on the outside,” notes Jared Smith, Open Source Community Manager at Capital One. “We find it is worth it to us to open source and share our code with the outside world in exchange for getting some great advice from people on the outside who have expertise and are willing to share back with us.”

In the new guide, noted open source expert Ibrahim Haddad provides five reasons why an organization might open source a new project:

  1.    Accelerate an open solution; provide a reference implementation to a standard; share development costs for strategic functions
  2.    Commoditize a market; reduce prices of non-strategic software components.
  3.    Drive demand by building an ecosystem for your products.
  4.    Partner with others; engage customers; strengthen relationships with common goals.
  5.    Offer your customers the ability to self-support: the ability to adapt your code without waiting for you.

The guide notes: “The decision to release or create a new open source project depends on your circumstances. Your company should first achieve a certain level of open source mastery by using open source software and contributing to existing projects. This is because consuming can teach you how to leverage external projects and developers to build your products. And participation can bring more fluency in the conventions and culture of open source communities. (See our guides on Using Open Source Code and Participating in Open Source Communities.) But once you have achieved open source fluency, the best time to start launching your own open source projects is simply ‘early’ and ‘often.’”

The guide also notes that planning can keep you and your organization out of legal trouble. Issues pertaining to licensing, distribution, support options, and even branding require thinking ahead if you want your project to flourish.

“I think it is a crucial thing for a company to be thinking about what they’re hoping to achieve with a new open source project,” said John Mertic, Director of Program Management at The Linux Foundation. “They must think about the value of it to the community and developers out there and what outcomes they’re hoping to get out of it. And then they must understand all the pieces they must have in place to do this the right way, including legal, governance, infrastructure and a starting community. Those are the things I always stress the most when you’re putting an open source project out there.”

The Starting an Open Source Project guide can help you with everything from licensing issues to best development practices, and it explores how to seamlessly and safely weave existing open components into your open source projects. It is one of a new collection of free guides from The Linux Foundation and The TODO Group that are all extremely valuable for any organization running an open source program. The guides are available now to help you run an open source program office where open source is supported, shared, and leveraged. With such an office, organizations can establish and execute on their open source strategies efficiently, with clear terms.

These free resources were produced based on expertise from open source leaders. Check out all the guides here and stay tuned for our continuing coverage.

Also, don’t miss the previous articles in the series:

How to Create an Open Source Program

Tools for Managing Open Source Programs

Measuring Your Open Source Program’s Success

Effective Strategies for Recruiting Open Source Developers

Participating in Open Source Communities

Using Open Source Code

OPNFV

The OPNFV project provides users with open source technology they can use and tailor for their purposes; learn how to get involved.

Over the past several weeks, we have been discussing the Understanding OPNFV book (see links to previous articles below). In this last article in the series, we will look at why you should care about the project and how you can get involved.

OPNFV provides both tangible and intangible benefits to end users. Tangible benefits are those that directly impact business metrics, whereas intangible benefits speed up the overall NFV transformation journey but are harder to measure. Because OPNFV primarily focuses on integrating and testing upstream projects, and on adding carrier-grade features to them, these benefits can be difficult to see at first.

To understand this more clearly, let’s go back to the era before OPNFV. Open source projects do not, as a matter of routine, perform integration and testing with other open source projects. So, the burden of taking multiple disparate projects and making the stack work for NFV fell primarily on Communications Service Providers (CSPs), although in some cases vendors shouldered part of the burden. Having each CSP or vendor repeat the same integration and testing work made little sense.

Furthermore, upstream communities are often horizontal in their approach and do not investigate or prioritize requirements for a particular industry vertical. In other words, there was no person or entity driving carrier grade features in many of these same upstream projects. OPNFV was created to fill these gaps.

Tangible and Intangible Benefits

With this background, OPNFV benefits become more clear. Chapter 10 of the book breaks down the tangible and intangible benefits further. Tangible benefits to CSPs include:

  • Faster rollout of new network services
  • Vendor-agnostic platform to onboard and certify VNFs
  • Stack based on best-in-class open source components
  • Reduced vendor lock-in
  • Ability to drive relevant features in upstream projects

Additionally, the OPNFV community operates using DevOps principles and is organized into small, independent and distributed teams. In doing so, OPNFV embodies many of the same practices used by the web giants. CSPs can gain valuable insight into people and process changes required for NFV transformation by engaging with OPNFV. These intangible benefits include insights into:

  • Organizational changes
  • Process changes
  • Technology changes
  • Skillset acquisition

OPNFV is useful not only for CSPs, however; it also provides benefits to vendors (technology providers) and individuals. Vendors can benefit from interoperability testing (directly if their products are open source, or indirectly through private testing or plugfests), and gain insights into carrier-grade requirements and industry needs. Individuals can improve their skills by gaining broad exposure to open source NFV. Additionally, users can learn how to organize their teams and retool their processes for successful NFV transformation.

The primary objective of the OPNFV project is to provide users with open source technology they can use and tailor for their purposes, and the Understanding OPNFV book covers the various aspects to help you get started with and get the most out of OPNFV. The last section of the book also explains how you might get involved with OPNFV and provides links to additional OPNFV resources.

Want to learn more? You can download the Understanding OPNFV ebook in PDF (in English or Chinese), or order a printed version on Amazon. Or you can check out the previous blogs:

Mauro Carvalho Chehab answers a few questions about his work on the Linux kernel.

According to the recent Linux Kernel Development Report, the Linux operating system runs 90 percent of the public cloud workload, holds 62 percent of the embedded market share, and powers 100 percent of the TOP500 supercomputers. It also runs 82 percent of the world’s smartphones and nine of the top ten public clouds. However, the sustained growth of this open source ecosystem would not be possible without the steady development of the Linux kernel.

In this series, we are highlighting the ongoing work of some Linux kernel contributors. Here, Mauro Carvalho Chehab, Open Source Director at Samsung Research Brazil, answers a few questions about his work on the kernel.

Linux Foundation: What role do you play in the community and what subsystem(s) do you work on?

Mauro Carvalho Chehab: I’m responsible for the Open Source efforts at Samsung Research Brazil, as part of Samsung’s Open Source Group. I maintain the media and EDAC (Error Detection and Correction) kernel subsystems.

Linux Foundation: What have you been working on this year?

Chehab: This year, I wrote a lot of patches that improve the Linux documentation. Many of them were related to the conversion from the XML-based DocBook docs to a markup language (reStructuredText). Thanks to that, no documents use the legacy documentation system anymore. I also finally closed the documentation gap in the DVB API, which had been out of sync for more than 10 years! And I made several bug fixes in the media subsystem, including fixing the 4.9 breakage of many drivers that were doing DMA via the stack.

Linux Foundation: What do you think the kernel community needs to work on in the upcoming year?

Chehab: We should continue our work to support new device drivers and get rid of out-of-tree code. In the media subsystem, we should work to add support for newer TV standards, like ATSC version 3, and to improve support for embedded systems in both the DVB and V4L2 APIs.

Linux Foundation: Why do you contribute to the Linux kernel?

Chehab: Because it is fun! Seriously, I strongly believe that the innovation process in computer engineering is currently driven by Linux. Working on its kernel has given me the opportunity to work with great developers and to help improve the top operating system.

You can learn more about the Linux kernel development process and read more developer profiles in the full report. Download the 2017 Linux Kernel Development Report now.

open source community

Zachary Dupont wrote a letter to his hero Linus Torvalds back in 2014. Here, they catch up on stage at Open Source Summit NA 2017.

The Linux Foundation works through our projects, training and certification programs, events and more to bring people of all backgrounds into open source. We meet a lot of people, but find the drive and enthusiasm of some of our youngest community members to be especially infectious. In the past couple of months, we’ve invited 13-year-old algorithmist and cognitive developer Tanmay Bakshi, 11-year-old hacker and cybersecurity ambassador Reuben Paul, and 15-year-old programmer Keila Banks to speak at Linux Foundation conferences.

In 2014 when he was 12, Zachary Dupont wrote a letter to his hero Linus Torvalds. We arranged for Zach to meet Linus–a visit that helped clinch his love for Linux. This year, Zach came to Open Source Summit in Los Angeles to catch up with Linus and let us know what he’s been up to. He’s kept busy with an internship at SAP and early acceptance to the Computer Networking and Digital Forensics program at the Delaware County Technical School.

The open source community encouraged Zach to pursue his passions. They’ve inspired him, and he plans to give back in the future.

We encourage everyone to find ways to bring more people of all ages into open source. Volunteer your time to teach students or people making mid-career changes how to code, spend time on writing documentation for your open source project so others can get to know it better, or simply take the time to answer beginner questions on message boards. The more people we bring into the community, the stronger we will be in the years ahead.

1. LinuxCon + ContainerCon + CloudOpen China

Developers, architects, sysadmins, DevOps experts, business leaders, and other professionals gathered in June to discuss open source technology and trends at the first-ever LinuxCon + ContainerCon + CloudOpen (LC3) event in China. At the event, Linus Torvalds spoke about how Linux still surprises and motivates him.

2. Toyota Camry Will Feature Automotive Grade Linux

At Automotive Linux Summit in Japan, Dan Cauchy, Executive Director of Automotive Grade Linux (AGL), announced that Toyota has adopted the AGL platform for their next-generation infotainment system. The 2018 Camry will be the first Toyota vehicle on the market in the United States with the AGL-based system.

3. Open Source Summit Debuts

As announced at last year’s LinuxCon in Toronto, this annual event hosted by The Linux Foundation is now called Open Source Summit. It combines the LinuxCon, ContainerCon, and CloudOpen conferences along with two new conferences: Open Community Conference and Diversity Empowerment Summit.

4. Joseph Gordon-Levitt at OS Summit North America

Actor Joseph Gordon-Levitt, founder and director of the online production company HITRECORD, spoke at Open Source Summit in Los Angeles about his experiences with collaborative technologies. Gordon-Levitt shared lessons learned along with a video created through the company.

5. Diversity Empowerment Summit

Tameika Reed, founder of Women in Linux, spoke at the Diversity Empowerment Summit in Los Angeles about the need for diversity in all facets of tech, including education, training, conferences, and mentoring. The new event aims to help promote and facilitate an increase in diversity, inclusion, empowerment, and social innovation in the open source community.

6. Hyperledger Growth

Hyperledger — the largest open blockchain consortium — now includes 180 diverse organizations and has recently partnered with edX to launch an online MOOC. At Open Source Summit in Los Angeles, Executive Director Brian Behlendorf spoke with theCUBE about the project’s growth and potential to solve important problems.

7. Lyft and Uber on Stage at Open Source Summit

At Open Source Summit in Los Angeles, ride-sharing rivals Lyft and Uber appeared on stage to introduce two new projects donated to the Cloud Native Computing Foundation. Chris Lambert, CTO of Lyft, and Yuri Shkuro, Staff Engineer at Uber, introduced the projects, which help CNCF fill some gaps in the landscape of technologies used to adopt a cloud-native computing model.

8. Attendee Reception at Paramount Studios

The Open Source Summit North America evening reception for all attendees was held at iconic Paramount Studios in Hollywood. Attendees enjoyed a behind-the-scenes studio tour featuring authentic Paramount movie props and costumes.

9. 2017 Linux Kernel Summit and Kernel Development Report

Open source technologists gathered in Prague, Czech Republic in October for Open Source Summit and Embedded Linux Conference Europe. Co-located events included MesosCon Europe, KVM Forum, and Linux Kernel Summit, where The Linux Foundation released the latest Linux Kernel Development Report highlighting some of the dedicated kernel contributors.

10. The Next Generation of Open Source Technologists

The Linux Foundation 2017 events aimed to inspire the younger generation with an interest in open source technologies through activities like Kids Day and special keynotes, such as those from 13-year-old algorithmist and cognitive developer Tanmay Bakshi, 11-year-old hacker and cybersecurity ambassador Reuben Paul, and 15-year-old programmer and technologist Keila Banks.

You can look forward to more exciting events in 2018. Check out the newly released 2018 Events calendar and make plans now to attend or to speak at an upcoming conference.

Speaking proposals are now being accepted for the following 2018 events:

Submit a Proposal

open source culture

Open source involves a culture of understanding change. It’s about evolution as a group, says Mesosphere’s CMO Peter Guagenti.

In the early days of open source, one of the primary goals of the open source community was educating people about the benefits of open source and why they should use it. Today, open source is ubiquitous. Almost everyone is using it. That has created a unique challenge around educating new users about the open source development model and ensuring that open source projects are sustainable.

Peter Guagenti, CMO at Mesosphere, Inc.

Peter Guagenti, the Chief Marketing Officer at Mesosphere, Inc., has comprehensive experience with how open source works, having been involved with several leading open source projects. He has been a coder, but says that he considers himself a hustler. We talked with him about his role at Mesosphere, how to help companies become good open source citizens, and about the role of culture in open source. Here is an edited version of that interview.

The Linux Foundation: What’s the role of a CMO in an open source software company?

Peter Guagenti: The role of a CMO in a software company is fundamentally different from that in any other category. We have a really interesting role in marketing and technology, and it’s one of education and guidance. There was a time, 20 years ago, when as a marketer you could come up with a simple, pithy message, buy a bunch of advertising, and people would believe it.

That’s not true anymore. Now we have to position ourselves alongside the architectures and the thought leadership that our customers are interested in to prove our value.

The Linux Foundation: Can you explain more about this approach?

Guagenti: I love that instead of focusing on marketing taglines, you really have to know the technology so customers have the confidence that they will get the support we promise. Since this space is changing so quickly, we spend probably half our time simply on educating and informing about the market and the challenges that customers face.

I don’t think about talking about DCOS, for example; I think about how connected cars are really important but nobody really knows how to build them. We serve six of the largest car makers in the world. So we get them to talk about how they’re approaching this problem: what they think about edge computing, what they think about computing in the car, or what they think about data and moving that data around. These are the really exciting things.

The Linux Foundation: Can you talk about other work you have done in open source?

Guagenti: I’m a long-time open source advocate. I’ve been in open source for over 10 years. I built an open source services practice at a large digital agency called Razorfish when I was in client services there. I’ve spent time at three open source companies: Acquia, which is in the Drupal open source project; Nginx, which is the world’s most popular web server and application delivery controller; and now I am at Mesosphere, the container company.

The Linux Foundation: Open source has become the de facto software development model — almost everyone is consuming open source these days. That creates a new challenge as many new consumers don’t fully understand how open source works, which can lead to problems like not being part of the ecosystem and creating technical debt. Have you come across this problem?

Guagenti: Open source has evolved dramatically over the past 20 years. I would argue 10 years ago you were crazy if you were a Fortune 500 company and you were the CIO and said I’m going to integrate open source everywhere. But now open source is the default. I’ve worked in large state and national governments around the world. I’ve worked in the Fortune 500, and they all have adopted open source. But how they adopt open source successfully is different. If you look company by company, if you look at projects, there is a difference.

There are community-driven models, there are corporate-driven models, and there are things in between, like Kubernetes, where you have multiple companies contributing at scale. There is a great mix, but companies don’t always know how to make the best use of that. It becomes critical for them to find the right enterprise that helps them understand how to use and deploy it. More important than that is to help them ensure they are making good decisions with that software and driving the roadmap forward by contributing, or at least by being a voice in that.

We take for granted that open source exists, but open source requires involvement—either contribution of code or cash—to keep those projects healthy. We are at a point where open source has been around long enough that we have seen early open source projects just die because they didn’t have core maintainers able to earn a salary.

I was told that every great technology company needs a hacker and a hustler. I was a good coder early on, but I wasn’t great. I’m more of a hustler. I loved being able to see businesses build around open source and then have that really be the heart of a healthy ecosystem where everyone is able to benefit from that code.

The Linux Foundation: What role does culture play in open source adoption?

Guagenti: It matters. Look at the digital transformation that we have been going through for the last 20 years. Look at the companies that have done it best. You will notice that the old stalwarts have now reinvented themselves in a meaningful way. They are continuing to evolve with the time and are competing effectively. They had a culture where they could embrace and accept a lot of these things.  

If you look at hiring great technology talent, what’s the number one thing that talent expects? They want to work with the tools they want to use. They want to do it in a way that fits their pattern of behavior, their pattern of building these things. It’s not the money, it’s not the stock options, it’s not the fancy work. It’s about the kind of work I want to do every day and the way I want to do it.

I work with some of the largest banks, I work with some of the largest government entities. What I have noticed, with some of the most successful ones, is that they have a culture internally where they understand this stuff. They understand what it means to not just use open source but to be a part of an open source community. Sometimes you do run into hurdles. I work with a lot of large companies that are either not comfortable contributing code back or simply don’t feel they have the time to do it. But they do their bit in a different way; they may do things like contribute financially to projects, send people to events, or actually go and tell their story.

That’s what we do a lot at Mesosphere. Since this space is changing, we love having our largest customers talking about what they’re doing with open source. Their culture matters because it’s not just the culture of open source and using open source. It’s a culture of innovation. It’s a culture of understanding change.  And that’s what open source is all about. It’s about evolution as a group.

Learn more about best practices for sustainable open source in the free Open Source Guides for the Enterprise from The Linux Foundation.

Through the collaboration of a global, sustainable community, the ONAP Amsterdam release addresses real-world SDN, NFV, and VNF needs just in time for 5G

San Francisco, November 20, 2017– The Open Network Automation Platform (ONAP) Project today announced the availability of its first platform release, ONAP “Amsterdam,” which delivers a unified architecture for end-to-end, closed-loop network automation. ONAP is transforming the service delivery lifecycle for network, cable and cloud providers. ONAP is the first open source project to unite the majority of operators (end users) with the majority of vendors (integrators) in building a real service automation and orchestration platform, and already, 55 percent of the world’s mobile subscribers are supported by its members.

“Amsterdam represents significant progress for both the ONAP community and the greater open source networking ecosystem at large,” said Arpit Joshipura, general manager, Networking and Orchestration, The Linux Foundation. “By bringing together member resources, Amsterdam is the first step toward realization of a globally shared architecture and implementation for network automation, based on open source and open standards. It’s exciting to see a new era of industry collaboration and architectural convergence – via a healthy, rapidly diversifying ecosystem – begin to take shape with the release of ONAP Amsterdam.”

The Amsterdam release provides a unified architecture which includes production-proven code from open source ECOMP and OPEN-O to provide design-time and run-time environments within a single, policy-driven service orchestration platform. Common, vendor-agnostic models allow users to quickly design and implement new services using best-of-breed components, even within existing brownfield environments. Real-time inventory and analytics support monitoring, end-to-end troubleshooting, and closed-loop feedback to ensure SLAs as well as rapid optimization of service design and implementations. Additionally, ONAP is able to manage and orchestrate both virtualized and physical network functions.

The entire platform has been explicitly architected to address current real-world challenges in operating tier-one networks. Amsterdam provides verified blueprints for two initial use cases, with more to be developed and tested in future releases. This includes VoLTE (Voice Over LTE), which allows voice to be unified onto IP networks. By virtualizing the core network, ONAP is used to design, deploy, monitor and manage the lifecycle of a complex end-to-end VoLTE service. The second use case is Residential vCPE. With ONAP, all services are provided in-network, which means CSPs can add new services rapidly and on-demand to their residential customers to create new revenue streams and counter competitors.

“In six short months, the community has rallied together to produce a platform that transforms the service delivery lifecycle via closed-loop automation,” said Mazin Gilbert, ONAP Technical Steering Committee (TSC) chair, and vice president, Advanced Technology, AT&T Labs. “This initial release provides blueprints for service provider use cases, representing the collaboration and innovation of the community.”

Ecosystem Growth Produces ONAP PoCs

With more than 55 percent of global mobile subscribers represented by member carriers, ONAP is poised to become the de facto automation platform for telecom carriers. This common, open platform greatly reduces development costs and time for VNF vendors, while allowing network operators to optimize their selection of best-of-breed commercial VNF offerings for each of their services. Standardized models and interfaces greatly simplify integration time and cost, allowing telecom and cloud providers to deliver new offerings quickly and competitively.

Member companies which represent every aspect of the ecosystem (vendors, telecommunication providers, cable and cloud operators, NFV vendors, solution providers) are already leveraging ONAP for commercial products and services. Amsterdam code is also integrated into proof of concepts.

Additionally, ONAP is part of a thriving global community; more than 450 people attended the recent Open Source Networking Days events to learn how ONAP and other open source networking projects are changing network operations.

More details on Amsterdam, including download information, white papers, solution briefs, and videos, are available here. Comments from members, including those who contributed technically to Amsterdam, can be found here.

What’s Next for ONAP

Looking ahead, the community is already beginning plans for the second ONAP release, “Beijing.” Scheduled for release in summer 2018, Beijing will include “S3P” (scale, stability, security and performance) enhancements, more use cases to support today’s service provider needs, key 5G features, and inter-cloud connectivity. Interest from large enterprises will likely further shape the platform and use cases in future releases.

ONAP will continue to evolve harmonization with SDOs and other open source projects, with a focus on aligning APIs/Information Models as well as OSS/BSS integration.

ONAP Beijing Release Developer Forum will take place on Dec. 11-13 in Santa Clara, California, and will include topics for end users, VNF providers, and the ONAP developer community via a variety of sessions including presentations, panels and hands-on labs.

ONAP community members and developers are encouraged to submit a proposal to share knowledge and expertise with the rest of the community: https://www.onap.org/event/submit-a-proposal-for-the-onap-beijing-release-developer-forum-santa-clara-ca

Additionally, ONAP will host a Workshop on “Container Networking with ONAP”  in conjunction with CloudNativeCon + KubeCon December 5 in Austin, Texas. The workshop is designed to bring together networking and cloud application developers to discuss their needs, ideas and aspirations for automating the deployment of secure network services on demand. Details and registration information: https://www.onap.org/event/cfp-submit-a-proposal-to-onap-mini-summit-at-cloudnativecon-kubecon-north-america-tuesday-december-5-2017

About the Open Network Automation Platform

The Open Network Automation Platform (ONAP) Project brings together top global carriers and vendors with the goal of allowing end users to automate, design, orchestrate and manage services and virtual functions. ONAP unites two major open networking and orchestration projects, open source ECOMP and the Open Orchestrator Project (OPEN-O), with the mission of creating a unified architecture and implementation and supporting collaboration across the open source community. The ONAP Project is a Linux Foundation project. For more information, visit https://www.onap.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

 

Additional Resources

Download ONAP Amsterdam

Amsterdam Architecture Overview

VoLTE Solution Brief

VCPE Solution Brief

Related videos

ONAP Blog

Join as a Member

 

Media Contact

Sarah Conway

The Linux Foundation

(978) 578-5300

sconway@linuxfoundation.org

A guest blog post by Mike Goodwin.

What is threat modeling?

Application threat modeling is a structured approach to identifying ways that an adversary might try to attack an application and then designing mitigations to prevent, detect or reduce the impact of those attacks. The description of an application’s threat model is identified as one of the criteria for the Linux CII Best Practices Silver badge.

Why threat modeling?

It is well established that defense-in-depth is a key principle for network security, and the same is true for application security. Although most application developers will intuitively understand this as a concept, it can be hard to put into practice. After many years (and many sleepless nights) worrying and fretting about application security, one thing I have learned is that threat modeling is an exceptionally powerful technique for building defense-in-depth into an application design. This is what first attracted me to threat modeling. It is also great for identifying security flaws at design time, when they are cheap and easy to correct. These kinds of flaws are often subtle and hard to detect by traditional testing approaches, especially if they are buried in the innards of your application.

Three stages of threat modeling

There are several ways of doing threat modeling ranging from formal methodologies with nice acronyms (e.g. PASTA) through card games (e.g. OWASP Cornucopia) to informal whiteboard sessions. Generally though, the technique has three core stages:

Decompose your application – This is almost always done using some kind of diagram. I have seen successful threat modeling done using many types of diagrams, from UML sequence diagrams to informal architecture sketches. Whatever format you choose, it is important that the diagram shows how the different internal components of your application and external users/systems interact to deliver its functionality. My preferred type of diagram is a Data Flow Diagram with trust boundaries.

Identify threats – In this stage, the threat modeling team ask questions about the component parts of the application and (very importantly) the interactions or data flows between them to guess how someone might try to attack it. The answers to these questions are the threats. Typical questions and resulting threats are:

Question: What assumptions is this process making about incoming data? What if they are wrong?
Threat: An attacker could send a request pretending to be another person and access that person’s data.

Question: What could an attacker do to this message queue?
Threat: An attacker could place a poison message on the queue, causing the receiving process to crash.

Question: Where might an attacker tamper with the data in the application?
Threat: An attacker could modify an account number in the database to divert payment to their own account.

Design mitigations – Once some threats have been identified the team designs ways to block, avoid or minimize the threats. Some threats may have more than one mitigation. Some mitigations might be preventative and some might be detective. The team could choose to accept some low-risk threats without mitigations. Of course, some mitigations imply design changes, so the threat model diagram might have to be revisited.

Threat: An attacker could send a request pretending to be another person and access that person’s data.
Mitigation: Identify the requestor using a session cookie and apply authorization logic.

Threat: An attacker could place a poison message on the queue, causing the receiving process to crash.
Mitigations: Digitally sign messages on the queue and validate their signatures before processing. Maintain a retry count per message and discard messages after three retries.

Threat: An attacker could modify an account number in the database to divert payment to their own account.
Mitigations: Preventative – restrict access to the database using a firewall. Detective – log all changes to bank account numbers and audit the changes.
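
As a concrete sketch of the retry-count mitigation above, the consumer loop below retries each message a bounded number of times and then sets it aside, so a single poison message cannot crash or wedge the receiving process indefinitely. The message and queue helpers (queue_receive, handle_message, queue_requeue, dead_letter) are hypothetical placeholders, not part of any particular messaging API.

```c
/* Illustrative poison-message mitigation: retry each message at most
 * MAX_RETRIES times, then move it to a dead-letter store for manual
 * inspection. All queue/message helpers are hypothetical placeholders. */
#include <stdbool.h>

#define MAX_RETRIES 3

struct message {
    int retries;      /* how many times processing has already failed */
    /* ... payload ... */
};

/* Hypothetical helpers provided by the surrounding application. */
struct message *queue_receive(void);
bool handle_message(struct message *msg);
void queue_requeue(struct message *msg);
void dead_letter(struct message *msg);

void consume_loop(void)
{
    for (;;) {
        struct message *msg = queue_receive();   /* blocks until a message arrives */
        if (!msg)
            continue;

        if (handle_message(msg))
            continue;                 /* processed successfully */

        if (++msg->retries >= MAX_RETRIES)
            dead_letter(msg);         /* give up: park it for manual inspection */
        else
            queue_requeue(msg);       /* transient failure: try again later */
    }
}
```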

OWASP Threat Dragon

Threat modeling can be usefully done with a pen, whiteboard and one or more security-aware people who understand how their application is built, and this is MUCH better than not threat modeling at all. However, to do it effectively with multiple people and multiple project iterations you need a tool. Commercial tools are available, and Microsoft provides a free tool for Windows only, but established, free, open-source, cross-platform tools are non-existent. OWASP Threat Dragon aims to fill this gap. The aims of the project are:

  • Great UX – Using Threat Dragon should be simple, engaging and fun
  • A powerful threat/mitigation rule engine – This will lower the barrier to entry for teams and encourage non-specialists to contribute
  • Integration with other development lifecycle tools – This will ensure that models slot easily into the developer workflows and remain relevant as the project evolves
  • To always be free, open-source (like all OWASP projects) and cross-platform. The full source code is available on GitHub

The tool comes in two variants: a web application and a desktop application.

End-user documentation is available for both variants and, most importantly, it has a cute logo called Cupcakes…

Threat Dragon is an OWASP Incubator Project – so it is still early stage but it can already support effective threat modeling. The near-term roadmap for the tool is to:

  • Achieve a Linux CII Best Practices badge for the project
  • Implement the threat/mitigation rule engine
  • Continue to evolve the usability of the tool based on real-world feedback from users
  • Establish a sustainable hosting model for the web application

If you want to harden your application designs you should definitely give threat modeling a try. If you want a tool to help you, try OWASP Threat Dragon! All feedback, comments, issue reports and pull requests are very welcome.

About the author: Mike Goodwin is a full-time security professional at the Sage Group where he leads the team responsible for product security. Most of his spare time is spent working on Threat Dragon or co-leading his local OWASP chapter.

This article originally appeared on the Core Infrastructure Initiative website.