
The article originally appeared on the Linux Foundation’s Training and Certification blog. The author is Marco Fioretti. If you are interested in learning more about microservices, consider some of our free training courses, including Introduction to Cloud Infrastructure Technologies, Building Microservice Platforms with TARS, and WebAssembly Actors: From Cloud to Edge.

Microservices allow software developers to design highly scalable, highly fault-tolerant internet-based applications. But how do the microservices of a platform actually communicate? How do they coordinate their activities or know who to work with in the first place? Here we present the main answers to these questions, and their most important features and drawbacks. Before digging into this topic, you may want to first read the earlier pieces in this series, Microservices: Definition and Main Applications, APIs in Microservices, and Introduction to Microservices Security.

Tight coupling, orchestration and choreography

When every microservice can and must talk directly with all its partner microservices, without intermediaries, we have what is called tight coupling. The result can be very efficient, but it makes every microservice more complex and harder to change or scale. Besides, if one microservice breaks, everything breaks.

The first way to overcome these drawbacks of tight coupling is to have one central controller of all, or at least some, of the microservices of a platform, making them work synchronously, just like the conductor of an orchestra. In this orchestration – also called the request/response pattern – it is the conductor that issues requests, receives their answers and then decides what to do next: that is, whether to send further requests to other microservices, or pass the results of that work to external users or client applications.
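
To make the pattern concrete, here is a minimal Go sketch of a conductor; the service URLs (inventory and payments) are hypothetical, and a real orchestrator would add timeouts, retries and structured error handling:

```go
// A minimal sketch of the request/response (orchestration) pattern, assuming
// hypothetical service URLs; all control flow lives in the conductor.
package main

import (
	"fmt"
	"io"
	"net/http"
)

// call sends a synchronous request to one microservice and returns its reply.
func call(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Step 1: the conductor asks the inventory service to reserve stock.
	reservation, err := call("http://inventory.internal/reserve?item=42")
	if err != nil {
		fmt.Println("inventory failed, aborting the whole workflow:", err)
		return // every later step is blocked: the conductor decides everything
	}
	// Step 2: only after a successful reservation does the conductor request
	// payment; control never leaves this central function.
	receipt, err := call("http://payments.internal/charge?reservation=" + reservation)
	if err != nil {
		fmt.Println("payment failed:", err)
		return
	}
	fmt.Println("order completed:", receipt)
}
```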

The complementary approach to orchestration is the decentralized architecture called choreography. This consists of multiple microservices that work independently, each with its own responsibilities, but coordinated like dancers in the same ballet. In choreography, coordination happens without central supervision, via messages flowing among the microservices according to common, predefined rules.

That exchange of messages, as well as the discovery of which microservices are available and how to talk with them, happens via event buses. These are software components with well-defined APIs to subscribe and unsubscribe to events and to publish events. Event buses can be implemented in several ways, to exchange messages using standards such as XML, SOAP or the Web Services Description Language (WSDL).

When a microservice emits a message on a bus, all the microservices that subscribed to the corresponding event bus see it, and know if and how to answer it asynchronously, each on its own, in no particular order. In this event-driven architecture, all a developer must code into a microservice to make it interact with the rest of the platform are the subscription commands for the event buses on which it should generate events, or wait for them.
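
As a toy illustration of that publish/subscribe contract — an in-process sketch with made-up topic and service names, not any particular bus implementation — consider:

```go
// A minimal in-process sketch of an event bus; real platforms would use a
// broker over the network, but the subscribe/publish contract looks the same.
package main

import (
	"fmt"
	"sync"
)

type EventBus struct {
	mu          sync.Mutex
	subscribers map[string][]func(payload string)
}

func NewEventBus() *EventBus {
	return &EventBus{subscribers: make(map[string][]func(string))}
}

// Subscribe registers a handler for a topic. The topic name is the only
// coupling a microservice needs to know about.
func (b *EventBus) Subscribe(topic string, handler func(string)) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.subscribers[topic] = append(b.subscribers[topic], handler)
}

// Publish delivers an event to every subscriber asynchronously; the publisher
// neither knows nor cares who is listening, or in what order handlers run.
func (b *EventBus) Publish(topic, payload string, wg *sync.WaitGroup) {
	b.mu.Lock()
	handlers := b.subscribers[topic]
	b.mu.Unlock()
	for _, h := range handlers {
		wg.Add(1)
		go func(handle func(string)) {
			defer wg.Done()
			handle(payload)
		}(h)
	}
}

func main() {
	var wg sync.WaitGroup
	bus := NewEventBus()
	// Two independent services react to the same event, each on its own.
	bus.Subscribe("user.login", func(p string) { fmt.Println("audit service: recording login of", p) })
	bus.Subscribe("user.login", func(p string) { fmt.Println("mail service: greeting", p) })
	bus.Publish("user.login", "alice", &wg)
	wg.Wait() // only for the demo, so the process doesn't exit early
}
```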

Orchestration or Choreography? It depends

The two most popular coordination choices for microservices are choreography and orchestration. Their fundamental difference is in where they place control: one distributes it among peer microservices that communicate asynchronously, the other concentrates it in one central conductor that keeps everybody else in line.

Which is better depends upon the characteristics, needs and real-world usage patterns of each platform, with maybe just two rules that apply in all cases. The first is that actual tight coupling should almost always be avoided, because it goes against the very idea of microservices. Loose coupling with asynchronous communication is a far better match for the fundamental advantages of microservices, that is, independent deployment and maximum scalability. The real world, however, is a bit more complex, so let’s spend a few more words on the pros and cons of each approach.

As far as orchestration is concerned, its main disadvantage may be that centralized control often is, if not a synonym for, at least a shortcut to, a single point of failure. A more frequent disadvantage of orchestration is that, since the microservices and the conductor may sit on different servers or clouds, connected only through the public Internet, performance may suffer, more or less unpredictably, unless connectivity is really excellent. At another level, with orchestration virtually any addition of microservices, or change to their workflows, may require changes to many parts of the platform, not just the conductor. The same applies to failures: when an orchestrated microservice fails, there will generally be cascading effects, such as other microservices left waiting for orders only because the conductor is temporarily stuck waiting for answers from the failed one. On the plus side, precisely because the “chain of command” and communication are well defined and not really flexible, it is relatively easy to find out what broke and where. For the very same reason, orchestration facilitates independent testing of distinct functions. Consequently, orchestration may be the way to go whenever the communication flows inside a microservice-based platform are well defined and relatively stable.

In many other cases, choreography may provide the best balance among independence of individual microservices, overall efficiency and simplicity of development.

With choreography, a service must only emit events, that is, notifications that something happened (e.g., a log-in request was received), and its downstream microservices must only react to them, autonomously. Therefore, changing a microservice has no impact on the ones upstream of it, and even adding or removing microservices is simpler than it would be with orchestration. The flip side of this coin is that, at least without precautions, choreography creates more chances for things to go wrong, in more places, and in ways that are harder to predict, test or debug. Throwing messages onto the network, counting on everything to be fine but with no way to know whether all their recipients got them and reacted in the right way, can make life very hard for system integrators.

Conclusion

Certain workflows are by their very nature highly synchronous and predictable. Others aren’t. This means that many real-world microservice platforms could, and probably should, mix both approaches to obtain the best combination of performance and resistance to faults or peak loads. This is because temporary peak loads – which may be best handled with choreography – may happen only in certain parts of a platform, while the faults with the most serious consequences, for which tighter orchestration could be safer, may happen only in others (e.g., purchases of single products by end customers, versus orders to buy the same products in bulk to restock the warehouse). For system architects, maybe the worst outcome would be to design an architecture that is either orchestration or choreography without being really conscious of which one it is (perhaps because they are just porting a pre-existing, monolithic platform to microservices), thus getting nasty surprises when something goes wrong, or when new requirements turn out to be much harder than expected to design or test. This leads to the second of the two general rules mentioned above: don’t even start to choose between orchestration and choreography for your microservices before having the best possible estimate of what their real-world loads and communication needs will be.

1 + 1 = 3

At last week’s Open Source Summit North America, Robin Ginn, Executive Director of the OpenJS Foundation, relayed a principle her mentor taught: “1+1=3”. No, this isn’t ‘new math’; it demonstrates the principle that, working together, we are more impactful than working apart. Or, as my wife and I say all of the time, teamwork makes the dream work.

This principle is really at the core of open source technology. Turns out it is also how I look at the Open Programmable Infrastructure project. 

Stepping back a bit, as “the new guy” around here, I am still constantly running across projects where I want to dig in more and understand what it does, how it does it, and why it is important. I had that very thought last week as we launched another new project, the Open Programmable Infrastructure Project. As I was reading up on it, they talked a lot about data processing units (DPUs) and infrastructure processing units (IPUs), and I thought, I need to know what these are and why they matter. In the timeless words of The Bobs, “What exactly is it you do here?” 

What are DPUs/IPUs? 

First – and this is important – they are basically the same thing; they just have different names. Here is my oversimplified explanation of what they do.

In most personal computers, you have one or more separate graphics processing units (GPUs) that help the central processing unit (CPU) handle the tasks related to processing and displaying graphics. They offload that work from the CPU, allowing it to spend more time on the tasks it does best. So, working together, they can achieve more than each can separately.

Servers powering the cloud also have CPUs, but they have other tasks that can consume tremendous computing power, say data encryption or network packet management. Offloading these tasks to separate processors enhances the performance of the whole system, as each processor focuses on what it does best.

In other words, 1+1=3.

DPUs/IPUs are highly customizable

While separate processing units have been around for some time, like your PC’s GPU, their functionality was primarily dedicated to a particular task. DPUs/IPUs, by contrast, combine multiple offload capabilities that are highly customizable through software. That means a hardware manufacturer can ship these units out, and each organization can use software to configure them according to its specific needs. And they can do this on the fly.

Core to the cloud and its continued advancement and growth is the ability to quickly and easily create and dispose of the “hardware” you need. It wasn’t too long ago that if you wanted a server, you spent thousands of dollars on one, built all kinds of infrastructure around it, and hoped it was what you needed at the time. Now, pretty much anyone can quickly set up a virtual server in a matter of minutes for virtually no initial cost.

DPUs/IPUs bring this same type of flexibility to your own datacenter because they can be configured to be “specialized” with software rather than having to literally design and build a different server every time you need a different capability. 

What is Open Programmable Infrastructure (OPI)?

OPI is focused on utilizing open software and standards, as well as frameworks and toolkits, to allow for the rapid adoption and use of DPUs/IPUs. The OPI Project is both hardware and software companies coming together to establish and nurture an ecosystem to support these solutions. It “seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor’s hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate.”

In other words, competitors are coming together to agree on a common, open ecosystem they can build together and innovate, separately, on top of. They are living out 1+1=3.

I, for one, can’t wait to see the innovation.

A special thanks to Yan Fisher of Red Hat for helping me understand open programmable infrastructure concepts. He and his colleague, Kris Murphy, have a more technical blog post on Red Hat’s blog. Check it out. 

For more information on the OPI Project, visit their website and start contributing at https://github.com/opiproject/opi.  


SAN FRANCISCO—June 21, 2022—Project Nephio, an open source initiative of partners across the telecommunications industry working towards true cloud-native automation, today announced rapid community growth and momentum.

Since launching in April 2022 in partnership with Google Cloud, support has grown with 28 new organizations now part of the project (with over 50 contributing organizations), progress towards Technical Steering Committee (TSC) formation, and an upcoming Nephio Technical Summit, June 22-23, in Sunnyvale, Calif. New supporters include: A5G Networks, Alicon Sweden, Amdocs, ARGELA, CapGemini Technology, CIMI Corporation, Cohere Technologies, Coredge.io, CPQD, Deutsche Telekom, HPE, Keysight Technologies, KT, Kubermatic, Kydea, MantisNet, Matrixx, Minsait, Nabstract, Prodapt, Sandvine, SigScale, Spirent Communications, Telefónica, Tata Elxsi, Tech Mahindra, Verizon, Vodafone, Wind River, and Wipro.

Nephio’s goal is to deliver carrier-grade, simple, open, Kubernetes-based cloud-native intent automation and common automation templates that materially simplify the deployment and management of multi-vendor cloud infrastructure and network functions across large scale edge deployments. Nephio enables faster onboarding of network functions to production including provisioning of underlying cloud infrastructure with a true cloud native approach, and reduces costs of adoption of cloud and network infrastructure.

“We are pleased to see Nephio experience such rapid growth in such a short time,” said Arpit Joshipura, general manager, Networking, Edge, and IoT, the Linux Foundation. “This is a testament to the market need for open, collaborative initiatives that simplify network functions and cloud infrastructure across edge deployments.”

“We are heartened by the robust engagement from our growing Nephio community, and look forward to continuing to work together to set a new open standard for cloud-native networks to advance automation, network function deployment, and the management of user journeys,” said Gabriele Di Piazza, Senior Director, Telecom Product Management, Google Cloud.

Developer collaboration is underway with the Technical Steering Committee formation in progress. And the Nephio technical community will gather in-person and virtually for the first Nephio Technical Summit, June 22-23 in Sunnyvale, Calif. The goal is to discuss strategy, technology enhancements, roadmap, and operational aspects of cloud native automation in the Telecommunication world. More details, including how to register, are available here: https://nephio.org/events/

More information about Nephio is available at www.nephio.org

Support from contributing organizations

A5G Networks

“A5G Networks is a leader and innovator in autonomous and distributed mobile core network software over hybrid and multi-cloud. Our unique IP helps realize significant savings in capital and operating expenditures, reduces energy requirements, improves quality of user experience and catalyzes adoption of new business models. A5G Networks is excited to join the Nephio initiative for intent-based automation and unlock the true potential of 5G networks,” said Kaitki Agarwal, founder, president and CTO of A5G Networks, Inc.

Amdocs

“Amdocs is excited to join the Nephio community and accelerate the telecom industry’s journey towards cloud-native, Kubernetes-based automation and orchestration solutions. As a leader in telco automation and a founding member of the Linux Foundation’s ONAP and EMCO projects, Amdocs is thrilled to join this new community that will address the challenges coming with the era of 5G, edge and O-RAN,” said Eyal Shaked, General Manager, Open Network PBU, Amdocs.

Capgemini

“Capgemini is excited to join the Nephio community and its working groups, facilitating telecom operators’ deployments by moving the telecom industry towards a cloud-native platform and providing automation and orchestration solutions with the help of Nephio. Capgemini is an expert in O-RAN standards and has FAPI-compliant O-CU and O-DU implementations. Capgemini is thrilled to join this new community that will address the challenges coming with the era of 5G, edge and O-RAN,” said Sandip Sarkar, senior director, CTO Organization, Capgemini.

CIMI Corporation

“The Nephio project promises to provide an open-source implementation of network operator service lifecycle automation based on the cloud-standard Kubernetes orchestration platform.  That’s absolutely critical for the convergence of network and cloud software,” said Tom Nolle, president, CIMI Corporation. 

Coredge.io

Arif Khan, CEO, Coredge.io said, “Bringing agility to delivering services and centrally managing geographically distributed clouds, while keeping costs under control, is the key focus right now for operators. The Nephio project is meant to achieve this with Kubernetes-based cloud-native intent automation and automation templates. We are glad to contribute to Nephio with our learnings in the management of multi-cloud and distributed edge using intent-driven automation inside Coredge.”

Deutsche Telekom

“Large-scale automation is pivotal on our Software Telco journey. It is important that we work together as an industry on standards that will enable and simplify the cloud native automation of network functions. And we believe the Nephio project can play a fundamental role to speed up this process,” said Jochen Appel, VP Network Automation, Deutsche Telekom.

KT

“Cloud native is the next step on telcos’ journey to successful digital transformation. Automated management that enables multi-vendor support and reduces cost through efficiency and agility is also a key factor in operating cloud-based network systems. The Nephio project will help the open, wide, and easy adoption of such infrastructure. By co-working with partners in the project, we look forward to solving the interworking issues among multiple vendors and easily building up an efficient and agile orchestrated management system,” said Jongsik Lee, senior vice president, head of Infrastructure DX R&D Center, KT.

MantisNet

“MantisNet supports the Nephio initiative, specifically realizing the vision of autonomous networks. The Nephio project is complementary to the kinds of full-stack, end-to-end, programmable visibility, powered by an open, standards-based, event-driven, composable architecture, that we are developing for a broad range of new and emerging use cases to help ensure the secure and reliable operation of cloud-native 5G applications,” said Peter Dougherty, CEO, MantisNet.

Matrixx Software

“Continued advancements in the automation of distributed Cloud Native Network Functions will be critical to delivering on the promises of new differentiated 5G services, and key to new industry revenue models,” said Marc Price, CTO, Matrixx Software. 

Minsait

“As a company helping Telcos to onboard their 5G network functions, we are aware of the current challenges they are facing. Nephio is a key initiative to fulfill the promises of truly cloud native deployment and operation that specifically addresses the unique pain points of the Telco industry,” said Francisco Rodríguez, head of network virtualization at Minsait.

Nabstract.io

“Harmonization and availability of common practices that facilitate intent driven automation for deployment and management of infrastructure and cloud native Network Functions will boost the consumption of 5G connectivity capabilities across market verticals through abstracted open APIs,” said Vaibhav Mehta, Founder, Nabstract.io.

Prodapt

“Prodapt is the leading SI for the connectedness industry, with a laser focus on software-intensive networks. As a key contributor to Project Nephio, we will jointly accelerate TelCos’ journey towards becoming TechCos by co-innovating, -building, -deploying, and -operating distributed multi-cloud network functions. We believe our collaboration will set the foundation for fully automated, intent-driven cloud-native networks supporting differentiated 5G and distributed edge experiences,” said Rajiv Papneja, SVP & global head, Cloud & Network Services, Prodapt.

Sandvine

“Sandvine Application and Network Intelligence solutions provide machine learning-based 5G analytics over hybrid cloud, multicloud, and edge deployments, empowering service-providers and enterprise customers to analyze, optimize, and monetize application experiences. Sandvine is proud to be a part of the Nephio initiative for intent-based automation, a prelude to Network-as-a-Service offerings that will scale autonomously, even when comprised of different vendors’ Infrastructure/Platform/Software-aaS components,” said Samir Marwaha, Chief Strategy Officer, Sandvine.

SigScale

“SigScale believes Nephio could be instrumental in achieving a management continuum across multi-cloud, multi-vendor networks,” said Vance Shipley, CEO, SigScale.

Vodafone

“Building, deploying, and operating Telco workloads across distributed cloud environments is complex, so it is important to adopt cloud native best practices as we evolve, to enable us to achieve our goals for agility, automation, and optimisation,” said Tom Kivlin, principal Cloud Architect, Vodafone. “Project Nephio presents a great opportunity to drive the cloud native orchestration of our networks.  We look forward to working with our partners and the Nephio community to further develop and accelerate the simplification of network function orchestration.” 

Wind River

“As active supporters and contributors of key telco cloud-native open source projects such as StarlingX and the O-RAN Alliance, Wind River is excited to join Nephio. Nephio’s mission of simplifying the deployment and management of multi-vendor cloud infrastructure across large scale deployments is directly aligned with our strategy,” said Gil Hellmann, vice president, Telecom Solutions Engineering, Wind River. 

About Nephio

More information can be found at www.nephio.org.

About the Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

###

A brief account of my experience with the Linux Foundation Mentorship.

The post originally appeared on deprov477’s blog. The author, Anubhav Choudhary, participated in the Linux Foundation’s Mentorship Program in 2022. The program is designed to equip developers — many of whom are first-time open source contributors — with the necessary skills and resources to learn, experiment, and contribute effectively to open source communities. By participating in a mentorship program, mentees have the opportunity to learn from experienced open source contributors as a segue to internship and job opportunities upon graduation. If you are interested, we invite you to learn more and apply today here.


Hi everyone, I recently completed my LFX Mentorship project. I was a mentee for the LFX Mentorship summer term of 2022 at Pixie, a CNCF sandbox project donated by New Relic.

In this blog, I will be sharing my experience of the mentorship. (TL;DR: just an awesome, one-of-a-kind experience <3) If you're also applying for this (which every open-source newbie should), or have any doubts, feel free to drop me a message. I’d be more than happy to help.

What is LFX Mentorship?

Let’s start by getting to know The Linux Foundation. The Linux Foundation (LF) is a non-profit organization that supports the development of the Linux kernel and also promotes open source projects such as Kubernetes, GraphQL, Hyperledger, RISC-V, the Xen project, etc.

The Linux Foundation Mentorship is a program run by the LF that helps developers gain the necessary skills and resources to learn and contribute to open source projects through a 3- or 6-month internship. During this period, the mentee is guided through the development workflow and methodologies used by open source organizations, by working on a project.

Selection procedure

I’ve been involved in open source for some time and have been applying for the mentorship, but got rejected every time.

This time, too, I was going through the projects and found a particularly interesting one. It was about parsing a protocol. It caught my eye because at that time I was learning networking and experimenting a lot with communications. So naturally, I got interested. After reading the project details, I went to the project’s Slack channel to find a mentor. Omid, one of Pixie’s founding engineers, was kind enough to reply to my message and asked for a quick call.

I talked to him and told him about my interest and how I made a preliminary Mongo wire protocol parser using Node.js as preparation. He seemed satisfied with this and told me about further steps and time commitment.

Other formalities included submitting a cover letter and my resume.

A few days later, I got this:

[Screenshot of the LFX acceptance email: “Hi Anubhav…”]

Finally, after applying so many times, I got selected!!!

Month 1

The program started, and I was introduced to my mentor Yaxiong Zhao, another founding engineer at Pixie. He told me what we were going to do over the next 3 months. He gave me a demo of the Pixie UI, explained how it works and how Pixie captures packets (hint: eBPF), and then sent me the AMQP spec sheet and explained how it needed to be implemented in C++.

Yes, the protocol changed from Mongo to AMQP, and the language from Node.js to C++. But I guess a very important survival skill in this industry is being flexible.

So, in the first month, I gained theoretical knowledge of the AMQP wire spec and experimented with it by deploying a local RabbitMQ server and monitoring packets using Wireshark. My mentor also tried to help me build Pixie on my local machine, but we failed, even after switching distros. At last, we were able to set up my dev environment inside a container.

…quite a month

Month 2

In the first half of this month, I continued my research on AMQP (apparently, implementing a protocol requires a lot of extensive reading), found analogies between it and protocols I was already familiar with, and kept on manually experimenting with packet translation.

In the third week of the month, it was finally time for me to start writing some code. Okay, so this was the difficult part. Having very limited knowledge of C++, I continued forward. But my mentor was an angel at this point, very patiently explaining things, pointing me in the right direction, and making me understand everything that was required. I started by implementing a data structure for storing and creating relations between packets. After some effort, I finally got my PR merged.

[Screenshot: the AMQP types header file]
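
For a flavor of what this kind of protocol work involves — an illustrative Go sketch, not Pixie’s actual C++ code — AMQP 0-9-1 starts every frame with a fixed 7-byte general header:

```go
// Illustrative sketch: decoding the fixed 7-byte general frame header that
// AMQP 0-9-1 places in front of every frame (type octet, 16-bit channel,
// 32-bit payload size), followed by the payload and a 0xCE frame-end octet.
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

type FrameHeader struct {
	Type    byte   // 1=method, 2=content header, 3=body, 8=heartbeat
	Channel uint16 // channel number, big-endian on the wire
	Size    uint32 // payload length in bytes, big-endian on the wire
}

func parseFrameHeader(buf []byte) (FrameHeader, error) {
	if len(buf) < 7 {
		return FrameHeader{}, errors.New("short frame header")
	}
	return FrameHeader{
		Type:    buf[0],
		Channel: binary.BigEndian.Uint16(buf[1:3]),
		Size:    binary.BigEndian.Uint32(buf[3:7]),
	}, nil
}

func main() {
	// A heartbeat frame: type 8, channel 0, zero-length payload,
	// followed by the 0xCE frame-end octet.
	raw := []byte{8, 0, 0, 0, 0, 0, 0, 0xCE}
	h, err := parseFrameHeader(raw)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("type=%d channel=%d size=%d\n", h.Type, h.Channel, h.Size)
}
```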

Month 3

Continuing my code work, I started building the parser. Yaxiong was very patient and helpful during this time, sending me blogs and guides and clearing up every little doubt I had. Thanks to him, I was finally able to submit my preliminary code for parsing the protocol.

And the final thing was to write tests. I learned Google’s C++ testing library, wrote the code, and pushed it.

Concluding the program

Like every good thing, this also came to an end. Twelve weeks just fly by — faster than you can think. The program opened up a new world of open source and introduced me to a lot of professional tools and etiquette. I appreciate the time and effort my mentor put into this program.

Completing this internship was a dream come true, achieved while dodging tonnes of problems: internet, college, placement preparation, exams, everything. At many points in the internship, I was very certain I wouldn’t be able to complete the project. But:

At some point, everything’s gonna go south on you… everything’s going to go south and you’re going to say, this is it. This is how I end. Now you can either accept that, or you can get to work. That’s all it is. You just begin. You do the math. You solve one problem… and you solve the next one… and then the next. And if you solve enough problems, you get to come home.

— the tail end of The Martian.


Welcomes SoftBank Group to its member ranks

TOKYO, May 25, 2022 – The SODA Foundation, which hosts the SODA Open Data Framework (ODF) for data mobility from edge to core to cloud, today announced two new open source projects: Kahu and Como. Kahu streamlines data protection for Kubernetes and its application data, and Como is a virtual data lake project to enable seamless access to data stored in different clouds. The SODA Foundation also welcomes SoftBank Group as an end-user supporter and key collaboration partner on the Como project.

According to the 2021 SODA Data and Storage Trends Report, two of the top challenges in managing data in containers and cloud-native environments are availability (46%) and management tools (38%).  In direct response to the report findings, the SODA Foundation community collaborated to introduce new tooling options through the Kahu project to improve backup and restore practices critical to data availability.  Furthermore, as enterprises become more data-driven and data growth for some enterprises can exceed 10PB per year, object data management offered by the Como Project will play an important role in performance and scalability requirements for cloud-native environments.

“Data collection, management, and consumption is becoming the new competitive battlefield in IT”, said Steven Tan, chairman, SODA Foundation. “We’re excited to announce Kahu and Como as the latest advances in open source data management and storage. Our 28 members are also excited to welcome the engineers and open source community within SoftBank Group to the Foundation.” 

“Data is the fuel of our global digital economy and harnessing its power requires collaboration on a massive scale,” said Kuniyoshi Suzuki, Senior Director, Cloud Engineering, SoftBank Group. “SoftBank is excited to be joining a community of open source software developers focused on enabling improvements to data storage, recovery, and retention in cloud environments. We look forward to collaborating with the SODA Foundation and its members, while contributing to the future of this important community.”

New Open Source Releases

In addition to the announcement of Kahu and Como projects, the SODA Foundation also announced the:

  • Release of SODA Framework Madagascar v1.7.0: Formerly Open Data Framework (ODF), SODA Framework comprises independent projects initiated by the community to solve common data and storage problems faced by end users. It includes:
    • Terra: a universal SDS controller for connecting storage to Kubernetes, OpenStack, and VMware environments.
    • Delfin: a performance monitor for heterogeneous storage infrastructure in a single pane of glass.
    • Strato: a multi-cloud data controller using a common S3-compatible interface to connect to cloud storage.
    • Kahu: new project to streamline data protection for Kubernetes and application data.
  • Expansion of its Eco Project Initiative with the introduction of more open source projects: 

    • DAOS: a software-defined object store designed from the ground up for massively distributed Non-Volatile Memory (NVM), providing features such as transactional non-blocking I/O, advanced data protection with self-healing on top of commodity hardware, end-to-end data integrity, fine-grained data control and elastic storage.
    • YIG: extends MinIO backend storage by aggregating multiple Ceph clusters to form a massive storage resource pool that can easily scale up to exabyte (EB) levels with minimal performance disruption.
    • CubeFS: a cloud-native storage platform used as the underlying storage infrastructure for online applications, database or data processing services and machine learning jobs orchestrated by Kubernetes.
    • Karmada: a Kubernetes management system that enables organizations to run cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to their applications.
    • SBK: an open source software framework for the performance benchmarking of any storage system.

Conferences and Survey

  • SODACODE: this week, developers from around the world will participate in SODACODE 2022 – the Data & Storage Hackathon on May 25 – 26.  The first-of-its-kind coding event organized by SODA Foundation is open to developers from all levels ranging from beginner to advanced. The hackathon will conclude with project demonstrations, presentation sessions, panel discussions and an award ceremony for the hackathon winners.
  • Trend Survey: The SODA Foundation will release its second-annual Data and Storage Trends Survey on June 30, 2022.
  • SODACON: a technical conference held by SODA Foundation, will be held this year in Yokohama, Japan on December 7, 2022. The conference will bring together industry leaders, developers and end users to present and discuss the most recent innovations, trends, and concerns as well as practical challenges and solutions in the field of Data and Storage Management in the era of cloud-native, IoT, big data, machine learning, and more.

Additional Resources

  • Join the SODA Foundation
  • Attend SODACODE 2022 – The Data & Storage Hackathon
  • Read the 2021 Data and Storage Trends Report

About the SODA Foundation

Previously OpenSDS, the SODA Foundation is part of the Linux Foundation and includes both open source software and standards to support the increasing need for data autonomy. SODA Foundation Premier members include China Unicom, Fujitsu, Huawei, NTT Communications and Toyota Motor Corporation. Other members include China Construction Bank Fintech, Click2Cloud, GMO Pepabo, IIJ, MayaData, LinBit, Scality, Sony, Wipro and Yahoo Japan.

Media Contact

info@sodafoundation.io

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.


This article originally appeared on the LF Training Blog. You can access all of the LF Training resources and courses, including Kubernetes certifications, here.

Faseela K. is a platform development engineer with a background in open source networking. As she saw the use of containers growing more than the VMs she was working with, she began studying Kubernetes and eventually decided to pursue a Certified Kubernetes Administrator (CKA). We spoke to her about her experience.

Linux Foundation: What was the experience like taking the CKA exam?

Faseela K: I was actually nervous, as this was the first online certification exam I was taking from home, so there was some uncertainty going in. Would the proctor turn up on time? Would the cloud platform where we take the exam get stuck? Would I be able to finish the exam on time? Those and several other such questions ran through my mind. But I put aside all my concerns, had a very smooth exam experience, and was able to finish it without any difficulties.

LF: How did you prepare for the exam?

FK: I am a person who uses Kubernetes in my day to day work, so the topics in the syllabus were familiar to me. On top of that I did some practice tests and online courses. Preparing for the exam made so many of my day to day work related tasks much easier, and my level of expertise on K8s increased considerably.

LF: How did preparing for and taking CKA help you improve your skills?

FK: Though I work on K8s regularly, the range of concepts and capabilities I was using were minimal. Preparing for CKA helped me touch upon all areas of K8s, and the experience which I already had helped me get a complete end to end view of things. I can troubleshoot Kubernetes issues in a better way now, and go deep into each problem to find a solution.

LF: Tell us more about your current job role. What types of activities are you engaged in and how has the CKA helped with them?

FK: I currently work as a platform development engineer at Cisco, where we develop and maintain an enterprise Kubernetes platform. Troubleshooting, upgrading, networking, and system management of containerized platforms are part of our daily tasks, and CKA has helped me master all these areas with perfection. The training which I took to prepare for the CKA phenomenally transformed my perspective about Kubernetes administration, and this has helped me attain an end to end view of the product. Debugging any issues in the platform has become easier than ever, and the certification has given me even more confidence with fixing issues in a time sensitive manner.

LF: You mentioned to us previously you’d like to take the Certified Kubernetes Application Developer (CKAD) next; what appeals to you about that certification?

FK: I am planning to go deeper into containerized application development in my career, and hence CKAD was appealing to me. In fact, I already completed CKAD and became CKAD certified within less than a month of achieving my CKA certification. The confidence I gained after CKA helped me try the second one also faster.

LF: Tell us about your experience working on the OpenDaylight project. What prompted you to move from focusing on SDN to Kubernetes?

FK: I was previously a member of the Technical Steering Committee of the OpenDaylight project at The Linux Foundation, and made a lot of contributions to OpenDaylight. Working in open source has been the most amazing experience I have ever had in my life, and OpenDaylight gave me exposure to the various activities under LF Networking, while being a part of The Linux Foundation generally helped me engage with some of the top notch brains across organizations.

Coming together from across the globe during various conferences and DDFs, and working together across the company boundaries to solve common SDN problems has given me so much satisfaction. Over a period of time, containers were gaining traction over VMs, and I wanted to get more involved with containerization and platform development, where Kubernetes looked more promising.

LF: What are your future career goals?

FK: I intend to learn more about the internal implementation of K8s, and also to get involved with projects like Istio, service mesh, and Network Service Mesh in the future. My dream is to become a cloud native software developer who promotes containerized application development in a cloud native way.

LF: What technology are you most interested in studying next?

FK: I am currently pursuing a course on the Go programming language. I also plan to take the Certified Kubernetes Security Specialist (CKS) exam if time permits.

Testing Cloud Native Best Practices with the CNF Test Suite

Here at The Linux Foundation’s blog, we share content from our projects, such as this article by Joel Hans from the Cloud Native Computing Foundation’s blog.

The telecommunications industry is the backbone of today’s increasingly digital economies, but it faces a difficult new challenge in evolving to meet modern infrastructure practices. How did telecommunications get itself into this situation? Because the risks of incidents or downtime are so severe, the industry has focused almost exclusively on system designs that minimize risk and maximize reliability. That’s fantastic for mission-critical services, whether public air traffic control or private high-speed banking, but it emphasizes stability over productivity and over the adoption of new technologies that might make operations more resilient and performant.

Telecommunications is playing catch-up on cloud native technology, and the downstream effects are starting to show. These organizations are now behind the times on the de facto choices for enterprise and IT, which means they’re less likely to recruit the top-tier engineering talent they need. In increasingly competitive landscapes, they need to boost productivity and deploy new telephony platforms to market faster, not get mired in old custom solutions built in-house.

To make that leap from internally-trusted to industry-trusted tooling, telecommunications organizations need confidence that they’re on track to properly evolve their virtual network function (VNF) infrastructure to enable cloud native functions using Kubernetes. That’s where CNCF aims to help.

Enter the CNF Test Suite for telecommunications

A cloud native network function (CNF) is an application that implements or facilitates network functionality in a cloud native way, developed using standardized principles and consisting of at least one microservice.

And the CNF Test Suite (cncf/cnf-testsuite) is an open source test suite for telcos to know exactly how cloud native their CNFs are. It’s designed for telecommunications developers and network operators, building with Kubernetes and other cloud native technology, to validate how well they’re following cloud native principles and best practices, like immutable infrastructure, declarative APIs, and a “repeatable deployment process.”

The CNCF is bringing together the Telecom User Group (TUG) and the Cloud Native Network Function Working Group (CNF WG) to implement the CNF Test Suite, which helps telco developers and ops teams build faster feedback loops thanks to the suite’s flexible testing and optimized execution time. Because it can be integrated into any CI/CD pipeline, whether in development or pre-production checks, or run as a standalone test for a single CNF, telecommunications development teams get at-a-glance understanding of how their new deployments align with the cloud native ecosystem, including CNCF-hosted projects, technologies, and concepts.

It’s a powerful answer to a difficult question: How cloud native are we?

The CNF Test Suite leverages 10 CNCF-hosted projects and several open source tools. A modified version of CoreDNS is used as an example CNF for end users to get familiar with the test suite in five steps, and Prometheus is utilized in an observability test to check the best practice for CNFs of actively exposing metrics. The suite also packages other upstream tools, like OPA Gatekeeper, Helm linter, and Promtool, to make installation, configuration, and versioning repeatable. The CNF Test Suite team is also grateful for contributions from Kyverno on security tests, LitmusChaos for resilience tests, and Kubescape for security policies.

The minimal install for the CNF Test Suite requires only a running Kubernetes cluster, kubectl, curl, and helm, and it even supports running CNF tests on air-gapped machines or for teams that need to self-host the image repositories. Once installed, you can use an example CNF or bring your own—all you need is to supply the .yml file and run `cnf-testsuite all` to run all the available tests. There’s even a quick five-step process for deploying the suite and getting recommendations in less than 15 minutes.
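
As a rough illustration of the CI/CD integration mentioned earlier — assuming only the `cnf-testsuite all` command the article itself cites, with any report-format flags omitted because they aren’t documented here — a pipeline step could be as small as:

```go
// Hedged sketch of wrapping the suite in a CI step: run the full workload
// test suite against the CNF described by the .yml file in the working
// directory, stream output to the CI log, and fail the job on any error.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("cnf-testsuite", "all")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("cnf-testsuite reported failures: %v", err) // non-zero exit fails the pipeline
	}
}
```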

What the CNF Test Suite covers and why

At the start of 2022, the CNF Test Suite can run approximately 60 workload tests, which are segmented into 7 different categories.

Best practices

Compatibility, Installability & Upgradability: CNFs should work with any Certified Kubernetes product and any CNI-compatible network that meet their functionality requirements while using standard, in-band deployment tools such as Helm (version 3) charts. The CNF Test Suite checks whether the CNF can be horizontally and vertically scaled using `kubectl` to ensure it can leverage Kubernetes’ built-in functionality.

Microservice: The CNF should be developed and delivered as a microservice for improved agility, that is, reduced development time between deployments. Agile organizations can deploy new features more frequently, or allow multiple teams to safely deploy patches for their functional area, like fixing security vulnerabilities, without having to sync with other teams first.

State: A cloud native infrastructure should be immutable, environmentally agnostic, and resilient to node failure, which means properly managing configuration, persistent data, and state. A CNF’s configuration should be stateless, stored in a custom resource definition or a separate database rather than local storage, with any persistent data managed by StatefulSets. Separating stateful and stateless information makes for infrastructure that’s easily reproduced, consistent, disposable, and always deployed in a repeatable way.

Reliability, Resilience & Availability: Reliability in telco infrastructure is the same as in standard IT—it needs to be highly secure and reliable and support ultra-low latencies. Cloud native best practices try to reduce the mean time between failures (MTBF) by relying on redundant subcomponents with higher serviceability (lower mean time to recover, or MTTR), and then testing those assumptions through chaos engineering and self-healing configurations. The Test Suite uses a type of chaos testing to ensure CNFs are resilient to the inevitable failures of public cloud environments or issues on the orchestrator level, such as what happens when pods are unexpectedly deleted or run out of computing resources. These tests ensure CNFs meet the telco industry’s standards for reliability on non-carrier-grade shared cloud hardware/software platforms.

Observability & Diagnostics: Each piece of production cloud native infrastructure must make its internal states observable through metrics, tracing, and logging. The CNF Test Suite looks for compatibility with Fluentd, Jaeger, Promtool, Prometheus, and OpenMetrics, which help DevOps or SRE teams maintain, debug, and gather insights about the health of their production environments, which must be versioned, maintained in source control, and altered only through deployment pipelines.

Security: Cloud native security requires attention from experts at the operating system, container runtime, orchestration, application, and cloud platform levels. While many of these fall outside the scope of the CNF Test Suite, it still validates whether containers are isolated from one another and the host, do not allow privilege escalation, have defined resource limits, and are verified against common CVEs.

Configuration: Teams should manage a CNF’s configuration in a declarative manner—using ConfigMaps, Operators, or other declarative interfaces—to design the desired outcome, not how to achieve said outcome. Declarative configuration doesn’t have to be executed to be understood, making it far less prone to error than imperative configuration or even the most well-maintained sequences of `kubectl` commands.
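
A toy reconciliation loop makes that contrast visible; this is illustrative code only, not how Kubernetes implements it:

```go
// A toy reconciliation loop showing why declarative configuration is easier
// to reason about: the operator declares only the desired outcome, and the
// controller derives the imperative steps itself.
package main

import "fmt"

// Desired is the declared state, e.g. what a ConfigMap or CRD would hold.
type Desired struct{ Replicas int }

// reconcile converges the actual state toward the declared state; nobody
// ever writes the "add one, remove two" sequence by hand.
func reconcile(desired Desired, actual int) int {
	for actual != desired.Replicas {
		if actual < desired.Replicas {
			actual++ // start a replica
			fmt.Println("starting replica, now at", actual)
		} else {
			actual-- // stop a replica
			fmt.Println("stopping replica, now at", actual)
		}
	}
	return actual
}

func main() {
	fmt.Println("converged at", reconcile(Desired{Replicas: 3}, 1))
}
```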

After deploying numerous tests in each category, the CNF Test Suite outputs flexible scoring and suggestions for remediation in each category (or a single category, if you choose that in the CLI), giving you practical next steps for improving your CNF to better follow cloud native best practices. It’s a powerful—and still growing—solution for the telecommunications industry to embrace cloud native in a way that’s controllable, observable, and validated by all the expertise under the CNCF umbrella.

What’s next for the CNF Test Suite?

The Test Suite initiative will continue to work closely with the Telecom User Group (TUG) and the Cloud Native Network Function Working Group (CNF WG), collecting feedback based on real-world use cases and evolving the project. As the CNF WG publishes more recommended practices for cloud native telcos, the CNF Test Suite team will add more tests to validate each.

In fact, v0.26.0, released on February 25, 2022, includes six new workload tests, bug fixes, and improved documentation around platform tests. If you’d like to get involved and shape the future of the CNF Test Suite, there are already several ways to provide feedback or contribute code, documentation, or example CNFs.

Looking ahead: The CNF Certification Program

The CNF Test Suite is just the first exciting step in the upcoming Cloud Native Network Function (CNF) Certification Program. We’re looking forward to making the CNF Test Suite the de facto tool for network equipment providers and CNF development teams to prove—and then certify—that they’re adopting cloud native best practices in new products and services.

The wins for the telecommunications industry are clear:

  • Providers get verification that their cloud native applications and architectures adhere to cloud native best practices.
  • Their customers get verification that the cloud native services or networks they’re procuring are actually cloud native.

And they both get even better reliability, reduced risk, and lowered capital/operating costs.

We’re planning on supporting any product that runs in a certified Kubernetes environment to make sure organizations build CNFs that are compatible with any major public cloud providers or on-premises environments. We haven’t yet published the certification requirements, but they will be similar to the k8s-conformance process, where you can submit results via pull request and receive updates on your certification process over email.

As the CNF Certification Program develops, both the TUG and CNF-WG will engage with organizations that use the Test Suite heavily to make improvements and stay up-to-date on the latest cloud native best practices. We’re excited to see how the telecommunications industry evolves by adopting more cloud native principles, like loosely-coupled systems and immutability, and gathering proof of their hard work via the CNF Test Suite. That’s how we ensure a complex and essential industry takes the right next steps toward the best technology infrastructure has to offer—without sacrificing an inch on reliability.

To take the next steps with the CNF Test Suite and prepare your organization for the upcoming CNF Certification Program, schedule a personalized CNF Test Suite demo or attend Cloud Native Telco Day, a co-located event at KubeCon + CloudNativeCon Europe 2022 on May 16, 2022.

There is an exciting convergence in the networking industry around open source, and the energy is palpable. At LF Networking, we have a unique perspective as the largest open source initiative in the networking space with the broadest set of projects that make up the diverse and evolving open source networking stack. LF Networking provides platforms and building blocks across the networking industry that enable rapid interoperability, deployment, and adoption and is the nexus for 5G innovation and integration. 

LF Networking has now tapped this confluence of industry efforts to structure a new initiative to develop 5G Super Blueprints for the ecosystem. Major integrations between the building blocks are now underway—between ONAP and O-RAN, Akraino and Magma, Anuket and Kubernetes, and more.

“Super” means that we’re integrating multiple projects and umbrellas (such as LF Edge, Magma, CNCF, O-RAN Alliance, LF Energy, and more) into an end-to-end framework for the underlying infrastructure and application layers across edge, access, and core. This end-to-end integration enables top industry use cases, such as fixed wireless, mobile broadband, private 5G, multi-access, IoT, voice services, network slicing, and more. In short, 5G Super Blueprints are a vehicle to collaborate and create end-to-end 5G solutions.

Major industry verticals banking on this convergence and roadmap include the global telcos that you’d expect, but 5G knows no boundaries, and we’re seeing deep engagement from cloud service providers, enterprise IT, governments, and even energy.

5G is poised to modernize today’s energy grid with awareness monitoring across Distribution Systems and more.

This will roll out in 3 phases, the first encompassing 5G Core + Multi-access Edge Computing (MEC) using emulators. The second phase introduces commercial RANs to end-to-end 5G, and the third phase will integrate Open Radio Access Network (O-RAN). 

The 5G Super Blueprint is an open initiative, and participation is open to anyone. To learn more, please see the 5G Super Blueprint FAQ and watch the video, What is the 5G Super Blueprint?, from Next Gen Infra.

Participation in this group has tripled over the last few weeks! If you’re ready to join us, please indicate your interest in participation on the 5G Super Blueprint webpage, and follow the onboarding steps on the 5G Super Blueprint Wiki. Send any questions to superblueprint@lfnetworking.org.

In mid-February, the Linux Foundation announced it had signed a collaboration agreement with the Defense Advanced Research Projects Agency (DARPA), enabling US Government suppliers to collaborate on a common open source platform that will enable the adoption of 5G wireless and edge technologies by the government. Governments face similar issues to enterprise end-users — if all their suppliers deliver incompatible solutions, the integration burden escalates exponentially.  

The first collaboration, Open Programmable Secure 5G (OPS-5G), currently in the formative stages, will be used to create open source software and systems enabling end-to-end 5G and follow-on mobile networks. 

The road to open source influencing 5G: The First, Second, and Third Waves of Open Source

If we examine the history of open source, it is informative to observe it from the perspective of evolutionary waves. Many open-source projects began as single technical projects, with specific objectives, such as building an operating system kernel or an application. This isolated, single project approach can be viewed as the first wave of open source.

We can view the second wave of open source as creating platforms seeking to address a broad horizontal solution, such as a cloud or networking stack or a machine learning and data platform.

The third wave of open source collaboration goes beyond isolated projects and integrates them for a common platform for a specific industry vertical. Additionally, the third wave often focuses on reducing fragmentation — you commonly will see a conformance program or a specification or standard that anyone in the industry can cite in procurement contracts.

Industry conformance becomes important as specific solutions are taken to market and how cross-industry solutions are being built — especially now that we have technologies requiring cross-industry interaction, such as end-to-end 5G, the edge, or even cloud-native applications and environments that span any industry vertical. 

The third wave of open source also seeks to provide comprehensive end-to-end solutions for enterprises and verticals, large institutional organizations, and government agencies. In this case, the community of government suppliers will be building an open source 5G stack used in enterprise networking applications. The end-to-end open source integration and collaboration supported by commercial investment with innovative products, services, and solutions accelerate the technology adoption and transformation.

Why DARPA chose to partner with the Linux Foundation

DARPA at the US Department of Defense has tens of thousands of contractors supplying networking solutions for government facilities and remote locations. However, it doesn’t want dozens, hundreds, or thousands of unique and incompatible hardware and software solutions originating from its large contractor and supplier ecosystem. Instead, it desires a portable, open-access standard that provides transparency, so that advanced software tools and systems can be applied to a common code base that various groups in the government can build on. The goal is to have a common framework that decouples hardware and software requirements and enables adoption by more groups within the government.

Naturally, as a large end-user, the government wants its suppliers to focus on delivering secure solutions. A common framework can ideally decrease the security complexity versus having disparate, fragmented systems. 

The Linux Foundation is also the home of nearly all the important open source projects in the 5G and networking space. Of the $54B in Linux Foundation community software projects that have been valued using the COCOMO2 model, the open source projects assisting with building a 5G stack are estimated to be worth about $25B in shared technology investment. The LF Networking projects alone have been valued at $7.4B.

The support programs at the Linux Foundation provide the key foundations for a shared pool of community innovation. These programs include IP structure and legal frameworks, an open and transparent development process, neutral governance, conformance, and DevOps infrastructure for end-to-end project lifecycle and code management. The foundation is therefore uniquely suited to be the home for a community-driven effort to define an open source 5G end-to-end architecture, create and run the open source projects that embody that architecture, and support its integration for scaling out and accelerating adoption.

The foundations of a complete open source 5G stack

The Linux Foundation has worked in the telecommunications industry since early in its existence, starting with the Carrier Grade Linux initiative to identify requirements and build features enabling the Linux kernel to address telco needs. In 2013, the Linux Foundation's open source networking platform started with bespoke projects such as OpenDaylight, the software-defined networking controller. OPNFV (now Anuket), the network function virtualization stack, was introduced in 2014-2015, followed by the first release of Tungsten Fabric, the automated software-defined networking stack. FD.io, the secure networking data plane, was announced in 2016 as a sister project of the Data Plane Development Kit (DPDK), released into open source in 2010.


[Figure: Linux Foundation & Other Open Source Component Projects for 5G]

At the time, the telecom and wireless carrier industry sought to commoditize and accelerate innovation across specific pieces of the stack as software-defined networking became part of its digital transformation. Since the introduction of these projects at LFN, the industry has seen heavy adoption and significant community contribution by the largest telecom carriers and service providers worldwide. This history is chronicled in detail in our whitepaper, Software-Defined Vertical Industries: Transformation Through Open Source.

The member companies' work will require robust frameworks to ensure that changes to these projects are contributed back upstream to the source projects. Upstreaming, a key benefit of open source collaboration, allows contributions specific to this 5G effort to flow back into their originating projects, improving the software for every end user and effort that uses them.

The Linux Foundation networking stack continues to evolve and expand into additional projects as members seek to innovate and commoditize key technology areas through shared investment. In February 2021, Facebook contributed the Magma project. Unlike the platform infrastructure projects listed above, Magma is a network function application that is core to 5G network operations.

The E2E 5G Super Blueprint is being developed by the LFN Demo working group. This is an open collaboration, and we encourage you to join us.

Building through organic growth and cross-pollination of the open source networking and cloud community

Tier 2 operators, rural operators, and governments worldwide want to reap the benefits of economic innovation from 5G, as well as its potential cost savings. How is this accomplished?

With this joint announcement and its DARPA supplier community collaboration, the Linux Foundation’s existing projects can help serve the requirements of other large end-users. Open source communities are advancing and innovating some of the most important and exciting technologies of our time. It’s always interesting to have an opportunity to apply the results of these communities to new use cases. 

The Linux Foundation understands the critical dynamic of cross-pollination between community-driven open source projects needed to help make an ecosystem successful. Its proven governance model has demonstrated the ability to maintain and mature open source projects over time and make them all work together in one single, cohesive ecosystem. 

As a broad set of contributors works on components of an open source stack for 5G, there will be cross-community interactions. For example, Project EVE, the cloud-native edge computing platform, may work with Project Zephyr, the scalable real-time operating system (RTOS) kernel, so that EVE can orchestrate Zephyr devices. It all depends on contributors' self-interest and motivation to contribute functionality that enables these projects to work together. Similarly, ONAP, the network automation/orchestration platform, is tightly integrated with Akraino, giving it architectural deployment templates built around network edge clouds and multi-edge clouds.

An open source platform has implications not just for new business opportunities for government suppliers but also for other institutions. The projects within an open source platform have open interfaces that can be integrated and used with other software, so that other large end users, like the World Bank, can obtain validated and tested architectural blueprints with which they can deploy effective 5G solutions in many host countries, providing them a turnkey stack. This will enable them to encourage providers, through competition or challenges native to each country's commercial ecosystem, to implement those networks.

This is a true solutions-oriented open source 5G stack for enterprises, governments, and the world.

New Janssen Project seeks to build the world’s fastest and most comprehensive cloud native identity and access management software platform

SAN FRANCISCO, Calif., December 8, 2020 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the Janssen Project, a cloud native identity and access management software platform that prioritizes security and performance for our digital society. Janssen is based on the Gluu Server and benefits from a rich set of signing and encryption functionalities. Engineers from IDEMIA, F5, BioID, Couchbase and Gluu will make up the Technical Steering Committee.

Online trust is a fundamental challenge to our digital society. The Internet has connected us. But at the same time, it has undermined trust. Digital identity starts with a connection between a person and a digital device. Identity software conveys the integrity of that connection from the user’s device to a complex web of backend services. Solving the challenge of digital identity is foundational to achieving trustworthy online security.

While other identity and access management platforms exist, the Janssen Project seeks to tackle the most challenging security and performance requirements. Based on the latest code that powers the Gluu Server, which has passed more OpenID self-certification tests than any other platform, Janssen starts with a rich set of signing and encryption functionality that can be used for high-assurance transactions. Having demonstrated throughput of more than one billion authentications per day, the software can also handle the most demanding concurrency requirements, thanks to Kubernetes auto-scaling and advances in persistence.
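To make that signing functionality concrete, the sketch below shows what a relying application typically does with an OpenID Connect provider such as Janssen: verify an RS256-signed ID token against the provider's published JWKS key set, here using the PyJWT library. The issuer URL, JWKS path, and client ID are hypothetical placeholders, not actual Janssen endpoints.

```python
# Minimal sketch: verifying an RS256-signed OpenID Connect ID token against
# a provider's published JWKS. The issuer, JWKS path, and client ID below
# are hypothetical placeholders, not actual Janssen endpoints.
import jwt  # PyJWT >= 2.0, installed with the 'cryptography' extra
from jwt import PyJWKClient

ISSUER = "https://idp.example.com"   # hypothetical Janssen deployment
CLIENT_ID = "my-client-id"           # hypothetical registered client
JWKS_URL = f"{ISSUER}/jwks"          # hypothetical key-set endpoint

def verify_id_token(id_token: str) -> dict:
    """Fetch the signing key matching the token's 'kid' header, then verify
    the signature plus the issuer, audience, and expiry claims."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer=ISSUER,
    )

# Usage (raises jwt.InvalidTokenError on any verification failure):
#   claims = verify_id_token(raw_token)
#   print(claims["sub"])  # the stable subject identifier for the user
```

Verification is deliberately strict: pinning the algorithm list and checking issuer and audience closes off common token-substitution attacks, which is the kind of high-assurance behavior the project emphasizes.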

“Trust and security are not competitive advantages–no one wins in an insecure society with low trust,” said Mike Schwartz, Chair of the Janssen Project Technical Steering Committee. “In the world of software, nothing builds trust like the open source development methodology. For organizations who cannot outsource trust, the Janssen Project strives to bring transparency, best practices and collective governance to the long-term maintenance of this important effort. The Linux Foundation provides the neutral and proven forum for organizations to collaborate on this work.”

The Gluu engineering teams chose the Linux Foundation to host this community because of the Foundation’s priority of transparency in the development process and its formal framework for governance to facilitate collaboration among commercial partners.

New digital identity challenges arise constantly, and new standards are developed to address them. Open source ecosystems are an engine for innovation to filter and adapt to changing requirements. The Janssen Project Technical Steering Committee (“TSC”) will help govern priorities according to the charter.  The initial TSC includes:

  • Michael Schwartz, TSC Chair, CEO Gluu
  • Rajesh Bavanantham, Domain Architect at F5 Networks/NGINX
  • Rod Boothby, Head of Digital Trust at Santander
  • Will Cayo, Director of Software Engineering at IDEMIA Digital Labs
  • Ian McCloy, Principal Product Manager at Couchbase
  • Alexander Werner, Software Engineer at BioID

For more information, see the project GitHub site: https://github.com/JanssenProject

Supporting Comments

BioID

“BioID’s biometric authentication service provides GDPR compliant, device independent, 3D liveness detection and facial recognition APIs, supported out-of-the-box by the Janssen project. Exposing BioID’s capabilities via OpenID Connect makes sense in many cases, especially as part of the rollout for a large organization.  The availability of a high-quality open source implementation of OpenID Connect gives us more options to build products and to expand the options for our customers to deploy our technology,” said Alexander Werner, Software Engineer at BioID.

Couchbase

“The Couchbase database is supported today in the Janssen project for both caching and persistence. This makes sense given the distributed, elastic, in-memory requirements for a multi-cloud, hyper-scale identity service. Contributing to this project aligns with our goal to advance open source infrastructure software that results in more options for the Couchbase community,” said Ian McCloy, Principal Product Manager at Couchbase.

F5

“It’s an immense pleasure to join the Janssen Project, as it’s aimed to improve the performance, reliability and security on OAuth2 Components that are similar to NGINX Principles. Being part of Linux Foundation, the Janssen Project will be well governed and evolve with the open source community to achieve its goals,” said Rajesh Bavanantham, F5.

IDEMIA

“I have been a part of the Gluu community for many years. I’m excited to see the project moving to the Linux Foundation where we can collaborate with an even larger ecosystem of individuals and companies,” said Will Cayo, IDEMIA.


About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,500 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.


###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.


Media Contact
Jennifer Cloer
Story Changes Culture
503-867-2304
jennifer@storychangesculture.com