This post is authored by Hayden Blauzvern and originally appeared on Sigstore’s blog. Sigstore is a new standard for signing, verifying, and protecting software. It is a project of the Linux Foundation. 

Developers, package maintainers, and enterprises that would like to adopt Sigstore may already sign published artifacts. Signers may have existing procedures to securely store and use signing keys. Sigstore can be used to sign artifacts with existing self-managed, long-lived signing keys. Sigstore provides a simple user experience for signing, verification, and generating structured signature metadata for artifacts and container signatures. Sigstore also offers a community-operated, free-to-use transparency log for auditing signature generation.

Sigstore additionally has the ability to use code signing certificates with short-lived signing keys bound to OpenID Connect identities. This signing approach offers simplicity due to the lack of key management; however, this may be too drastic of a change for enterprises that have existing infrastructure for signing. This blog post outlines strategies to ease adoption of Sigstore while still using existing signing approaches.

Signing with self-managed, long-lived keys

Developers who maintain their own signing keys but want to migrate to Sigstore can first switch to using Cosign to generate a signature over an artifact. Cosign supports importing an existing RSA, ECDSA, or Ed25519 PEM-encoded PKCS#1 or PKCS#8 key with cosign import-key-pair --key key.pem, and can sign and verify with cosign sign-blob --key cosign.key artifact-path and cosign verify-blob --key cosign.pub artifact-path.
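
Written out as a sequence of shell commands, the flow above looks roughly like this (a minimal sketch; file names follow the commands quoted above, and depending on the Cosign version the imported key files may be written under a different default name):

# Import an existing PEM-encoded PKCS#1 or PKCS#8 key for use with Cosign
cosign import-key-pair --key key.pem

# Sign the artifact; cosign sign-blob writes the base64-encoded signature to stdout
cosign sign-blob --key cosign.key artifact-path > artifact.sig

# Verify the artifact against the saved signature using the public key
cosign verify-blob --key cosign.pub --signature artifact.sig artifact-path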

Benefits

  • Developers can get accustomed to Sigstore tooling to sign and verify artifacts.
  • Sigstore tooling can be integrated into CI/CD pipelines.
  • For signing containers, signature metadata is published with the OCI image in an OCI registry.

Signing with self-managed keys with auditability

While maintaining their own signing keys, developers can increase auditability of signing events by publishing signatures to the Sigstore transparency log, Rekor. This allows developers to audit when signatures are generated for artifacts they maintain, and also monitor when their signing key is used to create a signature.

Developers can upload a signature to the transparency log during signing with COSIGN_EXPERIMENTAL=1 cosign sign-blob --key cosign.key artifact-path. If developers would like to use their own signing infrastructure while still publishing to a transparency log, they can use the Rekor CLI or API. To upload an artifact and cryptographically verify its inclusion in the log using the Rekor CLI:

rekor-cli upload --rekor_server https://rekor.sigstore.dev \
  --signature <artifact_signature> \
  --public-key <your_public_key> \
  --artifact <url_to_artifact|local_path>

rekor-cli verify --rekor_server https://rekor.sigstore.dev \
  --signature <artifact_signature> \
  --public-key <your_public_key> \
  --artifact <url_to_artifact|local_path>

In addition to PEM-encoded certificates and public keys, Sigstore supports uploading many different key formats, including PGP, Minisign, SSH, PKCS#7, and TUF. When uploading using the Rekor CLI, specify the --pki-format flag. For example, to upload an artifact signed with a PGP key:

gpg --armor -u user@example.com --output signature.asc --detach-sig package.tar.gz

gpg --export --armor "user@example.com" > public.key

rekor-cli upload --rekor_server https://rekor.sigstore.dev \
  --signature signature.asc \
  --public-key public.key \
  --pki-format=pgp \
  --artifact package.tar.gz

Benefits

  • Developers begin to publish signing events for auditability.
  • Artifact consumers can create a verification policy that requires a signature be published to a transparency log.

Self-managed keys with identity-based code signing certificates and auditability

When a developer requests a code signing certificate from Fulcio, the Sigstore certificate authority, Fulcio binds an OpenID Connect identity to a key, allowing for a verification policy based on identity rather than on a key. Developers can request a code signing certificate from Fulcio for a self-managed, long-lived key, sign an artifact with Cosign, and upload the artifact signature to the transparency log.

However, artifact consumers can still fail open during verification (allow the artifact while logging the failure) if they do not want to take a hard dependency on Sigstore (that is, require that Sigstore services be used for signature generation). A developer can use their self-managed key to generate a signature, and a verifier can simply extract the verification key from the certificate without verifying the certificate's signature. (Note that verification can occur offline, since inclusion in the transparency log can be verified using a persisted signed bundle from Rekor, and code signing certificates can be verified with the CA root certificate. See Cosign's verification code for an example of verifying the Rekor bundle.)
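
As a rough illustration of such a fail-open check, the following sketch extracts the key from the certificate and verifies the signature with standard OpenSSL tooling. It assumes an ECDSA or RSA signature over SHA-256, base64-encoded as produced by cosign sign-blob; file names are placeholders:

# Extract the verification key from the code signing certificate without validating its chain
openssl x509 -in codesigning.pem -pubkey -noout > signer.pub

# Decode the base64-encoded signature produced during signing
base64 -d artifact.sig.b64 > artifact.sig

# Fail open: log a verification failure but still allow the artifact
openssl dgst -sha256 -verify signer.pub -signature artifact.sig artifact-path \
  || echo "WARNING: signature verification failed for artifact-path"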

Once a consumer takes a hard dependency on Sigstore, a CI/CD pipeline can move to fail-closed (forbid the artifact if verification fails).

Benefits

  • A stronger verification policy that enforces both the presence of the signature in a transparency log and the identity of the signer.
  • Verification policies can be enforced fail-closed.

Identity-based (“keyless”) signing

This final step is added for completeness. Signing is done using code signing certificates, and signatures must be published to a transparency log for verification. With identity-based signing, fail-closed is the only option, since Sigstore services must be online to retrieve code signing certificates and append entries to the transparency log. Developers will no longer need to maintain signing keys.
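
For illustration, keyless signing with Cosign is the same sign-blob invocation without a key; a minimal sketch follows (flag names reflect Cosign 1.x at the time of this post and should be treated as an assumption, and the certificate and signature files are whatever was produced during signing):

# Keyless signing: opens an OIDC flow, obtains a short-lived certificate from Fulcio,
# and uploads the signature to the Rekor transparency log
COSIGN_EXPERIMENTAL=1 cosign sign-blob artifact-path

# Keyless verification uses the certificate and signature produced during signing
COSIGN_EXPERIMENTAL=1 cosign verify-blob --cert codesigning.pem --signature artifact.sig artifact-path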

Conclusion

The Sigstore tooling and infrastructure can be used as a whole or modularly. Each separate integration can help to improve the security of artifact distribution while allowing for incremental updates and verifying each step of the integration.

Delta Lake's expanding ecosystem of connectors

This post originally appeared on the Delta Lake blog

We are happy to announce the release of Delta Lake 2.0 (PyPI, Maven, release notes) on Apache Spark™ 3.2, with features including but not limited to those reviewed below.

The significance of Delta Lake 2.0 is not just a number – though it is timed quite nicely with Delta Lake’s 3rd birthday. It reiterates our collective commitment to the open-sourcing of Delta Lake, as announced in Michael Armbrust’s Day 1 keynote at Data + AI Summit 2022.

What’s new in Delta Lake 2.0?

There have been a lot of new features released in the last year between Delta Lake 1.0, 1.2, and now 2.0. This blog will review a few of these specific features that are going to have a large impact on your workload.

Delta 1.2 vs Delta 2.0 chart

Improving data skipping

When exploring or slicing data using dashboards, data practitioners will often run queries with a specific filter in place. As a result, the matching data is often buried in a large table, requiring Delta Lake to read a significant amount of data. With data skipping via column statistics and Z-Order, the data can be clustered by the most common filters used in queries — sorting the table to skip irrelevant data, which can dramatically increase query performance.

Support for data skipping via column statistics

When querying any table from HDFS or cloud object storage, by default, your query engine will scan all of the files that make up your table. This can be inefficient, especially if you only need a smaller subset of data. To improve this process, as part of the Delta Lake 1.2 release, we included support for data skipping by utilizing the Delta table’s column statistics.

For example, when running the following query, you do not want to unnecessarily read files outside of the year or uid ranges.

Example: a SELECT query on the events table filtered on year and uid.
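
A hypothetical query of that shape might look like the following (the table name and literal values are illustrative only):

-- Illustrative filter on year and uid; files whose column statistics fall
-- outside these ranges can be skipped entirely
SELECT *
FROM events
WHERE year = 2020
  AND uid = 24000;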

When Delta Lake writes a table, it will automatically collect the minimum and maximum values for each column and store them directly in the Delta log (i.e., column statistics). Therefore, when a query engine reads the transaction log, read queries can skip files whose min/max ranges fall entirely outside the query’s filters.

This approach is more efficient than row-group filtering within the Parquet file itself, as you do not need to read the Parquet footer. For more information on the latter process, please refer to How Apache Spark™ performs a fast count using the parquet metadata. For more information on data skipping, please refer to data skipping.

Support Z-Order clustering of data to reduce the amount of data read

But data skipping using column statistics is only one part of the solution. To maximize data skipping, you also need the ability to cluster the data. As implied previously, data skipping is most effective when files have a very small minimum/maximum range. While sorting the data can help, it is most effective when applied to a single column.

OPTIMIZE deltaTable ZORDER BY (x, y)

Regular sorting of data by primary and secondary columns (left) and 2-dimensional Z-order data clustering for two columns (right).

But with Z-order, its space-filling curve provides better multi-column data clustering. This data clustering allows column stats to be more effective in skipping data based on filters in a query. See the documentation and this blog for more details.

Support Change Data Feed on Delta tables

One of the biggest value propositions of Delta Lake is its ability to maintain data reliability in the face of changing records brought on by data streams. However, identifying those changed records downstream has traditionally required scanning and reading the entire table, creating significant overhead that can slow performance.

With Change Data Feed (CDF), you can now read a Delta table’s change feed at the row level rather than the entire table to capture and manage changes for up-to-date silver and gold tables. This improves your data pipeline performance and simplifies its operations.

To enable CDF, you must explicitly use one of the following methods:

  • New table: Set the table property delta.enableChangeDataFeed = true in the CREATE TABLE command.

    CREATE TABLE student (id INT, name STRING, age INT) TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • Existing table: Set the table property delta.enableChangeDataFeed = true in the ALTER TABLE command.

    ALTER TABLE myDeltaTable SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • All new tables:

    SET spark.databricks.delta.properties.defaults.enableChangeDataFeed = true;

An important thing to remember is that once you enable the change data feed option for a table, you can no longer write to the table using Delta Lake 1.2.1 or below; however, you can always read the table. In addition, only changes made after you enable the change data feed are recorded; past changes to a table are not captured.
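
Once enabled, downstream consumers can read just the change feed instead of the full table. As a hedged sketch: some engines expose a table_changes table-valued function for this (for example, Databricks SQL), while open-source Delta Lake 2.0 exposes the change feed through the Spark DataFrame reader with the readChangeFeed option. A version-range query looks roughly like this:

-- Read the row-level changes committed between table versions 1 and 5
-- (engine support for table_changes varies; this is an illustrative sketch)
SELECT * FROM table_changes('myDeltaTable', 1, 5);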

So when should you enable Change Data Feed? The following use cases should drive that decision.

  • Silver and Gold tables: When you want to improve Delta Lake performance by streaming row-level changes for up-to-date silver and gold tables. This is especially apparent following MERGE, UPDATE, or DELETE operations, accelerating and simplifying ETL operations.
  • Transmit changes: Send a change data feed to downstream systems such as Kafka or RDBMS that can use the feed to process later stages of data pipelines incrementally.
  • Audit trail table: Capturing the change data feed as a Delta table provides perpetual storage and efficient query capability to see all changes over time, including when deletes occur and what updates were made.

Original table (v1), change data (merged as v2), and the resulting change data feed output.

See the documentation for more details.

Support for dropping columns

For versions of Delta Lake prior to 1.2, there was a requirement for Parquet files to store data with the same column name as the table schema. Delta Lake 1.2 introduced a mapping between the logical column name and the physical column name in those Parquet files. While the physical names remain unique, logical column renames become a simple change in the mapping, and logical column names can contain arbitrary characters while the physical name remains Parquet-compliant.

Before column mapping and with column mapping

As part of the Delta Lake 2.0 release, we leveraged column mapping so that dropping a column is a metadata operation. Instead of physically modifying all of the files of the underlying table to drop a column, Delta Lake simply modifies the transaction log to reflect the column removal. Run the following SQL command to drop a column:

ALTER TABLE myDeltaTable DROP COLUMN myColumn
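
Note that DROP COLUMN relies on column mapping being enabled for the table. If it is not already enabled, the table properties can be set along these lines (a sketch; the protocol version numbers follow the Delta column mapping documentation at the time and should be treated as an assumption):

-- Enable column mapping in 'name' mode so logical and physical column names can diverge
ALTER TABLE myDeltaTable SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5'
)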

See documentation for more details.

Support for Dynamic Partition Overwrites

In addition, Delta Lake 2.0 now supports dynamic partition overwrite mode for partitioned tables; that is, it overwrites only the partitions that receive new data at runtime.

When in dynamic partition overwrite mode, we overwrite all existing data in each logical partition for which the write will commit new data. Any existing logical partitions for which the write does not contain data will remain unchanged. This mode is only applicable when data is being written in overwrite mode: either INSERT OVERWRITE in SQL, or a DataFrame write with df.write.mode("overwrite"). In SQL, you can run the following commands:

SET spark.sql.sources.partitionOverwriteMode=dynamic;
INSERT OVERWRITE TABLE default.people10m SELECT * FROM morePeople;

Note that dynamic partition overwrite conflicts with the option replaceWhere for partitioned tables. See the documentation for details.

Additional Features in Delta Lake 2.0

In the spirit of performance optimizations, Delta Lake 2.0.0 also includes these additional features:

  • Support for idempotent writes to Delta tables to enable fault-tolerant retry of Delta table writing jobs without writing the data multiple times to the table. See the documentation for more details.
  • Experimental support for multi-part checkpoints to split the Delta Lake checkpoint into multiple parts to speed up writing and reading the checkpoints. See the documentation for more details.
  • Other notable changes
    • Improve generated column data skipping by adding support for skipping by generated columns derived from nested columns.
    • Improve table schema validation by blocking data types that are unsupported in Delta Lake.
    • Support creating a Delta Lake table with an empty schema.
    • Change the behavior of DROP CONSTRAINT to throw an error when the constraint does not exist. Before this version, the command returned silently.
    • Fix symlink manifest generation when partition values contain spaces.
    • Fix an issue where incorrect commit stats were collected.
    • More ways to access the Delta table OPTIMIZE file compaction command.

Building a Robust Data Ecosystem

As noted in Michael Armbrust’s Day 1 keynote and our Dive into Delta Lake 2.0 session, a fundamental aspect of Delta Lake is the robustness of its data ecosystem.

As data volume and variety continue to rise, the need to integrate with the most common ingestion engines is critical. For example, we’ve recently announced integrations with Apache Flink, Presto, and Trino — allowing you to read and write to Delta Lake directly from these popular engines. Check out Delta Lake > Integrations for the latest integrations.

Delta's expanding ecosystem of connectors

Delta Lake will be relied on even more to bring reliability and improved performance to data lakes by providing ACID transactions and unifying streaming and batch transactions on top of existing cloud data stores. By building connectors with the most popular compute engines and technologies, the appeal of Delta Lake will continue to increase — driving more growth in the community and rapid adoption of the technology across the most innovative and largest enterprises in the world.

Updates on Community Expansion and Growth

We are proud of the community and the tremendous work over the years to deliver the most reliable, scalable, and performant table storage format for the lakehouse to ensure consistent high-quality data. None of this would be possible without the contributions from the open-source community. In the span of a year, we have seen the number of downloads skyrocket from 685K monthly downloads to over 7M downloads/month. As noted in the following figure, this growth is in no small part due to the quickly expanding Delta ecosystem.

The most widely used lakehouse format in the world

All of this activity and the growth in unique contributions — including commits, PRs, changesets, and bug fixes — has culminated in an increase in contributor strength by 633% during the last three years (Source: The Linux Foundation Insights).

But it is important to remember that we could not have done this without the contributions of the community.

Credits

Saying this, we wanted to provide a quick shout-out to all of those involved with the release of Delta Lake 2.0: Adam Binford, Alkis Evlogimenos, Allison Portis, Ankur Dave, Bingkun Pan, Burak Yilmaz, Chang Yong Lik, Chen Qingzhi, Denny Lee, Eric Chang, Felipe Pessoto, Fred Liu, Fu Chen, Gaurav Rupnar, Grzegorz Kołakowski, Hussein Nagree, Jacek Laskowski, Jackie Zhang, Jiaan Geng, Jintao Shen, Jintian Liang, John O’Dwyer, Junyong Lee, Kam Cheung Ting, Karen Feng, Koert Kuipers, Lars Kroll, Liwen Sun, Lukas Rupprecht, Max Gekk, Michael Mengarelli, Min Yang, Naga Raju Bhanoori, Nick Grigoriev, Nick Karpov, Ole Sasse, Patrick Grandjean, Peng Zhong, Prakhar Jain, Rahul Shivu Mahadev, Rajesh Parangi, Ruslan Dautkhanov, Sabir Akhadov, Scott Sandre, Serge Rielau, Shixiong Zhu, Shoumik Palkar, Tathagata Das, Terry Kim, Tyson Condie, Venki Korukanti, Vini Jaiswal, Wenchen Fan, Xinyi, Yijia Cui, Yousry Mohamed.

We’d also like to thank Nick Karpov and Scott Sandre for their help with this post.

How can you help?

We’re always excited to work with current and new community members. If you’re interested in helping the Delta Lake project, please join our community today through many forums, including GitHub, Slack, Twitter, LinkedIn, YouTube, and Google Groups.

Join the community today

The following post originally appeared on Medium. The author, Ruchi Pakhle, participated in our LFX Mentorship program this past spring.

Hey everyone!
I am Ruchi Pakhle, currently pursuing my Bachelor’s in Computer Engineering from MGM’s College of Engineering & Technology. I am a passionate developer and an open-source enthusiast. I recently graduated from the LFX Mentorship Program. In this blog post, I will share my experience of contributing to Open Horizon, a platform for deploying container-based workloads and related machine learning models to compute nodes/clusters at the edge.

Background

I have been an active contributor to open-source projects via different programs like GirlScript Summer of Code, Script Winter of Code, and so on. Through these programs I contributed to different beginner-level open-source projects. After doing this for almost a year, I contributed to different organizations on different projects, including documentation and code. One very random morning, applications for LFX opened up and I saw various posts about it on LinkedIn; among them was a post from my very dear friend Unnati Chhabra, who had just graduated from the program. So I went ahead, checked which organization fit my skill set, and decided to give it a shot.

Why did I apply to Open Horizon?

I was very interested in DevOps and Cloud Native technologies and wanted to get started with them, but I had been procrastinating a lot and did not know how to pave my path ahead. I was constantly looking for opportunities I could get my hands on. As Open Horizon works exactly on DevOps and Cloud Native technologies, I applied straight away to their project, which had two slots open for the spring cohort. I joined their Element channel and started becoming active by contributing to the project, engaging with the community, and reading more about the architecture, trying to understand it well by referring to their YouTube videos. You can contribute to Open Horizon here.

Application process

The Linux Foundation opens LFX Mentorship applications thrice a year, for spring, summer, and winter cohorts, each spanning three months. I applied to the spring cohort, for which applications opened around February 2022, and I submitted my application on 4th February 2022 for the Open Horizon project. I remember there were three documents mandatory for submitting the application:

1. Updated Resume/CV

2. Cover Letter

(this is very very important in terms of your selection so cover everything in your cover letter and maybe add links to your projects, achievements, or wherever you think they can add great value)

The cover letter should cover these points primarily👇

  • How did you find out about our mentorship program?
  • Why are you interested in this program?
  • What experience and knowledge/skills do you have that are applicable to this program?
  • What do you hope to get out of this mentorship experience?

3. A permission document from your university stating that it has no obligation over the entire span of the mentorship was also required (this varies from org to org and may not be asked for at all)

Selection Mail

The LFX acceptance mail was a major achievement for me, as at that time I was constantly getting rejections and had absolutely no idea how things were going to work out for me. I was constantly doubting myself, so this mail not only boosted my confidence but also gave me a ray of hope that I could achieve things by working hard towards them consistently. A major thanks to my mentors, Joe Pearson and Troy Fine, for believing in me and giving me this opportunity.⭐

My Mentorship Journey

Starting from the day I applied to LFX until getting selected as an LFX Mentee and working successfully for over three and a half months, it felt surreal. I have contributed to open-source projects and organizations before, but being a part of LFX gave me a huge learning curve and a sense of credibility and ownership that I wouldn’t have gotten anywhere else.

I still remember setting up the mgmt-hub all-in-one script locally; I thought it would be a cakewalk, but it was not. I literally tried every single day to run the script, but somehow it would end up giving some error; I would google the errors and apply the fixes, but it would still fail. One thing I did consistently was share my progress regularly with my mentor, Troy. No matter how often the script failed, I communicated that to him and sent him logs, and he would suggest probable solutions, but the script still failed. I then messaged the open-horizon-examples group, and Joe helped with my doubts; a huge thanks to him and Troy for patiently helping me figure things out. After over a month, on April 1st, the script finally executed successfully, and I then started to work on the issues assigned by Troy.

These three months taught me to be consistent no matter the circumstances and to work patiently, which I wouldn’t have learned in my college. This experience will no doubt make me a better developer and engineer, along with the best practices I picked up. A timeline of my journey has been shared here.

  1. Check out my contributions here
  2. Check out the open-horizon-services repo

Concluding the program

The LFX Mentorship Program was a great experience and gave me a learning curve I wouldn’t have gotten any other way. The program not only encourages developers to kick-start their open-source journey but also provides great perks like networking and learning from the best minds. I would like to thank my mentors Joe Pearson, Troy Fine, and Glen Darling, because without their support and patience this wouldn’t have been possible. I will be forever grateful for this opportunity.

Special thanks to my mentor Troy for always being patient with me. These kind words will remain with me always, even though the program has ended.

And yes, how can I forget to plug the awesome swag? Special thanks and gratitude to my mentor Joe Pearson for sending me such cool swag and this super cool note ❤

handwritten thank you note from Joe Pearson

If you have any queries, connect with me on LinkedIn or Twitter and I would be happy to help you out 😀

This article originally appeared on the LF Public Health project’s blog.

The past three years have redefined the practice and management of public health on a global scale. What will we need in order to support innovation over the next three years?

In May 2022, ASTHO (Association of State and Territorial Health Officials) held a forward-looking panel at their TechXPO on public health innovation, with a specific focus on public-private partnerships. Jim St. Clair, the Executive Director of Linux Foundation Public Health, spoke alongside representatives from MITRE, Amazon Web Services, and the Washington State Department of Health.

Three concepts appeared and reappeared in the panel’s discussion: reimagining partnerships; sustainability and governance; and design for the future of public health. In this blog post, we dive into each of these critical concepts and what they mean for open-source communities.

Reimagining partnerships

The TechXPO panel opened with a discussion on partnerships for data modernization in public health, a trending topic at the TechXPO conference. Dr. Anderson (MITRE) noted that today’s public health projects demand “not just a ‘public-private’ partnership, but a ‘public-private-community-based partnership’.” As vaccine rollouts, digital applications, and environmental health interventions continue to be deployed at scale, the need for community involvement in public health will only increase.

However, community partnerships should not be viewed as just another “box to check” in public health. Rather, partnerships with communities are a transformative way to gain feedback while improving usability and effectiveness in public-health interventions. As an example, Dr. Anderson referenced the successful VCI (Vaccination Credential Initiative) project, mentioning “When states began to partner to provide data… and offered the chance for individuals to provide feedback… the more eyeballs on the data, the more accurate the data was.”

Cardea, an LFPH project that focuses on digital identity, has also benefited from public-private-community-based partnerships. Over the past two years, Cardea has run three community hackathons to test interoperability among other tools that use Cardea’s codebase. Trevor Butterworth, VP of Cardea’s parent company, Indicio, explained his thoughts on community involvement in open source: “The more people use an open source solution, the better the solution becomes through stress testing and innovation; the better it becomes, the more it will scale because more people will want to use it.” Cardea’s public and private-sector partnerships also include Indicio, SITA, and the Aruba Health Department, demonstrating the potential for diverse stakeholders to unite around public-health goals.

Community groups are also particularly well-positioned to drive innovation in public health: they are often attuned to pressing issues that might be otherwise missed by institutional stakeholders. One standout example is the Institute for Exceptional Care (IEC), an LFPH member organization focused on serving individuals with intellectual and developmental disabilities, “founded by health care professionals, many driven by personal experience with a disabled loved one.” IEC recently presented a webinar on surfacing intellectual and developmental disabilities in healthcare data: both the webinar and Q&A showcased the on-the-ground knowledge of this deeply involved, solution-oriented community.

Sustainability and governance

Sustainability is at the heart of every viable open source project, and must begin with a complete, consensus-driven strategy. As James Daniel (AWS) mentioned in the TechXPO panel, it is crucial to determine “exactly what a public health department wants to accomplish, [and] what their goals are” before a solution is put together. Defining these needs and goals is also essential for long-term sustainability and governance, as mentioned by Dr. Umair Shah (WADOH): “You don’t want a scenario where you start something and it stutters, gets interrupted and goes away. You could even make the argument that it’s better to not have started it in the first place.”

Questions of sustainability and project direction can often be answered by bringing private and public interests to the same table before the project starts. Together, these interests can determine how a potential open-source solution could be developed and used. As Jim St. Clair mentioned in the panel: “Ascertaining where there are shared interests and shared values is something that the private sector can help broker.” Even if a solution is ultimately not adopted, or a partnership never forms, a frank discussion of concerns and ideas among private- and public-sector stakeholders can help clarify the long-term capabilities and interests of all stakeholders involved.

Moreover, a transparent discussion of public health priorities, questions, and ideas among state governments, private enterprises, and nonprofits can help drive forward innovation and improvements even when there is no specific project at hand. To this end, LFPH hosts a public Slack channel as well as weekly Technical Advisory Council (TAC) meetings in which we host new project ideas and presentations. TAC discussions have included concepts for event-driven architecture for healthcare data, a public health data sharing mesh, and “digital twins” for informatics and research.

Design for the future of public health

Better partnerships, sustainability, and governance provide exciting prospects for what can be accomplished in open-source public health projects in the coming years. As Jim St. Clair (LFPH) mentioned in the TechXPO panel: “How do we then leverage these partnerships to ask ‘What else is there about disease investigative technology that we could consider? What other diseases, what other challenges have public health authorities always had?’” These challenges will not be tackled through closed source solutions—rather, the success of interoperable, open-source credentialing and exposure notifications systems during the pandemic has shown that open-source has the upper hand when creating scalable, successful, and international solutions.

Jim St. Clair is not only optimistic about tackling new challenges, but also about taking on established challenges that remain pressing: “Now that we’ve had a crisis that enabled these capabilities around contact tracing and notifications… [they] could be leveraged to expand into and improve upon all of these other traditional areas that are still burning concerns in public health.” For example, take one long-running challenge in United States healthcare: “Where do we begin… to help drive down the cost and improve performance and efficiency with Medicaid delivery? … What new strategies could we apply in population health that begin to address cost-effective care-delivery patient-centric models?”

Large-scale healthcare and public-health challenges such as mental health, communicable diseases, diabetes—and even reforming Medicaid—will only be accomplished by consistently bringing all stakeholders to the table, determining how to sustainably support projects, and providing transparent value to patients, populations and public sector agencies. LFPH has pursued a shared vision around leveraging open source to improve our communities, carrying forward the same resolve as the diverse groups that originally came together to create COVID-19 solutions. The open-source journey in public health is only beginning.

Neville Spiteri

This post originally appeared on the Academy Software Foundation’s (ASWF) blog. The ASWF works to increase the quality and quantity of contributions to the content creation industry’s open source software base. 

Tell us a bit about yourself – how did you get your start in visual effects and/or animation? What was your major in college?

I started experimenting with the BASIC programming language when I was 12 years old on a Sinclair ZX81 home computer, playing a game called “Lunar Lander” which ran in 1K of RAM and took about 5 minutes to load from cassette tape.

I have a Bachelor’s degree in Cognitive Science and Computer Science.

My first job out of college was as a Graphics Engineer at Wavefront Technologies, working on the precursor to the Maya 1.0 3D animation system, which is still used today. Then I took a Digital Artist role at Digital Domain.

What is your current role?

Co-Founder / CEO at Wevr. I’m currently focused on Wevr Virtual Studio – a cloud platform we’re developing for interactive creators and teams to more easily build their projects on game engines.

What was the first film or show you ever worked on? What was your role?

First film credit: True Lies, Digital Artist.

What has been your favorite film or show to work on and why?

TheBlu 1.0 digital ocean platform. Why? We recently celebrated TheBlu’s 10-year anniversary. TheBlu franchise is still alive today. At the core of TheBlu was/is a creator platform enabling 3D interactive artists/developers around the world to co-create the 3D species and habitats in TheBlu. The app itself was a mostly decentralized peer-to-peer simulation that ran on distributed computers with fish swimming across the Internet. The core tenets of TheBlu 1.0 are still core to me and Wevr today, as we participate more and more in the evolving Metaverse.

How did you first learn about open source software?

Linux and Python were my best friends in 2000.

What do you like about open source software? What do you dislike?

Likes: Transparent, voluntary collaboration.

Dislikes: Nothing.

What is your vision for the Open Source community and the Academy Software Foundation?

Drive international awareness of the Foundation and OSS projects.

Where do you hope to see the Foundation in 5 years?

A global leader in best practices for real-time engine-based production through international training and education.

What do you like to do in your free time?

Read books, listen to podcasts, watch documentaries, meditation, swimming, and efoiling!

Follow Neville on Twitter and connect on LinkedIn.  

LF Energy OpenGEH (Green Energy Hub) project

The OpenGEH Project is one of the many projects at LF Energy, and we want to share it here on the LF blog. This post originally appeared on the LF Energy site.

OpenGEH (GEH stands for Green Energy Hub) enables fast, flexible settlement and hourly measurements of the production and consumption of electricity. OpenGEH seeks to help utilities onboard increased levels of renewables by reducing the administrative barriers of market-based coordination. By utilizing a modern DataHub, built on a modular microservices architecture, OpenGEH is able to store billions of data points covering the entire workflow triggered by the production and consumption of electricity.

The ambition of OpenGEH is to use digitalization as a way to accelerate a market-driven transition towards a sustainable and efficient energy system. The platform provides a modern foundation for new market participants and facilitates new business models through digital partnerships. The goal is to create access to relevant data and insights from the energy market and thereby accelerate the energy transition.

OpenGEH was initially built by Energinet (the Danish TSO) in partnership with Microsoft; Energinet was seeking a critical leverage point to accelerate the Danish national commitment to 100% renewable energy in its electricity system by 2030. For most utilities, onboarding renewables creates a technical challenge that also brings choreography and administrative hurdles. Data becomes the mechanism that enables the market coordination leading to increased decarbonization. The software was contributed to the LF Energy Foundation by Energinet.

Energinet sees open source and shared development as an opportunity to reduce the cost of software while simultaneously increasing the quality and pace of development. It is an approach that they see gaining prominence in TSO cooperation. Energinet is not an IT company, and therefore does not sell systems or services or operate them for other TSOs. Open source, coupled with an intellectual property license that encourages collaboration, will ensure that OpenGEH continues to improve by encouraging a community of developers to add new features and functionality.


The Architectural Principles behind OpenGEH

By implementing Domain-Driven Design, OpenGEH has divided the overall problem into smaller independent domains. This gives developers the option of using only the domains necessary for the functionality they need. As domains trigger events when data changes, the other domains listen for these events to keep the most up-to-date version of the data.

The architecture supports open collaboration on smaller parts of OpenGEH. New domains can be added by contributors to extend OpenGEH’s functionality when needed to accelerate the green transition.

The Green Energy Hub Domains

The Green Energy Hub system consists of two different types of domains:

  • A domain that is responsible for handling a subset of business processes.
  • A domain that is responsible for handling an internal part of the system (like log accumulation, secret sharing, or similar).

Below is a list of these domains, and the business flows they are responsible for.

  • Business Process Domains
    • Metering Point
      • Create metering point
      • Submission of master data – grid company
      • Close down metering point
      • Connection of metering point with status new
      • Change of settlement method
      • Disconnection and reconnecting of metering point
      • Meter management
      • Update production obligation
      • Request for service from grid company
    • Aggregations
      • Submission of calculated energy time series
      • Request for historical data
      • Request for calculated energy time series
      • Aggregation of wholesale services
      • Request for aggregated tariffs
      • Request for settlement basis
    • Time Series
      • Submission of metered data for metering point
      • Send missing data log
      • Request for metered data for a metering point
    • Charges
      • Request for aggregated subscriptions or fees
      • Update subscription price list
      • Update fee price list
      • Update tariff price list
      • Request price list
      • Settlement master data for a metering point – subscription, fee and tariff links
      • Request for settlement master data for metering point
    • Market Roles
      • Change of supplier
      • End of supply
      • Managing an incorrect change of supplier
      • Move-in
      • Move-out
      • Incorrect move
      • Submission of customer master data by balance supplier
      • Initiate cancel change of supplier by customer
      • Change of supplier at short notice
      • Mandatory change of supplier for metering point
      • Submission of contact address from grid company
      • Change of BRP for energy supplier
    • Data Requests
      • Master data request
  • System Domains

CRob on open source software security education on TechStrong TV

In the Open Source Software Security Mobilization Plan released this past May, the very first stream – of the 10 recommended – is to “Deliver baseline secure software development education and certification to all.”

As the plan states, it is rare to find a software developer who receives formal training in writing software securely. The plan advocates that a modest amount of training – from 10 to ideally 40-50 hours – could make a significant difference in developer contributions to more secure software from the beginning of the software development life cycle. The Linux Foundation now offers a free course, Developing Secure Software, which is 15 hours of training across 3 modules (security principles, implementation considerations & software verification).

The plan proposes, “bringing together a small team to iterate and improve such training materials so they can be considered industry standard, and then driving demand for those courses and certifications through partnerships with educational institutions of all kinds, coding academies and accelerators, and major employers to both train their own employees and require certification for job applicants.”

Also in the plan is Stream 5: “Establish the OpenSSF Open Source Security Incident Response Team, security experts who can step in to assist open source projects during critical times when responding to a vulnerability.” This would be a small team of professional software developers, vetted for security and trained on the specifics of the languages and frameworks being used by a given OSS project, with 30-40 experts available to go out in teams of 2-3 for any given crisis.

Christopher “CRob” Robinson is instrumental to the concepts behind, and the implementation of, both of these recommendations. He is the Director of Security Communications at Intel Product Assurance and also serves on the OpenSSF Technical Advisory Committee. At Open Source Summit North America, he sat down with TechStrong TV host Alan Shimel to talk about the origin of his nickname and, more importantly, software security education and the Open Source Product Security Incident Response Team (PSIRT) – streams 1 and 5 in the Plan.  Here are some key takeaways:

  • I’ve been with the OpenSSF for over two years, almost from the beginning. And currently I am the working group lead for the Developer Best Practices Working Group and the Vulnerability Disclosures Working Group. I sit on the Technical Advisory Committee. We help kind of shape, steer the strategy for the Foundation. I’m on the Public Policy and Government Affairs Committee. And I’m just now the owner of two brand new SIGs, special interest groups, underneath the working group. So I’m in charge of the Education SIG and the Open Source Cert SIG. We’re going to create a PSIRT for open source.
  • The idea is to try to find a collection of experts from around the industry that understand how to do incident response and also understand how to get things fixed within open source communities. . . I think, ultimately, it’s going to be kind of a mentorship program for upstream communities to teach them how to do incident response. We know and help them work with security researchers and reporters and also help make sure that they’ve got tools and processes in place so they can be successful.
  • A lot of the conference this week is talking about how we need to get more training and certification and education into the hands of developers. We’ve created another kind of Tiger team, and  we’re gonna be focusing on this. And my friend, Dr. David Wheeler, he had a big announcement where we have existing body of material, the secure coding fundamentals class, and he was able to transform that into SCORM. So now anybody who has a SCORM learning management system has the ability to leverage this free developer secure software training on their internal learning management systems.
  • We have a lot of different learners. We have brand new students, we have people in the middle of their careers, people are making career changes. We have to kind of serve all these different constituents.

Of course, he had a lot more to say. You can watch the full interview, including how CRob got his nickname, and read the transcript below.

Alan Shimel 00:06
Hey, everyone back here live in Austin at the Linux Foundation Open Source Summit. You know, we’ve had a very security-heavy lineup this past week. And for good reason, security is top of mind to everyone. The OpenSSF. Of course, Monday was OpenSSF day, but it’s more than that. More than Monday, we really talked a lot about software supply chains and SBOMs and just securing open source software. My next guest is CGrove or CRbn? No, no, you know, I had CRob in my mind, and that’s what messed me up. Let’s go back to Crob. Excuse me. Now check this out a little thing myself. So Crob was actually the emcee of OpenSSF day on Monday.

CRob 01:01
I had an amazing hat. You did. And you didn’t wear it here. I came from outside with tacos, and it was all sweaty.

Alan Shimel 01:08
We just have two bald guys here. Anyway,

CRob 01:14
safety in numbers.

Alan Shimel 01:15
Well, yeah, that’s true. It’s true. Wear the hat next time. But anyway, first of all, welcome, man. Thank you.

CRob 01:21
It’s wonderful to be here. I’m excited to have this little chat.

Alan Shimel 01:24
We are excited to have you on here. So before we jump into Monday, and OpenSSF day, in that whole thing, you’re with Intel, full disclosure, what do you do in your day job.

CRob 01:36
So my day job, I am the Director of Security Communications. So primarily our function is as incidents happen, so there’s a new vulnerability discovered, or researchers find some report on our portfolio, I help kind of evaluate that and kind of determine how we’re going to communicate it.

Alan Shimel 01:56
Love it, and your role within OpenSSF?

CRob 02:01
So I’ve been with the OpenSSF for over two years, almost from the beginning. And currently I am the working group lead for the developer best practices working group and the vulnerability disclosures working group. I sit on the technical advisory committee, so we help kind of shape, steer the strategy for the foundation. I’m on the Public Policy and Government Affairs Committee. And I’m just now the owner of two brand new SIGs, special interest groups underneath the working group. So I’m in charge of the education SIG, and the open source cert SIG. So we’re going to create a PSIRT for open source.

Alan Shimel 02:38
That’s beautiful man. That is really and let’s talk about that SIRT. Yeah, it’ll be through Linux Foundation.

Unknown Speaker 02:47
Yeah, we are still. So back in May the foundation and some contributors created the mobilization plan. I’m sure people have talked about it this week. 10 point plan addressing trying to help respond to things like the White House executive order. And it’s a plan that says these 10 different work streams we feel we can improve the security posture of open source software. And the open source SIRT was stream five. And the idea is to try to find a collection of experts from around the industry that understand how to do incident response, and also understand how to get things fixed within open source communities.

CRob 03:27
So we’re we have our first meeting for the SIG the first week of July. And we’re going to try to refine the initial plan and kind of spec it out and see how we want to react. But I think ultimately, it’s going to be kind of a mentorship program for upstream communities to teach them how to do incident response. We know and help them work with security researchers and reporters, and also help make sure that they’ve got tools and processes in place so they can be successful.

Alan Shimel 03:56
I love it. Yeah. Let’s be honest, this is a piece of work you cut out for yourself.

Unknown Speaker 04:04
Yes, one of my other groups I work with is a group called First, the Form of Incident Response and Security Teams. And I’m one of the authors of the PSIRT services framework. So I have a little help. So I understand that you got a vendor back on that, right? Yeah, we’re gonna lean into that as kind of a model to start with, and kind of see what we need to change to make it work for open source communities.

Alan Shimel 04:27
I actually love that good thing. When do you think we might see something on this? No pressure.

Unknown Speaker 04:32
No pressure? Oh, definitely. The meetings will be public. So all of that will go up into YouTube. So you’ll be able to observe kind of the progress of the group. I expect we’re going to take probably at least a month to refine the current plan and submit a proposal back to the governing board. We think this is actionable. So hopefully before the end of the year, maybe late fall, we’ll actually be able to start taking action.

Alan Shimel 04:57
All right. Love it. Love it. Gotta ask you, Where does the name come from?

Unknown Speaker 05:03
So the name comes from Novell GroupWise. So back in the day, our network was run by an HP VAX. But our email system plugged into the VAX and you were limited by the characters of your name. So my name Chris Robinson. So his first little first letter, first name, next seven of your last, so I ended up being Crobinsoe. And we hired a developer that walked in, he looked at it, and he’s like, ah, Crobinso the chromosome, right? Got shortened to Crob.

Alan Shimel 05:36
Okay, not very cool. So thank you. Not Crob. That’s right. Thank you Novell is right. That was very interesting days. Remember.

Unknown Speaker 05:45
I love that stuff. I was Novell engineer for many years.

Alan Shimel 05:49
That’s when certs really meant something certified Novell. You are? Yeah. Where are they now? See, I think the last time I was out in Utah. Now I was I think it was 2005. I was out in Utah, they would do if there was something they were working on.

Unknown Speaker 06:14
They bought SUSE. And we thought that that would be pretty amazing to kind of incorporate this Novell had some amazing tools. Absolutely. So we thought that would be really awesome than the NDS was the best. But we were hoping that through SUSE they be able to channel these tools and get broader adoption.

Alan Shimel 06:30
No, I think for whatever reason. There’s a lot of companies from back in those days, right, that we think about, indeed, Yeah. Anyway,

Unknown Speaker 06:45
My other working group. So we have more, but wait, there’s more, we have more. So the developer best practices working group is spinning off and education sake. So a lot of the conference this week is talking about how we need to get more training and certification and education into the hands of developers. So again, we’ve created another kind of Tiger team, are we’re gonna be focusing on this. And my friend, Dr. David Wheeler, David A. Wheeler, he had a big announcement where we have existing body of material, the secure coding fundamentals class, and he was able to transform that into SCORM. So now that anybody who has a SCORM learning management system has the ability to leverage this free developer secure software training, really, yes.

Alan Shimel 07:35
And that’s the SCORM. system. If you have SCORM, you can leverage this.

Unknown Speaker 07:39
free, there’s some rules behind it. But yeah, absolutely. It’s plugged in, we’re looking to get that donated to higher education, historically black colleges and universities (HBCU), trade schools like DeVry, wherever

Alan Shimel 07:52
Get it into people’s hands. That’s the thing to do. So that get that kind of stuff gets me really excited. I’ll be honest with you, you know, all too often, we’re good in the tech industry for forming a foundation and, and a SIG and an advisory board. But rubber meets the road, when you can teach people coming up. Right, so they come in with the right habits, because you know, it’s harder to teach the old dogs, the new tricks, right.

CRob 08:23
I can’t take the class. I know the brains full.

Alan Shimel 08:26
Yeah, no, I hear you. But no, but not only that, look, if you’ve been developing software for 25 years, and I’m gonna come and tell you, Well, what you doing is wrong. And I need you to start doing it this way. Now, I’m gonna make some progress. Because no one wants to say I know everything. And I’m not changing. People don’t just say that. But it’s just almost subconsciously, it’s a lot harder.

Unknown Speaker 08:51
It definitely is. And that’s kind of informing our approach. So we have a traditional, about 20 hours worth of traditional class material. So we’re looking at how we can transform that material into things like webinars and podcasts, and maybe a boot camp. So maybe next year, at the Open Source Summit, we might be able to offer a training class where you walk in, take the class, and walk out with a certification.

CRob 09:17
And then thinking about, you know, we have a lot of different learners. We have, you know, brand new students, we have people in the middle of their careers, people are making career changes. So we have to kind of serve all these different constituents. And that’s absolutely true. And that is one of the problems. Kind of the user journeys we’re trying to fulfill is this. I’m an existing developer, how do I gain new skills or refine what I have?

Alan Shimel 09:40
Let me ask you a question. So, I come from the security side of that. Nothing the matter with putting the emphasis on developers developing more secure software. But shouldn’t we also be developing for security people to better secure open source software.

CRob 10:02
And the foundation itself does have many, it’s multipronged. And so to help like a practitioner, we have things like our scorecard and all stars. And then we have a project criticality score. And actually, we just I, there was a great session just a couple hours ago, by one of my peers, Jacque Chester, and it was kind of a, if you’re a risk guy, it was kind of based off of Open Fair, which is a risk management methodology, kind of explaining how we can evaluate open source projects, share that information with downstream consumers and risk management teams or procurement teams, and kind of give them a quantitative assessment of this is what risks you could incur by these projects.

CRob 10:44
So if you have two projects that do the same thing, one might have a higher or lower score will provide you the data that you could make your own assessment off of that and make your own judgment. So that the foundation is also looking at just many different avenues to get this out there, focused on practitioners and developers, and hopefully by this kind of hydraulic approach, it will be successful. It’ll stick.

Alan Shimel 11:07
you know what you just put as much stuff on the wall and whatever sticks sticks man up. So anyway, hey Crob. Right. I got it right. Yep. All right. Thank you for stopping by. So thank you for all you do, right. I mean, it’s a community thing. These are not paid type of gigs, right. Sure. Yeah. No, and I thank you for your for your time and efforts on that.

CRob 11:30
Thank you very much. All right.

Alan Shimel 11:31
Hey, keep up the great work. We’re gonna take a break. I think we’ve got another interview coming up in a moment. And we’re here live in Austin.

By Ashwin Ramaswami

Last month, we concluded the Linux Foundation’s 2022 Open Source Summit North America (OSS NA), where developers, technologists, and community leaders from industry, academia, and government converged in Austin, Texas, from June 21-24 to talk about all things open source. Participants and speakers highlighted open source innovation and efforts to ensure a sustainable open source ecosystem.

What did the summit tell us about the state of OSS security? Several parts of the conference addressed different aspects of this issue – OpenSSF Day, Critical Software Summit, SupplyChainSecurityCon, and the Global Security Vulnerability Summit. Overall, the summit demonstrated an increased emphasis on open source security as a community effort with various stakeholders. More ambitious and innovative approaches to handling the open source security problem – including collaboration, tools, and training – were also introduced. Finally, the summit highlighted the importance for open source users to give back to the community and contribute upstream to the projects they depend on.

Let’s explore these ideas in more detail!

Open source security as a community effort

Brian Behlendorf, GM Open Source Security Foundation at OpenSSF Day, June 2022

Open source security is not just an isolated effort by users or maintainers of open source software. As OSS NA showed, the stakes of open source security have turned it into a community effort, where a wide variety of stakeholders have an interest and are beginning to get involved.

  • As Todd Moore (IBM) mentioned in his keynote, incidents such as Log4Shell have made open source security a bigger priority for governments – and it is important for existing open source stakeholders, both users and maintainers, to work as a community to take a cohesive message back to government, articulating our community’s needs and how we are responding to this challenge.
  • Speakers at a panel discussion with the Atlantic Council’s Cyber Statecraft Initiative and the Open Source Security Foundation (OpenSSF) discussed the summit held by OpenSSF in Washington, DC on May 12 and 13, where representatives from industry and government met to develop the Open Source Software Security Mobilization Plan, a $150 million plan for better securing the open source ecosystem.
  • A panel discussion explored how major businesses are working together to improve the security of the open source supply chain, particularly through the governance structure of the OpenSSF.

New approaches to address open source security

Eric Brewer, VP Infrastructure, Google at Open Source Summit North America Keynote, June 2022

OSS NA featured several initiatives to address fundamental open source security issues, many of which were particularly ambitious and innovative.

  • The OpenSSF’s Alpha-Omega Project was announced to address software vulnerabilities for OSS projects that are most critical (alpha) and at the long tail (omega).
  • Eric Brewer (Google) gave a keynote discussing the fundamental problem of ensuring accountability in the open source software supply chain. One way of solving this is through curation: creating a repository of vetted and secure packages.
  • Standards continue to be important, as always: Art Manion (CERT/CC) discussed the history and future of the CVE Program, while Jennings Aske (New York-Presbyterian Hospital) and Melba Lopez (IBM) discussed the importance of a Software Bill of Materials (SBOM).
  • Kate Stewart (Linux Foundation VP of Dependable Systems) led a BOF session on SBOMs within the emerging regulatory landscape: how SBOMs are already being generated today for embedded systems by open source projects such as Zephyr and Yocto, the gaps practitioners are encountering in the field, and the ways we might tackle them.
  • The importance of security tooling was emphasized, with discussions on tools such as sigstore, automation of security checks through Infrastructure as Code tools, and CI/CD pipelines.
  • David Wheeler (Linux Foundation Director of Open Source Security) discussed how education in secure software development is critical to ensuring open source software security. Courses like the OpenSSF’s Secure Software Development Fundamentals Courses are available to help developers learn this topic.

Giving back to the community

Open Source Summit North America Audience, June 2022

Participants at the summit recognized that open source security is ultimately a matter of community, governance, and sustainability. Projects that don’t have the right resources or governance structure may not be able to keep their software secure or accept the right funding to do so.

  • Steve Hendrick (Linux Foundation) and Matt Jarvis (Snyk) discussed the release of the 2022 State of Open Source Security report from Snyk and the Linux Foundation. The report noted that open source software is often a one-way street where users see significant benefits with minimal cost or investment. It recommends that organizations close the loop and give back to the OSS projects they use so that those projects can meet user expectations.
  • Aeva Black (Microsoft) discussed approaches to community risk management through drafting and enforcing a code of conduct, and how ignoring community health can lead to sometimes catastrophic technical outcomes for OSS Projects.
  • Sean Goggins (CHAOSS) discussed the relationship between community health and vulnerability mitigation in open source projects by using metrics models from the CHAOSS projects.
  • Margaret Tucker and Justin Colannino (GitHub) discussed the role that package registries have in open source security, beginning to formulate some principles that would balance these registries’ responsibility for safety and reliability with the freedom and creativity of package maintainers.
  • Naveen Srinivasan (Endor Labs) and Laurent Simon (Google) explored how the OpenSSF Scorecard makes it easier to analyze the security of open source projects and proactively improve it.
  • Amir Montazery (OSTIF) discussed the Open Source Technology Improvement Fund’s efforts to help OSS maintainers work with security experts to improve their projects’ security posture.

Conclusion

In sum, the talks and conversations at OSS NA helped paint a picture of how key stakeholders in the open source software ecosystem – OSS communities, industry, academia, and government – are conceptualizing big-picture issues and directing efforts around OSS security.

But these initiatives and talks still have a lot of room for input! Whether individually or through your institution, consider adding your voice to this discussion as we continue to support the open source software community. Join an OpenSSF working group, another initiative, or contribute upstream to open source projects that you depend on.

Speak at ONE Summit

The top reasons to share your expertise at ONE Summit, the industry’s leading Open Networking & Edge event

To submit a presentation proposal, please visit our Call For Proposals, but hurry! Submissions are due July 29.

ONE Summit 2022

ONE Summit is the ONE networking technology event connecting Access, Edge, Core and Cloud. It brings together technical and business decision makers for in-depth, interactive conversations around cutting-edge innovations and the operational support necessary to leverage them.

Newly revamped post-pandemic, ONE Summit’s focus is to enable interactive, real-world conversations on the evolution of technology in the distributed networking space. From Communications Service Providers to Government and civil infrastructure, from Retail to the leaders of Industry 4.0, you will be able to collaborate on innovations to truly support your digital transformation.

Inspired by the impact of integration efforts like the 5G Super Blueprint, ONE Summit fosters the collaborative discussion required to truly scale software for 5G, IoT, the enterprise, and beyond.

Top 5 reasons to speak at ONE Summit:

1) Collaborate with thought leaders from across a growing global ecosystem. 

ONE Summit enables the technical and business collaboration necessary to shape the future of open networking and edge computing. The free exchange and presentation of ideas is crucial for the growth of all open source projects and their continued ability to innovate.

2) Immerse yourself in innovative technologies such as 5G, Open RAN, IoT, Enterprise, Cloud Native and more.

Learn about and build on the successes of Linux Foundation networking and edge project communities, with collaboration across LF Networking, LF Edge, O-RAN SC, Magma, CNCF, LF AI & Data, and more, enabling attendees to visualize and build their new networking stacks.

3) Learn from your peers across industry verticals solving common challenges. 

Networking decision makers gather to address architectural and technical issues as well as business use case needs. ONE Summit provides a forum where solutions, best practices, use cases, and more – based on open source projects under the LF Networking umbrella and across the industry – can be shared with the global ecosystem.

4) Unleash the power of open. In a market now built on open source, this is critical.

Virtually all industries have embraced open source in their operations. Collaboration among industry peers is what makes the use of open source in business and the related business models possible.

5) Demonstrate your leadership.

ONE Summit attendees come from across a growing ecosystem of enterprises, governments, global service providers, telcos, and cloud providers. With a targeted focus on architects and technical decision makers, ONE Summit is a great place to get your message out.

Meet the Program Committee

ONE Summit would not be possible without the involvement and support of our community. The Program Committee is composed of business and open source leaders who are actively involved in the work of developing the next generation of networking and edge technologies for all market verticals. This year’s ONE Summit Program Committee is composed of:

  • Rabi Abdel, Principal Consultant, Global Telecom Practice, Amazon Web Services
  • Lisa Caywood, Senior Principal Community Architect, Red Hat
  • Wenjing Chu, Senior Director of Technology Strategy – Trust for the Internet of the Future, Futurewei Technologies
  • Roy Chua, Founder and Principal, AvidThink
  • Beth Cohen, Cloud Product Technologist, Verizon
  • Marc Fiedler, Architect for Real-time Network Service Management, Deutsche Telekom
  • Daniel Havey, Program Manager, Microsoft
  • Kandan Kathirvel, Product Lead, Telco Cloud & Orchestration, Google Cloud
  • Trishan de Lanerolle, Principal Technical Program Manager, Office of the CTO, Equinix
  • Catherine Lefevre, AVP, Technology Services – Network Systems Common Platform & Services, AT&T
  • Tom Nadeau, Fellow, Vice President & Chief Cloud Architect, Spirent Communications
  • Joe Pearson, Edge Computing and Technology Strategist, IBM Networking & Edge Computing CTO Group, IBM
  • Jim St. Leger, Director, Open Strategy, Intel
  • Tracy Van Brakle, Principal Member of Technical Staff, AT&T
  • Olivier Smith, Office of the CTO, Director, Matrixx Software
  • Cedric Thienot, Co-Founder and CTO, Firecell
  • Qihui Zhao, NFV Researcher & Network Engineer, CMCC
  • Amy Zwarico, Director, CyberSecurity, Chief Security Office, AT&T

Who attends

Past ONE Summit attendee demographics. Source: ONE Summit 2022 prospectus

Join attendees from all market verticals and all organizational levels from around the world. Attendees don’t have to be part of a project to contribute to the discussion or to participate in open collaboration sessions with other attendees. In fact, joining planned sessions, open discussions, and collaboration sessions is the best way to get involved with open source projects under the LF Networking umbrella.

To learn more about ONE Summit 2022 in Seattle, please visit the ONE Summit site.

About LF Networking

Now in its fifth year as an umbrella organization, LF Networking (LFN) and its projects enable organizations across the globe to more quickly and effectively achieve digital transformation via the community’s shared development efforts. This includes companies of all sizes and types that rely on LFN’s breadth of commercially-ready ecosystem offerings, all based on open source innovation spearheaded within the LF Networking community. To learn more about LFN, please visit https://www.lfnetworking.org. To learn more about the Linux Foundation, please visit https://linuxfoundation.org

The author is Heather Kirksey, VP, Community & Ecosystem, LF Networking.

Step-by-step process leads Bosch to Hyperledger Labs with Perun layer-2 protocol

This post originally appeared on the Hyperledger Foundation’s blog. You can read the full case study here.

Some years ago, researchers realized that IoT devices would need to buy and sell from one another. In this “Economy of Things,” the items to be traded will include power, data, and connectivity. Most transactions will be fast, low value, and high frequency.

For a company like The Bosch Group that’s active in everything from autonomous vehicles to thermal plants, the Economy of Things will touch many lines of business. That’s why, in 2017, the company’s advanced research group, Bosch Research, was looking to find a way to scale up blockchain transactions to support the Economy of Things.

Bosch set out to meet that requirement by leveraging a specific, step-by-step open source strategy for developing new markets:

  1. Identify a requirement
  2. Set goals
  3. Consider the terrain
  4. Build a partnership
  5. Pick a suitable license
  6. Use open source archetypes

The goals were to lead an effort to create standards for the Economy of Things and to build a framework where different partners could work together.

A survey of likely partners led the Bosch team to Perun, an early layer-2 protocol that passes state information off-chain through virtual channels. Bosch joined forces with several academics to implement this protocol and start creating an ecosystem.
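
To make the idea behind a layer-2 state channel concrete, here is a minimal, self-contained sketch in Go. It is purely illustrative and is not the go-perun API: the type and function names (ChannelState, signState, and so on) are hypothetical. It shows two parties co-signing versioned balance updates off-chain, with only the final state intended for on-chain settlement.

// Conceptual sketch only: NOT the go-perun API. It illustrates the core idea
// behind an off-chain state channel: two parties exchange and co-sign
// versioned state updates, and only the final state is settled on-chain.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// ChannelState is the off-chain state both parties co-sign after each update.
type ChannelState struct {
	Version  uint64 // strictly increasing; the highest co-signed version wins at settlement
	BalanceA uint64 // balance of party A, in some smallest unit
	BalanceB uint64 // balance of party B
}

// hash serializes the state deterministically for signing.
func (s ChannelState) hash() []byte {
	buf := make([]byte, 24)
	binary.BigEndian.PutUint64(buf[0:], s.Version)
	binary.BigEndian.PutUint64(buf[8:], s.BalanceA)
	binary.BigEndian.PutUint64(buf[16:], s.BalanceB)
	sum := sha256.Sum256(buf)
	return sum[:]
}

// signState produces a party's signature over the state.
func signState(key *ecdsa.PrivateKey, s ChannelState) ([]byte, error) {
	return ecdsa.SignASN1(rand.Reader, key, s.hash())
}

// verifyState checks a counterparty's signature over the state.
func verifyState(pub *ecdsa.PublicKey, s ChannelState, sig []byte) bool {
	return ecdsa.VerifyASN1(pub, s.hash(), sig)
}

func main() {
	keyA, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	keyB, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

	// Opening state: both parties fund the channel (this step would be on-chain).
	state := ChannelState{Version: 0, BalanceA: 100, BalanceB: 100}

	// Off-chain update: A pays B 10 units. No blockchain interaction is needed;
	// the parties just exchange a new co-signed state with a higher version.
	state = ChannelState{Version: state.Version + 1, BalanceA: 90, BalanceB: 110}
	sigA, _ := signState(keyA, state)
	sigB, _ := signState(keyB, state)

	// Each party verifies the other's signature before accepting the update.
	fmt.Println("A's signature valid:", verifyState(&keyA.PublicKey, state, sigA))
	fmt.Println("B's signature valid:", verifyState(&keyB.PublicKey, state, sigB))

	// Only this final, highest-version state would be submitted on-chain to settle.
	fmt.Printf("settling on-chain at version %d: A=%d, B=%d\n",
		state.Version, state.BalanceA, state.BalanceB)
}

Because each update is just an exchange of signed messages, transactions stay fast, low value, and high frequency, which is exactly the profile the Economy of Things requires; the real Perun protocol adds virtual channels and dispute handling on top of this basic pattern.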

As part of the process, Perun needed a stable home where everyone could access the latest code, and other people could find it. Hyperledger Labs provides a space where developments can be started without the overhead of creating an official Hyperledger project.

In Q3 2020, Perun was welcomed into Hyperledger Labs, and development has continued with work from the team at Bosch and PolyCrypt GmbH, a startup spun out of the Technical University of Darmstadt, where much of the academic research behind Perun began.

The Bosch team was eager to talk about its approaches and contributions to Hyperledger Foundation. To that end, they worked with Hyperledger marketing and others in the Perun community on a case study that details not only the business and technology challenges they’ve set out to tackle but also the strategic way they are leveraging open source development to advance the industry for all.

We never know what technology will turn into the Next Big Thing.

Perhaps Perun will be one of them, powering billions of micropayments between IoT devices or enabling people to shop with Central Bank Digital Currencies (CBDCs) that are still on the drawing board today.

Read the full case study here.