Boeing to lead New Aerospace Working Group

SAN FRANCISCO – August 11, 2022 – Today, the ELISA (Enabling Linux in Safety Applications) Project announced that Boeing has joined as a Premier member, marking its commitment to Linux and its effective use in safety-critical applications. Hosted by the Linux Foundation, ELISA is an open source initiative that aims to create a shared set of tools and processes to help companies build and certify Linux-based safety-critical applications and systems.

“Boeing is modernizing software to accelerate innovation and provide greater value to our customers,” said Jinnah Hosein, Vice President of Software Engineering at the Boeing Company. “The demand for safe and secure software requires rapid iteration, integration, and validation. Standardizing around open source products enhanced for safety-critical avionics applications is a key aspect of our adoption of state-of-the-art techniques and processes.”

As a leading global aerospace company, Boeing develops, manufactures and services commercial airplanes, defense products, and space systems for customers in more than 150 countries. It is already using Linux in current avionics systems, including commercial systems certified to DO-178C Design Assurance Level D. Joining the ELISA Project will help Boeing pursue its vision of generational change in software development. Additionally, Boeing will work with the ELISA Technical Steering Committee (TSC) to launch a new Aerospace Working Group that will work in parallel with the project’s other working groups, such as automotive and medical devices.

“We want to improve industry-standard tools related to certification and assurance artifacts in order to standardize improvements and contribute new features back to the open source community. We hope to leverage open source tooling (such as a cloud-based DevSecOps software factory) and industry standards to build world class software and provide an environment that attracts industry leaders to drive cultural change at Boeing,” said Hosein.

Linux is used in all major industries because it enables faster time to market for new features and benefits from the quality of its code development processes. Launched in February 2019, ELISA works with Linux kernel and safety communities to agree on what should be considered when Linux is used in safety-critical systems. The project has several dedicated working groups that focus on providing resources that system integrators can apply to analyze their systems qualitatively and quantitatively.

“Linux has a history of being a reliable and stable development platform that advances innovation for a wide range of industries,” said Kate Stewart, Vice President of Dependable Embedded Systems at the Linux Foundation. “With Boeing’s membership, ELISA will start a new focus in the aerospace industry, which is already using Linux in selected applications. We look forward to working with Boeing and others in the aerospace sector, to build up best practices for working with Linux in this space.”

Other ELISA Project members include ADIT, AISIN AW CO., Arm, Automotive Grade Linux, Automotive Intelligence and Control of China, Banma, BMW Car IT GmbH, Codethink, Elektrobit, Horizon Robotics, Huawei Technologies, Intel, Kuka, Linutronix, Lotus Cars, Mentor, NVIDIA, OTH Regensburg, SUSE, Suzuki, Toyota, Wind River and ZTE.

Upcoming ELISA Events

The ELISA Project has several upcoming events for the community to learn more or to get involved including:

  • ELISA Summit – Hosted virtually for participants around the world on September 7-8, this event will feature an overview of the project, the mission and goals of each working group, and an opportunity for attendees to ask questions and network with ELISA leaders. The schedule is now live and includes speakers from Aptiv Services Deutschland GmbH, Boeing, CodeThink, The Linux Foundation, Mobileye, Red Hat and Robert Bosch GmbH. Check out the schedule here: https://events.linuxfoundation.org/elisa-summit/program/schedule/. Registration is free and open to the public. https://elisa.tech/event/elisa-summit-virtual/
  • ELISA Forum – Hosted in-person in Dublin, Ireland, on September 12, this event takes place the day before Open Source Summit Europe begins. It will feature an update on all of the working groups, an interactive System-Theoretic Process Analysis (STPA) use case and an Ask Me Anything session.  Pre-registration is required. To register for ELISA Forum, add it to your Open Source Summit Europe registration.
  • Open Source Summit Europe – Hosted in-person in Dublin and virtually on September 13-16, ELISA will have two dedicated presentations: one on enabling safety in safety-critical applications and one on safety and open source software. Learn more.

For more information about ELISA, visit https://elisa.tech/.

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

This post is authored by Hayden Blauzvern and originally appeared on Sigstore’s blog. Sigstore is a new standard for signing, verifying, and protecting software. It is a project of the Linux Foundation. 

Developers, package maintainers, and enterprises that would like to adopt Sigstore may already sign published artifacts. Signers may have existing procedures to securely store and use signing keys. Sigstore can be used to sign artifacts with existing self-managed, long-lived signing keys. Sigstore provides a simple user experience for signing, verification, and generating structured signature metadata for artifacts and container signatures. Sigstore also offers a community-operated, free-to-use transparency log for auditing signature generation.

Sigstore additionally has the ability to use code signing certificates with short-lived signing keys bound to OpenID Connect identities. This signing approach offers simplicity due to the lack of key management; however, this may be too drastic of a change for enterprises that have existing infrastructure for signing. This blog post outlines strategies to ease adoption of Sigstore while still using existing signing approaches.

Signing with self-managed, long-lived keys

Developers that maintain their own signing keys but want to migrate to Sigstore can first switch to using Cosign to generate a signature over an artifact. Cosign supports importing an existing RSA, ECDSA, or ED25519 PEM-encoded PKCS#1 or PKCS#8 key with cosign import-key-pair --key key.pem, and can sign and verify with cosign sign-blob --key cosign.key artifact-path and cosign verify-blob --key cosign.pub artifact-path.
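
Putting those commands together, a minimal sketch of the workflow looks like the following (file names such as key.pem, artifact.tar.gz, and artifact.sig are illustrative, and the key file names written by import-key-pair may differ in your environment):

# Import an existing PEM-encoded private key into Cosign's encrypted key format
cosign import-key-pair --key key.pem

# Sign a blob; the base64-encoded signature is written to stdout
cosign sign-blob --key cosign.key artifact.tar.gz > artifact.sig

# Verify the blob against the detached signature and the public key
cosign verify-blob --key cosign.pub --signature artifact.sig artifact.tar.gz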

Benefits

  • Developers can get accustomed to Sigstore tooling to sign and verify artifacts.
  • Sigstore tooling can be integrated into CI/CD pipelines.
  • For signing containers, signature metadata is published with the OCI image in an OCI registry.

Signing with self-managed keys with auditability

While maintaining their own signing keys, developers can increase auditability of signing events by publishing signatures to the Sigstore transparency log, Rekor. This allows developers to audit when signatures are generated for artifacts they maintain, and also monitor when their signing key is used to create a signature.

Developers can upload a signature to the transparency log during signing with COSIGN_EXPERIMENTAL=1 cosign sign-blob --key cosign.key artifact-path. If developers would like to use their own signing infrastructure while still publishing to a transparency log, they can use the Rekor CLI or API. To upload an artifact and cryptographically verify its inclusion in the log using the Rekor CLI:

rekor-cli upload --rekor_server https://rekor.sigstore.dev \
  --signature <artifact_signature> \
  --public-key <your_public_key> \
  --artifact <url_to_artifact|local_path>

rekor-cli verify --rekor_server https://rekor.sigstore.dev \
  --signature <artifact_signature> \
  --public-key <your_public_key> \
  --artifact <url_to_artifact|local_path>

In addition to PEM-encoded certificates and public keys, Sigstore supports uploading many different key formats, including PGP, Minisign, SSH, PKCS#7, and TUF. When uploading using the Rekor CLI, specify the --pki-format flag. For example, to upload an artifact signed with a PGP key:

gpg --armor -u user@example.com --output signature.asc --detach-sig package.tar.gz

gpg --export --armor "user@example.com" > public.key

rekor-cli upload --rekor_server https://rekor.sigstore.dev \
  --signature signature.asc \
  --public-key public.key \
  --pki-format=pgp \
  --artifact package.tar.gz

Benefits

  • Developers begin to publish signing events for auditability.
  • Artifact consumers can create a verification policy that requires a signature be published to a transparency log.

Self-managed keys in identity-based code signing certificate with auditability

When requesting a code signing certificate from the Sigstore certificate authority Fulcio, Fulcio binds an OpenID Connect identity to a key, allowing for a verification policy based on identity rather than a key. Developers can request a code signing certificate from Fulcio with a self-managed long-lived key, sign an artifact with Cosign, and upload the artifact signature to the transparency log.

However, artifact consumers can still fail-open with verification (allow the artifact, while logging the failure) if they do not want to take a hard dependency on Sigstore (require that Sigstore services be used for signature generation). A developer can use their self-managed key to generate a signature. A verifier can simply extract the verification key from the certificate without verification of the certificate’s signature. (Note that verification can occur offline, since inclusion in a transparency log can be verified using a persisted signed bundle from Rekor and code signing certificates can be verified with the CA root certificate. See Cosign’s verification code for an example of verifying the Rekor bundle.)

Once a consumer takes a hard dependency on Sigstore, a CI/CD pipeline can move to fail-closed (forbid the artifact if verification fails).

Benefits

  • A stronger verification policy that enforces both the presence of the signature in a transparency log and the identity of the signer.
  • Verification policies can be enforced fail-closed.

Identity-based (“keyless”) signing

This final step is added for completeness. Signing is done using code signing certificates, and signatures must be published to a transparency log for verification. With identity-based signing, fail-closed is the only option, since Sigstore services must be online to retrieve code signing certificates and append entries to the transparency log. Developers will no longer need to maintain signing keys.
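
As a minimal sketch of the keyless flow (which Cosign gated behind an environment flag at the time of writing; the artifact name is illustrative):

# Prompts an OpenID Connect login, obtains a short-lived certificate from Fulcio,
# signs the artifact, and uploads the signature and certificate to Rekor
COSIGN_EXPERIMENTAL=1 cosign sign-blob artifact.tar.gz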

Conclusion

The Sigstore tooling and infrastructure can be used as a whole or modularly. Each separate integration can help to improve the security of artifact distribution while allowing for incremental updates and verifying each step of the integration.

Delta Lake's expanding ecosystem of connectors

This post originally appeared on the Delta Lake blog

We are happy to announce the release of Delta Lake 2.0 (pypi, maven, release notes) on Apache Spark™ 3.2, with a number of new features, including but not limited to those highlighted below.

The significance of Delta Lake 2.0 is not just the version number – though it is timed quite nicely with Delta Lake’s 3rd birthday. It reiterates our collective commitment to the open-sourcing of Delta Lake, as announced in Michael Armbrust’s Day 1 keynote at Data + AI Summit 2022.

What’s new in Delta Lake 2.0?

A lot of new features have been released in the last year between Delta Lake 1.0, 1.2, and now 2.0. This post reviews a few of the specific features that are going to have a large impact on your workloads.

(Figure: feature comparison between Delta Lake 1.2 and Delta Lake 2.0.)

Improving data skipping

When exploring or slicing data using dashboards, data practitioners often run queries with a specific filter in place. The matching data is frequently buried in a large table, requiring Delta Lake to read a significant amount of data. With data skipping via column statistics and Z-Order, the data can be clustered by the most common filters used in queries — sorting the table to skip irrelevant data, which can dramatically increase query performance.

Support for data skipping via column statistics

When querying any table from HDFS or cloud object storage, by default, your query engine will scan all of the files that make up your table. This can be inefficient, especially if you only need a smaller subset of data. To improve this process, as part of the Delta Lake 1.2 release, we included support for data skipping by utilizing the Delta table’s column statistics.

For example, when running the following query, you do not want to unnecessarily read files outside of the year or uid ranges.

(Figure: example query against the events table, filtering on year and uid.)
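
A sketch of such a query, assuming an events table with year and uid columns (the filter values are illustrative):

SELECT * FROM events
WHERE year = 2020 AND uid BETWEEN 12000 AND 18000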

When Delta Lake writes a table, it automatically collects the minimum and maximum values per column for each data file and stores them directly in the Delta log (i.e. column statistics). Therefore, when a query engine reads the transaction log, read queries can skip files that fall outside those min/max ranges, as visualized below.

(Figure: file-level min/max column statistics in the Delta transaction log, used to skip files outside the query’s filter range.)

This approach is more efficient than row-group filtering within the Parquet file itself, as you do not need to read the Parquet footer. For more information on the latter process, please refer to How Apache Spark™ performs a fast count using the parquet metadata. For more information on data skipping, please refer to data skipping.

Support Z-Order clustering of data to reduce the amount of data read

But data skipping using column statistics is only one part of the solution. To maximize data skipping, the data also needs to be clustered: as implied previously, skipping is most effective when files have a very small minimum/maximum range. While sorting the data can help, it is most effective when applied to a single column.

OPTIMIZE deltaTable ZORDER BY (x, y)

(Figure: regular sorting of data by primary and secondary columns (left) vs. 2-dimensional Z-order data clustering for two columns (right).)

With Z-order, by contrast, its space-filling curve provides better multi-column data clustering. This clustering allows column statistics to be more effective in skipping data based on the filters in a query. See the documentation and this blog for more details.
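
For example, a sketch of Z-ordering the events table above on its two most common filter columns (the table and column choices are illustrative):

OPTIMIZE events ZORDER BY (year, uid)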

Support Change Data Feed on Delta tables

One of the biggest value propositions of Delta Lake is its ability to maintain data reliability in the face of changing records brought on by data streams. However, propagating those changes downstream has traditionally required scanning and reading the entire table, creating significant overhead that can slow performance.

With Change Data Feed (CDF), you can now read a Delta table’s change feed at the row level rather than the entire table to capture and manage changes for up-to-date silver and gold tables. This improves your data pipeline performance and simplifies its operations.

To enable CDF, you must explicitly use one of the following methods:

  • New table: Set the table property delta.enableChangeDataFeed = true in the CREATE TABLE command.

    CREATE TABLE student (id INT, name STRING, age INT) TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • Existing table: Set the table property delta.enableChangeDataFeed = true in the ALTER TABLE command.

    ALTER TABLE myDeltaTable SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • All new tables:

    set spark.databricks.delta.properties.defaults.enableChangeDataFeed = true;

An important thing to remember is that once you enable the change data feed option for a table, you can no longer write to the table using Delta Lake 1.2.1 or below. However, you can always read the table. In addition, only changes made after you enable the change data feed are recorded; past changes to a table are not captured.
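
Once enabled, the change feed is read through the Delta reader options rather than by scanning the whole table; a minimal PySpark sketch (the table name and version range are illustrative, and an active SparkSession named spark is assumed):

changes = (spark.read.format("delta")
    .option("readChangeFeed", "true")   # return row-level changes instead of current rows
    .option("startingVersion", 1)       # first table version to include
    .option("endingVersion", 5)         # optional last table version to include
    .table("myDeltaTable"))
changes.show()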

So when should you enable Change Data Feed? The following use cases should drive that decision.

  • Silver and Gold tables: When you want to improve Delta Lake performance by streaming row-level changes for up-to-date silver and gold tables. This is especially apparent following MERGE, UPDATE, or DELETE operations, accelerating and simplifying ETL operations.
  • Transmit changes: Send a change data feed to downstream systems such as Kafka or RDBMS that can use the feed to process later stages of data pipelines incrementally.
  • Audit trail table: Capturing the change data feed as a Delta table provides perpetual storage and efficient query capability to see all changes over time, including when deletes occurred and what updates were made.
(Figure: original table (v1), change data merged as v2, and the resulting change data feed output.)

See the documentation for more details.

Support for dropping columns

For versions of Delta Lake prior to 1.2, Parquet files were required to store data with the same column names as the table schema. Delta Lake 1.2 introduced a mapping between the logical column name and the physical column name in those Parquet files. While the physical names remain unique, renaming a logical column becomes a simple change in the mapping, and logical column names can have arbitrary characters while the physical names remain Parquet-compliant.

(Figure: Parquet column naming before column mapping vs. with column mapping.)

As part of the Delta Lake 2.0 release, we leveraged column mapping so that dropping a column is a metadata operation. Therefore, instead of physically modifying all of the files of the underlying table to drop a column, this can be a simple modification to the Delta transaction log (i.e. a metadata operation) to reflect the column removal. Run the following SQL command to drop a column:

ALTER TABLE myDeltaTable DROP COLUMN myColumn
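
Note that dropping a column relies on column mapping being enabled for the table; a sketch of the table properties involved (the property names follow the Delta Lake documentation, and the required reader/writer protocol versions may vary):

ALTER TABLE myDeltaTable SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5'
)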

See documentation for more details.

Support for Dynamic Partition Overwrites

In addition, Delta Lake 2.0 now supports dynamic partition overwrite mode for partitioned tables; that is, it overwrites only the partitions that have data written into them at runtime.

When in dynamic partition overwrite mode, we overwrite all existing data in each logical partition for which the write will commit new data. Any existing logical partitions for which the write does not contain data will remain unchanged. This mode is only applicable when data is being written in overwrite mode: either INSERT OVERWRITE in SQL, or a DataFrame write with df.write.mode("overwrite"). In SQL, you can run the following commands:

SET spark.sql.sources.partitionOverwriteMode=dynamic;
INSERT OVERWRITE TABLE default.people10m SELECT * FROM morePeople;
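
The equivalent DataFrame write, as a sketch (assuming a DataFrame df that holds the new partition data; the option name follows the Delta Lake documentation):

(df.write
   .format("delta")
   .mode("overwrite")                            # dynamic overwrite only applies in overwrite mode
   .option("partitionOverwriteMode", "dynamic")  # overwrite only the partitions present in df
   .saveAsTable("default.people10m"))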

Note that dynamic partition overwrite conflicts with the option replaceWhere for partitioned tables. See the documentation for details.

Additional Features in Delta Lake 2.0

In the spirit of performance optimizations, Delta Lake 2.0.0 also includes these additional features:

  • Support for idempotent writes to Delta tables to enable fault-tolerant retry of Delta table writing jobs without writing the data multiple times to the table. See the documentation for more details.
  • Experimental support for multi-part checkpoints to split the Delta Lake checkpoint into multiple parts to speed up writing the checkpoints and reading. See documentation for more details.
  • Other notable changes
    • Improve data skipping on generated columns by adding support for skipping when a generated column is defined over a nested column.
    • Improve table schema validation by blocking unsupported data types in Delta Lake.
    • Support creating a Delta Lake table with an empty schema.
    • Change the behavior of DROP CONSTRAINT to throw an error when the constraint does not exist; previously the command returned silently.
    • Fix symlink manifest generation when partition values contain spaces.
    • Fix an issue where incorrect commit stats are collected.
    • More ways to access the Delta table OPTIMIZE file compaction command.

Building a Robust Data Ecosystem

As noted in Michael Armbrust’s Day 1 keynote and our Dive into Delta Lake 2.0 session, a fundamental aspect of Delta Lake is the robustness of its data ecosystem.

As data volume and variety continue to rise, the need to integrate with the most common ingestion engines is critical. For example, we’ve recently announced integrations with Apache Flink, Presto, and Trino — allowing you to read and write to Delta Lake directly from these popular engines. Check out Delta Lake > Integrations for the latest integrations.

(Figure: Delta Lake’s expanding ecosystem of connectors.)

Delta Lake will be relied on even more to bring reliability and improved performance to data lakes by providing ACID transactions and unifying streaming and batch transactions on top of existing cloud data stores. By building connectors with the most popular compute engines and technologies, the appeal of Delta Lake will continue to increase — driving more growth in the community and rapid adoption of the technology across the most innovative and largest enterprises in the world.

Updates on Community Expansion and Growth

We are proud of the community and the tremendous work over the years to deliver the most reliable, scalable, and performant table storage format for the lakehouse to ensure consistent high-quality data. None of this would be possible without the contributions from the open-source community. In the span of a year, we have seen the number of downloads skyrocket from 685K monthly downloads to over 7M downloads/month. As noted in the following figure, this growth is in no small part due to the quickly expanding Delta ecosystem.

(Figure: Delta Lake download growth — the most widely used lakehouse format in the world.)

All of this activity and the growth in unique contributions — including commits, PRs, changesets, and bug fixes — has culminated in an increase in contributor strength by 633% during the last three years (Source: The Linux Foundation Insights).

But it is important to remember that we could not have done this without the contributions of the community.

Credits

Saying this, we wanted to provide a quick shout-out to all of those involved with the release of Delta Lake 2.0: Adam Binford, Alkis Evlogimenos, Allison Portis, Ankur Dave, Bingkun Pan, Burak Yilmaz, Chang Yong Lik, Chen Qingzhi, Denny Lee, Eric Chang, Felipe Pessoto, Fred Liu, Fu Chen, Gaurav Rupnar, Grzegorz Kołakowski, Hussein Nagree, Jacek Laskowski, Jackie Zhang, Jiaan Geng, Jintao Shen, Jintian Liang, John O’Dwyer, Junyong Lee, Kam Cheung Ting, Karen Feng, Koert Kuipers, Lars Kroll, Liwen Sun, Lukas Rupprecht, Max Gekk, Michael Mengarelli, Min Yang, Naga Raju Bhanoori, Nick Grigoriev, Nick Karpov, Ole Sasse, Patrick Grandjean, Peng Zhong, Prakhar Jain, Rahul Shivu Mahadev, Rajesh Parangi, Ruslan Dautkhanov, Sabir Akhadov, Scott Sandre, Serge Rielau, Shixiong Zhu, Shoumik Palkar, Tathagata Das, Terry Kim, Tyson Condie, Venki Korukanti, Vini Jaiswal, Wenchen Fan, Xinyi, Yijia Cui, Yousry Mohamed.

We’d also like to thank Nick Karpov and Scott Sandre for their help with this post.

How can you help?

We’re always excited to work with current and new community members. If you’re interested in helping the Delta Lake project, please join our community today through many forums, including GitHub, Slack, Twitter, LinkedIn, YouTube, and Google Groups.

Join the community today

The following post originally appeared on Medium. The author, Ruchi Pakhle, participated in our LFX Mentorship program this past spring.

Hey everyone!
I am Ruchi Pakhle, currently pursuing my Bachelor’s in Computer Engineering at MGM’s College of Engineering & Technology. I am a passionate developer and an open-source enthusiast. I recently graduated from the LFX Mentorship Program. In this blog post, I will share my experience of contributing to Open Horizon, a platform for deploying container-based workloads and related machine learning models to compute nodes/clusters at the edge.

Background

I have been an active contributor to open-source projects via different programs like GirlScript Summer of Code, Script Winter of Code, and so on. Through these programs I contributed to different beginner-level open-source projects. After almost a year of doing this, I contributed to different organizations on different projects, including documentation and code. One random morning, applications for LFX opened up and I saw various posts about it on LinkedIn. One of those posts was from my very dear friend Unnati Chhabra, who had just graduated from the program, so I went ahead, checked which organization fit my skill set, and decided to give it a shot.

Why did I apply to Open Horizon?

I was very interested in DevOps and Cloud Native technologies and wanted to get started with them, but I had been procrastinating a lot and did not know how to pave my path ahead. I was constantly looking for opportunities that I could get my hands on. As Open Horizon works exactly on DevOps and Cloud Native technologies, I applied straight away to their project; they had two slots open for the spring cohort. I joined their Element channel and started becoming active by contributing to the project and engaging with the community, and I also started to read more about the architecture, trying to understand it well by referring to their YouTube videos. You can contribute to Open Horizon here.

Application process

The Linux Foundation opens LFX Mentorship applications three times a year: one cohort in spring, one in summer, and one in winter, with each cohort spanning three months. I applied to the spring cohort, for which applications opened around February 2022, and I submitted my application on 4th February 2022 for the Open Horizon Project. I remember there were three documents mandatory for submitting the application:

1. Updated Resume/CV

2. Cover Letter

(this is very very important in terms of your selection so cover everything in your cover letter and maybe add links to your projects, achievements, or wherever you think they can add great value)

The cover letter should cover these points primarily👇

  • How did you find out about our mentorship program?
  • Why are you interested in this program?
  • What experience and knowledge/skills do you have that are applicable to this program?
  • What do you hope to get out of this mentorship experience?

3. A permission document from your university stating they have no objection to your participation over the entire span of the mentorship was also required (this varies from org to org and may not always be asked for)

Selection Mail

The LFX acceptance mail was a major achievement for me, as at that period of time I was constantly getting rejections and had absolutely no idea how things were going to work out for me. I was constantly doubting myself, so this mail not only boosted my confidence but also gave me a ray of hope that I could achieve things by working hard towards them consistently. A major thanks to my mentors, Joe Pearson and Troy Fine, for believing in me and giving me this opportunity.⭐

My Mentorship Journey

From the day I applied to LFX until getting selected as an LFX Mentee and working successfully for over three and a half months, it all felt surreal. I have been contributing to open-source projects and organizations before, but being a part of LFX gave me a huge learning curve and a sense of credibility and ownership that I wouldn’t have gotten anywhere else.

I still remember setting up the mgmt-hub all-in-one script locally; I thought it would be a cakewalk, but it was not. I literally tried every single day to run the script, but somehow it would end up giving some error. I would google the errors and apply the suggested fixes, but it would still fail. One thing I did consistently was share my progress regularly with my mentor, Troy. No matter how often the script failed, I would communicate that to him and send him the logs, and he would suggest probable solutions, but the script still failed. I then messaged the open-horizon-examples group and Joe helped with my doubts; a huge thanks to him and Troy for patiently helping me figure things out. After over a month, on April 1st, the script finally executed successfully, and I then started to work on the issues assigned by Troy.

These three months taught me to be consistent no matter what the circumstances are and to work patiently, something I wouldn’t have learned in college. This experience will no doubt make me a better developer and engineer, along with the best practices I picked up. A timeline of my journey has been shared here.

  1. Check out my contributions here
  2. Check out the open-horizon-services repo

Concluding the program

The LFX Mentorship Program was a great experience and gave me a learning curve I wouldn’t have gotten any other way. The program not only encourages developers to kick-start their open-source journey but also provides some great perks like networking and learning from the best minds. I would like to thank my mentors Joe Pearson, Troy Fine, and Glen Darling, because without their support and patience this wouldn’t have been possible. I will be forever grateful for this opportunity.

Special thanks to my mentor Troy for always being patient with me. His kind words will remain with me always, even though the program has ended.

And yes, how can I forget to mention the awesome swag? Special thanks and gratitude to my mentor Joe Pearson for sending me such cool swag and this super cool handwritten thank-you note ❤

If you have any queries, connect with me on LinkedIn or Twitter and I would be happy to help you out 😀

Global visionaries headline the premier open source event in Europe to share insights on OSS adoption in Europe, driving the circular economy, finding inspiration through the pandemic, supply chain security and more.

SAN FRANCISCO, August 4, 2022 —  The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the keynote speakers for Open Source Summit Europe, taking place September 13-16 in Dublin, Ireland. The event is being produced in a hybrid format, with both in-person and virtual participation available, and is co-located with the Hyperledger Global Forum, OpenSSF Day, Linux Kernel Maintainer Summit, KVM Forum, and Linux Security Summit, among others.

Open Source Summit Europe is the leading conference for developers, sysadmins and community leaders to gather to collaborate, share information, gain insights, solve technical problems and further innovation. It is a conference umbrella composed of 13 events covering the most important technologies and issues in open source, including LinuxCon, Embedded Linux Conference, OSPOCon, SupplyChainSecurityCon, CloudOpen, Open AI + Data Forum, and more. Over 2,000 attendees are expected.

2022 Keynote Speakers Include:

  • Hilary Carter, Vice President of Research, The Linux Foundation
  • Bryan Che, Chief Strategy Officer, Huawei; Cloud Native Computing Foundation Governing Board Member & Open 3D Foundation Governing Board Member
  • Demetris Cheatham, Senior Director, Diversity, Inclusion & Belonging Strategy, GitHub
  • Gabriele Columbro, Executive Director, Fintech Open Source Foundation (FINOS)
  • Dirk Hohndel, Chief Open Source Officer, Cardano Foundation
  • Ross Mauri, General Manager, IBM LinuxONE
  • Dušan Milovanović, Health Intelligence Architect, World Health Organization
  • Mark Pollock, Explorer, Founder & Collaborator
  • Christopher “CRob” Robinson, Director of Security Communications, Product Assurance and Security, Intel Corporation
  • Emilio Salvador, Head of Standards, Open Source Program Office, Google
  • Robin Teigland, Professor of Strategy, Management of Digitalization, in the Entrepreneurship and Strategy Division, Chalmers University of Technology; Director, Ocean Data Factory Sweden and Founder, Peniche Ocean Watch Initiative (POW)
  • Linus Torvalds, Creator of Linux and Git
  • Jim Zemlin, Executive Director, The Linux Foundation

Additional keynote speakers will be announced soon. 

Registration (in-person) is offered at the price of US$1,000 through August 23. Registration to attend virtually is $25. Members of The Linux Foundation receive a 20 percent discount off registration and can contact events@linuxfoundation.org to request a member discount code. 

Health and Safety
In-person attendees will be required to show proof of COVID-19 vaccination or provide a negative COVID-19 test to attend, and will need to comply with all on-site health measures, in accordance with The Linux Foundation Code of Conduct. To learn more, visit the Health & Safety webpage.

Event Sponsors
Open Source Summit Europe 2022 is made possible thanks to our sponsors, including Diamond Sponsors: AWS, Google and IBM, Platinum Sponsors: Huawei, Intel and OpenEuler, and Gold Sponsors: Cloud Native Computing Foundation, Codethink, Docker, Mend, NGINX, Red Hat, and Styra. For information on becoming an event sponsor, click here or email us.

Press
Members of the press who would like to request a press pass to attend should contact Kristin O’Connell.

ABOUT THE LINUX FOUNDATION
Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at https://linuxfoundation.org/

The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.

Visit our website and follow us on Twitter, LinkedIn, and Facebook for all the latest event updates and announcements.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. 

###

Media Contact
Kristin O’Connell
The Linux Foundation
koconnell@linuxfoundation.org

LISLE, IL., August 3, 2022 — The American Association of Insurance Services (AAIS) and the Linux Foundation welcome Jefferson Braswell as the new Executive Director of the openIDL Project.

“AAIS is excited about the expansion of openIDL in the insurance space, and the addition of Jefferson as Executive Director signals even more strength and momentum for the fast-developing project,” said Ed Kelly, AAIS President & CEO. “We are happy to continue to work with the Linux Foundation to help effect meaningful, positive change for the insurance ecosystem.”

“openIDL is a Linux Foundation Open Governance Network and the first of its kind in the insurance industry,” said Daniela Barbosa, General Manager of Blockchain, Healthcare and Identity at the Linux Foundation. “It leverages open source code and community governance for objective transparency and accountability among participants, with strong executive leadership helping shepherd this type of open governance network. Jeff Braswell’s background and experience in financial standards initiatives and consortium building align very well with openIDL’s next period of growth and expansion.”

Braswell has been successfully providing leading-edge business solutions for information-intensive enterprises for over 30 years. As a founding Director, he recently completed a 6-year term on the Board of the Global Legal Entity Identifier Foundation (GLEIF), where he chaired the Technology, Operations and Standards Committee. He is also the Chair of the Algorithmic Contract Types Unified Standards Foundation (ACTUS), and he has actively participated in international financial data standards initiatives.

Previously, as Co-Founder and President of Berkeley-based Risk Management Technologies (RMT), Braswell designed and led the successful implementation of advanced, firm-wide risk management solutions integrated with enterprise-wide data management tools. They were used by many of the world’s largest financial institutions, including Wells Fargo, Credit Suisse, Chase, PNC, Sumitomo Mitsui Banking Corporation, Mellon, Wachovia, Union Bank and ANZ.

“We appreciate the foundation that AAIS laid for openIDL, and I look forward to bringing my expertise and knowledge to progress this project forward,” shared Braswell. “Continuing the work with the Linux Foundation to positively impact insurance services through open-source technology is exciting and will surely change the industry for the better moving forward.” 

openIDL, an open source, distributed ledger platform, infuses efficiency, transparency and security into regulatory reporting. With openIDL, insurers fulfill requirements while retaining the privacy of their data. Regulators have the transparency and insights they need, when they need them. Initially developed by AAIS, expressly for its Members, openIDL is now being further advanced by the Linux Foundation as an open-source ecosystem for the entire insurance industry.

ABOUT AAIS
Established in 1936, AAIS serves the Property & Casualty insurance industry as the only national nonprofit advisory organization governed by its Member insurance carriers. AAIS delivers tailored advisory solutions including best-in-class policy forms, rating information and data management capabilities for commercial lines, inland marine, farm & agriculture and personal lines insurers. Its consultative approach, unrivaled customer service and modern technical capabilities underscore a focused commitment to the success of its members. AAIS also serves as the administrator of openIDL, the insurance industry’s regulatory blockchain, providing unbiased governance within existing insurance regulatory frameworks. For more information about AAIS, please visit www.aaisonline.com.

ABOUT THE LINUX FOUNDATION

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at https://linuxfoundation.org.

ABOUT openIDL
openIDL (open Insurance Data Link) is an open blockchain network that streamlines regulatory reporting and provides new insights for insurers, while enhancing timeliness, accuracy, and value for regulators. openIDL is the first open blockchain platform that enables the efficient, secure, and permissioned-based collection and sharing of statistical data. For more information, please visit www.openidl.org.

###

MEDIA CONTACT:

AAIS
John Greene
Director – Marketing & Communications
630.457.3238
johng@AAISonline.com

Linux Foundation

Dan Whiting
Director of Media Relations and Content
202-531-9091
dwhiting@linuxfoundation.org

This article originally appeared on the LF Public Health project’s blog.

The past three years have redefined the practice and management of public health on a global scale. What will we need in order to support innovation over the next three years?

In May 2022, ASTHO (Association of State and Territorial Health Officials) held a forward-looking panel at their TechXPO on public health innovation, with a specific focus on public-private partnerships. Jim St. Clair, the Executive Director of Linux Foundation Public Health, spoke alongside representatives from MITRE, Amazon Web Services, and the Washington State Department of Health.

Three concepts appeared and reappeared in the panel’s discussion: reimagining partnerships; sustainability and governance; and design for the future of public health. In this blog post, we dive into each of these critical concepts and what they mean for open-source communities.

Reimagining partnerships

The TechXPO panel opened with a discussion on partnerships for data modernization in public health, a trending topic at the TechXPO conference. Dr. Anderson (MITRE) noted that today’s public health projects demand “not just a ‘public-private’ partnership, but a ‘public-private-community-based partnership’.” As vaccine rollouts, digital applications, and environmental health interventions continue to be deployed at scale, the need for community involvement in public health will only increase.

However, community partnerships should not be viewed as just another “box to check” in public health. Rather, partnerships with communities are a transformative way to gain feedback while improving usability and effectiveness in public-health interventions. As an example, Dr. Anderson referenced the successful VCI (Vaccination Credential Initiative) project, mentioning “When states began to partner to provide data… and offered the chance for individuals to provide feedback… the more eyeballs on the data, the more accurate the data was.”

Cardea, an LFPH project that focuses on digital identity, has also benefited from public-private-community-based partnerships. Over the past two years, Cardea has run three community hackathons to test interoperability among other tools that use Cardea’s codebase. Trevor Butterworth, VP of Cardea’s parent company, Indicio, explained his thoughts on community involvement in open source: “The more people use an open source solution, the better the solution becomes through stress testing and innovation; the better it becomes, the more it will scale because more people will want to use it.” Cardea’s public and private-sector partnerships also include Indicio, SITA, and the Aruba Health Department, demonstrating the potential for diverse stakeholders to unite around public-health goals.

Community groups are also particularly well-positioned to drive innovation in public health: they are often attuned to pressing issues that might be otherwise missed by institutional stakeholders. One standout example is the Institute for Exceptional Care (IEC), an LFPH member organization focused on serving individuals with intellectual and developmental disabilities, “founded by health care professionals, many driven by personal experience with a disabled loved one.” IEC recently presented a webinar on surfacing intellectual and developmental disabilities in healthcare data: both the webinar and Q&A showcased the on-the-ground knowledge of this deeply involved, solution-oriented community.

Sustainability and governance

Sustainability is at the heart of every viable open source project, and must begin with a complete, consensus-driven strategy. As James Daniel (AWS) mentioned in the TechXPO panel, it is crucial to determine “exactly what a public health department wants to accomplish, [and] what their goals are” before a solution is put together. Defining these needs and goals is also essential for long-term sustainability and governance, as mentioned by Dr. Umair Shah (WADOH): “You don’t want a scenario where you start something and it stutters, gets interrupted and goes away. You could even make the argument that it’s better to not have started it in the first place.”

Questions of sustainability and project direction can often be answered by bringing private and public interests to the same table before the project starts. Together, these interests can determine how a potential open-source solution could be developed and used. As Jim St. Clair mentioned in the panel: “Ascertaining where there are shared interests and shared values is something that the private sector can help broker.” Even if a solution is ultimately not adopted, or a partnership never forms, a frank discussion of concerns and ideas among private- and public-sector stakeholders can help clarify the long-term capabilities and interests of all stakeholders involved.

Moreover, a transparent discussion of public health priorities, questions, and ideas among state governments, private enterprises, and nonprofits can help drive forward innovation and improvements even when there is no specific project at hand. To this end, LFPH hosts a public Slack channel as well as weekly Technical Advisory Council (TAC) meetings in which we host new project ideas and presentations. TAC discussions have included concepts for event-driven architecture for healthcare data, a public health data sharing mesh, and “digital twins” for informatics and research.

Design for the future of public health

Better partnerships, sustainability, and governance provide exciting prospects for what can be accomplished in open-source public health projects in the coming years. As Jim St. Clair (LFPH) mentioned in the TechXPO panel: “How do we then leverage these partnerships to ask ‘What else is there about disease investigative technology that we could consider? What other diseases, what other challenges have public health authorities always had?’” These challenges will not be tackled through closed source solutions—rather, the success of interoperable, open-source credentialing and exposure notifications systems during the pandemic has shown that open-source has the upper hand when creating scalable, successful, and international solutions.

Jim St. Clair is not only optimistic about tackling new challenges, but also about taking on established challenges that remain pressing: “Now that we’ve had a crisis that enabled these capabilities around contact tracing and notifications… [they] could be leveraged to expand into and improve upon all of these other traditional areas that are still burning concerns in public health.” For example, take one long-running challenge in United States healthcare: “Where do we begin… to help drive down the cost and improve performance and efficiency with Medicaid delivery? … What new strategies could we apply in population health that begin to address cost-effective care-delivery patient-centric models?”

Large-scale healthcare and public-health challenges such as mental health, communicable diseases, diabetes—and even reforming Medicaid—will only be accomplished by consistently bringing all stakeholders to the table, determining how to sustainably support projects, and providing transparent value to patients, populations and public sector agencies. LFPH has pursued a shared vision around leveraging open source to improve our communities, carrying forward the same resolve as the diverse groups that originally came together to create COVID-19 solutions. The open-source journey in public health is only beginning.

Neville Spiteri

This post originally appeared on the Academy Software Foundation’s (ASWF) blog. The ASWF works to increase the quality and quantity of contributions to the content creation industry’s open source software base. 

Tell us a bit about yourself – how did you get your start in visual effects and/or animation? What was your major in college?

I started experimenting with the BASIC programming language when I was 12 years old on a Sinclair ZX81 home computer, playing a game called “Lunar Lander” which ran on 1K of RAM and took about 5 minutes to load from cassette tape.

I have a Bachelor’s degree in Cognitive Science and Computer Science.

My first job out of college was as a Graphics Engineer at Wavefront Technologies, working on the precursor to the Maya 1.0 3D animation system, which is still used today. Then I took a Digital Artist role at Digital Domain.

What is your current role?

Co-Founder / CEO at Wevr. I’m currently focused on Wevr Virtual Studio – a cloud platform we’re developing for interactive creators and teams to more easily build their projects on game engines.

What was the first film or show you ever worked on? What was your role?

First film credit: True Lies, Digital Artist.

What has been your favorite film or show to work on and why?

TheBlu 1.0 digital ocean platform. Why? We recently celebrated TheBlu’s 10-year anniversary. TheBlu franchise is still alive today. At the core of TheBlu was, and is, a creator platform enabling 3D interactive artists and developers around the world to co-create the 3D species and habitats in TheBlu. The app itself was a mostly decentralized peer-to-peer simulation that ran on distributed computers, with fish swimming across the Internet. The core tenets of TheBlu 1.0 are still core to me and Wevr today, as we participate more and more in the evolving Metaverse.

How did you first learn about open source software?

Linux and Python were my best friends in 2000.

What do you like about open source software? What do you dislike?

Likes: Transparent, voluntary collaboration.

Dislikes: Nothing.

What is your vision for the Open Source community and the Academy Software Foundation?

Drive international awareness of the Foundation and OSS projects.

Where do you hope to see the Foundation in 5 years?

A global leader in best practices for real-time engine-based production through international training and education.

What do you like to do in your free time?

Read books, listen to podcasts, watch documentaries, meditation, swimming, and efoiling!

Follow Neville on Twitter and connect on LinkedIn.  

LF Energy OpenGEH (Green Energy Hub) project

The OpenGEH Project is one of the many projects at LF Energy, and we want to share about it here on the LF blog. This post originally appeared on the LF Energy site.

OpenGEH (GEH stands for Green Energy Hub) enables fast, flexible settlement and hourly measurements of the production and consumption of electricity. OpenGEH seeks to help utilities onboard increased levels of renewables by reducing the administrative barriers of market-based coordination. By utilizing a modern DataHub, built on a modular and microservices architecture, OpenGEH is able to store billions of data points covering the entire workflow triggered by the production and consumption of electricity.

The ambition of OpenGEH is to use digitalization as a way to accelerate a market-driven transition towards a sustainable and efficient energy system. The platform provides a modern foundation for new market participants and facilitates new business models through digital partnerships. The goal is to create access to relevant data and insights from the energy market and thereby accelerate the Energy Transition.

OpenGEH was initially built in partnership with Microsoft by Energinet (the Danish TSO), which was seeking a critical leverage point to accelerate the Danish national commitment to 100% renewable energy in its electricity system by 2030. For most utilities, getting renewables onboard creates a technical challenge that also has choreography and administrative hurdles. Data becomes the mechanism that enables market coordination, leading to increased decarbonization. The software was contributed to the LF Energy Foundation by Energinet.

Energinet sees open source and shared development as an opportunity to reduce the cost of software while simultaneously increasing the quality and pace of development. It is an approach that they see gaining prominence in TSO cooperation. Energinet is not an IT company, and therefore does not sell systems or services, or operate other TSOs. Open source, coupled with an intellectual property license that encourages collaboration, will ensure that OpenGEH continues to improve by encouraging a community of developers to add new features and functionality.


The Architectural Principles behind OpenGEH

By implementing Domain-Driven Design, OpenGEH divides the overall problem into smaller independent domains. This gives developers the option to use only the domains necessary for the functionality they need. Because domains trigger events when data changes, the other domains listen for these events so they always have the most up-to-date version of the data.

The architecture supports open collaboration on smaller parts of OpenGEH. New domains can be added by contributors to extend OpenGEH’s functionality when needed to accelerate the green transition.

The Green Energy Hub Domains

The Green Energy Hub system consists of two different types of domains:

  • A domain that is responsible for handling a subset of business processes.
  • A domain that is responsible for handling an internal part of the system (Like log accumulation, secret sharing or similar).

Below is a list of these domains, and the business flows they are responsible for.

  • Business Process Domains
    • Metering Point
      • Create metering point
      • Submission of master data – grid company
      • Close down metering point
      • Connection of metering point with status new
      • Change of settlement method
      • Disconnection and reconnecting of metering point
      • Meter management
      • Update production obligation
      • Request for service from grid company
    • Aggregations
      • Submission of calculated energy time series
      • Request for historical data
      • Request for calculated energy time series
      • Aggregation of wholesale services
      • Request for aggregated tariffs
      • Request for settlement basis
    • Time Series
      • Submission of metered data for metering point
      • Send missing data log
      • Request for metered data for a metering point
    • Charges
      • Request for aggregated subscriptions or fees
      • Update subscription price list
      • Update fee price list
      • Update tariff price list
      • Request price list
      • Settlement master data for a metering point – subscription, fee and tariff links
      • Request for settlement master data for metering point
    • Market Roles
      • Change of supplier
      • End of supply
      • Managing an incorrect change of supplier
      • Move-in
      • Move-out
      • Incorrect move
      • Submission of customer master data by balance supplier
      • Initiate cancel change of supplier by customer
      • Change of supplier at short notice
      • Mandatory change of supplier for metering point
      • Submission of contact address from grid company
      • Change of BRP for energy supplier
    • Data Requests
      • Master data request
  • System Domains

CRob on open source software security education on TechStrong TV

In the Open Source Software Security Mobilization Plan released this past May, the very first stream – of the 10 recommended – is to “Deliver baseline secure software development education and certification to all.”

As the plan states, it is rare to find a software developer who receives formal training in writing software securely. The plan advocates that a modest amount of training – from 10 to ideally 40-50 hours – could make a significant difference in developer contributions to more secure software from the beginning of the software development life cycle. The Linux Foundation now offers a free course, Developing Secure Software, which is 15 hours of training across 3 modules (security principles, implementation considerations & software verification).

The plan proposes, “bringing together a small team to iterate and improve such training materials so they can be considered industry standard, and then driving demand for those courses and certifications through partnerships with educational institutions of all kinds, coding academies and accelerators, and major employers to both train their own employees and require certification for job applicants.”

Also in the plan is Stream 5, to “Establish the OpenSSF Open Source Security Incident Response Team, security experts who can step in to assist open source projects during critical times when responding to a vulnerability.” This would be a small team of professional software developers, vetted for security and trained on the specifics of the languages and frameworks used by a given OSS project. A pool of 30-40 experts would be available to go out in teams of 2-3 for any given crisis.

Christopher “CRob” Robinson is instrumental to the concepts behind, and the implementation of, both of these recommendations. He is the Director of Security Communications at Intel Product Assurance and also serves on the OpenSSF Technical Advisory Committee. At Open Source Summit North America, he sat down with TechStrong TV host Alan Shimel to talk about the origin of his nickname and, more importantly, software security education and the Open Source Product Security Incident Response Team (PSIRT) – streams 1 and 5 in the Plan.  Here are some key takeaways:

  • I’ve been with the OpenSSF for over two years, almost from the beginning. And currently I am the working group lead for the Developer Best Practices Working Group and the Vulnerability Disclosures Working Group. I sit on the Technical Advisory Committee. We help kind of shape, steer the strategy for the Foundation. I’m on the Public Policy and Government Affairs Committee. And I’m just now the owner of two brand new SIGs, special interest groups, underneath the working groups. So I’m in charge of the Education SIG and the Open Source SIRT SIG. We’re going to create a PSIRT for open source.
  • The idea is to try to find a collection of experts from around the industry who understand how to do incident response and also understand how to get things fixed within open source communities. . . I think, ultimately, it’s going to be kind of a mentorship program for upstream communities to teach them how to do incident response, help them work with security researchers and reporters, and also help make sure that they’ve got tools and processes in place so they can be successful.
  • A lot of the conference this week is talking about how we need to get more training and certification and education into the hands of developers. We’ve created another kind of tiger team, and we’re gonna be focusing on this. And my friend, Dr. David Wheeler, had a big announcement: we have an existing body of material, the secure coding fundamentals class, and he was able to transform that into SCORM. So now anybody who has a SCORM learning management system has the ability to leverage this free developer secure software training on their internal learning management systems.
  • We have a lot of different learners. We have brand new students, we have people in the middle of their careers, people making career changes. We have to kind of serve all these different constituents.

Of course, he had a lot more to say. You can watch the full interview, including how CRob got his nickname, and read the transcript below.

Alan Shimel 00:06
Hey, everyone, back here live in Austin at the Linux Foundation Open Source Summit. You know, we’ve had a very security-heavy lineup this past week, and for good reason: security is top of mind for everyone. The OpenSSF, of course. Monday was OpenSSF Day, but it’s more than that. More than Monday, we really talked a lot about software supply chains and SBOMs and just securing open source software. My next guest is CGrove, or CRbn? No, no, you know, I had CRob in my mind, and that’s what messed me up. Let’s go back to CRob. Excuse me. Now check this out, a little thing myself. So CRob was actually the emcee of OpenSSF Day on Monday.

CRob 01:01
I had an amazing hat. You did. And you didn’t wear it here. I came from outside with tacos, and it was all sweaty.

Alan Shimel 01:08
We just have two bald guys here. Anyway,

CRob 01:14
safety in numbers.

Alan Shimel 01:15
Well, yeah, that’s true. It’s true. Wear the hat next time. But anyway, first of all, welcome, man. Thank you.

CRob 01:21
It’s wonderful to be here. I’m excited to have this little chat.

Alan Shimel 01:24
We are excited to have you on here. So before we jump into Monday and OpenSSF Day and that whole thing: you’re with Intel, full disclosure. What do you do in your day job?

CRob 01:36
So my day job, I am the Director of Security Communications. So primarily our function is as incidents happen, so there’s a new vulnerability discovered, or researchers find some report on our portfolio, I help kind of evaluate that and kind of determine how we’re going to communicate it.

Alan Shimel 01:56
Love it, and your role within OpenSSF?

CRob 02:01
So I’ve been with the OpenSSF for over two years, almost from the beginning. And currently I am the working group lead for the developer best practices working group and the vulnerability disclosures working group. I sit on the technical advisory committee, so we help kind of shape, steer the strategy for the foundation. I’m on the Public Policy and Government Affairs Committee. And I’m just now the owner of two brand new SIGs, special interest groups underneath the working groups. So I’m in charge of the education SIG, and the open source SIRT SIG. So we’re going to create a PSIRT for open source.

Alan Shimel 02:38
That’s beautiful, man. That really is. And let’s talk about that SIRT. Yeah, it’ll be through the Linux Foundation.

CRob 02:47
Yeah, we are still. So back in May, the foundation and some contributors created the Mobilization Plan. I’m sure people have talked about it this week. It’s a 10-point plan trying to help respond to things like the White House executive order. And it’s a plan that says, with these 10 different work streams, we feel we can improve the security posture of open source software. And the open source SIRT was stream five.

CRob 03:27
So we have our first meeting for the SIG the first week of July. And we’re going to try to refine the initial plan and kind of spec it out and see how we want to react. But I think, ultimately, it’s going to be kind of a mentorship program for upstream communities to teach them how to do incident response, and help them work with security researchers and reporters, and also help make sure that they’ve got tools and processes in place so they can be successful.

Alan Shimel 03:56
I love it. Yeah. Let’s be honest, this is a piece of work you cut out for yourself.

CRob 04:04
Yes, one of the other groups I work with is a group called FIRST, the Forum of Incident Response and Security Teams. And I’m one of the authors of the PSIRT Services Framework. So I have a little help. So I understand that you got a vendor back on that, right? Yeah, we’re gonna lean into that as kind of a model to start with, and kind of see what we need to change to make it work for open source communities.

Alan Shimel 04:27
I actually love that. Good thing. When do you think we might see something on this? No pressure.

CRob 04:32
No pressure? Oh, definitely. The meetings will be public, so all of that will go up on YouTube. So you’ll be able to observe kind of the progress of the group. I expect we’re going to take probably at least a month to refine the current plan and submit a proposal back to the governing board. We think this is actionable. So hopefully before the end of the year, maybe late fall, we’ll actually be able to start taking action.

Alan Shimel 04:57
All right. Love it. Love it. Gotta ask you, where does the name come from?

CRob 05:03
So the name comes from Novell GroupWise. So back in the day, our network was run by an HP VAX, but our email system plugged into the VAX, and you were limited by the characters of your name. So, my name: Chris Robinson. It’s the first letter of your first name, then the next seven of your last, so I ended up being Crobinso. And we hired a developer who walked in, looked at it, and he’s like, ah, Crobinso, the chromosome, right? It got shortened to CRob.

Alan Shimel 05:36
Okay, not very cool. So thank you. CRob, that’s right. Thank Novell, right? Those were very interesting days. Remember?

Unknown Speaker 05:45
I love that stuff. I was a Novell engineer for many years.

Alan Shimel 05:49
That’s when certs really meant something: Certified Novell Engineer. You are? Yeah. Where are they now? See, the last time I was out in Utah, I think it was 2005, I was out in Utah; there was something they were working on.

Unknown Speaker 06:14
They bought SUSE. And we thought that would be pretty amazing, to kind of incorporate this; Novell had some amazing tools. Absolutely. So we thought that would be really awesome; the NDS was the best. But we were hoping that through SUSE they’d be able to channel these tools and get broader adoption.

Alan Shimel 06:30
No, I think, for whatever reason, there’s a lot of companies from back in those days, right, that we think about. Indeed. Yeah. Anyway,

CRob 06:45
My other working group. So we have more; but wait, there’s more, we have more. So the developer best practices working group is spinning off an education SIG. So a lot of the conference this week is talking about how we need to get more training and certification and education into the hands of developers. So again, we’ve created another kind of tiger team, and we’re gonna be focusing on this. And my friend, Dr. David Wheeler, David A. Wheeler, he had a big announcement where we have an existing body of material, the secure coding fundamentals class, and he was able to transform that into SCORM. So now anybody who has a SCORM learning management system has the ability to leverage this free developer secure software training. Really? Yes.

Alan Shimel 07:35
And that’s the SCORM system. If you have SCORM, you can leverage this.

CRob 07:39
For free. There are some rules behind it, but yeah, absolutely. It’s plugged in; we’re looking to get that donated to higher education, historically black colleges and universities (HBCUs), trade schools like DeVry, wherever.

Alan Shimel 07:52
Get it into people’s hands. That’s the thing to do. So that kind of stuff gets me really excited. I’ll be honest with you, you know, all too often, we’re good in the tech industry at forming a foundation and a SIG and an advisory board. But the rubber meets the road when you can teach people coming up, right, so they come in with the right habits, because, you know, it’s harder to teach the old dogs new tricks, right?

CRob 08:23
I can’t take the class, I know. The brain’s full.

Alan Shimel 08:26
Yeah, no, I hear you. But not only that, look, if you’ve been developing software for 25 years, and I’m gonna come and tell you, well, what you’re doing is wrong and I need you to start doing it this way now, am I gonna make some progress? Because no one wants to say, I know everything and I’m not changing. People don’t just say that. But, almost subconsciously, it’s just a lot harder.

CRob 08:51
It definitely is. And that’s kind of informing our approach. So we have about 20 hours’ worth of traditional class material. So we’re looking at how we can transform that material into things like webinars and podcasts, and maybe a boot camp. So maybe next year, at the Open Source Summit, we might be able to offer a training class where you walk in, take the class, and walk out with a certification.

CRob 09:17
And then, thinking about it, you know, we have a lot of different learners. We have, you know, brand new students, we have people in the middle of their careers, people making career changes. So we have to kind of serve all these different constituents. And that’s absolutely true, and that is one of the problems. One of the user journeys we’re trying to fulfill is this: I’m an existing developer, how do I gain new skills or refine what I have?

Alan Shimel 09:40
Let me ask you a question. So, I come from the security side of that. Nothing the matter with putting the emphasis on developers developing more secure software. But shouldn’t we also be doing something for security people, to better secure open source software?

CRob 10:02
And the foundation itself does have many; it’s multipronged. And so, to help, like, a practitioner, we have things like our Scorecard and Allstar. And then we have a project criticality score. And actually, there was a great session just a couple hours ago by one of my peers, Jacques Chester, and, if you’re a risk guy, it was kind of based off of Open FAIR, which is a risk management methodology, kind of explaining how we can evaluate open source projects, share that information with downstream consumers and risk management teams or procurement teams, and kind of give them a quantitative assessment of what risks you could incur from these projects.

CRob 10:44
So if you have two projects that do the same thing, one might have a higher or lower score; we’ll provide you the data so that you can make your own assessment off of that and make your own judgment. So the foundation is also looking at many different avenues to get this out there, focused on practitioners and developers, and hopefully, by this kind of hydraulic approach, it will be successful. It’ll stick.
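
For readers who want to see what that comparison looks like in practice, here is a small, hypothetical Python sketch that pulls published Scorecard results for two repositories and prints their scores side by side. The api.securityscorecards.dev endpoint and the response fields used below are assumptions about the public Scorecard service, not something described in this interview; check the Scorecard project’s documentation for the current API.

    # Illustrative sketch: compare published OpenSSF Scorecard results for two
    # repositories so a downstream consumer can weigh their relative risk.
    # The endpoint and response fields are assumptions; verify them against
    # the Scorecard documentation before relying on this.
    import json
    import urllib.request


    def fetch_scorecard(repo: str) -> dict:
        """Fetch the published Scorecard result for a GitHub repository."""
        url = f"https://api.securityscorecards.dev/projects/github.com/{repo}"
        with urllib.request.urlopen(url) as response:
            return json.load(response)


    def compare(repo_a: str, repo_b: str) -> None:
        for repo in (repo_a, repo_b):
            result = fetch_scorecard(repo)
            print(f"{repo}: aggregate score {result.get('score')}")
            # Individual checks (e.g. Branch-Protection, Signed-Releases) carry
            # the detail behind the aggregate number.
            for check in result.get("checks", []):
                print(f"  {check['name']}: {check['score']}")


    if __name__ == "__main__":
        compare("ossf/scorecard", "ossf/allstar")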

Alan Shimel 11:07
You know what, you just put as much stuff on the wall and whatever sticks, sticks, man. So anyway, hey, CRob. Right? I got it right? Yep. All right. Thank you for stopping by. And thank you for all you do, right? I mean, it’s a community thing. These are not paid types of gigs, right? Sure. Yeah. No, and I thank you for your time and efforts on that.

CRob 11:30
Thank you very much. All right.

Alan Shimel 11:31
Hey, keep up the great work. We’re gonna take a break. I think we’ve got another interview coming up in a moment. And we’re here live in Austin.