Linux Foundation Editorial Director Jason Perlow had a chance to speak with Masato Endo, OpenChain Project Automotive Chair and Leader of the OpenChain Project Japan Work Group Promotion Sub Group, about the Japan Ministry of Economy, Trade and Industry’s (METI) recent study on open source software management.

JP: Greetings, Endo-san! It is my pleasure to speak with you today. Can you tell me a bit about yourself and how you got involved with the Japan Ministry of Economy, Trade, and Industry?

遠藤さん、こんにちは!本日はお話しできることをうれしく思います。あなた自身について、また経済産業省とどのように関わっていますか?

ME: Hi, Jason-san! Thank you for such a precious opportunity. I’m a manager and scrum master in the planning and development department for new services at a Japanese automotive company. I also worked on building the company’s OSS governance structure, including obtaining OpenChain certification.

As an open source community member, I participated in the OpenChain project and was involved in establishing the OpenChain Japan Working Group and the Automotive Working Group. More recently, as leader of the Japan Working Group’s Promotion Sub Group (SG), I have been focusing on promoting OSS license compliance in Japan.

In this project, I contribute as a bridge between the Ministry of Economy, Trade, and Industry’s task force and the members of OSS community projects such as OpenChain.

For example, I recently gave a presentation on OpenChain’s activities at a task force meeting and introduced companies that cooperated with the case study.

Jasonさん、こんにちは。このような貴重な機会をありがとうございます。

私は、自動車メーカーの新サービスの企画・開発部署でマネージャーやスクラムマスターを務めています。また、OpenChain認証取得等の会社のオープンソースガバナンス体制構築についても取り組んでいました。

一方、コミュニティメンバーとしてもOpenChainプロジェクトに参加し、OpenChain Japan WGやAutomotive WGの設立に関わりました。最近では、Japan WGのPromotion SGのリーダーとして日本におけるOSSライセンスコンプライアンスの啓発活動に注力しています。

今回のプロジェクトにおいては、経済産業省のタスクフォースとOpenChainとの懸け橋として、ミーティングにてOpenChainの活動を紹介させて頂いたり、ケーススタディへの協力企業を紹介させて頂いたりすることで、コントリビューションさせて頂きました。

JP: What does the Ministry of Economy, Trade, and Industry (METI) do?

経済産業省(METI)はどのような役割の政府機関ですか?

ME: METI has jurisdiction over the administration of the Japanese economy and industry. This case study was conducted by the task force on software management methods for ensuring cyber-physical security, which is run by the Cybersecurity Division of METI’s Commerce and Information Policy Bureau.

経済産業省は経済や産業に関する行政を所管しています。今回のケーススタディは商務情報政策局サイバーセキュリティ課によるサイバー・フィジカル・セキュリティ確保に向けたソフトウェア管理手法等検討タスクフォースにより実施されたものです。

JP: Why did METI commission a study on the management of open source program offices and open source software management at Japanese companies?

なぜ経済産業省は、日本企業のオープンソースプログラムオフィスの管理とオープンソースソフトウェアの管理に関する調査を実施したのですか?

ME: METI itself conducted this survey. The Task Force has been considering appropriate software management methods, vulnerability countermeasures, license countermeasures, and so on.

Meanwhile, as the importance of OSS utilization has increased in recent years, the task force concluded that sharing each company’s knowledge of OSS management methods would help solve the problems companies face.

今回の調査は、METIが主体的に行ったものです。タスクフォースは適切なソフトウェアの管理手法、脆弱性対応やライセンス対応などについて検討してきました。

そんな中、最近はOSS利活用の重要性がより高まっているため、OSSの管理手法に関する各企業の知見の共有が各社の課題解決に有効だという結論に至りました。

JP: How do Japanese corporations differ from western counterparts in open source culture? 

日本の企業は、オープンソース文化において欧米の企業とどのように違いますか?

ME: Like Western companies, Japanese companies also use OSS in various technical fields, and OSS has become indispensable. In addition, more than 80 companies have participated in the Japan Working Group of the OpenChain project. As a result, the momentum to promote the utilization of OSS is increasing in Japan.

On the other hand, some survey results show that Japanese companies lag behind their Western counterparts in contribution processes and support systems, so community activities in Japan need to be promoted further.

欧米の企業と同様、日本の企業でもOSSは様々な技術領域で使われており、欠かせないものになっています。また、OpenChainプロジェクトのJPWGに80社以上の企業が参加するなど、企業としてOSSの利活用を推進する機運も高まってきています。

一方で、欧米企業と比較するとコントリビューションのプロセスやサポート体制の整備が遅れているという調査結果も出ているため、コミュニティ活動を促進する仕組みをより強化していく必要があると考えられます。

JP: What challenges did the open source community and METI identify through the study that Japanese companies face when adopting open source software within their organizations?

日本企業が組織内でオープンソースソフトウェアを採用する際に直面する調査の結果、オープンソースコミュニティと経済産業省が特定した課題は何ですか?

ME: The challenges are:

課題は次のとおりです。

Challenge 1: License compliance

When developing software using OSS, it is necessary to comply with the license declared by each OSS component. If companies do not conduct in-house license education and management appropriately, OSS license violations can occur.

Challenge 2: Long term support

Since how long an OSS project continues to be developed depends on the activity of its community, the support period may, in some cases, be shorter than the product life cycle.

Challenge 3: OSS supply chain management

Recently, the scale of software supply chains has expanded, and OSS is frequently included in deliveries from suppliers. Sharing OSS information across the supply chain has become important for implementing appropriate vulnerability and license countermeasures.

Challenge 1: ライセンスコンプライアンス

OSSを利用してソフトウエアを開発する場合は、各OSSが宣言しているライセンスを遵守する必要があります。社内におけるライセンスに関する教育や管理体制が不十分な場合、OSSライセンスに違反してしまう可能性があります。 

Challenge 2: ロングタームサポート

OSSの開発期間はコミュニティの活性度に依存するため、場合によっては製品のライフサイクルよりもサポート期間が短くなってしまう可能性があります。

Challenge 3: サプライチェーンにおけるOSSの使用

最近はソフトウエアサプライチェーンの規模が拡大しており、サプライヤからの納品物にOSSが含まれるケースも頻繁に起こっています。適切な脆弱性対応、ライセンス対応などを実施するため、サプライチェーンの中でのOSSの情報共有が重要になってきています。

JP:  Are there initiatives that are working to address these challenges?

これらの課題に取り組むための日本企業の取組の特徴などはありますか?

ME: In this case study, many companies mentioned license compliance. It was found that each company has established a company-wide system and rules to comply with licenses and provides education to its engineers. The best approach depends on the industry and the size of the company, but I believe the information from this case study is very useful for companies all over the world.

In addition, it was confirmed that the Software Bill of Materials (SBOM) is becoming more critical for companies from the viewpoint of both vulnerability response and license compliance. Regardless of whether companies are using OSS internally or exchanging software with an external partner, it is important to clarify which OSS they are using. I understand that this issue is also a hot topic among Western companies under the banner of “software transparency.”

In this case study, several companies also mentioned OSS supply chain management. A distinctive feature is that, in addition to clarifying rules between companies, they work to raise the level of the entire supply chain through community activities such as OpenChain.

今回のケーススタディでは、多くの企業がライセンスコンプライアンスに言及していました。各企業はライセンスを遵守するために、全社的な体制やルールを整え、エンジニアに対してライセンス教育を実施していることがわかりました。ベストな方法は産業や企業の規模によっても異なりますが、各社の情報はこれからライセンスコンプライアンスに取り組もうとしている企業やプロセスの改善を進めている企業にとって非常に有益なものであると私は考えます。

また、脆弱性への対応、ライセンスコンプライアンスの両面から、企業にとってSBOMの重要性が高まっていることが確認できました。社内でOSSを利用する場合であっても、社外のパートナーとソフトウエアをやりとりする場合であっても、どのOSSを利用しているかを明確にすることが最重要だからです。この課題はソフトウエアの透過性といって欧米でも話題になっているものであると私は認識しています。

このケーススタディの中で複数の企業がOSSのサプライチェーンマネジメントについても言及していました。企業間でのルールを明確化する他、OpenChainなどのコミュニティ活動によって、サプライチェーン全体のレベルアップに取り組むことが特徴になっています。

JP: What are the benefits of Japanese companies adopting standards such as OpenChain and SPDX?

OpenChainやSPDXなどの標準を採用している日本企業のメリットは何ですか?

ME: Companies need to do a wide range of things to ensure proper OSS license compliance, so some form of guidance is needed. The OpenChain Specification, which has become an ISO standard, is particularly useful as such a guideline. In fact, several companies that responded to this survey have built their OSS license compliance processes based on the OpenChain Specification and obtained certification.

Also, from the perspective of supply chain management, if each company in the supply chain obtains OpenChain certification, software transparency will increase and appropriate OSS utilization will be promoted.

In addition, by participating in OpenChain’s Japan Working Group, companies can share the best practices of each company and work together to solve problems.

Since SPDX is a leading international standard for SBOMs, using it when exchanging information about OSS within the supply chain is very useful from the viewpoint of compatibility.

Japanese companies not only use the SPDX standard but also actively contribute to the formulation of SPDX specifications, such as SPDX Lite.
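
As a rough illustration of what exchanging this kind of information can look like, here is a minimal sketch in Python that prints a single package record in the SPDX 2.x tag-value style that SPDX Lite draws on. The field names follow the published SPDX tag-value convention, but the package details and the helper function are hypothetical and are not taken from any official SPDX Lite template or tool.

    # Hypothetical sketch: print a minimal SPDX 2.x-style tag-value record for
    # one OSS package, roughly the kind of information an SBOM exchanged along
    # a supply chain needs to carry. The package itself is fictitious.
    def render_minimal_sbom(package):
        fields = [
            ("SPDXVersion", "SPDX-2.2"),
            ("DataLicense", "CC0-1.0"),
            ("DocumentName", package["document_name"]),
            ("PackageName", package["name"]),
            ("PackageVersion", package["version"]),
            ("PackageDownloadLocation", package["download_location"]),
            ("PackageLicenseConcluded", package["license_concluded"]),
            ("PackageLicenseDeclared", package["license_declared"]),
        ]
        return "\n".join(f"{tag}: {value}" for tag, value in fields)

    # Example with a fictitious package:
    print(render_minimal_sbom({
        "document_name": "example-sbom",
        "name": "libexample",
        "version": "1.2.3",
        "download_location": "https://example.org/libexample-1.2.3.tar.gz",
        "license_concluded": "Apache-2.0",
        "license_declared": "Apache-2.0",
    }))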

企業がOSSライセンスコンプライアンスを適切に行うために行うべきことは多岐に渡るために何かしらの指針が必要です。そのための指針としてISOになったOpenChain Specificationは非常に有用なものです。実際、今回の調査に回答した複数の企業がOpenChain Specificationに基づいてOSSライセンスコンプライアンスプロセスを構築し、認証を取得しています。

また、サプライチェーンマネジメントの観点からも、サプライチェーン各社がOpenChain認証を取得することで、ソフトウエアの透過性が高まり、適切なOSSの利活用を促進されると考えられます。

更にOpenChainのJPWGに参加することで、各社のベストプラクティスを共有したり、協力して課題解決をすることもできます。

SPDXは重要性の高まっているSBOMの有力な国際標準であるため、サプライチェーン内でOSSに関する情報を交換する場合に、SPDXを利用することは互換性等の観点から非常に有益です。

日本企業はSPDXの標準を利用するだけではなく、SPDX LiteのようにSPDXの仕様策定にも積極的にコントリビューションしています。

JP: Thank you, Endo-san! It has been great speaking with you today.

遠藤さん、ありがとうございました!本日は素晴らしい議論になりました。

Jason Perlow, Editorial Director of the Linux Foundation, chats with Jory Burson, Community Director at the OpenJS Foundation about open standardization efforts and why it is important for open source projects.

JP: Jory, first of all, thanks for doing this interview. Many of us know you from your work at the OpenJS Foundation, the C2PA, and on open standards, and you’re also involved in many other open community collaborations. Can you tell us a bit about yourself and how you got into working on Open Standards at the LF?

JB: While I’m a relatively new addition to the Linux Foundation, I have been working with the OpenJS Foundation, which is hosted by the Linux Foundation, for probably three years now. As some of your readers may know, OpenJS is home to several very active JavaScript open source projects, and many of those maintainers are really passionate about web standards. Inside that community, we’ve got a core group of about 20 people participating actively at Ecma International on the JavaScript TCs, the W3C, the Unicode Consortium, the IETF, and some other spaces, too. What we wanted to do was create this space where those experts can get together, discuss things in a cross-project sort of way, and then also help onboard new people into this world of web standards, because it can be a very intimidating thing to try and get involved in from the outside.

The Joint Development Foundation is something I’m new to, but as part of that, I’m very excited to get to support the C2PA, which stands for Coalition for Content Provenance and Authenticity; it’s a new effort as well. They’re going to be working on standards related to media provenance and authenticity — to battle fakes and establish trustworthiness in media formats, so I’m very excited to get to support that project as it grows.

JP: When you were at Bocoup, which was a web engineering firm, you worked a lot with international standards organizations such as Ecma and W3C, and you were in a leadership role at the TC53 group, which is JavaScript for embedded systems. What are the challenges that you faced when working with organizations like that? 

JB: There are the usual challenges that I think face any international or global team, such as coordination of meeting times and balancing the tension between asynchronously conducting business via email lists, GitHub, and that kind of thing. And then more synchronous forms of communication or work, like Slack and actual in-person meetings. Today, we don’t really worry as much about the in-person meetings, but still, there’s like, this considerable overhead of, you know, “human herding” problems that you have to overcome. 

Another challenge is understanding the pace at which the organization you’re operating in really moves. This is a complaint we hear from many people who are new to standardization and are used to developing projects within their product team at a company. Even within an open source project, people are used to things moving perhaps a bit faster and don’t necessarily understand that there are actually built-in checks in the process — in some cases, to ensure that everybody has a chance to review, everybody has an opportunity to comment fairly, and that kind of thing.

Sometimes, because that process is something that’s institutional knowledge, it can be surprising to newcomers in the committees — so they have to learn that there’s this other system that operates at an intentionally different pace. And how does that intersect with your work product? What does that mean for the back timing of your deliverables? That’s another category of things that is “fun” to learn. It makes sense once you’ve experienced it, but maybe running into it for the first time isn’t quite as enjoyable.

JP: Why is it difficult to turn something like a programming language into an internationally accepted standard? In the past, we’ve seen countless flavors of C and Pascal and things like that.

JB: That’s a really good question. I would posit that programming languages are some of the easier types of standards to move forward today because the landscape of what that is and the use cases are fairly clear. Everybody is generally aware of the concept that languages are ideally standardized, and we all agree that this is how this language should work. We’re all going to benefit, and none of us are necessarily, outside of a few cases, trying to build a market in which we’re the dominant player based solely on a language. In my estimation, that tends to be an easier case to bring lots of different stakeholders to the table and get them to agree on how a language should proceed. 

In some of the cases you mentioned, as with C and Pascal, those are older languages. And I think that there’s been a shift in how we think about some of those things. In the past, it was much more challenging to put a new language out there and encourage adoption of that language, and it was a much higher bar and a much more difficult task to get information out to people about how that language worked.

Today with the internet, we have a very easy distribution system for how people can read, participate, and weigh in on a language. So I don’t think we’re going to see quite as many variations in standardized languages, except in some cases where, for example, with JavaScript, TC53 is carving out a subset library of JavaScript, which is optimized for sensors and lower-powered devices. So long story short, it’s a bit easier, in my estimation, to do the language work. Where I think it gets more interesting and difficult is actually in some of the W3C communities where we have standardization activities around specific web API’s you have to make a case for, like, why this feature should actually become part of the platform versus something experimental…

JP: … such as for Augmented Reality APIs or some highly specialized 3D rendering thing. So what are the open standardization efforts you are actively working on at the LF now, at this moment?

JB: At this exact moment, I am working with the OpenJS Foundation standards working group, and we’ve got a couple of fun projects that we’re trying to get off the ground. One is creating a Learning Resource Center for people who want to learn more about what standardization activities really look like, what they mean, some of the terminologies, etc. 

For example, many people say that getting involved in open source is overwhelming — it’s daunting because there’s a whole glossary of things you might not understand. Well, it’s the same for standardization work, which has its own entire new glossary of things. So we want to create a learning space for people who think they want to get involved. We’re also building out a feedback system for users, open source maintainers, and content authors. This will help them say, “here’s a piece of feedback I have about this specific proposal that may be in front of a committee right now.”

So those are two things. But as I mentioned earlier, I’m still very new to the Linux Foundation. And I’m excited to see what other awesome standardization activities come into the LF.

JP: Why do you feel that the Linux Foundation now needs to double down its open standards efforts? 

JB: One of the things that I’ve learned over the last several years working with different international standards organizations is that they have a very firm command of their process. They understand the benefits of why and how a standard is made, why it should get made, those sorts of things. However, they don’t often have as strong a grasp as they ought to around how the software sausage is really made. And I think the Linux Foundation, with all of its amazing open source projects, is way closer to the average developer and the average software engineer and what their reality is like than some of these international standards developing boards because the SDOs are serving different purposes in this grander vision of ICT interoperability. 

On the ground, we have, you know, the person who’s got to build the product to make sure it’s fit for purpose, make sure it’s conformant, and they’ve got to make it work for their customers. In the policy realm, we have these standardization folks who are really good at making sure that the policy fits within a regulatory framework, is fair and equitable and that everybody’s had a chance to bring concerns to the table — which the average developer may not have time to be thinking about privacy or security or whatever it might be. So the Linux Foundation and other open source organizations need to fit more of the role of a bridge-builder between these populations because they need to work together to make useful and interoperable technologies for the long term. 

That’s not something that one group can do by themselves. Both groups want to make that happen. And I think it’s really important that the LF demonstrate some leadership here.

JP: Is it not enough to make open software projects and get organizations to use them? Or are open standards something distinctly different and separate from open source software?

JB: I think I’ll start by saying there are some pretty big philosophical differences in how we approach a standard versus an open source project. And I think the average developer is pretty comfortable with the idea that version 1.0 of an open source project may not look anything like version 2.0. There are often going to be cases and examples where there are breaking changes; there’s stuff that they shouldn’t necessarily rely on in perpetuity, and that there’s some sort of flex that they should plan for in that kind of thing.

The average developer has a much stronger sense with a standardization activity that those things should not change, and should not change dramatically in a short period. JavaScript is a good example of a language that changes every year; new features are added. But there aren’t breaking changes; it’s backward compatible. There are some guarantees in terms of a standard platform’s stability versus an open source platform, for example. And further, we’re developing more of a sense of what’s a higher bar, if you will, for open standards activities, including things like test suites, documentation, and a required number of reference implementations.

Those are all concepts that are kind of getting baked into the idea of what makes a good standard. There’s plenty of standards out there that nobody has ever even implemented — people got together and agreed how something should work and then never did anything with it. And that’s not the kind of standard we want to make or the kind of thing we want to promote. 

But if we point to examples like JavaScript — here’s this community we have created, here’s the standard, it’s got this great big group of people who all worked on it together openly and equitably. It’s got great documentation, it’s got a test suite that accompanies it — so you can run your implementation against that test suite and see where the dragons lie. And it’s got some references and open source reference implementations that you can view.  

Those sorts of things really foster a sense of trustworthiness in a standard — it gives you a sense that it’s something that’s going to stick around for a while, perhaps longer than an open source project, which may be sort of the beginnings of a standardization activity. It may be a reference to implementing a standard, or some folks just sort of throwing spaghetti at a wall and trying to solve a problem together. And I think these are activities that are very complementary with each other. It’s another great reason why other open source projects and organizations should be getting involved and supporting standardization activities.

JP: Do open standardization efforts make a case for open source software even stronger? 

JB: I think so — I just see them as so mutually beneficial, right? Because in the case of an open standards activity, you may be working with some folks and saying, well, here’s what I’m trying to express and what this would look like if we take the prose — and most of the time, the standard is written in prose and a pseudocode sort of style. It’s not something you can feed into the machine and have it work. So the open source projects, and polyfills, and things of that sort can really help a community of folks working on a problem say, “Aha, I understand what you mean!” “This is how we interpreted this, but it’s producing some unintended behaviors,” or “we see that this will be hard to test, or we see that this creates a security issue.”

It’s a way of putting your ideas down on paper, understanding them together, and having a tool through which everybody can pull and say, “Okay, let’s play with it and see if this is really working for what we need it for.”

Yes, I think they’re very compatible.

JP: Like peanut butter and jelly.

JB: Peanut butter and jelly. Yeah.

JP: I get why large organizations might want things like programming languages, APIs, and communications protocols to be open standards, but what are the practical benefits that average citizens get from establishing open standards? 

JB: Open standards really help promote innovation and market activity for all players regardless of size. Now, granted, for the most part, a lot of the activities we’ve been talking about are funded by some bigger players. You know, when you look at the member lists of some of the standards bodies, it’s larger companies like the IBMs, Googles, and Microsofts of the world, the companies that provide a good deal more of the funding. Still, hundreds of small and midsize businesses are also benefiting from standards development. 

You mentioned my work at Bocoup earlier — that’s another great example. We were a consulting firm that benefited heavily from participating in and leveraging open standards to help build tools and software for our customers. So it is a system that I think helps create an equitable market playing field for all the parties. It’s one of those actual examples of a rising tide that lifts all boats, if we’re doing it in a genuinely open and pro-competitive way. Now, sometimes that’s not the case; in other types of standardization areas, that’s not always true. But certainly, in our web platform standards, that’s been the case. And it means that other companies and other content authors can build web applications, websites, services, digital products, that kind of thing. Everybody benefits — whether those people are also Microsoft customers, Google customers, and all that. So it’s an ecosystem.

JP: I think it’s great that we’ve seen companies like Microsoft, which used to have much more closed systems, embrace open standards over the last ten years or so. If you look at the first Internet Explorer they ever had out, there once were websites that only worked on that browser. Today, the very idea of a website that only works correctly on one company’s web browser is ridiculous, right? We now have open source engines that these browsers use; they embrace open standards and have become much more standardized. So I think that open standards have helped some of these big companies that were more closed become more open. We even see it happen at companies like Apple. They use the Bluetooth protocol to connect to their audio hardware and have adopted technologies such as the USB-C connector when previously they were using weird proprietary connectors. So they, too, understand that open standards are a good thing. And that helps the consumer, right? I can go out and buy a wireless headset, and I know it’ll work because it uses the Bluetooth protocol. Could you imagine if we had nine different types of wireless networking instead of WiFi? You wouldn’t be able to walk into a store and buy something and know that it would work on your network. It would be nuts. Right?

JB: Absolutely. You’re pointing to hardware and the standards for physical products and goods versus digital products and goods in your example. So in using that example, do you want to have seven different adapters for something? No, it causes confusion and frustration in the marketplace. And the market winner is the one who’s going to be able to provide a solution that simplifies things.

That’s kind of the same thing with the web. We want to simplify the solutions for web developers so they’re not having to say, “Okay, what am I going to target? Am I going to target Edge? Am I going to target Safari?”

JP: Or is my web app going to work correctly in six years or even six months from now?

JB: Right!

JP: Besides web standards, are there other types of standardization you are passionate about, either inside the LF or in your spare time? 

JB: It’s interesting because I think in my career, I’ve followed this journey of first getting involved because it was intellectually interesting to me. Then I got involved because it made my job easier. Like, how does this help me do business more effectively? How does this help me make my immediate life, my life as a developer, and my life as an internet consumer a little bit nicer?

Beyond that, you start to think about the next order of magnitude: the social impact of our standardization activities. I often think about the role that standards have played in improving the lives of everyday people. For the last 100 years, we have had building standards, fire standards, and safety standards, all of these things. And because they were developed, adopted, and implemented in global policy, they have saved people’s lives.

Apply that to tech — of course, it makes sense that you would have safety standards to prevent the building from burning down — so what is the version of that for technology? What’s the fire safety standard for the web? And how do we actually think about the standards that we make, impacting people and protecting them the way that those other standards did?

One of the things that has changed in the last few years is that the Technical Advisory Group, or “TAG,” at the W3C is considering more of the social impact questions in its work. TAG is a group of architects elected by the W3C membership to take a horizontal, global view of the technologies that the W3C standardizes. These folks say, “okay, great; you’re proposing that we standardize this API, but have you considered it from an accessibility standpoint? Have you considered it from, you know, ease of use, security?” and that sort of thing.

In the last few years, they started looking at it from an ethical standpoint, such as, “what are the questions of privacy?” How might this technology be used for the benefit of the average person? And also, perhaps, how could it potentially be used for evil? And can we prevent that reality? 

So one of the things I think is most exciting is the types of technologies that are advancing today that are less about whether we can make X and Y interoperable and more about whether we can make X and Y interoperable in a safe, ethical, economical, and ecological fashion — the space around NFTs right now is a case in point. And can we make technology beneficial in a way that goes above and beyond “okay, great, we made the website, quick, click here.”

So C2PA, I think, is an excellent example of a standardization activity the LF supports that could benefit people. One of the big issues of the last several years is the authenticity of the media we consume — whether it was altered or synthesized in some fashion, such as what we see with deepfakes. Now, the C2PA is not going to be able to, and would not, say whether a media file is fake. Rather, it would allow an organization to ensure that the media it captures or publishes can be analyzed for tampering between steps in the editing process or by the time an end user consumes it. This would allow organizations and people to have more trust in the media they consume.

JP: If there was one thing you could change about open source and open standards communities, what would it be?

JB: So my M.O. is to try and make these spaces more human-interoperable. With an open source project or open standards project, we’re talking about some kind of technical interoperability problem that we want to solve. But it’s not usually the technical issues that cause delays or serious problems; nine times out of ten, it comes down to some human interoperability problem. Maybe it’s language differences, cultural differences, or expectations; maybe it’s process-oriented. There’s some other thing that may cause that activity to fail to launch.

So if there were something that I could do to change communities, I would love to make sure that everybody has resources for running great and effective meetings. One big problem with some of these activities is that their meetings could be run more effectively and more humanely. I would want humane meetings for everyone.

JP: Humane meetings for everyone! I’m pretty sure you could be elected to public office on that platform. <laughs>. What else do you like to do with your spare time, if you have any?

JB: I love to read; we’ve got a book club at OpenJS that we’re doing, and that’s fun. So, in my spare time, I like to take time to read or do a crossword puzzle or something on paper! I’m so sorry, but I still prefer paper books, paper magazines, and paper newspapers.

JP: Somebody just told me recently that they liked the smell of paper when reading a real book.

JB: I think they’re right; I think it feels better. I think it has a distinctive smell, but there’s also something very therapeutic and analog about it, because I like to disconnect from my digital devices. So you know, doing something soothing like that. I also enjoy painting outdoors and going outside, spending time with my four-year-old, and that kind of thing.

JP: I think we all need to disconnect from the tech sometimes. Jory, thanks for the talk; it’s been great having you here.

Data and storage technologies are evolving. The SODA Foundation is conducting a survey to identify the current challenges, gaps, and trends for data and storage in the era of cloud-native, edge, AI, and 5G. Through new insights generated from the data and storage community at large, end-users will be better equipped to make decisions, vendors can improve their products, and the SODA Foundation can establish new technical directions — and beyond!

The SODA Foundation is an open source project under the Linux Foundation that aims to foster an ecosystem of open source data management and storage software for data autonomy. The SODA Foundation offers a neutral forum for cross-project collaboration and integration and provides end-users with quality end-to-end solutions. We intend to use this survey data to help guide the SODA Foundation and its surrounding ecosystem on important issues.

Please participate now; we intend to close the survey in late May. 

Privacy and confidentiality are important to us. Neither participant names nor their company names will be displayed in the final results.

The first 50 survey respondents will each receive a $25 (USD) Amazon gift card. Some conditions apply.

This survey should take no more than 15 minutes of your time.

To take the 2021 SODA Foundation Data & Storage Trends Survey, click the button below:

Thanks to our survey partners: Cloud Native Computing Foundation (CNCF), Storage Networking Industry Association (SNIA), Japan Data Storage Forum (JDSF), China Open Source Cloud League (COSCL), Open Infrastructure Foundation (OIF), and Mulan Open Source Community.

SURVEY GOALS

Thank you for taking the time to participate in this survey conducted by SODA Foundation, an open source project at the Linux Foundation focusing on data management and storage.

This survey will provide insights into the challenges, gaps, and trends for data and storage in the era of cloud-native, edge, AI, and 5G. We hope these insights will help end-users make better decisions, enable vendors to improve their products and serve as a guide to the technical direction of SODA and the surrounding ecosystem.

This survey will provide insights into:

  • What are the data & storage challenges faced by end-users?
  • Which features and capabilities do end users look for in data and storage solutions?
  • What are the key trends shaping the data & storage industry?
  • Which open source data & storage projects are users interested in?
  • What cloud strategies are businesses adopting?

PRIVACY

Your name and company name will not be displayed. Reviews are attributed to your role, company size, and industry. Responses will be subject to the Linux Foundation’s Privacy Policy, available at https://linuxfoundation.org/privacy. Please note that members of the SODA Foundation survey committee who are not LF employees will review the survey results and coordinate the gift card giveaways. If you do not want them to have access to your name or email address in connection with this, please do not provide your name or email address and you will not be included in the giveaway.

VISIBILITY

We will summarize the survey data and share the learnings during SODACON Global 2021 – Virtual on Jul 13-14. The summary report will be published on the SODA website. In addition, we will be producing an in-depth report of the survey which will be shared with all survey participants.

SODACON GLOBAL 2021

Interested in attending or speaking at SODACON Global? Details for the event can be found at https://sodafoundation.io/events/sodacon-2021-global-virtual/

QUESTIONS

If you have questions regarding this survey, please email us at survey@sodafoundation.io or ask us on Slack at https://sodafoundation.io/slack/

Sign up for the SODA Newsletter at https://sodafoundation.io/

Jason Perlow, Director of Project Insights and Editorial Content at the Linux Foundation, spoke with Hilary Carter about Linux Foundation Research and how it will create better awareness of the work being done by open source projects and their communities.

JP: It’s great to have you here today, and also, welcome to the Linux Foundation. First, can you tell me a bit about yourself, where you live, and what your interests are outside work?

HC: Thank you! I’m a Toronto native, but I now live in a little suburban town called Aurora, just north of the city. Mike Myers — a fellow Canadian — chose “Aurora, IL” as the setting of Wayne’s World, but he really named the town after Aurora, ON. I also spend a lot of time about 3 hours north of Aurora in the Haliburton Highlands, a region noted for its beautiful landscape of rocks, trees, and lakes — and it’s here where my husband and I have a log cabin. We ski, hike, and paddle with our kids, depending on the season. It’s an interesting location because we’re just a few kilometers north of the 45th parallel — at the spring and fall equinox, the sun sets precisely in the west right off of our dock. At the winter and summer solstice, it’s 45 degrees to the south and north, respectively. It’s neat. As much as I have always been a bit obsessed with geolocation, I had never realized we were smack in the middle of the northern hemisphere until our kids’ use of Snapchat location filters brought it to our attention. Thank you, mobile apps!

JP: And what organization are you joining us from?

HC: My previous role was Managing Director at the Blockchain Research Institute, where I helped launch and administer their research program in 2017. Over nearly four years, we produced more than 100 research projects that explored how blockchain technology — as the so-called Internet of value — was transforming all facets of society, at the government and enterprise level as well as at the peer-to-peer level. We also explored how blockchain converged with other technologies like IoT, AI, and additive manufacturing, and how these developments would change traditional business models. It’s a program that is as broad as it is deep into a particular subject matter without being overly technical, and it was an absolutely fascinating and rewarding experience to be part of building that.

JP: Tell me a bit more about your academic background; what disciplines do you feel most influence your research approach? 

HC: I was a Political Studies major as an undergrad, which set the stage for my ongoing interest in geopolitical issues and how they influence the economy and society. I loved studying global political systems, international political economy, and supranational organizations and looking at the frameworks built for global collaboration to enable international peace and security under the Bretton Woods system. That program made me feel incredibly fortunate to have been born into a time of relative peace and prosperity, unlike generations before me.

I did my graduate studies in Management at the London School of Economics (LSE), and it was here that I came to learn about the role of technology in business. The technologies we were studying at the time were those that enabled real-time inventory. Advanced manufacturing was “the” hot technology of the mid-1990s, or so it seemed in class. I find it so interesting that the curriculum at the time did not quite reflect the technology that would profoundly and most immediately shape our world, and of course, that was the Web. In fairness, the digital economy was emerging slowly, then. Tasks like loading web pages still took a lot of time, so in a way, it’s understandable that the full extent of the web’s power did not make it into many of my academic lectures and texts. I believe academia is different today — and I’m thrilled to see the LSE at the forefront of new technology research, including blockchain, AI, robotics, big data, preparing students for a digital world.

JP: I did do some stalking of your LinkedIn profile; I see that you also have quite a bit of journalistic experience as well.

HC: I wish I could have had more! I was humbled when my first piece was published in Canada’s national newspaper. I had no formal training or portfolio of past writing to lend credibility to my authorship. Still, fortunately, after much persistence, the editor gave me a shot, and I’m forever grateful to her for that. I was inspired to write opinion pieces on the value of digital tools because I saw a gap that needed filling — and I was really determined to fill it. And the subject that inspired me was leadership around new technologies. I try to be a good storyteller and create something that educates and inspires all in one go. I suppose I come by a bit of that naturally. My father was an award-winning author in Canada, but his day job was Chief of Surgery at a hospital in downtown Toronto. He had a gift for taking complex subject matter about diseases, such as cancer, and humanizing the content by making it personal. I think that’s what makes writing about complex concepts “sticky.” When the author is, at some level, personally committed to their work and succeeds in setting the context for their subject matter for the world at large, in a way that creates action or additional thinking, then they’ve done a successful job.

JP: Let’s try a tough existential question. Why do you feel that the Linux Foundation now needs a dedicated research and publications division? Is it an organizational maturity issue? Has open source gotten so widespread and pervasive that we need better metrics to understand these projects’ overall impact?

HC: Well, let me start by saying that I’m delighted that the LF has prioritized research as a new business unit. In my past role at the Blockchain Research Institute, it was clear that there was and still is a huge demand for research — the program kept growing because technologies continued to evolve, and there was no shortage of issues to cover. So I think the LF is tapping into a deep need for knowledge in the market at large and specific insights on open source ecosystems in particular, to create greater awareness of incredible open source projects and inspire greater participation in them. There are also threats that we as a society — as human beings — need to deal with urgently. So the timing couldn’t be better to broaden the understanding of what is happening in open source communities, to share knowledge through new tools, and to encourage greater levels of collaboration in open source projects. If we accomplish one thing, it will be to illustrate the global context for open source software development and why getting involved in these activities can create positive global change on so many levels. We want more brains in the game.

JP: So let’s dive right into the research itself. You mentioned your blockchain background and your previous role — I take it that this will have some influence on upcoming surveys and analysis? What is coming down the pike on that front?

HC: Blockchain as a technology has undoubtedly influenced my thinking about systems architecture and how research is conducted — both technological frameworks and the human communities that organize around them. Decentralization. Coordination. Transparency. Immutability. Privacy. These are all issues that have been front and center for me these past many years. Part of what I have learned about what makes good blockchain systems work comes from the right combination of great dependability and security with leadership, governance, and high mass collaboration levels. I believe those values transfer over readily to the work of the Linux Foundation and its community. I’m very much looking forward to learning about the many technology ecosystems beyond blockchain currently under the LF umbrella. I’m excited to discover what I imagine will be a new suite of technologies that are not yet part of our consciousness.

JP: What other LF projects and initiatives do you feel need to have deeper dives in understanding their impact besides blockchain? Last year, we published a contributor survey with Harvard. It reached many interesting conclusions about overall motivations for participation and potential areas for remediation or improvement in various organizations. Where do we go further in understanding supply chain security issues — are you working with the Harvard team on any of those things?

HC: The FOSS Contributor Survey was amazing, and there are more good things to come through our collaboration with the Laboratory of Innovation Science at Harvard. Security is a high-priority research issue, and yes, ongoing contributions to this effort from that team will be critical. You can definitely expect a project that dives deep into security issues in software supply chains in the wake of SolarWinds.

In terms of other content, I’ve had excellent preliminary discussions with some executive team members about their wish lists for projects that could become part of the LF Research program. We hope to be as inclusive as we can, based on what our capacity allows. We look forward to exploring topics along industry verticals and technology horizontals, as well as looking at issues that don’t fall neatly into this framework, such as strategies to increase diversity in open source communities, or the role of governance and leadership as a factor in the successful adoption of open source projects.

Ultimately, LF Research will have an agenda shaped not only from feedback from within the LF community but by the LF Research Advisory Board, a committee of LF members and other stakeholders who will help shape the agenda and provide support and feedback throughout the program. Through this collaborative effort, I’m confident that LF Research will add new value to our ecosystem and serve as a valuable resource for anyone wanting to learn more about open source software and the communities building it and help them make decisions accordingly. I’m looking forward to our first publications, which we expect out by mid-summer. And I’m most excited to lean on, learn from, and work with such an incredible team as I have found within the LF. Let’s do this!!!

JP: Awesome, Hilary. It was great having you for this talk, and I look forward to the first publications you have in store for us.

Linux Foundation Research will provide objective, decision-useful insights into the scope of open source collaboration

SAN FRANCISCO, Calif. – April 14, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced Linux Foundation Research, a new division that will broaden the understanding of open source projects, ecosystem dynamics, and impact, with never before seen insights on the efficacy of open source collaboration as a means to solve many of the world’s pressing problems. Through a series of research projects and related content, Linux Foundation Research will leverage the Linux Foundation’s vast repository of data, tools, and communities across industry verticals and technology horizontals. The methodology will apply quantitative and qualitative techniques to create an unprecedented knowledge network to benefit the global open source community, academia, and industry.

“As we continue in our mission to collectively build the world’s most critical open infrastructure, we can provide a first-of-its-kind research program that leverages the Linux Foundation’s experience, brings our communities together, and can help inform how open source evolves for decades to come,” said Jim Zemlin, executive director at the Linux Foundation. “As we have seen in our previous studies on supply chain security and FOSS contribution, research is an important way to measure the progress of both open source ecosystems and contributor trends. With a dedicated research organization, the Linux Foundation will be better equipped to draw out insights, trends, and context that will inform discussions and decisions around open collaboration.”

As part of the launch, the Linux Foundation is pleased to welcome Hilary Carter, VP Research, to lead this initiative. Hilary most recently led the development and publication of more than 100 enterprise-focused technology research projects for the Blockchain Research Institute. In addition to research project management, Hilary has authored, co-authored, and contributed to reports on blockchain in pandemics, government, enterprise, sustainability, and supply chains.

“The opportunity to measure, analyze, and describe the impact of open source collaborations in a more fulsome way through Linux Foundation Research is inspiring,” says Carter. “Whether we’re exploring the security of digital supply chains or new initiatives to better report on climate risk, the goal of LF Research is to enhance decision-making and encourage collaboration in a vast array of open source projects. It’s not enough to simply describe what’s taking place. It’s about getting to the heart of why open source community initiatives matter to all facets of our society, as a means to get more people — and more organizations — actively involved.”

Critical to the research initiative will be establishing the Linux Foundation Research Advisory Board, a rotating committee of community leaders and subject matter experts, who will collectively influence the program agenda and provide strategic input, oversight, and ongoing support on next-generation issues.

About the Linux Foundation

Founded in 2000, The Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.


Over the past several decades, farmers have been depending increasingly on groundwater to irrigate their crops due to climate change and reduced rainfall. Even in drought-prone areas, farmers still need to grow water-intensive crops because demand for them is steady.

In 2019, as part of Call for Code, a team of IBMers came together and brainstormed on ideas they were passionate about – problems faced by farmers in developing countries due to more frequent drought conditions. The team designed an end-to-end solution that focuses on helping farmers gain insight into when to water their crops and help them optimize their water usage to grow healthy crops. This team, Liquid Prep, went on to win the IBM employee Call for Code Global Challenge. 

Liquid Prep provides a mobile application that can obtain soil moisture data from a portable soil moisture sensor, fetch weather information from The Weather Company, and access crop data through a service deployed on the IBM Cloud. Their solution brings all this data together, analyzes it, and computes watering guidance to help the farmer decide whether to water their crops right now or conserve it for a better time.

To validate the Liquid Prep prototype, in December 2019, one of the team members traveled to India and interviewed several farmers in the village Nuggehalli, which is near the town Hirisave in the Hassan district of Karnataka, India. The interviews taught the team that the farmers did not have detailed information on when they should water their specific crops and by how much, as they didn’t know the specific needs on a plant-by-plant basis. They also just let the water run freely if the water was available from a nearby source, like a river or stream, and some were entirely dependent on rainfall. The farmers expressed a great interest in the described Liquid Prep solution as it could empower them to make more informed decisions that could improve yields.

A prototype is born

After winning the challenge, the Liquid Prep team took the opportunity to convert the concept into a more complete prototype through an IBM Service Corps engagement. The team was expanded with dedicated IBM volunteers from across the company, who were assigned to optimize Liquid Prep from August through October 2020. During this time, the team developed the Minimum Viable Product (MVP) for the mobile solution.

The prototype consists of three primary components: 

  • A hardware sensor to measure soil moisture
  • A highly visual and easy-to-use mobile web application, and 
  • A back-end data service to power the app. 

It works like this: the mobile web application gets soil moisture data from the soil moisture sensor. The app requests environmental conditions from The Weather Company and crop data from the plant database via the backend service deployed on the IBM Cloud. The app analyzes and computes a watering schedule to help the farmer decide if they should water their crops now or at a later time. 
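
As a concrete, heavily simplified illustration of the decision flow described above, the Python sketch below combines a soil moisture reading, a rain forecast, and a per-crop moisture threshold into a watering recommendation. The function name, data shapes, and threshold values are invented for illustration and are not taken from the actual Liquid Prep codebase.

    # Hypothetical sketch of the watering-guidance logic described above:
    # given the latest soil moisture reading, the chance of rain, and a
    # crop-specific minimum moisture level, suggest whether to water now.
    # All names and numbers are illustrative only.
    def watering_advice(soil_moisture_pct, rain_probability_pct, crop_min_moisture_pct):
        if soil_moisture_pct >= crop_min_moisture_pct:
            return "Soil moisture is adequate; no watering needed right now."
        if rain_probability_pct >= 70:
            return "Soil is dry, but rain is likely; consider waiting."
        return "Soil is below the crop's minimum; water now."

    # Example: an 18% sensor reading, a 20% chance of rain, and a
    # (fictitious) 30% minimum for the crop.
    print(watering_advice(18.0, 20.0, 30.0))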

Partners

Liquid Prep has developed a great working relationship with its partners SmartCone Technologies, Inc., and Central New Mexico Community College. Students in the Deep Dive Coding Internet of Things (IoT) Bootcamp at CNM are designing, developing, and producing a robust IoT sensor, housed in a stick-shaped enclosure, that can be inserted into the soil and transfer soil moisture data to the Liquid Prep mobile app via Bluetooth. The collaboration gives students important real-world experience before they enter the workforce.

“SmartCone is honored to be part of this project. This is a perfect example of technology teams working together to help make the world a better place,” said Jason Lee, Founder & CEO, SmartCone Technologies Inc.

Additionally, Liquid Prep will work together with J&H Nixon Farms, which grows mostly soybean and corn crops on about 2,800 acres of agricultural land in Ottawa, Canada. The farm has offered Liquid Prep the opportunity to pilot test the prototype on several plots of land with different soil conditions, which in turn can expand the breadth of recommendation options for a larger number of potential users.

Now available as open source

Liquid Prep is now available as an open source project hosted by the Linux Foundation. The goal of the project is to help farmers globally farm their crops with the least amount of water by taking advantage of real-time information that can help improve sustainability and build resiliency to climate change.

Participation is welcomed from software developers, designers, testers, agronomists/agri experts/soil experts, IoT engineers, researchers, students, farmers, and others who can help improve the quality and value of the solution for small farmers around the world. Key areas the team is interested in developing include localizing the mobile app, taking soil properties into account to improve the watering advice, updating project documentation, software and hardware testing, more in-depth research, and adding more crop data to the database.

Get involved in Liquid Prep now at Call For Code


Today, the Linux Foundation announced that it would be adding Rend-o-matic to the list of Call for Code open source projects that it hosts. The Rend-o-matic technology was originally developed as part of the Choirless project during a Call for Code challenge as a way to enable musicians to jam together regardless of where they are. Initially developed to help musicians socially distance because of COVID-19, the application has many other benefits, including bringing together musicians from different parts of the world and allowing for multiple versions of a piece of music featuring various artist collaborations. The artificial intelligence powering Choirless ensures that the consolidated recording stays accurately synchronized even through long compositions, and this is just one of the pieces of software being released under the new Rend-o-matic project.

Developer Diaries – Uniting musicians with AI and IBM Cloud Functions

Created by a team of musically-inclined IBM developers, the Rend-o-matic project features a web-based interface that allows artists to record their individual segments via a laptop or phone. The individual segments are processed using acoustic analysis and AI to identify common patterns across multiple segments which are then automatically synced and output as a single track. Each musician can record on their own time in their own place with each new version of the song available as a fresh MP3 track. In order to scale the compute needed by the AI, the application uses IBM Cloud Functions in a serverless environment that can effortlessly scale up or down to meet demand without the need for additional infrastructure updates. Rend-o-matic is itself built upon open source technology, using Apache OpenWhisk, Apache CouchDB, Cloud Foundry, Docker, Python, Node.js, and FFmpeg. 
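
As a rough sketch of the synchronization idea described above (not the actual Rend-o-matic algorithm), one common technique is to cross-correlate two recordings of the same material and shift one by the offset that maximizes the correlation. The Python example below assumes the segments have already been decoded, for example with FFmpeg, into mono NumPy arrays at the same sample rate; the function names are hypothetical.

    import numpy as np

    # Hypothetical sketch: estimate the offset (in samples) between two mono
    # recordings of the same material by brute-force cross-correlation, then
    # trim or pad the segment so it starts together with the reference.
    # A production system would use an FFT-based correlation for speed.
    def estimate_offset(reference, segment):
        correlation = np.correlate(segment, reference, mode="full")
        return int(np.argmax(correlation)) - (len(reference) - 1)

    def align(reference, segment):
        offset = estimate_offset(reference, segment)
        if offset > 0:
            # The material appears later in the segment: trim the extra lead-in.
            return segment[offset:]
        # The material appears earlier in the segment: pad the start with silence.
        return np.concatenate([np.zeros(-offset, dtype=segment.dtype), segment])

In practice, the aligned parts would then be padded to a common length and mixed down, for example with FFmpeg, to produce the consolidated track.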

Since its creation, Choirless has been incubated and improved as a Call for Code project, with an enhanced algorithm, increased availability, real-time audio-level visualizations, and more. The solution has been released for testing, and as of January, users of the hosted Choirless service built upon the Rend-o-matic project – including school choirs, professional musicians, and bands – have recorded 2,740 individual parts forming 745 distinct performances.

Call for Code invites developers and problem-solvers around the world to build and contribute to sustainable, open source technology projects that address social and humanitarian issues while ensuring the top solutions are deployed to make a demonstrable difference.  Learn more about Call for Code. You can learn more about Rend-o-matic, sample the technology, and contribute back to the project at https://choirless.github.io/ 

Throughout the modern business era, industries and commercial operations have shifted substantially to digital processes. Whether you look at EDI as a means to exchange invoices or at today's cloud-based billing and payment solutions, businesses have steadily moved toward increasingly digital operations. In the last few years, we've seen the promises of digital transformation come alive, particularly in industries that have shifted to software-defined models. The next step of this journey will involve enabling digital transactions through decentralized networks.

A fundamental adoption issue will be figuring out who controls a decentralized network and decides how it is governed. It may seem oxymoronic at first, but decentralized networks still need governance. A future may hold autonomously self-governing decentralized networks, but that model is not accepted in industries today. The governance challenge with decentralized network technology lies in deciding who establishes and maintains policies, and how: network operations, on/offboarding of participants, fee setting, configurations, and software changes are among the issues that must be settled to achieve a successful network. No company wants to participate in, or take a dependency on, a network that is controlled or run by a competitor, a potential competitor, or any single stakeholder at all, for that matter.

Earlier this year, we presented a solution for Open Governance Networks that enable an industry or ecosystem to govern itself in an open, inclusive, neutral, and participatory model. You may be surprised to learn that it’s based on best practices in open governance we’ve developed over decades of facilitating the world’s most successful and competitive open source projects.

The Challenge

For the last few years, a running technology joke has been "describe your problem, and someone will tell you blockchain is the solution." Many other concerns have been raised and much confusion created as overnight headlines hyped cryptocurrency schemes. Despite all this, behind the scenes and all along, sophisticated companies understood that distributed ledger technology could be a powerful enabler for tackling complex challenges in an industry, or even in a segment of an industry.

At the Linux Foundation, we have focused on enabling those organizations to collaborate on open source enterprise blockchain technologies within our Hyperledger community. That community has driven collaboration on every aspect of enterprise blockchain technology, including identity, security, and transparency. Like other Linux Foundation projects, these enterprise blockchain communities are open, collaborative efforts. Many vertical industry participants have engaged, from retail, automotive, aerospace, and banking, among others, bringing real industry challenges they needed to solve. And in this subset of cases, enterprise blockchain is the answer.

The technology is ready. Enterprise blockchain has been through many proof-of-concept implementations, and we’ve already seen that many organizations have shifted to production deployments. A few notable examples are:

  • Trust Your Supplier Network: 25 major corporate members, from Anheuser-Busch InBev to UPS; in production since September 2019.
  • Foodtrust: launched in August 2017 with ten members; now being used by all major retailers.
  • Honeywell: 50 vendors with storefronts in its new GoDirect Trade marketplace; in its first year, GoDirect Trade processed more than $5 million in online transactions.

However, just because we have the technology doesn't mean we have the appropriate conditions to solve adoption challenges. A set of challenges around network governance has become a "last mile" problem for industry adoption. While there are already many examples of successful production deployments and multi-stakeholder engagements for commercial enterprise blockchains, specific adoption scenarios have been halted by uncertainty, or mistrust, about who will govern a blockchain network and how.

To state the issue precisely: in many situations, company A does not want to depend on, or trust, company B to control a network. For solutions that require broad industry participation to succeed, you can name any industry and find its company A and company B.

We think the solution to this challenge will be Open Governance Networks.

The Linux Foundation vision of the Open Governance Network

An Open Governance Network is a distributed ledger service, composed of nodes, operated under the policies and directions of an inclusive set of industry stakeholders. 

Open Governance Networks will set the policies and rules for participation in a decentralized ledger network that acts as an industry utility for transactions and data sharing among participants that have permissions on the network. The Open Governance Network model allows any organization to participate. Those organizations that want to be active in sharing the operational costs will benefit from having a representative say in the policies and rules for the network itself. The software underlying the Open Governance Network will be open source software, including the configurations and build tools so that anyone can validate whether a network node complies with the appropriate policies.
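To make the idea of openly verifiable node compliance concrete, here is a purely hypothetical Python sketch of checking a node's configuration against a published policy. The policy fields, values, and schema are invented for illustration; a real Open Governance Network would publish its own policy format alongside its configurations and build tools.

    # Purely illustrative: openly published policies let anyone check a node's compliance.
    # All field names, values, and the schema below are hypothetical.
    import json

    POLICY = {
        "min_software_version": (2, 2),   # hypothetical minimum ledger software version
        "tls_required": True,
        "max_block_size_kb": 512,
    }

    def parse_version(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))

    def policy_violations(node_config: dict) -> list:
        """Return a list of violations; an empty list means the node complies."""
        violations = []
        if parse_version(node_config.get("software_version", "0")) < POLICY["min_software_version"]:
            violations.append("software version below policy minimum")
        if POLICY["tls_required"] and not node_config.get("tls_enabled", False):
            violations.append("TLS is required by network policy")
        if node_config.get("block_size_kb", 0) > POLICY["max_block_size_kb"]:
            violations.append("block size exceeds policy maximum")
        return violations

    # Hypothetical node configuration, e.g. loaded from the node's published config file.
    config = json.loads('{"software_version": "2.3", "tls_enabled": true, "block_size_kb": 256}')
    print(policy_violations(config))   # [] means the node complies

Because both the policy and the tooling are open source, any participant (or outside observer) could run the same check, which is the point of publishing the configurations and build tools.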

Many who have worked with the Linux Foundation will recognize this as the open, neutral, and participatory governance model under a nonprofit structure that has been thriving for decades in successful open source software communities. All we're doing here is taking the same core principles of what makes open governance work for open source software, open standards, and open collaboration and applying those principles to managing a distributed ledger. This is a model the Linux Foundation has used successfully in other communities, such as the Let's Encrypt certificate authority.

Our ecosystem members trust the Linux Foundation to help solve this "last mile" problem using open governance under a neutral nonprofit entity. This is one solution to the concerns about neutrality and distributed control. In pan-industry use cases, it is generally not acceptable for one participant in the network to hold power that could be used as an advantage over someone else in the industry. The control of a ledger is a valuable asset, and competitive organizations are generally wary of allowing any single entity to control it. If not hosted in a neutral environment for the community's benefit, network control can become a point of leverage over network users.

We see this neutrality-of-control challenge as the primary reason some privately held networks have struggled to gain widespread adoption. To encourage participation, industry leaders are looking for a neutral governance structure, and the Linux Foundation has proven that open governance models accomplish this exceptionally well.

This neutrality-of-control issue is very similar to the rationale for public utilities. Because the economic model mirrors a public utility, we debated calling these "industry utility networks." In our conversations, we have learned that industry participants are open to sharing the cost burden of standing up and maintaining a utility. Still, they want a low-cost model, not a profit-maximizing one. That is why our nonprofit model makes the most sense.

It’s also not a public utility, in that each network we foresee today would restrict participation to those who have a stake in the network, not any random person in the world. There’s a layer of human trust that our communities have been enabling on top of distributed networks, which started with the Trust over IP Foundation.

Unlike public cryptocurrency networks where anyone can view the ledger or submit proposed transactions, industries have a natural need to limit access to legitimate parties in their industry. With minor adjustments to address the need for policies for transactions on the network, we believe a similar governance model applied to distributed ledger ecosystems can resolve concerns about the neutrality of control. 
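As a toy illustration of this permissioned model (and not any specific ledger's API), the sketch below shows the contrast with a public chain by rejecting transactions from parties that have not been onboarded. The member registry and function names are hypothetical.

    # Toy illustration only: a permissioned network accepts transactions solely from
    # onboarded, identified members, unlike a public chain open to anyone.
    MEMBER_REGISTRY = {"acme-logistics", "globex-manufacturing", "initech-retail"}  # hypothetical

    def may_submit(participant_id: str) -> bool:
        """Network policy: only onboarded participants may submit transactions."""
        return participant_id in MEMBER_REGISTRY

    def submit_transaction(participant_id: str, payload: dict) -> str:
        if not may_submit(participant_id):
            raise PermissionError(f"{participant_id} is not an onboarded network participant")
        # Further validation and endorsement would follow the network's published policies.
        return "accepted"

    print(submit_transaction("acme-logistics", {"po_number": "12345"}))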

Understanding LF Open Governance Networks

Open Governance Networks can be reduced to the following building block components:

  • Business Governance: Networks need a decision-making body to establish core policies (e.g., network policies), make funding and budget decisions, contract with a network manager, and handle other business matters necessary for the network’s success. The Linux Foundation establishes a governing board to manage the business governance.
  • Technical Governance: Networks will require software. A technical open source community will openly maintain the software, specifications, or configuration decisions implemented by the network nodes. The Linux Foundation establishes a technical steering committee to oversee technical projects, configurations, working groups, etc.
  • Transaction Entity: Networks will require a transaction entity that will a) act as counterparty to agreements with parties transacting on the network, b) collect fees from participants, and c) execute contracts for operational support (e.g., hiring a network manager).

Of these building blocks, the Linux Foundation already offers its communities the Business and Technical Governance needed for Open Governance Networks. The final component is new: LF Open Governance Networks.

LF Open Governance Networks will enable our communities to establish their own Open Governance Network and have an entity to process agreements and collect transaction fees. This new entity is a Delaware nonprofit, a nonstock corporation that will maximize utility and not profit. Through agreements with the Linux Foundation, LF Governance Networks will be available to Open Governance Networks hosted at the Linux Foundation. 

If you’re interested in learning more about hosting an Open Governance Network at the Linux Foundation, please contact us at governancenetworks@linuxfoundation.org

Click here to read the January 2021 Linux Foundation Newsletter