How to Evaluate Open Source Projects?
If you’re in the open source world, you probably don’t need a lot of convincing about the high-quality software that results from the open source development model. Mass collaboration coupled with vociferous peer review makes for better code and products. It just does. No matter how much of a monopoly might exist today, this collaboration cannot be duplicated within the proprietary software model.
But there remain companies and organizations that still need convincing. Not because open source software holds any secrets — in fact, just the opposite is true given its transparency — but because adoption of new technologies is a process, not a destination. It will always be that way, and that is a good thing for all of us. Peer review. Code scrutiny. This will continue to make all software better.
To this end, tools that help other developers utilize open source programs are extremely important.
Today, Coverity is releasing application architecture diagrams from over 2,500 open source projects showing the key components that make up a given software project. This visual presentation of an application’s architecture and related data provides a fascinating and detailed portrait of the software analyzed and can be a great tool in evaluating what the software can do. Today’s release from Coverity exemplifies what transparency in software development can produce.
As an aside, this announcement only makes me wish that we could provide similar analysis to our government legislation. There is a strong push to provide the same transparency and participation ethos of the open source world to government. Let’s hope in a few years I can write about a similar project being applied to our federal, state and local bills.
Coverity’s SCAN, the software behind this big release of data, was originally a part of the Department of Homeland Security’s Open Source Hardening Project. The data provides a clear map for navigating the inner workings of an OSS project as well as a clear path to developing similar functionality.
Back in 2006, Jon Corbet of LWN.net reported on Coverity’s initial defect survey results using an early version of SCAN. The company claimed: “The LAMP stack — Linux, Apache, MySQL, and Perl/PHP/Python — showed significantly better software quality above the baseline with an average of 0.290 defects per thousand lines of code compared to an average of 0.434 for the 32 open source software projects analyzed.” Corbet noted that some of the results didn’t immediately square with the number of security advisories released, and commenters pointed out that the definition of a “defect” was unclear.
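The metric behind those figures is straightforward: defects found divided by thousands of lines of code (KLOC). A minimal sketch of the calculation — the function name and the input numbers here are illustrative, not Coverity’s actual methodology:

```python
def defects_per_kloc(defect_count, lines_of_code):
    """Defect density: defects per thousand lines of code (KLOC)."""
    return defect_count / (lines_of_code / 1000)

# Illustrative only: 29 defects in a 100,000-line codebase yields
# a density of 0.29, matching the LAMP-stack figure cited above.
print(defects_per_kloc(29, 100_000))  # 0.29
```

As the LWN commenters noted, the number is only as meaningful as the definition of a “defect” feeding into it.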
SCAN has progressed significantly over the past three years, and today’s announcement focuses on architecture diagrams, not defects. The data is released under the Creative Commons license and is available on Coverity’s SCAN site.