One of the universal constants of open source technology is that there are very few constants. If there is a problem to be solved or a task to accomplish, it’s a fair bet that there will be multiple projects, applications, and approaches addressing it. This is why we don’t have one distribution, one browser, or one office suite. When developers think they have a better way to build a mousetrap, open source licensing and culture allow them to go off on their own and build that mousetrap.

This tendency toward diversity extends to the open source systems management community: there are a number of systems management and monitoring platforms, and a plethora of monitoring tools and plug-ins for those platforms. For sysadmins, simply finding these tools can be a daunting process, let alone figuring out how they work.

Today, GroundWork Open Source, Inc. launched MonitoringForge, a site designed to be a centralized gateway to all things monitoring.

I spoke with Tara Spalding, VP of Marketing at GroundWork, about the new site and why it could fill a central role.

“There are 500 or so open source developers around the world working on monitoring,” Spalding said, “with some working on the core platforms and many more working on the plug-ins.” The problem is, there are over 200 Web sites dedicated to IT monitoring and systems management tools. These sites are not only numerous, but also inconsistent in how they present information to visitors. The mission of MonitoringForge is to lend continuity to all these different sources of information and community.

But it raises the question: why isn’t MonitoringForge just going to be site #201 in the list of monitoring sites?

According to Spalding, the site is not all about GroundWork. “The site is agnostic,” she explained. “Zenoss has agreed to participate, as well as other IT management companies.” In fact, 1,700 open source projects and plug-ins are ready to go for MonitoringForge visitors today.

Participation is this high because projects can either host their code directly on MonitoringForge or create a project front-end that passes through to their existing development site. The Multi Router Traffic Grapher (MRTG) project is an example of the latter: its MonitoringForge front-end eventually drills down to MRTG’s native site.

This pass-through makes it easy for MonitoringForge to set itself up as a gateway. The proof of the pudding will be how seamless the site’s search capabilities are for visitors. The site itself is officially in beta, Spalding emphasized, as the development team plans to add more features in the near future.

If you have an interest in, or a need for, monitoring and systems management, you should surf over to MonitoringForge and give it a good test run.

Article Source: News and Thoughts from Inside the Linux Foundation
September 14, 2009, 10:48 am

In the run-up to LinuxCon, we’ve sat down with a number of the conference’s keynote speakers. This week it’s IBM’s Vice President of Open Source and Linux, Bob Sutor. Bob is kicking off LinuxCon with his keynote, “Regarding Clouds, Mainframes, Desktops and Linux,” and is also participating in a panel discussion with Oracle’s Monica Kumar and Adobe’s Dave McAllister on Open Standards and Linux. Bob is a well-known authority on Linux and open standards (as well as a writer and guitar player), so we’re extremely pleased to have him speak at LinuxCon. I caught up with him via email last week.

Q: You’re opening up LinuxCon with the morning keynote on day 1 and will be addressing the cloud, mainframes and the desktop. Where is the single biggest opportunity for Linux? Why?

A: Just one? I think Linux is such a natural for virtualization, both as a host and as a guest, and this will drive Linux even deeper into datacenters. Why? Linux and virtualization increase efficiency, allow consolidation, help reduce power and heat generated, and reduce server footprint. When you combine this with the quality of service offered by mainframes, you get even more benefits. When you open all this up to new ways of scheduling and managing applications, clouds emerge. So I think virtualization is key to what will foster greater use of Linux in the next decade.

Q: It has been 18 years now since Linus posted his “hobby” operating system online. And, IBM this year is celebrating 10 years of Linux. What has been your biggest surprise as the technology has matured over nearly two decades?

A: Linux could have stayed small and appealed to a small group of computer scientists. Instead, the community development process and leadership nurtured the operating system so that it has kept up with the needs of all the different kinds of Linux users. Many would even say that Linux has stayed ahead of those needs. So Linux has evolved from a one person project to something that runs many of the most mission critical applications around the world. I think anyone would be surprised by that!

Q: IBM has always said that Linux is smart (and big) business. Other companies have followed suit. As one of the original corporate backers of the technology, what best practices can you share with companies new to the Linux development process?

A: I’ll quote or paraphrase Eric Raymond and say that companies new to the Linux development process must “scratch their own itch.” That means understanding what Linux does and does not do for them today, and then throwing their efforts into solving the latter issues. Focus first on your areas of current expertise and gain new expertise by working with other members of the community. From a business perspective, use Linux to complement or enhance your existing or planned offerings. Work actively to take Linux to new places it has never gone before (and sorry to channel Star Trek!).

Q: IBM remains high on the list of “sponsors” of the Linux kernel, according to the Linux Foundation’s recent “Who Writes Linux” update. Can you tell us how that “sponsorship” translates into real business results for your company?

A: If we are a successful sponsor of this open source project, Linux runs very well on top of our hardware and underneath our software, but not to our exclusive advantage. As a public company, success as a sponsor means that Linux delivers increased revenue and shareholder value via the hardware and software we sell and the business and technology services we offer.

Q: Can you give us an update from the open standards front?

A: I think people are really starting to understand the value of truly open standards versus proprietary specifications that have been forced upon them. One important open standard that has gotten a lot of deserved attention in recent years, the Open Document Format, is being adopted in more and more countries around the world. The Open Cloud Manifesto has over 250 company and group supporters. I believe the message is clear: the industry and users will benefit the most from an emerging technology when open standards are at the core, and there as early as possible.

Q: What are you most looking forward to at the debut year of LinuxCon?

A: I always try to get an early sense of the vibe of a conference. Is there excitement among the participants? Are people looking forward to creating more technical innovation and greater business growth? Is the community expanding? Are there grand challenges that developers are ready to leap into? I think I’ll see, hear, and feel all that at LinuxCon. I’m also looking to meet in person many of the people involved with the Linux Foundation.

[Also: if you’re interested in hearing Bob in person, please register soon as we are almost at capacity. You can also register online for $99 and watch from the comfort of your computer.]

Article Source: Andy Updegrove’s Blog
September 14, 2009, 9:42 am

Well, it’s been a busy week in Lake Wobegon, hasn’t it? First, the Wall Street Journal broke the story that Microsoft had unwittingly sold 22 patents, not to the Allied Security Trust (which might have resold them to patent trolls), but to the Open Invention Network (which might someday assert them back against Microsoft). A few days later, perhaps sooner than planned, Microsoft announced the formation of a new non-profit organization, the CodePlex Foundation, with the mission of “enabling the exchange of code and understanding among software companies and open source communities.”


The interweaving of open standards and open source software is common enough that it doesn’t always register with folks why open source companies prefer open standards. There’s an automatic assumption that if a company uses open source technology, then of course it will use open standards.

In actuality, there are two big reasons why any company (proprietary or otherwise) would choose to use open standards. The first (and more common, I think) reason is that introducing an open standard levels the playing field for all the participants in that sector. Smaller, growing companies can band together on an open working standard and compete against the one or two giants in the same sector.

The second reason happens less often: a company (entrenched or otherwise) in a given sector may choose to establish or adhere to an open standard because the marketplace is saturated with competition and customer growth would improve for everyone if standards were followed. It’s a subtle difference, but it’s the key reason behind the Content Management Interoperability Services (CMIS) standard in the enterprise content management (ECM) sector.

This became clear to me after I spoke with Dr. Ian Howells, Chief Marketing Officer of Alfresco, at last week’s Red Hat Summit. Howells outlined CMIS for me, and its potentially enormous impact on the ECM marketplace.

Right now, the big ECM players (SharePoint, Documentum, Vignette) tend to use document management and collaboration systems that follow their own formats and workflows. There’s some interoperability work, but nothing really major. A big reason? While ECM has been around for quite some time, Howells estimates that only a small part of the potential customer base for any ECM solution has actually been tapped.

This gets back to the standards issue. While Alfresco can do a lot of what the other ECM platforms can do, some ECM apps are better at specific jobs such as fax management, file sharing, or what have you. This specialization has led a lot of existing customers down many different ECM paths, paths that don’t play well with each other.

Alfresco’s approach has been to try to use a broader ECM strategy that appeals to a larger customer base, as well as make it very easy for potential customers to download and try Alfresco.

Now, with CMIS in the public comment stage at OASIS, the potential for the ECM market to explode is close at hand. With companies like IBM, Oracle, and Microsoft working on the standard, as well as the more specialized ECM vendors, there is now an opportunity for former competitors to partner with each other and deliver joint solutions to new customers.

When CMIS is approved, customers will no longer have to accept the “my way or the highway” approach to ECM tools, which has been a real barrier to ECM adoption of any kind. Given its enormous success in this market already, Alfresco stands to benefit greatly once the CMIS standard is in place.

That’s good news for Linux, too: Howells told me that while most customers download Alfresco for Windows in the try-out phase, a strong majority of those who choose to subscribe to Alfresco deploy on Linux platforms.

While I was away on vacation a couple of weeks ago, the Fake Linus Torvalds were unleashed upon the Twitter community.

Four–count ’em–four prominent members of the information technology community are sharing their inner ids with the rest of the microblogging world on Twitter and Identi.ca.

To say there is a slight lack of inhibition on their part is perhaps a bit of an understatement. The recent tweet from flt#2 about the tattoo on Bill Gates’ left buttock was a pretty big ring on the clue phone.

The others are no slouches in the razzing department, either. Take flt#1‘s first post: “Why do Debian people annoy me so much? Yet they do.” Even Twitter itself is not immune as a target, as this tweet from flt#3 demonstrates: “Is Twitter running on Minix or something? Why the hell is it down so often?”

Of course, it’s not all about fight club. There are some positive services coming out of this, like this tweet from flt#4: “If you re-tweet this, I will fix a kernel bug for you. Promise.”

When this Twitter takeover was announced, Jim Zemlin also noted that a couple of weeks before LinuxCon, you would be able to vote for your favorite FLT.

That voting system is up and running here on Linux.com now, and I invite you to check out the tweets of these four imposters and choose your favorite pseudo-Torvalds. The one with the most votes will receive the coveted Silver Shaker when we announce the identities of all the tweeters at LinuxCon.

I’d vote myself, but I know who they are. Do you have any guesses?

Article Source: Linux Weather Forecast Blog
September 10, 2009, 8:14 am

After a development cycle lasting exactly three months (June 9 to September 9), Linus Torvalds has released the 2.6.31 kernel. This cycle saw the inclusion of almost 11,000 different changes from over 1,100 individual developers representing some 200 companies; at least 16% of the changes came from developers representing only themselves. In other words, over the last three months, Linux kernel developers were putting 118 changes into the kernel every day. That is a lot of work.

Needless to say, all these changes have brought a number of new features to Linux. For the full list, I can only recommend heading over to the always-excellent KernelNewbies 2.6.31 page. Some highlights include USB 3 support (making Linux the first operating system to support that technology), much-improved support for ATI Radeon graphics chipsets (though that is still a work in progress), a number of useful tools aimed at finding kernel bugs before they bite users, IEEE 802.15.4 “personal area network” support, a number of tracing improvements, and vast numbers of new drivers.

The change that may be most visible to a lot of users, though, is the merging of the new performance counter subsystem. Performance counters are, at their core, a hardware feature; modern CPUs can track events like cache misses and report them to the operating system. Developers who are trying to squeeze the most performance out of a chunk of code are highly interested in this information; it’s often the best way to know if micro-optimization changes are actually working. Linux has been very slow to get performance counter support in the mainline for a number of reasons, but it’s finally been added for 2.6.31.

Additionally, the performance counter code has been integrated with the tracing framework, meaning that hits on tracepoints can be treated like any other performance counter event. That makes the integration of a number of operating system events a relatively straightforward task – system administrators can do it on production systems without needing to make any kernel changes at all. See this article for a performance counter overview, and this article for a discussion of the integration between performance counters and tracepoints.
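
To make that concrete, here is a minimal sketch in Python of how a script might drive the new perf tool (built from the kernel’s tools/perf directory) to count both hardware events and a tracepoint for a given workload. The event names shown are standard ones, but output formats and permissions vary by system, so treat this as illustrative rather than definitive.

```python
#!/usr/bin/env python3
"""Minimal sketch: run a workload under `perf stat`, mixing hardware
counters with a tracepoint event. Assumes the perf utility is installed;
counting tracepoints may require root or a relaxed perf_event_paranoid."""
import subprocess
import sys

# Two hardware counters plus one scheduler tracepoint -- the integration
# described above lets both be counted through the same interface.
EVENTS = ["cycles", "cache-misses", "sched:sched_switch"]

def perf_stat(workload):
    cmd = ["perf", "stat", "-e", ",".join(EVENTS), "--"] + workload
    # perf stat writes its counter summary to stderr, leaving the
    # workload's own stdout untouched.
    completed = subprocess.run(cmd, stderr=subprocess.PIPE, text=True)
    return completed.stderr

if __name__ == "__main__":
    print(perf_stat(sys.argv[1:] or ["sleep", "1"]))
```

Running it as `python3 perf_sketch.py du -s /usr`, for example, would print the counts gathered while that command ran; the same event syntax also works with perf record for later analysis with perf report.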

So what comes next? As I write this, Linus is taking his traditional one-day break, so the merging of changes for 2.6.32 has not yet begun. That should happen soon, with the merge window staying open through about the 24th; it will likely close about the same time that LinuxCon ends. So we’ll be in a good position to talk about 2.6.32 features at the kernel roundtable event at LinuxCon on the 21st.

My guesses? The 2.6.32 kernel should come out sometime around the beginning of December. It will include even better ATI Radeon support (with proper 3D acceleration, hopefully), the much-publicized “hv” drivers from Microsoft (though those may be removed before too long since the developers seem to have lost interest in maintaining them), some significant power management improvements, a number of changes aimed at improving virtualization performance, and a vast number of other things. Stay tuned.

Article Source: Jim Zemlin’s Blog
September 9, 2009, 1:49 pm

Earlier this week, the Wall Street Journal’s Nick Wingfield broke a story on Microsoft selling a group of patents to a third party. The end result of this story is good for Linux, even though it doesn’t allay fears of ongoing attacks by Microsoft. Open Invention Network, working with its members and the Linux Foundation, pulled off a coup, managing to acquire some of the very patents that seem to have been at the heart of recent Microsoft FUD campaigns against Linux. Break out your white hats: the good guys won.

The details are that Microsoft assembled a package of patents “relating to open source” and put them up for sale to patent trolls. Microsoft thought they were selling them to AST, a group that buys patents, offers licenses to its members, and then resells the patents. AST calls this their “catch and release” policy. Microsoft would certainly have known that the likely buyer when AST resold their patents in a few months would be a patent troll that would use the patents to attack non-member Linux companies. Thus, by selling patents that target Linux, Microsoft could help generate fear, uncertainty, and doubt about Linux, without needing to attack the Linux community directly in their own name.

This deal shows that the mechanisms the Linux industry has constructed to defend Linux are working, even though the outcome also shows that Microsoft continues to act antagonistically toward its own customers.

We can be thankful that these patents didn’t fall into the hands of a patent troll, which has no customers and thus doesn’t care about customer or public backlash. Luckily, the defenses put in place by the Linux industry show that collaboration can result in great things, including the legal protection of Linux.

The reality is that Windows and Linux will both remain critical parts of the world’s computing infrastructure for years to come. Nearly 100% of Fortune 500 companies support deployments of both Windows and Linux. Those customers, who have the ear of Microsoft CEO Steve Ballmer, need to tell Microsoft that they do not want Microsoft’s patent tricks to interfere with their production infrastructure. It’s time for Microsoft to stop secretly attacking Linux while publicly claiming to want interoperability. Let’s hope that Microsoft decides going forward to actually try to win in the marketplace, rather than continuing to distract and annoy us with their tricky patent schemes. And, let’s offer a big round of applause to Keith Bergelt and OIN, for their perfectly executed defense of the Linux community.

Article Source: Community-cation
September 3, 2009, 1:40 pm

The big news coming out of Red Hat for this year’s Red Hat Summit is, at first blush, a little anticlimactic. The release of Red Hat Enterprise Linux 5.4 seems like just another point release for an already stable and successful operating system. A few more bells and whistles, but not much for users to get excited about.

But dig a little deeper into the technology coming out in this release, and the strategy behind it, and you will find a narrative that describes how open source will be a leader in the growing cloud arena. The RHEL 5.4 announcement is just the beginning of the story.

There are a lot of new improvements in RHEL 5.4: SystemTap, Generic Receive Offload, and a preview implementation of the malloc memory allocation library, to name a few. But the one Red Hat has highlighted in all of its talk about the release is the inclusion of the KVM virtualization toolset. On the surface, this seems like hype. After all, RHEL has had virtualization in the form of Xen since the release of RHEL 5. What’s the big deal?

First, RHEL 5.4 is the first in a series of releases scheduled for 2009 that will center on virtualization, the key one being the upcoming Red Hat Enterprise Virtualization (RHEV) Hypervisor, which will have the RHEL 5.4 codebase underneath. The other offerings, RHEV Manager for Desktops (with fully integrated VDI management) and RHEV Manager for Servers, will round out Red Hat’s virtualization portfolio and introduce a way for customers to pick and choose how they want to virtualize and manage their application stacks.

It’s important to pause here and answer a question you might be asking: if Red Hat is so gung-ho on virtualization, why didn’t they just wait and release RHEV at the same time as RHEL 5.4? In fact, this question came up at Tuesday’s press conference with Red Hat executives. Here’s how Brian Stevens, Red Hat’s CTO, explained it: releasing RHEL 5.4 now gives customers who are interested in developing apps for RHEV a head start on that development. Having identical codebases in the two products makes that easy.

If this were just about Red Hat releasing a whole bunch of virtual platforms in the near future, the story might end there, leaving us with the vaguely satisfied/stunned feeling one might have after seeing the latest Hollywood summer blockbuster.

But here’s the second part of the story: with some existing technologies and a little bit of new tech thrown in, Red Hat is hoping to help cloud customers easily migrate their virtual machines to any cloud–public, private, or anything in-between. I spoke with Stevens prior to his Wednesday keynote, and learned a little bit more about how this aspect of RHEL 5.4 and its upcoming descendants–which Red Hat is calling the “hybrid cloud”–will work.

Earlier, in the press conference, I asked a question about application development between the Xen and the KVM virtual platforms. The application layer is transparent, Stevens explained to me, so developing for one is no different than developing for the other. But, he added, there are still important differences between virtual machines, despite the advent of the Open Virtualization Format (OVF) standard that is supposed to remove compatibility obstacles.

While the OVF specification provides a format for packaging virtual machines and applications for deployment across different virtualization platforms, Stevens likens OVF to an overnight envelope sent to a recipient: each envelope from the service looks the same on the outside, but the contents inside can be far different. He is skeptical that OVF offers a real solution for virtual image compatibility.

Since RHEL 5.4 has both KVM and Xen, RHEL customers will face this compatibility problem if they try to switch images from one hypervisor to the other. Red Hat’s approach to solving it is the libguestfs tool, which is part of the hybrid cloud solution. libguestfs will make it possible to convert a guest OS to another flavor, such as KVM to ESX or Xen to KVM.

Moreover, this conversion can be done on-disk, meaning you don’t have to boot the guest OS and run it to make configuration changes. Conversion is done on the fly, and, Stevens added, you can also cat configuration files within the guest OS, again without ever running it.
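
As a rough illustration of that on-disk approach, here is a short sketch using the Python bindings that ship with libguestfs (the guestfs module). The disk image path and the files being read and written are hypothetical, and this shows only the offline inspect-and-edit side of things, not the hypervisor-to-hypervisor conversion itself.

```python
import guestfs

# Hypothetical guest disk image -- substitute a real path.
DISK = "/var/lib/libvirt/images/guest.img"

g = guestfs.GuestFS()
g.add_drive_opts(DISK, readonly=0)   # use readonly=1 for inspection only
g.launch()

# Find the guest's root filesystem and mount its filesystems in order,
# all without ever booting the guest OS.
root = g.inspect_os()[0]
mounts = g.inspect_get_mountpoints(root)
for mountpoint in sorted(mounts, key=len):
    g.mount(mounts[mountpoint], mountpoint)

# "cat" a configuration file from the offline guest...
print(g.cat("/etc/fstab"))

# ...and write a change back, again with the guest never running.
g.write("/etc/motd", "Reconfigured offline via libguestfs\n")

g.shutdown()   # cleanly stop the libguestfs appliance
g.close()
```

The point Stevens was making maps directly onto what these bindings expose: the guest’s disks are opened and modified in place, so no running instance of the guest is ever involved.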

Another part of the hybrid cloud solution is a grid-level management tool known as MRG Grid, which Red Hat is developing in partnership with the University of Wisconsin-Madison through the Condor Project. In a video demonstration of the MRG (Messaging, Realtime, Grid) Grid tool, the physical resources of a cluster of machines are easily managed by setting the priority of any job using the cluster.

In the demo, four animated movies share resources for rendering (likely a nod to Red Hat’s marquee customer, DreamWorks Animation), but when one movie needs more rendering time, the Condor tool lets the manager dynamically set the percentage of resources it needs, automatically balancing the load with the other jobs. Upon applying the new priorities, the resources the rush job needs are immediately released from the other three movies, and the express job seamlessly rolls in to take over those resources until the task is completed.

This dynamic resource management will let cloud users schedule workloads across any set of resources to which they have access, even public clouds, provided the access rights and configuration are already set up.

Another part of the hybrid cloud, Deltacloud, will make such sharing much easier. Currently, every cloud, public or private, has its own APIs and infrastructure: Amazon EC2, Rackspace, VMware, and so on. The same VM image that runs on one of these services cannot immediately run on another, because there are too many differences between the tools and options each cloud runs.

This is all by design, Stevens indicated, since cloud providers are scrambling to find ways to lock customers in to their cloud services. It’s the typical frontier-technology scenario: new territory opens up, and vendors make land grabs to try to keep customers for as long as they can. Stevens was quick to point out that while VMware announced its cloud initiative, vCloud Express, this week at VMworld, he has yet to find any code or any community of cloud developers behind it. Others have pointed out that VMware’s promises of workload portability only hold if both the customer and the service provider use vSphere.

Deltacloud will help head off vendor lock-in by creating an abstraction layer: cloud application developers build one app (or stack of apps) against the Deltacloud layer, which in turn operates smoothly on whatever cloud service is running underneath, be it EC2, RHEV-M, or any other cloud API and infrastructure.
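
The general shape of such an abstraction layer is easy to sketch. The Python below is purely illustrative: the class and method names are hypothetical and are not Deltacloud’s actual API; they simply show how application code can bind to a driver interface rather than to any one provider.

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """One driver per back-end cloud; applications see only this interface."""

    @abstractmethod
    def start_instance(self, image_id: str) -> str:
        """Boot an instance from an image and return its identifier."""

    @abstractmethod
    def stop_instance(self, instance_id: str) -> None:
        """Shut the instance down."""

class EC2Driver(CloudDriver):
    def start_instance(self, image_id: str) -> str:
        # EC2-specific API calls would go here.
        return "i-ec2-example"

    def stop_instance(self, instance_id: str) -> None:
        pass  # EC2-specific teardown

class RHEVDriver(CloudDriver):
    def start_instance(self, image_id: str) -> str:
        # RHEV-M-specific API calls would go here.
        return "rhev-example"

    def stop_instance(self, instance_id: str) -> None:
        pass  # RHEV-specific teardown

def deploy(driver: CloudDriver, image_id: str) -> str:
    # The application never names a cloud: swapping providers means
    # handing in a different driver, not rewriting the application.
    return driver.start_instance(image_id)
```

Deltacloud itself exposes this kind of indirection as a REST service with per-cloud drivers behind it, but the principle is the same: the application binds to the abstraction, and the provider underneath can change.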

All of these chapters of the hybrid cloud story mean that vendor lock-in can be avoided, since the open source Deltacloud will have code to download and a pre-existing community infrastructure in place.

More importantly, this may mark the first time that an open source initiative has taken an early lead in a major area of technology. In other areas (operating systems, middleware, SOA, and virtualization), open source has done very well, but only after proprietary technologies had started down the path. Now, by jumping in and using the hybrid cloud tools to remove the threat of cloud-vendor lock-in, Red Hat may have set the standards for all future cloud development. And, it’s nice to say, those standards will be truly open.

And if that’s not a happily ever after, I don’t know what is.

While most businesses don’t directly compete in the same space as household names like Google, Facebook, and Twitter, the success of these companies’ services does compete with business users’ IT expectations.

That was the opening point of today’s keynote speech from Red Hat CEO Jim Whitehurst at Red Hat Summit 2009 in Chicago.

These brand-name companies, which leverage community and the power of participation, “may not be competitors to you, but they are competitors to what your employees can and should expect from your IT,” Whitehurst told the joint session crowd attending the opening keynote of this year’s Summit, which is co-located with JBoss World.

The average CIO, he went on to explain, is being asked to deliver much more on tighter budgets and, in many industries, under more regulations than ever before. “20th Century IT can’t keep up,” Whitehurst said.

Whitehurst addressed today’s audience of approximately 1,500 attendees to lay out where Red Hat currently stands in the market and where it wants to go. The main thrust of the message? Collaboration and participation are key to satisfying customers’ current and future needs.

The power of community is well known in the Linux arena, but Whitehurst still delivered a telling example: in the entire year of 1998, the Human Genome Project mapped 200 million base pairs. After the project was opened to a wider community, the results increased dramatically: in January 2003 alone, 1.3 billion base pairs were mapped.

Red Hat is pushing collaboration not just for the innovation it brings, but for openness and interoperability as well. “Do you want to buy into Larry Ellison’s vision of IT or do you want to listen to your customers?” he asked the attendees.

For the Raleigh, NC-based company, Whitehurst defined the business model: “We don’t sell software. Our stated goal is to ruthlessly commoditize the areas we go in to. It’s about delivering a better collaborative ecosystem.”

Naturally, Whitehurst sees their brand of business as a much more positive way of doing things. And he cited the collaborative model for not only what it brings to customers, but what it doesn’t bring: software bloat.

“The problem with the typical [proprietary] license model… They make money selling licenses. What happens when you have sold to your install base? What do you do? You add features, which your customers may not want, so it becomes bloat. Bloat is a stumbling block for proprietary models,” Whitehurst said.

Red Hat subscribers, he added, can download just what they need, and aren’t forced to upgrade on Red Hat’s terms. Nor are they locked into just Red Hat’s solutions.

“We’re not saying buy the whole stack from us. We don’t want to prescribe what layers customers have to use,” Whitehurst said.

This is a prevailing theme for the conference, as speakers and execs from Red Hat are repeatedly emphasizing the ability of any customer to pick and choose whatever tools they want. This melting-pot strategy seems a good fit for today’s economy, where customers don’t have to grab an entire stack of tools to implement Red Hat products.

2009 is shaping up to be one of the worst years in modern memory for corporate travel funding, which is one reason I’m extremely grateful that registrations for LinuxCon have been good so far. Because of the tough economic climate, we wanted to make sure Linux users and developers all over the world could participate in LinuxCon without leaving their cube/office/RV/tent/etc. You can watch and participate in the LinuxCon keynotes for free by registering here. The keynotes feature a great lineup of Linux luminaries, including:

  • The Kernel Panel featuring Linus Torvalds and many leaders of the kernel development team. This is a rare chance to see these guys in action in front of a mic instead of behind a keyboard.
  • Bob Sutor, IBM’s VP of Open Source and Linux, will take us through the cloud and how IBM is making the most of its investments in Linux.
  • openSUSE community manager Zonker takes us on a musical guide to the future of Linux. I can’t wait for this one.
  • Matt Asay has a stellar lineup debating the real costs of open source.
  • Intel’s Imad Sousou gives us the details of the exciting Moblin project.
  • And no Linux conference is complete without an appearance by Big Bird. Noah Broadwater, VP of Information Services at Sesame Workshop, talks about how they have used Linux to drive down costs while achieving more with their infrastructure.

For those of you who want to dive a little deeper, we have hand-picked conference sessions over the three days that you can attend for $99 (or $49 after the conference). You don’t actually have to be chained to your monitor: the system lets you archive and pause the material to review at your leisure. Highlights include:

  • Keeping Open Source Open: patents, trolls, and our friend in Redmond, with Jim Zemlin and Keith Bergelt from the Open Invention Network.
  • James Bottomley tells us how to contribute to the Linux kernel and why it makes economic sense.
  • John Ellis from Motorola on How to Manage Open Source Compliance and Governance in the Enterprise.
  • Kernel developer Chris Wright takes us through KSM, a mechanism for improving virtualization density with KVM.

There are many more sessions as well, so please register to attend if you’re interested.