Article Source Andy Updegrove’s Blog
September 14, 2009, 9:42 am

Well, it’s been a busy week in Lake Wobegon, hasn’t it? First, the Wall Street Journal broke the story that Microsoft had unwittingly sold 22 patents, not to the Allied Security Trust (which might have resold them to patent trolls), but to the Open Invention Network (which might someday assert them back against Microsoft). A few days later, perhaps sooner than planned, Microsoft announced the formation of a new non-profit organization, the CodePlex Foundation, with the mission of “enabling the exchange of code and understanding among software companies and open source communities.”

The interweaving of open standards and open source software is common enough that it doesn’t always register with folks why open source companies prefer open standards. There’s an automatic assumption that if a company uses open source technology, then of course it will use open standards.

In actuality, there are two big reasons why any company (proprietary or otherwise) would choose to use open standards. The first (and, I think, more common) reason is that introducing an open standard levels the playing field for all the participants in that sector. Smaller, growing companies can band together around an open working standard and compete against the one or two giants in the same sector.

The second reason happens less often: a company (entrenched or otherwise) in a given sector may choose to establish or adhere to an open standard because the marketplace is saturated with competition and customer growth would improve for everyone if standards were followed. It’s a subtle difference, but it’s the key reason behind the Content Management Interoperability Services (CMIS) standard in the enterprise content management (ECM) sector.

This became clear to me after I spoke with Dr. Ian Howells, Chief Marketing Officer of Alfresco, at last week’s Red Hat Summit. Howells outlined CMIS for me, and its potentially enormous impact on the ECM marketplace.

Right now, the big ECM players (SharePoint, Documentum, Vignette) tend to use document management and collaboration systems that follow their own formats and workflows. There’s some interoperability work, but nothing really major. A big reason? While ECM has been around for quite some time, Howells estimates that only a small part of the potential customer base for any ECM solution has actually been tapped.

This gets back to the standards issue. While Alfresco can do a lot of what the other ECM platforms can do, some ECM applications are better at specific tasks such as fax management, file sharing, or what have you. This specialization has led a lot of existing customers down many different ECM paths, paths that don’t play well with each other.

Alfresco’s approach has been to pursue a broader ECM strategy that appeals to a larger customer base, while making it very easy for potential customers to download and try Alfresco.

Now, with CMIS in the public comment stage at OASIS, the potential for the ECM market to explode is near at hand. With companies like IBM, Oracle, and Microsoft working on this standard, as well as the more specialized ECM vendors, there now exists an opportunity for former competitors to partner with each other and deliver joint solutions to new customers.

When CMIS is approved, customers will no longer have to accept a “my way or the highway” approach to ECM tools, which has been a real barrier to ECM adoption of any kind. Given its enormous success in this market already, Alfresco stands to benefit greatly once the CMIS standard is in place.

That’s good news for Linux, too: Howells told me that while most customers download Alfresco for Windows in the try-out phase, a strong majority of those who choose to subscribe to Alfresco deploy on Linux platforms.

While I was away on vacation a couple of weeks ago, the Fake Linus Torvalds were unleashed upon the Twitter community.

Four–count ’em–four prominent members of the information technology community are sharing their inner ids with the rest of the microblogging world on Twitter and Identi.ca.

To say there is a slight lack of inhibition on their part is perhaps a bit of an understatement. The recent tweet from flt#2 about the tattoo on Bill Gates’ left buttock was a pretty big ring on the clue phone.

The others are no slouches in the razzing department, either. Take flt#1‘s first post: “Why do Debian people annoy me so much? Yet they do.” Even Twitter itself is not immune as a target, as this tweet from flt#3 demonstrates: “Is Twitter running on Minix or something? Why the hell is it down so often?”

Of course, it’s not all about fight club. There are some positive services coming out of this, too, like this tweet from flt#4: “If you re-tweet this, I will fix a kernel bug for you. Promise.”

When this Twitter takeover was announced, Jim Zemlin also noted that a couple of weeks before LinuxCon, you would be able to vote for your favorite FLT.

That voting system is up and running here on Linux.com now, and I invite you to check out the tweets of these four imposters and choose your favorite pseudo-Torvalds. The tweeter with the most votes will receive the coveted Silver Shaker when we announce the identities of all four at LinuxCon.

I’d vote myself, but I know who they are. Do you have any guesses?

Article Source Linux Weather Forecast Blog
September 10, 2009, 8:14 am

After a development cycle lasting exactly three months (June 9 to September 9), Linus Torvalds has released the 2.6.31 kernel. This cycle saw the inclusion of almost 11,000 changes from over 1,100 individual developers representing some 200 companies; at least 16% of the changes came from developers representing only themselves. In other words, over the last three months, Linux kernel developers were putting roughly 118 changes into the kernel every day. That is a lot of work.

Needless to say, all these changes have brought a number of new features to Linux. For the full list, I can only recommend heading over to the always-excellent KernelNewbies 2.6.31 page. Some highlights include USB 3 support (making Linux the first operating system to support that technology), much-improved support for ATI Radeon graphics chipsets (though that is still a work in progress), a number of useful tools aimed at finding kernel bugs before they bite users, IEEE 802.15.4 “personal area network” support, a number of tracing improvements, and vast numbers of new drivers.

The change that may be most visible to a lot of users, though, is the merging of the new performance counter subsystem. Performance counters are, at their core, a hardware feature; modern CPUs can track events like cache misses and report them to the operating system. Developers who are trying to squeeze the most performance out of a chunk of code are highly interested in this information; it’s often the best way to know if micro-optimization changes are actually working. Linux has been very slow to get performance counter support in the mainline for a number of reasons, but it’s finally been added for 2.6.31.

Additionally, the performance counter code has been integrated with the tracing framework, meaning that hits on tracepoints can be treated like any other performance counter event. That makes instrumenting a number of operating system events a relatively straightforward task – system administrators can do it on production systems without needing to make any kernel changes at all. See this article for a performance counter overview, and this article for a discussion of the integration between performance counters and tracepoints.
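
To give a flavor of what the new subsystem exposes, here is a minimal sketch in C that counts the CPU cycles spent in a section of code. It is written against the perf_event_open() interface as it later stabilized (the 2.6.31-era naming differed slightly), and the measured loop is just a placeholder workload, so treat it as an illustration rather than a recipe.

```c
/* Sketch: count CPU cycles for a code section via the kernel's
 * performance counter interface. Uses the perf_event_open() syscall as it
 * later stabilized; the 2.6.31-era names differed slightly. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CPU_CYCLES; /* count CPU cycles */
    attr.disabled = 1;                      /* start disabled, enable below */
    attr.exclude_kernel = 1;                /* user-space cycles only */
    attr.exclude_hv = 1;

    int fd = perf_event_open(&attr, 0 /* this process */, -1, -1, 0);
    if (fd == -1) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* Placeholder workload to measure. */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 10 * 1000 * 1000; i++)
        sum += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t cycles;
    if (read(fd, &cycles, sizeof(cycles)) == sizeof(cycles))
        printf("CPU cycles used: %llu\n", (unsigned long long)cycles);

    close(fd);
    return 0;
}
```

The kernel’s in-tree perf tool builds on this same interface for everyday profiling.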

So what comes next? As I write this, Linus is taking his traditional one-day break, so the merging of changes for 2.6.32 has not yet begun. That should happen soon, with the merge window staying open through about the 24th; it will likely close about the same time that LinuxCon ends. So we’ll be in a good position to talk about 2.6.32 features at the kernel roundtable event at LinuxCon on the 21st.

My guesses? The 2.6.32 kernel should come out sometime around the beginning of December. It will include even better ATI Radeon support (with proper 3D acceleration, hopefully), the much-publicized “hv” drivers from Microsoft (though those may be removed before too long, since the developers seem to have lost interest in maintaining them), some significant power management improvements, a number of changes aimed at improving virtualization performance, and a vast number of other things. Stay tuned.

Article Source Jim Zemlin’s Blog
September 9, 2009, 1:49 pm

Earlier this week, the Wall Street Journal’s Nick Wingfield broke a story on Microsoft selling a group of patents to a third party. The end result is good for Linux, even though it doesn’t allay fears of ongoing attacks by Microsoft. Open Invention Network, working with its members and the Linux Foundation, pulled off a coup, managing to acquire some of the very patents that seem to have been at the heart of recent Microsoft FUD campaigns against Linux. Break out your white hats: the good guys won.

The details are that Microsoft assembled a package of patents “relating to open source” and put them up for sale to patent trolls. Microsoft thought it was selling them to AST, a group that buys patents, offers licenses to its members, and then resells the patents; AST calls this its “catch and release” policy. Microsoft would certainly have known that the likely buyer, when AST resold the patents in a few months, would be a patent troll that would use them to attack non-member Linux companies. Thus, by selling patents that target Linux, Microsoft could help generate fear, uncertainty, and doubt about Linux without needing to attack the Linux community directly in its own name.

This deal shows that the mechanisms the Linux industry has constructed to defend Linux are working, even though the outcome also shows that Microsoft continues to act antagonistically toward its customers.

We can be thankful that these patents didn’t fall into the hands of a patent troll, which has no customers and thus doesn’t care about customer or public backlash. The defenses put in place by the Linux industry show that collaboration can result in great things, including the legal protection of Linux.

The reality is that Windows and Linux will both remain critical parts of the world’s computing infrastructure for years to come. Nearly 100% of Fortune 500 companies support deployments of both Windows and Linux. Those customers, who have the ear of Microsoft CEO Steve Ballmer, need to tell Microsoft that they do not want Microsoft’s patent tricks to interfere with their production infrastructure. It’s time for Microsoft to stop secretly attacking Linux while publicly claiming to want interoperability. Let’s hope that Microsoft decides going forward to actually try to win in the marketplace, rather than continuing to distract and annoy us with their tricky patent schemes. And, let’s offer a big round of applause to Keith Bergelt and OIN, for their perfectly executed defense of the Linux community.

Article Source Community-cation
September 3, 2009, 1:40 pm

The big news coming out of Red Hat for this year’s Red Hat Summit is, at first blush, a little anticlimactic. The release of Red Hat Enterprise Linux 5.4 seems like just another point release for an already stable and successful operating system: a few more bells and whistles, but not much to get users excited.

But dig a little deeper into the technology coming out in this release, and the strategy behind it, and you will find a narrative that describes how open source will be a leader in the growing cloud arena. The RHEL 5.4 announcement is just the beginning of the story.

There are a lot of new improvements in RHEL 5.4: SystemTap, Generic Receive Offload, and a preview implementation of the malloc memory allocation library, to name a few. But the one Red Hat has highlighted in all of its messaging about the release is the inclusion of the KVM virtualization toolset. On the surface, this seems like hype. After all, RHEL has had virtualization in the form of Xen since the release of RHEL 5. What’s the big deal?

First, RHEL 5.4 is the first in a series of releases scheduled for 2009 that center on virtualization, the key one being the upcoming Red Hat Enterprise Virtualization (RHEV) Hypervisor, which will have the RHEL 5.4 codebase underneath. The other products, RHEV Manager for Desktops (with fully integrated VDI management) and RHEV Manager for Servers, will round out Red Hat’s virtualization offerings and introduce a way for customers to pick and choose how they want to virtualize and manage their application stacks.

It’s important to pause here and answer a question you might be asking: if Red Hat is so gung-ho on virtualization, why didn’t it just wait and release RHEV at the same time as RHEL 5.4? In fact, this was a question at Tuesday’s press conference with Red Hat executives. Here’s how Brian Stevens, Red Hat’s CTO, explained it: releasing RHEL 5.4 now gives customers who are interested in developing apps for RHEV a head start on that development. Having identical codebases in the two products makes that easy.

If this were just about Red Hat releasing a whole bunch of virtualization platforms in the near future, the story might end there, leaving us with the vaguely satisfied/stunned feeling one might have after seeing the latest Hollywood summer blockbuster.

But here’s the second part of the story: with some existing technologies and a little bit of new tech thrown in, Red Hat is hoping to help cloud customers easily migrate their virtual machines to any cloud–public, private, or anything in-between. I spoke with Stevens prior to his Wednesday keynote, and learned a little bit more about how this aspect of RHEL 5.4 and its upcoming descendants–which Red Hat is calling the “hybrid cloud”–will work.

Earlier, in the press conference, I asked a question about application development between the Xen and the KVM virtual platforms. The application layer is transparent, Stevens explained to me, so developing for one is no different than developing for the other. But, he added, there are still important differences between virtual machines, despite the advent of the Open Virtualization Format (OVF) standard that is supposed to remove compatibility obstacles.

While the OVF specification provides a format for packaging virtual machines and applications for deployment across different virtualization platforms, Stevens likens OVF to an overnight envelope sent to a recipient: each envelope from the service looks the same on the outside, but the contents inside can be far different. Stevens is skeptical that OVF offers a real solution for virtual image compatibility.

Since RHEL 5.4 includes both KVM and Xen, RHEL customers will run into this compatibility problem if they try to move images from one virtualization tool to the other. Red Hat’s approach to solving it makes use of the libguestfs tool, which is part of the hybrid cloud solution. libguestfs will make it possible to convert a guest OS to another flavor, such as KVM to ESX or Xen to KVM.

Moreover, this conversion can be done on disk, meaning you don’t have to boot the guest OS to make configuration changes. Conversion is done on the fly and, Stevens added, you can also cat and modify configuration files within the guest OS, again without running it.
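
To make that concrete, here is a minimal sketch (my own illustration, not Red Hat’s tooling) of how the libguestfs C API can read a configuration file out of a powered-off guest image. The image path, partition name, and file name are assumptions for the example.

```c
/* Sketch: inspect a guest's config file without booting it, using libguestfs.
 * The image path, partition, and file name below are illustrative assumptions.
 * Build (roughly): gcc read_guest_conf.c -o read_guest_conf -lguestfs */
#include <stdio.h>
#include <stdlib.h>
#include <guestfs.h>

int main(void)
{
    guestfs_h *g = guestfs_create();
    if (g == NULL) {
        fprintf(stderr, "failed to create libguestfs handle\n");
        return EXIT_FAILURE;
    }

    /* Attach the guest's disk image read-only; the guest OS is never started. */
    if (guestfs_add_drive_ro(g, "/var/lib/libvirt/images/guest.img") == -1)
        goto error;

    /* Launch the small libguestfs appliance (not the guest itself). */
    if (guestfs_launch(g) == -1)
        goto error;

    /* Mount the guest's root filesystem read-only (assumed to be /dev/sda1). */
    if (guestfs_mount_ro(g, "/dev/sda1", "/") == -1)
        goto error;

    /* Read a configuration file straight off the disk image. */
    char *fstab = guestfs_cat(g, "/etc/fstab");
    if (fstab == NULL)
        goto error;

    printf("%s", fstab);
    free(fstab);

    guestfs_close(g);
    return EXIT_SUCCESS;

error:
    guestfs_close(g);
    return EXIT_FAILURE;
}
```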

Another part of the hybrid cloud solution is a grid-level management tool known as MRG Grid, which Red Hat is developing in partnership with the University of Wisconsin-Madison through the Condor Project. In a video demonstration of the MRG (Messaging, Realtime, Grid) Grid tool, the physical resources of a cluster of machines are easily managed by setting the priority of any job using the cluster.

In the demo, four animated movies share resources for rendering (likely a nod to Red Hat’s marquee customer, DreamWorks Animation), but when one movie needs more rendering time, the Condor tool lets the manager dynamically set the percentage of resources it needs, automatically balancing the load with the other jobs. Upon applying the new priorities, the resources the rush job needs are immediately released from the other three movies, and the express job seamlessly rolls in to take over those resources until the task is completed.

This dynamic resource management will allow cloud users to allocate workloads on any set of resources to which they have access, even public clouds, if the access rights and configuration are already set up.

Another part of the hybrid cloud, Deltacloud, will make such sharing much easier. Currently, every cloud, public or private, has its own APIs and infrastructure: Amazon EC2, Rackspace, VMware, and so on. The same VM image that runs on one of these services cannot immediately run on another, because there are too many differences between the tools and options each cloud runs.

This is all part of the plan, Stevens indicated, since cloud providers are scrambling to find ways to lock customers into their cloud services. It’s the typical frontier-technology scenario: new territory opens up, and vendors make land grabs to try to keep customers for as long as they can. Stevens was quick to point out that while VMware announced its cloud initiative, vCloud Express, this week at VMworld, he has yet to find any code for it, nor any community of cloud developers. Others have pointed out that VMware’s promises of workload portability only work if both customer and service provider use vSphere.

Deltacloud will help avoid the prospect of vendor lock-in by creating an abstraction layer that lets cloud application developers build one app (or stack of apps) against the Deltacloud layer, which will in turn operate smoothly on whatever cloud service is running underneath, be it EC2, RHEV-M, or any other cloud API and infrastructure.
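
As a rough sketch of what “hitting the Deltacloud layer” might look like from client code, the fragment below asks a Deltacloud server for its list of instances over HTTP using libcurl. The server URL, port, credentials, and Accept header are assumptions for illustration; the point is that the client talks to the abstraction layer rather than to any one provider’s native API.

```c
/* Sketch: list running instances through a Deltacloud server's REST API.
 * The URL, port, credentials, and Accept header below are illustrative
 * assumptions, not a documented endpoint guarantee.
 * Build (roughly): gcc list_instances.c -o list_instances -lcurl */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *curl = curl_easy_init();
    if (curl == NULL) {
        fprintf(stderr, "failed to init libcurl\n");
        return 1;
    }

    /* Ask the Deltacloud abstraction layer, not EC2/Rackspace/... directly. */
    struct curl_slist *headers =
        curl_slist_append(NULL, "Accept: application/xml");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:3001/api/instances");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret"); /* provider creds */

    /* libcurl's default write callback prints the response to stdout. */
    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```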

All of these chapters of the hybrid cloud story mean that vendor lock-in can be avoided, since the open source Deltacloud will have code to download and a pre-existing community infrastructure in place.

More importantly, this may mark the first time that an open source initiative has taken an early lead in a major area of technology. In other areas (operating systems, middleware, SOA, and virtualization), open source has done very well, but only after proprietary technologies had started down the path. Now, by jumping in and using the hybrid cloud tools to remove the threat of cloud-vendor lock-in, Red Hat may have set the standards for all future cloud development. And, it’s nice to say, those standards will be truly open.

And if that’s not a happily ever after, I don’t know what is.

While most businesses don’t directly compete in the same space as such household names as Google, Facebook, or Twitter, the success of these companies’ services does compete with what business users expect from their own IT.

That was the opening point of today’s keynote speech from Red Hat CEO Jim Whitehurst at Red Hat Summit 2009 in Chicago.

These brand-name companies, which leverage community and the power of participation, “may not be competitors to you, but they are competitors to what your employees can and should expect from your IT,” Whitehurst told the joint session crowd attending the opening keynote of this year’s Summit, which is co-located with JBoss World.

The average CIO, he went on to explain, is being asked to deliver much more on tighter budgets and, in many industries, under more regulation than ever before. “20th Century IT can’t keep up,” Whitehurst said.

Whitehurst addressed today’s audience of approximately 1,500 attendees to lay out where Red Hat currently stands in the market and where it wants to go. The main thrust of the message? Collaboration and participation are key to satisfying customers’ current and future needs.

The power of community is well known in the Linux arena, but Whitehurst still delivered a telling example: in the entire year of 1998, the Human Genome Project mapped 200 million base pairs. After the project was opened to a wider community, the results increased dramatically: in January 2003 alone, 1.3 billion base pairs were mapped.

Red Hat is pushing collaboration not just for the sake of innovation, but for openness and interoperability as well. “Do you want to buy into Larry Ellison’s vision of IT or do you want to listen to your customers?” he asked the attendees.

For the Raleigh, NC-based company, Whitehurst defined the business model: “We don’t sell software. Our stated goal is to ruthlessly commoditize the areas we go into. It’s about delivering a better collaborative ecosystem.”

Naturally, Whitehurst sees Red Hat’s brand of business as a much more positive way of doing things. And he cited the collaborative model not only for what it brings to customers, but for what it doesn’t bring: software bloat.

“The problem with the typical [proprietary] license model… They make money selling licenses. What happens when you have sold to your install base? What do you do? You add features, which your customers may not want, so it becomes bloat. Bloat is a stumbling block for proprietary models,” Whitehurst said.

Red Hat subscribers, he added, can download just what they need, and aren’t forced to upgrade on Red Hat’s terms. Nor are they locked into just Red Hat’s solutions.

“We’re not saying buy the whole stack from us. We don’t want to prescribe what layers customers have to use,” Whitehurst said.

This is a prevailing theme for the conference, as speakers and execs from Red Hat are repeatedly emphasizing the ability of any customer to pick and choose whatever tools they want. This melting-pot strategy seems a good fit for today’s economy, where customers don’t have to grab an entire stack of tools to implement Red Hat products.

2009 is shaping up to be one of the worst years in modern memory for corporate travel funding, which is one reason I’m extremely grateful that registrations for LinuxCon have been good so far. Because of the tough economic climate, we wanted to make sure Linux users and developers all over the world could participate in LinuxCon without leaving their cube/office/RV/tent/etc. You can watch and participate in LinuxCon keynotes for free by registering here. Keynotes include a great line-up of Linux luminaries, including:

  • The Kernel Panel featuring Linus Torvalds and many leaders of the kernel development team. This is a rare chance to see these guys in action in front of a mic instead of behind a keyboard.
  • Bob Sutor, IBM’s VP of Linux, will take us through the cloud and how IBM is making the most of its investments in Linux.
  • openSUSE community manager Zonker takes us on a musical tour of the future of Linux. I can’t wait for this one.
  • Matt Asay hosts a stellar line-up debating the real costs of open source.
  • Intel’s Imad Sousou gives us the details of the exciting Moblin project.
  • And no Linux conference is complete without an appearance by Big Bird. Noah Broadwater, VP of Information Services at Sesame Workshop, talks about how they have used Linux to drive down costs while achieving more with their infrastructure.

For those of you who want to dive a little deeper, we have hand-picked conference sessions over the three days that you can attend for $99 (or $49 after the conference). You don’t actually have to be chained to your monitor: the system allows you to archive and pause the material to review at your leisure. Highlights include:

  • Keeping Open Source Open: Patents, trolls, and our friend in Redmond, with Jim Zemlin and Keith Bergelt from the Open Invention Network.
  • James Bottomley tells us how to contribute to the Linux kernel and why it makes economic sense.
  • John Ellis from Motorola on How to Manage Open Source Compliance and Governance in the Enterprise.
  • Kernel developer Chris Wright takes us through KSM, a mechanism for improving virtualization density with KVM.

There are many more sessions as well, so please register to attend if you’re interested.

The announcement earlier this month of the new PlayStation 3 (PS3) Slim model caused some consternation among Linux users, as it revealed that PS3-maker Sony would no longer support the “Install Other OS” feature that currently operates on existing PS3 machines.

I heard the news when I got back from vacation, and I was more than a little disappointed, since this is a big hit to low-cost supercomputing.

It’s easy to assume that the only users affected by Sony’s decision are the ever-present tinkerers who try (and typically succeed in) installing Linux on every new device that comes out. Hence Linux on the iPhone and the like. It’s a challenge that seems to range from ardent hobby to mild obsession.

In the case of the PS3, however, the benefits of Linux on the Cell BE-based device were immediate. In 2007, researchers at North Carolina State University clustered eight PS3 machines running Fedora Core 5 Linux (ppc64). That same year, a University of Massachusetts team found that putting together an eight-node PS3 cluster (for a cost of about US$4,000) would offer the same processing power as a 200-processor supercomputer.

Sony explained its decision on the PlayStation developer forum, in a post that has since been removed:

“The reasons are simple: The PS3 Slim is a major cost reduction involving many changes to hardware components in the PS3 design. In order to offer the OtherOS install, SCE would need to continue to maintain the OtherOS hypervisor drivers for any significant hardware changes–this costs SCE. One of our key objectives with the new model is to pass on cost savings to the consumer with a lower retail price. Unfortunately in this case the cost of OtherOS install did not fit with the wider objective to offer a lower cost PS3.”

Of course, I am sure there are other organizations that would be happy to step up and help maintain the hypervisor drivers, especially Fixstars, maker of Yellow Dog Linux and seller of pre-loaded Yellow Dog PS3 devices in single-, eight-, and 32-node clusters.

Unfortunately, the economics are far more complex. It’s no secret that Sony loses money on every PS3 it sells, counting on game sales to make up for the loss in revenue. Academic institutions using PS3s for clusters aren’t likely to buy copies of Batman: Arkham Asylum or tomorrow’s release of Guitar Hero 5.

So, even if someone were to step up and take care of the hypervisor support issue, there’s still the matter of making up for Sony’s lost revenue per PS3 unit.

It has been confirmed that Sony has no plans to remove the “Install Other OS” feature on current PS3 models, so there’s no immediate danger. But it’s only a matter of time before obsolescence creeps in, as existing PS3s fade out and PS3 Slims take over. Hopefully, all those tinkerers out there will figure out a way to run Linux on the new PS3s, so low-cost supercomputing can continue unabated.

Article Source Andy Updegrove’s Blog
August 31, 2009, 9:00 am

Steve Jobs is a genius of design and marketing, but his track record on striking the right balance between proprietary approaches and public resources (like open source and open standards) is more questionable. Two news items caught my eye today that illustrate the delicacy of making choices involving openness for the iPhone platform, both geopolitically and technically.

The first item can be found in today’s issue of the London Sunday Times, and the second appears at the MacNewsWorld.com Web site. The intersecting points of the two articles are the iPhone and, less obviously, openness. But the types of openness at issue in the two articles are at once both different, and strangely similar.

The Sunday Times piece recounts the (unsuccessful) efforts of Andre Torrez, the chief technology officer at Federated Media in San Francisco, to switch from the iPhone to an Android-based G1 handset, because he objects to the closed environment that the iPhone represents. But after just a week, Torrez reverts to the iPhone and its richer selection of apps.

The second article confirms the long-rumored news that Apple has found an iPhone distributor in China, perhaps one that may even have placed an initial order of 5 million units. That’s undeniably big news for Apple and its stockholders.

But what exactly will Chinese users (and their government) be able to do with their iPhones? That’s where the standards angle comes in, and the standards in question are the well-known WiFi standard and the lesser-known, home-grown Chinese WAPI standard (WAPI stands for “WLAN Authentication and Privacy Infrastructure”).

Read the rest of the story here