Article Source Linus Torvalds’s Blog
October 3, 2009, 2:10 pm

So Tove is off learning how to judge Tae-Kwon-Do competitions, and the kids are roaming the neighborhood like jackals (or maybe they’re upstairs reading a book. Who knows? I take a “hands off” approach to parenting).

So I’m stalking the kitchen looking for food, my trusted canine companion by my side. My prey hides quietly on a remote shelf, but I outsmart the cardboard packaging easily (along with the NASA-designed internal metallic pouch), and am soon ready to feast on the guts of some random Indian lentil stew.

And that’s when it hits me.

A quiet rattling emanates ominously from inside the nutritionally uninteresting outer shell as I’m about to discard it. I go on high alert, and ancient instincts immediately raise my adrenaline levels. What’s going on?

So I look inside, and in addition to the metallic pouch with the actual food, my meal has come with a CD full of (and I quote) “Authentic Indian Cuisine”. No, wait! Underneath that it says “Indian Classical Duets”.

Which brings me to today’s title: “WTF?” Have I been leading an unusually sheltered life, and this is actually normal? What’s next? Happy Meals that come with Beyoncé CDs?

Now I’m intrigued, and considering going through our other Indian ready-made meals. Was this a one-off? Or had I just not noticed before, and do all those $2.99 pre-made Indian meal pouches come with these odd musical accompaniments?

The fight against software patents got a big boost yesterday when one Linux company filed an amicus brief with the US Supreme Court urging it to uphold last year’s Bilski vs. Doll decision in the Federal appeals court.

That vendor was Red Hat, which really comes as no surprise. With the veiled threats of software patent enforcement coming from Redmond, and the not-so-veiled threat of any one of a number of patent trolls getting ready to use a purchased patent to pursue litigation, Red Hat would get a huge benefit from the elimination of software patents. Of course, so would everyone else writing code today.

Here’s what’s happening: in 2008, the Federal appeals court upheld a Patent and Trademark Office (PTO) ruling that Bernard Bilski’s method of hedging weather-related risk through commodities trading was not patent-eligible under Section 101 of the Patent Act.

The Court devised a test to determine whether claimed ideas like this are actually patentable: the idea has to either be tied to a particular machine or apparatus, or transform something into a different state or thing. The Court stated in its ruling that this should be the only applicable test.

And that’s the key thing: if the Supreme Court, which is scheduled to hear oral arguments in the Bilski case on Nov. 9, upholds this ruling against Bilski, the clause “tied to a particular machine or apparatus” will have enormous implications for software patents, because it means that abstract ideas, particularly the algorithms and processes found in software code, are not patentable. This would put all of the PTO’s approvals of software patents into question and very likely decimate the current notion of software patents.

Red Hat’s amicus (“friend of the court”) brief is consistent–they filed a similar brief about how software patents adversely affect the software development industry when the Bilski case was in the Appeals Court. Yesterday’s brief also “asked the Supreme Court to adopt the machine-or-transformation test set forth in the Bilski case and to make clear that it excludes software from patentability,” according to Red Hat CEO Jim Whitehurst in a Thursday blog posting.

If you’re wondering how one court decision can make such a huge difference, recall that it was decisions in the 90s that changed the definition of patentability back then, which led to the current mess we’re in.

“Software patents now number in the hundreds of thousands, and they cover abstract technology in vague and difficult-to-interpret terms. Because software products may involve thousands of patentable components, developers face the risk of having to defend weak-but-costly patent infringement lawsuits. A new class of business enterprise–patent trolls–has developed to file lawsuits to exploit this system,” Red Hat noted in a press release.

Patent litigation is a big source of pain for software development companies, which often have to hold patent portfolios of their own just as a defensive measure to keep from getting sued. Patent litigation is particularly expensive, too, according to Keith Bergelt, CEO of the Open Invention Network (OIN).

In a talk at LinuxCon last week, Bergelt stated: “The unfortunate thing about patent litigation is that it often falls within a seam. Companies have to often hire outside counsel, which is very expensive.”

To illustrate the danger of patents, Bergelt’s OIN, a consortium of six companies (IBM, NEC, Novell, Philips, Red Hat, and Sony), was recently instrumental in the purchase of 22 allegedly Linux-related patents being sold in a private auction by Microsoft. Allied Security Trust I, another defensive patent pool, bought the patents from Microsoft. OIN has a patent treaty with AST I, and was able to pick up the patents.

Bergelt explained that AST I often does not hold patents like this for any great length of time if AST I sees no danger in them.

In his talk, Bergelt maintained that Microsoft advertised these patents as being Linux-related. Microsoft has been moving lately towards getting more revenue from its existing patent portfolio, Bergelt explained, and if such a patent sale just happened to land in the hands of a non-practicing entity, so much the better for Redmond. The fact that Microsoft sold the patents to AST I rather than OIN convinces Bergelt that Microsoft had an ulterior motive to sow FUD in the community, even while Microsoft would have plausible deniability in selling to a known patent defense fund like AST I.

With all of the complexity of patents and the litigation and business that surrounds them, trying to code without trepidation in such an insane environment is, well, insane. And while I applaud the efforts of OIN, I agree with Bergelt’s other statement in his presentation: “Reform is key.”

Let’s keep our fingers crossed that Red Hat’s efforts and all the others involved in Bilski will convince the Supreme Court justices to uphold Bilski, so real reform can begin.

Article Source Andy Updegrove’s Blog
September 30, 2009, 10:38 am

Two weeks ago, I wrote an analysis of the governance structure of the CodePlex Foundation, a new open source-focused foundation launched by Microsoft. My opinion, as expressed in that piece, was that significant changes (which I outlined) would need to be made to the Foundation before it would be taken seriously by Microsoft’s competitors, and more especially, by individual open source developers.

But what about the business premise for the Foundation itself? Let’s say that CodePlex does restructure in such a way as to create a trusted, safe place for work to be done to support the open source software development model. Is there a need for such an organization, and if so, what needs could it help meet?

As with my last piece, I’ll use the Q&A approach to make my points.

Read the Rest Here

One thing about a web site is that it’s always being renovated. Users make suggestions, developers find ways to improve functionality; it’s a process.

As part of that process, the forums have gotten a nice makeover this week. While many of you liked the functionality, there were also a lot of notes about the width of the interface. That’s why the first thing you’ll notice is the increased size of the forums. We knocked down some virtual walls and gave the system more room to spread out.

Beyond aesthetics, the new layout will allow visitors to view more topics at once, and scan more information in a thread.

There are also more threads to peruse. We’ve added a new Linux System Administrator section, with topic areas on Cloud Management, Linux Security, and Network Management, to name a few. We really want to encourage sysadmins to use these forums to share information about using Linux on the larger scale.

When I was at LinuxCon last week, one of the regular readers asked about the forums and what their purpose on the site was, since we have Answers and Groups. The answer is something I learned way back in my days at Sams Publishing: people approach learning in different ways. That’s why some people like big, texty tomes to learn about computers, while others prefer step-by-step visual guides.

People differ, too, on how they approach community. That’s why a good city/town planner will provide different forms of recreation areas for citizens: football fields, tennis, swimming, forested trails… if all rec areas were the same, it would be boring.

And that’s the reason for all of these opportunities to share knowledge and information. Some Linux users prefer the forum setting, while others just want to get in, ask a question, and get out.

Now that the forums have been renovated, we’re looking for volunteers to act as moderators. If there’s a topic area you’d like to help moderate, drop us a line and we’ll get you started.

Article Source Community-cation
September 25, 2009, 11:04 am

The first LinuxCon may be over, but the knowledge and community shared at the conference will be around for a long time.

Expert sessions, informative keynotes, and multiple opportunities to kick back and socialize with Linux consumers of all stripes–these marked the flavor for LinuxCon. Attendees appreciated the balance of learning and information they got from the sessions–feedback from those I (unscientifically) surveyed was overwhelmingly positive, and I was asking for all comments, not just the good.

Some of the highlights of the conference include:

  • Jim Zemlin’s opening keynote, which showcased the important numbers that surround the Linux ecosystem, such as 2,700,000, the number of lines of code added to the kernel in the last year according to the recently updated “Who Writes Linux” paper from the Linux Foundation; 10,923, the number of lines of code added to the Linux kernel every day; and 5,547, the number of lines deleted every day.
  • The revelation of the Fake Linus Torvalds’ identities: Dan Lyons, the ghost behind FakeSteveJobs and currently a Newsweek reporter; Matt Asay, CNET open source blogger and VP of business development at Alfresco; Joe “Zonker” Brockmeier, former reporter and currently community manager for openSUSE; and Jono Bacon, community manager for Ubuntu. Followers of the FLT tweets voted Matt Asay as the most popular impostor.
  • There are three big areas of opportunity for Linux in the near future: cloud computing, the mainframe, and Linux’s future on the desktop. That was the main message of Monday’s keynote from IBM’s Dr. Robert Sutor.
  • The hugely popular Linux kernel roundtable, an all-star line-up of Linux kernel developers who gave their take on what’s right–and what’s wrong–with the Linux kernel today. The panelists, Jon Corbet, Chris Wright of Red Hat, IBM’s Ted Ts’o, Novell’s Greg Kroah-Hartman, and Linus Torvalds, founder of the Linux kernel, manned the stage to answer questions from panel moderator James Bottomley of Novell as well as many questions from the audience.
  • It wasn’t all work: the well-attended Linux Foundation bowling party at Grand Central Bowling raised money for Defenders of Wildlife. The event was a big success, with teams comprised of friends old and new who banded together for a common cause. A lot like open source, come to think of it. Attendees raised $3,000 for the Defenders of Wildlife charity. Check out the video highlights.
  • Brockmeier tapped into his pro DJ expertise to entertain and inform the audience about how Linux can be perceived through the lens of rock and roll. Best comparison? Debian as The Velvet Underground.
  • Addressing the LinuxCon attendees in his Wednesday keynote on “The Freedom to Collaborate,” HP Open Source & Linux Chief Technologist Bdale Garbee announced the launch of a new HP-sponsored web portal for supporting non-commercial Linux distributions and described the value of collaboration for businesses who use open collaboration.
  • Speaking before a combined session of LinuxCon and the co-located Linux Plumber’s Conference, Canonical founder Mark Shuttleworth drilled home the concepts of cadence, quality, and design in the Linux development ecosystem, particularly cadence.

These events are just the tip of the iceberg for all of the great sessions hosted at LinuxCon. Alfresco’s Matt Asay led a stellar panel debating the real costs of open source; Intel’s Dirk Hohndel provided details of the exciting Moblin project; and Noah Broadwater, VP of Information Services at Sesame Workshop, gave great details about how Sesame deployed SUSE Linux.

Attendees at LinuxCon were the first to hear news about a new Moblin-based netbook coming to the market. On Wednesday that news was confirmed: at the Intel Developer Forum in San Francisco, Dell, Canonical, and Intel announced the availability of a Moblin v2-based netbook model, the Dell Inspiron Mini 10v. The 10v will run Canonical’s Moblin Netbook Remix and went on sale September 24.

There were quite a few giveaways, too. Qualcomm gave away Android-based phones to some lucky folks and Novell had a drawing for Chumby devices.

Words simply fail me when it comes to the outrageous Jeremy Allison as Steve Ballmer, host of the Linux Foundation Quiz Show. As always, he knows how to make this event one of the most popular spectacles in the entire Linux community.

And the whole thing was capped off by a tremendous end-of-show reception sponsored by Intel at McCormick & Schmick’s. Outstanding food in a gorgeous setting near the event site.

There’s good news, too, for those who could not attend LinuxCon: You can register to view many conference sessions for $49. You can pause and replay the material to review at your leisure. Highlights include:

  • Keeping Open Source Open–a look at patents, trolls and our friend in Redmond with Zemlin and Keith Bergelt from the Open Invention Network.
  • Novell’s James Bottomley explained how to contribute to the Linux kernel and why it makes economic sense.
  • John Ellis from Motorola on How to Manage Open Source Compliance and Governance in the Enterprise.
  • Kernel developer Chris Wright from Red Hat examined KSM: A mechanism for improving virtualization density with KVM.

The wealth of knowledge gained by LinuxCon attendees will greatly benefit the community-at-large, because there are great new technologies right around the bend, and attendees have a great edge on capitalizing on that future.

Article Source Community-cation
September 24, 2009, 11:28 am

Cadence, quality, and design were the core themes of Canonical founder Mark Shuttleworth’s closing keynote talk at LinuxCon.

Speaking before a combined session of LinuxCon and the co-located Linux Plumber’s Conference, Shuttleworth drilled home the importance of these concepts in the Linux development ecosystem, particularly cadence.

Shuttleworth has long maintained that if free and open source software projects can begin to sync their development cycles with each other, then both upstream and downstream developers (and, ultimately, users) will benefit. This is a large part of the strategy behind Canonical’s strict six-month release cycle for the Ubuntu distribution and the 18-month Ubuntu Long Term Support (LTS) cycles.

It won’t be easy, he told the crowd, but already quite a few projects are seeing the value of cadence (Shuttleworth cited recent moves in the KDE Project). Shuttleworth emphasized, as he has in the past, that it doesn’t matter what pattern of cadence projects take, just so long as that pattern is predictable.

Quality is another core component of how development projects can improve. Shuttleworth described how Canonical continually applies bug tracking data to improve Ubuntu. This seemed to strike a chord in attendees–several of the post-talk questions dealt with a perceived lack of response from the Ubuntu bug reporting system. Shuttleworth replied that even though bug fixes weren’t going to be immediate, the more people that report a given bug, the higher the priority that bug would gain.

On design, Shuttleworth emphasized how important user testing of interface and function can be. Canonical uses daily testing for Ubuntu and other open source projects–information that is fed directly back to the developer (sometimes with the developer in the room when testing occurs). “Developers always learn a lot from these tests,” he said.

This was a strong day for Canonical, and it showed in Shuttleworth’s delivery. Earlier, Dell and Intel made a joint announcement with Canonical at the Intel Developer Forum about the new Dell Inspiron Mini 10v netbook, which will run Canonical’s Moblin Netbook Remix.

On the same day, IBM and Canonical introduced “a new, flexible personal computing software package for netbooks and other thin-client devices to help businesses in Africa bridge the digital divide by leapfrogging traditional PCs and proprietary software,” according to a press release.

Part of IBM’s Smart Work Initiative, the new package targets the rising popularity of low-cost netbooks to make IBM’s industrial-strength software affordable to new, mass audiences in Africa. This program appears to be the first major deployment of the Microsoft-Free PC technology both companies announced in December 2008.

The first day of LinuxCon proved to be a big hit amongst many attendees, not just for the quality of content but for the extracurricular activities the event has provided.

Last night, the main event was the well-attended Linux Foundation bowling party across the river at Grand Central Bowling, which was designed to raise money for Defenders of Wildlife. The event was a big success, with teams comprised of friends old and new who banded together for a common cause. A lot like open source, come to think of it.

And the nice thing about it was that while there was fun to be had, attendees also raised $3,000 for Defenders of Wildlife. Congratulations, then, to all involved.

After a night of fun like that, and with all the work that some LinuxCon goers still can’t seem to tear themselves away from, it’s nice to have a chance to relax. The venue here is pretty nice, with lots of alcoves and niches in the Portland Waterfront Marriott to sit down and converse.

For more intensive relaxation, there are the yoga classes held daily at the show at 7:30 a.m. There have been some early risers who have made these sessions, and I hear the classes really help reset mind and body.

Another service is the holistic day spa, offered by The Dragontree, a Portland-based business that approaches spa and massage treatments not just as a way to pamper clientele but also to therapeutically restore health.

So, while there are excellent massage services, there are also skin care consultation and other holistic care options here at the show, according to The Dragontree’s Heather Wade.

This may not sound like much, but traveling alone can bring a lot of stress to anyone, so we’re happy to offer the little things to help make everyone’s visit to LinuxCon as fun as possible.

Just after the popular kernel panel roundtable at LinuxCon today, Linux Foundation Executive Director Jim Zemlin took the stage and announced the time had come to reveal the identities of the Fake Linus Torvalds who had inundated the Linux Foundation’s Twitter feed in recent weeks with humorous, sometimes startling, comments.
Article Source Community-cation
September 18, 2009, 7:19 am

Last night, I had the distinct pleasure of speaking at the monthly Fort Wayne, IN LUG meeting. Apparently, they thought I knew something about Linux. It did not take me long to prove them wrong.

Just kidding. In all honesty, it did go well, and was more of a long discussion than a full-blown speech, since I am big on interaction. The topic of the discussion was “Linux Everywhere You Want It,” the idea being that with Linux pretty much everywhere in platform-space today, the traditional desktop paradigm is evolving from the sit-on-the-PC-with-files-and-folders model to an anything-that-can-provide-a-good-window model.

The big drivers for this shift, in my opinion, are two-fold: first, the flexibility and stability of Linux allows it to be ported to all of these different platforms, which forms a great basis for the growth of this new model. The second, which is no big surprise, is the cloud. After all, the platform-as-a-window model only works if there’s something good to look at. If my bay window looks out over an alley rather than a river, you can see how it wouldn’t have much appeal.

This is, for me, pretty standard stuff, and for these LUG members as well. We talked about the benefits of the cloud: for me, the former configuration manager, getting more apps from the cloud means less support and maintenance issues on the client machine; for one LUG member, the ease of seamless upgrades to reduce costs.

Of course, there are cons to the cloud right now, things that will need to be worked out, the most basic need being maintaining availability. When Gmail went down last week, it caught many people (myself included) unprepared on what to do when we couldn’t access our messages or file attachments. But another con was raised that I had not considered before, and I think it deserves a little more examination.

A FWLUG member raised this objection to the cloud: as a developer, it concerns him that the cloud will eventually leave the traditional desktop behind in terms of innovation. Developers, he maintained, have a much easier time mashing up local applications and code than apps out on a cloud somewhere. Will innovation disappear if more apps are out on the cloud?

I think there are a couple of reasons why this won’t happen. For one, I don’t believe the desktop model we have now is going to vanish completely. There will always be a market for those users who want local apps (be it for control, development, security, what have you). The cloud is poised to capture those users who really don’t know or care what an operating system is.

The irony is, this ignorance of operating systems is one of the things that Microsoft counts on when the average consumer goes to a big-box retailer and wants a new computer. They buy the computer that will run the apps they have already or the apps they think they need. They aren’t thinking “I need to stick with Microsoft as a loyal customer,” they just want what works.

But if the cloud provides a platform of apps that will be compatible with their old apps and files (and that time is coming soon), then the argument “don’t buy that, it won’t run your apps” will fail, and suddenly price and value become much bigger considerations–an area where Linux-loaded platforms can easily compete.

For those users who do know more about what’s going on with an operating system and their computers, there will be a desire for the tools and flexibility of a local OS, and Linux can fill that niche handily.

That was the answer I gave last night, but after a night’s sleep, it occurred to me the audience member who originally raised the point could have his concerns realized: if the cloud gets really popular and we find ourselves in a world where client machines turn into dumber and dumber boxes, then innovation and development might get pinched, because there would be less of a place to “play” for current desktop and application developers.

The good news is, even if the market for the traditional desktop grows so small that platforms won’t continue to provide it, I believe open source will ultimately save the day. Open source gives cloud app providers the opportunity to give developers a chance to innovate out on the cloud. In fact, they will want more developers to participate, since it will raise the chances for innovation.

This will be the real tipping point for the cloud-as-platform: when apps are not only hosted, but created en masse on the cloud. It’s coming: Alfresco just announced yesterday a dev program on Amazon EC2 services, and SpringSource acquired Cloud Foundry back in August to enhance their cloud development options for customers.

The cloud shouldn’t kill development and innovation, but innovation may have to move to a new address as applications increasingly move from clients to the cloud.

Article Source Linux Weather Forecast Blog
September 17, 2009, 10:37 am

So I am hitting the road next week. It should be no surprise that LinuxCon and the Linux Plumbers Conference are coming up. I have a talk (the well-travelled Kernel Report) and the kernel developers’ panel, both on Monday; I fully expect to be tired by the end. There’s a lot of other interesting stuff happening at LinuxCon, which is being held for the first time ever. I’m looking forward to seeing how it comes out.

The Linux Plumbers Conference is happening for the second time. The first was a great success – by most accounts, one of the very best technical events of the year. Quite a bit of serious work got done there, with effects being felt throughout the development community. This year – which has just sold out – looks to be just as good.

Unfortunately, I’ll be skipping out of Plumbers early. Fortunately, it’s for an event that I’m looking forward to at least as much: the Realtime Linux Workshop in Dresden, and the Realtime mini-summit which precedes it. These, too, will be events which strongly influence future Linux development.

“Realtime,” of course, means that the system can be counted on to respond to external events within a known, bounded time. When one thinks of realtime applications, it’s natural to envision serious data acquisition and process control applications. These applications do exist, but realtime Linux is applicable to a far wider field than that. Embedded devices, especially gadgets like telephones, can make good use of realtime response, for example.

Perhaps the area receiving the most commercial attention, though, is in the financial trading realm. If you’re letting your computers make decisions that move millions of dollars around based on immediate market conditions, you really want to know that the computer will get the job done in the required time. The alternative can be a lot of lost money. When you realize that much of that high-speed trading is done on Linux systems, you come to see why realtime Linux is so interesting to financial businesses. My (very much outsider) understanding is that many of the customers of the realtime distributions are coming from this sector.

Getting true realtime response out of a general-purpose operating system has long been held to be impossible. So realtime has traditionally been done on small, special-purpose kernels which have been heavily audited to ensure that no unwanted latencies will sneak in. Linux kernel developers have never been much interested in being told something is impossible, though. So, a few years back, a group of them set out to create a version of the Linux kernel which could be counted on to provide realtime response.

This work was done by making (almost) everything in the kernel preemptible. When any part of the system can be pushed out of the way to deal with an important event, it is much harder for specific parts of the kernel to create unexpected latencies. This approach (called “realtime preemption”) greatly reduces the amount of code auditing which must be done.

Realtime Linux works well, at least on hardware which is, itself, properly responsive. It has been packaged into enterprise-level realtime offerings by a number of distributors and appears to be a big business. But there is one interesting note to all of this: much of the realtime code is outside of the mainline kernel. This is a violation of the “upstream first” rule which is normally followed quite strictly by most distributors.

There is a good reason for this: the realtime work is a big set of changes to the kernel. It could never have been just dumped into the kernel in the course of a normal three-month development cycle without destabilizing things badly. So the realtime code has stayed outside of the mainline while it evolved and stabilized; pieces of it have been merged over the years as they proved ready. Many features of the current kernel – which improve things for all users – have their origins in the realtime tree.

At this point, though, the realtime developers think that they are getting close to done, at least as close to “done” as anything ever gets in the constantly-evolving Linux kernel. So it’s time for most of the rest of this code to move out of the realtime tree and into the mainline. One of the primary goals of the mini-summit in Dresden is to put together a strategy for making this happen. If this process works, we should see the addition of this major functionality mostly complete sometime in 2010.

The main realtime conference, instead, is dedicated to the state of the art. Yes, I’ll be doing the Kernel Report talk again, though I think I’ll take out the realtime slides since Thomas Gleixner will have provided a much better update just before me. Then we’ll get to hear what people are doing with realtime Linux, how things can be improved, and what the developers are planning.

It looks to be a fascinating, exhausting experience. Years of work are coming to their culmination now, and all Linux users will benefit from it.