Jon Corbet is a highly recognized contributor to the Linux kernel community. He is a developer and the executive editor of Linux Weekly News (LWN). He is also The Linux Foundation’s chief meteorologist, a role in which he translates kernel-level milestones into an industry outlook for Linux. Corbet has also written extensively on how to work within the Linux kernel development community and has moderated a variety of panels on the topic. Today, he gives us an update on the Linux “weather forecast,” why sharing your code upstream is critical, and the state of virtualization in the kernel.

You’ve been the “chief meteorologist” for the Linux Weather Forecast for a while now. What’s the general forecast for Linux today?

Corbet: Bright and sunny with occasional thunderstorms.

That’s a broad question; I’ll restrict myself to the kernel level where I can at least pretend to know what I’m talking about. Things are going quite well, for the most part. The development process is humming along with very few glitches, and we have more developers involved with every release. Things look very healthy.

The 2.6.34 kernel will hit the net sometime this month; it seems to be stabilizing well. It’s full of improvements to our power management and tracing infrastructures, both of which have been slower to mature than we might have liked over the last few years. There are two new filesystems: LogFS is aimed at solid-state devices, while Ceph is meant for high-reliability network-distributed storage. Things have been improved across the board; it should be a good release.

You’ve also written a lot about how to participate in the Linux development community and have moderated a number of panels on the topic. What is the most common question you get and how do you address it?

Corbet: The most common question I get must certainly be: “How do I get started working on the kernel?” It is true that getting into the community can be an intimidating prospect: the code is large and complex; the mailing list gets 400 messages per day; and the community does not have a reputation for always being overly friendly to those who are just beginning to feel their way around.

That said, it’s not as bad as it seems. Most community discussions are quite polite and professional these days, and people do try to guide newcomers in the right direction. The quality of the kernel’s code base has increased over the years; it has also always been a highly modular system, so it’s easy to just focus on one little piece. And the documentation for newcomers has gone from nonexistent to significant.

Aspiring kernel developers still often wonder where to get started. Too many of them try to make their entrance with things like coding style patches or spelling fixes. That’s understandable; it must be tempting to start learning the ropes with a patch, which, with any luck at all, cannot break anything.  But that kind of work is not hugely helpful, to either the developer or the kernel as a whole. That’s why I tend to pass on Andrew Morton’s advice: the first thing any kernel developer should do is make the kernel work flawlessly on every system they have access to. Fixing bugs is a far more valuable exercise and a great way to start building a reputation within the community.

The other thing I like to suggest is reviewing patches. That can be hard for a newcomer, who may well feel out of place asking questions about code posted by established developers. But code review is always in short supply, and there’s no better way to learn than to look at a lot of code and figure out how it works. Review comments which are expressed as questions (rather than as criticism) will always get a polite reply – and often thanks.

With an increase in the number of companies using Linux, especially in the mobile/embedded space, is it changing the dynamic of the Linux development process? If so, how?

Corbet: There have been many companies using Linux for years, and most kernel developers have been employed to do that work for just as long; so, in one way, things haven’t changed much. We just have some new companies showing up, and they are all welcome.

That said, the embedded world is different. Embedded developers are bringing some good things, including a stronger focus on power efficiency and code size and support for a wide variety of new hardware. On the other hand, embedded developers often work in an environment of strict secrecy and tight deadlines that can make it very hard for them to work effectively with the community.  We are still working on ways to help these developers get their code upstream. Progress has been made, but the problem is not fully solved by any means.

Can you tell us a little about the virtualization work happening at the kernel level and what still needs to be done?

Corbet: I’ve been saying for a while that, at the kernel level, the virtualization problem has been solved. We have three virtualization mechanisms in the kernel (Lguest, KVM, and Xen), two of which are being widely used commercially.

The large number of developers working on Linux virtualization would be amused to hear me say that there’s nothing left for them to do, though. Most of the activity at this point is in the performance area. The intersection of memory management and virtualization appears to be especially tricky, so there is a lot of work being done to make it function more efficiently.
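
For readers who want to poke at one of those mechanisms directly, here is a minimal user-space sketch (ours, not Corbet’s) that opens /dev/kvm and asks the kernel for its KVM API version. It assumes a kernel built with KVM support and read permission on /dev/kvm.

    /* Minimal sketch: query the in-kernel KVM hypervisor's API version.
     * Assumes CONFIG_KVM is enabled and /dev/kvm is readable. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDONLY);
        if (kvm < 0) {
            perror("open /dev/kvm");  /* module not loaded, or no permission */
            return 1;
        }

        /* KVM_GET_API_VERSION takes no argument; it has returned 12
         * for many years, marking a stable user-space interface. */
        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        close(kvm);
        return 0;
    }

A successful run is a quick way to confirm that the in-kernel hypervisor is present and ready for tools like QEMU to build on.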

Some people question the importance of “mainlining” their code in the Linux kernel. Can you talk about the benefits and the payoff of getting your code accepted?

Corbet: Well, that’s a long list. Some of the things I routinely tell people include:

* Code that is in the mainline kernel is invariably better code. There has never been a code submission to come out from behind a corporate firewall – or even from a community project site – which did not need major improvements. Those improvements will happen, either as part of the merging process or afterward. Anybody who cares about the quality of their code will want to get it into the kernel, where it can be reviewed and improved.

* The maintenance of out-of-tree code can be an exercise in pain; there is quite a bit of work required just to keep up with mainline kernel changes. Once the code is merged, that work simply vanishes. When a kernel API changes, all in-tree users are fixed as part of the change; out-of-tree code is out of luck (there is a small illustrative sketch after this list). Merging code into the mainline allows the contributor to forget about much of the maintenance work, freeing them to go build new things.

* Code in the mainline kernel gets to users automatically; they do not have to go digging through other repositories to find it. Distributors will enable it. That makes life easier for your users, and they will appreciate it.

* That’s simply how our community works.  We would not have the kernel we have now if all those contributors did not do the work to get their changes upstream.

* It should also be noted that the contribution of code is the best way to influence the direction of the kernel going into the future. The kernel belongs, literally, to all who have contributed to it; each contributor has helped to build a kernel that meets their needs a little better. A code contribution is like a vote that helps to decide what kernel we’ll be running tomorrow.
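
To make the out-of-tree maintenance point concrete, here is a small sketch of the compatibility shims out-of-tree modules accumulate. foo_register(), struct foo_dev, and FOO_DEFAULT_FLAGS are hypothetical; LINUX_VERSION_CODE and KERNEL_VERSION are the real macros from <linux/version.h>.

    /* Hypothetical example of the out-of-tree shim problem: suppose a
     * kernel API grew a new flags argument in 2.6.30. Out-of-tree code
     * must carry a ladder like this for every API change it outlives. */
    #include <linux/version.h>

    struct foo_dev;  /* hypothetical device type */

    #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 30)
    extern int foo_register(struct foo_dev *dev);       /* old, made-up API */
    static inline int foo_register_compat(struct foo_dev *dev)
    {
            return foo_register(dev);
    }
    #else
    #define FOO_DEFAULT_FLAGS 0                         /* made-up flag set */
    extern int foo_register(struct foo_dev *dev, unsigned long flags);
    static inline int foo_register_compat(struct foo_dev *dev)
    {
            return foo_register(dev, FOO_DEFAULT_FLAGS);
    }
    #endif

In-tree code never needs this ladder: whoever changes the API fixes every in-tree caller in the same patch.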

Time to Get Out the Vote!

Two months ago we invited the community to create its own unique designs to appear on Linux.com Store merchandise – designs that would evoke feelings of geek pride, freedom, fun, eccentricity and originality.

Since then, we’ve received more than 100 submissions and today, after some grueling decision-making, we are announcing the T-shirt design contest finalists! 

The 100+ designs we received proved that the best ideas come from the community. We also know that the community knows best, so we’re asking you to vote for the very best design. We have six finalists, not five as we originally said we would have. Like I said: it was hard to choose!

The community favorite will win a trip to Boston to attend LinuxCon as well as the fame and fortune garnered by having their design displayed on Linux.com Store merchandise worn around the globe.

We think the finalists represent a diverse selection of creative concepts that touch on geek pride, freedom, fun and originality. The top six finalists include:

* The Colors of Linux

* I Am Root

* The People’s Product

* Linux is Great

* There’s No Place Like /home

* Two Linux Penguins Sharing an Iceberg

See the designs and begin voting for your favorite(s) here: http://www.linux.com/tshirt-design-contest/.

Voting will close at midnight on June 6, 2010, after which we’ll tally the votes and announce the winner prior to LinuxCon.

Your vote will determine which design is available on merchandise in the Linux.com Store and for purchase at LinuxCon. So, get your personal and professional networks behind your favorite design(s) so yours doesn’t get swept under the rug!

Last week, we had our fourth annual Linux Foundation Collaboration Summit. In the years since our first one at Google in 2007, quite a bit has changed: more mobile content, a bigger audience and a broader collection of developers, industry people and users solving real technical and legal challenges facing the platform. (We also all got free Linux phones.) Another big change? For the first time we offered live streaming of day one to anyone who registered. The streaming by all accounts went well, and we look forward to offering this service for LinuxCon and future Linux Foundation events.

But in case you missed it live, now you can catch up. We have published all the keynotes from day one here: http://video.linuxfoundation.org/categories/conferences-symposiums/2010-collaboration-summit

This year a theme emerged: how companies work within the Linux community. My highlights:

Days two and three were also filled with great technical and legal discussions and collaboration. Sadly, no video. Next year you’ll just have to attend. We have put together many of the slides presented at the conference, though: http://events.linuxfoundation.org/events/collaboration-summit/slides

I hope you enjoy these videos and would like to thank the technical and event crew who pulled this off with limited resources. Please let me know if you have feedback. Keep an eye on our LinuxCon announcements. The early bird discount expires soon, and I hope to see everyone there for what is sure to be a great time.

I am pleased to announce the winners of the 2010 We’re Linux Video Contest. We had quite a few amazing videos to choose from, many of which captured the spirit of Linux.

Winners

First place: Go Linux

http://video.linuxfoundation.org/video/1696

Second place: Create Something Unique

http://video.linuxfoundation.org/video/1683

Third place: Linux: Free Your Computer

http://video.linuxfoundation.org/video/1687

I really enjoyed the winning video because it does something important in the world of marketing: frame the capabilities of your product within a broader movement. In technology, features are not enough. People want a cause. Linux historically has had a meaningful cause behind it — freedom — which is captured well in the two runners-up. The winning video is a practical and effective message about one unique element of Linux that is often overlooked. Sustainability should be more than a buzzword; it should be a way of life. In today’s world, where we are facing a mountain of machines orphaned by Vista and Windows 7, people should realize that they don’t need to pollute a landfill with a perfectly good computer. Not only is Linux free (as in beer and source), it’s also a sustainable choice for extending the life of electronics of all kinds.

I’d like to congratulate our winners and all of those who entered the contest and shared their vision of Linux. I also want to thank our fabulous panel of judges who picked the winners.

• Andrew Morton, lead Linux kernel maintainer;
• Stephen O’Grady, co-founder, RedMonk;
• Stormy Peters, executive director, GNOME Foundation;
• Brandon Phillips, Linux kernel developer, Novell;
• Bob Sutor, VP, Open Source and Linux, IBM Software Group; and
• Steven Vaughan-Nichols, journalist, ComputerWorld.

We had a great meeting this morning at the LF Collaboration Summit with the Linux.com Gurus and community members. Here’s a fun picture that includes (from left to right): Masen Marshall, Andrea Benini, David Ames (LF staff) and Matthew Fillpot.

There was interesting discussion during Day 1 at the Linux Foundation Collaboration Summit about virtualization and cloud computing as it relates to Linux. Former Red Hat executive and rPath Founder and CTO Erik Troan took a few minutes to share his perspective on how Linux is supporting virtual computing environments and cloud computing initiatives.

rPath recently joined The Linux Foundation. Can you tell us about how you’re using Linux today to advance your business?

Troan: Linux has always been an integral part of rPath’s focus as a company. We provide automation solutions to help deploy and manage large numbers of Linux systems, and we’re being used on deployments in the range of 15,000 to 20,000 Linux servers. The success Linux has had in scale-out infrastructures has created new management costs that we work to reduce.

You work with system administrators every day. How are they using Linux to support new cloud computing initiatives at their companies?

Troan: Linux is very popular in cloud initiatives. It functions very well in a headless environment, and the licensing model means that customers don’t have to count how many machines are running. Commercial licenses can be very hard to use in a dynamic cloud environment. If you have twenty machines running for three days, five hundred for twenty minutes, and then you have ten running for the rest of the month, how many Windows licenses do you need? How do you count those to make sure you stay within your license bounds? These questions are hard for commercial vendors to answer, and “just use Linux” has become a very simple and fast way to get cloud projects up and running.
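
Troan’s counting puzzle is easy to make concrete. Here is a quick back-of-the-envelope sketch (ours, not his, assuming a 30-day month) showing why per-machine licensing fits a dynamic cloud so poorly:

    /* Machine-hours vs. peak concurrency for Troan's cloud scenario. */
    #include <stdio.h>

    int main(void)
    {
        double hours = 0.0;
        hours += 20 * 72.0;           /* twenty machines for three days        */
        hours += 500 * (20.0 / 60.0); /* five hundred machines for 20 minutes  */
        hours += 10 * 27 * 24.0;      /* ten machines for the remaining ~27 days */

        double avg = hours / (30 * 24.0);  /* average concurrent machines */
        printf("total machine-hours: %.0f\n", hours);
        printf("average concurrency: %.1f machines\n", avg);
        printf("peak concurrency:    %d machines\n", 500);
        return 0;
    }

The totals come to roughly 8,100 machine-hours and an average of about 11 concurrent machines, yet a per-machine license would have to cover the peak of 500.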

Your company has said that it sees increasing Linux deployments to support virtual computing environments. What’s driving this?

Troan: Virtual environments are about two things: cost and management.

The first phase of virtualization was all about reducing the number of physical boxes a company had to purchase — in other words, server consolidation. Not only did this mean buying fewer machines, it dramatically reduced expenditures on related expenses like floor space, power and cooling. The cumulative savings were so great that virtualization delivered a positive return very, very quickly.

Once consolidation was underway, the management benefits of virtualization became apparent. Running images can move off of a piece of hardware, allowing zero-downtime maintenance and replacement. Systems can be snapshotted or suspended, freeing up RAM while preserving the systems. These features are a little harder to benefit from on day one, but they are an even larger financial benefit in the medium term.

Linux has been part of this in a couple of ways. First of all, significant numbers of compute workloads in enterprises run on Linux, and those workloads are being moved into virtual environments rapidly. Second, Linux itself is being used as a virtualization platform. Amazon uses Xen for EC2, which is by far the largest virtualized infrastructure in existence, and interest in KVM has started to move into the prototyping phase. Linux’s ability to deliver stable virtualization at a low price point is very interesting for corporate users.

It’s also worth mentioning that the licensing challenges commercial software has in cloud environments also apply in virtual environments. It can be hard to know how much of a product is running, and nobody likes to count things.
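
The snapshot-and-suspend capability Troan describes is scriptable from Linux itself. As one hedged illustration, this sketch uses the libvirt C API (a common way to drive KVM or Xen guests); the domain name "mydomain" and the save path are placeholders.

    /* Sketch: save a running guest's state to disk, freeing its RAM
     * on the host. Compile with -lvirt; names below are placeholders. */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn) {
            fprintf(stderr, "failed to connect to the hypervisor\n");
            return 1;
        }

        virDomainPtr dom = virDomainLookupByName(conn, "mydomain");
        if (dom) {
            /* Stops the guest and writes its memory image to disk;
             * virDomainRestore() later resumes it where it left off. */
            if (virDomainSave(dom, "/var/tmp/mydomain.img") == 0)
                printf("domain saved; host RAM freed\n");
            virDomainFree(dom);
        }

        virConnectClose(conn);
        return 0;
    }

Being able to park and revive whole systems this way is what makes the zero-downtime maintenance Troan mentions practical.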

With budgets and headcount down, how are administrators continuing to scale system counts?

Troan: The answer, very simply, is automation. In order to handle scale you have to automate everything you possibly can. Mark Burgess, who developed cfengine, said in Login magazine, “Always let your tool do the work.” The only way we can cope with complexity is by having tools do the heavy lifting. In deployment, provisioning, and configuration, automated tools are how you manage more and more boxes without adding people. Large organizations like Google and Yahoo! have been doing this for years using home-grown tools. Now that even midsized companies have thousands of servers, we’re seeing a lot of interest in off-the-shelf solutions for automation across the server lifecycle.

Automation not only reduces the time it takes to deploy and manage servers; it also reduces the errors that occur when things are being done by hand. Putting systems into place that do things the right way every time is the only way to grow server counts.



Nokia’s Vice President of MeeGo Devices, Ari Jaaksi, will kick off the afternoon at today’s Linux Foundation Collaboration Summit with his keynote at 1:15 p.m. PT. He took a few minutes with us this morning to share what he’ll be speaking about and how the MeeGo project is going.

Today you are keynoting at the Linux Foundation’s Collaboration Summit. Can you give us a preview of what you’re going to be sharing with the audience?

Jaaksi: My keynote will share a bit of history, including Nokia’s experiences with Linux and Maemo, and how we take that forward. I’ll also share why Intel and Nokia chose to create this project, some recent milestones, and how developers can get involved.
 
How is the “big merge” going and are things on track to deliver MeeGo v1.0 in Q2?

Jaaksi: We’re moving right along and making great progress. Following the initial announcement at Mobile World Congress, we’ve released the MeeGo core operating system repositories – anyone can go to meego.com and download this package for free. And just yesterday, a number of leading companies spanning chipset designers, device manufacturers, software vendors and more announced their support for MeeGo.
 
We’re well on our way toward the MeeGo 1.0 release – I invite readers to come to my presentation or visit meego.com to learn about what’s next.
 
How do Maemo and Moblin developer communities complement each other?

Jaaksi: Moblin brings in a group of very talented developers. They have a world-class build infrastructure and experience working with upstream projects. They know how to work with different products and architectures.
 
Maemo is one of the largest open source communities in the mobile space. Maemo brings in the expertise of mobile devices, ARM based technologies and consumer products. The Maemo community also knows what it takes to build a consumer product — to the end — with quality and finish.
 
By bringing developers from the Maemo and Moblin communities together, we’re broadening the base for innovative ideas to emerge.
 
Can you tell us more about Qt and what it brings to the MeeGo project?

Jaaksi: Qt is a cross-platform application and UI framework used by hundreds of thousands of developers worldwide looking to create amazing user experiences on Windows, Mac, Linux, Windows Mobile, Symbian and Maemo devices. Qt will be the primary application framework for MeeGo, and both Intel and Nokia are committed to investing in it. For developers interested in MeeGo, Qt helps increase the scope for their applications and services across multiple platforms, all using consistent application APIs.
 
How will the open development model and working so closely with upstream partners help to position MeeGo for success?

Jaaksi: MeeGo is a fully open source project hosted by the Linux Foundation and governed according to the best practices of open source development. As in other true open source projects, technical decisions are made based on the technical merit of the code contributions.
 
In the end, Nokia, Intel and our upstream partners share a vision of mobile computing devices and the increasing importance of wireless connectivity – together, through the open MeeGo project, we will help to drive rapid innovation, adoption and consumer choice.
 
Already, anybody can participate and see what we are doing. Code is developed in the open, and decisions are made openly in meetings.
 
How are you bringing new contributors to the project? Do you see momentum?

Jaaksi: The beauty of the MeeGo project is that anyone can join and contribute. We’ve seen lots of interest in the project to date, as evidenced by yesterday’s ecosystem release.
 
It’s only been two months since we announced MeeGo, and we’re seeing the momentum every day. With the first MeeGo devices due to hit the market in the second half of this year, the project has no plans of slowing down anytime soon!

 

We’re preparing for this Friday’s Linux.com Planning Meeting at the Linux Foundation’s Collaboration Summit. The session begins at 9 a.m. PT and will take place in room Osaka. We’re expecting this year’s Linux.com Gurus to join us, and we want to invite other Linux.com members to attend the meeting as we look toward the year ahead. If you’re not at Hotel Kabuki for the Collaboration Summit and want to call in for the meeting, email me and I’ll share the phone conference details and logistics.

I arrived at Hotel Kabuki this afternoon. The sun is out in San Francisco and the meeting space is ready for what I expect to be great collaboration this week!


For those of us who have worked for years in open source, rumors in the press of IBM “breaking its open source patent pledge” were met with a bit of dismay. IBM is one of the top contributors to the Linux kernel and dozens of critical open source projects. For more than a decade IBM has been a good citizen in the open source community.

To get to the bottom of things I contacted Dan Frye, VP of Open Systems Development at IBM and a member of The Linux Foundation’s board of directors, to “say it wasn’t so.” Fortunately, all of us can breathe easy – IBM remains true to its word. Here is the note I received from Dan, which is very clear:

Jim,

There’s been recent interest in IBM’s “500 patent” pledge made in 2005 and how it applies today. It’s always important to get the facts, and the words of the pledge itself are the facts we need.

“The pledge will benefit any Open Source Software. Open Source Software is any computer software program whose source code is published and available for inspection and use by anyone, and is made available under a license agreement that permits recipients to copy, modify and distribute the program’s source code without payment of fees or royalties. All licenses certified by opensource.org and listed on their website as of 01/11/2005 are Open Source Software licenses for the purpose of this pledge.

“IBM hereby commits not to assert any of the 500 U.S. patents listed below, as well as all counterparts of these patents issued in other countries, against the development, use or distribution of Open Source Software.”

IBM stands by this 2005 Non-Assertion Pledge today as strongly as it did then. IBM will not sue for the infringement of any of those 500 patents by any Open Source Software.

Thanks.

Daniel Frye
VP, Open Systems Development
IBM Linux Technology Center