Article Source: Community-cation
December 10, 2009, 7:01 am

The technology event season has come to a close, as the holidays approach and 2009 draws to its end.

This was a great year for events in the Linux community. Several regional Linux and open source shows filled the calendar around the world, so most Linux enthusiasts got a chance to visit at least one event near them. There were also the big shows, like OSCON, Open Source World, CeBIT, and, of course, LinuxCon, which came off as an unqualified success for the Linux Foundation.

So what’s kicking off for the first half of 2010 and beyond? Here’s a wish list of the events I want to attend:

  • SCALE 8X, February 19-21, 2010, Westin Hotel near Los Angeles International Airport, Los Angeles, CA. This is one of my favorite community shows, and its importance in the Linux ecosystem cannot be overstated. This is the community event that sets the bar for many other events to reach, with strong tracks for users and developers. Vendors and experts from Linux and open source deliver top-notch content, at a level that belies the “amateur” label one might put on a community-organized show. There’s nothing amateur about SCALE. They’re still looking for speakers, too, so visit their call for papers site and see what you can contribute.
  • CeBIT 2010, Open Source Forum, March 2-6, 2010, Hanover, Germany. After the huge success of the open source agenda at last year’s CeBIT, the show organizers are giving it more room and a better location at the CeBIT exhibition grounds in 2010. The call for papers is now officially open: interested speakers can send in their proposals by e-mail, with the title of their talk, a speaker bio, and a short description, until January 7, 2010.
  • 4th Annual Linux Foundation Collaboration Summit, April 14-16, 2010, Hotel Kabuki, San Francisco, CA. San Francisco’s Japantown will again be home to the Linux Foundation Collaboration Summit, an invitation-only event that gathers core kernel developers, distribution maintainers, ISVs, end users, system vendors, and other community organizations to meet face-to-face and tackle the most pressing issues facing Linux today. Want to be a part of it? Registration is limited, so submit your invitation request to the Collab Summit today.
  • Red Hat Summit/JBoss World, June 22-25, 2010, Seaport World Trade Center, Boston, MA. It seems like just yesterday that I was in Chicago for the 2009 edition of these events, but this week Red Hat announced the call for papers for these massive events. This year’s edition promises more technical content and a new joint track on cloud computing.
  • LinuxCon, August 10-12, 2010, Renaissance Boston Waterfront, Boston, MA. This is the event for which I am the most excited: the Linux Foundation’s annual technical conference that covers all matters Linux. LinuxCon will bring together the best and brightest that the Linux community has to offer, including core developers, administrators, end users, community managers and industry experts–all together to learn and share new ideas about Linux.

There will be more events coming in 2010, of course, but these are the ones I am really looking forward to at the moment. I am sure this list will expand as the year starts and more conferences are announced. For now, this is my travel wish list for 2010. Tell me what I missed, or better yet, add event information in the Events section so everyone can start planning which events they will attend.

Article Source: Community-cation
December 7, 2009, 7:21 am

There was a good story on Ars Technica last week about the current state of browser share on the Internet. Browser news is always interesting because it gives us one window into how far open source has come on the desktop.

The thrust of the Ars story was how IE8 has passed the market share of IE7 (though not, amazingly, IE6, the current Bane of the ’Net), which does not much interest Linux users, except to highlight Windows users’ capacity to stick with what they know even when it puts all of their data at risk. In looking at the data, however, I was interested to note how much market share the open source browsers held against the proprietary ones.

November 2009 Worldwide Browser Shares, Courtesy of Net Applications.

In the figure above, you can see the pie chart from the data source at Net Applications. Right away, you can see that Firefox is at a healthy 24.72 percent of the worldwide browser market, according to Net Applications. That’s a respectable number, but the one that really interests me is IE’s share: only 63.62 percent. This is a figure that clearly demonstrates how far the Microsoft IE juggernaut has fallen against open source browsers.

That’s because, with the exception of Opera, pretty much all of the other browsers on the list competing against IE are open source browsers.

Now, I know some will argue against Safari being called open source, but I am going to count it in the open column for the primary reason that even though some of the Safari GUI elements and tools are proprietary, the WebKit renderer code is wide open.

If you add up the percentages of all of the open source browsers in the report, you come up with 33.54 percent of the total browsers tracked, compared to proprietary browsers’ 66.43 percent share (the missing 0.03 percent are browsers that fell below the survey’s tracking threshold).

For those Safari-as-proprietary purists out there, slipping Apple’s browser into the proprietary column puts the shares at 29.18 percent for open source and 70.79 percent for proprietary browsers.
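For readers who like to check the math, the reclassification is just a matter of moving Safari’s slice between the two columns. Here’s a quick sketch using only the aggregate figures quoted above (the full per-browser breakdown lives on the Net Applications site):

```python
# Aggregate shares from the Net Applications November 2009 data, as quoted above.
open_with_safari = 33.54           # all open source browsers, counting Safari as open
proprietary_without_safari = 66.43
open_without_safari = 29.18        # the "Safari-as-proprietary" open source total
untracked = 0.03                   # browsers below the survey's tracking threshold

# Safari's implied share is the gap between the two open source totals.
safari = round(open_with_safari - open_without_safari, 2)
print(safari)  # 4.36

# Moving that slice to the proprietary column reproduces the 70.79 figure...
print(round(proprietary_without_safari + safari, 2))  # 70.79

# ...and either split, plus the untracked sliver, totals 100 percent.
print(round(open_with_safari + proprietary_without_safari + untracked, 2))  # 100.0
```

Either way you slice it, the numbers hang together to within the survey’s 0.03 percent of untracked browsers.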

Either way you add it up, this puts open source browsers at or very nearly a third of the market share, and with the newcomer Chrome on the scene, we can only expect this open source share to increase in the coming months.

There are three positives that come out of data like this. First, there’s the obvious argument against open-source naysayers: the popularity of open source browsers leaves very few arguments against the success of open source as a general concept.

Second, there’s the nice fact that nearly all of these browsers are available as Linux applications for people to try. Firefox, Opera, and Konqueror are great Linux browsers already, and the Chromium development version of Chrome works pretty well on Linux for now.

But I believe the most important takeaway from this data is that we have finally reached a tipping point for open standards across the entire web. No Web developer worth their salt can ignore the fact that more than a third of browsers are rendering sites using open standards (and here we have to count Opera in this group, because proprietary or not, it still follows open standards).

There are still IE-only pages out there, of course, because there are still some bad web devs in the world, but the pressures of user demand will soon force all web coders to rework their sites to follow more open standards. Because sooner or later, they or their bosses will be wondering why they’re giving up a full third of their potential visitor traffic just to toe the Microsoft line.

Even as critics mock the open source browsers’ clear lack of majority (for now), these browsers have already performed their best service for Linux and general Internet users: forcing the Web to be built on open standards, and leveling the playing field for Linux to compete on mobile and cloud platforms, all the while giving all users a safer and more stable set of tools with which to surf the web.

Article Source: Community-cation
December 3, 2009, 7:18 am

Some of you might be too young to remember the space race: the frenzied decades of the ’50s and ’60s when the US and the USSR poured massive resources into being the First [Anything] in space. But all of us might have a chance to watch some of this in microcosm as OEM vendors jockey for the coveted “first Chrome OS netbook shipped” position.

The fervor over Google’s new web-oriented operating system is surprising, even for ardent Linux supporters. After all, it’s only been a couple of weeks since Chrome OS was open sourced, and after personally test-driving it on a virtual machine, I can tell you that, while very interesting, this is still a very young system.

But since the November 19 release of the ChromiumOS source code, we have already seen:

  • By end of day on November 19, virtual machines of Chrome OS were available.
  • On November 25, Dell technology strategist Doug Anson released a Live USB image designed to run specifically atop Dell’s Mini 10v netbook.
  • On December 2, Taiwanese tech outlet DigiTimes quoted Acer Chairman J.T. Wang as confident that his company would be the first out the door with a Chrome OS netbook–a product Acer has been working on since July 2009 and plans to release by the second half of 2010.

That’s a lot of ground covered in just two weeks, but that’s the kind of excitement the Chrome OS offering generates. And Acer is not the only player in the netbook race: in July, when Chrome OS was first announced, “ASUS… Hewlett-Packard, Lenovo, Qualcomm, Texas Instruments, and Toshiba” were also announced as hardware partners.

If you’re wondering where Dell is on this list, you’re not the only one. Anson’s release of the Mini 10v image was strictly unofficial, and to date Dell has made no official announcements of its plans for Chrome OS, other than to say it is evaluating the OS.

Dell’s hesitancy seems a bit strange at first glance. After all, this is a hot new operating system designed by Google, which has a decent track record for the software it launches. And Chrome OS has a strong pedigree: it’s got Ubuntu and Moblin inside. Dell has released products with both of these technologies, so what’s not to like?

In this case, it may be just ordinary caution. It’s no secret that Dell, like most other OEMs, got hurt by the combination of the economic recession and the huge disappointment known as Windows Vista. On the same day the ChromiumOS source code was opened, Dell reported a 54 percent drop in its third-quarter profits, with a 15 percent cut in net revenue. This is enough to make any company gun-shy about venturing into a new product line–particularly a product line like netbooks, which have lower selling prices and therefore don’t contribute as much to profits as, say, servers. Adding to this caution is Dell’s previously stated goal of cutting $4 billion in costs.

The good news for Dell (and the rest of the hardware vendors) is that there may be signs of an impending recovery in the server market: IDC reported this week that while server shipments fell again, they did not fall as hard as they did in 2Q. In this most recent quarter, server shipments were down 17.9 percent year over year, much better than the 30.1 percent drop in the second quarter, and 3Q shipments rose 12.4 percent over 2Q, according to IDC’s Worldwide Quarterly Server Tracker–a clear sign of some growth.
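It’s worth noting that those three percentages are measured against different baselines (the 17.9 and 30.1 figures compare against the year-ago quarters, the 12.4 against the prior quarter), and a little back-of-the-envelope arithmetic shows how they fit together. A sketch, using a made-up 2008 shipment index since the article doesn’t give absolute volumes:

```python
# Hypothetical shipment index; only the IDC percentages are from the article.
q2_2008 = 100.0                   # assume an index of 100 units shipped in Q2 2008
q2_2009 = q2_2008 * (1 - 0.301)   # 30.1% year-over-year decline -> 69.9
q3_2009 = q2_2009 * (1 + 0.124)   # 12.4% sequential rise -> about 78.6

# The Q3 2008 baseline implied by the 17.9% year-over-year decline:
q3_2008 = q3_2009 / (1 - 0.179)   # about 95.7

print(round(q2_2009, 1), round(q3_2009, 1), round(q3_2008, 1))  # 69.9 78.6 95.7
```

In other words, a quarter can grow sequentially while still trailing its year-ago level, which is exactly the “better, but not out of the woods” picture the IDC numbers paint.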

This is not out-of-the-woods news yet, but if this continues and becomes a trend, Dell could see a much-welcome stabilization of its server business, and an influx of revenue that might make them more receptive to jumping on the Chrome OS bandwagon.

And, it’s not like Dell is out of the netbook game entirely. As mentioned, Dell has released Ubuntu-loaded netbooks, including the first commercial product shipping with Moblin v2.

In the meantime, Acer is staking its claim to be the first Chrome OS netbook distributor next year. With this much commercial interest, Google’s decision to open the code at this juncture is a good one, as even more innovation will be pumped into the new operating system as launch approaches. Look for a lot of new features and new vendor participants in 2010.

The race is indeed on, and it will not be boring.

We are often so focused on the end-user or business experience of the computers we use on a daily basis that we sometimes forget about the nuts-and-bolts side of IT–a side of technology where performance is king, no matter the platform, operating system, or task at hand.

This is the world for organizations like the Computer Measurement Group (CMG), a not-for-profit dedicated to providing predictable measurement and analysis for any form of IT service–and, once a service can be measured accurately, no matter the platform or the job, to figuring out ways to improve it.

The group got its start measuring services, workloads, and transactions specifically in the mainframe arena, according to Dr. Michael Salsburg, Chief Architect for Real-Time Infrastructures within Unisys Corporation’s Systems and Technology group and an active CMG member. In fact, Salsburg related to me in a recent phone interview, there is still a perception out there that mainframes are all the CMG is concerned about.

That is a misperception: the CMG has since evolved to measuring service levels, capacity, performance… any aspect of IT that can be managed is fair game to be measured.

The trick, of course, is how to measure it.

This challenge is usually met during the CMG’s annual conferences, where IT luminaries from around the world meet to discuss the latest obstacles in performance measurement. This year’s conference, CMG’s 35th, will prove to be no exception, as attendees meet to take on cloud computing.

The topic of this year’s CMG conference, being held this December 6-11 in Dallas, Texas, was chosen last year, just as the initial hype for cloud computing was getting started, Salsburg said. Essentially, cloud computing was just another form of the Software as a Service (SaaS) model, he continued, and was definitely worth the focus of the CMG conference.

Cloud computing, as a broad topic, “provides some wrinkles” in measuring performance, Salsburg explained, “because we can’t always get access to all measurement points as if they were in your own data center.” While the end results can be measured against the initial input, breaking down the performance at intermediate stages can be difficult.

The very structure of cloud computing can throw up other obstacles in the measurement path. Salsburg described the three basic layers of cloud computing: cloud providers who handle just the IT infrastructure, cloud publishers who create the actual SaaS, and cloud consumers who may have their own processes added on to the job at hand. These levels can be public or private at any stage of the process, and can be very blended (such as eBay, which acts as provider and publisher nearly equally).

But getting into these types of measurements is what the CMG is all about. And these aren’t numbers that only performance wonks are going to appreciate: real-world management decisions, such as capacity planning for SaaS, will be made based on the techniques the CMG’s members will devise.

The keynote address for CMG’09, “The Impact of Software as a Service on the Enterprise Data Center,” presented by Amy Wohl, President of Wohl Associates, will cover this very issue.

Ten more presentations are scheduled throughout the conference program, as well as the Plenary Session, “The Next Revolution in Computing: The Virtualized Data Center Meets the Cloud,” presented by Jeffrey Nick, Senior Vice President and Chief Technology Officer of EMC Corporation.

Visit the conference site for more information on CMG’09, or visit the CMG site.

Article Source: Andy Updegrove’s Blog
November 25, 2009, 9:08 am

According to Reuters, one more thread in the long-running saga of Rambus and the JEDEC SDRAM standards abuse appears to be reaching an end.

Specifically, the wire service reports that next Wednesday the European competition regulators will accept the settlement terms offered last June by chip maker Rambus. Under those terms, Rambus will not be fined and will not be found liable for any wrongdoing. In exchange, Reuters’ sources say, Rambus will offer some of its products at a reduced royalty rate, and the rights to fabricate some of its older products for free, for five years beginning in 2010.

If the settlement is announced as anticipated, U.S. regulators (who fought and lost) may wonder whether their brethren across the pond (who fought and settled) are better poker players than they are.

Read the rest here

Article Source: Community-cation
November 25, 2009, 7:54 am

If you have any gold in your portfolio, then you are indeed a fortunate soul. With gold at an all-time high, upwards of US$1,100 per troy ounce, lots of people holding this precious metal are feeling quite flush right now.

No need to look at the URL field; you haven’t accidentally landed on an investment site. Trust me, I’m the last guy you want money advice from. But something’s going on with the gold market right now that could be analogous to a potential problem with cloud computing.

What’s happening is this: in larger cities where large corporate banks have massive vaults, many retail customers who have been storing their precious metals in their bank’s vault are being asked to move their property to make room for the bank’s larger customers’ gold holdings. It is more profitable, it seems, for a bank to store one ton of gold for one customer than one pound of gold each for 2,000 clients.

So, instead of honoring the smaller retail clients’ needs, these banks are requiring the retail customers to find new homes for their gold.

I know what some might be thinking: this would be one of those problems that would be nice to have. But with gold so ridiculously valuable, the security logistics alone for moving this stuff are nightmarish, and if customers simply take it to their home safes, they put themselves at a pretty high personal risk.

What struck me about this situation is how banks can simply change their policies to suit the needs of their corporate clients at the expense of the retail clients. And, I could not help but draw the parallel: how will policies for storing our personal data be developed as more and more cloud providers and services ask us to keep all of our electronic “stuff” out on the cloud?

For instance, the pictures of our last family vacation are sitting on my hard drive, available as a slide show to any unfortunate soul I can rope into watching them. They are not monetarily valuable, but they are personally precious to me and my family. I could choose to store them on Flickr so more people can see them, but I don’t because (a) I do know how unexciting they are and (b) I don’t want to have pictures of my kids out on the Internet.

This last concern is a father’s nettling worry, and I’m sure lots of people would tell me I’m just being paranoid. And they might have a valid point. But I am who I am, much to my daughters’ occasional dismay, and the photos stay private for now.

With new platforms like Moblin and Chrome OS in various stages of availability, though, I may soon find myself facing a real push to store data like this out on the cloud.

My security twitchiness aside, I do see real advantages to using the cloud for storing data. It would be nice, for instance, to have all of my computers’ data stored and accessible from one place. Right now, a lot of older info is kept on enclosed hard drives pulled straight from the original machines they were used in, whenever an upgrade or a re-assignment necessitated moving a drive. If I have to find my wife’s recipe for peppermint cocoa created in 2003, I know on what drive that’s stored, but it’s not a pretty system by any means.

There would be appeal, therefore, in keeping everything out on the cloud, so no matter what system I was using (or from where), I could get to everything quickly. Naturally, I can do that now, but the pain point is not high enough (yet) for me to actually make the change.

The analogy between finding a new home for gold holdings and storing my data does break down under close analysis. After all, it costs far more to build a physical, secure vault than to slap another few petabytes into a cloud storage unit. There will never be a time when a cloud provider will say, “sorry, can’t store your files anymore, no more room.”

But, unfortunately, I could see an instance when a storage provider could say to little guys like me, “sorry, we need to keep the high-paying customers happy, so they’re keeping the high-speed access. You’ll have to use the lower-speed line, unless you up your subscription fee for a low, low price of…”

As a consumer, I want to see one specific policy from providers of cloud-based storage: a guarantee that my data will never be given second-class status. There is very little chance I will ever be as profitable to a cloud storage provider as a large corporate entity, so, like the retail gold holders at banks today, I want assurances that I will never find myself struggling to get at my data.

Sure, if the provider updates their system to provide faster access for all, then I could have the option to pay more for more speed, but don’t ever bump me down unless I pony up.

Almost as much as security and privacy, I want assurances that my data will be kept for the long haul. Because I want to show my grandkids someday that trip to the Biggest Ball of Twine we took in the summer of 2009.

Article Source: Community-cation
November 18, 2009, 10:40 am

The strong growth of open source in IT will be demonstrated with its prominent placement at the world’s largest computer event, CeBIT 2010.

That’s the news coming out of Hannover, Germany, the permanent home of the CeBIT show, where the CeBIT Open Source 2010 event will take center stage in one of the largest expo halls, according to show organizer Deutsche Messe. According to Britta Wülfing, CeBIT Event Manager, the shift of the Open Source exhibition to the more prominent Hall 2 is a direct response to the huge popularity this segment of the show enjoyed in 2009.

“Nearly 47,000 attendees chose open source as a subject they were interested in,” Wülfing said in an interview. Given numbers like that–for a segment of the CeBIT show that was brand-new in 2009–it’s little wonder that Deutsche Messe and partner Linux Pro Magazine are touting the recent Call for Projects for CeBIT Open Source 2010.

Wülfing indicated that the trade show’s organizers, having fully grasped the popularity of open source, are now putting even more of a spotlight on the topic by giving it a home in what she describes as a much better venue. “It’s wonderful they saw the big success in 2009,” she said.

The Call for Projects will give 15 projects even more of a spotlight within the CeBIT Open Source event. Selected non-commercial projects will be given free exhibit space and a 20-minute presentation spot in one of the show’s Hour for Projects sessions. These presentations will not only be offered to show attendees, but will also be broadcast via a live stream to anyone on the Internet who wants to watch.

Wülfing explained that she and the show organizers are looking for projects that are innovative and doing something new in open source. Ideally, these projects, while they are non-commercial themselves, should be producing something that benefits business.

After all applications are gathered, a jury will determine which 15 projects get the nod, putting them in front of thousands of people live and far more than that over the Internet.

Some projects, even though they are non-commercial, chose to purchase floor space on their own last year, given financial backing from sponsors such as Sun Microsystems. This means that smaller projects that might otherwise be daunted by competition from larger organizations will have more of a chance when applying for one of the coveted spots, Wülfing added.

The “Call for Projects” will close this Friday, November 20, so interested groups shouldn’t delay. Application forms are available at Linux Pro Magazine.

Article Source: Andy Updegrove’s Blog
November 16, 2009, 6:26 pm

Over the next ten years, tens and possibly hundreds of millions of new platforms are going to be put into place in the United States as part of a new national infrastructure; an equal number will be installed in Europe (many already are). The same may happen in other parts of the world as well.

Most of these platforms will be invisible in everyday life, but together they are intended to play a major role in limiting greenhouse gases, lowering national dependence on foreign oil, and capping, or even lowering, our otherwise perpetually growing demand for electricity. Many of these platforms, and perhaps most, will run Linux.

That is, if everything goes according to plan – and that plan relies in large part on whether we can develop, integrate, and implement an unprecedented number of standards in record time. Happily, that goal took a major step forward today in Denver, Colorado.

Read the rest here

Article Source: Community-cation
November 16, 2009, 7:03 am

As the week began, I had the fortune to come across an excellent article in the Wall Street Journal that addressed a problem employees face all too often in the workplace: the hardware and software workers are required to use under their company’s IT policies often lags behind the technology they can purchase and use at home as consumers.

The writer of the piece, Nick Wingfield, does a pretty good job summing up what many workers are running into out in the corporate world: highly restrictive storage limits on e-mail, obsolete search functions for corporate information, and PCs that were top-of-the-line when Windows XP first came out… eight years ago.

For anyone familiar with IT practices, this is a story that’s become all too familiar. As consumers, we have the ability to buy the latest hardware (PC or mobile), incorporate the latest in software tools, and generally build outstanding home systems that are light-years ahead of what we use at work. Yet our corporate tools are supposed to be producing results that will better our employers’ bottom lines. Why can’t we get better tools?

I should pause here and mention that my own company’s corporate policies are very progressive by the standards of this article. We can use whatever software we need to get the job done, and we have access to pretty high-end laptops and PCs to do it. But, I must also disclose that all of us here at the Linux Foundation are computer-savvy enough that we can self-support our own machines and software.

Really, could you see Linus calling into a help desk with a question? A lot of organizations are not so blessed. Technical ability ranges from the highly skilled to the is-this-a-coffee-holder level of user. Couple this with a diverse variety of job functions that call for an equal variety of tools, and you can see why most corporate IT policies trend towards locking everything down. Keep it simple, and things are less likely to break.

As a former IT configuration manager, I get that. I really do. Nothing makes me cringe more than hearing about employees who install this cool new software they found on the Internet last weekend, and then wonder why their machines are compromised–if they are lucky enough to even find out.

Security is the most important aspect of an IT department’s preventative functions. Between viruses, worms, and disgruntled employees, the very real possibility of losing valuable corporate data is a big part of why employees have few choices on the tools they can use–or have to jump through several hoops to get what they need.

A friend of mine is facing this at her workplace: constant e-mails from her IT department about clearing out her inbox storage have prompted her to challenge her IT department as to why they can’t use a solution like Gmail. She also has a Gmail account for a sideline business, and even with the large files she transfers and stores in that personal account, she tells me she is nowhere near her Gmail storage limit.

But when confronted, her IT department informs her that e-mail is not meant for file storage, that Gmail poses potential security problems, and that migrating to such a solution is too costly. (Ironically, when she dutifully tries to use her personal or network drives for file storage as her IT team suggests, she runs out of space on her hard drive and gets e-mails from the same IT department that her allotted network space is getting too full.)

Cost, of course, is another consideration for business IT policies. It’s expensive to buy licenses for all of that software, so buying additional software has to be looked at with a critical eye. And, if I were an IT manager and had just spent $XX,000s upgrading my proprietary e-mail server, why would I dump it for something else, even if the new solution were ultimately far less expensive?

We’ve heard this all before, but the Wingfield article does a good job framing it within a problem many of us face: the growing disparity between the tools available to us as consumers and as employees. A disparity made all the more apparent, according to the article, by the fact that more PCs are now being sold to consumers than to businesses.

Ultimately, Linux and the rest of the free software pantheon provide the real solution for corporations. When software costs can be slashed to nearly nothing, and hardware costs can see a commensurate reduction thanks to fewer resources being used, Linux is by far the best operating system to break this cycle of forced obsolescence.

Security, too, is not such a concern with Linux. While no system is perfect, IT managers would have to worry about security a lot less in a Linux shop than in a Windows workplace.

Wingfield doesn’t directly mention Linux in his solution set, but the article’s goal was not really to advocate any one particular brand of technology over another. The article did address the growing prevalence of virtual machines, which would very effectively segregate corporate tools and data. And as a virtual platform, Linux has very few peers.

Another interesting bit of info in the report: Kraft Foods Inc. has begun a program that lets its employees choose their own computers to use. But, Wingfield reports, “employees who choose Macs are expected to solve technical problems by consulting an online discussion group at Kraft, rather than going through the help desk, which deals mainly with Windows users.”

This practice at Kraft raises an interesting possibility for Linux advocates: the formation of corporate Linux user groups (CLUGs) that would allow any workers interested in using cutting-edge technology to give Linux and/or other free software a try without diverting corporate resources toward training for and supporting the new tools.

If done properly, CLUGs could allow teams to cut software and support costs and very likely increase productivity, since workers would now have access to a much wider range of software than they might otherwise have under a traditionally restrictive corporate IT policy.

Article Source: Community-cation
November 11, 2009, 5:04 am

Besides excellent sessions and networking opportunities, there’s usually behind-the-scenes action going on at our many events: meetings and get-togethers for any one, and sometimes several, of our working groups and teams. The Linux Foundation (LF) staff and members are scattered around the world, so using our events for valuable face time is a necessity.

One of the more important functions is the annual Technical Advisory Board (TAB) election, which was held this year at the Japan Linux Symposium in Tokyo. Today, we’ve announced the results of that election.

If you’re not familiar with the TAB, this 10-member board consists of members of the Linux kernel community who are elected annually by their peers to serve staggered, two-year terms. The TAB collaborates with the LF on programs and issues that affect the Linux community, and the TAB Chair also sits on the LF board.

It is often perceived that the Foundation uses TAB to influence the Linux kernel community, when actually it’s the other way around: TAB typically advises the LF, giving us important guidance on what’s happening with the Linux kernel so our membership can better react to the technical changes and advances.

For the 2009 election, there are three new members of TAB:

  • Alan Cox, employed by Intel SSG and maintainer of major Linux projects such as the original Linux SMP implementation, the Linux Mac68K port, and an experimental 16-bit Linux subset port to the 8086.
  • Thomas Gleixner, who manages bug reports for NAND FLASH, core timers and the unified x86 architecture.
  • Ted Ts’o, the first North American Linux kernel developer and a Linux Foundation fellow. Ted was also voted in as the new Vice Chair.

Re-elected for two-year terms are Jon Corbet, Linux kernel developer, editor of LWN, and author of the Linux Kernel Weather Report, and Greg Kroah-Hartman, employed by Novell, kernel maintainer for the -stable branch, and manager of the Linux Device Driver Project.

The other five TAB members, who are serving the remainder of their two-year terms, include:

  • James Bottomley, Novell distinguished engineer and Linux kernel maintainer of the SCSI subsystem, the Linux Voyager port, and the 53c700 driver.
  • Kristen Carlson Accardi, kernel developer at Intel and contributor to the ACPI, PCI, and SATA subsystems.
  • Chris Mason, member of Oracle’s kernel development team and creator of the Btrfs file system.
  • Dave Jones, maintainer of the Fedora kernel at Red Hat.
  • Chris Wright, employed by Red Hat, maintainer of the LSM framework, and co-maintainer of the -stable Linux kernel tree.

As you can see, each member of TAB is a valuable kernel contributor, so they know what’s going on in the kernel at a level that’s dizzying to most folks. The LF is very lucky to have such expertise to advise and guide us, and as always, we thank the TAB members for their valuable service.