Article Source Community-cation
November 16, 2009, 7:03 am

As the week began, I had the good fortune to come across an excellent article in the Wall Street Journal that addressed a problem employees face all too often in the workplace: the hardware and software workers are required to use under their company’s IT policies often lags far behind the technology they can buy and use at home as consumers.

The writer of the piece, Nick Wingfield, does a pretty good job summing up what many workers are running into out in the corporate world: highly restrictive storage limits on e-mail, obsolete search functions for corporate information, and PCs that were top of the line when Windows XP first came out… eight years ago.

For anyone familiar with IT practices, this is a story that’s become all too familiar. As consumers, we have the ability to buy the latest hardware (PC or mobile), incorporate the latest in software tools, and generally build outstanding home systems that are light-years ahead of what we use at work. Yet our corporate tools are supposed to be producing results that will better our employers’ bottom lines. Why can’t we get better tools?

I should pause here and mention that my own company’s corporate policies are very progressive by the standards of this article. We can use whatever software we need to get the job done, and we have access to pretty high-end laptops and PCs to do it. But, I must also disclose that all of us here at the Linux Foundation are computer-savvy enough that we can self-support our own machines and software.

Really, could you see Linus calling into a help desk with a question? A lot of organizations are not so blessed. Technical ability ranges from the highly skilled to the is-this-a-coffee-holder level of user. Couple this with a diverse variety of job functions that call for an equal variety of tools, and you can see why most corporate IT policies trend towards locking everything down. Keep it simple, and things are less likely to break.

As a former IT configuration manager, I get that. I really do. Nothing makes me cringe more than hearing about employees who install this cool new software they found on the Internet last weekend, and then wonder why their machines are compromised–if they are lucky enough to even find out.

Security is the most important aspect of an IT department’s preventative functions. Between viruses, worms, and disgruntled employees, the very real possibility of losing valuable corporate data is a big part of why employees have few choices on the tools they can use–or have to jump through several hoops to get what they need.

A friend of mine is facing this at her workplace: constant e-mails from her IT department about clearing out her inbox storage space have prompted her to challenge them as to why they can’t use a solution like Gmail. She also has a Gmail account for a sideline business, and even with the large files she transfers and stores in that personal account, she tells me she is nowhere near her storage limit on Gmail.

But when she confronts her IT department, they tell her that e-mail is not meant for file storage, that Gmail poses potential security problems, and that migrating to such a solution would be too costly. (Ironically, when she dutifully tries to use her personal or network drives for file storage as her IT team suggests, she runs out of space on her hard drive and gets e-mails from the same IT department that her allotted network space is getting too full.)

Cost, of course, is another consideration for business IT policies. It’s expensive to buy licenses for all of that software, so buying additional software has to be looked at with a critical eye. And, if I were an IT manager and had just spent $XX,000s upgrading my proprietary e-mail server, why would I dump it for something else, even if the new solution were ultimately far less expensive?

We’ve heard all this before, but the Wingfield article does a good job framing it as a problem many of us face: the growing disparity between the tools available to us as consumers and those available to us as employees. A disparity made all the more apparent, according to the article, by the fact that more PCs are now being sold to consumers than to businesses.

Ultimately, Linux and the rest of the free software pantheon provide the real solution for corporations. When software costs can be slashed to nearly nothing, and hardware costs can see a commensurate reduction thanks to fewer resources being used, Linux is by far the best operating system to break this cycle of forced obsolescence.

Security, too, is not such a concern with Linux. While no system is perfect, IT managers would have to worry about security a lot less in a Linux shop than in a Windows workplace.

Wingfield doesn’t directly mention Linux in his solution set, but the article’s goal was not really to advocate any one particular brand of technology over another. The article did address the growing prevalence of virtual machines, which would very effectively segregate corporate tools and data. And as a virtualization platform, Linux has very few peers.

Another interesting bit of info in the report: Kraft Foods Inc. has begun a program that lets its employees choose their own computers to use. But, Wingfield reports, “employees who choose Macs are expected to solve technical problems by consulting an online discussion group at Kraft, rather than going through the help desk, which deals mainly with Windows users.”

This practice at Kraft raises an interesting possibility for Linux advocates: the formation of corporate Linux user groups (CLUGs) that would allow any workers interested in using cutting-edge technology to give Linux and/or other free software a try without diverting corporate resources toward training for and supporting the new tools.

If done properly, CLUGs could allow teams to cut software and support costs and very likely increase productivity, since workers would now have access to a much wider range of software than they might otherwise have under a traditionally restrictive corporate IT policy.

Article Source Community-cation
November 11, 2009, 5:04 am

Besides excellent sessions and networking opportunities, there’s usually behind-the-scenes action going on at our many events: meetings and get-togethers for any one, and sometimes several, of our working groups and teams. The Linux Foundation (LF) staff and members are scattered around the world, so using our events for valuable face time is a necessity.

One of the more important functions is the annual Technical Advisory Board (TAB) election, which was held this year at the Japan Linux Symposium in Tokyo. Today, we’ve announced the results of that election.

If you’re not familiar with the TAB, this 10-member board consists of members of the Linux kernel community who are annually elected by their peers to serve staggered, two-year terms. The TAB collaborates with the LF on programs and issues that affect the Linux community, and the TAB Chair also sits on the LF board.

It is often perceived that the Foundation uses TAB to influence the Linux kernel community, when actually it’s the other way around: TAB typically advises the LF, giving us important guidance on what’s happening with the Linux kernel so our membership can better react to the technical changes and advances.

For the 2009 election, there are three new members of TAB:

  • Alan Cox, employed by Intel SSG and maintainer of major Linux projects such as the original Linux SMP implementation, the Linux Mac68K port, and an experimental 16-bit Linux subset port to the 8086.
  • Thomas Gleixner, who manages bug reports for NAND flash, core timers, and the unified x86 architecture.
  • Ted Ts’o, the first North American Linux kernel developer and a Linux Foundation fellow. Ted was also voted in as the new Vice Chair.

Re-elected for two-year terms are Jon Corbet, Linux kernel developer, editor of LWN, and author of the Linux Kernel Weather Report, and Greg Kroah-Hartman, employed by Novell and kernel maintainer for the -stable branch as well as manager of the Linux Device Driver Project.

The other five TAB members, who are serving out the remainder of their two-year terms, include:

  • James Bottomley, Novell distinguished engineer and Linux kernel maintainer of the SCSI subsystem, the Linux Voyager port, and the 53c700 driver.
  • Kristen Carlson Accardi, kernel developer at Intel and contributor to the ACPI, PCI, and SATA subsystems.
  • Chris Mason, member of the Oracle kernel development team and creator of the Btrfs file system.
  • Dave Jones, maintainer of the Fedora kernel at Red Hat.
  • Chris Wright, employed by Red Hat, maintainer for the LSM framework, and co-maintainer of the -stable Linux kernel tree.

As you can see, each member of TAB is a valuable kernel contributor, so they know what’s going on in the kernel at a level that’s dizzying to most folks. The LF is very lucky to have such expertise to advise and guide us, and as always, we thank the TAB members for their valuable service.

Article Source Community-cation
November 10, 2009, 5:41 am

One of the big challenges facing Linux development is the straightforward issue of actually having developers to write the code.

As more academic and vocational training centers add curricula that include programming for Linux and free software projects, this has become less of a problem in recent years. But training new developers takes time, and meanwhile there’s a huge pool of existing Windows developers out there who could very easily switch to Linux or add Linux development to their skillsets.

What’s stopping them? According to Miguel de Icaza, Mono project founder and vice president of Developer Platforms at Novell, “the learning curve [for Linux programming] was too steep” for Windows developers. de Icaza and the rest of the Mono team at Novell have just announced a solution that will greatly smooth out this learning curve: Mono Tools for Visual Studio, a new add-in for the popular Windows IDE that will enable C# and .NET coders to use VS directly to create apps for Linux, as well as for Unix and Mac OS X.

That’s a pretty big deal, though not necessarily for existing Linux developers. This tool “is aimed squarely at the Windows development community,” de Icaza said in an interview this week.

Instead of building an app on Windows and then going through all of the usual hassle of porting it, learning new IDE tools to code, test, and debug applications for Linux, programmers can now stay in VS and perform all of those tasks to build a Linux application. Porting to Linux has been made a lot easier, if this add-in delivers on its promise.

Getting these applications running on Linux has been made easier, too. Mono Tools also features integration with SUSE Studio Online, Novell’s packaging and appliance-building tool. de Icaza highlighted this feature in his description of Mono Tools, indicating that instead of building an application to run on an existing Linux server, programmers could build a completely self-contained Linux software appliance dedicated solely to that application.

That benefit should also entice Windows developers to reconsider building apps for their own native Windows platform. de Icaza described the steps that are familiar to any Windows sysadmin and developer: configuring an app, installing it on a Windows machine, rebooting, tying in the database, rebooting again… all of this complicated effort is bypassed by using a self-contained Linux appliance.

Given Novell’s efforts in putting this project together, it comes as little surprise that openSUSE and SUSE Linux Enterprise Server (SLES) will initially be the big beneficiaries of any ported applications. Mono Tools not only has integration with SUSE Studio, it also provides automated packaging of apps for SLES and openSUSE. Other distros should also be able to benefit from Mono Tools, de Icaza added, given that C# and .NET apps tend to be isolated from distribution-centric code.
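To make that cross-platform point concrete, here is a minimal sketch, not taken from the Mono Tools product itself, of the kind of C# console program that builds and runs unchanged under Mono on Linux:

```csharp
// hello.cs -- a minimal, platform-neutral C# console program.
// The same source builds under .NET on Windows or with Mono's
// compiler on Linux, Unix, or Mac OS X, with no code changes.
using System;

class Hello
{
    static void Main()
    {
        // Environment.OSVersion reports whatever platform the runtime
        // detects, so the identical program announces a different OS
        // depending on where it is run.
        Console.WriteLine("Hello from {0}", Environment.OSVersion.Platform);
    }
}
```

On a Linux box with Mono installed, this could be compiled and run from the shell with something like mcs hello.cs followed by mono hello.exe; the pitch of the Visual Studio add-in is that a Windows developer never has to leave the IDE to get the same result onto a Linux target.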

When I spoke to de Icaza and his colleague Joseph Hill, Mono Product Manager at Novell, I had to ask: is there really a big demand for this kind of tool? After all, the common perception is that Windows developers are happily coding away for their platform of choice, and not interested in cross-platform development. That, it seems, could be a misperception.

de Icaza and Hill pointed out that the beta program for Mono Tools itself seems to disabuse that notion: nearly 4,000 developers signed up to test drive Mono Tools.

“From the demand for the beta,” de Icaza said, “we are seeing a very strong desire to use .NET and target other platforms.”

Mono Tools for Visual Studio is available now, in three editions: Professional Edition (individual) for $99, Enterprise Edition (one developer in an organization) for $249, and Ultimate Edition for $2,499, which provides a limited commercial license to redistribute Mono on Windows, Linux, and Mac OS X and includes five enterprise developer licenses. All product versions include a one-year subscription for product updates.

Visit the Mono Tools for Visual Studio site to learn more and download a free 30-day trial, or visit the Mono Project.

Article Source Andy Updegrove’s Blog
November 5, 2009, 1:58 pm

It’s been more than a month since I last wrote about the CodePlex Foundation, the new open source initiative announced by Microsoft in early September. While things were pretty quiet at the Foundation site for some time, that changed on October 21, when the Foundation posted its new Project Acceptance and Operation Guidelines, a key deliverable that gives insight into a variety of aspects of the Foundation’s developing purpose and philosophy. A “house” interview of Sam Ramji by Paul Gillin was posted a week later.

Surprisingly, though, there was very little pickup on any of this new information until yesterday (perhaps with a little nudging from the PR side of the house), when several stories popped up online, including one at InternetNews.com, and another at ZDNet.com. Each is based on a conversation between Sam Ramji and the reporter (Sean Michael Kerner, at InternetNews, and Dana Blankenhorn, at ZDNet.com)…

Read More

Article Source Community-cation
November 5, 2009, 9:45 am

Net neutrality is an issue that’s been bandied about for a few years now in the US–among several other political hot potatoes you have probably heard about.

Being against net neutrality is one of those arguments that at first glance seems reasonable. The power company, after all, charges us more as we consume more electricity–why shouldn’t Internet carriers be allowed to charge based on data load and regulate traffic accordingly?

There are a lot of reasons why this analogy tends to fail. The most immediate one is that the utility companies have always had this sort of revenue model. There was never the sort of all-you-can-eat model that Internet carriers are using today. Although very early on some Internet providers charged customers based on a tiered data-usage price model, it quickly became the norm to charge not for data flow but for the size of the pipe. Internet carriers can’t charge me more for downloading multiple Linux .iso images, but they can make me pay more if I want to get those images onto my computer faster.

That’s a simple breakdown of the analogy, but the more subtle one is this: the power company creates the electricity they send down the wire to my house, so there’s no question about their right to get paid based on the amount of juice my family uses. The same holds true for the water and gas companies, though they didn’t make the product–they just went out and got it.

But the Internet carriers don’t own the material they’re trafficking, nor did they create it. We do.

Granted, that “we” is a very diverse group: from the media companies to governments to corporations to individuals, data is owned by lots of different entities, or not owned at all, depending on the licenses and copyrights. Even the Internet carriers own some of the data–content they provide for their customers–but not all of it.

This is an important distinction. To continue the analogy, this would be like the water company getting all their water from private wells in everyone’s backyard, then redistributing the water to everyone who needed it, including the owners of the wells. And now they’re asking those well owners to pay for the water they’re taking from the system.

Let’s be clear here: it’s perfectly reasonable for Internet carriers to charge for the infrastructure used to bring the Internet to homes and offices. But to charge on a data-usage model when they didn’t create the data in the first place seems more than a little disingenuous.

How important is the net neutrality issue? Put yourself in a world where you are charged based on the amount of data that comes in and out of your system. Now imagine how free and open Linux development would occur in such a world.

Pretty daunting, wouldn’t you say?

To get an even better idea of how important an Open Internet is, take a look at this video from Jesse Dylan of FreeForm. It will remind you how many things we owe to this notion of net neutrality.

Managing your RSS feeds, once a novelty, has become a task you need to perform almost monthly, as you find interesting new feeds on a daily basis.

There are various tips for handling multiple RSS feeds, including the most common trick of organizing your feeds into prioritized folders or groups (depending on what metaphor your RSS reader uses). Create “Must Read,” “Get to This Week,” and “Recreational” groups, or something similar, and shuffle your feeds around into these groups. That way you can eat the elephant one bite at a time and not feel overloaded when you see that 1000+ entry number in your reader’s status field.

Why the basic advice on RSS organization? Partly as a friendly reminder, and partly because you’re going to want to have your feeds organized when you pull in your favorite Linux.com RSS feeds, now posted on our new RSS Feeds page.

It should be noted that these feeds are not new; most of them have been available on the site for quite some time, in either RSS 2.0 or Atom 1.0 syndicated form. But, until now, these feeds were not listed all in one place, just tagged with the RSS icon in the URL field of your favorite browser. Responding to reader requests, and having recently created a new feed for original Linux.com content, we have put together this master RSS feed page.

There are feeds of every type in this list. There’s the all-content feed, for those who want to see everything, which is balanced by the original-content feed that displays the articles and tutorials found only on Linux.com. If you like to follow the Featured Blogs from Jim Zemlin, Linus Torvalds, and other Linux Foundation staffers, there’s a feed for that, as well as feeds for the Community Blogs and the popular Distribution Blogs.

If you are only interested in specific aspects of the Linux ecosystem, there is likely a feed for you. Every category from Software to Hardware to Enterprise has a separate feed. The subcategories also have their own feeds, so you can follow very specific arenas, such as Mobile Linux or Linux Security.
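For readers who would rather pull headlines into their own tools or scripts, here is a minimal sketch of how one of these feeds might be read programmatically. The feed URL is a placeholder, not an actual Linux.com address, and the example sticks with C# so it could just as easily be compiled and run with Mono on a Linux desktop:

```csharp
// feedtitles.cs -- print the headline of each entry in an RSS 2.0 feed.
// The URL below is a placeholder; swap in any feed address listed on
// the Linux.com RSS Feeds page.
using System;
using System.Xml.Linq;

class FeedTitles
{
    static void Main()
    {
        // Hypothetical feed address -- replace with a real one.
        string url = "http://www.linux.com/rss/example-feed.xml";

        // XDocument.Load accepts a URI and fetches the XML directly.
        XDocument feed = XDocument.Load(url);

        // RSS 2.0 wraps each entry in an <item> element whose <title>
        // child carries the headline.
        foreach (XElement title in feed.Descendants("item").Elements("title"))
            Console.WriteLine(title.Value);
    }
}
```

With Mono, compiling against System.Xml.Linq (for example, gmcs -r:System.Xml.Linq.dll feedtitles.cs) and running the result with mono is all it takes; an Atom 1.0 feed would use namespace-qualified entry and title elements instead of the plain item elements shown here.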

By providing this list, we hope to make it even easier to plug into Linux.com for all of the latest info and analysis about all things Linux.

Article Source Andy Updegrove’s Blog
November 2, 2009, 5:47 am

It’s easy to appreciate the wonders of the Web, and all of the riches that the Internet brings into our lives. All of which makes it easy indeed not to notice the things that tend to slip away, as the collateral damage of progress. Recently, we woke up to the fact that if we don’t care about document formats, our personal and public documents may disappear into a digital Black Hole of no return.

Documents aren’t the only thing that may disappear, though, as we place higher and higher priorities on fast loading speeds and easy formatting. If that’s all we care about, then the aesthetics of the printed word and its visual presentation will disappear forever as well…

Read More

Article Source Andy Updegrove’s Blog
October 27, 2009, 7:22 am

One of the realities that every standards professional must deal with is the sad fact that everyone else in the world thinks that standards are…

[start over; no one else thinks about standards much at all]

Ahem. One of the things that standards folks must come to terms with is the fact that on the rare occasions when anyone else thinks about standards at all, likely as not it’s to observe that standards are…

…boring.

[There. I’ve said it]

Read More

Article Source Linux Weather Forecast Blog
October 26, 2009, 7:14 pm

So I’ve just returned from Tokyo, where I attended the 2009 Kernel Summit and the first ever Japan Linux Symposium. My body clock is expected sometime later this week. It was a tiring but rewarding week, and not just because the sushi was so good.

The Kernel Summit went well. There are not a whole lot of earthshaking decisions to report from there; the real news seems to be that the process is working quite well despite the record pace of change, and that no serious changes are required.

Perhaps the most interesting session was the one on how Google uses Linux, led by Mike Waychison. Mike gave us a much clearer view of how the Linux kernel is employed in Google’s production systems than we have ever had before. It was interesting to see the extreme pressures put on Linux by Google’s workload and the equally extreme responses that Google has had to make.

People in the community often complain that engineers who go to work for Google disappear into a black hole and are never heard from again. At the Summit we got a sense for what goes on in that black hole. We also learned a bit about how much it has cost Google to stray as far from the mainline kernel as it has. There is now an effort underway there to work more with the development community and run something which looks more like a mainline Linux kernel. Google should benefit through lower development and maintenance costs and a better ability to get support from the community. And the community will benefit from Google’s contributions and its experience running massive clusters of Linux systems at the limit of their capabilities. It looks like a winning move for everybody involved.

The Japan Linux Symposium was the first large-scale development conference to be held in that part of the world. It was a great success. Developers from all over East Asia came to present their work and talk with their peers. For the curious, I’ve written up a report from JLS with more information. JLS plans to return next year, and I can only wish it the best success.

It’s worth noting that everything worked exceptionally well for a first-time conference. Our Japanese hosts welcomed everybody and treated them to a well-run event. The Linux Foundation folks did a great job, but that’s just normal for events they work on anymore. Overall, it’s hard to find something to complain about, with the possible exception of the damage done to my wallet in all those Akihabara technology stores. There may be some stress when family finances are next discussed, but that’s life.

There’s been a lot of noise made about the benefits of mobile computing. It will keep us connected, it will make life more convenient, and we can watch movies of dancing Internet kittehs wherever we might be.

Well, maybe not that last option.

I will admit that I am a little skeptical of the mobile computing promise. The skepticism doesn’t come from any sort of ludditry; I have the smartphone and the laptop, too, and I can surf in a coffee shop with the best of them. No, my skepticism comes from the fundamental question: do you want to be so connected?

The basis of this question comes from my own experience thus far: most of the tasks I do in a mobile capacity are, well, for work. Usually it’s checking my e-mail or managing websites via the phone or laptop. If I am traveling, there are a few more personal tasks involved, like watching a DVD on the plane or checking the weather forecast, but there’s usually some work involved, as well.

But, even though I enjoy my job enormously, it is not something I care to do at all hours. A balance must always be struck between work and home, no matter how cool your work is. So, being connected–to me, at least–means shifting that balance more towards the work side of things.

Lately, though, I have begun to re-think this notion. I am finding more uses for my mobile device that are family-oriented. Exploring my smartphone’s app store has led me to a lot of tools I can use for home life, such as a remote DVR app that lets me record TV shows on my home DVR if I forget to do it before leaving the house. Or an app that lets me track a loved one’s flight as they return home. Or the app that lets you find a decent restaurant within walking distance while visiting a major city.

Being leery of mobile computing for work is also a bit of a contradiction for me. After all, I, like many others in the US, actively telecommute. In fact, I telecommute 100 percent of the time, since our offices are in San Francisco while I live in the culturally opposite location of Indiana. Connectivity is how I do my job. This point was driven home when my Internet service provider informed me they were going to be laying fiber in my neighborhood next week and there might be occasional outages. The fact that I knew I could relocate to any number of places to re-establish connectivity (yes, including the coffee shops) set my mind at ease and made me realize that even if you don’t use it all of the time, the availability of mobile computing is of great value.

I’m not alone. According to the Consumer Electronics Association’s October 8 report, “Telework and the Technologies Enabling Work Outside Corporate Walls,” more than 38 million US workers work from home at least once a month.

That portion of the workforce population, 37 percent, is significant in many ways, not the least being that the notion of telecommuting has certainly become mainstream and not some passing fad. It also indicates the enormous potential market for vendors and developers who are looking for some place to ply their trade. There are 38 million people out there who need decent tools to make their work efforts better. And the employers of those 38 million folks are looking for ways to make that possible at a lower cost.

You can see, then, why a Linux option makes increasing sense for such a market. If low-cost netbook and laptop clients can give workers the tools and connectivity they need to get their work done (all the better if those tools are plugged into cloud-based applications), and if smartphones with high-end interfaces like Moblin can make those platforms more viable for remote employees, then Linux stands to be a driving technological force in making this telecommuting marketplace more efficient than ever.

Tools that can make you work smarter, not harder, may actually bring a better balance to your work-home life. That’s the promise that mobile computing with Linux can bring to employees. Doing it at a lower cost, with more stability and security, is the promise for the employers.