Most Linux users use a distribution, not Linux from scratch. If the distro is older than the hardware, they hope to find a simple-to-use update medium with "new drivers to get it working". They also expect the system to continue to run after any system update, and obviously also after system upgrades (to the next distribution release).
So to keep distribution users happy, the distributors together with the system and component vendors test each and every kernel and driver update.
In the past, when the kernel and the drivers were distributed together, that meant three alternatives for systems that need drivers newer than what is on the distribution:
- Pay the distributor to retest all older systems relying on the driver you need and get a driver update in the distro kernel
- Wait for the next distro release (service pack, new release)
- Build your own driver, somehow, unsupported by the distro, and keep your fingers crossed that it still works after kernel updates or distro upgrades...
In essence, the main hindrance to getting Linux widely supported with distributions was the retesting effort of the "one kernel for all" distribution model.
Open Source in Mobile (OSiM) - Barcelona, Sept 18-19
First the distributions made driver distribution independent from kernel distribution: they created "Kernel Module Packages (KMPs)".
Then they looked at how component vendors maintain their drivers. Learning from that, the distributors decided to no longer distribute the one latest and greatest driver version to all systems, but to explicitly recommend a driver snapshot from a specific point in time for specific chipset or system releases. The current KMP also _should_ work, but here is the one that we have tested to work.
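To make this concrete, here is a rough sketch of what a KMP spec file can look like. This is an illustrative fragment only: the package name, version, and paths are made up, the exact macro names vary between distributions (SUSE uses %suse_kernel_module_package, Red Hat %kernel_module_package), and %flavors_to_build and /usr/src/linux-obj follow SUSE conventions:

```
# Hypothetical KMP spec fragment -- name, version, and paths are placeholders.
Name:           example-driver
Version:        1.0
Release:        1
Summary:        Out-of-tree example driver packaged as a KMP
License:        GPL-2.0
Source0:        example-driver-1.0.tar.gz
BuildRequires:  kernel-devel
# Expands into one kernel-module subpackage per kernel flavor and wires up
# the dependency handling that keeps the module usable across kernel updates.
%kernel_module_package

%build
for flavor in %flavors_to_build; do
    make -C /usr/src/linux-obj/%_target_cpu/$flavor M=$PWD modules
done
```

The key point is that one source package yields separately installable, per-kernel-flavor driver packages, which is what decouples driver distribution from kernel distribution.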
This change then made it necessary to extend everything that expects "one driver for all" to also handle context sensitivity: driver installation, driver updates, kernel updates and system upgrades.
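The hardware targeting behind this context sensitivity typically rests on matching a device's modalias string (exposed by the kernel under /sys, e.g. /sys/bus/pci/devices/*/modalias) against glob-like patterns declared by the driver package. A minimal sketch of the matching idea, with entirely made-up device IDs:

```python
from fnmatch import fnmatchcase

# Made-up modalias of a PCI device, in the format the kernel exposes
# (vendor/device/subsystem/class IDs here are fictional).
device_modalias = "pci:v000010DEd00001234sv00001028sd00000001bc02sc00i00"

# Glob-style pattern a driver package might declare: vendor 10DE,
# device 1234, any subsystem, any class.
driver_pattern = "pci:v000010DEd00001234sv*sd*bc*sc*i*"

def driver_matches(modalias: str, pattern: str) -> bool:
    """Case-sensitive shell-style glob match of a modalias string."""
    return fnmatchcase(modalias, pattern)

print(driver_matches(device_modalias, driver_pattern))  # True
```

With patterns like these, a driver package can be offered to exactly the chipsets it was tested on, rather than to every machine running the distro.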
We've started this work group to share infrastructure and technology, to reduce the cost of getting Linux working with a distro. We would rather see that investment go into the number of supported systems, to get Linux as widely distributed as possible.
The challenge is about as old as Linux. So every system vendor created their own solution approach. HP has support packs, Dell invented the great dkms tools, IBM has their solution.
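For comparison, Dell's dkms takes a declarative approach: a small dkms.conf describes how to rebuild the module for whatever kernel is installed. A hypothetical example, with module name and version made up:

```
# Hypothetical dkms.conf for an out-of-tree module named "example".
PACKAGE_NAME="example"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="example"
DEST_MODULE_LOCATION[0]="/updates"
# Rebuild the module automatically when a new kernel is installed.
AUTOINSTALL="yes"
```

The per-vendor tools differ in detail, but they all solve the same problem: rebuilding or reinstalling a driver against a kernel it was not shipped with.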
Now Novell has started these KMPs, RH supports them too, and Debian-based distros are watching what's going on.
That gives one additional solution per distro for each system vendor.
This workgroup now defines a framework that shares tools and infrastructure and allows vendors to focus on hardware and upstream drivers and backporting, while the Driver Backport Distribution provides common means to get that to users on their distributions.
Because of the retesting effort. Yes, the current mainline _should_ not break anything. Reality bites.
So there is no way around retesting, which given the number of systems and components comes at prohibitive cost. That cost is the real reason why distros prefer to stick with their kernel and only carefully add backported security fixes.
As the name says, the Driver Backport Distribution infrastructure is meant for backports of the upstream driver, and we strongly recommend getting your driver upstream:
- When upstream, you get API changes fixed by the community
- Your code is reviewed
- The driver is in every distribution
- It's getting community testing, feedback and possibly contributions
Yes. We actually see this as an advantage: there are drivers that are not upstream yet, yet work well enough even for mission critical production environments.
There are Linux users who would like to use these drivers, and there are Linux distributors who'd like to make that easy for them. So we believe that, for the objective of getting Linux as widespread as possible, it's A Good Thing that Driver Backport Distribution also works for drivers that are still on their way into the kernel.
What about the drivers that claim to be "independent work"?
No doubt, the Driver Backporting infrastructure can also be used for those. Nothing in the technology or the infrastructure prevents that.
So we had long arguments about whether we should really do this at all. For example, we pondered whether making a driver widespread would put pressure on vendors to look at the value of being open source, e.g. getting their API usage fixed when it changes, or getting community review of whether they use the kernel The Right Way.
That said, in our view the benefits of having a well-defined Driver Backport Distribution mechanism by far outweigh the risk of slightly easing the use of a few debatable drivers. That is why we do it.
If you did maintenance on KMPs the same way you do it on the kernel, then definitely yes.
So with KMPs, you simply get the issue fixed in the current driver version upstream, and then you distribute that to exactly those machine types that need the fix. That even confines retesting to exactly those machines that get new code, which actually means less maintenance effort.
Vendors can make "upstream integration" a certification requirement. There have to be good reasons for a not-yet-upstream driver to be certified.
Vendors can help educate component vendors and system vendors on the benefits of being upstream, and they can teach and assist in adjusting the code so it actually gets upstream.