On Monday, September 9, 2013 1:56:12 PM UTC-6, rh wrote:
>
>
> Seems like a really bad idea but the switch to device tree threw a wrench 
> into 
> things.  But does it not go against the whole notion of the linux kernel? 
> The end user benefits from being able to grab the latest kernel and have 
> things 
> work.  So what's TI's problem that they have to maintain their own kernel? 
> Resource constraints? Seems doubtful. It would take far fewer resources 
> to maintain code in the current kernel as you have the masses to help 
> find problems and fix them too. 
>
>
The device tree thing is basically a mandate from Linus himself that the 
ARM developer community "get their act together" and quit exploding the 
kernel with unique arch trees for every SoC under the sun. The idea is to 
be able to build a single kernel that will boot a wide variety of SoCs 
within a family, with all the messy details of clocks, voltages, 
configuration registers, etc. abstracted into what amounts to a fancy 
configuration file. Device Tree is not a new thing; it's been around for a 
very long time in non-ARM architectures. However, SoCs are *far* more 
varied in architecture than your average PC, which is the world Linus 
comes from; furthermore, a lot of the really complex boot-related stuff 
in a PC is already taken care of by the BIOS, so it doesn't have to be 
included in the kernel at all.
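
To give a flavor of what that "fancy configuration file" looks like, here 
is a hypothetical device tree fragment. All node names, labels, and values 
below are made up for illustration; they aren't taken from any real TI 
board file:

```dts
/* Enable a serial port that the SoC's base .dtsi left disabled.
 * (&uart0 refers to a node labeled uart0 in an included file.) */
&uart0 {
        status = "okay";
        clock-frequency = <48000000>;
};

/* Describe a board LED so the generic gpio-leds driver can bind to it. */
/ {
        leds {
                compatible = "gpio-leds";
                usr0 {
                        label = "usr0";
                        gpios = <&gpio1 21 0>;   /* controller, pin, flags */
                };
        };
};
```

The kernel driver never hard-codes "pin 21 on gpio1"; it just asks the 
device tree at boot, which is what lets one kernel binary serve many boards.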

Another thing to keep in mind is that the Linux kernel, unlike 
traditionally developed kernels such as the BSDs or various proprietary 
ones, is highly chaotic in its internal interfaces and always has been.  If you 
write a device driver for BSD or for Windows, you have a pretty reasonable 
guarantee that the APIs your driver relies on will be there for a while, so 
you could distribute a binary driver without worrying that it will break 
next month, or that you will have to completely re-architect it next year.  
Not so with the Linux kernel; in order to have any hope of an ongoing 
up-to-date driver, you either have to have some full-time kernel developers 
who keep track of how the kernel is changing and keep your driver in sync, 
or you have to get your driver merged into the kernel, which is not exactly 
an easy feat for someone who doesn't know the ropes yet.  Again, it's 
always been this way, and Linus purposefully chose to avoid stabilizing 
internal kernel interfaces in order to keep kernel development from 
getting stuck in backward-compatibility hell.  It also encourages 
open-source contribution, which has always been an important part of Linux.

And as for maintaining their own kernels, it's not really true that anyone 
maintains their own truly distinct kernel.  In one sense, there's just one 
fuzzy Linux kernel development cloud with patches moving between people in 
a buzz of controlled chaos, and in another sense there are more distinct 
kernels than you could hope to count.  Just take a look at 
http://git.kernel.org/ and see how many different kernels there are there. 
There are a few major ones; the big one being the torvalds kernel that is 
the canonical one, but there are a small army of "lieutenant" maintainers 
that are in charge of some subsystem and make sure that all the patches for 
their subsystem (or other division of responsibility) are reviewed and up 
to standards. When it's time for a new release, they work with Linus to get 
the changes in their kernels that are ready for release into the main 
kernel, while continuing to incubate the changes that aren't ready or 
otherwise acceptable yet.  Sometimes, there are "feature branches" that 
maintain some alternative functionality (such as different schedulers or 
new niche architectures) for long periods of time, continually merging 
changes from the main kernel to keep theirs up to date.
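
That "continually merging" workflow is just ordinary git branch 
maintenance. As a rough sketch, using a throwaway local repository in 
place of the real kernel trees (every path and branch name below is made 
up for illustration, not TI's actual setup):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the mainline (torvalds) tree
git init -q -b master mainline
git -C mainline -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "mainline v1"

# A vendor clones mainline and starts a feature branch for its new SoC
git clone -q mainline vendor
cd vendor
git checkout -q -b my-soc-support
git -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "add SoC driver"

# Meanwhile, mainline keeps moving...
git -C ../mainline -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "mainline v2"

# ...so the vendor periodically merges mainline back into its branch
git fetch -q origin
git -c user.email=a@b -c user.name=a merge -q --no-edit origin/master

# The branch history now contains both mainline and vendor work
git log --oneline
```

Scaled up across hundreds of patches and many branches, this is the 
"controlled chaos" of kernels flowing between maintainers.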

All of this development is very fast-paced and decentralized, so the only 
sane way for someone like TI to get Linux support for their new silicon 
into a kernel for their customers is to maintain a branch of the kernel at 
some level in which they can do work on new functionality in a somewhat 
stable code environment.  For their ARM stuff, they collaborate with other 
ARM vendors through Linaro on the base ARM functionality, and then work on 
common platform functionality across product families in kernel branches 
like linux-omap.  Somewhere along here, the consumers of the TI platforms 
that want to run Android on them collaborate with TI and Google through 
sites like omapzoom, where TI feature branches, Google's Android changes, 
and Linaro and Ubuntu work get mixed together in various ways.

Anyway, hopefully this makes it a bit clearer why things are the way they 
are in the ARM Linux world.  With new SoCs coming out all the time with 
various incompatibilities, and the difficulties of combining 
closed-licensed pieces with rapidly moving open source basics, it's hardly 
surprising that TI has a hard time meeting its software commitments to the 
Beagle community.  It's also not surprising that the hobbyist community has 
had a difficult time making much progress without support from TI on the 
pieces closest to the closed-license bits. Believe me, I share the 
frustration you feel!

-- 
For more options, visit http://beagleboard.org/discuss
--- 
You received this message because you are subscribed to the Google Groups 
"BeagleBoard" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/groups/opt_out.