On Wednesday, 20 March 2013 16:13:19 UTC+1, Kristopher Micinski wrote:
>
> I understand the management issue, that's very real, but can you give 
> any ideas as to why maintaining a single codebase would be difficult?


Distributed development on a single code base is always difficult, even 
within the same company.
More so with several - competing! - companies.

Imagine a kernel-internal structural change that would affect several 
architectures. Every single supporter / implementer needs to be on board, 
or you will have to support two different versions of your interface - the 
old one and the new one - for a considerable time. (Google for "Big Kernel 
Lock" to see an example.)
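
To make that concrete, here is a minimal C sketch (the names are invented, 
not actual kernel code) of what carrying two interface versions side by 
side looks like:

    /*
     * Hypothetical illustration (invented names): a kernel-internal ops
     * structure grows a new callback. Until every architecture's
     * implementation is converted, core code must support both variants.
     */
    struct foo_device;

    struct foo_ops {
            /* old interface: kept alive for unconverted architectures */
            int (*start)(void);
            /* new interface: takes a device context; NULL until converted */
            int (*start_dev)(struct foo_device *dev);
    };

    static int foo_start(struct foo_ops *ops, struct foo_device *dev)
    {
            if (ops->start_dev)             /* this arch is converted */
                    return ops->start_dev(dev);
            return ops->start();            /* legacy path, to be removed */
    }

Every caller and every implementer has to agree on when the legacy path 
finally dies - which is exactly the coordination problem described above.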

Then a lot of SoCs contain proprietary technology. E.g. for the PandaBoard 
you always need to hunt down the correct EGL / OpenGL ES 2.0 graphics 
driver blobs (PowerVR code from Imagination Technologies Limited) that 
exactly match your Linux kernel version, or it won't properly boot into the 
GUI.


Assume you do some development on a multi-architecture kernel, and you 
change something. Theoretically, before you can commit your changes, you 
need to build and test them for every potentially affected target. That is 
a LOT of targets - for ARM alone, well over 1000 registered machine types. 
In practice, probably nobody can really do this. (Yes, modularization 
helps. A lot! Unless, e.g., you need to change the module interface 
itself.)

How do you want to handle these problems for a common code base, spanning 
multiple architectures, and affecting lots of different competing hardware 
vendors?

 

> Because the configurations are different, and include specific mods? 
>
 
Partially. It is also a business / historical development-process issue:

The quick-and-dirty way of supporting an SoC (or a specific evaluation 
board, aka "machine") is to start with a kernel that already roughly does 
what you need, and then to "just" change the things that need to be done 
differently. 
Hey presto! It works! Now on to the next project! 

Or - if you really have the time - you can now do some nifty special 
optimizations, so your code runs especially fast on your platform, giving 
you a business advantage.

This scheme minimizes the time spent by the individual developer to support 
her specific SoC or eval board.


From a "grand view" perspective, it is of course disastrous: this 
development style effectively prevents you from ever having a really 
"unified" kernel.

Instead of using an existing, well-maintained kernel driver (module) and 
just setting the correct parameters, or carefully augmenting existing code 
with a new parameter setting, you just reinvent the wheel again. And again.

For the ARM architecture this got so bad that two years ago Linus Torvalds 
himself brought up the issue. 
See [http://thread.gmane.org/gmane.linux.kernel/1114495/focus=55060] for 
the whole thread. The key mail is also here: 
[https://lkml.org/lkml/2011/3/30/834]. 
Linaro has since taken on the problem of consolidating ARM kernel support. 
(And they seem to have done marvellous work.)

For ARM, even the old pre-device-tree scheme works like this: you can build 
one kernel for multiple machines. The boot loader must pass a parameter to 
it during boot (a "machine type" number in a register - R1, per the ARM 
boot protocol). The initialization routines registered for that machine are 
then called. If the kernel does not find the passed-in number in its 
compiled-in list of supported machine types, it just stays in an endless 
loop (to avoid doing any harm).
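
As a rough sketch (simplified for illustration; the real code lives under 
arch/arm and differs in detail), the matching works conceptually like this:

    /*
     * Simplified sketch of the pre-device-tree ARM machine matching.
     * The boot loader leaves the machine type number in register R1;
     * the kernel scans its compiled-in table of machine descriptors.
     */
    struct machine_desc {
            unsigned int nr;               /* registered machine type number */
            const char *name;              /* human-readable board name */
            void (*init_machine)(void);    /* board-specific init hook */
    };

    /* table of all machine descriptors compiled into this kernel */
    extern struct machine_desc __arch_info_begin[], __arch_info_end[];

    static struct machine_desc *lookup_machine_type(unsigned int nr)
    {
            struct machine_desc *p;

            for (p = __arch_info_begin; p < __arch_info_end; p++)
                    if (p->nr == nr)
                            return p;      /* found: init_machine() runs later */

            while (1)
                    ;                      /* unknown machine: hang, do no harm */
    }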

The ARM kernel branch has since moved towards "device trees": the boot 
loader (e.g. U-Boot) passes a data structure to the kernel that contains 
configuration information for the available devices. If you have a new SoC 
that only re-uses a lot of existing peripheral blocks (just with different 
addresses, etc.), then you don't need to build a new kernel for it at all: 
just generate the "device tree" for it and have the kernel load it on boot.
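
As an illustration, here is a minimal platform driver sketch that binds via 
the device tree (the "acme,uart" device and all its details are invented 
for the example; the of_match_table mechanism is the standard kernel one):

    /*
     * Minimal device-tree-aware platform driver sketch. The hardware and
     * the "acme,uart" compatible string are invented for illustration.
     * A matching device tree node might look like:
     *
     *     uart0: serial@48020000 {
     *             compatible = "acme,uart";
     *             reg = <0x48020000 0x100>;
     *     };
     *
     * The same driver then serves any SoC whose device tree describes a
     * compatible node, regardless of where the block sits in memory.
     */
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/of.h>

    static int acme_uart_probe(struct platform_device *pdev)
    {
            /* the register window comes from the node's "reg" property */
            struct resource *res =
                    platform_get_resource(pdev, IORESOURCE_MEM, 0);

            dev_info(&pdev->dev, "probed at %pR\n", res);
            return 0;
    }

    static const struct of_device_id acme_uart_of_match[] = {
            { .compatible = "acme,uart" },
            { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, acme_uart_of_match);

    static struct platform_driver acme_uart_driver = {
            .probe  = acme_uart_probe,
            .driver = {
                    .name = "acme-uart",
                    .of_match_table = acme_uart_of_match,
            },
    };
    module_platform_driver(acme_uart_driver);

    MODULE_LICENSE("GPL");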

 

> My idea was that most vendors took kernels, added a few modules, 
> changed the configuration, and that was that. 
>

In principle this is what they do. But - as stated - don't forget the time 
pressure on kernels for new devices or SoCs. 
There is also a personal factor for the individual developer (wherever s/he 
sits - in America, Europe, China or India): 
doing "the right thing" takes more time - you need to analyze and 
understand an existing driver to be able to carefully extend it. While you 
analyze and understand, you don't write code, so your "SLOC count" goes 
down and you get paid less, or eventually face some nasty questions from 
your boss about your efficiency. 
Just copying the whole kernel or driver and twiddling what needs to be 
changed, or writing a simple driver from scratch, is much faster. So even 
large companies fall into that trap. [Example withheld. :-)]


As you see, it's not so much a matter of technology as of business / 
development / management interests and knowledge scope. What you want is 
possible, already today. The question is: is it feasible? 
If your "buying power" is large enough, you can likely convince the 
hardware vendors to do really good and clean kernel development. 

Otherwise, the best you can do is to invest some time and energy and try to 
help "clean up the mess". If you employ developers on your project, make it 
clear to them that "clean integration" is what you actually want.

Happy hacking!
