On Sat 2018-03-17 @ 07:23:52 AM, Philip Balister wrote:
> > Would anyone like to make an honest, unbiased attempt at answering:
> > 1. What problem(s) was the post-OE-classic split attempting to solve?
> 
> The all-in-one approach is untestable.

Thank you.

I was able to find the old OE-classic[1]:

        $ cd OE-classic
        $ find . -name "*.bb" -print | wc -l
        7853

        $ cd meta-openembedded
        $ find . -name "*.bb" -print | wc -l
        1477

        $ cd openembedded-core
        $ find . -name "*.bb" -print | wc -l
        837

Wow!
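
(A similarly rough check, counting layers rather than recipes: every layer
ships a conf/layer.conf, so something like

        $ find . -name "layer.conf" | wc -l

run in each repository would show how many layers the split produced. I
haven't run it here, so treat it as a sketch.)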

> > 2. Did it work? Can it be said that the problem(s) the OE-split was
> >    attempting to solve have actually been solved by the split? (and, if new
> >    problems arose as a result of this split, were they small and manageable
> >    relative to the pre-split problems?)
> > 
> 
> OE-core is well tested, and that has taken a lot of resources. There are
> no dedicated resources to test much beyond this. The real problem is how
> to get resources to test layers beyond oe-core, while also maintaining
> the testing of oe-core.

It sounds to me as though the goal-posts for what to include in a layer are
set not by what logically hangs together, but by how much can be tested; the
"capacity that can be tested" becomes the governing metric. So we should stop
making passionate arguments about what belongs together in a layer, and start
talking concretely about resources and capacity.

If we determine that X recipes can be tested in an hour, and that we only
want to test for Y hours, then no layer should have more than Y*X recipes.
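
To make that concrete with made-up numbers (purely illustrative, not a
proposal): if X = 200 recipes can be tested per hour and we budget Y = 4
hours per run, then the cap is

        Y * X = 4 * 200 = 800 recipes per layer

and, by the same yardstick, OE-classic's 7853 recipes would have needed
roughly 7853 / 200 = 39+ hours per run.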

And if that means that it will now take 30 layers to build anything useful,
then so be it. The problem is, you might be the first and only person to ever
try to put together those specific 30 layers in a given way. So although the
project will be able to say "these 30 layers are tested amazingly well...
individually" it's anyone's guess what you'll end up with when you put all of
them together.
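
To see why, recall how layers are combined in practice: every layer has to
be listed in the build's conf/bblayers.conf. A 30-layer project would look
something like this (the paths and the vendor layer name are hypothetical,
just to show the shape):

        BBLAYERS ?= " \
            /build/sources/openembedded-core/meta \
            /build/sources/meta-openembedded/meta-oe \
            /build/sources/meta-openembedded/meta-networking \
            /build/sources/meta-openembedded/meta-python \
            /build/sources/meta-myvendor-bsp \
            ...and 25 more... \
        "

No CI anywhere has tested that exact combination; only its pieces.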

The argument has been that smaller layers can be more easily tested, which
gives us the feeling that the quality is going up. And that might be true on
an individual layer's basis. But it's also been my experience that as the
number of layers needed to build a specific project goes up, the quality
tends to go down (and the complexity goes up).

Why not look to other projects for inspiration? Take the Linux kernel: it's
a huge project! Does anyone promise that every possible combination of
configurations is tested and verified before each release (or that they even
compile cleanly)? Although the parts of the kernel have their own trees and
mailing lists, at the end of the day, it's shipped as one kernel in one
repository.


[1] which isn't easy, thanks to https://www.oeclassic.com/