Spring Zhang <[email protected]> writes:

> On 18 May 2012 13:04, Michael Hudson-Doyle <[email protected]> wrote:
>
>> Multiple conversations over the last week have convinced me that
>> lava-test, as it currently is, is not well suited to the way LAVA is
>> changing.
>>
>> I should say that I'm writing this email more to start us thinking
>> about where we're going rather than any immediate plans to start
>> coding.
>>
>> The fundamental problem is that it runs solely on the device under
>> test (DUT).  This has two problems:
>>
>>  1) It seems ill-suited to tests where not all of the data
>>    produced by the test originates from the device being tested
>>    (think power measurement or hdmi capture here).
>>
>>  2) We do too much work on the DUT.  As Zygmunt can tell you, just
>>    installing lava-test on a fast model is quite a trial; doing the
>>    test result parsing and bundle formatting there is just silly.
>>
> I agree about the test result stuff, but if we move it to the host side,
> the results still need collecting and parsing there. I think we can discuss
> a more efficient way of collecting results, but I have no good idea here.
>
> We could enable a result collection and parsing extension; for tests whose
> results arrive out of order, we could use a dumb one that collects all the
> output logs, does no analysis, and just dumps them into the test result.

Yes, I think to start with we should just ship the entire test output
from the DUT to the host and parse it there.
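
One way that dumb host-side pass could look, as a minimal Python sketch
(the start/end delimiters the on-board runner would print around each
test are invented purely for illustration):

    import re

    # Hypothetical delimiters the on-board runner would print around
    # each test, so the host can split the combined log afterwards.
    START = re.compile(r'^<LAVA_TEST_START (?P<test>\S+)>$')
    END = re.compile(r'^<LAVA_TEST_END (?P<test>\S+)>$')

    def split_log(log_lines):
        # Split the raw serial/ssh log into per-test chunks, host-side.
        current, chunk, chunks = None, [], {}
        for line in log_lines:
            m = START.match(line)
            if m:
                current, chunk = m.group('test'), []
                continue
            m = END.match(line)
            if m and current == m.group('test'):
                chunks[current], current = chunk, None
                continue
            if current is not None:
                chunk.append(line)
        return chunks

Each chunk could then be fed to that test's existing parse pattern,
entirely on the host.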

>> I think that both of these things suggest that the 'brains' of the
>> test running process should run on the host side, somewhat as
>> lava-android-test does already.
>>
>> Surprisingly enough, I don't think this necessarily requires changing
>> much at all about how we specify the tests.  At the end of the day, a
>> test definition defines a bunch of shell commands to run, and we could
>> move to a model where lava-test sends these to the board[1] to be
>> executed rather than running them through os.system or whatever it
>> runs now (parsing is different I guess, but if we can get the output
>> onto the host, we can just run parsing there).
>>
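
(To make that concrete: the host-side loop might be little more than
the sketch below, where 'connection' stands in for whatever
pexpect-over-serial object the dispatcher already holds; prompt and
error handling are hand-waved here.)

    # Sketch: run each test step on the board rather than through
    # os.system locally.  'connection' is assumed to be a pexpect
    # spawn attached to the DUT's serial console.

    def run_steps_on_board(connection, steps, prompt='# ', timeout=300):
        output = []
        for step in steps:
            connection.sendline(step)
            connection.expect(prompt, timeout=timeout)
            output.append(connection.before)  # output up to the next prompt
        return '\n'.join(output)  # shipped to the host for parsing
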
>> To actually solve the problems of 1 and 2 above though we will want
>> some extensions I think.
>>
>> For point 1, we clearly need some way to specify how to get the data
>> from the other data source.  I don't have any bright ideas here :-)
>>
>> On the theme of point 2, if we can specify installation in a more
>> declarative way than "run these shell commands", there is a chance we
>> can perform some of these steps on the host -- for example, stream
>> installation could really just drop a pre-compiled binary at a
>> particular location on the testrootfs before flashing it to the SD
>> card.  Tests can already depend on Debian packages being installed,
>> which I guess is a particular case of this (and "apt-get install"
>> usually works fine when chrooted into an armel or armhf rootfs with
>> qemu-arm-static in the right place).
>>
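
(For the install-on-the-host case, the chrooted apt-get step could look
roughly like the following -- a sketch only, assuming the unpacked
testrootfs already has qemu-arm-static copied into its usr/bin:)

    import subprocess

    def install_deps_in_rootfs(rootfs_dir, packages):
        # apt-get runs inside the unpacked ARM rootfs via the
        # qemu-arm-static binary already present there, so no board or
        # fast model is involved at all.
        subprocess.check_call(['chroot', rootfs_dir, 'apt-get', 'update'])
        subprocess.check_call(
            ['chroot', rootfs_dir, 'apt-get', 'install', '-y']
            + list(packages))
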
>> We might want to take different approaches for different backends --
>> for example, running the install steps on real hardware might not be
>> any slower, and certainly parallelizes better, than running them on the
>> host via qemu, but the same is emphatically not the case for fast
>> models.
>>
> Does QEMU emulation work for all platforms? AFAIK it has full support for
> beagle/panda, but not for other platforms.

No, but I think the sort of things that are done during test
installation -- installing a package from a PPA, compiling a C file --
could be run just as well under QEMU's beagle emulation as on something
more like the DUT itself.  But it's something to keep in mind, for sure.

>>
>> Comments?  Thoughts?
>>
>> Cheers,
>> mwh
>>
>> [1] One way of doing this would be to create (on the testrootfs) a
>>    shell script that runs all the tests and an upstart job that runs
>>    the tests on boot -- this would avoid depending on a reliable
>>    network or serial console in the test image (although producing
>>    output on the serial console would still be useful for people
>>    watching the job).
>>
> I think a stable network is necessary, at least in the test case deployment step.

Yes, for sure.  We've had this goal to run tests without depending on a
working network in the test image, but I don't know how important it is
to stick to that -- Android tests require a network and it doesn't seem to
cause massive problems there...
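
For reference, the no-network variant from footnote [1] could amount to
dropping a file like this into the testrootfs before flashing; the job
name, paths and script below are all invented for the sake of the sketch:

    import os

    # Hypothetical upstart job: run the pre-generated test script once
    # the system is up, copying output to the serial console so people
    # watching the job still see something.
    UPSTART_JOB = """\
    start on runlevel [2345]
    task
    script
        /lava/run-tests.sh > /dev/console 2>&1
    end script
    """

    def install_boot_job(testrootfs):
        path = os.path.join(testrootfs, 'etc/init/lava-test-runner.conf')
        with open(path, 'w') as f:
            f.write(UPSTART_JOB)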

Cheers,
mwh
