Fathi Boudra <[email protected]> writes:

> On 18 May 2012 08:04, Michael Hudson-Doyle <[email protected]> wrote:
>> Multiple conversations over the last week have convinced me that
>> lava-test, as it currently is, is not well suited to the way LAVA is
>> changing.
>>
>> I should say that I'm writing this email more to start us thinking
>> about where we're going rather than any immediate plans to start
>> coding.
>>
>> The fundamental problem is that it runs solely on the device under
>> test (DUT). This has two problems:
>>
>> 1) It seems ill-suited to tests where not all of the data
>>    produced by the test originates from the device being tested
>>    (think power measurement or hdmi capture here).
>>
>> 2) We do too much work on the DUT. As Zygmunt can tell you, just
>>    installing lava-test on a fast model is quite a trial; doing the
>>    test result parsing and bundle formatting there is just silly.
>>
>> I think that both of these things suggest that the 'brains' of the
>> test running process should run on the host side, somewhat as
>> lava-android-test does already.
>>
>> Surprisingly enough, I don't think this necessarily requires changing
>> much at all about how we specify the tests. At the end of the day, a
>> test definition defines a bunch of shell commands to run, and we could
>> move to a model where lava-test sends these to the board[1] to be
>> executed rather than running them through os.system or whatever it
>> uses now (parsing is different, I guess, but if we can get the output
>> onto the host, we can just run parsing there).
>>
>> To actually solve problems 1 and 2 above, though, we will want
>> some extensions, I think.
>>
>> For point 1, we clearly need some way to specify how to get the data
>> from the other data source. I don't have any bright ideas here :-)
>
> Getting data from an external device (and not only the DUT) isn't the
> only problem. It will be an interesting discussion at Connect.
> We'll have to run tests with lava-test to change the workload on the
> DUT and synchronize the data acquisition device, to observe what's
> happening from the hdmi/power point of view with regard to the tests'
> code path.
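The "move parsing to the host" idea from the original mail could look roughly like this: the host captures the raw output from the board (over serial or ssh) and does all the pattern matching and bundle formatting itself, so the DUT only has to echo result lines. A minimal sketch, assuming an illustrative result-line format and bundle shape (the real lava-test parse patterns and bundle schema are different; `parse_output` is a hypothetical helper, not existing API):

```python
import re

# Illustrative result pattern -- real lava-test definitions carry their
# own parser patterns; this stand-in matches lines like "stream-copy: pass".
RESULT_PATTERN = re.compile(r"^(?P<test_case_id>\S+):\s*(?P<result>pass|fail)$")

def parse_output(raw_output):
    """Parse raw test output captured from the DUT, on the host side.

    The board only produces plain text on its console; the regex work
    and bundle assembly happen here, off the board.
    """
    results = []
    for line in raw_output.splitlines():
        match = RESULT_PATTERN.match(line.strip())
        if match:
            results.append(match.groupdict())
    return {"test_results": results}

# Example: output as it might be read back from the serial console,
# including unrelated console noise that the parser should skip.
captured = """\
stream-copy: pass
stream-scale: fail
random boot noise that matches nothing
"""
bundle = parse_output(captured)
```

The point of the sketch is that nothing here needs to run on the board at all: the same function works whether the output arrived via ssh, the serial console, or a log file pulled off the fast model.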
Sure. But I was only talking about the specification of tests here,
which seems like one of the things that needs to be thought about
soonest, because it's such a pain for everyone if we need to change it.

>> In the theme of point 2, if we can specify installation in a more
>> declarative way than "run these shell commands", there is a chance we
>> can perform some of these steps on the host -- for example, stream
>> installation could really just drop a pre-compiled binary at a
>> particular location on the testrootfs before flashing it to the SD
>> card. Tests can already depend on Debian packages being installed,
>> which I guess is a particular case of this (and "apt-get install"
>> usually works fine when chrooted into an armel or armhf rootfs with
>> qemu-arm-static in the right place).
>>
>> We might want to take different approaches for different backends --
>> for example, running the install steps on real hardware might not be
>> any slower, and certainly parallelizes better, than running them on
>> the host via qemu, but the same is emphatically not the case for fast
>> models.
>>
>> Comments? Thoughts?
>
> The main issue is that lava-test is more than a test runner.
> It's causing performance issues, as we do computation on the DUT.
> Parsing and compiling are the main bottlenecks.
> +1 to move the parsing to the host
> +1 to use pre-compiled binaries when possible
>
>> Cheers,
>> mwh
>>
>> [1] One way of doing this would be to create (on the testrootfs) a
>>     shell script that runs all the tests and an upstart job that
>>     runs the tests on boot
>
> It should be flexible and not tied to Ubuntu images. This is our use
> case, but we may have to test other OSes that don't use upstart.

Well, sure. I think all the OSes we care about (except possibly
Android, which is already in a happier place here) support running a
shell script at boot somehow or other...
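The "drop a pre-compiled binary onto the testrootfs" step quoted above is simple enough to sketch concretely. Everything below is an assumption for illustration: `stage_binary` is a hypothetical helper, and the paths are made up; the real lava-test install steps work differently today.

```python
import os
import shutil
import tempfile

def stage_binary(rootfs_dir, prebuilt_binary, target_path):
    """Copy a host-built test binary into an unpacked testrootfs.

    This happens on the host, before the image is flashed to the SD
    card, so the DUT never has to compile anything. target_path is
    where the test expects the binary at run time, e.g.
    /usr/local/bin/stream.
    """
    dest = os.path.join(rootfs_dir, target_path.lstrip("/"))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.copy2(prebuilt_binary, dest)
    os.chmod(dest, 0o755)  # make sure it is executable on the board
    return dest

# Demo with temporary directories standing in for a build tree and an
# unpacked testrootfs; the file contents are placeholder bytes, not a
# real cross-compiled binary.
build_dir = tempfile.mkdtemp()
rootfs = tempfile.mkdtemp()
src = os.path.join(build_dir, "stream")
with open(src, "wb") as f:
    f.write(b"\x7fELF placeholder")
staged = stage_binary(rootfs, src, "/usr/local/bin/stream")
```

A declarative install section could then list (target path, prebuilt artifact) pairs, and the dispatcher could decide per backend whether to stage them like this on the host or fall back to running build commands on real hardware.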
>> -- this would avoid depending on a reliable
>> network or serial console in the test image (although producing
>> output on the serial console would still be useful for people
>> watching the job).

Cheers,
mwh
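Footnote [1] -- generating a run-everything shell script plus a boot hook on the testrootfs -- could be sketched as two small template functions. The file names and the upstart stanzas below are illustrative assumptions, not an existing lava-test mechanism; and, per Fathi's portability point, the upstart job is just one possible boot hook (a SysV init.d script or an rc.local line would render the same script runnable on non-Ubuntu images):

```python
def make_run_script(commands):
    """Render the shell script that runs all the test commands at boot.

    commands is the flat list of shell commands taken from the test
    definitions; the script stops at the first failure.
    """
    lines = ["#!/bin/sh", "set -e"] + list(commands)
    return "\n".join(lines) + "\n"

def make_upstart_job(script_path):
    """Render a minimal upstart job that runs the script once at boot.

    Illustrative only: job name, trigger event, and path are assumptions.
    """
    return (
        "# /etc/init/lava-test-runner.conf (illustrative name)\n"
        "start on stopped rc RUNLEVEL=[2345]\n"
        "task\n"
        "exec %s\n" % script_path
    )

script = make_run_script(["echo 'starting tests'", "/usr/local/bin/stream"])
job = make_upstart_job("/lava/run-tests.sh")
```

The host would write both files into the testrootfs before flashing, then simply watch the serial console for the output, with no dependency on a working network or an interactive console in the test image.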
_______________________________________________
linaro-validation mailing list
[email protected]
http://lists.linaro.org/mailman/listinfo/linaro-validation
