I was recently making some additions to the lava-test-shell code and in
the process more or less had to write some documentation on how things
work. Here is what I wrote (actually what I wrote includes stuff about
test result attachments but that's not in trunk yet):
# LAVA Test Shell implementation details
# ======================================
#
# The idea of lava-test-shell is that a YAML test definition is "compiled" into
# a job that is run when the device under test boots; the output of this job is
# then retrieved, analyzed and turned into a bundle of results.
#
# In practice, this means that a hierarchy of directories and files is created
# during test installation, a sub-hierarchy is created during execution to
# hold the results, and the whole lot is poked at on the host during analysis.
#
# On Ubuntu and OpenEmbedded, the hierarchy is rooted at /lava. / is mounted
# read-only on Android, so there we root the hierarchy at /data/lava. I'll
# assume Ubuntu paths from here for simplicity.
#
# The directory tree that is created during installation looks like this:
#
# /lava/
#    bin/                    This directory is put on the path when the
#                            test code is running -- these binaries can
#                            be viewed as a sort of device-side "API"
#                            for test authors.
#       lava-test-runner     The job that runs the tests on boot.
#       lava-test-shell      A helper to run a test suite.
#    tests/
#       ${IDX}_${TEST_ID}/   One directory per test to be executed.
#          testdef.yml       The test definition.
#          install.sh        The install steps.
#          run.sh            The run steps.
#          [repos]           The test definition can specify bzr or git
#                            repositories to clone into this directory.
#
# In addition, a file /etc/lava-test-runner.conf is created containing the
# names of the directories in /lava/tests/ to execute.
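#
# For example, for a job with two tests the file might contain something
# like this (the test names here are made up purely for illustration):
#
#    0_stream
#    1_posixtest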
#
# During execution, the following files are created:
#
# /lava/
#    results/
#       cpuinfo.txt          Hardware info.
#       meminfo.txt          Ditto.
#       build.txt            Software info.
#       pkgs.txt             Ditto.
#       ${IDX}_${TEST_ID}-${TIMESTAMP}/
#          testdef.yml       Attached to the test run in the bundle for
#                            archival purposes.
#          install.sh        Ditto.
#          run.sh            Ditto.
#          stdout.log        The standard output of run.sh.
#          stderr.log        The standard error of run.sh (actually not
#                            created currently).
#          return_code       The exit code of run.sh.
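#
#    Conceptually, the execution of each test boils down to something
#    like the following shell (a sketch of the behaviour described
#    above, not the actual lava-test-shell code; note that stderr is
#    not captured yet):
#
#       ./run.sh > stdout.log
#       echo $? > return_code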
#
# After the test run has completed, the /lava/results directory is pulled over
# to the host and turned into a bundle for submission to the dashboard.
Now clearly the work Senthil is doing on getting testdefs from a repo is
going to change some of the files that get created during installation.
(I had envisioned that the repo the job specifies would be cloned to
/lava/tests/${IDX}_${TEST_ID}, but that might not be completely safe --
if the repo contains a file called run.sh we don't want to clobber that!
Not sure what to do here.)
But the tweaks I want to propose are more to do with what goes into the
/lava/bin and /lava/results directories. For a start, I don't think
it's especially useful that the tests run with lava-test-runner or
lava-test-shell on $PATH -- there's no reason for a test author to call
either of those programs! However, I do want to add helpers -- called
lava-test-case and lava-test-case-attach in my brain currently:
* lava-test-case will send the test case started signal, run a shell
command, interpret its exit code as a test result and send the test
case stopped signal
* lava-test-case-attach arranges to attach a file to the test result
for a particular test case id
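To make the first of those concrete, here is a rough sketch of what
lava-test-case could look like as a shell script. The signal format
below is made up (plain echoed marker lines) purely for illustration,
since we haven't settled on one:

    #!/bin/sh
    # Sketch of: lava-test-case TEST_CASE_ID COMMAND [ARGS...]
    TEST_CASE_ID="$1"
    shift
    # Placeholder for the "test case started" signal.
    echo "<LAVA_TEST_CASE_START $TEST_CASE_ID>"
    # Run the command and interpret its exit code as the result.
    if "$@"; then
        RESULT=pass
    else
        RESULT=fail
    fi
    # Placeholder for the "test case stopped" signal.
    echo "<LAVA_TEST_CASE_STOP $TEST_CASE_ID $RESULT>"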
So you could imagine some run steps for an audio capture test:
run:
    steps:
        - lava-fft -t 3 > generated.wav
        - lava-test-case-attach sine-wave generated.wav audio/wav
        - lava-test-case sine-wave aplay generated.wav
and appropriate hooks on the host side:
* test case start/stop hooks that would capture audio
* an "analysis hook" that would compare the captured sample with the
attached wav file (and attach the captured wav)
Semi-relatedly, I would like to (at least optionally) store the test
result data more explicitly in the
/lava/results/${IDX}_${TEST_ID}-${TIMESTAMP} directory. Maybe something
like this (in the above style):
# /lava/
#    results/
#       ...                  As before.
#       ${IDX}_${TEST_ID}-${TIMESTAMP}/
#          ...               All the stuff we had before.
#          cases/
#             ${TEST_CASE_ID}/
#                result      This would contain pass/fail/skip/unknown.
#                units       Mb/s, V, W, Hz, whatever.
#                measurement Self explanatory I expect.
#                attachments/
#                   ${FILENAME}           The content of the attachment.
#                   ${FILENAME}.mimetype  The mime-type for the attachment.
#                attributes/
#                   ${KEY}   The content of the file would be the value
#                            of the attribute.
#                ... other things you can stick on test results ...
Basically this would define another representation for test results --
on the file system -- in addition to the existing two: as JSON and in a
postgres DB.
The reason for doing this is twofold: 1) it's more amenable than JSON to
being incrementally built up by a bunch of shell scripts as a
lava-test-shell test runs, and 2) this directory could be presented to an
"analysis hook" written in shell (earlier today I told Andy Doan that
I thought writing hooks in shell would be impractical; now I'm not so
sure). Also: 3) (no one expects the Spanish Inquisition!) it would allow
us to write a lava-test-shell test that does not depend on parsing
stdout.log.
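And to flesh out 1) a little: the helpers could build this tree up with
nothing more than a few lines of shell. Something like the following
(just a sketch against the layout proposed above -- none of this is
implemented, and the example values are invented):

    # Record a result plus a measurement for one test case.
    CASE_DIR=/lava/results/${IDX}_${TEST_ID}-${TIMESTAMP}/cases/${TEST_CASE_ID}
    mkdir -p "$CASE_DIR"
    echo pass > "$CASE_DIR/result"
    echo 48.7 > "$CASE_DIR/measurement"
    echo Mb/s > "$CASE_DIR/units"

    # "lava-test-case-attach ${TEST_CASE_ID} generated.wav audio/wav"
    # would roughly translate to:
    mkdir -p "$CASE_DIR/attachments"
    cp generated.wav "$CASE_DIR/attachments/generated.wav"
    echo audio/wav > "$CASE_DIR/attachments/generated.wav.mimetype"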
Apologies for the long ramble!
Cheers,
mwh