On Wed, 22 Aug 2018 at 11:39, Oleksandr Terentiev <[email protected]>
wrote:

> Hi,
>
> I launched util-linux ptest using automated/linux/ptest/ptest.yaml from
> https://git.linaro.org/qa/test-definitions.git and received the
> following results:
> https://pastebin.com/nj9PYQzE
>
> As you can see, some tests failed. However, the util-linux case was
> marked as passed. It looks like ptest.py only analyzes the return code
> of the ptest-runner -d <ptest_dir> <ptest_name> command, and since
> ptest-runner itself finishes correctly, the exit code is 0. Therefore
> all tests are always marked as passed, and users never know when some
> of the tests fail.
>
> Maybe it would be worth analyzing each test?
>
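For context, the check being described is essentially the sketch below.
This is illustrative only, not the actual ptest.py code; ptest_dir and
ptest_name are placeholders taken from the command quoted above.

    import subprocess

    # Illustrative sketch, not the real ptest.py. The verdict comes
    # solely from ptest-runner's exit code, which is 0 whenever the
    # runner itself finishes without a critical error, regardless of
    # how many individual tests printed FAIL.
    def run_ptest(ptest_dir, ptest_name):
        proc = subprocess.run(["ptest-runner", "-d", ptest_dir, ptest_name])
        return "pass" if proc.returncode == 0 else "fail"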

Regarding each individual ptest, the result comes from the ptest script
in the OE recipe [1]. By convention, an OE ptest that returns 0 is
considered a pass, so this needs to be fixed in the OE ptest itself [2].
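
For illustration, a minimal run-ptest-style wrapper following that
convention could look like the sketch below (real run-ptest scripts are
usually shell; the test commands here are placeholders). It prints one
PASS:/FAIL: line per test and exits non-zero if anything failed, so the
exit code reflects the real outcome:

    #!/usr/bin/env python3
    import subprocess
    import sys

    # Placeholder test commands; a real wrapper would invoke the
    # package's own test suite.
    TESTS = {
        "example-test-1": ["true"],
        "example-test-2": ["false"],
    }

    failed = 0
    for name, cmd in TESTS.items():
        # One "PASS: <name>" or "FAIL: <name>" line per test, as the
        # ptest output convention expects [2].
        if subprocess.run(cmd).returncode == 0:
            print(f"PASS: {name}")
        else:
            print(f"FAIL: {name}")
            failed += 1

    sys.exit(1 if failed else 0)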

Regarding the LAVA ptest.py script, I mark the run as successful if
there is no critical error in ptest-runner itself; we have the
QA-reports tool to analyse passes and failures in detail for every
ptest executed [3].
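
As a rough sketch of what that per-test analysis amounts to, one can
parse the PASS:/FAIL:/SKIP: lines out of a saved ptest-runner log. The
log path and regex below are my assumptions for illustration, not the
actual QA-reports internals:

    import re
    import sys

    # Assumed ptest output convention: one "PASS|FAIL|SKIP: <name>"
    # line per test in the captured ptest-runner log.
    RESULT_RE = re.compile(r"^(PASS|FAIL|SKIP):\s*(.+)$")

    def parse_ptest_log(path):
        results = {}
        with open(path) as log:
            for line in log:
                match = RESULT_RE.match(line.strip())
                if match:
                    status, name = match.groups()
                    results[name] = status
        return results

    if __name__ == "__main__":
        results = parse_ptest_log(sys.argv[1])  # path to saved log
        fails = [n for n, s in results.items() if s == "FAIL"]
        print(f"{len(results)} tests parsed, {len(fails)} failed: {fails}")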

[1]
http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/util-linux/util-linux/run-ptest
[2] https://wiki.yoctoproject.org/wiki/Ptest
[3]
https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/1890442/

Regards,
Anibal

>
> Best regards,
> Alex