Hi,

I would like to discuss the following question.
As discussed, we now have to analyze the pass/fail result of every ptest. From my point of view, there are a couple of options.

First, we can parse the output and mark the whole ptest as failed if even a single failed test is found.
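
A minimal sketch of that first option, assuming ptest output follows the "PASS:" / "FAIL:" / "SKIP:" line format described on the Yocto Ptest wiki (the function name and structure here are only illustrative, not the actual ptest.py code):

import subprocess

def run_single_ptest(ptest_dir, ptest_name):
    """Run one ptest and fail it if any individual test case failed."""
    proc = subprocess.run(
        ["ptest-runner", "-d", ptest_dir, ptest_name],
        capture_output=True, text=True)
    # The Yocto ptest convention is one "PASS:"/"FAIL:"/"SKIP:" line per
    # test case, so a single FAIL: line is enough to fail the whole run.
    has_failure = any(line.lstrip().startswith("FAIL:")
                      for line in proc.stdout.splitlines())
    return "fail" if proc.returncode != 0 or has_failure else "pass"

This keeps the number of recorded results small (one per ptest) but loses the detail of which test cases failed.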

Second, we can analyze each test within a package and record the corresponding results. I see a few issues here. First of all, there will be a large number of test results, as each ptest can run lots of tests. Another thing is that we somehow need to separate test results between particular packages. As an option, we can use the lava-test-set feature for that: each test within a ptest would be reported as a test case, and the package name would appear as the test set.
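
For the second option, a rough sketch of how per-test results could be reported under a per-package test set, assuming the LAVA lava-test-set and lava-test-case shell helpers are available in the test shell (the exact invocation should be double-checked against the LAVA docs):

import subprocess

def report_ptest_results(package, output):
    """Report each test case of one ptest package under its own test set."""
    # Group all results of this package under a test set named after it.
    subprocess.run(["lava-test-set", "start", package], check=True)
    for line in output.splitlines():
        line = line.strip()
        for prefix, result in (("PASS:", "pass"),
                               ("FAIL:", "fail"),
                               ("SKIP:", "skip")):
            if line.startswith(prefix):
                # LAVA test case names must not contain spaces.
                name = line[len(prefix):].strip().replace(" ", "_") or "unnamed"
                subprocess.run(["lava-test-case", name, "--result", result],
                               check=True)
                break
    subprocess.run(["lava-test-set", "stop"], check=True)

That would give one LAVA test case per ptest test, with the package name visible as the test set, at the cost of a much larger number of recorded results.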

What do you think about that?

Regards,
Alex

On 23.08.18 16:10, Anibal Limon wrote:


On Thu, 23 Aug 2018 at 05:54, Oleksandr Terentiev <otere...@cisco.com> wrote:

    Thank you, Anibal, for the fast response.


    On 22.08.18 19:50, Anibal Limon wrote:


    On Wed, 22 Aug 2018 at 11:39, Oleksandr Terentiev <otere...@cisco.com> wrote:

        Hi,

        I launched util-linux ptest using
        automated/linux/ptest/ptest.yaml from
        https://git.linaro.org/qa/test-definitions.git and received the
        following results:
        https://pastebin.com/nj9PYQzE

        As you can see, some tests failed. However, the util-linux test
        case is marked as passed. It looks like ptest.py only analyzes
        the return code of the ptest-runner -d <ptest_dir> <ptest_name>
        command, and since ptest-runner finishes correctly, the exit
        code is 0. Therefore all tests are always marked as passed, and
        users never know when some of the tests fail.

        Maybe it is worth analyzing each test?


    Talking about each ptest: the result comes from the ptest script
    in the OE recipe [1]. By convention, if the OE ptest returns 0 it
    means pass, so this needs to be fixed in the OE ptest [2].

    I’ve read https://wiki.yoctoproject.org/wiki/Ptest carefully a few
    more times. It prescribes the output format, but I didn’t find any
    mention of return code processing or a reference to the convention
    you mentioned in your answer.

    I looked through some OE run-ptest scripts. I suspect they don’t
    check whether any of their tests failed, and exit with 0 even if
    all of their tests failed:

    http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/util-linux/util-linux/run-ptest
    http://git.openembedded.org/openembedded-core/tree/meta/recipes-support/attr/acl/run-ptest
    http://git.openembedded.org/openembedded-core/tree/meta/recipes-support/attr/files/run-ptest
    http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/dbus/dbus/run-ptest
    http://git.openembedded.org/openembedded-core/tree/meta/recipes-devtools/e2fsprogs/e2fsprogs/run-ptest
    http://git.openembedded.org/openembedded-core/tree/meta/recipes-extended/gawk/gawk/run-ptest



Right, it looks like the OEQA test case was updated since I worked on it [1], so now it takes into account the pass/fail of every ptest.
So ptest.py needs to implement the same behavior.

Regards,
Anibal

[1] http://git.openembedded.org/openembedded-core/tree/meta/lib/oeqa/runtime/cases/ptest.py#n80


    Regarding the LAVA ptest.py script, I mark the run as succeeded if
    there is no critical error in ptest-runner, and we have a QA-reports
    tool to analyse the pass/fail results in detail for every ptest
    executed [3].

        I heard about the QA-reports tool but I’ve never used it
        before, so maybe I missed something.
        From
        https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/1890442/suite/linux-ptest/tests/
        I see all ptests passed. Still, in the log
        https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/1890442/log
        I found 54 failed tests and wasn’t able to find a report which
        indicates those failures.

        Is there such a report? It would be really useful to know that
        some tests failed.

        Thanks



    [1] http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/util-linux/util-linux/run-ptest
    [2] https://wiki.yoctoproject.org/wiki/Ptest
    [3] https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/1890442/

    Regards,
    Anibal


        Best regards,
        Alex



