On Wed, 28 Nov 2018 18:26:34 +0200
Oleksandr Terentiev <otere...@cisco.com> wrote:

> Hi,
> 
> I wasn't able to sign up at https://register.linaro.org/ with the 
> following message:
> "Linaro employees and assignees, and people from Member companies
> cannot use this form"

In that case, you should be able to sign in to validation.linaro.org
yourself using your Linaro LDAP details. If your Linaro email address
is oleksandr.terent...@linaro.org, the LDAP username to use with LAVA
is oleksandr.terentiev, together with your LDAP password. Otherwise,
contact Linaro support / your Tech Lead in Linaro.

Once logged in, you just need to ask an admin to give you submission
rights. (A JIRA ticket is easiest:
https://projects.linaro.org/secure/CreateIssue!default.jspa and select
LAB & System Software (LSS) - remember to specify validation.linaro.org
as the server you want to access.)
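
Once you have submission rights, jobs can go in through the web UI or
over XML-RPC; a minimal sketch of the latter (the username, token and
job file name below are placeholders - the token comes from the
"Authentication Tokens" page under the API menu on the instance):

import xmlrpc.client

username = "first.last"                 # your Linaro LDAP username
token = "REPLACE-WITH-YOUR-TOKEN"       # per-instance authentication token
server = xmlrpc.client.ServerProxy(
    "https://%s:%s@validation.linaro.org/RPC2" % (username, token))

with open("job.yaml") as f:             # a complete job definition in YAML
    job_id = server.scheduler.submit_job(f.read())
print("Submitted job:", job_id)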

> 
> So I uploaded my changes to GitHub:
> https://github.com/oterenti/test-definitions/tree/ptest_modify
> 
> If https://validation.linaro.org/scheduler/job/1890442 can serve as a
> good example of a ptest run,
> I'd suggest the following test action:
> 
> - test:
>      namespace: dragonboard-820c
>      name: qcomlt-ptest
>      timeout:
>        minutes: 160
>      definitions:
>      - repository: https://github.com/oterenti/test-definitions.git
>        from: git
>        path: automated/linux/ptest/ptest.yaml
>        name: linux-ptest
>        branch: ptest_modify
>        params:
>          EXCLUDE: bluez5 libxml2 parted python strace

I tried a couple of test jobs, but the first failed while trying to
fastboot flash and another hit a kernel panic:
https://lkft-staging.validation.linaro.org/scheduler/job/18795

See if this one completes - it looks hung to me:
https://validation.linaro.org/scheduler/job/1899004

+ ./ptest.py -o ./result.txt -t -e bluez5 libxml2 parted python strace
b'1+0 records in'
b'1+0 records out'
b'1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405323 s, 259 MB/s'
b'mke2fs 1.43.8 (1-Jan-2018)'
b''
b'Filesystem too small for a journal'
[   19.740484]  sdd: sdd1 sdd2 sdd3
b'Makefile:20: ../include/builddefs: No such file or directory'
b\"make: *** No rule to make target '../include/builddefs'.\"
b\"make: Failed to remake makefile '../include/builddefs'.\"

> 
> Regards,
> Alex
> 
> On 28.11.18 10:38, Neil Williams wrote:
> > On Tue, 27 Nov 2018 20:22:42 +0200
> > Oleksandr Terentiev <otere...@cisco.com> wrote:
> >  
> >> Thanks for the reply.
> >> Is there any public LAVA instance where I can run the example test?
> >> Maybe on staging.validation.linaro.org ?
> >> Can I register there, and how?  
> > Staging (and other instances hosted by Linaro) share the same
> > authentication support as validation.linaro.org, which is described
> > in the LAVA documentation.
> >
> > https://staging.validation.linaro.org/static/docs/v2/first_steps.html#linaro-lab-users
> >
> > If you do not have a Linaro LDAP account, you can register at
> > https://register.linaro.org/.
> >
> > You could use validation.linaro.org for your test.
> >  
> >> Or maybe there is another approach
> >> to run that.  
> > Once the script is available in a public git repository instead of
> > only as a patch, it can be executed from any suitable LAVA test
> > job. So push the script and associated support to public git
> > somewhere and post the URL here. Someone on this list can put that
> > into a test job and run that. If you have a test job definition
> > ready to go, put that alongside the script and test definition.
> >
> > Any public git repo will be fine, it doesn't have to be the original
> > one on git.linaro.org.
> >  
> >> On 27.11.18 19:41, Anibal Limon wrote:  
> >>>
> >>>
> >>> On Tue, 27 Nov 2018 at 08:29, Oleksandr Terentiev
> >>> <otere...@cisco.com> wrote:
> >>>
> >>>      Hi Anibal,
> >>>
> >>>      In our project we need to analyze the total number of passed
> >>> and failed tests for each package. To distinguish packages we use
> >>>      the lava-test-set feature.
> >>>      In order to implement that I modified the ptest.py and
> >>> send-to-lava.sh scripts. Could you please look at the patch and
> >>> give your opinion? Maybe this code could be added to
> >>>      git.linaro.org/qa/test-definitions.git ?
> >>>
> >>>
> >>> Hi Oleksandr,
> >>>
> >>> The code looks good. Could you provide an example LAVA test run so
> >>> we can see the actual results?
> >>>
> >>> Regards,
> >>> Anibal
> >>>
> >>>
> >>>      Best regards,
> >>>      Alex
> >>>
> >>>      automated/linux/ptest: Analyze each test in package tests
> >>>
> >>>      Currently ptest.py analyzes only the exit code of each package
> >>> test to decide whether it passed or not. However, ptest-runner can
> >>> return a success code even though some tests failed. So we need to
> >>> parse the test output and analyze it.
> >>>
> >>>      It is also quite useful to see exactly which tests failed. So
> >>> results are recorded for each particular test, and the lava-test-set
> >>> feature is used to distinguish packages.
> >>>
> >>>      Signed-off-by: Oleksandr Terentiev <otere...@cisco.com>
> >>>
> >>>      diff --git a/automated/linux/ptest/ptest.py
> >>>      b/automated/linux/ptest/ptest.py
> >>>      index 13feb4d..a28d7f0 100755
> >>>      --- a/automated/linux/ptest/ptest.py
> >>>      +++ b/automated/linux/ptest/ptest.py
> >>>      @@ -84,20 +84,60 @@ def filter_ptests(ptests, requested_ptests, exclude):
> >>>
> >>>           return filter_ptests
> >>>
> >>>      +def parse_line(line):
> >>>      +    test_status_list = {
> >>>      +        'pass': re.compile("^PASS:(.+)"),
> >>>      +        'fail': re.compile("^FAIL:(.+)"),
> >>>      +        'skip': re.compile("^SKIP:(.+)")
> >>>      +    }
> >>>      +
> >>>      +    for test_status, status_regex in test_status_list.items():
> >>>      +        test_name = status_regex.search(line)
> >>>      +        if test_name:
> >>>      +            return [test_name.group(1), test_status]
> >>>
> >>>      -def check_ptest(ptest_dir, ptest_name, output_log):
> >>>      -    status = 'pass'
> >>>      +    return None
> >>>
> >>>      -    try:
> >>>      -        output = subprocess.check_call('ptest-runner -d %s %s' %
> >>>      -                                       (ptest_dir, ptest_name), shell=True,
> >>>      -                                       stderr=subprocess.STDOUT)
> >>>      -    except subprocess.CalledProcessError:
> >>>      -        status = 'fail'
> >>>      +def parse_ptest(log_file):
> >>>      +    result = []
> >>>
> >>>      -    with open(output_log, 'a+') as f:
> >>>      -        f.write("%s %s\n" % (ptest_name, status))
> >>>      +    with open(log_file, 'r') as f:
> >>>      +        for line in f:
> >>>      +            result_tuple = parse_line(line)
> >>>      +            if not result_tuple:
> >>>      +                continue
> >>>      +            print(result_tuple)
> >>>      +            result.append(result_tuple)
> >>>      +            continue
> >>>
> >>>      +    return result
> >>>      +
> >>>      +def run_command(command, log_file):
> >>>      +    process = subprocess.Popen(command,
> >>>      +                               shell=True,
> >>>      +                               stdout=subprocess.PIPE,
> >>>      +                               stderr=subprocess.STDOUT)
> >>>      +    with open(log_file, 'w') as f:
> >>>      +        while True:
> >>>      +            output = process.stdout.readline()
> >>>      +            if output == '' and process.poll() is not None:
> >>>      +                break
> >>>      +            if output:
> >>>      +                print(output.strip())
> >>>      +                f.write("%s\n" % output.strip())
> >>>      +    rc = process.poll()
> >>>      +    return rc
> >>>      +
> >>>      +def check_ptest(ptest_dir, ptest_name, output_log):
> >>>      +    log_name = os.path.join(os.getcwd(), '%s.log' % ptest_name)
> >>>      +    status = run_command('ptest-runner -d %s %s' % (ptest_dir, ptest_name), log_name)
> >>>      +
> >>>      +    with open(output_log, 'a+') as f:
> >>>      +        f.write("lava-test-set start %s\n" % ptest_name)
> >>>      +        f.write("%s %s\n" % (ptest_name, "pass" if status == 0 else "fail"))
> >>>      +        for test, test_status in parse_ptest(log_name):
> >>>      +            f.write("%s %s\n" % (re.sub(r'[^\w-]', '', test), test_status))
> >>>      +        f.write("lava-test-set stop %s\n" % ptest_name)
> >>>
> >>>       def main():
> >>>           parser = argparse.ArgumentParser(description="LAVA/OE ptest script",
> >>>      diff --git a/automated/utils/send-to-lava.sh
> >>>      b/automated/utils/send-to-lava.sh
> >>>      index bf2a477..db4442c 100755
> >>>      --- a/automated/utils/send-to-lava.sh
> >>>      +++ b/automated/utils/send-to-lava.sh
> >>>      @@ -4,6 +4,8 @@ RESULT_FILE="$1"
> >>>
> >>>       which lava-test-case > /dev/null 2>&1
> >>>       lava_test_case="$?"
> >>>      +which lava-test-set > /dev/null 2>&1
> >>>      +lava_test_set="$?"
> >>>
> >>>       if [ -f "${RESULT_FILE}" ]; then
> >>>           while read -r line; do
> >>>      @@ -31,6 +33,18 @@ if [ -f "${RESULT_FILE}" ]; then
> >>>                   else
> >>>                     echo "<TEST_CASE_ID=${test} RESULT=${result} MEASUREMENT=${measurement} UNITS=${units}>"
> >>>                   fi
> >>>      +        elif echo "${line}" | egrep -iq "^lava-test-set.*"; then
> >>>      +            test_set_status="$(echo "${line}" | awk '{print $2}')"
> >>>      +            test_set_name="$(echo "${line}" | awk '{print $3}')"
> >>>      +            if [ "${lava_test_set}" -eq 0 ]; then
> >>>      +                lava-test-set "${test_set_status}" "${test_set_name}"
> >>>      +            else
> >>>      +                if [ "${test_set_status}" = "start" ]; then
> >>>      +                    echo "<LAVA_SIGNAL_TESTSET START ${test_set_name}>"
> >>>      +                else
> >>>      +                    echo "<LAVA_SIGNAL_TESTSET STOP>"
> >>>      +                fi
> >>>      +            fi
> >>>               fi
> >>>           done < "${RESULT_FILE}"
> >>>       else
> >>>
> >>>
> >>>
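For anyone reading along: a rough sketch, not part of the patch, of the
block the modified check_ptest() appends to the result file for one
ptest, and of the test-set signals the send-to-lava.sh hunk would echo
when the lava-test-set helper is not available. The package and test
names are invented; plain "<name> <result>" lines are handled by the
existing per-test-case code path.

result_block = [
    "lava-test-set start util-linux",
    "util-linux pass",          # overall ptest-runner exit status
    "ts-col-multibyte pass",    # per-test results parsed from PASS:/FAIL:/SKIP: lines
    "ts-cut-EXCL fail",
    "lava-test-set stop util-linux",
]

for line in result_block:
    fields = line.split()
    if fields[0] != "lava-test-set":
        continue                # handled by the existing test-case branch
    if fields[1] == "start":
        print("<LAVA_SIGNAL_TESTSET START %s>" % fields[2])
    else:
        print("<LAVA_SIGNAL_TESTSET STOP>")

When lava-test-set is available on the device, the same lines instead
become lava-test-set start <name> / lava-test-set stop <name> calls, as
in the hunk above.
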
> >>>      On 01.10.18 17:09, Anibal Limon wrote:  
> >>>>      Hi,
> >>>>
> >>>>      I was on vacation, which is the reason for the slow response;
> >>>>      comments below.
> >>>>
> >>>>      On Fri, 28 Sep 2018 at 03:59, Oleksandr Terentiev
> >>>>      <otere...@cisco.com> wrote:
> >>>>
> >>>>          Hi,
> >>>>
> >>>>          I would like to discuss the following question.
> >>>>          As was said, we now have to analyze pass/fail for
> >>>>          every ptest. From my point of view there are a couple
> >>>> of options.
> >>>>
> >>>>          First, we can parse the output and mark a ptest as failed
> >>>> if even a single failed test is found.
> >>>>
> >>>>
> >>>>      Right, I would choose this approach; changes need to be made
> >>>> in the ptest LAVA script [1] so it fails when any of the package
> >>>> tests fail, like [2].
> >>>>
> >>>>
> >>>>
> >>>>          Second, we can analyze each test within a package
> >>>> and record the corresponding results.
> >>>>          I see a few issues here. First of all, there will be a
> >>>> large number of test results, as each ptest can run lots of tests.
> >>>>
> >>>>
> >>>>      Right, but that needs to be handled in every OE ptest script;
> >>>> I mean, if you want to fail when a certain test inside a ptest
> >>>>      fails, that needs to be done in OE.
> >>>>
> >>>>
> >>>>          Another thing is that we need to somehow separate test
> >>>> results between particular packages.
> >>>>
> >>>>
> >>>>      Currently we use QA reports to see only the package test
> >>>> result; if you want to be able to look at the details, you need to
> >>>> check the ptest log.
> >>>>
> >>>>          As an option we can use the lava-test-set feature for that.
> >>>> So each test within a ptest will be marked as a test case and the
> >>>> package name will appear as a test set.
> >>>>
> >>>>          What do you think about that?
> >>>>
> >>>>
> >>>>      Maybe lava-test-set is an option.
> >>>>
> >>>>      I would go with the 1st option and then start to
> >>>>      review/implement the idea of using the lava-test-set feature.
> >>>>      Regards,
> >>>>      Anibal
> >>>>
> >>>>      [1]
> >>>>      
> >>>> https://git.linaro.org/qa/test-definitions.git/tree/automated/linux/ptest/ptest.py
> >>>>      [2]
> >>>>      
> >>>> http://git.openembedded.org/openembedded-core/tree/meta/lib/oeqa/runtime/cases/ptest.py#n87
> >>>>
> >>>>          Regards,
> >>>>          Alex
> >>>>
> >>>>          On 23.08.18 16:10, Anibal Limon wrote:  
> >>>>>
> >>>>>          On Thu, 23 Aug 2018 at 05:54, Oleksandr Terentiev
> >>>>>          <otere...@cisco.com> wrote:
> >>>>>
> >>>>>              Thank you Anibal for the fast response
> >>>>>
> >>>>>
> >>>>>              On 22.08.18 19:50, Anibal Limon wrote:  
> >>>>>>
> >>>>>>              On Wed, 22 Aug 2018 at 11:39, Oleksandr Terentiev
> >>>>>>              <otere...@cisco.com> wrote:
> >>>>>>
> >>>>>>                  Hi,
> >>>>>>
> >>>>>>                  I launched util-linux ptest using
> >>>>>>                  automated/linux/ptest/ptest.yaml from
> >>>>>>                  https://git.linaro.org/qa/test-definitions.git
> >>>>>> and received the
> >>>>>>                  following results:
> >>>>>>                  https://pastebin.com/nj9PYQzE
> >>>>>>
> >>>>>>                  As you can see, some tests failed. However,
> >>>>>> the util-linux case is marked as
> >>>>>>                  passed. It looks like ptest.py only analyzes the
> >>>>>> return code of the ptest-runner
> >>>>>>                  -d <ptest_dir> <ptest_name> command. And since
> >>>>>>                  ptest-runner finishes
> >>>>>>                  correctly, the exit code is 0. Therefore all tests
> >>>>>> are always marked as
> >>>>>>                  passed, and users never know when some of the
> >>>>>> tests fail.
> >>>>>>
> >>>>>>                  Maybe it is worth analyzing each test?
> >>>>>>
> >>>>>>
> >>>>>>              Talking about each ptest, the result comes from the
> >>>>>>              ptest script in the OE recipe [1]; by convention,
> >>>>>> if the OE ptest returns 0 it means pass, so this
> >>>>>>              needs to be fixed in the OE ptest [2].  
> >>>>>              I’ve read https://wiki.yoctoproject.org/wiki/Ptest
> >>>>>              carefully a few more times. There are prescriptions
> >>>>>              about the output format, but I didn’t find any mention
> >>>>> of return code processing or a reference to the convention
> >>>>>              you mentioned in your answer.
> >>>>>
> >>>>>              I looked through some OE run-ptest scripts. I
> >>>>> suspect they don’t check whether some of their tests failed, and
> >>>>>              exit with 0 even if all their tests failed.
> >>>>>
> >>>>>              
> >>>>> http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/util-linux/util-linux/run-ptest
> >>>>>              
> >>>>> http://git.openembedded.org/openembedded-core/tree/meta/recipes-support/attr/acl/run-ptest
> >>>>>              
> >>>>> http://git.openembedded.org/openembedded-core/tree/meta/recipes-support/attr/files/run-ptest
> >>>>>              
> >>>>> http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/dbus/dbus/run-ptest
> >>>>>              
> >>>>> http://git.openembedded.org/openembedded-core/tree/meta/recipes-devtools/e2fsprogs/e2fsprogs/run-ptest
> >>>>>              
> >>>>> http://git.openembedded.org/openembedded-core/tree/meta/recipes-extended/gawk/gawk/run-ptest
> >>>>>
> >>>>>
> >>>>>
> >>>>>          Right, it looks like the OEQA test case was updated since I
> >>>>>          worked on it [1], so now it takes into account the
> >>>>> pass/fail of every ptest.
> >>>>>          So ptest.py needs to implement the same behavior.
> >>>>>
> >>>>>          Regards,
> >>>>>          Anibal
> >>>>>
> >>>>>          [1]
> >>>>>          
> >>>>> http://git.openembedded.org/openembedded-core/tree/meta/lib/oeqa/runtime/cases/ptest.py#n80
> >>>>>     
> >>>>>>              Regarding the LAVA ptest.py script, I mark the
> >>>>>> run as succeeded if there is no critical error in
> >>>>>>              ptest-runner, and we have the QA-reports tool to
> >>>>>> analyse pass/fails
> >>>>>>              in detail for every ptest executed [3].  
> >>>>>                  I heard about the QA-reports tool but I’ve never
> >>>>> used it before, so maybe I missed something.
> >>>>>                  From
> >>>>>                  
> >>>>> https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/1890442/suite/linux-ptest/tests/
> >>>>>                  I see all ptests passed. Still, in the log
> >>>>>                  
> >>>>> https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/1890442/log
> >>>>>                  I found 54 failed tests and wasn’t able to
> >>>>> find a report which indicates those failures.
> >>>>>
> >>>>>                  Is there such a report? It would be really
> >>>>> useful to know that some tests failed.
> >>>>>
> >>>>>                  Thanks
> >>>>>
> >>>>>     
> >>>>>>              [1]
> >>>>>>              
> >>>>>> http://git.openembedded.org/openembedded-core/tree/meta/recipes-core/util-linux/util-linux/run-ptest
> >>>>>>              [2] https://wiki.yoctoproject.org/wiki/Ptest
> >>>>>>              [3]
> >>>>>>              
> >>>>>> https://qa-reports.linaro.org/qcomlt/openembedded-rpb-sumo/build/37/testrun/1890442/
> >>>>>>
> >>>>>>              Regards,
> >>>>>>              Anibal
> >>>>>>
> >>>>>>
> >>>>>>                  Best regards,
> >>>>>>                  Alex
> >>>>>>     
> >>>>>     
> >>>>     
> >  


-- 

Neil Williams
h...@codehelp.co.uk

Attachment: pgpekWRF4AzpK.pgp
Description: OpenPGP digital signature

_______________________________________________
linaro-validation mailing list
linaro-validation@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/linaro-validation