Hi Richard,

To address the maintainability concern, I have restructured the code so
that the oeqa framework writes out json files for the test results
directly.

Attached are the patches that enable the oeqa framework to write test
results into json files; these files will later be used by the future
QA test case management tools (eg. to store test results, run test
reporting, execute manual test cases and write their results to json).
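
For reference, the json files are written one per test module by the
new OEJSONTestResultHelper class; a file would look roughly like the
sketch below (the module and test case names are only illustrative,
the real names come from the oeqa test registry):

    {
        "testsuite": {
            "oescripts.OEScriptTests": {
                "testcase": {
                    "oescripts.OEScriptTests.test_example": {
                        "testresult": "PASSED"
                    }
                }
            }
        }
    }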

These patches include enabling oe-selftest to write json test results.
I have tested them on our local server and they write out the json
test result files as expected.
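
For example, the run on our server was along the lines of (the module
name here is only an example):

    oe-selftest -r oescripts --export-json

and the json files were written under a
json_testresults-<timestamp>/oe-selftest/ directory next to the
oe-selftest log, one file per test module (eg. oescripts.json).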

Please let me know if you have any more feedback. Thank you very much!

Best regards,
Yeoh Ee Peng

-----Original Message-----
From: richard.pur...@linuxfoundation.org 
[mailto:richard.pur...@linuxfoundation.org] 
Sent: Tuesday, September 11, 2018 11:09 PM
To: Yeoh, Ee Peng <ee.peng.y...@intel.com>; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH] test-result-log: testcase management tool to 
store test result

Hi Ee Peng,

I've been having a look at this code and whilst some of it is good, I also have 
some concerns. With it coming in so late in the cycle, it's made it hard to have
time to review it and allow time to get it right.
With something as important as this to the way the future QA work is done, we 
do need to ensure we use the right approach and that it's maintainable.

The patches are ok as a first attempt at this. My biggest concern is that it's
currently parsing log files which we control and generate within our own
codebase and that parsing is likely to break. In particular, these lines worry 
me from the qalogparser:

regex = ".*RESULTS - (?P<case_name>.*) - Testcase .*: 
(?P<status>PASSED|FAILED|SKIPPED|ERROR|UNKNOWN).*$"
regex = "core-image.*().*Ran.*tests in .*s"
regex = "DEBUG: launchcmd=runqemu*
qemu_list = ['qemuarm', 'qemuarm64', 'qemumips', 'qemumips64', 'qemuppc', 
'qemux86', 'qemux86-64']

since here we're hardcoding the list of qemu's we support, we're also only 
allowing core-image-* and we're relying upon the results format not changing. 
That makes it hard for anyone to extend/reuse this or to use it with real 
hardware?

For example the recent oe-selftest parallelisation code did change the output 
of the tests slightly. I'm not sure if this broke the parsing or not but it is 
an example of the kind of fragility this code has.

What would probably work better for us is if the oeqa framework wrote out a 
json file directly containing the information we need in it, then this code 
would just need to collect up the json files.
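
Roughly what I have in mind, as a sketch only (the names here are
purely illustrative, not an actual API):

    import json, os

    def write_results_json(results_dict, outdir):
        # results_dict maps "module.Class.test_name" -> "PASSED"/"FAILED"/...
        os.makedirs(outdir, exist_ok=True)
        with open(os.path.join(outdir, "testresults.json"), "w") as f:
            json.dump(results_dict, f, sort_keys=True, indent=4)

The collection tooling would then just walk the directories and load
the json files, rather than parsing logs.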

I'm also a little concerned at the way unittest discovery is being done, 
grepping for *.py files as far as I understand it. We should probably use the 
list options to the various current test pieces? Also, this is something we 
probably only ever need to do once to seed the QA results store?

Finally, much of the code is using "internal" methods prefixed with "_". I can 
understand why but it seems the code doesn't have a good well structured public 
API as a result.

As such there may be a little too much work needed on this to get it in for 2.6
:(

Cheers,

Richard
--- Begin Message ---
Enable selftest to write test results into json files. These json
files will be used by the future test case management tools for test
reporting.

Signed-off-by: Yeoh Ee Peng <ee.peng.y...@intel.com>
---
 meta/lib/oeqa/selftest/context.py | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/meta/lib/oeqa/selftest/context.py b/meta/lib/oeqa/selftest/context.py
index c78947e..9ed22ff 100644
--- a/meta/lib/oeqa/selftest/context.py
+++ b/meta/lib/oeqa/selftest/context.py
@@ -73,6 +73,9 @@ class OESelftestTestContextExecutor(OETestContextExecutor):

         parser.add_argument('--machine', required=False, choices=['random', 'all'],
                             help='Run tests on different machines (random/all).')
+
+        parser.add_argument('-ej', '--export-json', action='store_true',
+                            help='Output test result in json format to files.')

         parser.set_defaults(func=self.run)

@@ -99,8 +102,8 @@ class OESelftestTestContextExecutor(OETestContextExecutor):
         return cases_paths

     def _process_args(self, logger, args):
-        args.output_log = '%s-results-%s.log' % (self.name,
-                time.strftime("%Y%m%d%H%M%S"))
+        args.test_start_time = time.strftime("%Y%m%d%H%M%S")
+        args.output_log = '%s-results-%s.log' % (self.name, args.test_start_time)
         args.test_data_file = None
         args.CASES_PATHS = None

@@ -222,6 +225,11 @@ class OESelftestTestContextExecutor(OETestContextExecutor):
             rc = self.tc.runTests(**self.tc_kwargs['run'])
             rc.logDetails()
             rc.logSummary(self.name)
+            if args.export_json:
+                json_result_dir = os.path.join(os.path.dirname(os.path.abspath(args.output_log)),
+                                               'json_testresults-%s' % args.test_start_time,
+                                               'oe-selftest')
+                rc.logDetailsInJson(json_result_dir)

         return rc

--
2.7.4


--- End Message ---
--- Begin Message ---
To enable future QA work, we need the oeqa test results to be written
into json files, which will be used by the future test case management
tools; these test result json files will be stored in a git repository
for test reporting.

This oeqa framework will also be used by the future test case
management tools to write the results of manually executed test cases
into json files.

Signed-off-by: Yeoh Ee Peng <ee.peng.y...@intel.com>
---
 meta/lib/oeqa/core/runner.py | 120 +++++++++++++++++++++++++++++++++++++++----
 1 file changed, 109 insertions(+), 11 deletions(-)

diff --git a/meta/lib/oeqa/core/runner.py b/meta/lib/oeqa/core/runner.py
index eeb625b..8baf5af 100644
--- a/meta/lib/oeqa/core/runner.py
+++ b/meta/lib/oeqa/core/runner.py
@@ -6,6 +6,8 @@ import time
 import unittest
 import logging
 import re
+import json
+import pathlib

 from unittest import TextTestResult as _TestResult
 from unittest import TextTestRunner as _TestRunner
@@ -44,6 +46,9 @@ class OETestResult(_TestResult):

         self.tc = tc

+        self.result_types = ['failures', 'errors', 'skipped', 'expectedFailures', 'successes']
+        self.result_desc = ['FAILED', 'ERROR', 'SKIPPED', 'EXPECTEDFAIL', 'PASSED']
+
     def startTest(self, test):
         # May have been set by concurrencytest
         if test.id() not in self.starttime:
@@ -80,7 +85,7 @@ class OETestResult(_TestResult):
             msg += " (skipped=%d)" % skipped
         self.tc.logger.info(msg)

-    def _getDetailsNotPassed(self, case, type, desc):
+    def _isTestResultContainTestCaseWithResultTypeProvided(self, case, type):
         found = False

         for (scase, msg) in getattr(self, type):
@@ -121,16 +126,12 @@ class OETestResult(_TestResult):
         for case_name in self.tc._registry['cases']:
             case = self.tc._registry['cases'][case_name]

-            result_types = ['failures', 'errors', 'skipped', 'expectedFailures', 'successes']
-            result_desc = ['FAILED', 'ERROR', 'SKIPPED', 'EXPECTEDFAIL', 'PASSED']
-
-            fail = False
+            found = False
             desc = None
-            for idx, name in enumerate(result_types):
-                (fail, msg) = self._getDetailsNotPassed(case, result_types[idx],
-                        result_desc[idx])
-                if fail:
-                    desc = result_desc[idx]
+            for idx, name in enumerate(self.result_types):
+                (found, msg) = self._isTestResultContainTestCaseWithResultTypeProvided(case, self.result_types[idx])
+                if found:
+                    desc = self.result_desc[idx]
                     break

             oeid = -1
@@ -143,13 +144,38 @@ class OETestResult(_TestResult):
             if case.id() in self.starttime and case.id() in self.endtime:
                 t = " (" + "{0:.2f}".format(self.endtime[case.id()] - 
self.starttime[case.id()]) + "s)"

-            if fail:
+            if found:
                 self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % 
(case.id(),
                     oeid, desc, t))
             else:
                 self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % 
(case.id(),
                     oeid, 'UNKNOWN', t))

+    def _get_testcase_result_dict(self):
+        testcase_result_dict = {}
+        for case_name in self.tc._registry['cases']:
+            case = self.tc._registry['cases'][case_name]
+
+            found = False
+            desc = None
+            for idx, name in enumerate(self.result_types):
+                (found, msg) = self._isTestResultContainTestCaseWithResultTypeProvided(case, self.result_types[idx])
+                if found:
+                    desc = self.result_desc[idx]
+                    break
+
+            if found:
+                testcase_result_dict[case.id()] = desc
+            else:
+                testcase_result_dict[case.id()] = "UNKNOWN"
+        return testcase_result_dict
+
+    def logDetailsInJson(self, file_dir):
+        testcase_result_dict = self._get_testcase_result_dict()
+        if len(testcase_result_dict) > 0:
+            jsontresulthelper = OEJSONTestResultHelper(testcase_result_dict)
+            jsontresulthelper.write_json_testresult_files_by_testmodule(file_dir)
+
 class OEListTestsResult(object):
     def wasSuccessful(self):
         return True
@@ -261,3 +287,75 @@ class OETestRunner(_TestRunner):
             self._list_tests_module(suite)

         return OEListTestsResult()
+
+class OEJSONTestResultHelper(object):
+    def __init__(self, testcase_result_dict):
+        self.testcase_result_dict = testcase_result_dict
+
+    def get_testcase_list(self):
+        return self.testcase_result_dict.keys()
+
+    def get_testsuite_from_testcase(self, testcase):
+        testsuite = testcase[0:testcase.rfind(".")]
+        return testsuite
+
+    def get_testmodule_from_testsuite(self, testsuite):
+        testmodule = testsuite[0:testsuite.find(".")]
+        return testmodule
+
+    def get_testsuite_testcase_dictionary(self):
+        testsuite_testcase_dict = {}
+        for testcase in self.get_testcase_list():
+            testsuite = self.get_testsuite_from_testcase(testcase)
+            if testsuite in testsuite_testcase_dict:
+                testsuite_testcase_dict[testsuite].append(testcase)
+            else:
+                testsuite_testcase_dict[testsuite] = [testcase]
+        return testsuite_testcase_dict
+
+    def get_testmodule_testsuite_dictionary(self, testsuite_testcase_dict):
+        testsuite_list = testsuite_testcase_dict.keys()
+        testmodule_testsuite_dict = {}
+        for testsuite in testsuite_list:
+            testmodule = self.get_testmodule_from_testsuite(testsuite)
+            if testmodule in testmodule_testsuite_dict:
+                testmodule_testsuite_dict[testmodule].append(testsuite)
+            else:
+                testmodule_testsuite_dict[testmodule] = [testsuite]
+        return testmodule_testsuite_dict
+
+    def _get_testcase_result(self, testcase, testcase_status_dict):
+        if testcase in testcase_status_dict:
+            return testcase_status_dict[testcase]
+        return ""
+
+    def _create_testcase_testresult_object(self, testcase_list, testcase_result_dict):
+        testcase_dict = {}
+        for testcase in sorted(testcase_list):
+            result = self._get_testcase_result(testcase, testcase_result_dict)
+            testcase_dict[testcase] = {"testresult": result}
+        return testcase_dict
+
+    def _create_json_testsuite_string(self, testsuite_list, testsuite_testcase_dict, testcase_result_dict):
+        testsuite_object = {'testsuite': {}}
+        testsuite_dict = testsuite_object['testsuite']
+        for testsuite in sorted(testsuite_list):
+            testsuite_dict[testsuite] = {'testcase': {}}
+            testsuite_dict[testsuite]['testcase'] = self._create_testcase_testresult_object(
+                testsuite_testcase_dict[testsuite],
+                testcase_result_dict)
+        return json.dumps(testsuite_object, sort_keys=True, indent=4)
+
+    def write_json_testresult_files_by_testmodule(self, json_testresult_dir):
+        if not os.path.exists(json_testresult_dir):
+            pathlib.Path(json_testresult_dir).mkdir(parents=True, exist_ok=True)
+        testsuite_testcase_dict = self.get_testsuite_testcase_dictionary()
+        testmodule_testsuite_dict = self.get_testmodule_testsuite_dictionary(testsuite_testcase_dict)
+        for testmodule in testmodule_testsuite_dict.keys():
+            testsuite_list = testmodule_testsuite_dict[testmodule]
+            json_testsuite = self._create_json_testsuite_string(testsuite_list, testsuite_testcase_dict,
+                                                                self.testcase_result_dict)
+            file_name = '%s.json' % testmodule
+            file_path = os.path.join(json_testresult_dir, file_name)
+            with open(file_path, 'w') as the_file:
+                the_file.write(json_testsuite)
--
2.7.4


--- End Message ---
-- 
_______________________________________________
Openembedded-core mailing list
Openembedded-core@lists.openembedded.org
http://lists.openembedded.org/mailman/listinfo/openembedded-core
