Re: [OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for IMAGE_GEN_DEBUGFS

2020-05-01 Thread Yeoh Ee Peng
Hi Richard,

I have identified the root cause of the failure and fixed it. The earlier patch 
called get_bb_var('DEPLOY_DIR_IMAGE') before writing the test configuration, and 
therefore used the wrong DEPLOY_DIR_IMAGE when checking whether the debug 
filesystem had been generated. 

I have submitted the v02 patch: 
https://lists.openembedded.org/g/openembedded-core/message/137697
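
For reference, a minimal sketch of the fix (the full version is in the v02
patch): the configuration has to be written before DEPLOY_DIR_IMAGE is queried,
so the variable reflects the MACHINE the test sets rather than the default one.

    # v01 (wrong): get_bb_var('DEPLOY_DIR_IMAGE') was called before
    # write_config(), so it returned the deploy directory of the default
    # MACHINE instead of genericx86-64.
    # v02 (fixed): write the configuration, build, then query the variable.
    features = 'IMAGE_GEN_DEBUGFS = "1"\n'
    features += 'IMAGE_FSTYPES_DEBUGFS = "tar.bz2"\n'
    features += 'MACHINE = "genericx86-64"\n'
    self.write_config(features)
    bitbake('core-image-minimal')
    deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')  # now machine-specific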

Best regards,
Yeoh Ee Peng 

-Original Message-
From: Richard Purdie  
Sent: Thursday, April 23, 2020 4:53 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Cc: Humberto Ibarra 
Subject: Re: [OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for 
IMAGE_GEN_DEBUGFS

On Wed, 2020-04-01 at 13:37 +0800, Yeoh Ee Peng wrote:
> Add a new testcase to check IMAGE_GEN_DEBUGFS. The test makes sure that
> the debug filesystem is created accordingly. It also checks for debug
> symbols for some packages, as suggested by Ross Burton.
> 
> [YOCTO #10906]
> 
> Signed-off-by: Humberto Ibarra 
> Signed-off-by: Yeoh Ee Peng 
> ---
>  meta/lib/oeqa/selftest/cases/imagefeatures.py | 33 +++
>  1 file changed, 33 insertions(+)
> 
> diff --git a/meta/lib/oeqa/selftest/cases/imagefeatures.py b/meta/lib/oeqa/selftest/cases/imagefeatures.py
> index 5c519ac..9ad5c17 100644
> --- a/meta/lib/oeqa/selftest/cases/imagefeatures.py
> +++ b/meta/lib/oeqa/selftest/cases/imagefeatures.py
> @@ -262,3 +262,36 @@ PNBLACKLIST[busybox] = "Don't build this"
>          self.write_config(config)
>  
>          bitbake("--graphviz core-image-sato")
> +
> +    def test_image_gen_debugfs(self):
> +        """
> +        Summary:     Check debugfs generation
> +        Expected:    1. core-image-minimal can be built with the IMAGE_GEN_DEBUGFS variable set
> +                     2. debug filesystem is created when the variable is set
> +                     3. debug symbols are available
> +        Product:     oe-core
> +        Author:      Humberto Ibarra
> +                     Yeoh Ee Peng
> +        """
> +        import glob
> +        image_name = 'core-image-minimal'
> +        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
> +
> +        features = 'IMAGE_GEN_DEBUGFS = "1"\n'
> +        features += 'IMAGE_FSTYPES_DEBUGFS = "tar.bz2"\n'
> +        features += 'MACHINE = "genericx86-64"\n'
> +        self.write_config(features)
> +
> +        bitbake(image_name)
> +        dbg_tar_file = os.path.join(deploy_dir_image, "*-dbg.rootfs.tar.bz2")
> +        debug_files = glob.glob(dbg_tar_file)
> +        self.assertNotEqual(len(debug_files), 0, 'debug filesystem not generated')
> +        result = runCmd('cd %s; tar xvf %s' % (deploy_dir_image, dbg_tar_file))
> +        self.assertEqual(result.status, 0, msg='Failed to extract %s: %s' % (dbg_tar_file, result.output))
> +        result = runCmd('find %s -name %s' % (deploy_dir_image, "udevadm"))
> +        self.assertTrue("udevadm" in result.output, msg='Failed to find udevadm: %s' % result.output)
> +        dbg_symbols_targets = result.output.splitlines()
> +        self.assertTrue(dbg_symbols_targets, msg='Failed to split udevadm: %s' % dbg_symbols_targets)
> +        for t in dbg_symbols_targets:
> +            result = runCmd('objdump --syms %s | grep debug' % t)
> +            self.assertTrue("debug" in result.output, msg='Failed to find debug symbol: %s' % result.output)

The test failed on the autobuilder:

https://autobuilder.yoctoproject.org/typhoon/#/builders/79/builds/858
https://autobuilder.yoctoproject.org/typhoon/#/builders/80/builds/855
https://autobuilder.yoctoproject.org/typhoon/#/builders/86/builds/861
https://autobuilder.yoctoproject.org/typhoon/#/builders/87/builds/849

Cheers,

Richard




[OE-core] [PATCH v02] selftest/imagefeatures: Enable sanity test for IMAGE_GEN_DEBUGFS

2020-05-01 Thread Yeoh Ee Peng
Add a new testcase to check IMAGE_GEN_DEBUGFS. The test makes
sure that the debug filesystem is created accordingly. It also checks
for debug symbols for some packages, as suggested by Ross Burton.

[YOCTO #10906]

Signed-off-by: Humberto Ibarra 
Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/selftest/cases/imagefeatures.py | 32 +++
 1 file changed, 32 insertions(+)

diff --git a/meta/lib/oeqa/selftest/cases/imagefeatures.py b/meta/lib/oeqa/selftest/cases/imagefeatures.py
index 5c519ac..2b9c499 100644
--- a/meta/lib/oeqa/selftest/cases/imagefeatures.py
+++ b/meta/lib/oeqa/selftest/cases/imagefeatures.py
@@ -262,3 +262,35 @@ PNBLACKLIST[busybox] = "Don't build this"
         self.write_config(config)
 
         bitbake("--graphviz core-image-sato")
+
+    def test_image_gen_debugfs(self):
+        """
+        Summary:     Check debugfs generation
+        Expected:    1. core-image-minimal can be built with the IMAGE_GEN_DEBUGFS variable set
+                     2. debug filesystem is created when the variable is set
+                     3. debug symbols are available
+        Product:     oe-core
+        Author:      Humberto Ibarra
+                     Yeoh Ee Peng
+        """
+        import glob
+        image_name = 'core-image-minimal'
+        features = 'IMAGE_GEN_DEBUGFS = "1"\n'
+        features += 'IMAGE_FSTYPES_DEBUGFS = "tar.bz2"\n'
+        features += 'MACHINE = "genericx86-64"\n'
+        self.write_config(features)
+
+        bitbake(image_name)
+        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+        dbg_tar_file = os.path.join(deploy_dir_image, "*-dbg.rootfs.tar.bz2")
+        debug_files = glob.glob(dbg_tar_file)
+        self.assertNotEqual(len(debug_files), 0, 'debug filesystem not generated at %s' % dbg_tar_file)
+        result = runCmd('cd %s; tar xvf %s' % (deploy_dir_image, dbg_tar_file))
+        self.assertEqual(result.status, 0, msg='Failed to extract %s: %s' % (dbg_tar_file, result.output))
+        result = runCmd('find %s -name %s' % (deploy_dir_image, "udevadm"))
+        self.assertTrue("udevadm" in result.output, msg='Failed to find udevadm: %s' % result.output)
+        dbg_symbols_targets = result.output.splitlines()
+        self.assertTrue(dbg_symbols_targets, msg='Failed to split udevadm: %s' % dbg_symbols_targets)
+        for t in dbg_symbols_targets:
+            result = runCmd('objdump --syms %s | grep debug' % t)
+            self.assertTrue("debug" in result.output, msg='Failed to find debug symbol: %s' % result.output)
-- 
2.7.4
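
For anyone who wants to reproduce this locally, the test should be runnable on
its own with oe-selftest (assuming an initialized build environment):

    oe-selftest -r imagefeatures.ImageFeatures.test_image_gen_debugfs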



Re: [OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for IMAGE_GEN_DEBUGFS

2020-04-24 Thread Yeoh Ee Peng
Hi Richard,

I reran the imagefeatures tests on master and master-next; here are my 
findings.

On master (commit a44b8d2856a937ca3991cbf566788b0cd541d777), the 
test_image_gen_debugfs test was passing:

2020-04-24 16:05:56,417 - oe-selftest - INFO - test_image_gen_debugfs (imagefeatures.ImageFeatures)
2020-04-24 16:34:54,821 - oe-selftest - INFO -  ... ok
2020-04-24 16:34:54,834 - oe-selftest - INFO - ----------------------------------------------------------------------
2020-04-24 16:34:54,834 - oe-selftest - INFO - Ran 1 test in 1738.418s
2020-04-24 16:34:54,834 - oe-selftest - INFO - OK
2020-04-24 16:35:01,077 - oe-selftest - INFO - RESULTS:
2020-04-24 16:35:01,078 - oe-selftest - INFO - RESULTS - imagefeatures.ImageFeatures.test_image_gen_debugfs: PASSED (1738.40s)
2020-04-24 16:35:01,135 - oe-selftest - INFO - SUMMARY:
2020-04-24 16:35:01,135 - oe-selftest - INFO - oe-selftest () - Ran 1 test in 1740.237s
2020-04-24 16:35:01,136 - oe-selftest - INFO - oe-selftest - OK - All required tests passed (successes=1, skipped=0, failures=0, errors=0)

On master-next (commit a0852af226802e50955e6e5ddd14f773cb42a10f), the test 
consistently failed at bison do_compile:

| gcc  -DEXEEXT=\"\" -I. -I./lib -I../bison-3.5.4 -I../bison-3.5.4/lib -DINSTALLDIR=\"/data/eyeoh7/tmp/poky/build-image-minimal-gen-debugs-master-next-st/tmp/work/x86_64-linux/bison-native/3.5.4-r0/recipe-sysroot-native/usr/bin\" -isystem/data/eyeoh7/tmp/poky/build-image-minimal-gen-debugs-master-next-st/tmp/work/x86_64-linux/bison-native/3.5.4-r0/recipe-sysroot-native/usr/include -isystem/data/eyeoh7/tmp/poky/build-image-minimal-gen-debugs-master-next-st/tmp/work/x86_64-linux/bison-native/3.5.4-r0/recipe-sysroot-native/usr/include -O2 -pipe -c -o src/bison-symtab.o `test -f 'src/symtab.c' || echo '../bison-3.5.4/'`src/symtab.c
| ../bison-3.5.4/lib/fcntl.c: In function 'rpl_fcntl_DUPFD_CLOEXEC':
| ../bison-3.5.4/lib/fcntl.c:507:35: error: 'GNULIB_defined_F_DUPFD_CLOEXEC' undeclared (first use in this function); did you mean 'rpl_fcntl_DUPFD_CLOEXEC'?
|     static int have_dupfd_cloexec = GNULIB_defined_F_DUPFD_CLOEXEC ? -1 : 0;
|                                     ^~
|                                     rpl_fcntl_DUPFD_CLOEXEC
| ../bison-3.5.4/lib/fcntl.c:507:35: note: each undeclared identifier is reported only once for each function it appears in
| Makefile:5414: recipe for target 'lib/libbison_a-fcntl.o' failed
| make: *** [lib/libbison_a-fcntl.o] Error 1
| make: *** Waiting for unfinished jobs
| mv examples/c/reccalc/scan.stamp.tmp examples/c/reccalc/scan.stamp
| WARNING: exit code 1 from a shell command.
|
NOTE: recipe bison-native-3.5.4-r0: task do_compile: Failed
ERROR: Task (virtual:native:/data/eyeoh7/tmp/poky/meta/recipes-devtools/bison/bison_3.5.4.bb:do_compile) failed with exit code '1'

I will continue debugging this.

Best regards,
Yeoh Ee Peng 

-Original Message-
From: Yeoh, Ee Peng 
Sent: Thursday, April 23, 2020 5:33 PM
To: Richard Purdie ; 
openembedded-core@lists.openembedded.org
Subject: RE: [OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for 
IMAGE_GEN_DEBUGFS

Hi Richard,

This was surprising; it looks like the debug filesystem was not being generated 
given the configuration (IMAGE_GEN_DEBUGFS = "1"). This is exactly the type of 
error that this automated test was designed to catch. 

I shall debug this on master and potentially master-next. 

Thanks,
Yeoh Ee Peng 

2020-04-23 03:57:33,491 - oe-selftest - INFO - ======================================================================
2020-04-23 03:57:33,491 - oe-selftest - INFO - FAIL: imagefeatures.ImageFeatures.test_image_gen_debugfs (subunit.RemotedTestCase)
2020-04-23 03:57:33,491 - oe-selftest - INFO - ----------------------------------------------------------------------
2020-04-23 03:57:33,491 - oe-selftest - INFO - testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/pokybuild/yocto-worker/oe-selftest-fedora/build/meta/lib/oeqa/selftest/cases/imagefeatures.py", line 288, in test_image_gen_debugfs
    self.assertNotEqual(len(debug_files), 0, 'debug filesystem not generated')
  File "/usr/lib64/python3.7/unittest/case.py", line 861, in assertNotEqual
    raise self.failureException(msg)
AssertionError: 0 == 0 : debug filesystem not generated
-Original Message-
From: Richard Purdie 
Sent: Thursday, April 23, 2020 4:53 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Cc: Humberto Ibarra 
Subject: Re: [OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for 
IMAGE_GEN_DEBUGFS

On Wed, 2020-04-01 at 13:37 +0800, Yeoh Ee Peng wrote:
> Add a new testcase to check IMAGE_GEN_DEBUGFS. The test makes sure that
> the debug filesystem is created accordingly. It also checks for debug
> symbols for some packages, as suggested by Ross Burton.
> 
> [YOCTO #10906]
> 
> Signed-off-by: Humberto Ibarra
> Signed-off-by: Yeoh Ee Peng

Re: [OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for IMAGE_GEN_DEBUGFS

2020-04-23 Thread Yeoh Ee Peng
Hi Richard,

This was surprising; it looks like the debug filesystem was not being generated 
given the configuration (IMAGE_GEN_DEBUGFS = "1"). This is exactly the type of 
error that this automated test was designed to catch. 

I shall debug this on master and potentially master-next. 

Thanks,
Yeoh Ee Peng 

2020-04-23 03:57:33,491 - oe-selftest - INFO - ======================================================================
2020-04-23 03:57:33,491 - oe-selftest - INFO - FAIL: imagefeatures.ImageFeatures.test_image_gen_debugfs (subunit.RemotedTestCase)
2020-04-23 03:57:33,491 - oe-selftest - INFO - ----------------------------------------------------------------------
2020-04-23 03:57:33,491 - oe-selftest - INFO - testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/pokybuild/yocto-worker/oe-selftest-fedora/build/meta/lib/oeqa/selftest/cases/imagefeatures.py", line 288, in test_image_gen_debugfs
    self.assertNotEqual(len(debug_files), 0, 'debug filesystem not generated')
  File "/usr/lib64/python3.7/unittest/case.py", line 861, in assertNotEqual
    raise self.failureException(msg)
AssertionError: 0 == 0 : debug filesystem not generated

-Original Message-
From: Richard Purdie  
Sent: Thursday, April 23, 2020 4:53 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Cc: Humberto Ibarra 
Subject: Re: [OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for 
IMAGE_GEN_DEBUGFS

On Wed, 2020-04-01 at 13:37 +0800, Yeoh Ee Peng wrote:
> Add a new testcase to check IMAGE_GEN_DEBUGFS. The test makes sure that
> the debug filesystem is created accordingly. It also checks for debug
> symbols for some packages, as suggested by Ross Burton.
> 
> [YOCTO #10906]
> 
> Signed-off-by: Humberto Ibarra 
> Signed-off-by: Yeoh Ee Peng 
> ---
>  meta/lib/oeqa/selftest/cases/imagefeatures.py | 33 +++
>  1 file changed, 33 insertions(+)
> 
> diff --git a/meta/lib/oeqa/selftest/cases/imagefeatures.py b/meta/lib/oeqa/selftest/cases/imagefeatures.py
> index 5c519ac..9ad5c17 100644
> --- a/meta/lib/oeqa/selftest/cases/imagefeatures.py
> +++ b/meta/lib/oeqa/selftest/cases/imagefeatures.py
> @@ -262,3 +262,36 @@ PNBLACKLIST[busybox] = "Don't build this"
>          self.write_config(config)
>  
>          bitbake("--graphviz core-image-sato")
> +
> +    def test_image_gen_debugfs(self):
> +        """
> +        Summary:     Check debugfs generation
> +        Expected:    1. core-image-minimal can be built with the IMAGE_GEN_DEBUGFS variable set
> +                     2. debug filesystem is created when the variable is set
> +                     3. debug symbols are available
> +        Product:     oe-core
> +        Author:      Humberto Ibarra
> +                     Yeoh Ee Peng
> +        """
> +        import glob
> +        image_name = 'core-image-minimal'
> +        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
> +
> +        features = 'IMAGE_GEN_DEBUGFS = "1"\n'
> +        features += 'IMAGE_FSTYPES_DEBUGFS = "tar.bz2"\n'
> +        features += 'MACHINE = "genericx86-64"\n'
> +        self.write_config(features)
> +
> +        bitbake(image_name)
> +        dbg_tar_file = os.path.join(deploy_dir_image, "*-dbg.rootfs.tar.bz2")
> +        debug_files = glob.glob(dbg_tar_file)
> +        self.assertNotEqual(len(debug_files), 0, 'debug filesystem not generated')
> +        result = runCmd('cd %s; tar xvf %s' % (deploy_dir_image, dbg_tar_file))
> +        self.assertEqual(result.status, 0, msg='Failed to extract %s: %s' % (dbg_tar_file, result.output))
> +        result = runCmd('find %s -name %s' % (deploy_dir_image, "udevadm"))
> +        self.assertTrue("udevadm" in result.output, msg='Failed to find udevadm: %s' % result.output)
> +        dbg_symbols_targets = result.output.splitlines()
> +        self.assertTrue(dbg_symbols_targets, msg='Failed to split udevadm: %s' % dbg_symbols_targets)
> +        for t in dbg_symbols_targets:
> +            result = runCmd('objdump --syms %s | grep debug' % t)
> +            self.assertTrue("debug" in result.output, msg='Failed to find debug symbol: %s' % result.output)

The test failed on the autobuilder:

https://autobuilder.yoctoproject.org/typhoon/#/builders/79/builds/858
https://autobuilder.yoctoproject.org/typhoon/#/builders/80/builds/855
https://autobuilder.yoctoproject.org/typhoon/#/builders/86/builds/861
https://autobuilder.yoctoproject.org/typhoon/#/builders/87/builds/849

Cheers,

Richard




Re: [OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for IMAGE_GEN_DEBUGFS

2020-04-20 Thread Yeoh Ee Peng
Hi all,

Does anyone have any input or suggestions for this patch, which enables a 
selftest for IMAGE_GEN_DEBUGFS?
Thank you very much for your attention and help!

Best regards,
Yeoh Ee Peng

-Original Message-
From: Yeoh, Ee Peng  
Sent: Wednesday, April 1, 2020 1:38 PM
To: openembedded-core@lists.openembedded.org
Cc: Yeoh, Ee Peng ; Humberto Ibarra 

Subject: [PATCH] selftest/imagefeatures: Enable sanity test for 
IMAGE_GEN_DEBUGFS

Add a new testcase to check IMAGE_GEN_DEBUGFS. The test makes sure that the 
debug filesystem is created accordingly. It also checks for debug symbols for 
some packages, as suggested by Ross Burton.

[YOCTO #10906]

Signed-off-by: Humberto Ibarra 
Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/selftest/cases/imagefeatures.py | 33 +++
 1 file changed, 33 insertions(+)

diff --git a/meta/lib/oeqa/selftest/cases/imagefeatures.py b/meta/lib/oeqa/selftest/cases/imagefeatures.py
index 5c519ac..9ad5c17 100644
--- a/meta/lib/oeqa/selftest/cases/imagefeatures.py
+++ b/meta/lib/oeqa/selftest/cases/imagefeatures.py
@@ -262,3 +262,36 @@ PNBLACKLIST[busybox] = "Don't build this"
         self.write_config(config)
 
         bitbake("--graphviz core-image-sato")
+
+    def test_image_gen_debugfs(self):
+        """
+        Summary:     Check debugfs generation
+        Expected:    1. core-image-minimal can be built with the IMAGE_GEN_DEBUGFS variable set
+                     2. debug filesystem is created when the variable is set
+                     3. debug symbols are available
+        Product:     oe-core
+        Author:      Humberto Ibarra
+                     Yeoh Ee Peng
+        """
+        import glob
+        image_name = 'core-image-minimal'
+        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+
+        features = 'IMAGE_GEN_DEBUGFS = "1"\n'
+        features += 'IMAGE_FSTYPES_DEBUGFS = "tar.bz2"\n'
+        features += 'MACHINE = "genericx86-64"\n'
+        self.write_config(features)
+
+        bitbake(image_name)
+        dbg_tar_file = os.path.join(deploy_dir_image, "*-dbg.rootfs.tar.bz2")
+        debug_files = glob.glob(dbg_tar_file)
+        self.assertNotEqual(len(debug_files), 0, 'debug filesystem not generated')
+        result = runCmd('cd %s; tar xvf %s' % (deploy_dir_image, dbg_tar_file))
+        self.assertEqual(result.status, 0, msg='Failed to extract %s: %s' % (dbg_tar_file, result.output))
+        result = runCmd('find %s -name %s' % (deploy_dir_image, "udevadm"))
+        self.assertTrue("udevadm" in result.output, msg='Failed to find udevadm: %s' % result.output)
+        dbg_symbols_targets = result.output.splitlines()
+        self.assertTrue(dbg_symbols_targets, msg='Failed to split udevadm: %s' % dbg_symbols_targets)
+        for t in dbg_symbols_targets:
+            result = runCmd('objdump --syms %s | grep debug' % t)
+            self.assertTrue("debug" in result.output, msg='Failed to find debug symbol: %s' % result.output)
--
2.7.4



Re: [OE-core] [PATCH] oeqa/runtime/weston: Enhance weston tests

2020-04-20 Thread Yeoh Ee Peng
Hi Richard,

I found that I had missed the DISTRO_FEATURES_remove = "x11" configuration 
during previous development and testing. After incorporating 
DISTRO_FEATURES_remove = "x11" I was able to reproduce the issue. I have tested 
and submitted the v03 patch. 
Sorry for my mistake. 

https://lists.openembedded.org/g/openembedded-core/message/137332
https://lists.openembedded.org/g/openembedded-core/message/137333

v03:
 - the previous patch missed the DISTRO_FEATURES_remove = "x11"
   configuration when building core-image-weston, so the test
   was using the x11 DISPLAY to initialize the new wayland
   compositor
 - replace the environment setup 'export DISPLAY=:0', which
   uses x11, with 'export WAYLAND_DISPLAY=wayland-0', since the
   target core-image-weston has x11 removed
 - include logging of the wayland compositor initialization
   to capture details for debugging in case the test fails;
   the resulting setup is sketched below
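
For reference, the environment setup the test now performs amounts to the
following (a sketch assembled from get_weston_command() and run_weston_init()
in the v03 patch):

    export XDG_RUNTIME_DIR=/run/user/0
    export WAYLAND_DISPLAY=wayland-0
    weston --log=/tmp/weston.log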

Thanks,
Ee Peng

-Original Message-
From: Richard Purdie  
Sent: Friday, April 17, 2020 10:24 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH] oeqa/runtime/weston: Enhance weston tests

Hi Ee Peng,

On Fri, 2020-04-17 at 01:53 +, Yeoh, Ee Peng wrote:
> Hi Richard,
> 
> After more testing, I realized that each host machine takes a different
> amount of time to initialize a new wayland compositor. I have added logic
> to retry the check for the new wayland compositor to accommodate these
> host machine differences. I have submitted the v02 patch.
> 
> Thank you very much!

I did try the v2 patch but it still failed unfortunately:

https://autobuilder.yoctoproject.org/typhoon/#/builders/40/builds/1788

Cheers,

Richard



[OE-core] [PATCH v03] oeqa/runtime/weston: Enhance weston tests

2020-04-20 Thread Yeoh Ee Peng
The existing weston test makes sure that a process for
weston-desktop-shell exists when the image boots up.

Enhance the weston tests by:
 - execute weston-info to make sure weston interface(s)
   are initialized
 - execute weston and make sure it can initialize a
   new wayland compositor (retry checking for
   wayland processes up to 5 times)
 - enable weston logging for debugging when it fails
   to initialize the wayland compositor

[YOCTO# 10690]

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/runtime/cases/weston.py | 50 +++
 1 file changed, 50 insertions(+)

diff --git a/meta/lib/oeqa/runtime/cases/weston.py b/meta/lib/oeqa/runtime/cases/weston.py
index f32599a..ac29eca 100644
--- a/meta/lib/oeqa/runtime/cases/weston.py
+++ b/meta/lib/oeqa/runtime/cases/weston.py
@@ -6,8 +6,15 @@ from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
 from oeqa.core.decorator.data import skipIfNotFeature
 from oeqa.runtime.decorator.package import OEHasPackage
+import threading
+import time
 
 class WestonTest(OERuntimeTestCase):
+    weston_log_file = '/tmp/weston.log'
+
+    @classmethod
+    def tearDownClass(cls):
+        cls.tc.target.run('rm %s' % cls.weston_log_file)
 
     @OETestDepends(['ssh.SSHTest.test_ssh'])
     @OEHasPackage(['weston'])
@@ -17,3 +24,46 @@ class WestonTest(OERuntimeTestCase):
         msg = ('Weston does not appear to be running %s' %
               self.target.run(self.tc.target_cmds['ps'])[1])
         self.assertEqual(status, 0, msg=msg)
+
+    def get_processes_of(self, target, error_msg):
+        status, output = self.target.run('pidof %s' % target)
+        self.assertEqual(status, 0, msg='Retrieve %s (%s) processes error: %s' % (target, error_msg, output))
+        return output.split(" ")
+
+    def get_weston_command(self, cmd):
+        return 'export XDG_RUNTIME_DIR=/run/user/0; export WAYLAND_DISPLAY=wayland-0; %s' % cmd
+
+    def run_weston_init(self):
+        self.target.run(self.get_weston_command('weston --log=%s' % self.weston_log_file))
+
+    def get_new_wayland_processes(self, existing_wl_processes):
+        try_cnt = 0
+        while try_cnt < 5:
+            time.sleep(5 + 5*try_cnt)
+            try_cnt += 1
+            wl_processes = self.get_processes_of('weston-desktop-shell', 'existing and new')
+            new_wl_processes = [x for x in wl_processes if x not in existing_wl_processes]
+            if new_wl_processes:
+                return new_wl_processes, try_cnt
+
+        return new_wl_processes, try_cnt
+
+    @OEHasPackage(['weston'])
+    def test_weston_info(self):
+        status, output = self.target.run(self.get_weston_command('weston-info'))
+        self.assertEqual(status, 0, msg='weston-info error: %s' % output)
+
+    @OEHasPackage(['weston'])
+    def test_weston_can_initialize_new_wayland_compositor(self):
+        existing_wl_processes = self.get_processes_of('weston-desktop-shell', 'existing')
+        existing_weston_processes = self.get_processes_of('weston', 'existing')
+
+        weston_thread = threading.Thread(target=self.run_weston_init)
+        weston_thread.start()
+        new_wl_processes, try_cnt = self.get_new_wayland_processes(existing_wl_processes)
+        existing_and_new_weston_processes = self.get_processes_of('weston', 'existing and new')
+        new_weston_processes = [x for x in existing_and_new_weston_processes if x not in existing_weston_processes]
+        for w in new_weston_processes:
+            self.target.run('kill -9 %s' % w)
+        __, weston_log = self.target.run('cat %s' % self.weston_log_file)
+        self.assertTrue(new_wl_processes, msg='Could not get new weston-desktop-shell processes (%s, try_cnt:%s) weston log: %s' % (new_wl_processes, try_cnt, weston_log))
-- 
2.7.4
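
As a side note, these runtime tests are exercised through testimage; assuming
an image built with the weston suite selected (e.g. via TEST_SUITES), something
like the following should run them:

    bitbake core-image-weston -c testimage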



[OE-core] [PATCH] enhanced weston automated tests

2020-04-20 Thread Yeoh Ee Peng
v02:
 - add retry checking for wayland processes up to 5 times, as
   it takes a different amount of time to initialize the wayland
   compositor inside qemu depending on the host machine

v03:
 - the previous patch missed the DISTRO_FEATURES_remove = "x11"
   configuration when building core-image-weston, so the test
   was using the x11 DISPLAY to initialize the new wayland
   compositor
 - replace the environment setup 'export DISPLAY=:0', which
   uses x11, with 'export WAYLAND_DISPLAY=wayland-0', since the
   target core-image-weston has x11 removed
 - include logging of the wayland compositor initialization
   to capture details for debugging in case the test fails

Yeoh Ee Peng (1):
  oeqa/runtime/weston: Enhance weston tests

 meta/lib/oeqa/runtime/cases/weston.py | 50 +++
 1 file changed, 50 insertions(+)

-- 
2.7.4



Re: [OE-core] [PATCH] oeqa/runtime/weston: Enhance weston tests

2020-04-16 Thread Yeoh Ee Peng
Hi Richard,

After more testing, I realized that each host machine takes a different amount 
of time to initialize a new wayland compositor. I have added logic to retry the 
check for the new wayland compositor to accommodate these host machine 
differences. I have submitted the v02 patch. 

Thank you very much!

Thanks,
Yeoh Ee Peng 

-Original Message-
From: Richard Purdie  
Sent: Friday, April 17, 2020 6:35 AM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH] oeqa/runtime/weston: Enhance weston tests

On Mon, 2020-04-13 at 16:49 +0800, Yeoh Ee Peng wrote:
> The existing weston test makes sure that a process for
> weston-desktop-shell exists when the image boots up.
> 
> Enhance the weston tests by:
>  - execute weston-info to make sure weston interface(s)
>    are initialized
>  - execute weston and make sure it can initialize a
>    new wayland compositor
> 
> [YOCTO# 10690]
> 
> Signed-off-by: Yeoh Ee Peng 

I think this fails under testing:

https://autobuilder.yoctoproject.org/typhoon/#/builders/40/builds/1785

Cheers,

Richard



[OE-core] [PATCH v02] oeqa/runtime/weston: Enhance weston tests

2020-04-16 Thread Yeoh Ee Peng
The existing weston test makes sure that a process for
weston-desktop-shell exists when the image boots up.

Enhance the weston tests by:
 - execute weston-info to make sure weston interface(s)
   are initialized
 - execute weston and make sure it can initialize a
   new wayland compositor (retry checking for
   wayland processes up to 5 times)

[YOCTO# 10690]

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/runtime/cases/weston.py | 40 +++
 1 file changed, 40 insertions(+)

diff --git a/meta/lib/oeqa/runtime/cases/weston.py b/meta/lib/oeqa/runtime/cases/weston.py
index f32599a..5c01765 100644
--- a/meta/lib/oeqa/runtime/cases/weston.py
+++ b/meta/lib/oeqa/runtime/cases/weston.py
@@ -6,6 +6,8 @@ from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
 from oeqa.core.decorator.data import skipIfNotFeature
 from oeqa.runtime.decorator.package import OEHasPackage
+import threading
+import time
 
 class WestonTest(OERuntimeTestCase):
 
@@ -17,3 +19,41 @@ class WestonTest(OERuntimeTestCase):
         msg = ('Weston does not appear to be running %s' %
               self.target.run(self.tc.target_cmds['ps'])[1])
         self.assertEqual(status, 0, msg=msg)
+
+    def get_weston_command(self, cmd):
+        return 'export XDG_RUNTIME_DIR=/run/user/0; export DISPLAY=:0; %s' % cmd
+
+    def run_weston_init(self):
+        self.target.run(self.get_weston_command('weston'))
+
+    def get_new_wayland_process(self, existing_wl_processes):
+        try_cnt = 0
+        while try_cnt < 5:
+            time.sleep(5 + 5*try_cnt)
+            try_cnt += 1
+            status, output = self.target.run('pidof weston-desktop-shell')
+            self.assertEqual(status, 0, msg='Retrieve existing and new weston-desktop-shell processes error: %s' % output)
+            wl_processes = output.split(" ")
+            new_wl_processes = [x for x in wl_processes if x not in existing_wl_processes]
+            if new_wl_processes:
+                return new_wl_processes, try_cnt
+
+        return new_wl_processes, try_cnt
+
+    @OEHasPackage(['weston'])
+    def test_weston_info(self):
+        status, output = self.target.run(self.get_weston_command('weston-info'))
+        self.assertEqual(status, 0, msg='weston-info error: %s' % output)
+
+    @OEHasPackage(['weston'])
+    def test_weston_can_initialize_new_wayland_compositor(self):
+        status, output = self.target.run('pidof weston-desktop-shell')
+        self.assertEqual(status, 0, msg='Retrieve existing weston-desktop-shell processes error: %s' % output)
+        existing_wl_processes = output.split(" ")
+
+        weston_thread = threading.Thread(target=self.run_weston_init)
+        weston_thread.start()
+        new_wl_processes, try_cnt = self.get_new_wayland_process(existing_wl_processes)
+        self.assertTrue(new_wl_processes, msg='Check new weston-desktop-shell processes error: %s (try_cnt:%s)' % (new_wl_processes, try_cnt))
+        for wl in new_wl_processes:
+            self.target.run('kill -9 %s' % wl)
-- 
2.7.4



[OE-core] [PATCH] oeqa/runtime/weston: Enhance weston tests

2020-04-13 Thread Yeoh Ee Peng
The existing weston test makes sure that a process for
weston-desktop-shell exists when the image boots up.

Enhance the weston tests by:
 - execute weston-info to make sure weston interface(s)
   are initialized
 - execute weston and make sure it can initialize a
   new wayland compositor

[YOCTO# 10690]

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/runtime/cases/weston.py | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/meta/lib/oeqa/runtime/cases/weston.py b/meta/lib/oeqa/runtime/cases/weston.py
index f32599a..f79ed64 100644
--- a/meta/lib/oeqa/runtime/cases/weston.py
+++ b/meta/lib/oeqa/runtime/cases/weston.py
@@ -6,6 +6,8 @@ from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
 from oeqa.core.decorator.data import skipIfNotFeature
 from oeqa.runtime.decorator.package import OEHasPackage
+import threading
+import time
 
 class WestonTest(OERuntimeTestCase):
 
@@ -17,3 +19,31 @@ class WestonTest(OERuntimeTestCase):
         msg = ('Weston does not appear to be running %s' %
               self.target.run(self.tc.target_cmds['ps'])[1])
         self.assertEqual(status, 0, msg=msg)
+
+    def get_weston_command(self, cmd):
+        return 'export XDG_RUNTIME_DIR=/run/user/0; export DISPLAY=:0; %s' % cmd
+
+    def run_weston_init(self):
+        self.target.run(self.get_weston_command('weston'))
+
+    @OEHasPackage(['weston'])
+    def test_weston_info(self):
+        status, output = self.target.run(self.get_weston_command('weston-info'))
+        self.assertEqual(status, 0, msg='weston-info error: %s' % output)
+
+    @OEHasPackage(['weston'])
+    def test_weston_can_initialize_new_wayland_compositor(self):
+        status, output = self.target.run('pidof weston-desktop-shell')
+        self.assertEqual(status, 0, msg='Retrieve existing weston-desktop-shell processes error: %s' % output)
+        existing_wl_processes = output.split(" ")
+
+        weston_thread = threading.Thread(target=self.run_weston_init)
+        weston_thread.start()
+        time.sleep(5)
+        status, output = self.target.run('pidof weston-desktop-shell')
+        self.assertEqual(status, 0, msg='Retrieve existing and new weston-desktop-shell processes error: %s' % output)
+        wl_processes = output.split(" ")
+        new_wl_processes = [x for x in wl_processes if x not in existing_wl_processes]
+        self.assertTrue(new_wl_processes, msg='Check new weston-desktop-shell processes error: %s' % new_wl_processes)
+        for wl in new_wl_processes:
+            self.target.run('kill -9 %s' % wl)
-- 
2.7.4



Re: [OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for IMAGE_GEN_DEBUGFS

2020-03-31 Thread Yeoh Ee Peng
Hi Ross,

This is a follow-up patch to enable a sanity test for IMAGE_GEN_DEBUGFS. You 
provided a review for this patch in the past. Could you take a look and give 
us your input? Thank you very much for your attention and help!
https://lists.openembedded.org/g/openembedded-core/message/106075?p=,,,20,0,0,0::Created,,IMAGE_GEN_DEBUGFS,20,2,20,72347194

Thanks,
Ee Peng 

-Original Message-
From: openembedded-core@lists.openembedded.org 
 On Behalf Of Yeoh Ee Peng
Sent: Wednesday, April 1, 2020 1:38 PM
To: openembedded-core@lists.openembedded.org
Cc: Yeoh, Ee Peng ; Humberto Ibarra 

Subject: [OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for 
IMAGE_GEN_DEBUGFS

Add a new testcase to check IMAGE_GEN_DEBUGFS. The test makes sure that the 
debug filesystem is created accordingly. It also checks for debug symbols for 
some packages, as suggested by Ross Burton.

[YOCTO #10906]

Signed-off-by: Humberto Ibarra 
Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/selftest/cases/imagefeatures.py | 33 +++
 1 file changed, 33 insertions(+)

diff --git a/meta/lib/oeqa/selftest/cases/imagefeatures.py b/meta/lib/oeqa/selftest/cases/imagefeatures.py
index 5c519ac..9ad5c17 100644
--- a/meta/lib/oeqa/selftest/cases/imagefeatures.py
+++ b/meta/lib/oeqa/selftest/cases/imagefeatures.py
@@ -262,3 +262,36 @@ PNBLACKLIST[busybox] = "Don't build this"
         self.write_config(config)
 
         bitbake("--graphviz core-image-sato")
+
+    def test_image_gen_debugfs(self):
+        """
+        Summary:     Check debugfs generation
+        Expected:    1. core-image-minimal can be built with the IMAGE_GEN_DEBUGFS variable set
+                     2. debug filesystem is created when the variable is set
+                     3. debug symbols are available
+        Product:     oe-core
+        Author:      Humberto Ibarra
+                     Yeoh Ee Peng
+        """
+        import glob
+        image_name = 'core-image-minimal'
+        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+
+        features = 'IMAGE_GEN_DEBUGFS = "1"\n'
+        features += 'IMAGE_FSTYPES_DEBUGFS = "tar.bz2"\n'
+        features += 'MACHINE = "genericx86-64"\n'
+        self.write_config(features)
+
+        bitbake(image_name)
+        dbg_tar_file = os.path.join(deploy_dir_image, "*-dbg.rootfs.tar.bz2")
+        debug_files = glob.glob(dbg_tar_file)
+        self.assertNotEqual(len(debug_files), 0, 'debug filesystem not generated')
+        result = runCmd('cd %s; tar xvf %s' % (deploy_dir_image, dbg_tar_file))
+        self.assertEqual(result.status, 0, msg='Failed to extract %s: %s' % (dbg_tar_file, result.output))
+        result = runCmd('find %s -name %s' % (deploy_dir_image, "udevadm"))
+        self.assertTrue("udevadm" in result.output, msg='Failed to find udevadm: %s' % result.output)
+        dbg_symbols_targets = result.output.splitlines()
+        self.assertTrue(dbg_symbols_targets, msg='Failed to split udevadm: %s' % dbg_symbols_targets)
+        for t in dbg_symbols_targets:
+            result = runCmd('objdump --syms %s | grep debug' % t)
+            self.assertTrue("debug" in result.output, msg='Failed to find debug symbol: %s' % result.output)
--
2.7.4



[OE-core] [PATCH] selftest/imagefeatures: Enable sanity test for IMAGE_GEN_DEBUGFS

2020-03-31 Thread Yeoh Ee Peng
Add a new testcase to check IMAGE_GEN_DEBUGFS. The test makes
sure that the debug filesystem is created accordingly. It also checks
for debug symbols for some packages, as suggested by Ross Burton.

[YOCTO #10906]

Signed-off-by: Humberto Ibarra 
Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/selftest/cases/imagefeatures.py | 33 +++
 1 file changed, 33 insertions(+)

diff --git a/meta/lib/oeqa/selftest/cases/imagefeatures.py b/meta/lib/oeqa/selftest/cases/imagefeatures.py
index 5c519ac..9ad5c17 100644
--- a/meta/lib/oeqa/selftest/cases/imagefeatures.py
+++ b/meta/lib/oeqa/selftest/cases/imagefeatures.py
@@ -262,3 +262,36 @@ PNBLACKLIST[busybox] = "Don't build this"
         self.write_config(config)
 
         bitbake("--graphviz core-image-sato")
+
+    def test_image_gen_debugfs(self):
+        """
+        Summary:     Check debugfs generation
+        Expected:    1. core-image-minimal can be built with the IMAGE_GEN_DEBUGFS variable set
+                     2. debug filesystem is created when the variable is set
+                     3. debug symbols are available
+        Product:     oe-core
+        Author:      Humberto Ibarra
+                     Yeoh Ee Peng
+        """
+        import glob
+        image_name = 'core-image-minimal'
+        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+
+        features = 'IMAGE_GEN_DEBUGFS = "1"\n'
+        features += 'IMAGE_FSTYPES_DEBUGFS = "tar.bz2"\n'
+        features += 'MACHINE = "genericx86-64"\n'
+        self.write_config(features)
+
+        bitbake(image_name)
+        dbg_tar_file = os.path.join(deploy_dir_image, "*-dbg.rootfs.tar.bz2")
+        debug_files = glob.glob(dbg_tar_file)
+        self.assertNotEqual(len(debug_files), 0, 'debug filesystem not generated')
+        result = runCmd('cd %s; tar xvf %s' % (deploy_dir_image, dbg_tar_file))
+        self.assertEqual(result.status, 0, msg='Failed to extract %s: %s' % (dbg_tar_file, result.output))
+        result = runCmd('find %s -name %s' % (deploy_dir_image, "udevadm"))
+        self.assertTrue("udevadm" in result.output, msg='Failed to find udevadm: %s' % result.output)
+        dbg_symbols_targets = result.output.splitlines()
+        self.assertTrue(dbg_symbols_targets, msg='Failed to split udevadm: %s' % dbg_symbols_targets)
+        for t in dbg_symbols_targets:
+            result = runCmd('objdump --syms %s | grep debug' % t)
+            self.assertTrue("debug" in result.output, msg='Failed to find debug symbol: %s' % result.output)
-- 
2.7.4



[OE-core] [PATCH] scripts/lib/resulttool/report: Enable report selected test case result

2020-01-31 Thread Yeoh Ee Peng
Enable reporting of a selected test case result, given a user-provided
test case id. If both a test result id and a test case id are provided,
report the selected test case result from the selected test result id.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/report.py | 33 -
 1 file changed, 28 insertions(+), 5 deletions(-)

diff --git a/scripts/lib/resulttool/report.py b/scripts/lib/resulttool/report.py
index 692dd7a..7ceceac 100644
--- a/scripts/lib/resulttool/report.py
+++ b/scripts/lib/resulttool/report.py
@@ -212,7 +212,21 @@ class ResultsTextReport(object):
                                  maxlen=maxlen)
         print(output)
 
-    def view_test_report(self, logger, source_dir, branch, commit, tag, use_regression_map, raw_test):
+    def view_test_report(self, logger, source_dir, branch, commit, tag, use_regression_map, raw_test, selected_test_case_only):
+        def print_selected_testcase_result(testresults, selected_test_case_only):
+            for testsuite in testresults:
+                for resultid in testresults[testsuite]:
+                    result = testresults[testsuite][resultid]['result']
+                    test_case_result = result.get(selected_test_case_only, {})
+                    if test_case_result.get('status'):
+                        print('Found selected test case result for %s from %s' % (selected_test_case_only, resultid))
+                        print(test_case_result['status'])
+                    else:
+                        print('Could not find selected test case result for %s from %s' % (selected_test_case_only, resultid))
+                    if test_case_result.get('log'):
+                        print(test_case_result['log'])
         test_count_reports = []
         configmap = resultutils.store_map
         if use_regression_map:
@@ -235,12 +249,18 @@ class ResultsTextReport(object):
             for testsuite in testresults:
                 result = testresults[testsuite].get(raw_test, {})
                 if result:
-                    raw_results[testsuite] = result
+                    raw_results[testsuite] = {raw_test: result}
             if raw_results:
-                print(json.dumps(raw_results, sort_keys=True, indent=4))
+                if selected_test_case_only:
+                    print_selected_testcase_result(raw_results, selected_test_case_only)
+                else:
+                    print(json.dumps(raw_results, sort_keys=True, indent=4))
             else:
                 print('Could not find raw test result for %s' % raw_test)
             return 0
+        if selected_test_case_only:
+            print_selected_testcase_result(testresults, selected_test_case_only)
+            return 0
         for testsuite in testresults:
             for resultid in testresults[testsuite]:
                 skip = False
@@ -268,7 +288,7 @@ class ResultsTextReport(object):
 def report(args, logger):
     report = ResultsTextReport()
     report.view_test_report(logger, args.source_dir, args.branch, args.commit, args.tag, args.use_regression_map,
-                            args.raw_test_only)
+                            args.raw_test_only, args.selected_test_case_only)
     return 0
 
 def register_commands(subparsers):
@@ -287,4 +307,7 @@ def register_commands(subparsers):
                               help='instead of the default "store_map", use the "regression_map" for report')
     parser_build.add_argument('-r', '--raw_test_only', default='',
                               help='output raw test result only for the user provided test result id')
-
+    parser_build.add_argument('-s', '--selected_test_case_only', default='',
+                              help='output selected test case result for the user provided test case id, if both test '
+                                   'result id and test case id are provided then output the selected test case result '
+                                   'from the provided test result id')
-- 
2.7.4
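
A hypothetical invocation (the path and ids are placeholders):

    resulttool report /path/to/testresults -s <test_case_id>
    resulttool report /path/to/testresults -r <test_result_id> -s <test_case_id>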



[OE-core] [PATCH 3/4] scripts/resulttool/report: Add total statistic to test result.

2019-11-07 Thread Yeoh Ee Peng
Add total passed, failed, and skipped statistics to the test result report.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/report.py  | 5 +
 scripts/lib/resulttool/template/test_report_full_text.txt | 3 ++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/scripts/lib/resulttool/report.py b/scripts/lib/resulttool/report.py
index 0c83fb6..692dd7a 100644
--- a/scripts/lib/resulttool/report.py
+++ b/scripts/lib/resulttool/report.py
@@ -186,6 +186,10 @@ class ResultsTextReport(object):
                 havefailed = True
             if line['machine'] not in machines:
                 machines.append(line['machine'])
+        reporttotalvalues = {}
+        for k in cols:
+            reporttotalvalues[k] = '%s' % sum([line[k] for line in test_count_reports])
+        reporttotalvalues['count'] = '%s' % len(test_count_reports)
         for (machine, report) in self.ptests.items():
             for ptest in self.ptests[machine]:
                 if len(ptest) > maxlen['ptest']:
@@ -199,6 +203,7 @@ class ResultsTextReport(object):
             if len(ltpposixtest) > maxlen['ltpposixtest']:
                 maxlen['ltpposixtest'] = len(ltpposixtest)
         output = template.render(reportvalues=reportvalues,
+                                 reporttotalvalues=reporttotalvalues,
                                  havefailed=havefailed,
                                  machines=machines,
                                  ptests=self.ptests,
diff --git a/scripts/lib/resulttool/template/test_report_full_text.txt b/scripts/lib/resulttool/template/test_report_full_text.txt
index 17c99cb..2efba2e 100644
--- a/scripts/lib/resulttool/template/test_report_full_text.txt
+++ b/scripts/lib/resulttool/template/test_report_full_text.txt
@@ -8,7 +8,8 @@ Test Result Status Summary (Counts/Percentages sorted by testseries, ID)
 {{ report.testseries.ljust(maxlen['testseries']) }} | {{ report.result_id.ljust(maxlen['result_id']) }} | {{ (report.passed|string).ljust(maxlen['passed']) }} | {{ (report.failed|string).ljust(maxlen['failed']) }} | {{ (report.skipped|string).ljust(maxlen['skipped']) }}
 {% endfor %}
 
---------------------------------------------------------------------------------------------------------------
-
+{{ 'Total'.ljust(maxlen['testseries']) }} | {{ reporttotalvalues['count'].ljust(maxlen['result_id']) }} | {{ reporttotalvalues['passed'].ljust(maxlen['passed']) }} | {{ reporttotalvalues['failed'].ljust(maxlen['failed']) }} | {{ reporttotalvalues['skipped'].ljust(maxlen['skipped']) }}
+--------------------------------------------------------------------------------------------------------------
 
 {% for machine in machines %}
 {% if ptests[machine] %}
-- 
2.7.4



[OE-core] [PATCH 4/4] resulttool/store.py: Enable add extra test environment data

2019-11-07 Thread Yeoh Ee Peng
Enable an optional argument to add extra test environment data to the
configuration of each test result.

Examples of optional test environment data include:
- custom packages included for the runtime test
- detailed machine specification used as the target
- detailed host environment used for bitbake

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/store.py | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/scripts/lib/resulttool/store.py b/scripts/lib/resulttool/store.py
index 79c83dd..e0951f0 100644
--- a/scripts/lib/resulttool/store.py
+++ b/scripts/lib/resulttool/store.py
@@ -24,6 +24,8 @@ def store(args, logger):
     configvars = resultutils.extra_configvars.copy()
     if args.executed_by:
         configvars['EXECUTED_BY'] = args.executed_by
+    if args.extra_test_env:
+        configvars['EXTRA_TEST_ENV'] = args.extra_test_env
     results = {}
     logger.info('Reading files from %s' % args.source)
     if resultutils.is_url(args.source) or os.path.isfile(args.source):
@@ -98,4 +100,5 @@ def register_commands(subparsers):
                               help='don\'t error if no results to store are found')
     parser_build.add_argument('-x', '--executed-by', default='',
                               help='add executed-by configuration to each result file')
-
+    parser_build.add_argument('-t', '--extra-test-env', default='',
+                              help='add extra test environment data to each result file configuration')
-- 
2.7.4
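
Hypothetical usage (the paths and the value are placeholders):

    resulttool store /path/to/testresults /path/to/git/repo -t '<extra test environment data>'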



[OE-core] [PATCH 2/4] scripts/resulttool/report: Enable output raw test results

2019-11-07 Thread Yeoh Ee Peng
When debugging, report users need to access the raw test results. Instead
of going back to the source file/directory/URL to manually pull out the raw
results, provide an alternative way to let the report show the raw test
results for an optionally provided test result id.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/report.py | 18 --
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/scripts/lib/resulttool/report.py b/scripts/lib/resulttool/report.py
index d2d4d1b..0c83fb6 100644
--- a/scripts/lib/resulttool/report.py
+++ b/scripts/lib/resulttool/report.py
@@ -207,7 +207,7 @@ class ResultsTextReport(object):
                                  maxlen=maxlen)
         print(output)
 
-    def view_test_report(self, logger, source_dir, branch, commit, tag, use_regression_map):
+    def view_test_report(self, logger, source_dir, branch, commit, tag, use_regression_map, raw_test):
         test_count_reports = []
         configmap = resultutils.store_map
         if use_regression_map:
@@ -225,6 +225,17 @@ class ResultsTextReport(object):
             testresults = resultutils.git_get_result(repo, [tag], configmap=configmap)
         else:
             testresults = resultutils.load_resultsdata(source_dir, configmap=configmap)
+        if raw_test:
+            raw_results = {}
+            for testsuite in testresults:
+                result = testresults[testsuite].get(raw_test, {})
+                if result:
+                    raw_results[testsuite] = result
+            if raw_results:
+                print(json.dumps(raw_results, sort_keys=True, indent=4))
+            else:
+                print('Could not find raw test result for %s' % raw_test)
+            return 0
         for testsuite in testresults:
             for resultid in testresults[testsuite]:
                 skip = False
@@ -251,7 +262,8 @@ class ResultsTextReport(object):
 
 def report(args, logger):
     report = ResultsTextReport()
-    report.view_test_report(logger, args.source_dir, args.branch, args.commit, args.tag, args.use_regression_map)
+    report.view_test_report(logger, args.source_dir, args.branch, args.commit, args.tag, args.use_regression_map,
+                            args.raw_test_only)
     return 0
 
 def register_commands(subparsers):
@@ -268,4 +280,6 @@ def register_commands(subparsers):
                               help='source_dir is a git repository, report on the tag specified from that repository')
     parser_build.add_argument('-m', '--use_regression_map', action='store_true',
                               help='instead of the default "store_map", use the "regression_map" for report')
+    parser_build.add_argument('-r', '--raw_test_only', default='',
+                              help='output raw test result only for the user provided test result id')
 
-- 
2.7.4
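
A hypothetical invocation (the path and id are placeholders):

    resulttool report /path/to/testresults -r <test_result_id>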



[OE-core] [PATCH 1/4] scripts/resulttool/report: Enable report to use regression_map

2019-11-07 Thread Yeoh Ee Peng
By default, the report uses the store_map to generate the key referencing
each result set. In some situations, when using the store_map with multiple
sets of tests sharing similar test configurations, the report will show
only a partial result set, because results end up with identical result_ids
(for example, when using multiconfig to run tests, which generates
identical result_ids).

Enable the report to optionally use the regression_map instead of the
default store_map; it uses a larger set of configurations to generate the
key referencing each result set, which prevents the report from showing
only a partial result set.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/report.py  | 16 +++-
 scripts/lib/resulttool/resultutils.py |  4 ++--
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/scripts/lib/resulttool/report.py b/scripts/lib/resulttool/report.py
index 883b525..d2d4d1b 100644
--- a/scripts/lib/resulttool/report.py
+++ b/scripts/lib/resulttool/report.py
@@ -207,8 +207,11 @@ class ResultsTextReport(object):
                                  maxlen=maxlen)
         print(output)
 
-    def view_test_report(self, logger, source_dir, branch, commit, tag):
+    def view_test_report(self, logger, source_dir, branch, commit, tag, use_regression_map):
         test_count_reports = []
+        configmap = resultutils.store_map
+        if use_regression_map:
+            configmap = resultutils.regression_map
         if commit:
             if tag:
                 logger.warning("Ignoring --tag as --commit was specified")
@@ -216,12 +219,12 @@ class ResultsTextReport(object):
             repo = GitRepo(source_dir)
             revs = gitarchive.get_test_revs(logger, repo, tag_name, branch=branch)
             rev_index = gitarchive.rev_find(revs, 'commit', commit)
-            testresults = resultutils.git_get_result(repo, revs[rev_index][2])
+            testresults = resultutils.git_get_result(repo, revs[rev_index][2], configmap=configmap)
         elif tag:
             repo = GitRepo(source_dir)
-            testresults = resultutils.git_get_result(repo, [tag])
+            testresults = resultutils.git_get_result(repo, [tag], configmap=configmap)
         else:
-            testresults = resultutils.load_resultsdata(source_dir)
+            testresults = resultutils.load_resultsdata(source_dir, configmap=configmap)
         for testsuite in testresults:
             for resultid in testresults[testsuite]:
                 skip = False
@@ -248,7 +251,7 @@ class ResultsTextReport(object):
 
 def report(args, logger):
     report = ResultsTextReport()
-    report.view_test_report(logger, args.source_dir, args.branch, args.commit, args.tag)
+    report.view_test_report(logger, args.source_dir, args.branch, args.commit, args.tag, args.use_regression_map)
     return 0
 
 def register_commands(subparsers):
@@ -263,3 +266,6 @@ def register_commands(subparsers):
     parser_build.add_argument('--commit', help="Revision to report")
     parser_build.add_argument('-t', '--tag', default='',
                               help='source_dir is a git repository, report on the tag specified from that repository')
+    parser_build.add_argument('-m', '--use_regression_map', action='store_true',
+                              help='instead of the default "store_map", use the "regression_map" for report')
+
diff --git a/scripts/lib/resulttool/resultutils.py b/scripts/lib/resulttool/resultutils.py
index 7cb85a6..f0ae8ec 100644
--- a/scripts/lib/resulttool/resultutils.py
+++ b/scripts/lib/resulttool/resultutils.py
@@ -177,7 +177,7 @@ def save_resultsdata(results, destdir, fn="testresults.json", ptestjson=False, p
                     with open(dst.replace(fn, "ptest-%s.log" % i), "w+") as f:
                         f.write(sectionlog)
 
-def git_get_result(repo, tags):
+def git_get_result(repo, tags, configmap=store_map):
     git_objs = []
     for tag in tags:
         files = repo.run_cmd(['ls-tree', "--name-only", "-r", tag]).splitlines()
@@ -200,7 +200,7 @@ def git_get_result(repo, tags):
     # Optimize by reading all data with one git command
     results = {}
     for obj in parse_json_stream(repo.run_cmd(['show'] + git_objs + ['--'])):
-        append_resultsdata(results, obj)
+        append_resultsdata(results, obj, configmap=configmap)
 
     return results
 
-- 
2.7.4
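
A hypothetical invocation (the path is a placeholder):

    resulttool report /path/to/testresults -m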



[OE-core] [PATCH 0/4] Resulttool minor enhancements

2019-11-07 Thread Yeoh Ee Peng
Resulttool minor enhancements:
 - Enable report to use the regression_map
 - Enable report to output raw test results
 - Enable report to print total test result statistics
 - Enable store to add extra test environment config data

Yeoh Ee Peng (4):
  scripts/resulttool/report: Enable report to use regression_map
  scripts/resulttool/report: Enable output raw test results
  scripts/resulttool/report: Add total statistic to test result.
  resulttool/store.py: Enable add extra test environment data

 scripts/lib/resulttool/report.py   | 35 ++
 scripts/lib/resulttool/resultutils.py  |  4 +--
 scripts/lib/resulttool/store.py|  5 +++-
 .../resulttool/template/test_report_full_text.txt  |  3 +-
 4 files changed, 38 insertions(+), 9 deletions(-)

-- 
2.7.4



[OE-core] [PATCH] scripts/oe-pkgdata-util: Enable list-pkgs to print ordered packages

2019-11-01 Thread Yeoh Ee Peng
list-pkgs currently prints packages in an unordered format. Enable
list-pkgs to print packages in sorted order, which eases viewing.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/oe-pkgdata-util | 17 -
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/scripts/oe-pkgdata-util b/scripts/oe-pkgdata-util
index 9cc78d1..93220e3 100755
--- a/scripts/oe-pkgdata-util
+++ b/scripts/oe-pkgdata-util
@@ -389,21 +389,16 @@ def list_pkgs(args):
                 return False
         return True
 
+    pkglist = []
     if args.recipe:
         packages = get_recipe_pkgs(args.pkgdata_dir, args.recipe, args.unpackaged)
 
         if args.runtime:
-            pkglist = []
             runtime_pkgs = lookup_pkglist(packages, args.pkgdata_dir, False)
             for rtpkgs in runtime_pkgs.values():
                 pkglist.extend(rtpkgs)
         else:
             pkglist = packages
-
-        for pkg in pkglist:
-            if matchpkg(pkg):
-                found = True
-                print("%s" % pkg)
     else:
         if args.runtime:
             searchdir = 'runtime-reverse'
@@ -414,9 +409,13 @@ def list_pkgs(args):
             for fn in files:
                 if fn.endswith('.packaged'):
                     continue
-                if matchpkg(fn):
-                    found = True
-                    print("%s" % fn)
+                pkglist.append(fn)
+
+    for pkg in sorted(pkglist):
+        if matchpkg(pkg):
+            found = True
+            print("%s" % pkg)
+
     if not found:
         if args.pkgspec:
             logger.error("Unable to find any package matching %s" % args.pkgspec)
-- 
2.7.4
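
The refactor follows a collect-then-sort pattern: both branches now feed a
single pkglist, and matching and printing happen once over the sorted list,
so the output order is uniform regardless of which branch produced the
names. Reduced to its essence (matchpkg stands in for the real filter):

    def print_matching_sorted(pkglist, matchpkg):
        found = False
        for pkg in sorted(pkglist):  # one sorted pass over all candidates
            if matchpkg(pkg):
                found = True
                print(pkg)
        return found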



[OE-core] [PATCH] resulttool/store.py: Enable add extra test environment data

2019-10-28 Thread Yeoh Ee Peng
Enable an optional argument to add extra test environment data to the
configuration of each test result.

Examples of optional test environment data include:
- custom packages included for runtime testing
- detailed machine specification used as the target
- detailed host environment used for bitbake

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/store.py | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/scripts/lib/resulttool/store.py b/scripts/lib/resulttool/store.py
index 79c83dd..e0951f0 100644
--- a/scripts/lib/resulttool/store.py
+++ b/scripts/lib/resulttool/store.py
@@ -24,6 +24,8 @@ def store(args, logger):
 configvars = resultutils.extra_configvars.copy()
 if args.executed_by:
 configvars['EXECUTED_BY'] = args.executed_by
+if args.extra_test_env:
+configvars['EXTRA_TEST_ENV'] = args.extra_test_env
 results = {}
 logger.info('Reading files from %s' % args.source)
 if resultutils.is_url(args.source) or os.path.isfile(args.source):
@@ -98,4 +100,5 @@ def register_commands(subparsers):
   help='don\'t error if no results to store are 
found')
 parser_build.add_argument('-x', '--executed-by', default='',
   help='add executed-by configuration to each 
result file')
-
+parser_build.add_argument('-t', '--extra-test-env', default='',
+  help='add extra test environment data to each 
result file configuration')
-- 
2.7.4
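
Assuming the option lands as posted, a stored testresults.json entry would
carry the extra data in its configuration section roughly like this (values
invented for illustration; EXECUTED_BY comes from the existing
-x/--executed-by option, EXTRA_TEST_ENV from the new -t/--extra-test-env):

    "configuration": {
        "TEST_TYPE": "runtime",
        "MACHINE": "qemux86-64",
        "EXECUTED_BY": "qa-team",
        "EXTRA_TEST_ENV": "extra packages: mesa-demos"
    }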



[OE-core] [PATCH] testimage.bbclass: Add kernel provider and version to testresult

2019-10-02 Thread Yeoh Ee Peng
When running QA tests, we sometimes need to select a custom provider and
version for virtual/kernel. To track the virtual/kernel provider and
version used during a test, we need to add this information to the
testresult.

This patch adds the virtual/kernel provider and version to the testresult
configuration section.

Example of the virtual/kernel provider and version added to the testresult
configuration:
   - "KERNEL_PROVIDER_VERSION": "linux-intel_4.19"

Signed-off-by: Yeoh Ee Peng 
---
 meta/classes/testimage.bbclass | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/meta/classes/testimage.bbclass b/meta/classes/testimage.bbclass
index 525c5a6..194d549 100644
--- a/meta/classes/testimage.bbclass
+++ b/meta/classes/testimage.bbclass
@@ -129,14 +129,30 @@ def testimage_sanity(d):
 def get_testimage_configuration(d, test_type, machine):
 import platform
 from oeqa.utils.metadata import get_layers
+distro = d.getVar("DISTRO")
+distrooverride = d.getVar("DISTROOVERRIDES")
+
+kernel_provider = d.getVar("PREFERRED_PROVIDER_virtual/kernel")
+for o in distrooverride.split(":"):
+kernel_provider_override = 
d.getVar("PREFERRED_PROVIDER_virtual/kernel_%s" % o)
+if kernel_provider_override:
+kernel_provider = kernel_provider_override
+
+kernel_provider_version = d.getVar("PREFERRED_VERSION_%s" % 
kernel_provider).replace('%', '')
+for o in distrooverride.split(":"):
+kernel_provider_version_override = d.getVar("PREFERRED_VERSION_%s_%s" 
% (kernel_provider, o))
+if kernel_provider_version_override:
+kernel_provider_version = 
kernel_provider_version_override.replace('%', '')
+
 configuration = {'TEST_TYPE': test_type,
 'MACHINE': machine,
-'DISTRO': d.getVar("DISTRO"),
+'DISTRO': distro,
 'IMAGE_BASENAME': d.getVar("IMAGE_BASENAME"),
 'IMAGE_PKGTYPE': d.getVar("IMAGE_PKGTYPE"),
 'STARTTIME': d.getVar("DATETIME"),
 'HOST_DISTRO': oe.lsb.distro_identifier().replace(' ', 
'-'),
-'LAYERS': get_layers(d.getVar("BBLAYERS"))}
+'LAYERS': get_layers(d.getVar("BBLAYERS")),
+'KERNEL_PROVIDER_VERSION': '%s_%s' % (kernel_provider, 
kernel_provider_version)}
 return configuration
 get_testimage_configuration[vardepsexclude] = "DATETIME"
 
-- 
2.7.4
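
The override resolution above can be read in isolation: start from the base
variable, then walk DISTROOVERRIDES in order, letting each matching
override-specific variable replace the value, so the last matching override
wins. A standalone sketch with a plain dict standing in for the datastore
(variable values hypothetical):

    def resolve(d, base, overrides):
        value = d.get(base)
        for o in overrides.split(':'):
            override_value = d.get('%s_%s' % (base, o))
            if override_value:
                value = override_value  # last matching override wins
        return value

    d = {'PREFERRED_PROVIDER_virtual/kernel': 'linux-yocto',
         'PREFERRED_PROVIDER_virtual/kernel_mydistro': 'linux-intel'}
    print(resolve(d, 'PREFERRED_PROVIDER_virtual/kernel', 'mydistro:other'))
    # -> linux-intel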



[OE-core] [PATCH 2/2] oeqa/manual/bsp-hw.json: Remove opengl graphic testing

2019-05-17 Thread Yeoh Ee Peng
Remove the bsps-hw.bsps-hw.Graphics_-_ABAT test case, as it was replaced
by the new automated runtime test in oeqa/runtime/cases/graphic.py.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/manual/bsp-hw.json | 26 --
 1 file changed, 26 deletions(-)

diff --git a/meta/lib/oeqa/manual/bsp-hw.json b/meta/lib/oeqa/manual/bsp-hw.json
index 4b7c76f..931f40a 100644
--- a/meta/lib/oeqa/manual/bsp-hw.json
+++ b/meta/lib/oeqa/manual/bsp-hw.json
@@ -873,32 +873,6 @@
 },
 {
 "test": {
-"@alias": "bsps-hw.bsps-hw.Graphics_-_ABAT",
-"author": [
-{
-"email": "alexandru.c.george...@intel.com",
-"name": "alexandru.c.george...@intel.com"
-}
-],
-"execution": {
-"1": {
-"action": "Download ABAT test suite from internal git 
repository, git clone git://tinderbox.sh.intel.com/git/abat",
-"expected_results": ""
-},
-"2": {
-"action": "Apply following patch to make it work on yocto 
environment",
-"expected_results": ""
-},
-"3": {
-"action": "Run \"./abat.sh\" to run ABAT test refer to 
abat.patch",
-"expected_results": "All ABAT test should pass. \nNote : 
If below 3 fails appears ignore them. \n- start up X server fail.. due is 
already up \n- module [intel_agp] \n- module [i915]"
-}
-},
-"summary": "Graphics_-_ABAT"
-}
-},
-{
-"test": {
 "@alias": "bsps-hw.bsps-hw.Graphics_-_x11perf_-_2D",
 "author": [
 {
-- 
2.7.4



[OE-core] [PATCH 1/2] oeqa/runtime/cases/graphic: Enable graphic opengl testing

2019-05-17 Thread Yeoh Ee Peng
Convert the manual testcase bsps-hw.bsps-hw.Graphics_-_ABAT
from oeqa/manual/bsp-hw.json to a runtime automated test.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/runtime/cases/graphic.py | 25 +
 1 file changed, 25 insertions(+)
 create mode 100644 meta/lib/oeqa/runtime/cases/graphic.py

diff --git a/meta/lib/oeqa/runtime/cases/graphic.py 
b/meta/lib/oeqa/runtime/cases/graphic.py
new file mode 100644
index 000..7d484c4
--- /dev/null
+++ b/meta/lib/oeqa/runtime/cases/graphic.py
@@ -0,0 +1,25 @@
+from oeqa.runtime.case import OERuntimeTestCase
+from oeqa.core.decorator.depends import OETestDepends
+from oeqa.runtime.decorator.package import OEHasPackage
+from oeqa.core.decorator.data import skipIfNotFeature
+import threading
+import time
+
+class GraphicTestThread(threading.Thread):
+    def __init__(self, target_function, target_args=()):
+        threading.Thread.__init__(self, target=target_function, args=target_args)
+
+class GraphicTest(OERuntimeTestCase):
+def run_graphic_test(self, run_args):
+self.target.run(run_args)
+
+@skipIfNotFeature('x11-base', 'Test requires x11 to be in IMAGE_FEATURES')
+@OEHasPackage(['mesa-demos'])
+@OETestDepends(['ssh.SSHTest.test_ssh'])
+def test_graphic_opengl_with_glxgears(self):
+gt_thread = GraphicTestThread(self.run_graphic_test, 
target_args=('export DISPLAY=:0; glxgears',))
+gt_thread.start()
+time.sleep(2)
+status, output = self.target.run('pidof glxgears')
+self.target.run('kill %s' % output)
+self.assertEqual(status, 0, msg='Not able to find process that run 
glxgears.')
-- 
2.7.4
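
Design note: glxgears blocks for as long as it runs, so the test launches it
from a helper thread, sleeps briefly to let it start, checks it is alive with
pidof (whose zero exit status is the actual assertion), and then kills it by
PID. The GraphicTestThread subclass adds nothing beyond what
threading.Thread(target=..., args=...) already provides; a plain Thread would
behave identically.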



Re: [OE-core] [PATCH 1/2 v3] oeqa/runtime/cases/rpm.py: Enable rpm install dependency testing

2019-05-17 Thread Yeoh, Ee Peng
Hi Richard,

Yes, you are right, we are checking the directory and package on the host
server/machine.
Actually, this message was inherited from the existing rpm tests; I have made
the correction in this new patch. I shall submit another patch to correct the
message inside the existing rpm test base code.

Thank you very much for your inputs. 

Thanks,
Yeoh Ee Peng 

-Original Message-
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Friday, May 17, 2019 2:02 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 1/2 v3] oeqa/runtime/cases/rpm.py: Enable rpm 
install dependency testing

Hi Ee Peng,

On Fri, 2019-05-17 at 10:07 +0800, Yeoh Ee Peng wrote:
> Convert manual testcase bsps-hw.bsps-hw.rpm_- 
> __install_dependency_package from oeqa/manual/bsp-hw.json to runtime 
> automated test.
> 
> Signed-off-by: Yeoh Ee Peng 
> ---
>  meta/lib/oeqa/runtime/cases/rpm.py | 34
> ++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/meta/lib/oeqa/runtime/cases/rpm.py
> b/meta/lib/oeqa/runtime/cases/rpm.py
> index d8cabd3..ce3fce1 100644
> --- a/meta/lib/oeqa/runtime/cases/rpm.py
> +++ b/meta/lib/oeqa/runtime/cases/rpm.py
> @@ -135,3 +135,37 @@ class RpmInstallRemoveTest(OERuntimeTestCase):
>  # Check that there's enough of them
>  self.assertGreaterEqual(int(output), 80,
> 'Cound not find sufficient amount 
> of rpm entries in /var/log/messages, found {}
> entries'.format(output))
> +
> +@OETestDepends(['rpm.RpmBasicTest.test_rpm_query'])
> +def test_rpm_install_dependency(self):
> +rpmdir = os.path.join(self.tc.td['DEPLOY_DIR'], 'rpm',
> 'noarch')
> +if not os.path.exists(rpmdir):
> +self.skipTest('No %s on target' % rpmdir)

This message doesn't sound quite right as you're checking for rpmdir locally on 
the build server, not on target? Could you clarify that please?

+if not rpm_tests[rpm]:
+self.skipTest('No %s on target' % os.path.join(rpmdir, 
+ rpm))

The same issue here, these files are not being searched for "on target".

Cheers,

Richard





[OE-core] [PATCH 2/2 v4] oeqa/manual/bsp-hw.json: Remove rpm install dependency

2019-05-17 Thread Yeoh Ee Peng
Remove the bsps-hw.bsps-hw.rpm_-__install_dependency_package
as it was replaced by the test_rpm_install_dependency inside
oeqa/runtime/cases/rpm.py.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/manual/bsp-hw.json | 26 --
 1 file changed, 26 deletions(-)

diff --git a/meta/lib/oeqa/manual/bsp-hw.json b/meta/lib/oeqa/manual/bsp-hw.json
index 4b7c76f..017de9d 100644
--- a/meta/lib/oeqa/manual/bsp-hw.json
+++ b/meta/lib/oeqa/manual/bsp-hw.json
@@ -1,32 +1,6 @@
 [
 {
 "test": {
-"@alias": "bsps-hw.bsps-hw.rpm_-__install_dependency_package",
-"author": [
-{
-"email": "alexandru.c.george...@intel.com",
-"name": "alexandru.c.george...@intel.com"
-}
-],
-"execution": {
-"1": {
-"action": "Get a not previously installed RPM package or 
build one on local machine, which should have run-time dependency.For example, 
\"mc\" (Midnight Commander, which is a visual file manager) should depend on 
\"ncurses-terminfo\".   \n\n$ bitbake mc  \n\n\n",
-"expected_results": ""
-},
-"2": {
-"action": "Copy the package into a system folder (for 
example /home/root/rpm_packages).  \n\n\n",
-"expected_results": ""
-},
-"3": {
-"action": "Run \"rpm -ivh package_name\" and check the 
output, for example \"rpm -ivh mc.rpm*\" should report the dependency on 
\"ncurses-terminfo\".\n\n\n\n",
-"expected_results": "3 . rpm command should report message 
when some RPM installation depends on other packages."
-}
-},
-"summary": "rpm_-__install_dependency_package"
-}
-},
-{
-"test": {
 "@alias": "bsps-hw.bsps-hw.boot_and_install_from_USB",
 "author": [
 {
-- 
2.7.4



[OE-core] [PATCH 1/2 v4] oeqa/runtime/cases/rpm.py: Enable rpm install dependency testing

2019-05-17 Thread Yeoh Ee Peng
Convert the manual testcase bsps-hw.bsps-hw.rpm_-__install_dependency_package
from oeqa/manual/bsp-hw.json to a runtime automated test.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/runtime/cases/rpm.py | 34 ++
 1 file changed, 34 insertions(+)

diff --git a/meta/lib/oeqa/runtime/cases/rpm.py 
b/meta/lib/oeqa/runtime/cases/rpm.py
index d8cabd3..692e925 100644
--- a/meta/lib/oeqa/runtime/cases/rpm.py
+++ b/meta/lib/oeqa/runtime/cases/rpm.py
@@ -135,3 +135,37 @@ class RpmInstallRemoveTest(OERuntimeTestCase):
 # Check that there's enough of them
 self.assertGreaterEqual(int(output), 80,
'Cound not find sufficient amount of rpm 
entries in /var/log/messages, found {} entries'.format(output))
+
+@OETestDepends(['rpm.RpmBasicTest.test_rpm_query'])
+def test_rpm_install_dependency(self):
+rpmdir = os.path.join(self.tc.td['DEPLOY_DIR'], 'rpm', 'noarch')
+if not os.path.exists(rpmdir):
+self.skipTest('No %s on host machine' % rpmdir)
+rpm_tests = {'run-postinsts-dev': '', 'run-postinsts': ''}
+rpm_pattern = 'run-postinsts-*.noarch.rpm'
+for f in fnmatch.filter(os.listdir(rpmdir), rpm_pattern):
+if 'run-postinsts-dev' not in f and 'run-postinsts-dbg' not in f:
+rpm_tests['run-postinsts'] = f
+continue
+if 'run-postinsts-dev' in f:
+rpm_tests['run-postinsts-dev'] = f
+continue
+rpm_dest_dir = '/tmp'
+for rpm in rpm_tests:
+if not rpm_tests[rpm]:
+self.skipTest('No %s on host machine' % os.path.join(rpmdir, 
rpm))
+self.tc.target.copyTo(os.path.join(rpmdir, rpm_tests[rpm]), 
os.path.join(rpm_dest_dir, rpm_tests[rpm]))
+# remove existing rpm before testing install with dependency
+self.tc.target.run('rpm -e %s' % 'run-postinsts-dev')
+self.tc.target.run('rpm -e %s' % 'run-postinsts')
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts-dev']))
+self.assertTrue(status == 1, 'rpm installed should have failed but it 
was getting %s' % status)
+self.assertTrue('error: Failed dependencies:' in output,
+'rpm installed should have failed with error but it 
was getting: %s' % output)
+# reinstall rpm with dependency
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts']))
+self.assertTrue(status == 0, 'rpm (%s) installed with error: %s (%s)' 
% (rpm_tests['run-postinsts'], status, output))
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts-dev']))
+self.assertTrue(status == 0, 'rpm (%s) installed with error: %s (%s)' 
% (rpm_tests['run-postinsts-dev'], status, output))
+for rpm in rpm_tests:
+self.tc.target.run('rm -f %s' % os.path.join(rpm_dest_dir, 
rpm_tests[rpm]))
-- 
2.7.4
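
In outline, the test copies the run-postinsts and run-postinsts-dev RPMs to
the target, erases any installed copies, then installs the -dev package on
its own and asserts that rpm exits with status 1 and reports
"error: Failed dependencies:" (the -dev package depends on the base package).
It then installs the base package followed by -dev, asserts both succeed,
and removes the copied files.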



Re: [OE-core] [PATCH 1/2 v2] oeqa/runtime/cases/rpm.py: Enable rpm install dependency testing

2019-05-16 Thread Yeoh, Ee Peng
Hi Richard,

I verified that this test was failing in multilib because the target package
to be used for testing was not available.

I have added skipTest logic for the case where the needed target package is
not available, and sent the updated patches. Thank you very much for your
inputs.

Thanks,
Yeoh Ee Peng 

-Original Message-
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Friday, May 17, 2019 3:29 AM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 1/2 v2] oeqa/runtime/cases/rpm.py: Enable rpm 
install dependency testing

On Thu, 2019-05-16 at 09:22 +0800, Yeoh Ee Peng wrote:
> Convert manual testcase bsps-hw.bsps-hw.rpm_- 
> __install_dependency_package from oeqa/manual/bsp-hw.json to runtime 
> automated test.
> 
> Signed-off-by: Yeoh Ee Peng 
> ---
>  meta/lib/oeqa/runtime/cases/rpm.py | 30
> ++
>  1 file changed, 30 insertions(+)

Much better, only one failure this time in multilib:

https://autobuilder.yoctoproject.org/typhoon/#/builders/44/builds/612

Cheers,

Richard



[OE-core] [PATCH 2/2 v3] oeqa/manual/bsp-hw.json: Remove rpm install dependency

2019-05-16 Thread Yeoh Ee Peng
Remove the bsps-hw.bsps-hw.rpm_-__install_dependency_package
as it was replaced by the test_rpm_install_dependency inside
oeqa/runtime/cases/rpm.py.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/manual/bsp-hw.json | 26 --
 1 file changed, 26 deletions(-)

diff --git a/meta/lib/oeqa/manual/bsp-hw.json b/meta/lib/oeqa/manual/bsp-hw.json
index 4b7c76f..017de9d 100644
--- a/meta/lib/oeqa/manual/bsp-hw.json
+++ b/meta/lib/oeqa/manual/bsp-hw.json
@@ -1,32 +1,6 @@
 [
 {
 "test": {
-"@alias": "bsps-hw.bsps-hw.rpm_-__install_dependency_package",
-"author": [
-{
-"email": "alexandru.c.george...@intel.com",
-"name": "alexandru.c.george...@intel.com"
-}
-],
-"execution": {
-"1": {
-"action": "Get a not previously installed RPM package or 
build one on local machine, which should have run-time dependency.For example, 
\"mc\" (Midnight Commander, which is a visual file manager) should depend on 
\"ncurses-terminfo\".   \n\n$ bitbake mc  \n\n\n",
-"expected_results": ""
-},
-"2": {
-"action": "Copy the package into a system folder (for 
example /home/root/rpm_packages).  \n\n\n",
-"expected_results": ""
-},
-"3": {
-"action": "Run \"rpm -ivh package_name\" and check the 
output, for example \"rpm -ivh mc.rpm*\" should report the dependency on 
\"ncurses-terminfo\".\n\n\n\n",
-"expected_results": "3 . rpm command should report message 
when some RPM installation depends on other packages."
-}
-},
-"summary": "rpm_-__install_dependency_package"
-}
-},
-{
-"test": {
 "@alias": "bsps-hw.bsps-hw.boot_and_install_from_USB",
 "author": [
 {
-- 
2.7.4



[OE-core] [PATCH 1/2 v3] oeqa/runtime/cases/rpm.py: Enable rpm install dependency testing

2019-05-16 Thread Yeoh Ee Peng
Convert the manual testcase bsps-hw.bsps-hw.rpm_-__install_dependency_package
from oeqa/manual/bsp-hw.json to a runtime automated test.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/runtime/cases/rpm.py | 34 ++
 1 file changed, 34 insertions(+)

diff --git a/meta/lib/oeqa/runtime/cases/rpm.py 
b/meta/lib/oeqa/runtime/cases/rpm.py
index d8cabd3..ce3fce1 100644
--- a/meta/lib/oeqa/runtime/cases/rpm.py
+++ b/meta/lib/oeqa/runtime/cases/rpm.py
@@ -135,3 +135,37 @@ class RpmInstallRemoveTest(OERuntimeTestCase):
 # Check that there's enough of them
 self.assertGreaterEqual(int(output), 80,
'Cound not find sufficient amount of rpm 
entries in /var/log/messages, found {} entries'.format(output))
+
+@OETestDepends(['rpm.RpmBasicTest.test_rpm_query'])
+def test_rpm_install_dependency(self):
+rpmdir = os.path.join(self.tc.td['DEPLOY_DIR'], 'rpm', 'noarch')
+if not os.path.exists(rpmdir):
+self.skipTest('No %s on target' % rpmdir)
+rpm_tests = {'run-postinsts-dev': '', 'run-postinsts': ''}
+rpm_pattern = 'run-postinsts-*.noarch.rpm'
+for f in fnmatch.filter(os.listdir(rpmdir), rpm_pattern):
+if 'run-postinsts-dev' not in f and 'run-postinsts-dbg' not in f:
+rpm_tests['run-postinsts'] = f
+continue
+if 'run-postinsts-dev' in f:
+rpm_tests['run-postinsts-dev'] = f
+continue
+rpm_dest_dir = '/tmp'
+for rpm in rpm_tests:
+if not rpm_tests[rpm]:
+self.skipTest('No %s on target' % os.path.join(rpmdir, rpm))
+self.tc.target.copyTo(os.path.join(rpmdir, rpm_tests[rpm]), 
os.path.join(rpm_dest_dir, rpm_tests[rpm]))
+# remove existing rpm before testing install with dependency
+self.tc.target.run('rpm -e %s' % 'run-postinsts-dev')
+self.tc.target.run('rpm -e %s' % 'run-postinsts')
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts-dev']))
+self.assertTrue(status == 1, 'rpm installed should have failed but it 
was getting %s' % status)
+self.assertTrue('error: Failed dependencies:' in output,
+'rpm installed should have failed with error but it 
was getting: %s' % output)
+# reinstall rpm with dependency
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts']))
+self.assertTrue(status == 0, 'rpm (%s) installed with error: %s (%s)' 
% (rpm_tests['run-postinsts'], status, output))
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts-dev']))
+self.assertTrue(status == 0, 'rpm (%s) installed with error: %s (%s)' 
% (rpm_tests['run-postinsts-dev'], status, output))
+for rpm in rpm_tests:
+self.tc.target.run('rm -f %s' % os.path.join(rpm_dest_dir, 
rpm_tests[rpm]))
-- 
2.7.4



Re: [OE-core] [PATCH 1/2] oeqa/runtime/cases/rpm.py: Enable rpm install dependency testing

2019-05-16 Thread Yeoh, Ee Peng
Hi Richard and Alex,

Thanks for the inputs! 
I added the OETestDepends decorator to the test case.

Best regards,
Yeoh Ee Peng 

-Original Message-
From: richard.pur...@linuxfoundation.org 
[mailto:richard.pur...@linuxfoundation.org] 
Sent: Thursday, May 16, 2019 4:08 AM
To: Alexander Kanavin ; Yeoh, Ee Peng 

Cc: OE-core 
Subject: Re: [OE-core] [PATCH 1/2] oeqa/runtime/cases/rpm.py: Enable rpm 
install dependency testing

On Wed, 2019-05-15 at 21:43 +0200, Alexander Kanavin wrote:
> This needs the same condition guard as other rpm tests, e.g.
> 
> @OETestDepends(['rpm.RpmBasicTest.test_rpm_query'])
> 
> should be enough.

Yes, there were failures on the autobuilder from this, e.g.:

https://autobuilder.yoctoproject.org/typhoon/#/builders/45/builds/602

https://autobuilder.yoctoproject.org/typhoon/#/builders/59/builds/596

and so on.

Cheers,

Richard



[OE-core] [PATCH 1/2 v2] oeqa/runtime/cases/rpm.py: Enable rpm install dependency testing

2019-05-15 Thread Yeoh Ee Peng
Convert the manual testcase bsps-hw.bsps-hw.rpm_-__install_dependency_package
from oeqa/manual/bsp-hw.json to a runtime automated test.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/runtime/cases/rpm.py | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/meta/lib/oeqa/runtime/cases/rpm.py 
b/meta/lib/oeqa/runtime/cases/rpm.py
index d8cabd3..f71125f 100644
--- a/meta/lib/oeqa/runtime/cases/rpm.py
+++ b/meta/lib/oeqa/runtime/cases/rpm.py
@@ -135,3 +135,33 @@ class RpmInstallRemoveTest(OERuntimeTestCase):
 # Check that there's enough of them
 self.assertGreaterEqual(int(output), 80,
'Cound not find sufficient amount of rpm 
entries in /var/log/messages, found {} entries'.format(output))
+
+@OETestDepends(['rpm.RpmBasicTest.test_rpm_query'])
+def test_rpm_install_dependency(self):
+rpmdir = os.path.join(self.tc.td['DEPLOY_DIR'], 'rpm', 'noarch')
+rpm_tests = {'run-postinsts-dev': '', 'run-postinsts': ''}
+rpm_pattern = 'run-postinsts-*.noarch.rpm'
+for f in fnmatch.filter(os.listdir(rpmdir), rpm_pattern):
+if 'run-postinsts-dev' not in f and 'run-postinsts-dbg' not in f:
+rpm_tests['run-postinsts'] = f
+continue
+if 'run-postinsts-dev' in f:
+rpm_tests['run-postinsts-dev'] = f
+continue
+rpm_dest_dir = '/tmp'
+for rpm in rpm_tests:
+self.tc.target.copyTo(os.path.join(rpmdir, rpm_tests[rpm]), 
os.path.join(rpm_dest_dir, rpm_tests[rpm]))
+# remove existing rpm before testing install with dependency
+self.tc.target.run('rpm -e %s' % 'run-postinsts-dev')
+self.tc.target.run('rpm -e %s' % 'run-postinsts')
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts-dev']))
+self.assertTrue(status == 1, 'rpm installed should have failed but it 
was getting %s' % status)
+self.assertTrue('error: Failed dependencies:' in output,
+'rpm installed should have failed with error but it 
was getting: %s' % output)
+# reinstall rpm with dependency
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts']))
+self.assertTrue(status == 0, 'rpm (%s) installed with error: %s (%s)' 
% (rpm_tests['run-postinsts'], status, output))
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts-dev']))
+self.assertTrue(status == 0, 'rpm (%s) installed with error: %s (%s)' 
% (rpm_tests['run-postinsts-dev'], status, output))
+for rpm in rpm_tests:
+self.tc.target.run('rm -f %s' % os.path.join(rpm_dest_dir, 
rpm_tests[rpm]))
-- 
2.7.4



[OE-core] [PATCH 2/2 v2] oeqa/manual/bsp-hw.json: Remove rpm install dependency

2019-05-15 Thread Yeoh Ee Peng
Remove the bsps-hw.bsps-hw.rpm_-__install_dependency_package
as it was replaced by the test_rpm_install_dependency inside
oeqa/runtime/cases/rpm.py.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/manual/bsp-hw.json | 26 --
 1 file changed, 26 deletions(-)

diff --git a/meta/lib/oeqa/manual/bsp-hw.json b/meta/lib/oeqa/manual/bsp-hw.json
index 4b7c76f..017de9d 100644
--- a/meta/lib/oeqa/manual/bsp-hw.json
+++ b/meta/lib/oeqa/manual/bsp-hw.json
@@ -1,32 +1,6 @@
 [
 {
 "test": {
-"@alias": "bsps-hw.bsps-hw.rpm_-__install_dependency_package",
-"author": [
-{
-"email": "alexandru.c.george...@intel.com",
-"name": "alexandru.c.george...@intel.com"
-}
-],
-"execution": {
-"1": {
-"action": "Get a not previously installed RPM package or 
build one on local machine, which should have run-time dependency.For example, 
\"mc\" (Midnight Commander, which is a visual file manager) should depend on 
\"ncurses-terminfo\".   \n\n$ bitbake mc  \n\n\n",
-"expected_results": ""
-},
-"2": {
-"action": "Copy the package into a system folder (for 
example /home/root/rpm_packages).  \n\n\n",
-"expected_results": ""
-},
-"3": {
-"action": "Run \"rpm -ivh package_name\" and check the 
output, for example \"rpm -ivh mc.rpm*\" should report the dependency on 
\"ncurses-terminfo\".\n\n\n\n",
-"expected_results": "3 . rpm command should report message 
when some RPM installation depends on other packages."
-}
-},
-"summary": "rpm_-__install_dependency_package"
-}
-},
-{
-"test": {
 "@alias": "bsps-hw.bsps-hw.boot_and_install_from_USB",
 "author": [
 {
-- 
2.7.4



[OE-core] [PATCH 2/2] oeqa/manual/bsp-hw.json: Remove rpm install dependency

2019-05-14 Thread Yeoh Ee Peng
Remove the bsps-hw.bsps-hw.rpm_-__install_dependency_package
as it was replaced by the test_rpm_install_dependency inside
oeqa/runtime/cases/rpm.py.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/manual/bsp-hw.json | 26 --
 1 file changed, 26 deletions(-)

diff --git a/meta/lib/oeqa/manual/bsp-hw.json b/meta/lib/oeqa/manual/bsp-hw.json
index 4b7c76f..017de9d 100644
--- a/meta/lib/oeqa/manual/bsp-hw.json
+++ b/meta/lib/oeqa/manual/bsp-hw.json
@@ -1,32 +1,6 @@
 [
 {
 "test": {
-"@alias": "bsps-hw.bsps-hw.rpm_-__install_dependency_package",
-"author": [
-{
-"email": "alexandru.c.george...@intel.com",
-"name": "alexandru.c.george...@intel.com"
-}
-],
-"execution": {
-"1": {
-"action": "Get a not previously installed RPM package or 
build one on local machine, which should have run-time dependency.For example, 
\"mc\" (Midnight Commander, which is a visual file manager) should depend on 
\"ncurses-terminfo\".   \n\n$ bitbake mc  \n\n\n",
-"expected_results": ""
-},
-"2": {
-"action": "Copy the package into a system folder (for 
example /home/root/rpm_packages).  \n\n\n",
-"expected_results": ""
-},
-"3": {
-"action": "Run \"rpm -ivh package_name\" and check the 
output, for example \"rpm -ivh mc.rpm*\" should report the dependency on 
\"ncurses-terminfo\".\n\n\n\n",
-"expected_results": "3 . rpm command should report message 
when some RPM installation depends on other packages."
-}
-},
-"summary": "rpm_-__install_dependency_package"
-}
-},
-{
-"test": {
 "@alias": "bsps-hw.bsps-hw.boot_and_install_from_USB",
 "author": [
 {
-- 
2.7.4



[OE-core] [PATCH 1/2] oeqa/runtime/cases/rpm.py: Enable rpm install dependency testing

2019-05-14 Thread Yeoh Ee Peng
Convert the manual testcase bsps-hw.bsps-hw.rpm_-__install_dependency_package
from oeqa/manual/bsp-hw.json to a runtime automated test.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/runtime/cases/rpm.py | 29 +
 1 file changed, 29 insertions(+)

diff --git a/meta/lib/oeqa/runtime/cases/rpm.py 
b/meta/lib/oeqa/runtime/cases/rpm.py
index d8cabd3..fe1b489 100644
--- a/meta/lib/oeqa/runtime/cases/rpm.py
+++ b/meta/lib/oeqa/runtime/cases/rpm.py
@@ -135,3 +135,32 @@ class RpmInstallRemoveTest(OERuntimeTestCase):
 # Check that there's enough of them
 self.assertGreaterEqual(int(output), 80,
'Cound not find sufficient amount of rpm 
entries in /var/log/messages, found {} entries'.format(output))
+
+def test_rpm_install_dependency(self):
+rpmdir = os.path.join(self.tc.td['DEPLOY_DIR'], 'rpm', 'noarch')
+rpm_tests = {'run-postinsts-dev': '', 'run-postinsts': ''}
+rpm_pattern = 'run-postinsts-*.noarch.rpm'
+for f in fnmatch.filter(os.listdir(rpmdir), rpm_pattern):
+if 'run-postinsts-dev' not in f and 'run-postinsts-dbg' not in f:
+rpm_tests['run-postinsts'] = f
+continue
+if 'run-postinsts-dev' in f:
+rpm_tests['run-postinsts-dev'] = f
+continue
+rpm_dest_dir = '/tmp'
+for rpm in rpm_tests:
+self.tc.target.copyTo(os.path.join(rpmdir, rpm_tests[rpm]), 
os.path.join(rpm_dest_dir, rpm_tests[rpm]))
+# remove existing rpm before testing install with dependency
+self.tc.target.run('rpm -e %s' % 'run-postinsts-dev')
+self.tc.target.run('rpm -e %s' % 'run-postinsts')
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts-dev']))
+self.assertTrue(status == 1, 'rpm installed should have failed but it 
was getting %s' % status)
+self.assertTrue('error: Failed dependencies:' in output,
+'rpm installed should have failed with error but it 
was getting: %s' % output)
+# reinstall rpm with dependency
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts']))
+self.assertTrue(status == 0, 'rpm (%s) installed with error: %s (%s)' 
% (rpm_tests['run-postinsts'], status, output))
+status, output = self.tc.target.run('rpm -ivh %s' % 
os.path.join(rpm_dest_dir, rpm_tests['run-postinsts-dev']))
+self.assertTrue(status == 0, 'rpm (%s) installed with error: %s (%s)' 
% (rpm_tests['run-postinsts-dev'], status, output))
+for rpm in rpm_tests:
+self.tc.target.run('rm -f %s' % os.path.join(rpm_dest_dir, 
rpm_tests[rpm]))
-- 
2.7.4



[OE-core] [PATCH] resulttool/manualexecution: Refactor and remove duplicate code

2019-04-10 Thread Yeoh Ee Peng
Remove duplicate code. Replace unnecessary class variables with
local variables. Rename variables and arguments with simple,
standard names.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/manualexecution.py | 87 ++-
 1 file changed, 40 insertions(+), 47 deletions(-)

diff --git a/scripts/lib/resulttool/manualexecution.py 
b/scripts/lib/resulttool/manualexecution.py
index 12ef90d..ea44219 100755
--- a/scripts/lib/resulttool/manualexecution.py
+++ b/scripts/lib/resulttool/manualexecution.py
@@ -20,9 +20,9 @@ import re
 from oeqa.core.runner import OETestResultJSONHelper
 
 
-def load_json_file(file):
-with open(file, "r") as f:
-return json.load(f)
+def load_json_file(f):
+with open(f, "r") as filedata:
+return json.load(filedata)
 
 def write_json_file(f, json_data):
 os.makedirs(os.path.dirname(f), exist_ok=True)
@@ -31,9 +31,8 @@ def write_json_file(f, json_data):
 
 class ManualTestRunner(object):
 
-def _get_testcases(self, file):
-self.jdata = load_json_file(file)
-self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+def _get_test_module(self, case_file):
+return os.path.basename(case_file).split('.')[0]
 
 def _get_input(self, config):
 while True:
@@ -57,23 +56,21 @@ class ManualTestRunner(object):
 print('Only integer index inputs from above available 
configuration options are allowed. Please try again.')
 return options[output]
 
-def _create_config(self, config_options):
+def _get_config(self, config_options, test_module):
 from oeqa.utils.metadata import get_layers
 from oeqa.utils.commands import get_bb_var
 from resulttool.resultutils import store_map
 
 layers = get_layers(get_bb_var('BBLAYERS'))
-self.configuration = {}
-self.configuration['LAYERS'] = layers
-current_datetime = datetime.datetime.now()
-self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
-self.configuration['STARTTIME'] = self.starttime
-self.configuration['TEST_TYPE'] = 'manual'
-self.configuration['TEST_MODULE'] = self.test_module
-
-extra_config = set(store_map['manual']) - set(self.configuration)
+configurations = {}
+configurations['LAYERS'] = layers
+configurations['STARTTIME'] = 
datetime.datetime.now().strftime('%Y%m%d%H%M%S')
+configurations['TEST_TYPE'] = 'manual'
+configurations['TEST_MODULE'] = test_module
+
+extra_config = set(store_map['manual']) - set(configurations)
 for config in sorted(extra_config):
-avail_config_options = 
self._get_available_config_options(config_options, self.test_module, config)
+avail_config_options = 
self._get_available_config_options(config_options, test_module, config)
 if avail_config_options:
 print('-')
 print('These are available configuration #%s options:' % 
config)
@@ -89,21 +86,19 @@ class ManualTestRunner(object):
 print('-')
 value_conf = self._get_input('Configuration Value')
 print('-\n')
-self.configuration[config] = value_conf
-
-def _create_result_id(self):
-self.result_id = 'manual_%s_%s' % (self.test_module, self.starttime)
+configurations[config] = value_conf
+return configurations
 
-def _execute_test_steps(self, test):
+def _execute_test_steps(self, case):
 test_result = {}
 
print('')
-print('Executing test case: %s' % test['test']['@alias'])
+print('Executing test case: %s' % case['test']['@alias'])
 
print('')
-print('You have total %s test steps to be executed.' % 
len(test['test']['execution']))
+print('You have total %s test steps to be executed.' % 
len(case['test']['execution']))
 
print('\n')
-for step, _ in sorted(test['test']['execution'].items(), key=lambda x: 
int(x[0])):
-print('Step %s: %s' % (step, 
test['test']['execution'][step]['action']))
-expected_output = 
test['test']['execution'][step]['expected_results']
+for step, _ in sorted(case['test']['execution'].items(), key=lambda x: 
int(x[0])):
+print('Step %s: %s' % (step, 
case['test']['execution'][step]['action']))
+expected_output = 
case['test']['execution'][step]['expected_results']
 if expected_output:
 print('Expected output: %s' % expected_output)
 while True:
@@ -118,31 +113,30 @@ cl

[OE-core] [PATCH 1/2] resulttool/manualexecution: Enable configuration options selection

2019-04-09 Thread Yeoh Ee Peng
Currently manualexecution requires the user to enter each configuration
manually, which leads to inconsistent inputs and typos.

Enable an optional feature where manualexecution uses a pre-compiled
configuration options file, so the user can select a configuration from
the pre-compiled list instead of keying it in manually. This eliminates
a source of human error.

The pre-compiled configuration options file is expected in the JSON
format below:

{
"bsps-hw": {
"IMAGE_BASENAME": {
"1": "core-image-sato-sdk"
},
"MACHINE": {
"1": "beaglebone-yocto",
"2": "edgerouter",
"3": "mpc8315e-rdb",
"4": "genericx86",
"5": "genericx86-64"
}
},
"bsps-qemu": {
"IMAGE_BASENAME": {
"1": "core-image-sato-sdk"
},
"MACHINE": {
"1": "qemuarm",
    "2": "qemuarm64",
"3": "qemumips",
"4": "qemumips64",
"5": "qemuppc",
"6": "qemux86",
"7": "qemux86-64"
}
}
}
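
For illustration, resolving a user's selection against a file like the above
comes down to two dictionary lookups (simplified; the real logic is in
_get_available_config_options and _choose_config_option below; the file path
is hypothetical):

    import json

    with open('manual_config_options.json') as f:
        config_options = json.load(f)
    options = config_options['bsps-qemu']['MACHINE']
    # The prompt lists '1: qemuarm' ... '7: qemux86-64'; the user enters '7':
    print(options['7'])  # -> qemux86-64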

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/manualexecution.py | 48 +--
 1 file changed, 39 insertions(+), 9 deletions(-)

diff --git a/scripts/lib/resulttool/manualexecution.py 
b/scripts/lib/resulttool/manualexecution.py
index c94f981..57e7b29 100755
--- a/scripts/lib/resulttool/manualexecution.py
+++ b/scripts/lib/resulttool/manualexecution.py
@@ -38,7 +38,21 @@ class ManualTestRunner(object):
 print('Only lowercase alphanumeric, hyphen and dot are allowed. 
Please try again')
 return output
 
-def _create_config(self):
+def _get_available_config_options(self, config_options, test_module, 
target_config):
+avail_config_options = None
+if test_module in config_options:
+avail_config_options = 
config_options[test_module].get(target_config)
+return avail_config_options
+
+def _choose_config_option(self, options):
+while True:
+output = input('{} = '.format('Option index number'))
+if output in options:
+break
+print('Only integer index inputs from above available 
configuration options are allowed. Please try again.')
+return options[output]
+
+def _create_config(self, config_options):
 from oeqa.utils.metadata import get_layers
 from oeqa.utils.commands import get_bb_var
 from resulttool.resultutils import store_map
@@ -54,11 +68,22 @@ class ManualTestRunner(object):
 
 extra_config = set(store_map['manual']) - set(self.configuration)
 for config in sorted(extra_config):
-print('-')
-print('This is configuration #%s. Please provide configuration 
value(use "None" if not applicable).' % config)
-print('-')
-value_conf = self._get_input('Configuration Value')
-print('-\n')
+avail_config_options = 
self._get_available_config_options(config_options, self.test_module, config)
+if avail_config_options:
+print('-')
+print('These are available configuration #%s options:' % 
config)
+print('-')
+for option, _ in sorted(avail_config_options.items(), 
key=lambda x: int(x[0])):
+print('%s: %s' % (option, avail_config_options[option]))
+print('Please select configuration option, enter the integer 
index number.')
+value_conf = self._choose_config_option(avail_config_options)
+print('-\n')
+else:
+print('-')
+print('This is configuration #%s. Please provide configuration 
value(use "None" if not applicable).' % config)
+print('-')
+value_conf = self._get_input('Configuration Value')
+print('-\n')
 self.configuration[config] = value_conf
 
 def _create_result_id(self):
@@ -99,9 +124,12 @@ class ManualTestRunner(object):
 basepath = os.environ['BUILDDIR']
 self.write_dir = basepath + '/tmp/log/manual/'
 
-def run_test(self, fil

[OE-core] [PATCH 2/2] resulttool/manualexecution: Enable creation of configuration option file

2019-04-09 Thread Yeoh Ee Peng
Allow the creation of a configuration options file based on user inputs.
This configuration options file will be used by manualexecution to
display configuration options, rather than requiring the user to input
configurations manually.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/manualexecution.py | 54 ++-
 1 file changed, 53 insertions(+), 1 deletion(-)

diff --git a/scripts/lib/resulttool/manualexecution.py 
b/scripts/lib/resulttool/manualexecution.py
index 57e7b29..12ef90d 100755
--- a/scripts/lib/resulttool/manualexecution.py
+++ b/scripts/lib/resulttool/manualexecution.py
@@ -24,6 +24,11 @@ def load_json_file(file):
 with open(file, "r") as f:
 return json.load(f)
 
+def write_json_file(f, json_data):
+os.makedirs(os.path.dirname(f), exist_ok=True)
+with open(f, 'w') as filedata:
+filedata.write(json.dumps(json_data, sort_keys=True, indent=4))
+
 class ManualTestRunner(object):
 
 def _get_testcases(self, file):
@@ -139,8 +144,53 @@ class ManualTestRunner(object):
 test_results.update(test_result)
 return self.configuration, self.result_id, self.write_dir, test_results
 
+def _get_true_false_input(self, input_message):
+yes_list = ['Y', 'YES']
+no_list = ['N', 'NO']
+while True:
+more_config_option = input(input_message).upper()
+if more_config_option in yes_list or more_config_option in no_list:
+break
+print('Invalid input!')
+if more_config_option in no_list:
+return False
+return True
+
+def make_config_option_file(self, logger, manual_case_file, 
config_options_file):
+config_options = {}
+if config_options_file:
+config_options = load_json_file(config_options_file)
+new_test_module = os.path.basename(manual_case_file).split('.')[0]
+print('Creating configuration options file for test module: %s' % 
new_test_module)
+new_config_options = {}
+
+while True:
+config_name = input('\nPlease provide test configuration to 
create:\n').upper()
+new_config_options[config_name] = {}
+while True:
+config_value = self._get_input('Configuration possible option 
value')
+config_option_index = len(new_config_options[config_name]) + 1
+new_config_options[config_name][config_option_index] = 
config_value
+more_config_option = self._get_true_false_input('\nIs there 
more configuration option input: (Y)es/(N)o\n')
+if not more_config_option:
+break
+more_config = self._get_true_false_input('\nIs there more 
configuration to create: (Y)es/(N)o\n')
+if not more_config:
+break
+
+if new_config_options:
+config_options[new_test_module] = new_config_options
+if not config_options_file:
+self._create_write_dir()
+config_options_file = os.path.join(self.write_dir, 
'manual_config_options.json')
+write_json_file(config_options_file, config_options)
+logger.info('Configuration option file created at %s' % 
config_options_file)
+
 def manualexecution(args, logger):
 testrunner = ManualTestRunner()
+if args.make_config_options_file:
+testrunner.make_config_option_file(logger, args.file, 
args.config_options_file)
+return 0
 get_configuration, get_result_id, get_write_dir, get_test_results = 
testrunner.run_test(args.file, args.config_options_file)
 resultjsonhelper = OETestResultJSONHelper()
 resultjsonhelper.dump_testresult_file(get_write_dir, get_configuration, 
get_result_id, get_test_results)
@@ -154,4 +204,6 @@ def register_commands(subparsers):
 parser_build.set_defaults(func=manualexecution)
 parser_build.add_argument('file', help='specify path to manual test case 
JSON file.Note: Please use \"\" to encapsulate the file path.')
 parser_build.add_argument('-c', '--config-options-file', default='',
-  help='the config options file to import and used 
as available configuration option selection')
+  help='the config options file to import and used 
as available configuration option selection or make config option file')
+parser_build.add_argument('-m', '--make-config-options-file', 
action='store_true',
+  help='make the configuration options file based 
on provided inputs')
-- 
2.7.4
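
A hypothetical session (module name and values invented): creating a MACHINE
configuration with two option values for a test module named mytests would
leave a manual_config_options.json like:

    {
        "mytests": {
            "MACHINE": {
                "1": "qemux86",
                "2": "qemux86-64"
            }
        }
    }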



[OE-core] [PATCH 2/4] resulttool/manualexecution: Enable display full steps without press enter

2019-04-04 Thread Yeoh Ee Peng
Currently manualexecution requires pressing the enter key to show each
step's information, which wastes execution time. Enable displaying the
full steps without needing to press enter.

Signed-off-by: Mazliana 
Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/manualexecution.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/scripts/lib/resulttool/manualexecution.py 
b/scripts/lib/resulttool/manualexecution.py
index 8ce7903..0783540 100755
--- a/scripts/lib/resulttool/manualexecution.py
+++ b/scripts/lib/resulttool/manualexecution.py
@@ -87,8 +87,9 @@ class ManualTestRunner(object):
 
print('\n')
 for step in sorted((self.jdata[test_id]['test']['execution']).keys()):
 print('Step %s: ' % step + 
self.jdata[test_id]['test']['execution']['%s' % step]['action'])
-print('Expected output: ' + 
self.jdata[test_id]['test']['execution']['%s' % step]['expected_results'])
-done = input('\nPlease press ENTER when you are done to proceed to 
next step.\n')
+expected_output = self.jdata[test_id]['test']['execution']['%s' % 
step]['expected_results']
+if expected_output:
+print('Expected output: ' + expected_output)
 while True:
 done = input('\nPlease provide test results: 
(P)assed/(F)ailed/(B)locked/(S)kipped? \n')
 done = done.lower()
-- 
2.7.4



[OE-core] [PATCH 4/4] resulttool/manualexecution: Refactor and simplify codebase

2019-04-04 Thread Yeoh Ee Peng
Simplify and remove unnecessary code.
Refactor loops to be more pythonic.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/manualexecution.py | 56 +++
 1 file changed, 20 insertions(+), 36 deletions(-)

diff --git a/scripts/lib/resulttool/manualexecution.py 
b/scripts/lib/resulttool/manualexecution.py
index 9a29b0b..c94f981 100755
--- a/scripts/lib/resulttool/manualexecution.py
+++ b/scripts/lib/resulttool/manualexecution.py
@@ -24,24 +24,12 @@ def load_json_file(file):
 with open(file, "r") as f:
 return json.load(f)
 
-
 class ManualTestRunner(object):
-def __init__(self):
-self.jdata = ''
-self.test_module = ''
-self.test_cases_id = ''
-self.configuration = ''
-self.starttime = ''
-self.result_id = ''
-self.write_dir = ''
 
 def _get_testcases(self, file):
 self.jdata = load_json_file(file)
-self.test_cases_id = []
 self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
-for i in self.jdata:
-self.test_cases_id.append(i['test']['@alias'])
-
+
 def _get_input(self, config):
 while True:
 output = input('{} = '.format(config))
@@ -67,45 +55,42 @@ class ManualTestRunner(object):
 extra_config = set(store_map['manual']) - set(self.configuration)
 for config in sorted(extra_config):
 print('-')
-print('This is configuration #%s. Please provide configuration 
value(use "None" if not applicable).'
-  % config)
+print('This is configuration #%s. Please provide configuration 
value(use "None" if not applicable).' % config)
 print('-')
 value_conf = self._get_input('Configuration Value')
 print('-\n')
 self.configuration[config] = value_conf
 
 def _create_result_id(self):
-self.result_id = 'manual_' + self.test_module + '_' + self.starttime
+self.result_id = 'manual_%s_%s' % (self.test_module, self.starttime)
 
-def _execute_test_steps(self, test_id):
+def _execute_test_steps(self, test):
 test_result = {}
-total_steps = len(self.jdata[test_id]['test']['execution'].keys())
 
print('')
-print('Executing test case:' + '' '' + self.test_cases_id[test_id])
+print('Executing test case: %s' % test['test']['@alias'])
 
print('')
-print('You have total ' + str(total_steps) + ' test steps to be 
executed.')
+print('You have total %s test steps to be executed.' % 
len(test['test']['execution']))
 
print('\n')
-for step, _ in 
sorted(self.jdata[test_id]['test']['execution'].items(), key=lambda x: 
int(x[0])):
-print('Step %s: ' % step + 
self.jdata[test_id]['test']['execution']['%s' % step]['action'])
-expected_output = self.jdata[test_id]['test']['execution']['%s' % 
step]['expected_results']
+for step, _ in sorted(test['test']['execution'].items(), key=lambda x: 
int(x[0])):
+print('Step %s: %s' % (step, 
test['test']['execution'][step]['action']))
+expected_output = 
test['test']['execution'][step]['expected_results']
 if expected_output:
-print('Expected output: ' + expected_output)
+print('Expected output: %s' % expected_output)
 while True:
-done = input('\nPlease provide test results: 
(P)assed/(F)ailed/(B)locked/(S)kipped? \n')
-done = done.lower()
+done = input('\nPlease provide test results: 
(P)assed/(F)ailed/(B)locked/(S)kipped? \n').lower()
 result_types = {'p':'PASSED',
-'f':'FAILED',
-'b':'BLOCKED',
-'s':'SKIPPED'}
+'f':'FAILED',
+'b':'BLOCKED',
+'s':'SKIPPED'}
 if done in result_types:
 for r in result_types:
 if done == r:
 res = result_types[r]
 if res == 'FAILED':
 log_input = input('\nPlease enter the error and 
the description of the log: (Ex:log:211 Error Bitbake)\n')
-test_result.update({self.test_cases_id[test_id]: 
{'status': '%s' % res, 'log': '%s' % log_input}})
+test_result.update({test['test']['@alias']: 
{'status': '%s' % res, 'log': '%s' % log_input}})
 else:
-

[OE-core] [PATCH 1/4] resulttool/manualexecution: Standardize input check

2019-04-04 Thread Yeoh Ee Peng
Current input checking does not match the standard inputs used by the
QA team. Change the input checking to match the QA team's standard
inputs.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/manualexecution.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/lib/resulttool/manualexecution.py 
b/scripts/lib/resulttool/manualexecution.py
index 6487cd9..8ce7903 100755
--- a/scripts/lib/resulttool/manualexecution.py
+++ b/scripts/lib/resulttool/manualexecution.py
@@ -45,9 +45,9 @@ class ManualTestRunner(object):
 def _get_input(self, config):
 while True:
 output = input('{} = '.format(config))
-if re.match('^[a-zA-Z0-9_-]+$', output):
+if re.match('^[a-z0-9-.]+$', output):
 break
-print('Only alphanumeric and underscore/hyphen are allowed. Please 
try again')
+print('Only lowercase alphanumeric, hyphen and dot are allowed. 
Please try again')
 return output
 
 def _create_config(self):
-- 
2.7.4



[OE-core] [PATCH 3/4] resulttool/manualexecution: Fixed step sorted by integer

2019-04-04 Thread Yeoh Ee Peng
Currently manualexecution displays steps sorted as strings, so steps
are not ordered correctly when there are more than 9 steps.

Fix the ordering by sorting steps as integers.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/manualexecution.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/lib/resulttool/manualexecution.py 
b/scripts/lib/resulttool/manualexecution.py
index 0783540..9a29b0b 100755
--- a/scripts/lib/resulttool/manualexecution.py
+++ b/scripts/lib/resulttool/manualexecution.py
@@ -85,7 +85,7 @@ class ManualTestRunner(object):
 
print('')
 print('You have total ' + str(total_steps) + ' test steps to be 
executed.')
 
print('\n')
-for step in sorted((self.jdata[test_id]['test']['execution']).keys()):
+for step, _ in 
sorted(self.jdata[test_id]['test']['execution'].items(), key=lambda x: 
int(x[0])):
 print('Step %s: ' % step + 
self.jdata[test_id]['test']['execution']['%s' % step]['action'])
 expected_output = self.jdata[test_id]['test']['execution']['%s' % 
step]['expected_results']
 if expected_output:
-- 
2.7.4
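
The difference is easy to demonstrate: sorting step numbers as strings puts
'10' before '2', while sorting with an integer key gives the intended order:

    >>> sorted(['1', '10', '2'])
    ['1', '10', '2']
    >>> sorted(['1', '10', '2'], key=int)
    ['1', '2', '10']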



[OE-core] [PATCH 0/4] resulttool/manualexecution: Enhancement and fixes

2019-04-04 Thread Yeoh Ee Peng
This series of patches includes enhancements and fixes for manualexecution:
 - Enhance input checking to standardize it
 - Display full steps without pressing enter
 - Fix test step display order
 - Refactor to simplify the code and align with pythonic style

Yeoh Ee Peng (4):
  resulttool/manualexecution: Standardize input check
  resulttool/manualexecution: Enable display full steps without press
enter
  resulttool/manualexecution: Fixed step sorted by integer
  resulttool/manualexecution: Refactor and simplify codebase

 scripts/lib/resulttool/manualexecution.py | 61 ---
 1 file changed, 23 insertions(+), 38 deletions(-)

-- 
2.7.4



[OE-core] [PATCH 2/3] resulttool/store: Enable add EXECUTED_BY config to results

2019-04-03 Thread Yeoh Ee Peng
Currently the stored results do not have the information needed to trace
who executed the tests. Enable store to add an EXECUTED_BY configuration
to the results file in order to track who executed the tests.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/store.py | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/scripts/lib/resulttool/store.py b/scripts/lib/resulttool/store.py
index 5e33716..7e89692 100644
--- a/scripts/lib/resulttool/store.py
+++ b/scripts/lib/resulttool/store.py
@@ -27,13 +27,16 @@ import oeqa.utils.gitarchive as gitarchive
 def store(args, logger):
 tempdir = tempfile.mkdtemp(prefix='testresults.')
 try:
+configs = resultutils.extra_configs.copy()
+if args.executed_by:
+configs['EXECUTED_BY'] = args.executed_by
 results = {}
 logger.info('Reading files from %s' % args.source)
 for root, dirs,  files in os.walk(args.source):
 for name in files:
 f = os.path.join(root, name)
 if name == "testresults.json":
-resultutils.append_resultsdata(results, f)
+resultutils.append_resultsdata(results, f, configs=configs)
 elif args.all:
 dst = f.replace(args.source, tempdir + "/")
 os.makedirs(os.path.dirname(dst), exist_ok=True)
@@ -96,4 +99,6 @@ def register_commands(subparsers):
   help='include all files, not just 
testresults.json files')
 parser_build.add_argument('-e', '--allow-empty', action='store_true',
   help='don\'t error if no results to store are 
found')
+parser_build.add_argument('-x', '--executed-by', default='',
+  help='add executed-by configuration to each 
result file')
 
-- 
2.7.4



[OE-core] [PATCH 3/3] resulttool/merge: Enable control TESTSERIES and extra configurations

2019-04-03 Thread Yeoh Ee Peng
Currently the QA team needs to merge test result files from multiple sources.
Adding the TESTSERIES configuration too early has negative
implications for reporting and regression. Enable control so that TESTSERIES
is added only when needed. Also enable adding the EXECUTED_BY configuration
when needed.
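
A rough sketch of how the new flags map onto the configs dict this patch passes around (the 'jdoe' value is made up):

    # resulttool merge base/ target/           -> configs == {'TESTSERIES': ''}
    # resulttool merge -t base/ target/        -> configs == {}
    # resulttool merge -x jdoe base/ target/   -> configs == {'TESTSERIES': '', 'EXECUTED_BY': 'jdoe'}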

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/merge.py | 20 ++--
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/scripts/lib/resulttool/merge.py b/scripts/lib/resulttool/merge.py
index 3e4b7a3..d40a72d 100644
--- a/scripts/lib/resulttool/merge.py
+++ b/scripts/lib/resulttool/merge.py
@@ -17,16 +17,21 @@ import json
 import resulttool.resultutils as resultutils
 
 def merge(args, logger):
+    configs = {}
+    if not args.not_add_testseries:
+        configs = resultutils.extra_configs.copy()
+    if args.executed_by:
+        configs['EXECUTED_BY'] = args.executed_by
     if os.path.isdir(args.target_results):
-        results = resultutils.load_resultsdata(args.target_results, configmap=resultutils.store_map)
-        resultutils.append_resultsdata(results, args.base_results, configmap=resultutils.store_map)
+        results = resultutils.load_resultsdata(args.target_results, configmap=resultutils.store_map, configs=configs)
+        resultutils.append_resultsdata(results, args.base_results, configmap=resultutils.store_map, configs=configs)
         resultutils.save_resultsdata(results, args.target_results)
     else:
-        results = resultutils.load_resultsdata(args.base_results, configmap=resultutils.flatten_map)
+        results = resultutils.load_resultsdata(args.base_results, configmap=resultutils.flatten_map, configs=configs)
         if os.path.exists(args.target_results):
-            resultutils.append_resultsdata(results, args.target_results, configmap=resultutils.flatten_map)
+            resultutils.append_resultsdata(results, args.target_results, configmap=resultutils.flatten_map, configs=configs)
         resultutils.save_resultsdata(results, os.path.dirname(args.target_results), fn=os.path.basename(args.target_results))
-
+    logger.info('Merged results to %s' % os.path.dirname(args.target_results))
     return 0
 
 def register_commands(subparsers):
@@ -39,4 +44,7 @@ def register_commands(subparsers):
                               help='the results file/directory to import')
     parser_build.add_argument('target_results',
                               help='the target file or directory to merge the base_results with')
-
+    parser_build.add_argument('-t', '--not-add-testseries', action='store_true',
+                              help='do not add testseries configuration to results')
+    parser_build.add_argument('-x', '--executed-by', default='',
+                              help='add executed-by configuration to each result file')
-- 
2.7.4



[OE-core] [PATCH v03] resulttool/merge: enhance merge and control testseries

2019-04-03 Thread Yeoh Ee Peng
v01:
Enable merging results where both base and target are directories.

v02:
Follow the suggestion from RP to reuse the existing resultutils code base,
which merges results to a flat file.
Refactor the resultutils code base to enable merging results where both base
and target are directories.
Add control over the creation of the testseries configuration.

v03:
Follow the suggestion from RP to break down the patches to ease review.
Enable the resultutils library to control the adding of the "TESTSERIES"
configuration, as well as allow adding extra configurations when needed.
Enable store to add an "EXECUTED_BY" configuration to track who executed
each results file.
Enable merge to control the adding of the "TESTSERIES" configuration,
as well as allow adding an "EXECUTED_BY" configuration when needed.

Yeoh Ee Peng (3):
  resulttool/resultutils: Enable add extra configurations to results
  resulttool/store: Enable add EXECUTED_BY config to results
  resulttool/merge: Enable control TESTSERIES and extra configurations

 scripts/lib/resulttool/merge.py   | 20 ++--
 scripts/lib/resulttool/resultutils.py | 19 ---
 scripts/lib/resulttool/store.py   |  7 ++-
 3 files changed, 32 insertions(+), 14 deletions(-)

-- 
2.7.4



[OE-core] [PATCH 1/3] resulttool/resultutils: Enable add extra configurations to results

2019-04-03 Thread Yeoh Ee Peng
Currently the resultutils library always adds the "TESTSERIES" configuration
to results. Enhance this to allow control over adding the "TESTSERIES"
configuration, as well as to allow adding extra configurations
when needed.
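
For reference, the "TESTSERIES" value, when not already present, is derived from the directory holding the results file (the path below is illustrative):

    import os

    f = 'testresults/qemuarm/testresults.json'
    print(os.path.basename(os.path.dirname(f)))   # -> 'qemuarm'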

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/resultutils.py | 19 ---
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/scripts/lib/resulttool/resultutils.py b/scripts/lib/resulttool/resultutils.py
index 153f2b8..bfbd381 100644
--- a/scripts/lib/resulttool/resultutils.py
+++ b/scripts/lib/resulttool/resultutils.py
@@ -39,10 +39,12 @@ store_map = {
     "manual": ['TEST_TYPE', 'TEST_MODULE', 'MACHINE', 'IMAGE_BASENAME']
 }
 
+extra_configs = {'TESTSERIES': ''}
+
 #
 # Load the json file and append the results data into the provided results dict
 #
-def append_resultsdata(results, f, configmap=store_map):
+def append_resultsdata(results, f, configmap=store_map, configs=extra_configs):
     if type(f) is str:
         with open(f, "r") as filedata:
             data = json.load(filedata)
@@ -51,12 +53,15 @@ def append_resultsdata(results, f, configmap=store_map):
     for res in data:
         if "configuration" not in data[res] or "result" not in data[res]:
             raise ValueError("Test results data without configuration or result section?")
-        if "TESTSERIES" not in data[res]["configuration"]:
-            data[res]["configuration"]["TESTSERIES"] = os.path.basename(os.path.dirname(f))
+        for config in configs:
+            if config == "TESTSERIES" and "TESTSERIES" not in data[res]["configuration"]:
+                data[res]["configuration"]["TESTSERIES"] = os.path.basename(os.path.dirname(f))
+                continue
+            if config not in data[res]["configuration"]:
+                data[res]["configuration"][config] = configs[config]
         testtype = data[res]["configuration"].get("TEST_TYPE")
         if testtype not in configmap:
             raise ValueError("Unknown test type %s" % testtype)
-        configvars = configmap[testtype]
         testpath = "/".join(data[res]["configuration"].get(i) for i in configmap[testtype])
         if testpath not in results:
             results[testpath] = {}
@@ -72,16 +77,16 @@ def append_resultsdata(results, f, configmap=store_map):
 # Walk a directory and find/load results data
 # or load directly from a file
 #
-def load_resultsdata(source, configmap=store_map):
+def load_resultsdata(source, configmap=store_map, configs=extra_configs):
     results = {}
     if os.path.isfile(source):
-        append_resultsdata(results, source, configmap)
+        append_resultsdata(results, source, configmap, configs)
         return results
     for root, dirs, files in os.walk(source):
         for name in files:
             f = os.path.join(root, name)
             if name == "testresults.json":
-                append_resultsdata(results, f, configmap)
+                append_resultsdata(results, f, configmap, configs)
     return results
 
 def filter_resultsdata(results, resultid):
-- 
2.7.4



[OE-core] [PATCH 2/2] resulttool/merge: Enable turn off testseries configuration creation

2019-04-02 Thread Yeoh Ee Peng
The testseries configuration has important implications for reporting and
regression. Enable turning off testseries configuration creation
during results merge.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/merge.py   | 13 +++--
 scripts/lib/resulttool/resultutils.py | 10 +-
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/scripts/lib/resulttool/merge.py b/scripts/lib/resulttool/merge.py
index 3e4b7a3..5fffb54 100644
--- a/scripts/lib/resulttool/merge.py
+++ b/scripts/lib/resulttool/merge.py
@@ -18,15 +18,15 @@ import resulttool.resultutils as resultutils
 
 def merge(args, logger):
     if os.path.isdir(args.target_results):
-        results = resultutils.load_resultsdata(args.target_results, configmap=resultutils.store_map)
-        resultutils.append_resultsdata(results, args.base_results, configmap=resultutils.store_map)
+        results = resultutils.load_resultsdata(args.target_results, configmap=resultutils.store_map, add_testseries=args.off_add_testseries)
+        resultutils.append_resultsdata(results, args.base_results, configmap=resultutils.store_map, add_testseries=args.off_add_testseries)
         resultutils.save_resultsdata(results, args.target_results)
     else:
-        results = resultutils.load_resultsdata(args.base_results, configmap=resultutils.flatten_map)
+        results = resultutils.load_resultsdata(args.base_results, configmap=resultutils.flatten_map, add_testseries=args.off_add_testseries)
         if os.path.exists(args.target_results):
-            resultutils.append_resultsdata(results, args.target_results, configmap=resultutils.flatten_map)
+            resultutils.append_resultsdata(results, args.target_results, configmap=resultutils.flatten_map, add_testseries=args.off_add_testseries)
         resultutils.save_resultsdata(results, os.path.dirname(args.target_results), fn=os.path.basename(args.target_results))
-
+    logger.info('Merged results to %s' % os.path.dirname(args.target_results))
     return 0
 
 def register_commands(subparsers):
@@ -39,4 +39,5 @@ def register_commands(subparsers):
                               help='the results file/directory to import')
     parser_build.add_argument('target_results',
                               help='the target file or directory to merge the base_results with')
-
+    parser_build.add_argument('-o', '--off-add-testseries', action='store_false',
+                              help='turn off add testseries configuration to results')
diff --git a/scripts/lib/resulttool/resultutils.py b/scripts/lib/resulttool/resultutils.py
index 153f2b8..4318ee7 100644
--- a/scripts/lib/resulttool/resultutils.py
+++ b/scripts/lib/resulttool/resultutils.py
@@ -42,7 +42,7 @@ store_map = {
 #
 # Load the json file and append the results data into the provided results dict
 #
-def append_resultsdata(results, f, configmap=store_map):
+def append_resultsdata(results, f, configmap=store_map, add_testseries=True):
     if type(f) is str:
         with open(f, "r") as filedata:
             data = json.load(filedata)
@@ -51,7 +51,7 @@ def append_resultsdata(results, f, configmap=store_map):
     for res in data:
         if "configuration" not in data[res] or "result" not in data[res]:
             raise ValueError("Test results data without configuration or result section?")
-        if "TESTSERIES" not in data[res]["configuration"]:
+        if add_testseries and "TESTSERIES" not in data[res]["configuration"]:
             data[res]["configuration"]["TESTSERIES"] = os.path.basename(os.path.dirname(f))
         testtype = data[res]["configuration"].get("TEST_TYPE")
         if testtype not in configmap:
@@ -72,16 +72,16 @@ def append_resultsdata(results, f, configmap=store_map):
 # Walk a directory and find/load results data
 # or load directly from a file
 #
-def load_resultsdata(source, configmap=store_map):
+def load_resultsdata(source, configmap=store_map, add_testseries=True):
     results = {}
     if os.path.isfile(source):
-        append_resultsdata(results, source, configmap)
+        append_resultsdata(results, source, configmap, add_testseries)
         return results
     for root, dirs, files in os.walk(source):
         for name in files:
             f = os.path.join(root, name)
             if name == "testresults.json":
-                append_resultsdata(results, f, configmap)
+                append_resultsdata(results, f, configmap, add_testseries)
     return results
 
 def filter_resultsdata(results, resultid):
-- 
2.7.4



[OE-core] [PATCH 1/2] resulttool: Enable report for single result file

2019-04-02 Thread Yeoh Ee Peng
The current validation check function inside resulttool disallows
reporting on a single result file, although the underlying library
is able to handle both a directory and a file as the source input to report.
Remove the validation check, as it is no longer needed, to
enable reporting on a single result file.
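
A minimal sketch of why the check was redundant (paths are illustrative): load_resultsdata() already branches on file vs directory.

    import resulttool.resultutils as resultutils

    results = resultutils.load_resultsdata('testresults.json')   # single file
    results = resultutils.load_resultsdata('results-dir/')       # walks the tree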

Signed-off-by: Yeoh Ee Peng 
---
 scripts/resulttool | 10 --
 1 file changed, 10 deletions(-)

diff --git a/scripts/resulttool b/scripts/resulttool
index 5a89e1c..18ac101 100755
--- a/scripts/resulttool
+++ b/scripts/resulttool
@@ -51,13 +51,6 @@ import resulttool.report
 import resulttool.manualexecution
 logger = scriptutils.logger_create('resulttool')
 
-def _validate_user_input_arguments(args):
-    if hasattr(args, "source_dir"):
-        if not os.path.isdir(args.source_dir):
-            logger.error('source_dir argument need to be a directory : %s' % args.source_dir)
-            return False
-    return True
-
 def main():
     parser = argparse_oe.ArgumentParser(description="OEQA test result manipulation tool.",
                                         epilog="Use %(prog)s <subcommand> --help to get help on a specific command")
@@ -80,9 +73,6 @@ def main():
     elif args.quiet:
         logger.setLevel(logging.ERROR)
 
-    if not _validate_user_input_arguments(args):
-        return -1
-
     try:
         ret = args.func(args, logger)
     except argparse_oe.ArgumentUsageError as ae:
-- 
2.7.4



Re: [OE-core] [PATCH] resulttool/merge: Merge files from folders and control add testseries

2019-03-28 Thread Yeoh, Ee Peng
Hi RP,

Yes, we will separate the changes into different patches as suggested.
Thank you for your inputs. 

Thanks,
Ee Peng 

-Original Message-
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Thursday, March 28, 2019 3:43 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH] resulttool/merge: Merge files from folders and 
control add testseries

Hi Ee Peng,

This patch isn't really in a form where it can be easily reviewed or accepted. 
A given patch needs to do one specific thing, so for example if you're 
renaming/refactoring a function that would belong in its own patch. If you're 
changing functionality, that would also be best in its own patch.

In the patch below you make various refactoring changes as well as making 
functionality changes, meaning it's very hard to separate the real functionality 
change from the rest of the "noise" of the renaming and refactoring. This makes 
review extremely difficult.

I'm not even sure I agree with some of the renaming/refactoring, e.g.
make_directory_and_write_json_file is a horrible name for a function and it 
only appears to be called once? There are probably other issues but it's really 
hard to tell.

Could you split up this series, ideally showing the functionality change in its 
own patch please. I'm not promising it would all be accepted but it would at 
least allow review and allow the changes to be understood.

Cheers,

Richard

On Thu, 2019-03-28 at 12:54 +0800, Yeoh Ee Peng wrote:
> QA team execute extra testing that create multiple test result files, 
> where these test result files need to be merged under various use 
> cases.
> Furthermore, during results merging, user need control over the 
> testseries configuration creation as this configuration has important 
> implication to report and regression.
> 
> Current merge do not support merge results where both base and target 
> are directory.
> 
> Traceback (most recent call last):
>   File "/home/poky/scripts/resulttool", line 93, in <module>
>     sys.exit(main())
>   File "/home/poky/scripts/resulttool", line 87, in main
>     ret = args.func(args, logger)
>   File "/home/poky/scripts/lib/resulttool/merge.py", line 22, in merge
>     resultutils.append_resultsdata(results, args.base_results, configmap=resultutils.store_map)
>   File "/home/poky/scripts/lib/resulttool/resultutils.py", line 47, in append_resultsdata
>     with open(f, "r") as filedata:
> IsADirectoryError: [Errno 21] Is a directory: ''
> 
> This patches enable merge for both base and target as directory.
> Also, enable control the creation of testseries configuration.
> 
> Previously the append_resultsdata function only allow append the 
> results data to the map_results data (where map_results data wrapped 
> the results data with configuration map as the key).
> Initially, we tried to implement an extra function where it will 
> enable append one map_results to another map_results data. But we 
> abandoned this alternative as this new append function will be pretty 
> much a duplicated function to the original append_resultsdata, and 
> these will create two append functions which they might be both hard 
> to maintain and confusing. Thus, we tried to refactor the append
> function to enable a single append function to be used in all
> situations. Furthermore, since the map_results were
> report and regression, we pulled the instructions used to turn results 
> data to map_results data to another function.
> Finally, we renamed the functions and arguments to clearly separate
> the functions using results data from the ones using map_results data.
> 
> Signed-off-by: Yeoh Ee Peng 
> 



Re: [OE-core] [PATCH] resulttool/merge: Enable merge results to one file

2019-03-27 Thread Yeoh, Ee Peng
Hi RP,

Yes, you are right, I missed the current feature to merge to a flat file. Thanks
for your inputs and sharing.

After testing the merge-to-flat-file feature, these are our findings:
1) The existing merge could not support the use case where both base and target
are directories.
Traceback (most recent call last):
  File "/home/poky/scripts/resulttool", line 93, in <module>
    sys.exit(main())
  File "/home/poky/scripts/resulttool", line 87, in main
    ret = args.func(args, logger)
  File "/home/poky/scripts/lib/resulttool/merge.py", line 22, in merge
    resultutils.append_resultsdata(results, args.base_results, configmap=resultutils.store_map)
  File "/home/poky/scripts/lib/resulttool/resultutils.py", line 47, in append_resultsdata
    with open(f, "r") as filedata:
IsADirectoryError: [Errno 21] Is a directory: ''

2) The existing merge always creates the testseries configuration, which has
important implications for reporting and regression.

For the current QA process, we need the merge to be more flexible: it should
allow merging where both base and target are directories, and we need control
over when the testseries configuration is created.

To fulfill both of these requirements, we tried the following:

Previously the append_resultsdata function only allowed appending the results
data to the map_results data (where the map_results data wrapped the results
data with the configuration map as the key).

Initially, we tried to implement an extra function that would enable appending
one map_results to another map_results data. But we abandoned this alternative,
as the new append function would be pretty much a duplicate of the original
append_resultsdata, and it would create two append functions that might be both
hard to maintain and confusing. Thus, we refactored the append function so that
a single append function can be used for most situations. Furthermore, since
the map_results were only needed by report and regression, we pulled the
instructions used to turn results data into map_results data out into another
function. Finally, we renamed the functions and arguments to clearly separate
the functions using results data from the ones using map_results data.

The new patches are below:
http://lists.openembedded.org/pipermail/openembedded-core/2019-March/280547.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-March/280548.html

Please let us know if you have any more inputs or questions. Thank you very 
much for your attention and help!

Best regards,
Yeoh Ee Peng 

-Original Message-
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Tuesday, March 26, 2019 8:52 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH] resulttool/merge: Enable merge results to one 
file

On Tue, 2019-03-26 at 10:02 +0800, Yeoh Ee Peng wrote:
> QA team execute extra testing that create multiple test result files, 
> where these test result files need to be merged into a single file 
> under certain use case.
> 
> Enable merge to allow merging results into a single test result file.
> 
> Signed-off-by: Yeoh Ee Peng 
> ---
>  scripts/lib/resulttool/merge.py   | 29 -
>  scripts/lib/resulttool/resultutils.py | 76 
> +--
>  2 files changed, 82 insertions(+), 23 deletions(-)
> 
> diff --git a/scripts/lib/resulttool/merge.py 
> b/scripts/lib/resulttool/merge.py index 3e4b7a3..90b3cb3 100644
> --- a/scripts/lib/resulttool/merge.py
> +++ b/scripts/lib/resulttool/merge.py
> @@ -17,6 +17,26 @@ import json
>  import resulttool.resultutils as resultutils
>  
>  def merge(args, logger):
> +    if args.merge_to_one:
> +        if os.path.isdir(args.target_results):
> +            target_results = resultutils.load_results(args.target_results)
> +        else:
> +            target_results = resultutils.append_results({}, args.target_results)

Looking at load_resultsdata:

def load_resultsdata(source, configmap=store_map):
    results = {}
    if os.path.isfile(source):
        append_resultsdata(results, source, configmap)
        return results

The code above can therefore be simplified to:

target_results = resultutils.load_results(args.target_results)

?

> +        if os.path.isdir(args.base_results):
> +            base_results = resultutils.load_results(args.base_results)
> +            results = resultutils.append_results(target_results, base_results)
> +        else:
> +            results = resultutils.append_results(target_results, args.base_results)


Again, I'm not sure you need to differentiate between a file and a directory 
given the way the code works internally?

> +
> +        target_file_dir = os.path.join(os.path.dirname(args.target_results), 'merged_results/testresults.json')

[OE-core] [PATCH v02] resulttool/merge: enhance merge and control testseries

2019-03-27 Thread Yeoh Ee Peng
v01:
Enable merging results where both base and target are directories.

v02:
Follow the suggestion from RP to reuse the existing resultutils
code base, which merges results to a flat file.
Refactor the resultutils code base to enable merging results where both
base and target are directories.
Add control over the creation of the testseries configuration.

Yeoh Ee Peng (1):
  resulttool/merge: Merge files from folders and control add testseries

 meta/lib/oeqa/selftest/cases/resulttooltests.py |  16 ++--
 scripts/lib/resulttool/merge.py |  29 --
 scripts/lib/resulttool/regression.py|  10 +-
 scripts/lib/resulttool/report.py|   6 +-
 scripts/lib/resulttool/resultutils.py   | 118 +++-
 scripts/lib/resulttool/store.py |   5 +-
 6 files changed, 112 insertions(+), 72 deletions(-)

-- 
2.7.4



[OE-core] [PATCH] resulttool/merge: Merge files from folders and control add testseries

2019-03-27 Thread Yeoh Ee Peng
The QA team executes extra testing that creates multiple test result files,
and these test result files need to be merged under various use cases.
Furthermore, during results merging, users need control over
testseries configuration creation, as this configuration has important
implications for reporting and regression.

The current merge does not support merging results where both base and target
are directories.

Traceback (most recent call last):
  File "/home/poky/scripts/resulttool", line 93, in <module>
    sys.exit(main())
  File "/home/poky/scripts/resulttool", line 87, in main
    ret = args.func(args, logger)
  File "/home/poky/scripts/lib/resulttool/merge.py", line 22, in merge
    resultutils.append_resultsdata(results, args.base_results, configmap=resultutils.store_map)
  File "/home/poky/scripts/lib/resulttool/resultutils.py", line 47, in append_resultsdata
    with open(f, "r") as filedata:
IsADirectoryError: [Errno 21] Is a directory: ''

This patch enables merging when both base and target are directories.
It also enables control over the creation of the testseries configuration.

Previously the append_resultsdata function only allowed appending
the results data to the map_results data (where the map_results data
wrapped the results data with the configuration map as the key).
Initially, we tried to implement an extra function that would
enable appending one map_results to another map_results data. But
we abandoned this alternative, as the new append function would be
pretty much a duplicate of the original append_resultsdata,
and it would create two append functions that might be both
hard to maintain and confusing. Thus, we refactored the
append function so that a single append function can be used
in all situations. Furthermore, since the map_results were
only needed by report and regression, we pulled the instructions
used to turn results data into map_results data out into another function.
Finally, we renamed the functions and arguments to clearly
separate the functions using results data from the ones using
map_results data.
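
A sketch of the resulting usage, with the function names from this patch and illustrative paths:

    import resulttool.resultutils as resultutils

    results = {}
    resultutils.append_results(results, 'base/testresults.json')
    resultutils.append_results(results, 'target/testresults.json')
    # the configuration-keyed view is built separately, only where
    # report/regression need it:
    map_results = resultutils.get_map_results(results, configmap=resultutils.store_map)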

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/selftest/cases/resulttooltests.py |  16 ++--
 scripts/lib/resulttool/merge.py |  29 --
 scripts/lib/resulttool/regression.py|  10 +-
 scripts/lib/resulttool/report.py|   6 +-
 scripts/lib/resulttool/resultutils.py   | 118 +++-
 scripts/lib/resulttool/store.py |   5 +-
 6 files changed, 112 insertions(+), 72 deletions(-)

diff --git a/meta/lib/oeqa/selftest/cases/resulttooltests.py b/meta/lib/oeqa/selftest/cases/resulttooltests.py
index 0a089c0..ea7d02e 100644
--- a/meta/lib/oeqa/selftest/cases/resulttooltests.py
+++ b/meta/lib/oeqa/selftest/cases/resulttooltests.py
@@ -60,10 +60,11 @@ class ResultToolTests(OESelftestTestCase):
     def test_regression_can_get_regression_base_target_pair(self):
 
         results = {}
-        resultutils.append_resultsdata(results, ResultToolTests.base_results_data)
-        resultutils.append_resultsdata(results, ResultToolTests.target_results_data)
-        self.assertTrue('target_result1' in results['runtime/mydistro/qemux86/image'], msg="Pair not correct:%s" % results)
-        self.assertTrue('target_result3' in results['runtime/mydistro/qemux86-64/image'], msg="Pair not correct:%s" % results)
+        resultutils.append_results(results, ResultToolTests.base_results_data)
+        resultutils.append_results(results, ResultToolTests.target_results_data)
+        map_results = resultutils.get_map_results(results)
+        self.assertTrue('target_result1' in map_results['runtime/mydistro/qemux86/image'], msg="Pair not correct:%s" % map_results)
+        self.assertTrue('target_result3' in map_results['runtime/mydistro/qemux86-64/image'], msg="Pair not correct:%s" % map_results)
 
     def test_regrresion_can_get_regression_result(self):
         base_result_data = {'result': {'test1': {'status': 'PASSED'},
@@ -88,7 +89,8 @@ class ResultToolTests(OESelftestTestCase):
 
     def test_merge_can_merged_results(self):
         results = {}
-        resultutils.append_resultsdata(results, ResultToolTests.base_results_data, configmap=resultutils.flatten_map)
-        resultutils.append_resultsdata(results, ResultToolTests.target_results_data, configmap=resultutils.flatten_map)
-        self.assertEqual(len(results[''].keys()), 5, msg="Flattened results not correct %s" % str(results))
+        resultutils.append_results(results, ResultToolTests.base_results_data)
+        resultutils.append_results(results, ResultToolTests.target_results_data)
+        map_results = resultutils.get_map_results(results, configmap=resultutils.flatten_map)
+        self.assertEqual(len(map_results[''].keys()), 5, msg="Flattened results not correct %s" % str(map_results))
 
diff --git a/scripts/lib/resulttool/merge.py b/scripts/lib/resulttool/merge.py

[OE-core] [PATCH] resulttool/merge: Enable merge results to one file

2019-03-25 Thread Yeoh Ee Peng
The QA team executes extra testing that creates multiple test result files,
and these test result files need to be merged into a single file
under certain use cases.

Enable merge to allow merging results into a single test result file.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/merge.py   | 29 -
 scripts/lib/resulttool/resultutils.py | 76 +--
 2 files changed, 82 insertions(+), 23 deletions(-)

diff --git a/scripts/lib/resulttool/merge.py b/scripts/lib/resulttool/merge.py
index 3e4b7a3..90b3cb3 100644
--- a/scripts/lib/resulttool/merge.py
+++ b/scripts/lib/resulttool/merge.py
@@ -17,6 +17,26 @@ import json
 import resulttool.resultutils as resultutils
 
 def merge(args, logger):
+    if args.merge_to_one:
+        if os.path.isdir(args.target_results):
+            target_results = resultutils.load_results(args.target_results)
+        else:
+            target_results = resultutils.append_results({}, args.target_results)
+        if os.path.isdir(args.base_results):
+            base_results = resultutils.load_results(args.base_results)
+            results = resultutils.append_results(target_results, base_results)
+        else:
+            results = resultutils.append_results(target_results, args.base_results)
+
+        target_file_dir = os.path.join(os.path.dirname(args.target_results), 'merged_results/testresults.json')
+        if os.path.isdir(args.target_results):
+            target_file_dir = os.path.join(args.target_results, 'merged_results/testresults.json')
+        if args.merge_to_one_dir:
+            target_file_dir = os.path.join(args.merge_to_one_dir, 'testresults.json')
+        resultutils.make_directory_and_write_json_file(target_file_dir, results)
+        logger.info('Merged results to %s' % target_file_dir)
+        return 0
+
     if os.path.isdir(args.target_results):
         results = resultutils.load_resultsdata(args.target_results, configmap=resultutils.store_map)
         resultutils.append_resultsdata(results, args.base_results, configmap=resultutils.store_map)
@@ -26,7 +46,7 @@ def merge(args, logger):
         if os.path.exists(args.target_results):
             resultutils.append_resultsdata(results, args.target_results, configmap=resultutils.flatten_map)
         resultutils.save_resultsdata(results, os.path.dirname(args.target_results), fn=os.path.basename(args.target_results))
-
+    logger.info('Merged results to %s' % os.path.dirname(args.target_results))
     return 0
 
 def register_commands(subparsers):
@@ -39,4 +59,9 @@ def register_commands(subparsers):
                               help='the results file/directory to import')
     parser_build.add_argument('target_results',
                               help='the target file or directory to merge the base_results with')
-
+    parser_build.add_argument('-o', '--merge-to-one', action='store_true',
+                              help='merge results into one file only, does not add any new configurations to results '
+                                   'and does not create additional directory structure which based on configurations')
+    parser_build.add_argument('-d', '--merge-to-one-dir', default='',
+                              help='target directory to merge results into one file, default directory was based on '
+                                   'target_results directory')
diff --git a/scripts/lib/resulttool/resultutils.py b/scripts/lib/resulttool/resultutils.py
index 153f2b8..e76045b 100644
--- a/scripts/lib/resulttool/resultutils.py
+++ b/scripts/lib/resulttool/resultutils.py
@@ -39,18 +39,47 @@ store_map = {
     "manual": ['TEST_TYPE', 'TEST_MODULE', 'MACHINE', 'IMAGE_BASENAME']
 }
 
+def load_json_data(f):
+    if type(f) is str:
+        with open(f, "r") as filedata:
+            return json.load(filedata)
+    else:
+        return f
+
+def validate_result(result):
+    if "configuration" not in result or "result" not in result:
+        raise ValueError("Test results data without configuration or result section?")
+
+def delete_extra_ptest_data(result):
+    if 'ptestresult.rawlogs' in result['result']:
+        del result['result']['ptestresult.rawlogs']
+    if 'ptestresult.sections' in result['result']:
+        for i in result['result']['ptestresult.sections']:
+            if 'log' in result['result']['ptestresult.sections'][i]:
+                del result['result']['ptestresult.sections'][i]['log']
+
+def get_testresults_files(source):
+    testresults_files = []
+    for root, dirs, files in os.walk(source):
+        for name in files:
+            f = os.path.join(root, name)
+            if name == "testresults.json":
+                testresults_files.append(f)
+    return testresults_files
+
+def make_directory_and_write_json_file(dst, results):
+    os.makedirs(os.path.dirname(dst), exist_ok=True)
+    with open(dst, 'w') as f:
+        f.write

[OE-core] [PATCH] oeqa/manual/toaster: updated test id naming

2019-03-18 Thread Yeoh Ee Peng
All test ids (eg. @alias) inside a manual testcase file shall follow the same
test id naming convention as the oeqa automated tests (eg. selftest,
runtime, sdk, etc), where the test id consists of
<test_module>.<test_suite>.<test_case>. Furthermore, there shall be
only 1 unique test_module per manual testcases file, where the
test_module matches the file name itself.

These files were using a test_module name that does not match the file name
itself. Fixed the test_module name as well as the test_suite name.
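
For example, the first alias in the diff below breaks down as:

    "toaster-managed-mode" . "toaster-managed" . "All_layers:_default_view"
     (test_module)           (test_suite)         (test_case)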

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/manual/toaster-managed-mode.json   | 130 +++
 meta/lib/oeqa/manual/toaster-unmanaged-mode.json |  56 +-
 2 files changed, 93 insertions(+), 93 deletions(-)

diff --git a/meta/lib/oeqa/manual/toaster-managed-mode.json b/meta/lib/oeqa/manual/toaster-managed-mode.json
index ba0658f..812f57d 100644
--- a/meta/lib/oeqa/manual/toaster-managed-mode.json
+++ b/meta/lib/oeqa/manual/toaster-managed-mode.json
@@ -1,7 +1,7 @@
 [
   {
     "test": {
-      "@alias": "toaster.toaster.All_layers:_default_view",
+      "@alias": "toaster-managed-mode.toaster-managed.All_layers:_default_view",
       "author": [
         {
           "email": "stanciux.mih...@intel.com",
@@ -47,7 +47,7 @@
   },
   {
     "test": {
-      "@alias": "toaster.toaster.All_layers:_Add/delete_layers",
+      "@alias": "toaster-managed-mode.toaster-managed.All_layers:_Add/delete_layers",
       "author": [
         {
           "email": "stanciux.mih...@intel.com",
@@ -85,7 +85,7 @@
   },
   {
     "test": {
-      "@alias": "toaster.toaster.All_targets:_Default_view",
+      "@alias": "toaster-managed-mode.toaster-managed.All_targets:_Default_view",
       "author": [
         {
           "email": "stanciux.mih...@intel.com",
@@ -119,7 +119,7 @@
   },
   {
     "test": {
-      "@alias": "toaster.toaster.Configuration_variables:_default_view",
+      "@alias": "toaster-managed-mode.toaster-managed.Configuration_variables:_default_view",
       "author": [
         {
           "email": "stanciux.mih...@intel.com",
@@ -153,7 +153,7 @@
   },
   {
     "test": {
-      "@alias": "toaster.toaster.Configuration_variables:_Test_UI_elements",
+      "@alias": "toaster-managed-mode.toaster-managed.Configuration_variables:_Test_UI_elements",
       "author": [
         {
           "email": "stanciux.mih...@intel.com",
@@ -215,7 +215,7 @@
   },
   {
     "test": {
-      "@alias": "toaster.toaster.Project_builds:_Default_view",
+      "@alias": "toaster-managed-mode.toaster-managed.Project_builds:_Default_view",
       "author": [
         {
           "email": "stanciux.mih...@intel.com",
@@ -249,7 +249,7 @@
   },
   {
     "test": {
-      "@alias": "toaster.toaster.Project_builds:_Sorting_the_project_builds_table",
+      "@alias": "toaster-managed-mode.toaster-managed.Project_builds:_Sorting_the_project_builds_table",
       "author": [
         {
           "email": "stanciux.mih...@intel.com",
@@ -287,7 +287,7 @@
   },
   {
     "test": {
-      "@alias": "toaster.toaster.Project_builds:_customize_the_columns_of_the_table",
+      "@alias": "toaster-managed-mode.toaster-managed.Project_builds:_customize_the_columns_of_the_table",
       "author": [
         {
           "email": "stanciux.mih...@intel.com",
@@ -321,7 +321,7 @@
   },
   {
     "test": {
-      "@alias": "toaster.toaster.Project_builds:_filter_the_contents_of_the_table",
+      "@alias": "toaster-managed-mode.toaster-managed.Project_builds:_filter_the_contents_of_the_table",
       "author": [
         {
           "email": "stanciux.mih...@intel.com",
@@ -355,7 +355,7 @@
   },
   {
     "test": {
-      "@alias": "toaster.toaster.Project_builds:_search_the_contents_of_the_table",
+      "@alias": "toaster-managed-mode.toaster-managed.Project_builds:_search_the_contents_of_the_table",
       "author": [
         {
           "email": "stanciux.mih...@intel.com",
@@ -393,7 +393,7 @@
   },
   {
     "test": {
-      "@alias": "toaster.toaster.Layer_details_page:_Default_view",
+      "@alias": "toaster-managed-mode.toaster-managed.Layer_details_page:_Default_view",
       "author": [
 

[OE-core] [PATCH] resulttool/report: Enable roll-up report for a commit

2019-03-11 Thread Yeoh Ee Peng
Enable rolling up all test results belonging to a commit
and providing a roll-up report.
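
The tag layout searched per commit comes from the patch below; the example expansion is illustrative:

    tag_name = "{branch}/{commit_number}-g{commit}/{tag_number}"
    # e.g. "master/53265-g5fa3b5b/0"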

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/report.py | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/scripts/lib/resulttool/report.py b/scripts/lib/resulttool/report.py
index ff1b32c..9008620 100644
--- a/scripts/lib/resulttool/report.py
+++ b/scripts/lib/resulttool/report.py
@@ -107,9 +107,17 @@ class ResultsTextReport(object):
                                maxlen=maxlen)
         print(output)
 
-    def view_test_report(self, logger, source_dir, tag):
+    def view_test_report(self, logger, source_dir, branch, commit, tag):
         test_count_reports = []
-        if tag:
+        if commit:
+            if tag:
+                logger.warning("Ignoring --tag as --commit was specified")
+            tag_name = "{branch}/{commit_number}-g{commit}/{tag_number}"
+            repo = GitRepo(source_dir)
+            revs = gitarchive.get_test_revs(logger, repo, tag_name, branch=branch)
+            rev_index = gitarchive.rev_find(revs, 'commit', commit)
+            testresults = resultutils.git_get_result(repo, revs[rev_index][2])
+        elif tag:
             repo = GitRepo(source_dir)
             testresults = resultutils.git_get_result(repo, [tag])
         else:
@@ -125,7 +133,7 @@ class ResultsTextReport(object):
 
 def report(args, logger):
     report = ResultsTextReport()
-    report.view_test_report(logger, args.source_dir, args.tag)
+    report.view_test_report(logger, args.source_dir, args.branch, args.commit, args.tag)
     return 0
 
 def register_commands(subparsers):
@@ -136,5 +144,7 @@ def register_commands(subparsers):
     parser_build.set_defaults(func=report)
     parser_build.add_argument('source_dir',
                               help='source file/directory that contain the test result files to summarise')
+    parser_build.add_argument('--branch', '-B', default='master', help="Branch to find commit in")
+    parser_build.add_argument('--commit', help="Revision to report")
     parser_build.add_argument('-t', '--tag', default='',
                               help='source_dir is a git repository, report on the tag specified from that repository')
-- 
2.7.4



[OE-core] [PATCH] scripts/resulttool: Enable manual result store and regression

2019-03-06 Thread Yeoh Ee Peng
To enable store for the testresults.json files from manualexecution,
add the layers metadata to the configuration and add a "manual" map to
resultutils.store_map.

To enable regression for manual results, add a "manual" map to
resultutils.regression_map. Also add the compulsory configurations
('MACHINE', 'IMAGE_BASENAME') to manualexecution.
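
A sketch of which configurations the user is still prompted for, given the maps in this patch:

    store_map_manual = ['TEST_TYPE', 'TEST_MODULE', 'MACHINE', 'IMAGE_BASENAME']
    prefilled = {'TEST_TYPE', 'TEST_MODULE', 'LAYERS', 'STARTTIME'}
    print(sorted(set(store_map_manual) - prefilled))   # ['IMAGE_BASENAME', 'MACHINE']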

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/manualexecution.py | 36 +--
 scripts/lib/resulttool/resultutils.py |  9 +---
 2 files changed, 26 insertions(+), 19 deletions(-)

diff --git a/scripts/lib/resulttool/manualexecution.py b/scripts/lib/resulttool/manualexecution.py
index ecdc4e7..a44cc86 100755
--- a/scripts/lib/resulttool/manualexecution.py
+++ b/scripts/lib/resulttool/manualexecution.py
@@ -19,6 +19,7 @@ import datetime
 import re
 from oeqa.core.runner import OETestResultJSONHelper
 
+
 def load_json_file(file):
     with open(file, "r") as f:
         return json.load(f)
@@ -46,31 +47,34 @@ class ManualTestRunner(object):
     def _get_input(self, config):
         while True:
             output = input('{} = '.format(config))
-            if re.match('^[a-zA-Z0-9_]+$', output):
+            if re.match('^[a-zA-Z0-9_-]+$', output):
                 break
-            print('Only alphanumeric and underscore are allowed. Please try again')
+            print('Only alphanumeric and underscore/hyphen are allowed. Please try again')
         return output
 
     def _create_config(self):
+        from oeqa.utils.metadata import get_layers
+        from oeqa.utils.commands import get_bb_var
+        from resulttool.resultutils import store_map
+
+        layers = get_layers(get_bb_var('BBLAYERS'))
         self.configuration = {}
-        while True:
-            try:
-                conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
-                break
-            except ValueError:
-                print('Invalid input. Please provide input as a number not character.')
-        for i in range(conf_total):
+        self.configuration['LAYERS'] = layers
+        current_datetime = datetime.datetime.now()
+        self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+        self.configuration['STARTTIME'] = self.starttime
+        self.configuration['TEST_TYPE'] = 'manual'
+        self.configuration['TEST_MODULE'] = self.test_module
+
+        extra_config = set(store_map['manual']) - set(self.configuration)
+        for config in sorted(extra_config):
             print('-')
-            print('This is configuration #%s ' % (i + 1) + '. Please provide configuration name and its value')
+            print('This is configuration #%s. Please provide configuration value(use "None" if not applicable).'
+                  % config)
             print('-')
-            name_conf = self._get_input('Configuration Name')
             value_conf = self._get_input('Configuration Value')
             print('-\n')
-            self.configuration[name_conf.upper()] = value_conf
-        current_datetime = datetime.datetime.now()
-        self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
-        self.configuration['STARTTIME'] = self.starttime
-        self.configuration['TEST_TYPE'] = self.test_module
+            self.configuration[config] = value_conf
 
     def _create_result_id(self):
         self.result_id = 'manual_' + self.test_module + '_' + self.starttime
diff --git a/scripts/lib/resulttool/resultutils.py b/scripts/lib/resulttool/resultutils.py
index c8ccf1b..153f2b8 100644
--- a/scripts/lib/resulttool/resultutils.py
+++ b/scripts/lib/resulttool/resultutils.py
@@ -21,19 +21,22 @@ flatten_map = {
     "oeselftest": [],
     "runtime": [],
     "sdk": [],
-    "sdkext": []
+    "sdkext": [],
+    "manual": []
 }
 regression_map = {
     "oeselftest": ['TEST_TYPE', 'MACHINE'],
     "runtime": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'IMAGE_PKGTYPE', 'DISTRO'],
     "sdk": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE'],
-    "sdkext": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE']
+    "sdkext": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE'],
+    "manual": ['TEST_TYPE', 'TEST_MODULE', 'IMAGE_BASENAME', 'MACHINE']
 }
 store_map = {
     "oeselftest": ['TEST_TYPE'],
     "runtime": ['TEST_TYPE', 'DISTRO', 'MACHINE', 'IMAGE_BASENAME'],
     "sdk": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE', 'IMAGE_BASENAME'],
-    "sdkext": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE', 'IMAGE_BASENAME']
+    "sdkext": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE', 'IMAGE_BASENAME'],
+    "manual": ['TEST_TYPE', 'TEST_MODULE', 'MACHINE', 'IMAGE_BASENAME']

[OE-core] [PATCH] resulttool/regression: Ensure regression results are sorted

2019-02-27 Thread Yeoh Ee Peng
Sort regression results to provide a friendlier view of the report.

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resulttool/regression.py | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/scripts/lib/resulttool/regression.py b/scripts/lib/resulttool/regression.py
index ff77332..bdf531d 100644
--- a/scripts/lib/resulttool/regression.py
+++ b/scripts/lib/resulttool/regression.py
@@ -35,7 +35,7 @@ def compare_result(logger, base_name, target_name, base_result, target_result):
                 logger.error('Failed to retrieved base test case status: %s' % k)
     if result:
         resultstring = "Regression: %s\n%s\n" % (base_name, target_name)
-        for k in result:
+        for k in sorted(result):
             resultstring += '%s: %s -> %s\n' % (k, result[k]['base'], result[k]['target'])
     else:
         resultstring = "Match: %s\n   %s" % (base_name, target_name)
@@ -82,9 +82,9 @@ def regression_common(args, logger, base_results, target_results):
             regressions.append(resstr)
         else:
             notfound.append("%s not found in target" % a)
-    print("\n".join(matches))
-    print("\n".join(regressions))
-    print("\n".join(notfound))
+    print("\n".join(sorted(matches)))
+    print("\n".join(sorted(regressions)))
+    print("\n".join(sorted(notfound)))
 
     return 0
 
-- 
2.7.4



Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

2019-02-20 Thread Yeoh, Ee Peng
Hi RP,

Noted, thank you once again for your great help and inputs! Really glad to hear
that resulttool is ready!
We shall plan for future improvements in html reports and graphs. Also
we shall look into future test case development if needed.

Cheers,
Yeoh Ee Peng

-Original Message-
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Thursday, February 21, 2019 5:44 AM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Cc: Burton, Ross ; Eggleton, Paul 

Subject: Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

Hi Ee Peng,

On Wed, 2019-02-20 at 06:27 +0000, Yeoh, Ee Peng wrote:
> Thank you very much for all your help and inputs! 
> Would you like us to take all the improvements from your branch to 
> merge or squash with the base patchset and move forward with the one 
> remaining improvement below?

I've done some further work on this today and the good news is I was able to 
sort out the git repo handling pieces and fix the test cases.
With two of the test cases I ended up removing them as I've changed the 
functionality enough that they'd need to be rewritten.

I've sent out a patch on top of your original work as well as a second patch to 
move some functionality into library functions to allow us to use it from the 
new code. I think this combination of patches should now be ready to merge.

There will be fixes and improvements on top of this, e.g. I'd love to get some 
html reports and graphs but those are things that come later.

The next step once this is merged is to start storing autobuilder test result 
data, generating reports and regression reports automatically from each test 
run.

Its great to see this all coming together!

Cheers,

Richard





Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

2019-02-19 Thread Yeoh, Ee Peng
Hi RP,

Thank you very much for all your help and inputs! 
Would you like us to take all the improvements from your branch to merge or 
squash with the base patchset and move forward with the one remaining 
improvement below? 

> > * Revisit and redo the way the git branch handling is happening. We
> >   really want to model how oe-build-perf-report handles git repos for
> >   comparisons:
> >   - It's able to query data from git repos without changing the current
> >     working branch,
> >   - it can search on tag formats to find comparison data

Best regards,
Yeoh Ee Peng

-Original Message-
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Monday, February 18, 2019 6:46 AM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

On Sun, 2019-02-17 at 17:54 +0000, Richard Purdie wrote:
> > Despite my changes there are things that still need to be done.
> > Essential things which need to happen before this code merges:
> > 
> > * oe-git-archive is importing using the commit/branch of the current
> >   repo, not the data in the results file.

Also now fixed. I put my patches into master-next too.

With this working, I was able to run something along the lines of:

for D in $1/*; do
    resulttool store $D $2 --allow-empty
done

on the autobuilder's recent results, which led to the creation of this
repository:

http://git.yoctoproject.org/cgit.cgi/yocto-testresults/


> > * Revisit and redo the way the git branch handling is happening. We
> >   really want to model how oe-build-perf-report handles git repos for
> >   comparisons:
> >   - It's able to query data from git repos without changing the current
> >     working branch,
> >   - it can search on tag formats to find comparison data

Which means we now need to make the git branch functionality of the report and 
regression commands compare with the above repo, so we're a step closer to 
getting thie merged.

Ultimately we'll auto-populate the above repo by having the autobuilder run a 
"store" command at the end of its runs.

I have a feeling I may have broken the resulttool selftests so that is 
something else which will need to be fixed before anything merges. Time for me 
to step away from the keyboard for a bit too.

Cheers,

Richard




Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

2019-02-18 Thread Yeoh, Ee Peng
RP, 
Noted, thanks. 

Cheers,
Ee Peng 

-Original Message-
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Monday, February 18, 2019 6:12 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

On Mon, 2019-02-18 at 09:20 +0000, Yeoh, Ee Peng wrote:
> Thank you for sharing on the selftest comparison consideration! 
> 
> I agree with you that, at a high level, selftest should be
> independent of which HOST_DISTRO it ran on; it shall compare 2 selftests
> even when the host distros are different.
> 
> But in the case where the build has multiple sets of selftests, each with
> slightly different environments (eg. host distro), would it be better
> to compare selftests more closely, if possible with the same host
> distro used?

In an ideal world, yes. In reality trying to do that and making it conditional 
will complicate the code for little "real" end difference though?

Cheers,

Richard



Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

2019-02-18 Thread Yeoh, Ee Peng
Hi Richard,

Thank you for sharing the selftest comparison considerations!

I agree with you that, at a high level, selftest should be independent of
which HOST_DISTRO it ran on; it shall compare 2 selftests even when the host
distros are different.

But in the case where the build has multiple sets of selftests, each with
slightly different environments (eg. host distro), would it be better to
compare selftests more closely, if possible with the same host distro used?

Cheers,
Ee Peng 

-Original Message-
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Monday, February 18, 2019 5:08 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

Hi Ee Peng,

On Mon, 2019-02-18 at 08:09 +0000, Yeoh, Ee Peng wrote: 
> I did some testing with the latest from resulttool: Update to use 
> gitarchive library function.
> http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/t2
> 22=b9eecaabe56db5bcafff31e67cdabadc42e2d2e4
> 
> I had 2 questions. 
> 1. For "resulttool regression", currently it was comparing result id 
> set without comprehending the difference in the host distro used to 
> executed the oeselftest. Example: it was matching oeselftest run with
> fedora28 host distro with oeselftest run with ubuntu18 host distro, is 
> this the expected behavior?
> Match: oeselftest_fedora-28_qemux86-64_20190201181656
>oeselftest_ubuntu-18.04_qemux86-64_20190201175023
> Match: oeselftest_fedora-26_qemux86-64_20190131144317
>oeselftest_fedora-26_qemux86-64_20190131144317
> Match: oeselftest_ubuntu-18.04_qemux86-64_20190201175023
>oeselftest_fedora-28_qemux86-64_20190201181656
> Match: oeselftest_opensuse-42.3_qemux86-64_20190126152612
>oeselftest_opensuse-42.3_qemux86-64_20190126152612

There were two reasons for this:

a) the results of the selftest should be independent of which HOST_DISTRO 
they're run on so they can be compared.

b) some builds only have one oe-selftest (a-quick) and some have four (a-full). 
In an a-quick build, the HOST_DISTRO would likely therefore be different 
between two builds but we still would like the tool to compare them.

> 2. For "resulttool store", I noticed that it will now generally
> store testresults.json in a meaningful file directory structure based
> on the store_map, except for oeselftest. oeselftest currently stores
> multiple result id sets inside the oeselftest file directory without
> comprehending the host distro.
> 
> For example, runtime stores testresults.json with the configured
> store_map.
> ├── oeselftest
> │   └── testresults.json
> ├── runtime
> │   ├── poky
> │   │   ├── qemuarm
> │   │   │   ├── core-image-minimal
> │   │   │   │   └── testresults.json
> │   │   │   ├── core-image-sato
> │   │   │   │   └── testresults.json
> │   │   │   └── core-image-sato-sdk
> │   │   │   └── testresults.json
> │   │   ├── qemuarm64
> │   │   │   ├── core-image-minimal
> │   │   │   │   └── testresults.json
> │   │   │   ├── core-image-sato
> │   │   │   │   └── testresults.json
> │   │   │   └── core-image-sato-sdk
> │   │   │   └── testresults.json
> 
> I believe that we shall again comprehend the 'HOST_DISTRO'
> configuration inside the store_map.  
> store_map = {
> -    "oeselftest": ['TEST_TYPE'],
> +    "oeselftest": ['TEST_TYPE', 'HOST_DISTRO'],
>      "runtime": ['TEST_TYPE', 'DISTRO', 'MACHINE', 'IMAGE_BASENAME'],
>      "sdk": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE', 'IMAGE_BASENAME'],
>      "sdkext": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE', 'IMAGE_BASENAME']
> 
> Doing so, it will store oeselftest in a more useful file directory 
> structure with host distro comprehended.
> └── oeselftest
> ├── fedora-26
> │   └── testresults.json
> ├── fedora-28
> │   └── testresults.json
> ├── opensuse-42.3
> │   └── testresults.json
> └── ubuntu-18.04
> └── testresults.json

The reasoning is the same as the above: it's more useful to allow the files to 
be directly compared between different host distros.

Cheers,

Richard





Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

2019-02-18 Thread Yeoh, Ee Peng
Hi RP,

I have a question about "TESTSERIES".
* Formalised the handling of "file_name" to "TESTSERIES", which the code will
now add into the json configuration data if it's not present, based on the
directory name.

May I know why "TESTSERIES" was added as one of the key configurations for
regression comparison selection inside regression_map?
regression_map = {
    "oeselftest": ['TEST_TYPE', 'MACHINE'],
    "runtime": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'IMAGE_PKGTYPE', 'DISTRO'],
    "sdk": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE'],
    "sdkext": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE']
}

Firstly, from the current yocto-testresults repository, I noticed that
"TESTSERIES" mostly duplicates "MACHINE", or "MACHINE" & "DISTRO", or
"TEST_TYPE" in the selftest case.

Secondly, since "TESTSERIES" is created from the directory name of the
source directory being used, will this introduce unexpected complications for
regression comparison in the future if the directory name for the source is
changed? If the directory name changes even slightly, for example for
runtime_core-image-lsb, if the source directory name changes from "qemuarm-lsb"
to "qemuarm_lsb", I believe the regression comparison will not be able to pair
the result id sets even though they have the same configurations and were
meant to be compared directly (see the sketch below).
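
A minimal sketch of the mismatch (directory names taken from the example below):

    import os

    print(os.path.basename(os.path.dirname('qemuarm-lsb/testresults.json')))  # 'qemuarm-lsb'
    print(os.path.basename(os.path.dirname('qemuarm_lsb/testresults.json')))  # 'qemuarm_lsb'
    # regression matching keyed on TESTSERIES would no longer pair these runs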

Examples: 
"runtime_core-image-minimal_qemuarm_20190215014628": {
"configuration": {
"DISTRO": "poky",
"HOST_DISTRO": "ubuntu-18.04",
"IMAGE_BASENAME": "core-image-minimal",
"IMAGE_PKGTYPE": "rpm",
"LAYERS": {
"meta": {
"branch": "master",
"commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
"commit_count": 53265
},
"meta-poky": {
"branch": "master",
"commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
"commit_count": 53265
},
"meta-yocto-bsp": {
"branch": "master",
"commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
"commit_count": 53265
}
},
"MACHINE": "qemuarm",
"STARTTIME": "20190215014628",
"TESTSERIES": "qemuarm",
"TEST_TYPE": "runtime"
},

"runtime_core-image-lsb_qemuarm_20190215014624": {
"configuration": {
"DISTRO": "poky-lsb",
"HOST_DISTRO": "ubuntu-18.04",
"IMAGE_BASENAME": "core-image-lsb",
"IMAGE_PKGTYPE": "rpm",
"LAYERS": {
"meta": {
"branch": "master",
"commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
"commit_count": 53265
},
"meta-poky": {
"branch": "master",
"commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
"commit_count": 53265
},
"meta-yocto-bsp": {
"branch": "master",
"commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
"commit_count": 53265
}
},
"MACHINE": "qemuarm",
"STARTTIME": "20190215014624",
"TESTSERIES": "qemuarm-lsb",
"TEST_TYPE": "runtime"
},

"oeselftest_debian-9_qemux86-64_20190215010815": {
"configuration": {
"HOST_DISTRO": "debian-9",
"HOST_NAME": "debian9-ty-2.yocto.io",
"LAYERS": {
"meta": {
"branch": "master",
    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
"commit_count": 53265
}

Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

2019-02-18 Thread Yeoh, Ee Peng
Hi RP,

Thank you very much again for continuously providing your precious feedback.
Also thank you very much for spending a great amount of time improving this 
patchset significantly.
 
I did some testing with the latest "resulttool: Update to use gitarchive 
library function" change.
http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/t222=b9eecaabe56db5bcafff31e67cdabadc42e2d2e4

I have 2 questions. 
1. For "resulttool regression", it currently compares result id sets 
without taking into account the host distro used to execute the 
oeselftest. Example: it was matching an oeselftest run on a fedora28 host 
distro with an oeselftest run on an ubuntu18 host distro; is this the expected behavior? 
Match: oeselftest_fedora-28_qemux86-64_20190201181656
   oeselftest_ubuntu-18.04_qemux86-64_20190201175023
Match: oeselftest_fedora-26_qemux86-64_20190131144317
   oeselftest_fedora-26_qemux86-64_20190131144317
Match: oeselftest_ubuntu-18.04_qemux86-64_20190201175023
   oeselftest_fedora-28_qemux86-64_20190201181656
Match: oeselftest_opensuse-42.3_qemux86-64_20190126152612
   oeselftest_opensuse-42.3_qemux86-64_20190126152612

I believe that we should include the 'HOST_DISTRO' configuration in the 
regression_map:
regression_map = {
-    "oeselftest": ['TEST_TYPE', 'MACHINE'],
+    "oeselftest": ['TEST_TYPE', 'HOST_DISTRO', 'MACHINE'],
     "runtime": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE',
                 'IMAGE_PKGTYPE', 'DISTRO'],
     "sdk": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE',
             'SDKMACHINE'],
     "sdkext": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE',
                'SDKMACHINE']
 }

After including 'HOST_DISTRO', it was able to perform the regression for 
oeselftest with matching host distros:
Match: oeselftest_ubuntu-18.04_qemux86-64_20190201175023
   oeselftest_ubuntu-18.04_qemux86-64_20190201175023
Match: oeselftest_opensuse-42.3_qemux86-64_20190126152612
   oeselftest_opensuse-42.3_qemux86-64_20190126152612
Match: oeselftest_fedora-26_qemux86-64_20190131144317
   oeselftest_fedora-26_qemux86-64_20190131144317
Match: oeselftest_fedora-28_qemux86-64_20190201181656
   oeselftest_fedora-28_qemux86-64_20190201181656

2. For "resulttool store", I had noticed that it will now generally stored 
testresults.json in a meaningful file directory structure based on the 
store_map except oeselftest. oeselftest currently store multiple result id set 
inside oselftest file directory without comprehend the host distro. 

For example, runtime stores testresults.json with the configured store_map. 
├── oeselftest
│   └── testresults.json
├── runtime
│   ├── poky
│   │   ├── qemuarm
│   │   │   ├── core-image-minimal
│   │   │   │   └── testresults.json
│   │   │   ├── core-image-sato
│   │   │   │   └── testresults.json
│   │   │   └── core-image-sato-sdk
│   │   │       └── testresults.json
│   │   ├── qemuarm64
│   │   │   ├── core-image-minimal
│   │   │   │   └── testresults.json
│   │   │   ├── core-image-sato
│   │   │   │   └── testresults.json
│   │   │   └── core-image-sato-sdk
│   │   │       └── testresults.json

I believe that we should again include the 'HOST_DISTRO' configuration in 
the store_map:
store_map = {
-    "oeselftest": ['TEST_TYPE'],
+    "oeselftest": ['TEST_TYPE', 'HOST_DISTRO'],
     "runtime": ['TEST_TYPE', 'DISTRO', 'MACHINE', 'IMAGE_BASENAME'],
     "sdk": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE', 'IMAGE_BASENAME'],
     "sdkext": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE', 'IMAGE_BASENAME']
}

Doing so will store oeselftest results in a more useful directory structure 
that reflects the host distro: 
└── oeselftest
├── fedora-26
│   └── testresults.json
├── fedora-28
│   └── testresults.json
├── opensuse-42.3
│   └── testresults.json
└── ubuntu-18.04
└── testresults.json
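
For illustration, a minimal sketch of how a store_map entry could yield the
directory layout above; the names here are assumptions, not the actual
store.py code.

import os

store_map_entry = ['TEST_TYPE', 'HOST_DISTRO']
configuration = {'TEST_TYPE': 'oeselftest', 'HOST_DISTRO': 'fedora-28'}

# Each mapped configuration value becomes one directory level
path = os.path.join(*[configuration[key] for key in store_map_entry],
                    'testresults.json')
print(path)  # oeselftest/fedora-28/testresults.json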

Please let me know if you have any questions related to the above. 

Best regards,
Yeoh Ee Peng 

-----Original Message-----
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Monday, February 18, 2019 6:46 AM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

On Sun, 2019-02-17 at 17:54 +, Richard Purdie wrote:
> > Despite my changes there are things that still need to be done.
> > Essential things which need to happen before this code merges:
> > 
> > * oe-git-archive is importing using the commit/branch of the current
> >   repo, not the data in the results file.

Also now fixed. I put my patches into master-next too.

With this working, I was able to run something along the lines of:

for D in $1/*; do
    resulttool store $D $2 --allow-empty
done

on the autobuilder's recent results, which led to the creation of this
repository:

http://git.yoctoproject.

Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

2019-02-17 Thread Yeoh, Ee Peng
Hi RP,

Thank you very much for providing your precious advice; I will 
definitely look into it. 

Let me look into all the improvements you have developed, and I will try my 
best to provide any further improvements needed. 

Best regards,
Yeoh Ee Peng 

-----Original Message-----
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Monday, February 18, 2019 12:10 AM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

On Thu, 2019-02-14 at 13:50 +0800, Yeoh Ee Peng wrote:
> v1:
>   Face key error from oe-git-archive
>   Undesirable behavior when storing to multiple git branch
> 
> v2: 
>   Include fix for oe-git-archive
>   Include fix for store result to multiple git branch
>   Improve git commit message   
> 
> v3:
>   Enhance fix for oe-git-archive by using exception catch to
>   improve code readability and easy to understand
> 
> v4:
>   Add new features, merge result files & regression analysis 
>   Add selftest to merge, store, report and regression functionalities
>   Revise codebase for pythonic
>   
> v5:
>   Add required files for selftest testing store
>   
> v6:
>   Add regression for directory and git repository
>   Enable regression pairing base set to multiple target sets 
>   Revise selftest testing for regression
>   
> v7: 
>   Optimize regression computation for ptest results
>   Rename entry point script to resulttool
> 
> Mazliana (1):
>   scripts/resulttool: enable manual execution and result creation
> 
> Yeoh Ee Peng (1):
>   resulttool: enable merge, store, report and regression analysis

Hi Ee Peng,

Thanks for working on this, it does get better each iteration. I've been 
struggling a little to explain what we need to do to finish this off. Firstly I 
wanted to give some feedback on some general python
tips:

a) We can't use subprocess.run() as it's a python 3.6 feature and we have 
autobuilder workers with 3.5. This led to failures like: 
https://autobuilder.yoctoproject.org/typhoon/#/builders/56/builds/242
We can use check_call or other functions instead.
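
A minimal sketch of the portable alternatives (the command shown is
illustrative only):

import subprocess

# check_call() raises CalledProcessError on a non-zero exit code and
# works on the older Python 3.5 workers as well.
subprocess.check_call(['git', '--version'])

# check_output() is the pre-run() way to capture output when needed.
output = subprocess.check_output(['git', '--version']).decode('utf-8')
print(output.strip())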

b) I'd not recommend using "file" as a variable name in python as it shadows a 
built-in name; similarly "dict" (in resultutils.py).

c) get_dict_value() is something we can probably avoid needing if we use the 
.get() method of dicts (you can specify a value to return if a key isn't 
present).
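
For example (the helper call shown in the comment is an assumption, not the
actual resultutils.py code):

configuration = {'TEST_TYPE': 'runtime', 'MACHINE': 'qemuarm'}

# Instead of a custom helper such as:
#     value = get_dict_value(configuration, 'DISTRO', 'unknown')
# the built-in .get() already accepts a default:
value = configuration.get('DISTRO', 'unknown')
print(value)  # 'unknown', since DISTRO is not present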

I started to experiment with the tool to try and get it to follow the workflow 
we need with the autobuilder QA process. Right now I'm heavily focusing on what 
we need it to do to generate reports from the autobuilder, to the extent that 
I'm ignoring most other workflows.

The reason for this is that I want to get it merged and use this to run
2.7 M3 testing on the autobuilder. The other workflows can be added if/as/when 
we find we have need of them.

I ended up making a few changes to alter the tool to do the things I think we 
need it to and to improve its output/usability. I'll send out a separate patch 
with my changes so far. I've tried to summarise some of the reasoning here:

* Rename resultsutils -> resultutils to match the resultstool -> resulttool 
rename

* Formalised the handling of "file_name" to "TESTSERIES" which the code will 
now add into the json configuration data if it's not present, based on the 
directory name.

* When we don't have failed test cases, print something saying so instead of an 
empty table

* Tweak the table headers in the report to be more readable (reference "Test 
Series" instead of file_id, and ID instead of results_id)

* Improve/simplify the max string length handling

* Merge the counts and percentage data into one table in the report since 
printing two reports of the same data confuses the user

* Removed the confusing header in the regression report

* Show matches, then regressions, then unmatched runs in the regression report, 
also removing chatty, unneeded output

* Try harder to "pair" up matching configurations to reduce noise in the 
regressions report

* Abstracted the "mapping" table concept used to pairing in the regression code 
to general code in resultutils

* Created multiple mappings for results analysis, results storage and 
'flattening' results data in a merge

* Simplify the merge command to take a source and a destination, letting the 
destination be a directory or a file, removing the need for an output directory 
parameter

* Add the 'IMAGE_PKGTYPE' and 'DISTRO' config options to the regression mappings

* Have the store command place the testresults files in a layout from the 
mapping, making commits into the git repo for results storage more useful for 
simple comparison purposes

* Set the oe-git-archive tag format appropriately for oeqa results storage (and 
simplify the commit messages closer

Re: [OE-core] [PATCH 1/2 v5] resultstool: enable merge, store, report and regression analysis

2019-02-13 Thread Yeoh, Ee Peng
Hi RP,

I executed the runtime/ptest tests using the latest master with the latest 
changes to understand the new improvements. 

For now, resulttool regression will ignore both 'ptestresult.rawlogs' and 
'ptestresult.sections', as the current regression operation focuses on comparing 
the "status" differences and needs neither the log nor the new section 
information. By ignoring both 'ptestresult.rawlogs' and 'ptestresult.sections', 
the regression time for ptest drops to seconds instead of minutes.  
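
As an illustration of the filtering (the names below are assumptions, not the
actual regression.py code):

IGNORED = ('ptestresult.rawlogs', 'ptestresult.sections')

def strip_ignored(result):
    # result maps testcase names to {'status': ..., 'log': ...} entries;
    # dropping the bulky non-status entries keeps the comparison fast.
    return {name: data for name, data in result.items()
            if name not in IGNORED}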

For the additional information inside 'ptestresult.sections', do we need a 
similar regression? Any idea which data inside 'ptestresult.sections' would be 
useful for regression? 

Currently, resulttool regression only prints a text-based report; if an HTML 
report is needed, it can be extended using the jinja2 framework. Do we need an 
HTML report for this regression? Any requirements for the HTML report? 

http://lists.openembedded.org/pipermail/openembedded-core/2019-February/278971.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-February/278972.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-February/278973.html

Thanks,
Yeoh Ee Peng 

-----Original Message-----
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Friday, February 1, 2019 7:40 AM
To: Yeoh, Ee Peng ; 
'openembedded-core@lists.openembedded.org' 

Cc: Eggleton, Paul ; Burton, Ross 

Subject: Re: [OE-core] [PATCH 1/2 v5] resultstool: enable merge, store, report 
and regression analysis

On Thu, 2019-01-31 at 05:23 +0000, Yeoh, Ee Peng wrote:
> Hi RP,
> 
> I looked into ptest and regression. The existing "resultstool 
> regression" can be used to perform regression on ptest, since 
> testresults.json captures the ptest status. I executed the regression 
> script for the below 2 ptest testresults.json files. Attached is the 
> regression report for ptest.
> 
> https://autobuilder.yocto.io/pub/releases/yocto-2.7_M2.rc1/testresults
> /qemux86-64-ptest/testresults.json
> https://autobuilder.yocto.io/pub/releases/yocto-2.7_M1.rc1/testresults
> /qemux86-64-ptest/testresults.json
> 
> The only challenge now is that since the ptest result set is relatively 
> large, it takes some time to compute the regression. Also 
> there is this "ptestresult.rawlogs" testcase that does not contain a 
> status but does contain the large raw log.
> 
> I did an experiment where I ran the regression on testresults.json 
> with and without the ptest rawlog. It shows the time taken for 
> regression is significantly larger when it contains the rawlog. I will 
> try to improve the regression time by throwing away the rawlog at runtime 
> when computing.
> testresults.json with rawlog
> Regression start time: 20190131122805
> Regression end time:   20190131124425
> Time taken: 16 mins 20 sec
> 
> testresults.json without rawlog
> Regression start time: 20190131124512
> Regression end time:   20190131124529
> Time taken: 17 sec

Analysing the rawlog makes no sense so the tool needs to simply ignore that. 16 
minutes is far too long! 

I've just merged some changes which mean there are probably some other sections 
it will need to ignore now too, since the logs are now being split out per ptest 
(section). I've left rawlogs in as it's useful for debugging, but once the 
section splits are working we could remove it.

This adds in timing data so we know how long each ptest took to run (in 
seconds), it also adds in exit code and timeout data. These all complicate the 
regression analysis but the fact that lttng has been timing out (for example) 
has been overlooked until now and shows we need to analyse these things.

I'm considering whether we should have a command in resulttool which takes json 
data and writes it out in a "filesystem" form.

The code in logparser.py already has a rudimentary version of this for ptest 
data. It could be extended to write out a X.log for each ptest based on the 
split out data and maybe duration and timeout information in some form too.

The idea behind flat filesystem representations of the data is that a user can 
more easily explore or compare them, they also show up well in git.
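
A rough sketch of that idea, assuming the testresults.json layout discussed in
this thread; the directory layout and key names below are illustrative only,
not the actual logparser.py or resulttool code.

import json
import os

def write_ptest_tree(json_file, outdir):
    with open(json_file) as f:
        results = json.load(f)
    for result_id, data in results.items():
        for name, entry in data.get('result', {}).items():
            if not name.startswith('ptestresult.'):
                continue
            section_dir = os.path.join(outdir, result_id, name.split('.', 1)[1])
            os.makedirs(section_dir, exist_ok=True)
            # One status file and one log per ptest so runs diff cleanly in git
            with open(os.path.join(section_dir, 'status'), 'w') as f:
                f.write(entry.get('status', 'UNKNOWN') + '\n')
            if entry.get('log'):
                with open(os.path.join(section_dir, 'ptest.log'), 'w') as f:
                    f.write(entry['log'])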

It's also worth thinking about how we'll end up using this. testresult will get 
called at the end of builds (particularly release builds) and we'll want it to 
generate a QA report for the automated test data. The autobuilder will likely 
put an http link in the "release build ready"
email to an html like report stored alongside the testresults json files.

I'm still trying to figure out how to make this all fit together and allow 
automated comparisons but the build performance data would also fit into this 
(and already has html reports).

Cheers,

Richard

-- 
___
Openembedded-core mailing list
Openembedded-core@lists.openembedded.org
http://lists.openembedded.org/mailman/listinfo/openembedded-core


[OE-core] [PATCH 1/2 v7] resulttool: enable merge, store, report and regression analysis

2019-02-13 Thread Yeoh Ee Peng
OEQA outputs test results into json files and these files are
archived by the Autobuilder during QA releases. Example: each oe-selftest
run by the Autobuilder for a different host distro generates a
testresults.json file.

These scripts were developed as test result tools to manage
these testresults.json files.

Using the "store" operation, user can store multiple testresults.json
files as well as the pre-configured directories used to hold those files.

Using the "merge" operation, user can merge multiple testresults.json
files to a target file.

Using the "report" operation, user can view the test result summary
for all available testresults.json files inside an ordinary directory
or a git repository.

Using the "regression-file" operation, user can perform regression
analysis on testresults.json files specified. Using the "regression-dir"
and "regression-git" operations, user can perform regression analysis
on a directory and a git repository accordingly.

These resulttool operations expect the testresults.json file to use
the json format below.
{
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            "<config_name>": "<config_value>",
            ...
            "<config_name>": "<config_value>",
        },
        "result": {
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
            ...
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
        }
    },
    ...
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            ...
        },
        "result": {
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
            ...
        }
    },
}

To use these scripts, first source the oe environment, then run the
entry point script to look for help.
$ resulttool

To store test results from oeqa automated tests, execute the below
$ resulttool store <source_dir> <git_branch>

To merge multiple testresults.json files, execute the below
$ resulttool merge <base_result_file> <target_result_file>

To view a test report, execute the below
$ resulttool report <source_dir>

To perform regression file analysis, execute the below
$ resulttool regression-file <base_result_file> <target_result_file>

To perform regression dir analysis, execute the below
$ resulttool regression-dir <base_result_dir> <target_result_dir>

To perform regression git analysis, execute the below
$ resulttool regression-git <source_dir> <base_branch> <target_branch>

[YOCTO# 13012]
[YOCTO# 12654]

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/files/testresults/testresults.json   |  40 
 meta/lib/oeqa/selftest/cases/resulttooltests.py| 104 +++
 scripts/lib/resulttool/__init__.py |   0
 scripts/lib/resulttool/merge.py|  71 +++
 scripts/lib/resulttool/regression.py   | 208 +
 scripts/lib/resulttool/report.py   | 113 +++
 scripts/lib/resulttool/resultsutils.py |  67 +++
 scripts/lib/resulttool/store.py| 110 +++
 .../resulttool/template/test_report_full_text.txt  |  35 
 scripts/resulttool |  84 +
 10 files changed, 832 insertions(+)
 create mode 100644 meta/lib/oeqa/files/testresults/testresults.json
 create mode 100644 meta/lib/oeqa/selftest/cases/resulttooltests.py
 create mode 100644 scripts/lib/resulttool/__init__.py
 create mode 100644 scripts/lib/resulttool/merge.py
 create mode 100644 scripts/lib/resulttool/regression.py
 create mode 100644 scripts/lib/resulttool/report.py
 create mode 100644 scripts/lib/resulttool/resultsutils.py
 create mode 100644 scripts/lib/resulttool/store.py
 create mode 100644 scripts/lib/resulttool/template/test_report_full_text.txt
 create mode 100755 scripts/resulttool

diff --git a/meta/lib/oeqa/files/testresults/testresults.json 
b/meta/lib/oeqa/files/testresults/testresults.json
new file mode 100644
index 000..1a62155
--- /dev/null
+++ b/meta/lib/oeqa/files/testresults/testresults.json
@@ -0,0 +1,40 @@
+{
+    "runtime_core-image-minimal_qemuarm_20181225195701": {
+        "configuration": {
+            "DISTRO": "poky",
+            "HOST_DISTRO": "ubuntu-16.04",
+            "IMAGE_BASENAME"

[OE-core] [PATCH 0/2 v7] test-case-mgmt

2019-02-13 Thread Yeoh Ee Peng
v1:
  Face key error from oe-git-archive
  Undesirable behavior when storing to multiple git branch

v2: 
  Include fix for oe-git-archive
  Include fix for store result to multiple git branch
  Improve git commit message   

v3:
  Enhance fix for oe-git-archive by using exception catch to
  improve code readability and easy to understand

v4:
  Add new features, merge result files & regression analysis 
  Add selftest to merge, store, report and regression functionalities
  Revise codebase for pythonic
  
v5:
  Add required files for selftest testing store
  
v6:
  Add regression for directory and git repository
  Enable regression pairing base set to multiple target sets 
  Revise selftest testing for regression
  
v7: 
  Optimize regression computation for ptest results
  Rename entry point script to resulttool

Mazliana (1):
  scripts/resulttool: enable manual execution and result creation

Yeoh Ee Peng (1):
  resulttool: enable merge, store, report and regression analysis

 meta/lib/oeqa/files/testresults/testresults.json   |  40 
 meta/lib/oeqa/selftest/cases/resulttooltests.py| 104 +++
 scripts/lib/resulttool/__init__.py |   0
 scripts/lib/resulttool/manualexecution.py  | 137 ++
 scripts/lib/resulttool/merge.py|  71 +++
 scripts/lib/resulttool/regression.py   | 208 +
 scripts/lib/resulttool/report.py   | 113 +++
 scripts/lib/resulttool/resultsutils.py |  67 +++
 scripts/lib/resulttool/store.py| 110 +++
 .../resulttool/template/test_report_full_text.txt  |  35 
 scripts/resulttool |  92 +
 11 files changed, 977 insertions(+)
 create mode 100644 meta/lib/oeqa/files/testresults/testresults.json
 create mode 100644 meta/lib/oeqa/selftest/cases/resulttooltests.py
 create mode 100644 scripts/lib/resulttool/__init__.py
 create mode 100755 scripts/lib/resulttool/manualexecution.py
 create mode 100644 scripts/lib/resulttool/merge.py
 create mode 100644 scripts/lib/resulttool/regression.py
 create mode 100644 scripts/lib/resulttool/report.py
 create mode 100644 scripts/lib/resulttool/resultsutils.py
 create mode 100644 scripts/lib/resulttool/store.py
 create mode 100644 scripts/lib/resulttool/template/test_report_full_text.txt
 create mode 100755 scripts/resulttool

-- 
2.7.4

-- 
___
Openembedded-core mailing list
Openembedded-core@lists.openembedded.org
http://lists.openembedded.org/mailman/listinfo/openembedded-core


[OE-core] [PATCH 2/2 v7] scripts/resulttool: enable manual execution and result creation

2019-02-13 Thread Yeoh Ee Peng
From: Mazliana 

Integrated the “manualexecution” operation into the resulttool scripts.
The manual execution script is a helper script to execute all manual
test cases from the baseline command, which consists of user guideline
steps and the expected results. The last step asks the user to
provide their input for the executed result. The input options are
passed/failed/blocked/skipped status. The given result will be
written to testresults.json, including any error log from the user
input and configuration if there is any. The output test result
json file is created using the OEQA library.

The configuration part is manually keyed in by the user. The system
allows the user to specify how many configurations they want to add,
and they need to define the required configuration name and value pairs.
From a QA perspective, "configuration" means the test environments and
parameters used during QA setup before testing can be carried out.
Examples of configurations: image used for boot up, host machine
distro used, poky configurations, etc.

The purpose of adding the configuration is to standardize the
output test result format between automation and manual execution.

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ resulttool

To execute manual test cases, execute the below
$ resulttool manualexecution <manual_test_case_json_file>

By default testresults.json is stored in <build_dir>/tmp/log/manual/

[YOCTO #12651]

Signed-off-by: Mazliana 
---
 scripts/lib/resulttool/manualexecution.py | 137 ++
 scripts/resulttool|   8 ++
 2 files changed, 145 insertions(+)
 create mode 100755 scripts/lib/resulttool/manualexecution.py

diff --git a/scripts/lib/resulttool/manualexecution.py 
b/scripts/lib/resulttool/manualexecution.py
new file mode 100755
index 000..64ec581
--- /dev/null
+++ b/scripts/lib/resulttool/manualexecution.py
@@ -0,0 +1,137 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import re
+from oeqa.core.runner import OETestResultJSONHelper
+from resulttool.resultsutils import load_json_file
+
+class ManualTestRunner(object):
+    def __init__(self):
+        self.jdata = ''
+        self.test_module = ''
+        self.test_suite = ''
+        self.test_cases = ''
+        self.configuration = ''
+        self.starttime = ''
+        self.result_id = ''
+        self.write_dir = ''
+
+    def _get_testcases(self, file):
+        self.jdata = load_json_file(file)
+        self.test_cases = []
+        self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+        self.test_suite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+        for i in self.jdata:
+            self.test_cases.append(i['test']['@alias'].split('.', 2)[2])
+
+    def _get_input(self, config):
+        while True:
+            output = input('{} = '.format(config))
+            if re.match('^[a-zA-Z0-9_]+$', output):
+                break
+            print('Only alphanumeric and underscore are allowed. Please try again')
+        return output
+
+    def _create_config(self):
+        self.configuration = {}
+        while True:
+            try:
+                conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
+                break
+            except ValueError:
+                print('Invalid input. Please provide input as a number not character.')
+        for i in range(conf_total):
+            print('-')
+            print('This is configuration #%s ' % (i + 1) + '. Please provide configuration name and its value')
+            print('-')
+            name_conf = self._get_input('Configuration Name')
+            value_conf = self._get_input('Configuration Value')
+            print('-\n')
+            self.configuration[name_conf.upper()] = value_conf
+        current_datetime = datetime.datetime.now()
+        self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+        self.configuration['STARTTIME'] = self.starttime
+        self.configuration['TEST_TYPE'] = self.test_module
+
+    def _create_result_id(self):
+        self.result_id = 'manual_' + self.test_module + '_' + self.starttime
+
+    def _execute_test_steps(self, test_id):
+        test_result = {}
+        testcase_id = self.test_module + '.' 

Re: [OE-core] [PATCH 1/2 v5] resultstool: enable merge, store, report and regression analysis

2019-01-29 Thread Yeoh, Ee Peng
Hi RP,

I have submitted the v6 patches with the below changes.
v6:
  Add regression for directory and git repository
  Enable regression pairing base set to multiple target sets 
  Revise selftest testing for regression
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278486.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278487.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278488.html

For regression on a directory and on a git repository, it can support arbitrary 
directory layouts. The regression will select pairs of result instances for 
comparison based on the unique configuration data inside the result instances 
themselves. 

I have some questions regarding the below items:
> I think there is a third thing we also need to look at:
>
> It would be great if there was some way of allowing some kind of templating
> when storing into the git repository. This way a general local log file from
> tmp/log/oeqa could be stored into the git repo, being split according to the
> layout of the repo if needed.
>
> Our default layout could match that from the autobuilder but the repository
> could define a layout?
Before developing a custom template layout for the store git repo, I would like 
to understand more so that I can make sure the output fulfills the 
requirement. May I know the intention of storing results into a git repo 
with a custom layout template? Can you share the use case? 

For ptest and performance tests, let me look into them. Thank you for sharing 
the logparser. 
http://git.yoctoproject.org/cgit.cgi/poky-contrib/tree/meta/lib/oeqa/utils/logparser.py#n101

Best regards,
Yeoh Ee Peng 

-----Original Message-----
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Tuesday, January 29, 2019 12:29 AM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Cc: Eggleton, Paul ; Burton, Ross 

Subject: Re: [OE-core] [PATCH 1/2 v5] resultstool: enable merge, store, report 
and regression analysis

Hi Ee Peng,

On Mon, 2019-01-28 at 02:12 +, Yeoh, Ee Peng wrote:
> Thanks for providing the precious inputs. 
> I agree with you that the current patch, which enables file-based 
> regression, is not enough for other use cases.
> 
> From the information that you had shared, there are 2 more regression 
> use cases that I have in mind:
> Use case#1: directory based regression. Given that the Autobuilder stores 
> result files inside /testresults directories, the user shall be able to 
> perform directory-based regression using output from the Autobuilder 
> directly, such as the below Autobuilder directories.
> https://autobuilder.yocto.io/pub/releases/yocto-2.6.1.rc1/testresults/
> qemux86/testresults.json 
> https://autobuilder.yocto.io/pub/releases/yocto-2.7_M1.rc1/testresults
> /qemux86/testresults.json 
> https://autobuilder.yocto.io/pub/releases/yocto-2.7_M2.rc1/testresults
> /qemux86/testresults.json
> 
> Assume that there are 2 directories storing lists of result files.
> The user shall provide these 2 directories for regression; the regression 
> scripts will first parse through all the available files inside each 
> directory, then perform regression based on the available configuration 
> data to determine the regression pairs (eg. select result_set_1 from
> directory#1 and result_set_x from directory#2 if they both have 
> matching configurations).

Yes, this would be very useful. I suspect you don't need to have matching 
layouts, just import from all the json files in a given directory for the 
comparison.

This way we can support arbitrary layouts.

> Use case#2: git branch based regression. Given that the Autobuilder stores 
> result files inside /testresults directories, the user shall first store 
> these directories and the result files in each git branch accordingly 
> using the existing store plugin.
> After that, the user can use the git branch based regression to analyse 
> the information.
> Store in yocto-2.6.1.rc1, yocto-2.7_M1.rc1, yocto-2.7_M2.rc1 git 
> branch accordingly 
> https://autobuilder.yocto.io/pub/releases/yocto-2.6.1.rc1/testresults/
> https://autobuilder.yocto.io/pub/releases/yocto-2.7_M1.rc1/testresults
> / 
> https://autobuilder.yocto.io/pub/releases/yocto-2.7_M2.rc1/testresults
> /
>  
> Assume that result files are stored inside a git repository, with a 
> specific git branch storing the result files for a single commit. The user 
> shall provide the 2 specific git branches for regression; the regression 
> scripts will first parse through all the available files inside each git 
> branch, then perform regression based on the available configuration data 
> to determine the regression pairs (eg. select result_set_1 from
> git_branch_1 and result_set_x from git_branch_2 if they both have 
> matching configurations).
> 
> The current codebase can be easily extended to enable both

[OE-core] [PATCH 1/2 v6] resultstool: enable merge, store, report and regression analysis

2019-01-29 Thread Yeoh Ee Peng
OEQA outputs test results into json files and these files are
archived by the Autobuilder during QA releases. Example: each oe-selftest
run by the Autobuilder for a different host distro generates a
testresults.json file.

These scripts were developed as test result tools to manage
these testresults.json files.

Using the "store" operation, user can store multiple testresults.json
files as well as the pre-configured directories used to hold those files.

Using the "merge" operation, user can merge multiple testresults.json
files to a target file.

Using the "report" operation, user can view the test result summary
for all available testresults.json files inside an ordinary directory
or a git repository.

Using the "regression-file" operation, user can perform regression
analysis on testresults.json files specified. Using the "regression-dir"
and "regression-git" operations, user can perform regression analysis
on a directory and a git repository accordingly.

These resultstool operations expect the testresults.json file to use
the json format below.
{
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            "<config_name>": "<config_value>",
            ...
            "<config_name>": "<config_value>",
        },
        "result": {
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
            ...
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
        }
    },
    ...
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            ...
        },
        "result": {
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
            ...
        }
    },
}

To use these scripts, first source the oe environment, then run the
entry point script to look for help.
$ resultstool

To store test results from oeqa automated tests, execute the below
$ resultstool store <source_dir> <git_branch>

To merge multiple testresults.json files, execute the below
$ resultstool merge <base_result_file> <target_result_file>

To view a test report, execute the below
$ resultstool report <source_dir>

To perform regression file analysis, execute the below
$ resultstool regression-file <base_result_file> <target_result_file>

To perform regression dir analysis, execute the below
$ resultstool regression-dir <base_result_dir> <target_result_dir>

To perform regression git analysis, execute the below
$ resultstool regression-git <source_dir> <base_branch> <target_branch>

[YOCTO# 13012]
[YOCTO# 12654]

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/files/testresults/testresults.json   |  40 +
 meta/lib/oeqa/selftest/cases/resultstooltests.py   | 104 +++
 scripts/lib/resultstool/__init__.py|   0
 scripts/lib/resultstool/merge.py   |  71 
 scripts/lib/resultstool/regression.py  | 195 +
 scripts/lib/resultstool/report.py  | 113 
 scripts/lib/resultstool/resultsutils.py|  57 ++
 scripts/lib/resultstool/store.py   | 110 
 .../resultstool/template/test_report_full_text.txt |  35 
 scripts/resultstool|  84 +
 10 files changed, 809 insertions(+)
 create mode 100644 meta/lib/oeqa/files/testresults/testresults.json
 create mode 100644 meta/lib/oeqa/selftest/cases/resultstooltests.py
 create mode 100644 scripts/lib/resultstool/__init__.py
 create mode 100644 scripts/lib/resultstool/merge.py
 create mode 100644 scripts/lib/resultstool/regression.py
 create mode 100644 scripts/lib/resultstool/report.py
 create mode 100644 scripts/lib/resultstool/resultsutils.py
 create mode 100644 scripts/lib/resultstool/store.py
 create mode 100644 scripts/lib/resultstool/template/test_report_full_text.txt
 create mode 100755 scripts/resultstool

diff --git a/meta/lib/oeqa/files/testresults/testresults.json 
b/meta/lib/oeqa/files/testresults/testresults.json
new file mode 100644
index 000..1a62155
--- /dev/null
+++ b/meta/lib/oeqa/files/testresults/testresults.json
@@ -0,0 +1,40 @@
+{
+    "runtime_core-image-minimal_qemuarm_20181225195701": {
+        "configuration": {
+            "DISTRO": "poky",
+            "HOST_DISTRO": "ubuntu-16.04",
+  

[OE-core] [PATCH 2/2 v6] scripts/resultstool: enable manual execution and result creation

2019-01-29 Thread Yeoh Ee Peng
From: Mazliana 

Integrated the “manualexecution” operation into the test-case-mgmt scripts.
The manual execution script is a helper script to execute all manual
test cases from the baseline command, which consists of user guideline
steps and the expected results. The last step asks the user to
provide their input for the executed result. The input options are
passed/failed/blocked/skipped status. The given result will be
written to testresults.json, including any error log from the user
input and configuration if there is any. The output test result
json file is created using the OEQA library.

The configuration part is manually keyed in by the user. The system
allows the user to specify how many configurations they want to add,
and they need to define the required configuration name and value pairs.
From a QA perspective, "configuration" means the test environments and
parameters used during QA setup before testing can be carried out.
Examples of configurations: image used for boot up, host machine
distro used, poky configurations, etc.

The purpose of adding the configuration is to standardize the
output test result format between automation and manual execution.

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ resultstool

To execute manual test cases, execute the below
$ resultstool manualexecution <manual_test_case_json_file>

By default testresults.json is stored in <build_dir>/tmp/log/manual/

[YOCTO #12651]

Signed-off-by: Mazliana 
Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resultstool/manualexecution.py | 137 +
 scripts/resultstool|   8 ++
 2 files changed, 145 insertions(+)
 create mode 100644 scripts/lib/resultstool/manualexecution.py

diff --git a/scripts/lib/resultstool/manualexecution.py 
b/scripts/lib/resultstool/manualexecution.py
new file mode 100644
index 000..e0c0c36
--- /dev/null
+++ b/scripts/lib/resultstool/manualexecution.py
@@ -0,0 +1,137 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import re
+from oeqa.core.runner import OETestResultJSONHelper
+from resultstool.resultsutils import load_json_file
+
+class ManualTestRunner(object):
+    def __init__(self):
+        self.jdata = ''
+        self.test_module = ''
+        self.test_suite = ''
+        self.test_cases = ''
+        self.configuration = ''
+        self.starttime = ''
+        self.result_id = ''
+        self.write_dir = ''
+
+    def _get_testcases(self, file):
+        self.jdata = load_json_file(file)
+        self.test_cases = []
+        self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+        self.test_suite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+        for i in self.jdata:
+            self.test_cases.append(i['test']['@alias'].split('.', 2)[2])
+
+    def _get_input(self, config):
+        while True:
+            output = input('{} = '.format(config))
+            if re.match('^[a-zA-Z0-9_]+$', output):
+                break
+            print('Only alphanumeric and underscore are allowed. Please try again')
+        return output
+
+    def _create_config(self):
+        self.configuration = {}
+        while True:
+            try:
+                conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
+                break
+            except ValueError:
+                print('Invalid input. Please provide input as a number not character.')
+        for i in range(conf_total):
+            print('-')
+            print('This is configuration #%s ' % (i + 1) + '. Please provide configuration name and its value')
+            print('-')
+            name_conf = self._get_input('Configuration Name')
+            value_conf = self._get_input('Configuration Value')
+            print('-\n')
+            self.configuration[name_conf.upper()] = value_conf
+        current_datetime = datetime.datetime.now()
+        self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+        self.configuration['STARTTIME'] = self.starttime
+        self.configuration['TEST_TYPE'] = self.test_module
+
+    def _create_result_id(self):
+        self.result_id = 'manual_' + self.test_module + '_' + self.starttime
+
+    def _execute_test_steps(self, test_id):
+        t

[OE-core] [PATCH 0/2 v6] test-case-mgmt

2019-01-29 Thread Yeoh Ee Peng
v1:
  Face key error from oe-git-archive
  Undesirable behavior when storing to multiple git branch

v2: 
  Include fix for oe-git-archive
  Include fix for store result to multiple git branch
  Improve git commit message   

v3:
  Enhance fix for oe-git-archive by using exception catch to
  improve code readability and easy to understand

v4:
  Add new features, merge result files & regression analysis 
  Add selftest to merge, store, report and regression functionalities
  Revise codebase for pythonic
  
v5:
  Add required files for selftest testing store
  
v6:
  Add regression for directory and git repository
  Enable regression pairing base set to multiple target sets 
  Revise selftest testing for regression

Mazliana (1):
  scripts/resultstool: enable manual execution and result creation

Yeoh Ee Peng (1):
  resultstool: enable merge, store, report and regression analysis

 meta/lib/oeqa/files/testresults/testresults.json   |  40 +
 meta/lib/oeqa/selftest/cases/resultstooltests.py   | 104 +++
 scripts/lib/resultstool/__init__.py|   0
 scripts/lib/resultstool/manualexecution.py | 137 +++
 scripts/lib/resultstool/merge.py   |  71 
 scripts/lib/resultstool/regression.py  | 195 +
 scripts/lib/resultstool/report.py  | 113 
 scripts/lib/resultstool/resultsutils.py|  57 ++
 scripts/lib/resultstool/store.py   | 110 
 .../resultstool/template/test_report_full_text.txt |  35 
 scripts/resultstool|  92 ++
 11 files changed, 954 insertions(+)
 create mode 100644 meta/lib/oeqa/files/testresults/testresults.json
 create mode 100644 meta/lib/oeqa/selftest/cases/resultstooltests.py
 create mode 100644 scripts/lib/resultstool/__init__.py
 create mode 100644 scripts/lib/resultstool/manualexecution.py
 create mode 100644 scripts/lib/resultstool/merge.py
 create mode 100644 scripts/lib/resultstool/regression.py
 create mode 100644 scripts/lib/resultstool/report.py
 create mode 100644 scripts/lib/resultstool/resultsutils.py
 create mode 100644 scripts/lib/resultstool/store.py
 create mode 100644 scripts/lib/resultstool/template/test_report_full_text.txt
 create mode 100755 scripts/resultstool

-- 
2.7.4

-- 
___
Openembedded-core mailing list
Openembedded-core@lists.openembedded.org
http://lists.openembedded.org/mailman/listinfo/openembedded-core


Re: [OE-core] [PATCH 1/2 v5] resultstool: enable merge, store, report and regression analysis

2019-01-27 Thread Yeoh, Ee Peng
Hi RP,

Thanks for providing the precious inputs. 
I agree with you that the current patch, which enables file-based regression, 
is not enough for other use cases. 

From the information that you have shared, there are 2 more regression use 
cases that I have in mind:
Use case#1: directory based regression
Given that the Autobuilder stores result files inside /testresults directories, 
the user shall be able to perform directory-based regression using output from 
the Autobuilder directly, such as the below Autobuilder directories.
https://autobuilder.yocto.io/pub/releases/yocto-2.6.1.rc1/testresults/qemux86/testresults.json
https://autobuilder.yocto.io/pub/releases/yocto-2.7_M1.rc1/testresults/qemux86/testresults.json
https://autobuilder.yocto.io/pub/releases/yocto-2.7_M2.rc1/testresults/qemux86/testresults.json

Assume that there are 2 directories storing lists of result files. The user 
shall provide these 2 directories for regression; the regression scripts will 
first parse through all the available files inside each directory, then perform 
regression based on the available configuration data to determine the 
regression pairs (eg. select result_set_1 from directory#1 and result_set_x 
from directory#2 if they both have matching configurations). 


Use case#2: git branch based regression
Given that the Autobuilder stores result files inside /testresults directories, 
the user shall first store these directories and the result files in each git 
branch accordingly using the existing store plugin. After that, the user can 
use the git branch based regression to analyse the information.
Store in yocto-2.6.1.rc1, yocto-2.7_M1.rc1, yocto-2.7_M2.rc1 git branch 
accordingly
https://autobuilder.yocto.io/pub/releases/yocto-2.6.1.rc1/testresults/
https://autobuilder.yocto.io/pub/releases/yocto-2.7_M1.rc1/testresults/
https://autobuilder.yocto.io/pub/releases/yocto-2.7_M2.rc1/testresults/
 
Assume that result files are stored inside a git repository, with a specific 
git branch storing the result files for a single commit. The user shall provide 
the 2 specific git branches for regression; the regression scripts will first 
parse through all the available files inside each git branch, then perform 
regression based on the available configuration data to determine the 
regression pairs (eg. select result_set_1 from git_branch_1 and result_set_x 
from git_branch_2 if they both have matching configurations).

The current codebase can easily be extended to enable both use cases above. 
Please let me know if both use cases are important, and please give us 
your inputs. 

Thanks,
Ee Peng 

-----Original Message-----
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Friday, January 25, 2019 11:44 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Cc: Eggleton, Paul ; Burton, Ross 

Subject: Re: [OE-core] [PATCH 1/2 v5] resultstool: enable merge, store, report 
and regression analysis

On Tue, 2019-01-22 at 17:42 +0800, Yeoh Ee Peng wrote:
> OEQA outputs test results into json files and these files were 
> archived by Autobuilder during QA releases. Example: each oe-selftest 
> run by Autobuilder for different host distro generate a 
> testresults.json file.
> 
> These scripts were developed as a test result tools to manage these 
> testresults.json file.
> 
> Using the "store" operation, user can store multiple testresults.json 
> files as well as the pre-configured directories used to hold those 
> files.
> 
> Using the "merge" operation, user can merge multiple testresults.json 
> files to a target file.
> 
> Using the "report" operation, user can view the test result summary 
> for all available testresults.json files inside an ordinary directory 
> or a git repository.
> 
> Using the "regression" operation, user can perform regression analysis 
> on testresults.json files specified.

Thanks Ee Peng, this version is much improved!

As an experiment I had a local test results file and I was able to run:

$ resultstool regression /tmp/repo/testresults.json /tmp/repo/testresults.json 
-b sdk_core-image-sato_x86_64_qemumips_20181219111311 -t 
sdk_core-image-sato_x86_64_qemumips_20181219200052
Successfully loaded base test results from: /tmp/repo/testresults.json 
Successfully loaded target test results from: /tmp/repo/testresults.json 
Getting base test result with 
result_id=sdk_core-image-sato_x86_64_qemumips_20181219111311
Getting target test result with 
result_id=sdk_core-image-sato_x86_64_qemumips_20181219200052
Start Regression
Only print regression if base status not equal target : <base status> -> <target status> 

assimp.BuildAssimp.test_assimp : ERROR -> PASSED 
==End Regression==

I was able to clearly see that my failing test case went from ERROR to PA

Re: [OE-core] [PATCH 1/2 v5] resultstool: enable merge, store, report and regression analysis

2019-01-23 Thread Yeoh, Ee Peng
RP,

The current patch allows file-based regression, meaning if you have file#1 and 
file#2, the regression will select result instances for comparison based on the 
available configuration data. 

There are 2 more regression use cases that I have in mind:
Use case#1: directory based regression - Assumed that there are 2 directories 
storing list of result files. User shall provide these 2 directories for 
regression, regression scripts will first parse through all the available files 
inside each directories, then perform regression based on available 
configuration data to determine the regression pair (eg. select result_set_1 
from directory#1 and result_set_x from directory#2 if they both have matching 
configurations). 

Use case#2: git branch based regression - Assume that result files are stored 
inside a git repository, with a specific git branch storing the result files 
for a single commit. The user shall provide the 2 specific git branches for 
regression; the regression scripts will first parse through all the available 
files inside each git branch, then perform regression based on the available 
configuration data to determine the regression pairs (eg. select result_set_1 
from git_branch_1 and result_set_x from git_branch_2 if they both have matching 
configurations).

Any idea which of the regression use cases above is needed? We shall develop 
the next regression functionality based on your inputs. 

Best regards,
Yeoh Ee Peng 

-----Original Message-----
From: Yeoh, Ee Peng 
Sent: Tuesday, January 22, 2019 5:42 PM
To: openembedded-core@lists.openembedded.org
Cc: Yeoh, Ee Peng 
Subject: [PATCH 1/2 v5] resultstool: enable merge, store, report and regression 
analysis

OEQA outputs test results into json files and these files are archived by the 
Autobuilder during QA releases. Example: each oe-selftest run by the Autobuilder 
for a different host distro generates a testresults.json file.

These scripts were developed as test result tools to manage these 
testresults.json files.

Using the "store" operation, user can store multiple testresults.json files as 
well as the pre-configured directories used to hold those files.

Using the "merge" operation, user can merge multiple testresults.json files to 
a target file.

Using the "report" operation, user can view the test result summary for all 
available testresults.json files inside an ordinary directory or a git 
repository.

Using the "regression" operation, user can perform regression analysis on 
testresults.json files specified.

These resultstool operations expect the testresults.json file to use the json 
format below.
{
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            "<config_name>": "<config_value>",
            ...
            "<config_name>": "<config_value>",
        },
        "result": {
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
            ...
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
        }
    },
    ...
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            ...
        },
        "result": {
            "<testcase_name>": {
                "status": "<status>",
                "log": "<log>"
            },
            ...
        }
    },
}

To use these scripts, first source the oe environment, then run the entry point 
script to look for help.
$ resultstool

To store test results from oeqa automated tests, execute the below
$ resultstool store <source_dir> <git_branch>

To merge multiple testresults.json files, execute the below
$ resultstool merge <base_result_file> <target_result_file>

To view a test report, execute the below
$ resultstool report <source_dir>

To perform regression analysis, execute the below
$ resultstool regression <base_result_file> <target_result_file>

[YOCTO# 13012]
[YOCTO# 12654]

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/files/testresults/testresults.json   |  40 ++
 meta/lib/oeqa/selftest/cases/resultstooltests.py   | 104 
 scripts/lib/resultstool/__init__.py|   0
 scripts/lib/resultstool/merge.py   |  71 +++
 scripts/lib/resultstool/regression.py  | 134 +
 scripts/lib/resultstool/report.py  | 122 +++
 scripts/lib/resultstool/resultsut

Re: [OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting

2019-01-22 Thread Yeoh, Ee Peng
Sorry, I realized that I had missed including the files used by the oe-selftest 
that tests the store operation.
I submitted v5 patches that add the required files for oe-selftest -r 
resultstooltests.

http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278243.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278244.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278245.html

-----Original Message-----
From: Yeoh, Ee Peng 
Sent: Tuesday, January 22, 2019 5:45 PM
To: Richard Purdie ; 
openembedded-core@lists.openembedded.org
Cc: Burton, Ross ; Paul Eggleton 

Subject: RE: [OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result 
and reporting

Hi Richard,

After your recent sharing on pythonic style, we have revised these scripts in 
the hope of improving code readability and ease of maintenance. New 
functionality was also developed following pythonic style. 

The latest patches were submitted today at the below URLs. 
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278240.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278238.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278239.html

Changes compared to the previous version:
1. Add new features: merge multiple testresults.json files & regression 
analysis for two specified testresults.json files
2. Add selftest for the merge, store, report and regression functionalities
3. Revise code style to be more pythonic

Regarding your questions below:
1. What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test results 
for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per test run? 
or ???
- Are branches used for each release series (master, thud, sumo etc?) 
Basically, the layout we'd use to import the autobuilder results for each 
master run for example remains unclear to me, or how we'd look up the status of 
a given commit.

The target layout shall be a specific git branch for each commit tested, where 
the file directories shall be based on the existing Autobuilder results archive 
(eg. assuming the store command was executed inside the Autobuilder machine that 
stores the testresults.json files and predefined directory), simply execute: $ 
resultstool store <source_dir> <git_branch>, where source_dir is the top 
directory used by the Autobuilder to archive all testresults.json files and 
git_branch is the QA cycle for the currently tested commit. 

The first instance of executing "resultstool store" will generate a git 
repository under the <top_dir>/<git_dir> directory. To update the files to be 
stored, simply execute $ resultstool store <source_dir> <git_branch> -d 
<top_dir>/<git_dir>.

2. The code doesn't support comparison of two sets of test results (which tests 
were added/removed? passed when previously failed? failed when previously 
passed?)

Assuming results from a particular tested commit were merged into a single file 
(using the existing "merge" functionality), the user shall use the newly added 
"regression" functionality to compare the result statuses of two 
testresults.json files. Based on the configuration data for each result_id 
set, the comparison logic will select results with the same configurations for 
comparison. More advanced regression and automation can be developed from the 
current code base. 

3. The code also doesn't allow investigation of test report "subdata" like 
looking at the ptest results, comparing them to previous runs, showing the logs 
for passed/failed ptests.

There is also the question of json build performance data.

This is not supported as of now; it will need further enhancement. 

Please let me know if you have any questions or input. Thank you very much for 
your sharing and help!

Thanks,
Yeoh Ee Peng 



-----Original Message-----
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org]
Sent: Monday, January 21, 2019 10:26 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Cc: Burton, Ross ; Paul Eggleton 

Subject: Re: [OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result 
and reporting

On Fri, 2019-01-04 at 14:46 +0800, Yeoh Ee Peng wrote:
> These scripts were developed as an alternative testcase management 
> tool to Testopia. Using these scripts, user can manage the 
> testresults.json files generated by oeqa automated tests. Using the 
> "store" operation, user can store multiple groups of test result each 
> into individual git branch. Within each git branch, user can store 
> multiple testresults.json files under different directories (eg.
> categorize directory by selftest-, runtime-- 
> ).
> Then, using the "report" operation, user can view the test result 
> summary for all available testresults.json files being stored that 
> were grouped by directory and test configuration.
>
> This scripts depends on scri

[OE-core] [PATCH 2/2 v5] scripts/resultstool: enable manual execution and result creation

2019-01-22 Thread Yeoh Ee Peng
From: Mazliana 

Integrated the “manualexecution” operation into the test-case-mgmt scripts.
The manual execution script is a helper script to execute all manual
test cases from the baseline command, which consists of user guideline
steps and the expected results. The last step asks the user to
provide their input for the executed result. The input options are
passed/failed/blocked/skipped status. The given result will be
written to testresults.json, including any error log from the user
input and configuration if there is any. The output test result
json file is created using the OEQA library.

The configuration part is manually keyed in by the user. The system
allows the user to specify how many configurations they want to add,
and they need to define the required configuration name and value pairs.
From a QA perspective, "configuration" means the test environments and
parameters used during QA setup before testing can be carried out.
Examples of configurations: image used for boot up, host machine
distro used, poky configurations, etc.

The purpose of adding the configuration is to standardize the
output test result format between automation and manual execution.

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ resultstool

To execute manual test cases, execute the below
$ resultstool manualexecution <manualjsonfile>

By default, testresults.json is stored in /tmp/log/manual/
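A hedged illustration of the interactive prompts (the values entered are made
up; the exact wording comes from the manualexecution script below):

Please provide how many configuration you want to save
2
This is configuration #1 . Please provide configuration name and its value
Configuration Name = IMAGE
Configuration Value = core_image_minimal

Each name/value pair is validated against ^[a-zA-Z0-9_]+$, the name is
upper-cased, and the pair is written into the "configuration" section of
testresults.json along with STARTTIME and TEST_TYPE.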

[YOCTO #12651]

Signed-off-by: Mazliana 
Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resultstool/manualexecution.py | 137 +
 scripts/resultstool|   8 ++
 2 files changed, 145 insertions(+)
 create mode 100644 scripts/lib/resultstool/manualexecution.py

diff --git a/scripts/lib/resultstool/manualexecution.py 
b/scripts/lib/resultstool/manualexecution.py
new file mode 100644
index 000..e0c0c36
--- /dev/null
+++ b/scripts/lib/resultstool/manualexecution.py
@@ -0,0 +1,137 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import re
+from oeqa.core.runner import OETestResultJSONHelper
+from resultstool.resultsutils import load_json_file
+
+class ManualTestRunner(object):
+def __init__(self):
+self.jdata = ''
+self.test_module = ''
+self.test_suite = ''
+self.test_cases = ''
+self.configuration = ''
+self.starttime = ''
+self.result_id = ''
+self.write_dir = ''
+
+def _get_testcases(self, file):
+self.jdata = load_json_file(file)
+self.test_cases = []
+self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+self.test_suite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+for i in self.jdata:
+self.test_cases.append(i['test']['@alias'].split('.', 2)[2])
+
+def _get_input(self, config):
+while True:
+output = input('{} = '.format(config))
+if re.match('^[a-zA-Z0-9_]+$', output):
+break
+print('Only alphanumeric and underscore are allowed. Please try again')
+return output
+
+def _create_config(self):
+self.configuration = {}
+while True:
+try:
+conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
+break
+except ValueError:
+print('Invalid input. Please provide input as a number not character.')
+for i in range(conf_total):
+print('-')
+print('This is configuration #%s ' % (i + 1) + '. Please provide configuration name and its value')
+print('-')
+name_conf = self._get_input('Configuration Name')
+value_conf = self._get_input('Configuration Value')
+print('-\n')
+self.configuration[name_conf.upper()] = value_conf
+current_datetime = datetime.datetime.now()
+self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+self.configuration['STARTTIME'] = self.starttime
+self.configuration['TEST_TYPE'] = self.test_module
+
+def _create_result_id(self):
+self.result_id = 'manual_' + self.test_module + '_' + self.starttime
+
+def _execute_test_steps(self, test_id):
+t

[OE-core] [PATCH 1/2 v5] resultstool: enable merge, store, report and regression analysis

2019-01-22 Thread Yeoh Ee Peng
OEQA outputs test results into json files and these files are
archived by the Autobuilder during QA releases. Example: each oe-selftest
run by the Autobuilder for a different host distro generates a
testresults.json file.

These scripts were developed as test result tools to manage
these testresults.json files.

Using the "store" operation, the user can store multiple testresults.json
files as well as the pre-configured directories used to hold those files.

Using the "merge" operation, the user can merge multiple testresults.json
files into a target file.

Using the "report" operation, the user can view the test result summary
for all available testresults.json files inside an ordinary directory
or a git repository.

Using the "regression" operation, the user can perform regression analysis
on the specified testresults.json files.

These resultstool operations expect the testresults.json file to use
the json format below.
{
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            "<config_name>": "<config_value>",
            ...
        },
        "result": {
            "<test_case_name>": {
                "status": "<test_status>",
                "log": "<test_log>"
            },
            "<test_case_name>": {
                "status": "<test_status>",
                "log": "<test_log>"
            },
            ...
        }
    },
    ...
    "<result_id>": {
        "configuration": {
            ...
        },
        "result": {
            ...
        }
    },
}
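As a rough illustration of consuming this format, a minimal Python sketch
(the file name and the 'PASSED' status value are assumptions for the example,
not part of this patch):

import json

# Load one testresults.json and print a per-result_id pass summary.
with open('testresults.json') as f:
    results = json.load(f)

for result_id, data in results.items():
    statuses = [entry['status'] for entry in data['result'].values()]
    passed = sum(1 for s in statuses if s == 'PASSED')
    print('%s: %d/%d passed' % (result_id, passed, len(statuses)))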

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ resultstool

To store test results from oeqa automated tests, execute the below
$ resultstool store <source_dir> <git_branch>

To merge multiple testresults.json files, execute the below
$ resultstool merge <base_result_file> <target_result_file>

To generate a test report, execute the below
$ resultstool report <source_dir>

To perform regression analysis, execute the below
$ resultstool regression <base_result_file> <target_result_file>

[YOCTO# 13012]
[YOCTO# 12654]

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/files/testresults/testresults.json   |  40 ++
 meta/lib/oeqa/selftest/cases/resultstooltests.py   | 104 
 scripts/lib/resultstool/__init__.py|   0
 scripts/lib/resultstool/merge.py   |  71 +++
 scripts/lib/resultstool/regression.py  | 134 +
 scripts/lib/resultstool/report.py  | 122 +++
 scripts/lib/resultstool/resultsutils.py|  47 
 scripts/lib/resultstool/store.py   | 110 +
 .../resultstool/template/test_report_full_text.txt |  35 ++
 scripts/resultstool|  84 +
 10 files changed, 747 insertions(+)
 create mode 100644 meta/lib/oeqa/files/testresults/testresults.json
 create mode 100644 meta/lib/oeqa/selftest/cases/resultstooltests.py
 create mode 100644 scripts/lib/resultstool/__init__.py
 create mode 100644 scripts/lib/resultstool/merge.py
 create mode 100644 scripts/lib/resultstool/regression.py
 create mode 100644 scripts/lib/resultstool/report.py
 create mode 100644 scripts/lib/resultstool/resultsutils.py
 create mode 100644 scripts/lib/resultstool/store.py
 create mode 100644 scripts/lib/resultstool/template/test_report_full_text.txt
 create mode 100755 scripts/resultstool

diff --git a/meta/lib/oeqa/files/testresults/testresults.json 
b/meta/lib/oeqa/files/testresults/testresults.json
new file mode 100644
index 000..1a62155
--- /dev/null
+++ b/meta/lib/oeqa/files/testresults/testresults.json
@@ -0,0 +1,40 @@
+{
+"runtime_core-image-minimal_qemuarm_20181225195701": {
+"configuration": {
+"DISTRO": "poky",
+"HOST_DISTRO": "ubuntu-16.04",
+"IMAGE_BASENAME": "core-image-minimal",
+"IMAGE_PKGTYPE": "rpm",
+"LAYERS": {
+"meta": {
+"branch": "master",
+"commit": "801745d918e83f97

[OE-core] [PATCH 0/2 v5] test-case-mgmt

2019-01-22 Thread Yeoh Ee Peng
v1:
  Faced a key error from oe-git-archive
  Undesirable behavior when storing to multiple git branches

v2:
  Include fix for oe-git-archive
  Include fix for storing results to multiple git branches
  Improve git commit message

v3:
  Enhance the fix for oe-git-archive by using an exception catch to
  improve code readability and make it easier to understand

v4:
  Add new features: merge result files & regression analysis
  Add selftests for the merge, store, report and regression functionalities
  Revise codebase to be pythonic

v5:
  Add required files for selftest testing of store

Mazliana (1):
  scripts/resultstool: enable manual execution and result creation

Yeoh Ee Peng (1):
  resultstool: enable merge, store, report and regression analysis

 meta/lib/oeqa/files/testresults/testresults.json   |  40 ++
 meta/lib/oeqa/selftest/cases/resultstooltests.py   | 104 
 scripts/lib/resultstool/__init__.py|   0
 scripts/lib/resultstool/manualexecution.py | 137 +
 scripts/lib/resultstool/merge.py   |  71 +++
 scripts/lib/resultstool/regression.py  | 134 
 scripts/lib/resultstool/report.py  | 122 ++
 scripts/lib/resultstool/resultsutils.py|  47 +++
 scripts/lib/resultstool/store.py   | 110 +
 .../resultstool/template/test_report_full_text.txt |  35 ++
 scripts/resultstool|  92 ++
 11 files changed, 892 insertions(+)
 create mode 100644 meta/lib/oeqa/files/testresults/testresults.json
 create mode 100644 meta/lib/oeqa/selftest/cases/resultstooltests.py
 create mode 100644 scripts/lib/resultstool/__init__.py
 create mode 100644 scripts/lib/resultstool/manualexecution.py
 create mode 100644 scripts/lib/resultstool/merge.py
 create mode 100644 scripts/lib/resultstool/regression.py
 create mode 100644 scripts/lib/resultstool/report.py
 create mode 100644 scripts/lib/resultstool/resultsutils.py
 create mode 100644 scripts/lib/resultstool/store.py
 create mode 100644 scripts/lib/resultstool/template/test_report_full_text.txt
 create mode 100755 scripts/resultstool

-- 
2.7.4



Re: [OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting

2019-01-22 Thread Yeoh, Ee Peng
Hi Richard,

After your recent sharing on pythonic style, we have revised these scripts in
the hope of improving code readability and ease of maintenance. New
functionality was also developed following the pythonic style.

The latest patches were submitted today at the URLs below.
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278240.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278238.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278239.html

Changes compared to the previous version:
1. Add new features: merging multiple testresults.json files & regression
analysis for two specified testresults.json files
2. Add selftests for the merge, store, report and regression functionalities
3. Revised the code style to be more pythonic

Regarding your questions below:
1. What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test results 
for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per test run? 
or ???
- Are branches used for each release series (master, thud, sumo etc?) 
Basically, the layout we'd use to import the autobuilder results for each 
master run for example remains unclear to me, or how we'd look up the status of 
a given commit.

The target layout shall be a specific git branch for each commit tested,
where the file directories are based on the existing Autobuilder results
archive. For example, assuming the store command was executed inside the
Autobuilder machine that stored the testresults.json files and predefined
directories, simply execute:
$ resultstool store <source_dir> <git_branch>
where <source_dir> is the top directory used by the Autobuilder to archive
all testresults.json files and <git_branch> is the QA cycle for the currently
tested commit.

The first run of "resultstool store" will generate a git repository under
the <git_dir>/<git_branch>/ directory. To update the files to be stored,
simply execute:
$ resultstool store <source_dir> <git_branch> -d <git_dir>/<git_branch>/
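For illustration, one possible stored layout (branch and directory names here
are hypothetical, following the per-commit-branch scheme described above):

<git_dir>/ (branch: <git_branch>, one per tested commit / QA cycle)
    oe-selftest-ubuntu/testresults.json
    oe-selftest-fedora/testresults.json
    runtime-qemux86-64/testresults.json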

2. The code doesn't support comparison of two sets of test results (which tests 
were added/removed? passed when previously failed? failed when previously 
passed?)

Assuming the results from a particular tested commit were merged into a
single file (using the existing "merge" functionality), the user can use the
newly added "regression" functionality to compare the result statuses of two
testresults.json files. Based on the configuration data for each result_id
set, the comparison logic will select results with the same configurations
for comparison. More advanced regression and automation can be developed
from the current code base.
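A minimal sketch of that configuration-matching idea, assuming two parsed
testresults.json dictionaries; the function names here are illustrative and
not the actual regression.py API:

def find_config_matches(base, target):
    # Pair result sets whose "configuration" sections are identical,
    # so only like-for-like runs get compared.
    matches = []
    for base_id, base_data in base.items():
        for target_id, target_data in target.items():
            if base_data['configuration'] == target_data['configuration']:
                matches.append((base_id, target_id))
    return matches

def status_changes(base_result, target_result):
    # Report tests whose status differs between two matched result sets.
    changes = {}
    for name, entry in base_result.items():
        new_status = target_result.get(name, {}).get('status', 'MISSING')
        if entry['status'] != new_status:
            changes[name] = (entry['status'], new_status)
    return changes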

3. The code also doesn't allow investigation of test report "subdata" like 
looking at the ptest results, comparing them to previous runs, showing the logs 
for passed/failed ptests.

There is also the question of json build performance data.

These are not supported as of now and will need further enhancement.

Please let me know if you have any questions or input. Thank you very much
for your sharing and help!

Thanks,
Yeoh Ee Peng 



-Original Message-
From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org] 
Sent: Monday, January 21, 2019 10:26 PM
To: Yeoh, Ee Peng ; 
openembedded-core@lists.openembedded.org
Cc: Burton, Ross ; Paul Eggleton 

Subject: Re: [OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result 
and reporting

On Fri, 2019-01-04 at 14:46 +0800, Yeoh Ee Peng wrote:
> These scripts were developed as an alternative testcase management
> tool to Testopia. Using these scripts, the user can manage the
> testresults.json files generated by oeqa automated tests. Using the
> "store" operation, the user can store multiple groups of test results,
> each into an individual git branch. Within each git branch, the user can
> store multiple testresults.json files under different directories (eg.
> categorize directory by selftest-, runtime--
> ).
> Then, using the "report" operation, the user can view the test result
> summary for all available testresults.json files being stored that
> were grouped by directory and test configuration.
>
> This script depends on scripts/oe-git-archive, which was facing an
> error if the gitpython package was not installed. Refer to [YOCTO# 13082]
> for more detail.

Thanks for the patches. These are a lot more readable than the previous
versions and the code quality is much better, which in turn helped review!

I experimented with the code a bit. I'm fine with the manual test execution
piece of this, but I do have some questions/concerns with the result
storage/reporting piece.

What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test results 
for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per test run? 
or ???
- 

[OE-core] [PATCH 2/2 v4] scripts/resultstool: enable manual execution and result creation

2019-01-22 Thread Yeoh Ee Peng
From: Mazliana 

Integrated the “manualexecution” operation into the test-case-mgmt scripts.
The manual execution script is a helper script to execute all manual
test cases in a baseline command, which consists of user guideline
steps and the expected results. The last step asks the user to
provide input on the executed result. The input options are the
passed/failed/blocked/skipped statuses. The given result will be
written to testresults.json, including any error log from the user
input and the configuration, if any. The output json test result
file is created using the OEQA library.

The configuration part is manually keyed in by the user. The system
allows the user to specify how many configurations they want to add,
and they need to define the required configuration name and value pairs.
From a QA perspective, "configuration" means the test environments and
parameters used during QA setup before testing can be carried out.
Examples of configurations: image used for boot up, host machine
distro used, poky configurations, etc.

The purpose of adding the configuration is to standardize the
output test result format between automation and manual execution.

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ resultstool

To execute manual test cases, execute the below
$ resultstool manualexecution <manualjsonfile>

By default, testresults.json is stored in /tmp/log/manual/

[YOCTO #12651]

Signed-off-by: Mazliana 
Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/resultstool/manualexecution.py | 137 +
 scripts/resultstool|   8 ++
 2 files changed, 145 insertions(+)
 create mode 100644 scripts/lib/resultstool/manualexecution.py

diff --git a/scripts/lib/resultstool/manualexecution.py 
b/scripts/lib/resultstool/manualexecution.py
new file mode 100644
index 000..e0c0c36
--- /dev/null
+++ b/scripts/lib/resultstool/manualexecution.py
@@ -0,0 +1,137 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import re
+from oeqa.core.runner import OETestResultJSONHelper
+from resultstool.resultsutils import load_json_file
+
+class ManualTestRunner(object):
+def __init__(self):
+self.jdata = ''
+self.test_module = ''
+self.test_suite = ''
+self.test_cases = ''
+self.configuration = ''
+self.starttime = ''
+self.result_id = ''
+self.write_dir = ''
+
+def _get_testcases(self, file):
+self.jdata = load_json_file(file)
+self.test_cases = []
+self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+self.test_suite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+for i in self.jdata:
+self.test_cases.append(i['test']['@alias'].split('.', 2)[2])
+
+def _get_input(self, config):
+while True:
+output = input('{} = '.format(config))
+if re.match('^[a-zA-Z0-9_]+$', output):
+break
+print('Only alphanumeric and underscore are allowed. Please try again')
+return output
+
+def _create_config(self):
+self.configuration = {}
+while True:
+try:
+conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
+break
+except ValueError:
+print('Invalid input. Please provide input as a number not character.')
+for i in range(conf_total):
+print('-')
+print('This is configuration #%s ' % (i + 1) + '. Please provide configuration name and its value')
+print('-')
+name_conf = self._get_input('Configuration Name')
+value_conf = self._get_input('Configuration Value')
+print('-\n')
+self.configuration[name_conf.upper()] = value_conf
+current_datetime = datetime.datetime.now()
+self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+self.configuration['STARTTIME'] = self.starttime
+self.configuration['TEST_TYPE'] = self.test_module
+
+def _create_result_id(self):
+self.result_id = 'manual_' + self.test_module + '_' + self.starttime
+
+def _execute_test_steps(self, test_id):
+t

[OE-core] [PATCH 0/2 v4] test-case-mgmt

2019-01-22 Thread Yeoh Ee Peng
v1:
  Faced a key error from oe-git-archive
  Undesirable behavior when storing to multiple git branches

v2:
  Include fix for oe-git-archive
  Include fix for storing results to multiple git branches
  Improve git commit message

v3:
  Enhance the fix for oe-git-archive by using an exception catch to
  improve code readability and make it easier to understand

v4:
  Add new features: merge result files & regression analysis
  Add selftests for the merge, store, report and regression functionalities
  Revise codebase to be pythonic

Mazliana (1):
  scripts/resultstool: enable manual execution and result creation

Yeoh Ee Peng (1):
  resultstool: enable merge, store, report and regression analysis

 meta/lib/oeqa/selftest/cases/resultstooltests.py   | 104 
 scripts/lib/resultstool/__init__.py|   0
 scripts/lib/resultstool/manualexecution.py | 137 +
 scripts/lib/resultstool/merge.py   |  71 +++
 scripts/lib/resultstool/regression.py  | 134 
 scripts/lib/resultstool/report.py  | 122 ++
 scripts/lib/resultstool/resultsutils.py|  47 +++
 scripts/lib/resultstool/store.py   | 110 +
 .../resultstool/template/test_report_full_text.txt |  35 ++
 scripts/resultstool|  92 ++
 10 files changed, 852 insertions(+)
 create mode 100644 meta/lib/oeqa/selftest/cases/resultstooltests.py
 create mode 100644 scripts/lib/resultstool/__init__.py
 create mode 100644 scripts/lib/resultstool/manualexecution.py
 create mode 100644 scripts/lib/resultstool/merge.py
 create mode 100644 scripts/lib/resultstool/regression.py
 create mode 100644 scripts/lib/resultstool/report.py
 create mode 100644 scripts/lib/resultstool/resultsutils.py
 create mode 100644 scripts/lib/resultstool/store.py
 create mode 100644 scripts/lib/resultstool/template/test_report_full_text.txt
 create mode 100755 scripts/resultstool

-- 
2.7.4



[OE-core] [PATCH 1/2 v4] resultstool: enable merge, store, report and regression analysis

2019-01-22 Thread Yeoh Ee Peng
OEQA outputs test results into json files and these files are
archived by the Autobuilder during QA releases. Example: each oe-selftest
run by the Autobuilder for a different host distro generates a
testresults.json file.

These scripts were developed as test result tools to manage
these testresults.json files.

Using the "store" operation, the user can store multiple testresults.json
files as well as the pre-configured directories used to hold those files.

Using the "merge" operation, the user can merge multiple testresults.json
files into a target file.

Using the "report" operation, the user can view the test result summary
for all available testresults.json files inside an ordinary directory
or a git repository.

Using the "regression" operation, the user can perform regression analysis
on the specified testresults.json files.

These resultstool operations expect the testresults.json file to use
the json format below.
{
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            "<config_name>": "<config_value>",
            ...
        },
        "result": {
            "<test_case_name>": {
                "status": "<test_status>",
                "log": "<test_log>"
            },
            "<test_case_name>": {
                "status": "<test_status>",
                "log": "<test_log>"
            },
            ...
        }
    },
    ...
    "<result_id>": {
        "configuration": {
            ...
        },
        "result": {
            ...
        }
    },
}

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ resultstool

To store test results from oeqa automated tests, execute the below
$ resultstool store <source_dir> <git_branch>

To merge multiple testresults.json files, execute the below
$ resultstool merge <base_result_file> <target_result_file>

To generate a test report, execute the below
$ resultstool report <source_dir>

To perform regression analysis, execute the below
$ resultstool regression <base_result_file> <target_result_file>

[YOCTO# 13012]
[YOCTO# 12654]

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/selftest/cases/resultstooltests.py   | 104 
 scripts/lib/resultstool/__init__.py|   0
 scripts/lib/resultstool/merge.py   |  71 +++
 scripts/lib/resultstool/regression.py  | 134 +
 scripts/lib/resultstool/report.py  | 122 +++
 scripts/lib/resultstool/resultsutils.py|  47 
 scripts/lib/resultstool/store.py   | 110 +
 .../resultstool/template/test_report_full_text.txt |  35 ++
 scripts/resultstool|  84 +
 9 files changed, 707 insertions(+)
 create mode 100644 meta/lib/oeqa/selftest/cases/resultstooltests.py
 create mode 100644 scripts/lib/resultstool/__init__.py
 create mode 100644 scripts/lib/resultstool/merge.py
 create mode 100644 scripts/lib/resultstool/regression.py
 create mode 100644 scripts/lib/resultstool/report.py
 create mode 100644 scripts/lib/resultstool/resultsutils.py
 create mode 100644 scripts/lib/resultstool/store.py
 create mode 100644 scripts/lib/resultstool/template/test_report_full_text.txt
 create mode 100755 scripts/resultstool

diff --git a/meta/lib/oeqa/selftest/cases/resultstooltests.py 
b/meta/lib/oeqa/selftest/cases/resultstooltests.py
new file mode 100644
index 000..28bfa94
--- /dev/null
+++ b/meta/lib/oeqa/selftest/cases/resultstooltests.py
@@ -0,0 +1,104 @@
+import os
+import sys
+basepath = os.path.abspath(os.path.dirname(__file__) + '/../../../../../')
+lib_path = basepath + '/scripts/lib'
+sys.path = sys.path + [lib_path]
+from resultstool.report import ResultsTextReport
+from resultstool.regression import ResultsRegressionSelector, ResultsRegression
+from resultstool.merge import ResultsMerge
+from resultstool.store import ResultsGitStore
+from resultstool.resultsutils import checkout_git_dir
+from oeqa.selftest.case import OESelftestTestCase
+
+class ResultsToolTests(OESelftestTestCase):
+
+def test_report_can_aggregate_test_result(self):
+result_data = {'result': {'test1': {'status':'PASSED'},
+  'test2': 

[OE-core] [PATCH 3/3 v3] scripts/test-case-mgmt: enable manual execution and result creation

2019-01-03 Thread Yeoh Ee Peng
From: Mazliana 

Integrated the “manualexecution” operation into the test-case-mgmt scripts.
The manual execution script is a helper script to execute all manual
test cases in a baseline command, which consists of user guideline
steps and the expected results. The last step asks the user to
provide input on the executed result. The input options are the
passed/failed/blocked/skipped statuses. The given result will be
written to testresults.json, including any error log from the user
input and the configuration, if any. The output json test result
file is created using the OEQA library.

The configuration part is manually keyed in by the user. The system
allows the user to specify how many configurations they want to add,
and they need to define the required configuration name and value pairs.
From a QA perspective, "configuration" means the test environments and
parameters used during QA setup before testing can be carried out.
Examples of configurations: image used for boot up, host machine
distro used, poky configurations, etc.

The purpose of adding the configuration is to standardize the
output test result format between automation and manual execution.

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ test-case-mgmt

To execute manual test cases, execute the below
$ test-case-mgmt manualexecution <manualjsonfile>

By default, testresults.json is stored in /tmp/log/manual/

[YOCTO #12651]

Signed-off-by: Mazliana 
---
 scripts/lib/testcasemgmt/manualexecution.py | 142 
 scripts/test-case-mgmt  |  11 ++-
 2 files changed, 152 insertions(+), 1 deletion(-)
 create mode 100644 scripts/lib/testcasemgmt/manualexecution.py

diff --git a/scripts/lib/testcasemgmt/manualexecution.py 
b/scripts/lib/testcasemgmt/manualexecution.py
new file mode 100644
index 000..8fd378d
--- /dev/null
+++ b/scripts/lib/testcasemgmt/manualexecution.py
@@ -0,0 +1,142 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import re
+from oeqa.core.runner import OETestResultJSONHelper
+
+class ManualTestRunner(object):
+def __init__(self):
+self.jdata = ''
+self.test_module = ''
+self.test_suite = ''
+self.test_case = ''
+self.configuration = ''
+self.starttime = ''
+self.result_id = ''
+self.write_dir = ''
+
+def _read_json(self, file):
+self.jdata = json.load(open('%s' % file))
+self.test_case = []
+self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+self.test_suite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+for i in range(0, len(self.jdata)):
+self.test_case.append(self.jdata[i]['test']['@alias'].split('.', 2)[2])
+
+def _get_input(self, config):
+while True:
+output = input('{} = '.format(config))
+if re.match('^[a-zA-Z0-9_]+$', output):
+break
+print('Only alphanumeric and underscore are allowed. Please try again')
+return output
+
+def _create_config(self):
+self.configuration = {}
+while True:
+try:
+conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
+break
+except ValueError:
+print('Invalid input. Please provide input as a number not character.')
+for i in range(conf_total):
+print('-')
+print('This is configuration #%s ' % (i + 1) + '. Please provide configuration name and its value')
+print('-')
+name_conf = self._get_input('Configuration Name')
+value_conf = self._get_input('Configuration Value')
+print('-\n')
+self.configuration[name_conf.upper()] = value_conf
+current_datetime = datetime.datetime.now()
+self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+self.configuration['STARTTIME'] = self.starttime
+self.configuration['TEST_TYPE'] = self.test_module
+
+def _create_result_id(self):
+self.result_id = 'manual_' + self.test_module + '_' + self.starttime
+
+def _execute_test_steps(self, test_id):
+test_result = {}
+testcase_id = 

[OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting

2019-01-03 Thread Yeoh Ee Peng
These scripts were developed as an alternative testcase management
tool to Testopia. Using these scripts, the user can manage the
testresults.json files generated by oeqa automated tests. Using the
"store" operation, the user can store multiple groups of test results,
each into an individual git branch. Within each git branch, the user can
store multiple testresults.json files under different directories (eg.
categorize directory by selftest-, runtime--).
Then, using the "report" operation, the user can view the test result
summary for all available testresults.json files being stored that
were grouped by directory and test configuration.

The "report" operation expect the testresults.json file to use the
json format below.
{
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            "<config_name>": "<config_value>",
            ...
        },
        "result": {
            "<test_case_name>": {
                "status": "<test_status>",
                "log": "<test_log>"
            },
            "<test_case_name>": {
                "status": "<test_status>",
                "log": "<test_log>"
            },
            ...
        }
    },
    ...
    "<result_id>": {
        "configuration": {
            ...
        },
        "result": {
            ...
        }
    },
}

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ test-case-mgmt

To store test results from oeqa automated tests, execute the below
$ test-case-mgmt store <source_dir> <git_branch>
By default, test results will be stored at /testresults

To store test results from oeqa automated tests under a specific
directory, execute the below
$ test-case-mgmt store <source_dir> <git_branch> -s <sub_dir>

To view a test report, execute the below
$ test-case-mgmt view <git_dir>

This script depends on scripts/oe-git-archive, which was facing an
error if the gitpython package was not installed. Refer to
[YOCTO# 13082] for more detail.

[YOCTO# 12654]

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/testcasemgmt/__init__.py   |   0
 scripts/lib/testcasemgmt/gitstore.py   | 172 +
 scripts/lib/testcasemgmt/report.py | 136 
 scripts/lib/testcasemgmt/store.py  |  40 +
 .../template/test_report_full_text.txt |  33 
 scripts/test-case-mgmt |  96 
 6 files changed, 477 insertions(+)
 create mode 100644 scripts/lib/testcasemgmt/__init__.py
 create mode 100644 scripts/lib/testcasemgmt/gitstore.py
 create mode 100644 scripts/lib/testcasemgmt/report.py
 create mode 100644 scripts/lib/testcasemgmt/store.py
 create mode 100644 scripts/lib/testcasemgmt/template/test_report_full_text.txt
 create mode 100755 scripts/test-case-mgmt

diff --git a/scripts/lib/testcasemgmt/__init__.py 
b/scripts/lib/testcasemgmt/__init__.py
new file mode 100644
index 000..e69de29
diff --git a/scripts/lib/testcasemgmt/gitstore.py 
b/scripts/lib/testcasemgmt/gitstore.py
new file mode 100644
index 000..19ff28f
--- /dev/null
+++ b/scripts/lib/testcasemgmt/gitstore.py
@@ -0,0 +1,172 @@
+# test case management tool - store test result & log to git repository
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import tempfile
+import os
+import subprocess
+import shutil
+import scriptpath
+scriptpath.add_bitbake_lib_path()
+scriptpath.add_oe_lib_path()
+from oeqa.utils.git import GitRepo, GitError
+
+class GitStore(object):
+
+def __init__(self, git_dir, git_branch):
+self.git_dir = git_dir
+self.git_branch = git_branch
+
+def _git_init(self):
+return GitRepo(self.git_dir, is_topdir=True)
+
+def _run_git_cmd(self, repo, cmd):
+try:
+ 

[OE-core] [PATCH 0/3 v3] test-case-mgmt

2019-01-03 Thread Yeoh Ee Peng
v1:
  Faced a key error from oe-git-archive
  Undesirable behavior when storing to multiple git branches

v2:
  Include fix for oe-git-archive
  Include fix for storing results to multiple git branches
  Improve git commit message

v3:
  Enhance the fix for oe-git-archive by using an exception catch to
  improve code readability and make it easier to understand

Mazliana (1):
  scripts/test-case-mgmt: enable manual execution and result creation

Yeoh Ee Peng (2):
  scripts/oe-git-archive: fix non-existent key referencing error
  scripts/test-case-mgmt: store test result and reporting

 scripts/lib/testcasemgmt/__init__.py   |   0
 scripts/lib/testcasemgmt/gitstore.py   | 172 +
 scripts/lib/testcasemgmt/manualexecution.py| 142 +
 scripts/lib/testcasemgmt/report.py | 136 
 scripts/lib/testcasemgmt/store.py  |  40 +
 .../template/test_report_full_text.txt |  33 
 scripts/oe-git-archive |  19 ++-
 scripts/test-case-mgmt | 105 +
 8 files changed, 641 insertions(+), 6 deletions(-)
 create mode 100644 scripts/lib/testcasemgmt/__init__.py
 create mode 100644 scripts/lib/testcasemgmt/gitstore.py
 create mode 100644 scripts/lib/testcasemgmt/manualexecution.py
 create mode 100644 scripts/lib/testcasemgmt/report.py
 create mode 100644 scripts/lib/testcasemgmt/store.py
 create mode 100644 scripts/lib/testcasemgmt/template/test_report_full_text.txt
 create mode 100755 scripts/test-case-mgmt

-- 
2.7.4



[OE-core] [PATCH 1/3 v3] scripts/oe-git-archive: fix non-existent key referencing error

2019-01-03 Thread Yeoh Ee Peng
Without the gitpython package installed, oe-git-archive will face the
error below, where it references a key that does not exist inside the
metadata object.

Traceback (most recent call last):
  File "/scripts/oe-git-archive", line 271, in 
sys.exit(main())
  File "/scripts/oe-git-archive", line 229, in main
'commit_count': metadata['layers']['meta']['commit_count'],
KeyError: 'commit_count'

Fix this error by adding an exception catch when referencing a
non-existent key (based on input provided by Richard Purdie).

[YOCTO# 13082]

Signed-off-by: Yeoh Ee Peng 
---
 scripts/oe-git-archive | 19 +--
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/scripts/oe-git-archive b/scripts/oe-git-archive
index ab19cb9..913291a 100755
--- a/scripts/oe-git-archive
+++ b/scripts/oe-git-archive
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3
 #
 # Helper script for committing data to git and pushing upstream
 #
@@ -208,6 +208,13 @@ def parse_args(argv):
 help="Data to commit")
 return parser.parse_args(argv)
 
+def get_nested(d, list_of_keys):
+try:
+for k in list_of_keys:
+d = d[k]
+return d
+except KeyError:
+return ""
 
 def main(argv=None):
 """Script entry point"""
@@ -223,11 +230,11 @@ def main(argv=None):
 
 # Get keywords to be used in tag and branch names and messages
 metadata = metadata_from_bb()
-keywords = {'hostname': metadata['hostname'],
-'branch': metadata['layers']['meta']['branch'],
-'commit': metadata['layers']['meta']['commit'],
-'commit_count': metadata['layers']['meta']['commit_count'],
-'machine': metadata['config']['MACHINE']}
+keywords = {'hostname': get_nested(metadata, ['hostname']),
+'branch': get_nested(metadata, ['layers', 'meta', 'branch']),
+'commit': get_nested(metadata, ['layers', 'meta', 'commit']),
+'commit_count': get_nested(metadata, ['layers', 'meta', 'commit_count']),
+'machine': get_nested(metadata, ['config', 'MACHINE'])}
 
 # Expand strings early in order to avoid getting into inconsistent
 # state (e.g. no tag even if data was committed)
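For clarity, a quick sketch of how the new helper behaves (the metadata dict
here is a made-up example, not real oe-git-archive output):

metadata = {'layers': {'meta': {'branch': 'master'}}}
get_nested(metadata, ['layers', 'meta', 'branch'])        # -> 'master'
get_nested(metadata, ['layers', 'meta', 'commit_count'])  # -> '' (no KeyError)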
-- 
2.7.4



[OE-core] [PATCH 2/3 v2] scripts/test-case-mgmt: store test result and reporting

2019-01-02 Thread Yeoh Ee Peng
These scripts were developed as an alternative testcase management
tool to Testopia. Using these scripts, the user can manage the
testresults.json files generated by oeqa automated tests. Using the
"store" operation, the user can store multiple groups of test results,
each into an individual git branch. Within each git branch, the user can
store multiple testresults.json files under different directories (eg.
categorize directory by selftest-, runtime--).
Then, using the "report" operation, the user can view the test result
summary for all available testresults.json files being stored that
were grouped by directory and test configuration.

The "report" operation expect the testresults.json file to use the
json format below.
{
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            "<config_name>": "<config_value>",
            ...
        },
        "result": {
            "<test_case_name>": {
                "status": "<test_status>",
                "log": "<test_log>"
            },
            "<test_case_name>": {
                "status": "<test_status>",
                "log": "<test_log>"
            },
            ...
        }
    },
    ...
    "<result_id>": {
        "configuration": {
            ...
        },
        "result": {
            ...
        }
    },
}

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ test-case-mgmt

To store test results from oeqa automated tests, execute the below
$ test-case-mgmt store <source_dir> <git_branch>
By default, test results will be stored at /testresults

To store test results from oeqa automated tests under a specific
directory, execute the below
$ test-case-mgmt store <source_dir> <git_branch> -s <sub_dir>

To view a test report, execute the below
$ test-case-mgmt view <git_dir>

This script depends on scripts/oe-git-archive, which was facing an
error if the gitpython package was not installed. Refer to
[YOCTO# 13082] for more detail.

[YOCTO# 12654]

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/testcasemgmt/__init__.py   |   0
 scripts/lib/testcasemgmt/gitstore.py   | 172 +
 scripts/lib/testcasemgmt/report.py | 136 
 scripts/lib/testcasemgmt/store.py  |  40 +
 .../template/test_report_full_text.txt |  33 
 scripts/test-case-mgmt |  96 
 6 files changed, 477 insertions(+)
 create mode 100644 scripts/lib/testcasemgmt/__init__.py
 create mode 100644 scripts/lib/testcasemgmt/gitstore.py
 create mode 100644 scripts/lib/testcasemgmt/report.py
 create mode 100644 scripts/lib/testcasemgmt/store.py
 create mode 100644 scripts/lib/testcasemgmt/template/test_report_full_text.txt
 create mode 100755 scripts/test-case-mgmt

diff --git a/scripts/lib/testcasemgmt/__init__.py 
b/scripts/lib/testcasemgmt/__init__.py
new file mode 100644
index 000..e69de29
diff --git a/scripts/lib/testcasemgmt/gitstore.py 
b/scripts/lib/testcasemgmt/gitstore.py
new file mode 100644
index 000..3a151b0
--- /dev/null
+++ b/scripts/lib/testcasemgmt/gitstore.py
@@ -0,0 +1,172 @@
+# test case management tool - store test result & log to git repository
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import tempfile
+import os
+import subprocess
+import shutil
+import scriptpath
+scriptpath.add_bitbake_lib_path()
+scriptpath.add_oe_lib_path()
+from oeqa.utils.git import GitRepo, GitError
+
+class GitStore(object):
+
+def __init__(self, git_dir, git_branch):
+self.git_dir = git_dir
+self.git_branch = git_branch
+
+def _git_init(self):
+return GitRepo(self.git_dir, is_topdir=True)
+
+def _run_git_cmd(self, repo, cmd):
+try:
+ 

[OE-core] [PATCH 3/3 v2] scripts/test-case-mgmt: enable manual execution and result creation

2019-01-02 Thread Yeoh Ee Peng
From: Mazliana 

Integrated the “manualexecution” operation into the test-case-mgmt scripts.
The manual execution script is a helper script to execute all manual
test cases in a baseline command, which consists of user guideline
steps and the expected results. The last step asks the user to
provide input on the executed result. The input options are the
passed/failed/blocked/skipped statuses. The given result will be
written to testresults.json, including any error log from the user
input and the configuration, if any. The output json test result
file is created using the OEQA library.

The configuration part is manually keyed in by the user. The system
allows the user to specify how many configurations they want to add,
and they need to define the required configuration name and value pairs.
From a QA perspective, "configuration" means the test environments and
parameters used during QA setup before testing can be carried out.
Examples of configurations: image used for boot up, host machine
distro used, poky configurations, etc.

The purpose of adding the configuration is to standardize the
output test result format between automation and manual execution.

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ test-case-mgmt

To execute manual test cases, execute the below
$ test-case-mgmt manualexecution <manualjsonfile>

By default, testresults.json is stored in /tmp/log/manual/

[YOCTO #12651]

Signed-off-by: Mazliana 
---
 scripts/lib/testcasemgmt/manualexecution.py | 142 
 scripts/test-case-mgmt  |  11 ++-
 2 files changed, 152 insertions(+), 1 deletion(-)
 create mode 100644 scripts/lib/testcasemgmt/manualexecution.py

diff --git a/scripts/lib/testcasemgmt/manualexecution.py 
b/scripts/lib/testcasemgmt/manualexecution.py
new file mode 100644
index 000..8fd378d
--- /dev/null
+++ b/scripts/lib/testcasemgmt/manualexecution.py
@@ -0,0 +1,142 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import re
+from oeqa.core.runner import OETestResultJSONHelper
+
+class ManualTestRunner(object):
+def __init__(self):
+self.jdata = ''
+self.test_module = ''
+self.test_suite = ''
+self.test_case = ''
+self.configuration = ''
+self.starttime = ''
+self.result_id = ''
+self.write_dir = ''
+
+def _read_json(self, file):
+self.jdata = json.load(open('%s' % file))
+self.test_case = []
+self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+self.test_suite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+for i in range(0, len(self.jdata)):
+self.test_case.append(self.jdata[i]['test']['@alias'].split('.', 2)[2])
+
+def _get_input(self, config):
+while True:
+output = input('{} = '.format(config))
+if re.match('^[a-zA-Z0-9_]+$', output):
+break
+print('Only alphanumeric and underscore are allowed. Please try again')
+return output
+
+def _create_config(self):
+self.configuration = {}
+while True:
+try:
+conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
+break
+except ValueError:
+print('Invalid input. Please provide input as a number not character.')
+for i in range(conf_total):
+print('-')
+print('This is configuration #%s ' % (i + 1) + '. Please provide configuration name and its value')
+print('-')
+name_conf = self._get_input('Configuration Name')
+value_conf = self._get_input('Configuration Value')
+print('-\n')
+self.configuration[name_conf.upper()] = value_conf
+current_datetime = datetime.datetime.now()
+self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+self.configuration['STARTTIME'] = self.starttime
+self.configuration['TEST_TYPE'] = self.test_module
+
+def _create_result_id(self):
+self.result_id = 'manual_' + self.test_module + '_' + self.starttime
+
+def _execute_test_steps(self, test_id):
+test_result = {}
+testcase_id = 

[OE-core] [PATCH 1/3 v2] scripts/oe-git-archive: fix non-existent key referencing error

2019-01-02 Thread Yeoh Ee Peng
Without the gitpython package installed, oe-git-archive will face the
error below, where it references a key that does not exist inside the
metadata object.

Traceback (most recent call last):
  File "/scripts/oe-git-archive", line 271, in 
sys.exit(main())
  File "/scripts/oe-git-archive", line 229, in main
'commit_count': metadata['layers']['meta']['commit_count'],
KeyError: 'commit_count'

Fix this error by checking that the key exists in the metadata before
referencing it.

[YOCTO# 13082]

Signed-off-by: Yeoh Ee Peng 
---
 scripts/oe-git-archive | 28 ++--
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/scripts/oe-git-archive b/scripts/oe-git-archive
index ab19cb9..80c2379 100755
--- a/scripts/oe-git-archive
+++ b/scripts/oe-git-archive
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3
 #
 # Helper script for committing data to git and pushing upstream
 #
@@ -208,6 +208,18 @@ def parse_args(argv):
 help="Data to commit")
 return parser.parse_args(argv)
 
+def _get_metadata_value(metadata, keys):
+if len(keys) > 0:
+key = keys.pop(0)
+if key in metadata:
+if len(keys) == 0:
+return metadata[key]
+else:
+return _get_metadata_value(metadata[key], keys)
+else:
+return ""
+else:
+return ""
 
 def main(argv=None):
 """Script entry point"""
@@ -223,11 +235,15 @@ def main(argv=None):
 
 # Get keywords to be used in tag and branch names and messages
 metadata = metadata_from_bb()
-keywords = {'hostname': metadata['hostname'],
-'branch': metadata['layers']['meta']['branch'],
-'commit': metadata['layers']['meta']['commit'],
-'commit_count': metadata['layers']['meta']['commit_count'],
-'machine': metadata['config']['MACHINE']}
+keywords_map = {'hostname': ['hostname'],
+'branch': ['layers', 'meta', 'branch'],
+'commit': ['layers', 'meta', 'commit'],
+'commit_count': ['layers', 'meta', 'commit_count'],
+'machine': ['config', 'MACHINE']}
+keywords = {}
+for map_key in keywords_map.keys():
+keywords_value = _get_metadata_value(metadata, keywords_map[map_key])
+keywords[map_key] = keywords_value
 
 # Expand strings early in order to avoid getting into inconsistent
 # state (e.g. no tag even if data was committed)
-- 
2.7.4



[OE-core] [PATCH 0/3] test-case-mgmt

2019-01-02 Thread Yeoh Ee Peng
v1:
  Faced a key error from oe-git-archive
  Undesirable behavior when storing to multiple git branches

v2:
  Include fix for oe-git-archive
  Include fix for storing results to multiple git branches
  Improve git commit message

Mazliana (1):
  scripts/test-case-mgmt: enable manual execution and result creation

Yeoh Ee Peng (2):
  scripts/oe-git-archive: fix non-existent key referencing error
  scripts/test-case-mgmt: store test result and reporting

 scripts/lib/testcasemgmt/__init__.py   |   0
 scripts/lib/testcasemgmt/gitstore.py   | 172 +
 scripts/lib/testcasemgmt/manualexecution.py| 142 +
 scripts/lib/testcasemgmt/report.py | 136 
 scripts/lib/testcasemgmt/store.py  |  40 +
 .../template/test_report_full_text.txt |  33 
 scripts/oe-git-archive |  28 +++-
 scripts/test-case-mgmt | 105 +
 8 files changed, 650 insertions(+), 6 deletions(-)
 create mode 100644 scripts/lib/testcasemgmt/__init__.py
 create mode 100644 scripts/lib/testcasemgmt/gitstore.py
 create mode 100644 scripts/lib/testcasemgmt/manualexecution.py
 create mode 100644 scripts/lib/testcasemgmt/report.py
 create mode 100644 scripts/lib/testcasemgmt/store.py
 create mode 100644 scripts/lib/testcasemgmt/template/test_report_full_text.txt
 create mode 100755 scripts/test-case-mgmt

-- 
2.7.4



[OE-core] [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function

2018-12-24 Thread Yeoh Ee Peng
From: Mazliana 

The manual execution script is a helper script to execute all manual test
cases in a baseline command, which consists of user guideline steps and the
expected results. The last step asks the user to provide input on the
executed result. The input options are the passed/failed/blocked/skipped
statuses. The given result will be written to testresults.json, including
any error log from the user input and the configuration, if any. The output
json test result file is created using the OEQA library.

The configuration part is manually keyed in by the user. The system allows
the user to specify how many configurations they want to add, and they need
to define the required configuration name and value pairs. From a QA
perspective, "configuration" means the test environments and parameters used
during QA setup before testing can be carried out. Examples of
configurations: image used for boot up, host machine distro used, poky
configurations, etc.

The purpose of adding the configuration is to standardize the output test
result format between automation and manual execution.

scripts/test-case-mgmt: add "manualexecution" as a tool

Integrated the test-case-mgmt "store" and "report" operations with "manual
execution". Manual test execution is an alternative to the Testopia test
case management tool. This script has only bare-minimum functionality:
the user can only execute all of the test cases that a component has.

To use these scripts, first source oe environment, then run the entry point
script to look for help.
$ test-case-mgmt

To execute manual test cases, execute the below
$ test-case-mgmt manualexecution <manualjsonfile>

By default, testresults.json is stored in /tmp/log/manual

[YOCTO #12651]

Signed-off-by: Mazliana 
---
 scripts/lib/testcasemgmt/manualexecution.py | 142 
 scripts/test-case-mgmt  |  11 ++-
 2 files changed, 152 insertions(+), 1 deletion(-)
 create mode 100644 scripts/lib/testcasemgmt/manualexecution.py

diff --git a/scripts/lib/testcasemgmt/manualexecution.py 
b/scripts/lib/testcasemgmt/manualexecution.py
new file mode 100644
index 000..c6c450f
--- /dev/null
+++ b/scripts/lib/testcasemgmt/manualexecution.py
@@ -0,0 +1,142 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import re
+from oeqa.core.runner import OETestResultJSONHelper
+
+class ManualTestRunner(object):
+def __init__(self):
+self.jdata = ''
+self.test_module = ''
+self.test_suite = ''
+self.test_case = ''
+self.configuration = ''
+self.starttime = ''
+self.result_id = ''
+self.write_dir = ''
+
+def _read_json(self, file):
+self.jdata = json.load(open('%s' % file))
+self.test_case = []
+self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+self.test_suite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+for i in range(0, len(self.jdata)):
+self.test_case.append(self.jdata[i]['test']['@alias'].split('.', 2)[2])
+
+def _get_input(self, config):
+while True:
+output = input('{} = '.format(config))
+if re.match('^[a-zA-Z0-9_]+$', output):
+break
+print('Only alphanumeric and underscore are allowed. Please try again')
+return output
+
+def _create_config(self):
+self.configuration = {}
+while True:
+try:
+conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
+break
+except ValueError:
+print('Invalid input. Please provide input as a number not character.')
+for i in range(conf_total):
+print('-')
+print('This is configuration #%s ' % (i + 1) + '. Please provide configuration name and its value')
+print('-')
+name_conf = self._get_input('Configuration Name')
+value_conf = self._get_input('Configuration Value')
+print('-\n')
+self.configuration[name_conf.upper()] = value_conf
+current_datetime = datetime.datetime.now()
+self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+

[OE-core] [PATCH 1/2] scripts/test-case-mgmt: store test result and reporting

2018-12-24 Thread Yeoh Ee Peng
These scripts were developed as an alternative test case management
tool to Testopia. Using these scripts, the user can manage the
testresults.json files generated by oeqa automated tests. Using the
"store" operation, the user can store multiple testresults.json files
under different directories (eg. categorize directory by
selftest-, runtime--). Then, using the
"report" operation, the user can view the test result summary
for all available testresults.json files being stored that
were grouped by directory and test configuration.

The "report" operation expects the testresults.json file to use the
json format below. OEQA implements the code to create test results
in this format.
{
    "<result_id>": {
        "configuration": {
            "<config_name>": "<config_value>",
            "<config_name>": "<config_value>",
            ...
        },
        "result": {
            "<test_case_name>": {
                "status": "<test_status>",
                "log": "<test_log>"
            },
            "<test_case_name>": {
                "status": "<test_status>",
                "log": "<test_log>"
            },
            ...
        }
    },
    ...
    "<result_id>": {
        "configuration": {
            ...
        },
        "result": {
            ...
        }
    },
}

This script depends on scripts/oe-git-archive, which was facing an
error if the gitpython package was not installed. Refer to
[YOCTO# 13082] for more detail.

To use these scripts, first source oe environment, then run the
entry point script to look for help.
$ test-case-mgmt

To store test results from oeqa automated tests, execute the below
$ test-case-mgmt store <source_dir> <git_branch>

To store test results from oeqa automated tests under a custom
directory, execute the below
$ test-case-mgmt store <source_dir> <git_branch> -s <sub_dir>

To report a test result summary, execute the below
$ test-case-mgmt report <git_dir>

Signed-off-by: Yeoh Ee Peng 
---
 scripts/lib/testcasemgmt/__init__.py   |   0
 scripts/lib/testcasemgmt/gitstore.py   | 175 +
 scripts/lib/testcasemgmt/report.py | 136 
 scripts/lib/testcasemgmt/store.py  |  40 +
 .../template/test_report_full_text.txt |  33 
 scripts/test-case-mgmt |  96 +++
 6 files changed, 480 insertions(+)
 create mode 100644 scripts/lib/testcasemgmt/__init__.py
 create mode 100644 scripts/lib/testcasemgmt/gitstore.py
 create mode 100644 scripts/lib/testcasemgmt/report.py
 create mode 100644 scripts/lib/testcasemgmt/store.py
 create mode 100644 scripts/lib/testcasemgmt/template/test_report_full_text.txt
 create mode 100755 scripts/test-case-mgmt

diff --git a/scripts/lib/testcasemgmt/__init__.py 
b/scripts/lib/testcasemgmt/__init__.py
new file mode 100644
index 000..e69de29
diff --git a/scripts/lib/testcasemgmt/gitstore.py 
b/scripts/lib/testcasemgmt/gitstore.py
new file mode 100644
index 000..0acb50e
--- /dev/null
+++ b/scripts/lib/testcasemgmt/gitstore.py
@@ -0,0 +1,175 @@
+# test case management tool - store test result & log to git repository
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import tempfile
+import os
+import subprocess
+import shutil
+import scriptpath
+scriptpath.add_bitbake_lib_path()
+scriptpath.add_oe_lib_path()
+from oeqa.utils.git import GitRepo, GitError
+#from oe.path import copytree, ignore_patterns
+
+class GitStore(object):
+
+    def __init__(self, git_dir, git_branch):
+        self.git_dir = git_dir
+        self.git_branch = git_branch
+
+    def _git_init(self):
+        return GitRepo(self.git_dir, is_topdir=True)
+
+    def _run_git_cmd(self, repo, cmd):
+        try:
+            output = repo.run_cmd(cmd)
+            return
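
For context, a minimal sketch of driving the GitRepo wrapper the patch
builds on; the repository path and the specific git command are
assumptions, and the oe lib path is assumed to already be on sys.path.

    from oeqa.utils.git import GitRepo, GitError

    # Create (or open) a results repository and run a git command through
    # the wrapper, mirroring what GitStore._run_git_cmd does above.
    repo = GitRepo.init('/tmp/testresults-repo')
    try:
        output = repo.run_cmd('status')
        print(output)
    except GitError as err:
        print('git command failed: %s' % err)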

[OE-core] [PATCH 3/4 v2] oeqa/qemu: Add support for slirp

2018-11-22 Thread Yeoh Ee Peng
Enable qemu to run with slirp networking. Initialize QemuRunner with
slirp and set up the ip and port attributes to enable connections to
qemu running with slirp.

[YOCTO#10713]

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/core/target/qemu.py | 22 +++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/meta/lib/oeqa/core/target/qemu.py b/meta/lib/oeqa/core/target/qemu.py
index 538bf12..f47fd74 100644
--- a/meta/lib/oeqa/core/target/qemu.py
+++ b/meta/lib/oeqa/core/target/qemu.py
@@ -13,7 +13,7 @@ supported_fstypes = ['ext3', 'ext4', 'cpio.gz', 'wic']
 
 class OEQemuTarget(OESSHTarget):
     def __init__(self, logger, server_ip, timeout=300, user='root',
-            port=None, machine='', rootfs='', kernel='', kvm=False,
+            port=None, machine='', rootfs='', kernel='', kvm=False, slirp=False,
             dump_dir='', dump_host_cmds='', display='', bootlog='',
             tmpdir='', dir_image='', boottime=60, **kwargs):
 
@@ -25,17 +25,33 @@ class OEQemuTarget(OESSHTarget):
         self.rootfs = rootfs
         self.kernel = kernel
         self.kvm = kvm
+        self.use_slirp = slirp
 
         self.runner = QemuRunner(machine=machine, rootfs=rootfs, tmpdir=tmpdir,
                                  deploy_dir_image=dir_image, display=display,
                                  logfile=bootlog, boottime=boottime,
-                                 use_kvm=kvm, dump_dir=dump_dir,
+                                 use_kvm=kvm, use_slirp=slirp, dump_dir=dump_dir,
                                  dump_host_cmds=dump_host_cmds, logger=logger)
 
     def start(self, params=None, extra_bootparams=None):
+        if self.use_slirp and not self.server_ip:
+            self.logger.error("Could not start qemu with slirp without server ip - provide 'TEST_SERVER_IP'")
+            raise RuntimeError("FAILED to start qemu - check the task log and the boot log")
         if self.runner.start(params, extra_bootparams=extra_bootparams):
             self.ip = self.runner.ip
-            self.server_ip = self.runner.server_ip
+            if self.use_slirp:
+                target_ip_port = self.runner.ip.split(':')
+                if len(target_ip_port) == 2:
+                    target_ip = target_ip_port[0]
+                    port = target_ip_port[1]
+                    self.ip = target_ip
+                    self.ssh = self.ssh + ['-p', port]
+                    self.scp = self.scp + ['-P', port]
+                else:
+                    self.logger.error("Could not get host machine port to connect qemu with slirp, ssh will not be "
+                                      "able to connect to qemu with slirp")
+            if self.runner.server_ip:
+                self.server_ip = self.runner.server_ip
         else:
             self.stop()
             raise RuntimeError("FAILED to start qemu - check the task log and the boot log")
-- 
2.7.4
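
The address handling this patch adds can be exercised standalone; a
minimal sketch, with the sample slirp address as the only assumption:

    # Under slirp, QemuRunner reports the target as "localhost:<host_port>";
    # OEQemuTarget splits it into an ssh host plus -p/-P port arguments.
    runner_ip = "localhost:2222"
    target_ip_port = runner_ip.split(':')
    if len(target_ip_port) == 2:
        target_ip, port = target_ip_port
        ssh_args = ['-p', port]
        scp_args = ['-P', port]
        print(target_ip, ssh_args, scp_args)
        # localhost ['-p', '2222'] ['-P', '2222']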



[OE-core] [PATCH 4/4 v2] testimage: Add support for slirp

2018-11-22 Thread Yeoh Ee Peng
Enable testimage to support qemu slirp. Configure the "QEMU_USE_SLIRP"
and "TEST_SERVER_IP" variables to enable slirp.

[YOCTO#10713]

Signed-off-by: Yeoh Ee Peng 
---
 meta/classes/testimage.bbclass | 5 +
 1 file changed, 5 insertions(+)

diff --git a/meta/classes/testimage.bbclass b/meta/classes/testimage.bbclass
index 92e5686..82cbb06 100644
--- a/meta/classes/testimage.bbclass
+++ b/meta/classes/testimage.bbclass
@@ -236,6 +236,10 @@ def testimage_main(d):
     else:
         kvm = False
 
+    slirp = False
+    if d.getVar("QEMU_USE_SLIRP"):
+        slirp = True
+
     # TODO: We use the current implementatin of qemu runner because of
     # time constrains, qemu runner really needs a refactor too.
     target_kwargs = { 'machine' : machine,
@@ -247,6 +251,7 @@ def testimage_main(d):
                      'boottime': boottime,
                      'bootlog' : bootlog,
                      'kvm'     : kvm,
+                     'slirp'   : slirp,
                    }
 
     # TODO: Currently BBPATH is needed for custom loading of targets.
-- 
2.7.4
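
As a usage note, a minimal sketch of enabling the variables above from a
build's local.conf, done here from Python; the TEST_SERVER_IP value is
an assumption:

    # Append the slirp testimage switches to an existing build's local.conf.
    config = (
        'QEMU_USE_SLIRP = "1"\n'
        'TEST_SERVER_IP = "127.0.0.1"\n'
    )
    with open('conf/local.conf', 'a') as f:
        f.write(config)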



[OE-core] [PATCH 1/4 v2] oeqa/qemu & runtime: qemu do not need ip input from external

2018-11-22 Thread Yeoh Ee Peng
Qemu does not use the ip passed in from outside. It
retrieves the ip from the QemuRunner instance and assigns
the value itself.

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/core/target/qemu.py | 5 ++---
 meta/lib/oeqa/runtime/context.py  | 2 +-
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/meta/lib/oeqa/core/target/qemu.py b/meta/lib/oeqa/core/target/qemu.py
index bf3b633..538bf12 100644
--- a/meta/lib/oeqa/core/target/qemu.py
+++ b/meta/lib/oeqa/core/target/qemu.py
@@ -12,15 +12,14 @@ from oeqa.utils.qemurunner import QemuRunner
 supported_fstypes = ['ext3', 'ext4', 'cpio.gz', 'wic']
 
 class OEQemuTarget(OESSHTarget):
-    def __init__(self, logger, ip, server_ip, timeout=300, user='root',
+    def __init__(self, logger, server_ip, timeout=300, user='root',
             port=None, machine='', rootfs='', kernel='', kvm=False,
             dump_dir='', dump_host_cmds='', display='', bootlog='',
             tmpdir='', dir_image='', boottime=60, **kwargs):
 
-        super(OEQemuTarget, self).__init__(logger, ip, server_ip, timeout,
-                user, port)
+        super(OEQemuTarget, self).__init__(logger, None, server_ip, timeout,
+                user, port)
 
-        self.ip = ip
         self.server_ip = server_ip
         self.machine = machine
         self.rootfs = rootfs
diff --git a/meta/lib/oeqa/runtime/context.py b/meta/lib/oeqa/runtime/context.py
index a7f3823..943e29b 100644
--- a/meta/lib/oeqa/runtime/context.py
+++ b/meta/lib/oeqa/runtime/context.py
@@ -101,7 +101,7 @@ class OERuntimeTestContextExecutor(OETestContextExecutor):
         if target_type == 'simpleremote':
             target = OESSHTarget(logger, target_ip, server_ip, **kwargs)
         elif target_type == 'qemu':
-            target = OEQemuTarget(logger, target_ip, server_ip, **kwargs)
+            target = OEQemuTarget(logger, server_ip, **kwargs)
         else:
             # XXX: This code uses the old naming convention for controllers and
             # targets, the idea it is to leave just targets as the controller
-- 
2.7.4



[OE-core] [PATCH 2/4 v2] qemurunner: Add support for slirp

2018-11-22 Thread Yeoh Ee Peng
Enable qemurunner for slirp. Retrieve the ip and port of the host
machine so that connections can be made to qemu from the host.

[YOCTO#10713]

Signed-off-by: Yeoh Ee Peng 
---
 meta/lib/oeqa/utils/qemurunner.py | 17 +
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/meta/lib/oeqa/utils/qemurunner.py b/meta/lib/oeqa/utils/qemurunner.py
index d40b3b8..f943034 100644
--- a/meta/lib/oeqa/utils/qemurunner.py
+++ b/meta/lib/oeqa/utils/qemurunner.py
@@ -28,7 +28,8 @@ re_control_char = re.compile('[%s]' % re.escape("".join(control_chars)))
 
 class QemuRunner:
 
-    def __init__(self, machine, rootfs, display, tmpdir, deploy_dir_image, logfile, boottime, dump_dir, dump_host_cmds, use_kvm, logger):
+    def __init__(self, machine, rootfs, display, tmpdir, deploy_dir_image, logfile, boottime, dump_dir, dump_host_cmds,
+                 use_kvm, logger, use_slirp=False):
 
         # Popen object for runqemu
         self.runqemu = None
@@ -51,6 +52,7 @@ class QemuRunner:
         self.logged = False
         self.thread = None
         self.use_kvm = use_kvm
+        self.use_slirp = use_slirp
         self.msg = ''
 
         self.runqemutime = 120
@@ -129,6 +131,8 @@ class QemuRunner:
             self.logger.debug('Not using kvm for runqemu')
         if not self.display:
             launch_cmd += ' nographic'
+        if self.use_slirp:
+            launch_cmd += ' slirp'
         launch_cmd += ' %s %s' % (self.machine, self.rootfs)
 
         return self.launch(launch_cmd, qemuparams=qemuparams, get_ip=get_ip, extra_bootparams=extra_bootparams, env=env)
@@ -238,9 +242,14 @@ class QemuRunner:
             # because is possible to have control characters
             cmdline = re_control_char.sub(' ', cmdline)
             try:
-                ips = re.findall(r"((?:[0-9]{1,3}\.){3}[0-9]{1,3})", cmdline.split("ip=")[1])
-                self.ip = ips[0]
-                self.server_ip = ips[1]
+                if self.use_slirp:
+                    tcp_ports = cmdline.split("hostfwd=tcp::")[1]
+                    host_port = tcp_ports[:tcp_ports.find('-')]
+                    self.ip = "localhost:%s" % host_port
+                else:
+                    ips = re.findall(r"((?:[0-9]{1,3}\.){3}[0-9]{1,3})", cmdline.split("ip=")[1])
+                    self.ip = ips[0]
+                    self.server_ip = ips[1]
                 self.logger.debug("qemu cmdline used:\n{}".format(cmdline))
             except (IndexError, ValueError):
                 # Try to get network configuration from runqemu output
-- 
2.7.4
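
The cmdline parsing added above can be checked standalone; a minimal
sketch, where the qemu command line is a made-up sample:

    # Extract the host-forwarded ssh port from a slirp qemu command line,
    # as QemuRunner does after reading the process cmdline.
    cmdline = "qemu-system-x86_64 -netdev user,id=net0,hostfwd=tcp::2222-:22 -m 256"
    tcp_ports = cmdline.split("hostfwd=tcp::")[1]   # "2222-:22 -m 256"
    host_port = tcp_ports[:tcp_ports.find('-')]     # "2222"
    print("localhost:%s" % host_port)               # localhost:2222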



[OE-core] [PATCH 0/4] Enable qemu slirp for testimage only

2018-11-22 Thread Yeoh Ee Peng
Changes:

[v2]
 - enable qemu slirp and, like kvm, enable it only for testimage
 - QemuRunner defaults to use_slirp=False, so oe-selftest will skip
   all the new logic related to slirp

Yeoh Ee Peng (4):
  oeqa/qemu & runtime: qemu do not need ip input from external
  qemurunner: Add support for slirp
  oeqa/qemu: Add support for slirp
  testimage: Add support for slirp

 meta/classes/testimage.bbclass|  5 +
 meta/lib/oeqa/core/target/qemu.py | 27 +--
 meta/lib/oeqa/runtime/context.py  |  2 +-
 meta/lib/oeqa/utils/qemurunner.py | 17 +
 4 files changed, 40 insertions(+), 11 deletions(-)

-- 
2.7.4


