Re: [Piglit] [PATCH] framework: fix running with mixed concurrency (neither -c or -1)

2018-05-09 Thread Arkadiusz Hiler
On Tue, May 08, 2018 at 10:32:26AM -0700, Dylan Baker wrote:
> test_list is an iterator, you can't walk an iterator more than once. To
> solve this we use itertools.tee, which creates a shared buffer for the
> iterators to draw from. This fixes errors like:
> 
> [000/480]
> Traceback (most recent call last):
>   File "./piglit", line 178, in <module>
> main()
>   File "./piglit", line 174, in main
> sys.exit(runner(args))
>   File "/home/user/piglit/framework/exceptions.py", line 51, in _inner
> func(*args, **kwargs)
>   File "/home/user/piglit/framework/programs/run.py", line 370, in run
> backend.finalize({'time_elapsed': time_elapsed.to_json()})
>   File "/home/user/piglit/framework/backends/json.py", line 163, in finalize
> assert data['tests']
>   AssertionError
> 
> Thanks Tomi for figuring out what was wrong with the original patch!
> 
> CC: Michel Dänzer 
> CC: Tomi Sarvela 
> CC: "Juan A. Suarez Romero" 

Oh, just noticed that (well, actually Tomi told me about this).

I've sent another patch which fixes the same issue in a slightly
different way. Sorry for the duplication!
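
For reference, itertools.tee roughly behaves like this (minimal standalone
sketch, not the patch itself):

    # Minimal sketch (not piglit code): itertools.tee gives two iterators
    # that share a buffer, so both walks over a single-pass iterator see
    # every item.
    import itertools

    tests = iter([("a", 1), ("b", 2), ("c", 3)])   # single-pass iterator

    first, second = itertools.tee(tests, 2)

    print([name for name, _ in first])    # ['a', 'b', 'c']
    print([name for name, _ in second])   # ['a', 'b', 'c'] -- not exhausted by the first walk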

-- 
Cheers,
Arek


[Piglit] [PATCH] profile: Don't exhaust generator when concurrency == "some"

2018-05-09 Thread Arkadiusz Hiler
TestProfile.itertests() is a generator that we happily exhaust. If
concurrency == "some" then run_profile() invokes run_threads() twice,
once for thread-safe tests, and once for non-safe ones. The first
invocation exhausts the generator, so the second one sees nothing.

This patch changes the call chain so that we call .itertests() in the
place we actually consume the generator, not earlier.
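
For reference, a minimal standalone sketch of the failure mode and the fix
(illustrative only, not piglit code):

    # A generator is exhausted after one pass, so a second consumer sees
    # nothing; asking the producer again (as run_threads() now does via
    # profile.itertests()) yields a fresh generator.
    def itertests():
        for name in ("safe-1", "safe-2", "unsafe-1"):
            yield name

    gen = itertests()
    print(list(gen))           # ['safe-1', 'safe-2', 'unsafe-1']
    print(list(gen))           # [] -- exhausted, the second pass sees nothing

    print(list(itertests()))   # a fresh generator yields everything again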

Reported-by: Stanislav Lisovskiy <stanislav.lisovs...@intel.com>
Signed-off-by: Arkadiusz Hiler <arkadiusz.hi...@intel.com>
---
 framework/profile.py | 28 +---
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/framework/profile.py b/framework/profile.py
index 1c75025b3..f214006bc 100644
--- a/framework/profile.py
+++ b/framework/profile.py
@@ -561,12 +561,7 @@ def run(profiles, logger, backend, concurrency):
 """
 chunksize = 1
 
-profiles = [(p, p.itertests()) for p in profiles]
-log = LogManager(logger, sum(len(p) for p, _ in profiles))
-
-# check that after the filters are run there are actually tests to run.
-# if not any(l for _, l in profiles):
-# raise exceptions.PiglitUserError('no matching tests')
+log = LogManager(logger, sum(len(p) for p in profiles))
 
 def test(name, test, profile, this_pool=None):
 """Function to call test.execute from map"""
@@ -576,33 +571,36 @@ def run(profiles, logger, backend, concurrency):
 if profile.options['monitor'].abort_needed:
 this_pool.terminate()
 
-def run_threads(pool, profile, test_list, filterby=None):
+def run_threads(pool, profile, filterby=None):
 """ Open a pool, close it, and join it """
 if filterby:
 # Although filterby could be attached to TestProfile as a filter,
 # it would have to be removed when run_threads exits, requiring
 # more code, and adding side-effects
-test_list = (x for x in test_list if filterby(x))
+test_list = (x for x in profile.itertests() if filterby(x))
+else:
+test_list = list(profile.itertests())
 
 pool.imap(lambda pair: test(pair[0], pair[1], profile, pool),
   test_list, chunksize)
 
-def run_profile(profile, test_list):
+def run_profile(profile):
 """Run an individual profile."""
 profile.setup()
+
 if concurrency == "all":
-run_threads(multi, profile, test_list)
+run_threads(multi, profile)
 elif concurrency == "none":
-run_threads(single, profile, test_list)
+run_threads(single, profile)
 else:
 assert concurrency == "some"
 # Filter and return only thread safe tests to the threaded pool
-run_threads(multi, profile, test_list,
+run_threads(multi, profile,
 lambda x: x[1].run_concurrent)
 
 # Filter and return the non thread safe tests to the single
 # pool
-run_threads(single, profile, test_list,
+run_threads(single, profile,
 lambda x: not x[1].run_concurrent)
 profile.teardown()
 
@@ -615,7 +613,7 @@ def run(profiles, logger, backend, concurrency):
 
 try:
 for p in profiles:
-run_profile(*p)
+run_profile(p)
 
 for pool in [single, multi]:
 pool.close()
@@ -623,6 +621,6 @@ def run(profiles, logger, backend, concurrency):
 finally:
 log.get().summary()
 
-for p, _ in profiles:
+for p in profiles:
 if p.options['monitor'].abort_needed:
 raise exceptions.PiglitAbort(p.options['monitor'].error_message)
-- 
2.14.3



[Piglit] [PATCH] framework: Do not run with an empty test list

2017-09-26 Thread Arkadiusz Hiler
Because in Python we have `bool([]) == False`, providing an empty test
list resulted in hitting the same code path as not providing it at all,
meaning that we run everything.

Let's just exit early with an appropriate message instead.

This will get rid of the rather surprising behavior and will help make
the execution less prone to automated list-generation errors (which have
already bitten us) as well as human errors.
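
For reference, a minimal sketch of the pitfall (illustrative only, not the
actual run.py code):

    # An empty list is falsy, so "the test list filtered down to nothing"
    # looked exactly like "no test list was given" and fell through to
    # running everything.
    forced_test_list = []          # e.g. every line was blank or a '#' comment

    if forced_test_list:
        print("run only the listed tests")
    else:
        print("run everything")    # surprising when a test list *was* provided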

Signed-off-by: Arkadiusz Hiler <arkadiusz.hi...@intel.com>
---
 framework/programs/run.py | 4 
 1 file changed, 4 insertions(+)

diff --git a/framework/programs/run.py b/framework/programs/run.py
index 4524f171b..0fec264ec 100644
--- a/framework/programs/run.py
+++ b/framework/programs/run.py
@@ -327,6 +327,10 @@ def run(input_):
 stripped = (t.split('#')[0].strip() for t in test_list)
 forced_test_list = [t for t in stripped if t]
 
+# to avoid running everything
+if not forced_test_list:
+raise exceptions.PiglitFatalError("Empty test list provided")
+
 backend = backends.get_backend(args.backend)(
 args.results_path,
 junit_suffix=args.junit_suffix,
-- 
2.13.5



Re: [Piglit] [PATCH v3] framework: Add --ignore-missing option

2017-08-15 Thread Arkadiusz Hiler
On Mon, Aug 14, 2017 at 10:21:13AM -0700, Dylan Baker wrote:
> Quoting Arkadiusz Hiler (2017-08-14 05:09:01)
> > Currently, if a test from provided testlist fails to be discovered by
> > the framework, piglit blows up with an exception.
> > 
> > This is both good - for consistency/early errors - and bad - for
> > handling some CI/automation scenarios (e.g autobisecting the tests).
> > 
> > So let's keep the current default, but allow some flexibility with the
> > new option that reports the missing tests as 'notrun'.
> > 
> > v2:
> >  - change switch name to 'ignore', as skip is too suggestive
> >  - use DummyTest to get 'notrun' result instead of warnings
> > 
> > v3: don't use OPTIONS
> > 
> > bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99649
> > Cc: Dylan Baker <dy...@pnwbakers.com>
> > Cc: Tomi Sarvela <tomi.p.sarv...@intel.com>
> > Cc: Martin Peres <martin.pe...@intel.com>
> > Signed-off-by: Arkadiusz Hiler <arkadiusz.hi...@intel.com>
> > ---
> >  framework/profile.py  | 10 +++---
> >  framework/programs/run.py | 12 
> >  framework/test/base.py| 12 
> >  3 files changed, 31 insertions(+), 3 deletions(-)
> > 
> > diff --git a/framework/profile.py b/framework/profile.py
> > index a625318..cf0e298 100644

> > @@ -314,7 +315,10 @@ class TestProfile(object):
> >  if self.forced_test_list:
> >  opts = collections.OrderedDict()
> >  for n in self.forced_test_list:
> > -opts[n] = self.test_list[n]
> > +if self.options['ignore_missing'] and n not in self.test_list:
> > +opts[n] = DummyTest(name=n, result=status.NOTRUN)
> 
> name and result are not keyword arguments, they're positional, so please write
> it like this instead:
> opts[n] = DummyTest(n, status.NOTRUN)
>
> As long as this passes the unit test suite with the above changed,
> Reviewed-by: Dylan Baker <dy...@pnwbakers.com>

% tox > with_patches
% git checkout HEAD~
% tox > without_patches
% diff -u <(grep -v PYTHONHASHSEED with_patches) <(grep -v PYTHONHASHSEED without_patches) | sprunge
http://sprunge.us/VZDN


Exactly identical pass rate and the same tests are failing.
I guess that's a pass? :-)

> Thanks for fixing that for me.

Glad to see you back in action.

v4 with your r-b is coming in a second, and thanks for the review!


-- 
Cheers,
Arek


[Piglit] [PATCH v3] framework: Add --ignore-missing option

2017-08-14 Thread Arkadiusz Hiler
Currently, if a test from the provided test list fails to be discovered by
the framework, piglit blows up with an exception.

This is both good - for consistency/early errors - and bad - for
handling some CI/automation scenarios (e.g. autobisecting the tests).

So let's keep the current default, but allow some flexibility with the
new option that reports the missing tests as 'notrun'.

v2:
 - change switch name to 'ignore', as skip is too suggestive
 - use DummyTest to get 'notrun' result instead of warnings

v3: don't use OPTIONS

bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99649
Cc: Dylan Baker <dy...@pnwbakers.com>
Cc: Tomi Sarvela <tomi.p.sarv...@intel.com>
Cc: Martin Peres <martin.pe...@intel.com>
Signed-off-by: Arkadiusz Hiler <arkadiusz.hi...@intel.com>
---
 framework/profile.py  | 10 +++---
 framework/programs/run.py | 12 
 framework/test/base.py| 12 
 3 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/framework/profile.py b/framework/profile.py
index a625318..cf0e298 100644
--- a/framework/profile.py
+++ b/framework/profile.py
@@ -42,11 +42,11 @@ import re
 
 import six
 
-from framework import grouptools, exceptions
+from framework import grouptools, exceptions, status
 from framework.dmesg import get_dmesg
 from framework.log import LogManager
 from framework.monitoring import Monitoring
-from framework.test.base import Test
+from framework.test.base import Test, DummyTest
 
 __all__ = [
 'RegexFilter',
@@ -285,6 +285,7 @@ class TestProfile(object):
 self.options = {
 'dmesg': get_dmesg(False),
 'monitor': Monitoring(False),
+'ignore_missing': False,
 }
 
 def setup(self):
@@ -314,7 +315,10 @@ class TestProfile(object):
 if self.forced_test_list:
 opts = collections.OrderedDict()
 for n in self.forced_test_list:
-opts[n] = self.test_list[n]
+if self.options['ignore_missing'] and n not in self.test_list:
+opts[n] = DummyTest(name=n, result=status.NOTRUN)
+else:
+opts[n] = self.test_list[n]
 else:
 opts = self.test_list  # pylint: disable=redefined-variable-type
 
diff --git a/framework/programs/run.py b/framework/programs/run.py
index 6444cfe..4524f17 100644
--- a/framework/programs/run.py
+++ b/framework/programs/run.py
@@ -208,6 +208,10 @@ def _run_parser(input_):
  'isolation. This allows, but does not require, '
  'tests to run multiple tests per process. '
  'This value can also be set in piglit.conf.')
+parser.add_argument("--ignore-missing",
+dest="ignore_missing",
+action="store_true",
+help="missing tests are considered as 'notrun'")
 parser.add_argument("test_profile",
 metavar="",
 nargs='+',
@@ -234,6 +238,7 @@ def _create_metadata(args, name, forced_test_list):
 if args.platform:
 opts['platform'] = args.platform
 opts['forced_test_list'] = forced_test_list
+opts['ignore_missing'] = args.ignore_missing
 
 metadata = {'options': opts}
 metadata['name'] = name
@@ -346,6 +351,10 @@ def run(input_):
 for p in profiles:
 p.options['monitor'] = monitoring.Monitoring(args.monitored)
 
+if args.ignore_missing:
+for p in profiles:
+p.options['ignore_missing'] = args.ignore_missing
+
 for p in profiles:
 if args.exclude_tests:
 p.filters.append(profile.RegexFilter(args.exclude_tests,
@@ -422,6 +431,9 @@ def resume(input_):
 p.options['monitor'] = monitoring.Monitoring(
 results.options['monitoring'])
 
+if results.options['ignore_missing']:
+p.options['ignore_missing'] = results.options['ignore_missing']
+
 if exclude_tests:
 p.filters.append(lambda n, _: n not in exclude_tests)
 if results.options['exclude_filter']:
diff --git a/framework/test/base.py b/framework/test/base.py
index d73dee9..e74ea3d 100644
--- a/framework/test/base.py
+++ b/framework/test/base.py
@@ -381,6 +381,18 @@ class Test(object):
 return not self == other
 
 
+class DummyTest(Test):
+def __init__(self, name, result):
+super(DummyTest, self).__init__([name])
+self.result.result = result
+
+def execute(self, path, log, options):
+pass
+
+def interpret_result(self):
+pass
+
+
 class WindowResizeMixin(object):
 """ Mixin class that deals with spurious window resizes
 
-- 
2.9.4



[Piglit] [PATCH v2] framework: Add --ignore-missing option

2017-08-01 Thread Arkadiusz Hiler
Currently, if a test from the provided test list fails to be discovered by
the framework, piglit blows up with an exception.

This is both good - for consistency/early errors - and bad - for
handling some CI/automation scenarios (e.g. autobisecting the tests).

So let's keep the current default, but allow some flexibility with the
new option that reports the missing tests as 'notrun'.

v2:
 - change switch name to 'ignore', as skip is too suggestive
 - use DummyTest to get 'notrun' result instead of warnings

bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99649
Cc: Dylan Baker <dy...@pnwbakers.com>
Cc: Tomi Sarvela <tomi.p.sarv...@intel.com>
Cc: Martin Peres <martin.pe...@intel.com>
Tested-by: Petri Latvala <petri.latv...@intel.com>
Signed-off-by: Arkadiusz Hiler <arkadiusz.hi...@intel.com>
---
 framework/profile.py  | 10 +++---
 framework/programs/run.py |  6 ++
 framework/test/base.py| 12 
 3 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/framework/profile.py b/framework/profile.py
index a625318..bb6c39e 100644
--- a/framework/profile.py
+++ b/framework/profile.py
@@ -42,11 +42,12 @@ import re
 
 import six
 
-from framework import grouptools, exceptions
+from framework import grouptools, exceptions, status
 from framework.dmesg import get_dmesg
 from framework.log import LogManager
 from framework.monitoring import Monitoring
-from framework.test.base import Test
+from framework.test.base import Test, DummyTest
+from framework.options import OPTIONS
 
 __all__ = [
 'RegexFilter',
@@ -314,7 +315,10 @@ class TestProfile(object):
 if self.forced_test_list:
 opts = collections.OrderedDict()
 for n in self.forced_test_list:
-opts[n] = self.test_list[n]
+if OPTIONS.ignore_missing and n not in self.test_list:
+opts[n] = DummyTest(name=n, result=status.NOTRUN)
+else:
+opts[n] = self.test_list[n]
 else:
 opts = self.test_list  # pylint: disable=redefined-variable-type
 
diff --git a/framework/programs/run.py b/framework/programs/run.py
index 6444cfe..d30e6a4 100644
--- a/framework/programs/run.py
+++ b/framework/programs/run.py
@@ -208,6 +208,10 @@ def _run_parser(input_):
  'isolation. This allows, but does not require, '
  'tests to run multiple tests per process. '
  'This value can also be set in piglit.conf.')
+parser.add_argument("--ignore-missing",
+dest="ignore_missing",
+action="store_true",
+help="missing tests are considered as 'notrun'")
 parser.add_argument("test_profile",
 metavar="",
 nargs='+',
@@ -291,6 +295,7 @@ def run(input_):
 options.OPTIONS.sync = args.sync
 options.OPTIONS.deqp_mustpass = args.deqp_mustpass
 options.OPTIONS.process_isolation = args.process_isolation
+options.OPTIONS.ignore_missing = args.ignore_missing
 
 # Set the platform to pass to waffle
 options.OPTIONS.env['PIGLIT_PLATFORM'] = args.platform
@@ -389,6 +394,7 @@ def resume(input_):
 options.OPTIONS.sync = results.options['sync']
 options.OPTIONS.deqp_mustpass = results.options['deqp_mustpass']
 options.OPTIONS.proces_isolation = results.options['process_isolation']
+options.OPTIONS.ignore_missing = results.options['ignore_missing']
 
 core.get_config(args.config_file)
 
diff --git a/framework/test/base.py b/framework/test/base.py
index d73dee9..e74ea3d 100644
--- a/framework/test/base.py
+++ b/framework/test/base.py
@@ -381,6 +381,18 @@ class Test(object):
 return not self == other
 
 
+class DummyTest(Test):
+def __init__(self, name, result):
+super(DummyTest, self).__init__([name])
+self.result.result = result
+
+def execute(self, path, log, options):
+pass
+
+def interpret_result(self):
+pass
+
+
 class WindowResizeMixin(object):
 """ Mixin class that deals with spurious window resizes
 
-- 
2.9.4



Re: [Piglit] [PATCH RFC] framework: Add --skip-missing option

2017-07-31 Thread Arkadiusz Hiler
On Thu, Jul 27, 2017 at 09:54:13AM -0700, Dylan Baker wrote:
> Quoting Arkadiusz Hiler (2017-07-27 02:24:33)
> > > >  
> > > >  __all__ = [
> > > >  'RegexFilter',
> > > > @@ -314,6 +316,9 @@ class TestProfile(object):
> > > >  if self.forced_test_list:
> > > >  opts = collections.OrderedDict()
> > > >  for n in self.forced_test_list:
> > > > +if OPTIONS.skip_missing and n not in self.test_list:
> > > > +warnings.warn("Test {} missing. Skipping.".format(n))
> > > > +continue
> > > >  opts[n] = self.test_list[n]
> > > >  else:
> > > >  opts = self.test_list  # pylint: disable=redefined-variable-type
> > > 
> > > I agree that we really want this to have an explicit notrun or similar 
> > > status.
> > > I'm not sure what the right way to do it is. You could probably create 
> > > some kind
> > > of stub Test instance that just sets a result of NotRun and returns, but 
> > > that
> > > doesn't feel very elegant.
> > 
> > My thoughts exactly - stub Test for a fixed result.
> > 
> > And I agree that it isn't very pretty, but reworking the code that
> > handles Test instances to have the special case there feels even less
> > elegant.
> > 
> > What do you think about something along the lines of
> > StubedTest(result=NOTRUN), so we can use it for testing as well?
> 
> Generally we use mocking for tests, so I don't think we'll need it for 
> testing,
> but I think it's the best solution we've come up with so far.
> 
> > 
> > 
> > > > diff --git a/framework/programs/run.py b/framework/programs/run.py
> > > > index 6444cfe..bf68f31 100644
> > > > --- a/framework/programs/run.py
> > > > +++ b/framework/programs/run.py
> > > > @@ -208,6 +208,10 @@ def _run_parser(input_):
> > > >   'isolation. This allows, but does not require, '
> > > >   'tests to run multiple tests per process. '
> > > >   'This value can also be set in piglit.conf.')
> > > > +parser.add_argument("--skip-missing",
> > > > +dest="skip_missing",
> > > > +action="store_true",
> > > > +help="skip on missing tests instead of failing")
> > > >  parser.add_argument("test_profile",
> > > >  metavar="",
> > > >  nargs='+',
> > > > @@ -291,6 +295,7 @@ def run(input_):
> > > >  options.OPTIONS.sync = args.sync
> > > >  options.OPTIONS.deqp_mustpass = args.deqp_mustpass
> > > >  options.OPTIONS.process_isolation = args.process_isolation
> > > > +options.OPTIONS.skip_missing = args.skip_missing
> > > >  
> > > >  # Set the platform to pass to waffle
> > > >  options.OPTIONS.env['PIGLIT_PLATFORM'] = args.platform
> > > > @@ -389,6 +394,7 @@ def resume(input_):
> > > >  options.OPTIONS.sync = results.options['sync']
> > > >  options.OPTIONS.deqp_mustpass = results.options['deqp_mustpass']
> > > >  options.OPTIONS.proces_isolation = results.options['process_isolation']
> > > > +options.OPTIONS.skip_missing = results.options['skip_missing']
> > > 
> > > As the person who added the options module...
> > > Options was a mis-design from the start. I really don't want to see more 
> > > code
> > > added to options. If there's a way we could pass this some other way I 
> > > would
> > > very much prefer that.
> > 
> > It's kind of hard to not add it, as we have to preserve the "skip
> > missing" value in case of a resume.
> > 
> > I didn't like adding the same name in mutiple places, so I understand
> > what you are saying :-)
> > 
> > I can give reworking options a try. Maybe making it
> > serializes/deserializes itself so we won't have to repeat the field
> > names is the way to go?
> 
> I think we could put it in the TestProfile.options attribute, that would give 
> us
> access where we need it, and we could avoid putting it in options.OPTIONS that
> way. We already enforce that forced_test_list is only used with one profile
> anyway so we wouldn't have to worry about that.
> 
> I think you could add it in the same place that we set the forced_test_list, 
> it
> would still need to be added to the metadata, but since it's an option that's
> easy.

Reworked the patch using a dummy test class. Looks clean and almost
everything works.

Sadly, TestProfile.options are not persisted, i.e. they do not work with
"resume".

Shall I keep the setting in options.OPTIONS, or do you have other
suggestions?

-- 
Cheers,
Arek





Re: [Piglit] [PATCH RFC] framework: Add --skip-missing option

2017-07-27 Thread Arkadiusz Hiler
On Wed, Jul 26, 2017 at 11:56:47AM -0700, Dylan Baker wrote:
> Quoting Arkadiusz Hiler (2017-07-26 00:46:21)
> > Currently, if a test from provided testlist fails to be discovered by
> > the framework, piglit blows up with an exception.
> 
> Thank you for keeping the default behavior, it's important to have the "run
> these and exactly these or fail" option.

Sure, I do understand the importance of that, but as a side note...

What do you think about catching the __container's NoKeyFound exception,
logging a human-readable message (e.g. "test xxx from the forced list not
found in the test suite") and then re-throwing it (or doing a hard exit)?

I find 'key "igt@foo@bar" not found in self.__container' and a
stacktrace rather enigmatic.

That approach should maintain compatibility, and fewer people would query
us about obvious typos in their test lists.
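
Something along these lines, roughly (a standalone sketch with assumed
names, not actual piglit code):

    # Turn the enigmatic KeyError into a readable message before bailing out.
    test_list = {"igt@foo": object()}              # stand-in for self.test_list
    forced_test_list = ["igt@foo", "igt@foo@bar"]  # the second entry is a typo

    for name in forced_test_list:
        try:
            test = test_list[name]
        except KeyError:
            raise SystemExit(
                "Test '{}' from the forced test list was not found "
                "in the test suite".format(name))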

> > 
> > This is both good - for consistency/early errors - and bad - for
> > handling some CI/automation scenarios (e.g autobisecting the tests).
> > 
> > So let's keep the current default, but allow some flexibility with the
> > new option that skips the missing tests with a warning.
> > 
> > bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99649
> > Cc: Tomi Sarvela <tomi.p.sarv...@intel.com>
> > Cc: Petri Latvala <petri.latv...@intel.com>
> > Reviewed-by: Martin Peres <martin.pe...@intel.com>
> > Signed-off-by: Arkadiusz Hiler <arkadiusz.hi...@intel.com>
> > ---
> >  framework/profile.py  | 5 +
> >  framework/programs/run.py | 6 ++
> >  2 files changed, 11 insertions(+)
> > 
> > diff --git a/framework/profile.py b/framework/profile.py
> > index a625318..7a39e3c 100644
> > --- a/framework/profile.py
> > +++ b/framework/profile.py
> > @@ -37,6 +37,7 @@ import importlib
> >  import itertools
> >  import multiprocessing
> >  import multiprocessing.dummy
> > +import warnings
> >  import os
> >  import re
> >  
> > @@ -47,6 +48,7 @@ from framework.dmesg import get_dmesg
> >  from framework.log import LogManager
> >  from framework.monitoring import Monitoring
> >  from framework.test.base import Test
> > +from framework.options import OPTIONS
> >  
> >  __all__ = [
> >  'RegexFilter',
> > @@ -314,6 +316,9 @@ class TestProfile(object):
> >  if self.forced_test_list:
> >  opts = collections.OrderedDict()
> >  for n in self.forced_test_list:
> > +if OPTIONS.skip_missing and n not in self.test_list:
> > +warnings.warn("Test {} missing. Skipping.".format(n))
> > +continue
> >  opts[n] = self.test_list[n]
> >  else:
> >  opts = self.test_list  # pylint: 
> > disable=redefined-variable-type
> 
> I agree that we really want this to have an explicit notrun or similar status.
> I'm not sure what the right way to do it is. You could probably create some 
> kind
> of stub Test instance that just sets a result of NotRun and returns, but that
> doesn't feel very elegant.

My thoughts exactly - stub Test for a fixed result.

And I agree that it isn't very pretty, but reworking the code that
handles Test instances to have the special case there feels even less
elegant.

What do you think about something along the lines of
StubedTest(result=NOTRUN), so we can use it for testing as well?


> > diff --git a/framework/programs/run.py b/framework/programs/run.py
> > index 6444cfe..bf68f31 100644
> > --- a/framework/programs/run.py
> > +++ b/framework/programs/run.py
> > @@ -208,6 +208,10 @@ def _run_parser(input_):
> >   'isolation. This allows, but does not 
> > require, '
> >   'tests to run multiple tests per process. '
> >   'This value can also be set in piglit.conf.')
> > +parser.add_argument("--skip-missing",
> > +dest="skip_missing",
> > +action="store_true",
> > +help="skip on missing tests instead of failing")
> >  parser.add_argument("test_profile",
> >  metavar="",
> >  nargs='+',
> > @@ -291,6 +295,7 @@ def run(input_):
> >  options.OPTIONS.sync = args.sync
> >  options.OPTIONS.deqp_mustpass = args.deqp_mustpass
> >  options.OPTIONS.process_isolation = args.process_isolation
> > +options.OPTIONS.skip_missing = args.skip_missing
> >  

[Piglit] [PATCH RFC] framework: Add --skip-missing option

2017-07-26 Thread Arkadiusz Hiler
Currently, if a test from the provided test list fails to be discovered by
the framework, piglit blows up with an exception.

This is both good - for consistency/early errors - and bad - for
handling some CI/automation scenarios (e.g. autobisecting the tests).

So let's keep the current default, but allow some flexibility with the
new option that skips the missing tests with a warning.

bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99649
Cc: Tomi Sarvela <tomi.p.sarv...@intel.com>
Cc: Petri Latvala <petri.latv...@intel.com>
Reviewed-by: Martin Peres <martin.pe...@intel.com>
Signed-off-by: Arkadiusz Hiler <arkadiusz.hi...@intel.com>
---
 framework/profile.py  | 5 +
 framework/programs/run.py | 6 ++
 2 files changed, 11 insertions(+)

diff --git a/framework/profile.py b/framework/profile.py
index a625318..7a39e3c 100644
--- a/framework/profile.py
+++ b/framework/profile.py
@@ -37,6 +37,7 @@ import importlib
 import itertools
 import multiprocessing
 import multiprocessing.dummy
+import warnings
 import os
 import re
 
@@ -47,6 +48,7 @@ from framework.dmesg import get_dmesg
 from framework.log import LogManager
 from framework.monitoring import Monitoring
 from framework.test.base import Test
+from framework.options import OPTIONS
 
 __all__ = [
 'RegexFilter',
@@ -314,6 +316,9 @@ class TestProfile(object):
 if self.forced_test_list:
 opts = collections.OrderedDict()
 for n in self.forced_test_list:
+if OPTIONS.skip_missing and n not in self.test_list:
+warnings.warn("Test {} missing. Skipping.".format(n))
+continue
 opts[n] = self.test_list[n]
 else:
 opts = self.test_list  # pylint: disable=redefined-variable-type
diff --git a/framework/programs/run.py b/framework/programs/run.py
index 6444cfe..bf68f31 100644
--- a/framework/programs/run.py
+++ b/framework/programs/run.py
@@ -208,6 +208,10 @@ def _run_parser(input_):
  'isolation. This allows, but does not require, '
  'tests to run multiple tests per process. '
  'This value can also be set in piglit.conf.')
+parser.add_argument("--skip-missing",
+dest="skip_missing",
+action="store_true",
+help="skip on missing tests instead of failing")
 parser.add_argument("test_profile",
 metavar="",
 nargs='+',
@@ -291,6 +295,7 @@ def run(input_):
 options.OPTIONS.sync = args.sync
 options.OPTIONS.deqp_mustpass = args.deqp_mustpass
 options.OPTIONS.process_isolation = args.process_isolation
+options.OPTIONS.skip_missing = args.skip_missing
 
 # Set the platform to pass to waffle
 options.OPTIONS.env['PIGLIT_PLATFORM'] = args.platform
@@ -389,6 +394,7 @@ def resume(input_):
 options.OPTIONS.sync = results.options['sync']
 options.OPTIONS.deqp_mustpass = results.options['deqp_mustpass']
 options.OPTIONS.proces_isolation = results.options['process_isolation']
+options.OPTIONS.skip_missing = results.options['skip_missing']
 
 core.get_config(args.config_file)
 
-- 
2.9.4



[Piglit] [PATCH RFC] Handling missing tests

2017-07-26 Thread Arkadiusz Hiler
Hey all,

This is my initial (and simple) attempt at adding a way to handle missing tests
gracefully for automated CI/bisecting/etc. scenarios.


The change is okay(-ish) on its own, but we can make the missing-tests case more
of a first-class citizen, e.g.:

 1) actually report the missing tests as notrun, instead of ignoring them with
a warning

 2) add another possible result state - missing, so it would be more obvious

But those would require much more work and code juggling, so I would like to
know your opinions first.

Any thoughts?


Arkadiusz Hiler (1):
  framework: Add --skip-missing option

 framework/profile.py  | 5 +
 framework/programs/run.py | 6 ++
 2 files changed, 11 insertions(+)

-- 
2.9.4
