On Wed, Jul 26, 2017 at 11:56:47AM -0700, Dylan Baker wrote:
> Quoting Arkadiusz Hiler (2017-07-26 00:46:21)
> > Currently, if a test from provided testlist fails to be discovered by
> > the framework, piglit blows up with an exception.
> 
> Thank you for keeping the default behavior, it's important to have the "run
> these and exactly these or fail" option.

Sure, I do understand the importance of that, but as a side note...

What do you think about catching the __container's KeyError exception,
logging a human-readable message (e.g. "test xxx from the forced list
not found in the test suite") and then re-throwing it (or doing a hard
exit)?

I find 'key "igt@foo@bar" not found in self.__container' and a
stacktrace rather enigmatic.

That approach would maintain compatibility, and fewer people would ask
us about obvious typos in their testlists.
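
Something along these lines, perhaps (just a sketch, not actual piglit
code - it assumes the failing lookup raises a plain KeyError, and
lookup_forced_test() is a made-up helper name):

    import sys

    def lookup_forced_test(test_list, name):
        # Hypothetical wrapper: turn the enigmatic KeyError into a
        # human-readable message before propagating it.
        try:
            return test_list[name]
        except KeyError:
            sys.stderr.write('test "{}" from the forced list not found '
                             'in the test suite\n'.format(name))
            raise  # or sys.exit(1) for a hard exit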

> > 
> > This is both good - for consistency/early errors - and bad - for
> > handling some CI/automation scenarios (e.g autobisecting the tests).
> > 
> > So let's keep the current default, but allow some flexibility with the
> > new option that skips the missing tests with a warning.
> > 
> > bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99649
> > Cc: Tomi Sarvela <tomi.p.sarv...@intel.com>
> > Cc: Petri Latvala <petri.latv...@intel.com>
> > Reviewed-by: Martin Peres <martin.pe...@intel.com>
> > Signed-off-by: Arkadiusz Hiler <arkadiusz.hi...@intel.com>
> > ---
> >  framework/profile.py      | 5 +++++
> >  framework/programs/run.py | 6 ++++++
> >  2 files changed, 11 insertions(+)
> > 
> > diff --git a/framework/profile.py b/framework/profile.py
> > index a625318..7a39e3c 100644
> > --- a/framework/profile.py
> > +++ b/framework/profile.py
> > @@ -37,6 +37,7 @@ import importlib
> >  import itertools
> >  import multiprocessing
> >  import multiprocessing.dummy
> > +import warnings
> >  import os
> >  import re
> >  
> > @@ -47,6 +48,7 @@ from framework.dmesg import get_dmesg
> >  from framework.log import LogManager
> >  from framework.monitoring import Monitoring
> >  from framework.test.base import Test
> > +from framework.options import OPTIONS
> >  
> >  __all__ = [
> >      'RegexFilter',
> > @@ -314,6 +316,9 @@ class TestProfile(object):
> >          if self.forced_test_list:
> >              opts = collections.OrderedDict()
> >              for n in self.forced_test_list:
> > +                if OPTIONS.skip_missing and n not in self.test_list:
> > +                    warnings.warn("Test {} missing. Skipping.".format(n))
> > +                    continue
> >                  opts[n] = self.test_list[n]
> >          else:
> >              opts = self.test_list  # pylint: disable=redefined-variable-type
> 
> I agree that we really want this to have an explicit notrun or similar status.
> I'm not sure what the right way to do it is. You could probably create some
> kind of stub Test instance that just sets a result of NotRun and returns,
> but that doesn't feel very elegant.

My thoughts exactly - a stub Test with a fixed result.

And I agree that it isn't very pretty, but reworking the code that
handles Test instances to have the special case there feels even less
elegant.

What do you think about something along the lines of
StubbedTest(result=NOTRUN), so we can use it for testing as well?
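
Rough sketch of what I mean (hand-waving: it assumes Test's constructor
takes a command list and that the result can be pre-set; the class name
and the default argument are made up):

    from framework import status
    from framework.test.base import Test

    class StubbedTest(Test):
        """A Test that never runs anything and reports a fixed result."""

        def __init__(self, name, result=status.NOTRUN):
            super(StubbedTest, self).__init__([name])
            self.result.result = result

        def execute(self, path, log, options):
            # Nothing to execute; the fixed result was set in __init__.
            pass

        def interpret_result(self):
            pass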


> > diff --git a/framework/programs/run.py b/framework/programs/run.py
> > index 6444cfe..bf68f31 100644
> > --- a/framework/programs/run.py
> > +++ b/framework/programs/run.py
> > @@ -208,6 +208,10 @@ def _run_parser(input_):
> >                               'isolation. This allows, but does not require, '
> >                               'tests to run multiple tests per process. '
> >                               'This value can also be set in piglit.conf.')
> > +    parser.add_argument("--skip-missing",
> > +                        dest="skip_missing",
> > +                        action="store_true",
> > +                        help="skip on missing tests instead of failing")
> >      parser.add_argument("test_profile",
> >                          metavar="<Profile path(s)>",
> >                          nargs='+',
> > @@ -291,6 +295,7 @@ def run(input_):
> >      options.OPTIONS.sync = args.sync
> >      options.OPTIONS.deqp_mustpass = args.deqp_mustpass
> >      options.OPTIONS.process_isolation = args.process_isolation
> > +    options.OPTIONS.skip_missing = args.skip_missing
> >  
> >      # Set the platform to pass to waffle
> >      options.OPTIONS.env['PIGLIT_PLATFORM'] = args.platform
> > @@ -389,6 +394,7 @@ def resume(input_):
> >      options.OPTIONS.sync = results.options['sync']
> >      options.OPTIONS.deqp_mustpass = results.options['deqp_mustpass']
> >      options.OPTIONS.proces_isolation = results.options['process_isolation']
> > +    options.OPTIONS.skip_missing = results.options['skip_missing']
> 
> As the person who added the options module...
> Options was a mis-design from the start. I really don't want to see more code
> added to options. If there's a way we could pass this some other way I would
> very much prefer that.

It's kind of hard not to add it, as we have to preserve the "skip
missing" value in case of a resume.

I didn't like adding the same name in multiple places, so I understand
what you are saying :-)

I can give reworking options a try. Maybe making it serialize and
deserialize itself, so we won't have to repeat the field names, is the
way to go?
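
Something in this direction, maybe (very rough; it assumes the options
object stays a flat namespace of JSON-friendly values, and the method
names are invented):

    import json

    class Options(object):
        """Options that can round-trip themselves, so run() and
        resume() don't have to spell out every field name."""

        def serialize(self):
            # Dump all fields in one go instead of listing them.
            return json.dumps(self.__dict__)

        def deserialize(self, data):
            self.__dict__.update(json.loads(data))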

-- 
Cheers,
Arek