Re: [Piglit] [PATCH RFC] framework: Add --skip-missing option

2017-08-11 Thread Dylan Baker


Re: [Piglit] [PATCH RFC] framework: Add --skip-missing option

2017-07-31 Thread Arkadiusz Hiler
On Thu, Jul 27, 2017 at 09:54:13AM -0700, Dylan Baker wrote:
> Quoting Arkadiusz Hiler (2017-07-27 02:24:33)
> > > >  
> > > >  __all__ = [
> > > >  'RegexFilter',
> > > > @@ -314,6 +316,9 @@ class TestProfile(object):
> > > >  if self.forced_test_list:
> > > >  opts = collections.OrderedDict()
> > > >  for n in self.forced_test_list:
> > > > +if OPTIONS.skip_missing and n not in self.test_list:
> > > > +warnings.warn("Test {} missing. Skipping.".format(n))
> > > > +continue
> > > >  opts[n] = self.test_list[n]
> > > >  else:
> > > >  opts = self.test_list  # pylint: disable=redefined-variable-type
> > > 
> > > I agree that we really want this to have an explicit notrun or similar status.
> > > I'm not sure what the right way to do it is. You could probably create some kind
> > > of stub Test instance that just sets a result of NotRun and returns, but that
> > > doesn't feel very elegant.
> > 
> > My thoughts exactly - stub Test for a fixed result.
> > 
> > And I agree that it isn't very pretty, but reworking the code that
> > handles Test instances to have the special case there feels even less
> > elegant.
> > 
> > What do you think about something along the lines of
> > StubbedTest(result=NOTRUN), so we can use it for testing as well?
> 
> Generally we use mocking for tests, so I don't think we'll need it for testing,
> but I think it's the best solution we've come up with so far.
> 
> > 
> > 
> > > > diff --git a/framework/programs/run.py b/framework/programs/run.py
> > > > index 6444cfe..bf68f31 100644
> > > > --- a/framework/programs/run.py
> > > > +++ b/framework/programs/run.py
> > > > @@ -208,6 +208,10 @@ def _run_parser(input_):
> > > >   'isolation. This allows, but does not require, '
> > > >   'tests to run multiple tests per process. '
> > > >   'This value can also be set in piglit.conf.')
> > > > +parser.add_argument("--skip-missing",
> > > > +dest="skip_missing",
> > > > +action="store_true",
> > > > +help="skip on missing tests instead of failing")
> > > >  parser.add_argument("test_profile",
> > > >  metavar="",
> > > >  nargs='+',
> > > > @@ -291,6 +295,7 @@ def run(input_):
> > > >  options.OPTIONS.sync = args.sync
> > > >  options.OPTIONS.deqp_mustpass = args.deqp_mustpass
> > > >  options.OPTIONS.process_isolation = args.process_isolation
> > > > +options.OPTIONS.skip_missing = args.skip_missing
> > > >  
> > > >  # Set the platform to pass to waffle
> > > >  options.OPTIONS.env['PIGLIT_PLATFORM'] = args.platform
> > > > @@ -389,6 +394,7 @@ def resume(input_):
> > > >  options.OPTIONS.sync = results.options['sync']
> > > >  options.OPTIONS.deqp_mustpass = results.options['deqp_mustpass']
> > > >  options.OPTIONS.proces_isolation = results.options['process_isolation']
> > > > +options.OPTIONS.skip_missing = results.options['skip_missing']
> > > 
> > > As the person who added the options module...
> > > Options was a mis-design from the start. I really don't want to see more code
> > > added to options. If there's a way we could pass this some other way I would
> > > very much prefer that.
> > 
> > It's kind of hard to not add it, as we have to preserve the "skip
> > missing" value in case of a resume.
> > 
> > I didn't like adding the same name in multiple places, so I understand
> > what you are saying :-)
> > 
> > I can give reworking options a try. Maybe making it
> > serializes/deserializes itself so we won't have to repeat the field
> > names is the way to go?
> 
> I think we could put it in the TestProfile.options attribute, that would give us
> access where we need it, and we could avoid putting it in options.OPTIONS that
> way. We already enforce that forced_test_list is only used with one profile
> anyway so we wouldn't have to worry about that.
>
> I think you could add it in the same place that we set the forced_test_list, it
> would still need to be added to the metadata, but since it's an option that's
> easy.

Reworked the patch using a dummy test class. Looks clean and almost
everything works.

Sadly TestProfile.options are not persisted, i.e. they do not work with
"resume".

Shall I keep the setting in options.OPTIONS, or do you have other
suggestions?

-- 
Cheers,
Arek



___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH RFC] framework: Add --skip-missing option

2017-07-27 Thread Dylan Baker
Quoting Arkadiusz Hiler (2017-07-27 02:24:33)
> > >  
> > >  __all__ = [
> > >  'RegexFilter',
> > > @@ -314,6 +316,9 @@ class TestProfile(object):
> > >  if self.forced_test_list:
> > >  opts = collections.OrderedDict()
> > >  for n in self.forced_test_list:
> > > +if OPTIONS.skip_missing and n not in self.test_list:
> > > +warnings.warn("Test {} missing. Skipping.".format(n))
> > > +continue
> > >  opts[n] = self.test_list[n]
> > >  else:
> > >  opts = self.test_list  # pylint: disable=redefined-variable-type
> > 
> > I agree that we really want this to have an explicit notrun or similar status.
> > I'm not sure what the right way to do it is. You could probably create some kind
> > of stub Test instance that just sets a result of NotRun and returns, but that
> > doesn't feel very elegant.
> 
> My thoughts exactly - stub Test for a fixed result.
> 
> And I agree that it isn't very pretty, but reworking the code that
> handles Test instances to have the special case there feels even less
> elegant.
> 
> What do you think about something along the lines of
> StubbedTest(result=NOTRUN), so we can use it for testing as well?

Generally we use mocking for tests, so I don't think we'll need it for testing,
but I think it's the best solution we've come up with so far.

> 
> 
> > > diff --git a/framework/programs/run.py b/framework/programs/run.py
> > > index 6444cfe..bf68f31 100644
> > > --- a/framework/programs/run.py
> > > +++ b/framework/programs/run.py
> > > @@ -208,6 +208,10 @@ def _run_parser(input_):
> > >   'isolation. This allows, but does not require, '
> > >   'tests to run multiple tests per process. '
> > >   'This value can also be set in piglit.conf.')
> > > +parser.add_argument("--skip-missing",
> > > +dest="skip_missing",
> > > +action="store_true",
> > > +help="skip on missing tests instead of failing")
> > >  parser.add_argument("test_profile",
> > >  metavar="",
> > >  nargs='+',
> > > @@ -291,6 +295,7 @@ def run(input_):
> > >  options.OPTIONS.sync = args.sync
> > >  options.OPTIONS.deqp_mustpass = args.deqp_mustpass
> > >  options.OPTIONS.process_isolation = args.process_isolation
> > > +options.OPTIONS.skip_missing = args.skip_missing
> > >  
> > >  # Set the platform to pass to waffle
> > >  options.OPTIONS.env['PIGLIT_PLATFORM'] = args.platform
> > > @@ -389,6 +394,7 @@ def resume(input_):
> > >  options.OPTIONS.sync = results.options['sync']
> > >  options.OPTIONS.deqp_mustpass = results.options['deqp_mustpass']
> > >  options.OPTIONS.proces_isolation = results.options['process_isolation']
> > > +options.OPTIONS.skip_missing = results.options['skip_missing']
> > 
> > As the person who added the options module...
> > Options was a mis-design from the start. I really don't want to see more code
> > added to options. If there's a way we could pass this some other way I would
> > very much prefer that.
> 
> It's kind of hard to not add it, as we have to preserve the "skip
> missing" value in case of a resume.
> 
> > I didn't like adding the same name in multiple places, so I understand
> what you are saying :-)
> 
> I can give reworking options a try. Maybe making it
> serializes/deserializes itself so we won't have to repeat the field
> names is the way to go?
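The self-serializing idea quoted above could look roughly like this sketch. The class and field names here are hypothetical, not piglit's actual options module; the point is only that the field names appear once, in `__init__`:

```python
class Options(object):
    """Sketch of a self-serializing options object (hypothetical fields)."""

    def __init__(self):
        self.sync = False
        self.deqp_mustpass = False
        self.process_isolation = True
        self.skip_missing = False

    def to_dict(self):
        # Persist every public attribute, so nothing is repeated per field.
        return {k: v for k, v in vars(self).items() if not k.startswith('_')}

    @classmethod
    def from_dict(cls, data):
        # Rebuild an instance from previously persisted data, e.g. on resume.
        opts = cls()
        for k, v in data.items():
            setattr(opts, k, v)
        return opts
```

With something like this, `resume` could restore every option in one call instead of one assignment per field.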

I think we could put it in the TestProfile.options attribute, that would give us
access where we need it, and we could avoid putting it in options.OPTIONS that
way. We already enforce that forced_test_list is only used with one profile
anyway so we wouldn't have to worry about that.

I think you could add it in the same place that we set the forced_test_list, it
would still need to be added to the metadata, but since it's an option that's
easy.

Dylan




Re: [Piglit] [PATCH RFC] framework: Add --skip-missing option

2017-07-27 Thread Arkadiusz Hiler
On Wed, Jul 26, 2017 at 11:56:47AM -0700, Dylan Baker wrote:
> Quoting Arkadiusz Hiler (2017-07-26 00:46:21)
> > Currently, if a test from provided testlist fails to be discovered by
> > the framework, piglit blows up with an exception.
> 
> Thank you for keeping the default behavior, it's important to have the "run
> these and exactly these or fail" option.

Sure, I do understand the importance of that, but as a side note...

What do you think about catching the __container's NoKeyFound exception,
logging a human-readable message (e.g. "test xxx from the forced list not
found in the test suite") and then re-throwing it (or doing a hard exit)?

I find 'key "igt@foo@bar" not found in self.__container' and a
stacktrace rather enigmatic.

That approach would maintain compatibility, and fewer people would ask us
about obvious typos in their testlists.
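Wrapping the lookup could be as small as this standalone sketch, with a plain dict standing in for the profile's container (names here are illustrative, not piglit's actual code):

```python
# Stand-in for the profile's test container.
test_list = {'igt@foo': 'a test object'}

def lookup(name):
    """Fetch a forced-list entry, failing with a readable message."""
    try:
        return test_list[name]
    except KeyError:
        # Re-throw with context so a typo in the testlist is obvious,
        # while keeping the default fail-hard behavior intact.
        raise KeyError('test "{}" from the forced test list was not found '
                       'in the test suite'.format(name))
```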

> > 
> > This is both good - for consistency/early errors - and bad - for
> > handling some CI/automation scenarios (e.g autobisecting the tests).
> > 
> > So let's keep the current default, but allow some flexibility with the
> > new option that skips the missing tests with a warning.
> > 
> > bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99649
> > Cc: Tomi Sarvela 
> > Cc: Petri Latvala 
> > Reviewed-by: Martin Peres 
> > Signed-off-by: Arkadiusz Hiler 
> > ---
> >  framework/profile.py  | 5 +
> >  framework/programs/run.py | 6 ++
> >  2 files changed, 11 insertions(+)
> > 
> > diff --git a/framework/profile.py b/framework/profile.py
> > index a625318..7a39e3c 100644
> > --- a/framework/profile.py
> > +++ b/framework/profile.py
> > @@ -37,6 +37,7 @@ import importlib
> >  import itertools
> >  import multiprocessing
> >  import multiprocessing.dummy
> > +import warnings
> >  import os
> >  import re
> >  
> > @@ -47,6 +48,7 @@ from framework.dmesg import get_dmesg
> >  from framework.log import LogManager
> >  from framework.monitoring import Monitoring
> >  from framework.test.base import Test
> > +from framework.options import OPTIONS
> >  
> >  __all__ = [
> >  'RegexFilter',
> > @@ -314,6 +316,9 @@ class TestProfile(object):
> >  if self.forced_test_list:
> >  opts = collections.OrderedDict()
> >  for n in self.forced_test_list:
> > +if OPTIONS.skip_missing and n not in self.test_list:
> > +warnings.warn("Test {} missing. Skipping.".format(n))
> > +continue
> >  opts[n] = self.test_list[n]
> >  else:
> >  opts = self.test_list  # pylint: 
> > disable=redefined-variable-type
> 
> I agree that we really want this to have an explicit notrun or similar status.
> I'm not sure what the right way to do it is. You could probably create some kind
> of stub Test instance that just sets a result of NotRun and returns, but that
> doesn't feel very elegant.

My thoughts exactly - stub Test for a fixed result.

And I agree that it isn't very pretty, but reworking the code that
handles Test instances to have the special case there feels even less
elegant.

What do you think about something along the lines of
StubbedTest(result=NOTRUN), so we can use it for testing as well?


> > diff --git a/framework/programs/run.py b/framework/programs/run.py
> > index 6444cfe..bf68f31 100644
> > --- a/framework/programs/run.py
> > +++ b/framework/programs/run.py
> > @@ -208,6 +208,10 @@ def _run_parser(input_):
> >   'isolation. This allows, but does not require, '
> >   'tests to run multiple tests per process. '
> >   'This value can also be set in piglit.conf.')
> > +parser.add_argument("--skip-missing",
> > +dest="skip_missing",
> > +action="store_true",
> > +help="skip on missing tests instead of failing")
> >  parser.add_argument("test_profile",
> >  metavar="",
> >  nargs='+',
> > @@ -291,6 +295,7 @@ def run(input_):
> >  options.OPTIONS.sync = args.sync
> >  options.OPTIONS.deqp_mustpass = args.deqp_mustpass
> >  options.OPTIONS.process_isolation = args.process_isolation
> > +options.OPTIONS.skip_missing = args.skip_missing
> >  
> >  # Set the platform to pass to waffle
> >  options.OPTIONS.env['PIGLIT_PLATFORM'] = args.platform
> > @@ -389,6 +394,7 @@ def resume(input_):
> >  options.OPTIONS.sync = results.options['sync']
> >  options.OPTIONS.deqp_mustpass = results.options['deqp_mustpass']
> >  options.OPTIONS.proces_isolation = results.options['process_isolation']
> > +options.OPTIONS.skip_missing = results.options['skip_missing']
> 
> As the person who added the options module...
> Options was a mis-design from the start. I really don't want to see more code
> added to options. If there's a way we could pass this some other way I would
> very much prefer that.

Re: [Piglit] [PATCH RFC] framework: Add --skip-missing option

2017-07-26 Thread Dylan Baker
Quoting Arkadiusz Hiler (2017-07-26 00:46:21)
> Currently, if a test from provided testlist fails to be discovered by
> the framework, piglit blows up with an exception.

Thank you for keeping the default behavior, it's important to have the "run
these and exactly these or fail" option.

> 
> This is both good - for consistency/early errors - and bad - for
> handling some CI/automation scenarios (e.g autobisecting the tests).
> 
> So let's keep the current default, but allow some flexibility with the
> new option that skips the missing tests with a warning.
> 
> bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99649
> Cc: Tomi Sarvela 
> Cc: Petri Latvala 
> Reviewed-by: Martin Peres 
> Signed-off-by: Arkadiusz Hiler 
> ---
>  framework/profile.py  | 5 +
>  framework/programs/run.py | 6 ++
>  2 files changed, 11 insertions(+)
> 
> diff --git a/framework/profile.py b/framework/profile.py
> index a625318..7a39e3c 100644
> --- a/framework/profile.py
> +++ b/framework/profile.py
> @@ -37,6 +37,7 @@ import importlib
>  import itertools
>  import multiprocessing
>  import multiprocessing.dummy
> +import warnings
>  import os
>  import re
>  
> @@ -47,6 +48,7 @@ from framework.dmesg import get_dmesg
>  from framework.log import LogManager
>  from framework.monitoring import Monitoring
>  from framework.test.base import Test
> +from framework.options import OPTIONS
>  
>  __all__ = [
>  'RegexFilter',
> @@ -314,6 +316,9 @@ class TestProfile(object):
>  if self.forced_test_list:
>  opts = collections.OrderedDict()
>  for n in self.forced_test_list:
> +if OPTIONS.skip_missing and n not in self.test_list:
> +warnings.warn("Test {} missing. Skipping.".format(n))
> +continue
>  opts[n] = self.test_list[n]
>  else:
>  opts = self.test_list  # pylint: disable=redefined-variable-type

I agree that we really want this to have an explicit notrun or similar status.
I'm not sure what the right way to do it is. You could probably create some kind
of stub Test instance that just sets a result of NotRun and returns, but that
doesn't feel very elegant.
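A minimal sketch of such a stub might look like this. The names are hypothetical stand-ins; piglit's real Test base class and TestResult carry much more machinery:

```python
class TestResult(object):
    """Minimal stand-in for piglit's result object."""
    def __init__(self, result):
        self.result = result

class DummyTest(object):
    """Stub test that reports a fixed result without running anything."""
    def __init__(self, name, result='notrun'):
        self.name = name
        self._result = TestResult(result)

    def execute(self):
        # No process is spawned; returning the fixed result is the point.
        return self._result
```

A missing entry in the forced test list could then map to `DummyTest(name)` instead of being dropped, so it still shows up as notrun in the results.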

> diff --git a/framework/programs/run.py b/framework/programs/run.py
> index 6444cfe..bf68f31 100644
> --- a/framework/programs/run.py
> +++ b/framework/programs/run.py
> @@ -208,6 +208,10 @@ def _run_parser(input_):
>   'isolation. This allows, but does not require, '
>   'tests to run multiple tests per process. '
>   'This value can also be set in piglit.conf.')
> +parser.add_argument("--skip-missing",
> +dest="skip_missing",
> +action="store_true",
> +help="skip on missing tests instead of failing")
>  parser.add_argument("test_profile",
>  metavar="",
>  nargs='+',
> @@ -291,6 +295,7 @@ def run(input_):
>  options.OPTIONS.sync = args.sync
>  options.OPTIONS.deqp_mustpass = args.deqp_mustpass
>  options.OPTIONS.process_isolation = args.process_isolation
> +options.OPTIONS.skip_missing = args.skip_missing
>  
>  # Set the platform to pass to waffle
>  options.OPTIONS.env['PIGLIT_PLATFORM'] = args.platform
> @@ -389,6 +394,7 @@ def resume(input_):
>  options.OPTIONS.sync = results.options['sync']
>  options.OPTIONS.deqp_mustpass = results.options['deqp_mustpass']
>  options.OPTIONS.proces_isolation = results.options['process_isolation']
> +options.OPTIONS.skip_missing = results.options['skip_missing']

As the person who added the options module...
Options was a mis-design from the start. I really don't want to see more code
added to options. If there's a way we could pass this some other way I would
very much prefer that.

>  
>  core.get_config(args.config_file)
>  
> -- 
> 2.9.4
> 
> ___
> Piglit mailing list
> Piglit@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/piglit




[Piglit] [PATCH RFC] framework: Add --skip-missing option

2017-07-26 Thread Arkadiusz Hiler
Currently, if a test from provided testlist fails to be discovered by
the framework, piglit blows up with an exception.

This is both good - for consistency/early errors - and bad - for
handling some CI/automation scenarios (e.g autobisecting the tests).

So let's keep the current default, but allow some flexibility with the
new option that skips the missing tests with a warning.

bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99649
Cc: Tomi Sarvela 
Cc: Petri Latvala 
Reviewed-by: Martin Peres 
Signed-off-by: Arkadiusz Hiler 
---
 framework/profile.py  | 5 +
 framework/programs/run.py | 6 ++
 2 files changed, 11 insertions(+)

diff --git a/framework/profile.py b/framework/profile.py
index a625318..7a39e3c 100644
--- a/framework/profile.py
+++ b/framework/profile.py
@@ -37,6 +37,7 @@ import importlib
 import itertools
 import multiprocessing
 import multiprocessing.dummy
+import warnings
 import os
 import re
 
@@ -47,6 +48,7 @@ from framework.dmesg import get_dmesg
 from framework.log import LogManager
 from framework.monitoring import Monitoring
 from framework.test.base import Test
+from framework.options import OPTIONS
 
 __all__ = [
 'RegexFilter',
@@ -314,6 +316,9 @@ class TestProfile(object):
 if self.forced_test_list:
 opts = collections.OrderedDict()
 for n in self.forced_test_list:
+if OPTIONS.skip_missing and n not in self.test_list:
+warnings.warn("Test {} missing. Skipping.".format(n))
+continue
 opts[n] = self.test_list[n]
 else:
 opts = self.test_list  # pylint: disable=redefined-variable-type
diff --git a/framework/programs/run.py b/framework/programs/run.py
index 6444cfe..bf68f31 100644
--- a/framework/programs/run.py
+++ b/framework/programs/run.py
@@ -208,6 +208,10 @@ def _run_parser(input_):
  'isolation. This allows, but does not require, '
  'tests to run multiple tests per process. '
  'This value can also be set in piglit.conf.')
+parser.add_argument("--skip-missing",
+dest="skip_missing",
+action="store_true",
+help="skip on missing tests instead of failing")
 parser.add_argument("test_profile",
 metavar="",
 nargs='+',
@@ -291,6 +295,7 @@ def run(input_):
 options.OPTIONS.sync = args.sync
 options.OPTIONS.deqp_mustpass = args.deqp_mustpass
 options.OPTIONS.process_isolation = args.process_isolation
+options.OPTIONS.skip_missing = args.skip_missing
 
 # Set the platform to pass to waffle
 options.OPTIONS.env['PIGLIT_PLATFORM'] = args.platform
@@ -389,6 +394,7 @@ def resume(input_):
 options.OPTIONS.sync = results.options['sync']
 options.OPTIONS.deqp_mustpass = results.options['deqp_mustpass']
 options.OPTIONS.proces_isolation = results.options['process_isolation']
+options.OPTIONS.skip_missing = results.options['skip_missing']
 
 core.get_config(args.config_file)
 
-- 
2.9.4
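For reference, the behavior the patch adds can be reproduced in isolation with argparse and the same OrderedDict filter. This is a self-contained sketch mirroring the patched loop, not piglit's actual code:

```python
import argparse
import collections
import warnings

parser = argparse.ArgumentParser()
parser.add_argument("--skip-missing", dest="skip_missing",
                    action="store_true",
                    help="skip on missing tests instead of failing")

def prepare_test_list(forced_test_list, test_list, skip_missing):
    """Mirror of the patched filtering loop, usable standalone."""
    opts = collections.OrderedDict()
    for n in forced_test_list:
        if skip_missing and n not in test_list:
            warnings.warn("Test {} missing. Skipping.".format(n))
            continue
        opts[n] = test_list[n]  # KeyError here preserves the old behavior
    return opts
```

Without `--skip-missing` a typo in the testlist still raises KeyError, exactly as before; with it, the missing name is warned about and dropped.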
