Re: [Piglit] [PATCH] framework: Handle tests with subtests crashing in totals

2018-01-19 Thread Jan Vesely
On Fri, 2018-01-19 at 15:26 -0800, Dylan Baker wrote:
> I meant to CC Fabian as well...

I think you did; at least the email looks correct.

I'm not sure how to test this change. After reverting Fabian's patch,
things went back to normal. The tests on Jan 17 and 18 ran with Fabian's
change; the rest ran without it [0].

thanks,
Jan


[0] http://paul.rutgers.edu/~jv356/piglit/radeon-latest-5/problems.html

> 
> Quoting Dylan Baker (2018-01-19 14:04:13)
> > Currently piglit doesn't account for a test with subtests crashing when
> > it calculates the total number of tests of each status. The result is
> > that if a test with subtests runs no subtests before crashing, it is
> > handled correctly (since it goes down the non-subtest path), but if one
> > or more subtests are run, and those subtests return a better result than
> > crash, then the test will be marked as that status instead.
> > 
> > The real problem is that the python framework has no idea how many
> > subtests a test binary is going to run, so if the test crashes it has
> > no idea whether some subtests weren't run. To paper over that, if the
> > result of a test is not the same as the worst result of its subtests,
> > we'll treat the test as a single test rather than a group; this results
> > in the summaries generating the expected results.
> > 
> > A better fix would be to have tests with subtests inform the framework
> > (preferably via JSON) of all of the subtests they will run before they
> > start running, so that the python framework can pre-populate the
> > subtests and generate the right result.
> > 
> > This solution is better in the short term because it makes the results
> > consistent: whether or not a test crashes, it will produce the same
> > results.
> > 
> > Signed-off-by: Dylan Baker 
> > ---
> >  framework/results.py | 7 ++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> > 
> > diff --git a/framework/results.py b/framework/results.py
> > index 99dd3735b..4c7266208 100644
> > --- a/framework/results.py
> > +++ b/framework/results.py
> > @@ -329,7 +329,12 @@ class TestrunResult(object):
> >  for name, result in six.iteritems(self.tests):
> >  # If there are subtests treat the test as if it is a group instead
> >  # of a test.
> > -if result.subtests:
> > +# FIXME: If the overall test crashed, then don't treat it as a
> > +# group; ignore all of the subtests and report that the test
> > +# crashed. This is just papering over the fact that the binaries
> > +# don't inform the python layer how many subtests (and their
> > +# names) it wants to run.
> > +if result.subtests and result.result == max(six.itervalues(result.subtests)):
> >  for res in six.itervalues(result.subtests):
> >  res = str(res)
> >  temp = name
> > -- 
> > 2.15.1
> > 

-- 
Jan Vesely 

___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [RFC 1/9] util: Add a function for enumerating subtests

2018-01-19 Thread Dylan Baker
I meant to add a cover letter to this; also, there are only 8 patches.

This is an RFC; I'm looking for input on fixing the subtest reporting problem.
This approach is going to be more complete, but it's also going to be pretty
involved. If anyone has a better approach, I'd be happy to hear it.
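
For readers following along, the proposed protocol amounts to one extra line
on stdout per test binary. A minimal sketch of the round trip in Python (the
"PIGLIT: " prefix and the "enumerate subtests" key come from the patches in
this series; the helper name here is hypothetical):

```python
import json

def enumerate_subtests_line(names):
    # What the C helper would print before any subtest runs.
    return "PIGLIT: " + json.dumps({"enumerate subtests": list(names)})

line = enumerate_subtests_line(["test_sanity", "test_draw_pixels"])
# The python framework strips the "PIGLIT: " prefix and parses the rest:
payload = json.loads(line[len("PIGLIT: "):])
assert payload["enumerate subtests"] == ["test_sanity", "test_draw_pixels"]
```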

Quoting Dylan Baker (2018-01-19 16:25:56)
> This function takes one or more subtest names as strings and prints a JSON
> structure that the python framework can consume.
> 
> Signed-off-by: Dylan Baker 
> ---
>  tests/util/piglit-util.c | 29 +
>  tests/util/piglit-util.h |  2 ++
>  2 files changed, 31 insertions(+)
> 
> diff --git a/tests/util/piglit-util.c b/tests/util/piglit-util.c
> index e33d055..46b4b75 100644
> --- a/tests/util/piglit-util.c
> +++ b/tests/util/piglit-util.c
> @@ -290,6 +290,35 @@ piglit_report_subtest_result(enum piglit_result result, const char *format, ...)
> va_end(ap);
>  }
>  
> +void
> +piglit_enumerate_subtests(int num_args, const char *name, ...)
> +{
> +   va_list ap;
> +
> +   va_start(ap, name);
> +
> +   printf("PIGLIT: {\"enumerate subtests\": [\"%s\"", name);
> +   for (int i = 1; i < num_args; i++) {
> +   printf(", \"%s\"", va_arg(ap, const char *));
> +   }
> +   printf("]}\n");
> +   fflush(stdout);
> +
> +   va_end(ap);
> +}
> +
> +void
> +piglit_enumerate_subtest_list(int length, const char *names[])
> +{
> +   assert(length > 0);
> +   printf("PIGLIT: {\"enumerate subtests\": [\"%s\"", names[0]);
> +   for (int i = 1; i < length; i++) {
> +   printf(", \"%s\"", names[i]);
> +   }
> +   printf("]}\n");
> +   fflush(stdout);
> +}
> +
>  
>  static void
>  piglit_disable_error_message_boxes(void)
> diff --git a/tests/util/piglit-util.h b/tests/util/piglit-util.h
> index 3757f86..c5adb11 100644
> --- a/tests/util/piglit-util.h
> +++ b/tests/util/piglit-util.h
> @@ -360,6 +360,8 @@ NORETURN void piglit_report_result(enum piglit_result result);
>  void piglit_set_timeout(double seconds, enum piglit_result timeout_result);
>  void piglit_report_subtest_result(enum piglit_result result,
>   const char *format, ...) PRINTFLIKE(2, 3);
> +void piglit_enumerate_subtests(int num_args, const char *name, ...);
> +void piglit_enumerate_subtest_list(int length, const char *names[]);
>  
>  void piglit_general_init(void);
>  
> 
> base-commit: 736496667329bf73a706aebec6f8287078df79ae
> -- 
> git-series 0.9.1




[Piglit] [PATCH 5/9] tests/fbo-storage-formats: Always print the same number of subtests

2018-01-19 Thread Dylan Baker
!skip -> skip won't show up in the regressions/fixes/etc lists anyway,
and this means that the output will always be the same.
---
 tests/fbo/fbo-storage-formats.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/tests/fbo/fbo-storage-formats.c b/tests/fbo/fbo-storage-formats.c
index 3ecd07a..91b80e0 100644
--- a/tests/fbo/fbo-storage-formats.c
+++ b/tests/fbo/fbo-storage-formats.c
@@ -223,6 +223,12 @@ test(void)
piglit_report_subtest_result(PIGLIT_PASS, "%s", name);
}
}
+   } else {
+   printf("Skipping error tests because KHR_NO_ERROR is enabled\n");
+   for (i = 0; i < ARRAY_SIZE(invalid_formats); i++) {
+   const char *name = piglit_get_gl_enum_name(invalid_formats[i]);
+   piglit_report_subtest_result(PIGLIT_SKIP, "%s", name);
+   }
}
 
return pass ? PIGLIT_PASS : PIGLIT_FAIL;
-- 
git-series 0.9.1


[Piglit] [PATCH 3/9] framework: add support for parsing subtest enumeration

2018-01-19 Thread Dylan Baker
This adds support for enumerating subtests to the python layer. When it
sees this it sets each subtest to notrun. This allows the python
framework to report that tests didn't run when they were expected to.
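
As a rough model of what this patch does with such a line (status.NOTRUN is
the real framework symbol; the plain dict, the helper name, and the
{"subtest": ...} result format are stand-ins assumed for illustration):

```python
import json

NOTRUN = "notrun"  # stand-in for framework.status.NOTRUN

def consume_piglit_line(line, subtests):
    """Handle one 'PIGLIT: ...' line of test stdout."""
    deserial = json.loads(line[8:])  # drop the 8-character "PIGLIT: " prefix
    if "enumerate subtests" in deserial:
        # Pre-populate every announced subtest as notrun.
        subtests.update({n: NOTRUN for n in deserial["enumerate subtests"]})
    else:
        # Otherwise treat it as an ordinary subtest result update.
        subtests.update(deserial.get("subtest", {}))

subtests = {}
consume_piglit_line('PIGLIT: {"enumerate subtests": ["a", "b"]}', subtests)
consume_piglit_line('PIGLIT: {"subtest": {"a": "pass"}}', subtests)
# If the binary then crashes, "b" stays notrun instead of vanishing.
assert subtests == {"a": "pass", "b": NOTRUN}
```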

Signed-off-by: Dylan Baker 
---
 framework/test/piglit_test.py | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/framework/test/piglit_test.py b/framework/test/piglit_test.py
index 491f3d3..a7406c1 100644
--- a/framework/test/piglit_test.py
+++ b/framework/test/piglit_test.py
@@ -34,6 +34,7 @@ except ImportError:
 import json
 
 from framework import core, options
+from framework import status
 from .base import Test, WindowResizeMixin, ValgrindMixin, TestIsSkip
 
 
@@ -73,7 +74,12 @@ class PiglitBaseTest(ValgrindMixin, Test):
 
 for each in self.result.out.split('\n'):
 if each.startswith('PIGLIT:'):
-self.result.update(json.loads(each[8:]))
+deserial = json.loads(each[8:])
+if 'enumerate subtests' in deserial:
+self.result.subtests.update(
> +{n: status.NOTRUN for n in deserial['enumerate subtests']})
+else:
+self.result.update(deserial)
 else:
 out.append(each)
 
-- 
git-series 0.9.1


[Piglit] [PATCH 8/9] tests/fbo-storage-formats: enumerate subtests

2018-01-19 Thread Dylan Baker
---
 tests/fbo/fbo-storage-formats.c | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/tests/fbo/fbo-storage-formats.c b/tests/fbo/fbo-storage-formats.c
index 4db990d..a11335b 100644
--- a/tests/fbo/fbo-storage-formats.c
+++ b/tests/fbo/fbo-storage-formats.c
@@ -243,9 +243,31 @@ piglit_display(void)
 }
 
 
+static void
+enumerate_subtests(void)
+{
+   const int len = ARRAY_SIZE(formats) + ARRAY_SIZE(invalid_formats);
+   const char *names[ARRAY_SIZE(formats) + ARRAY_SIZE(invalid_formats)];
+
+   int t = 0;
+   for (int i = 0; i < ARRAY_SIZE(formats); i++) {
+   names[t] = piglit_get_gl_enum_name(formats[i].format);
+   ++t;
+   }
+   for (int i = 0; i < ARRAY_SIZE(invalid_formats); i++) {
+   names[t] = piglit_get_gl_enum_name(invalid_formats[i]);
+   ++t;
+   }
+
+   piglit_enumerate_subtest_list(len, names);
+}
+
+
 void
 piglit_init(int argc, char**argv)
 {
+   enumerate_subtests();
+
piglit_require_extension("GL_EXT_framebuffer_object");
 
have_extension[0] = GL_TRUE;
-- 
git-series 0.9.1


[Piglit] [PATCH 2/9] tests: enumerate subtests in gl-2.1-pbo test

2018-01-19 Thread Dylan Baker
This gives us something to test the python part against.

Signed-off-by: Dylan Baker 
---
 tests/spec/gl-2.1/pbo.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/tests/spec/gl-2.1/pbo.c b/tests/spec/gl-2.1/pbo.c
index 83dc1c4..1a59733 100644
--- a/tests/spec/gl-2.1/pbo.c
+++ b/tests/spec/gl-2.1/pbo.c
@@ -59,6 +59,12 @@ piglit_init(int argc, char **argv)
piglit_require_extension("GL_ARB_pixel_buffer_object");
 
piglit_ortho_projection(piglit_width, piglit_height, GL_FALSE);
+
+   piglit_enumerate_subtests(
+   8, "test_sanity", "test_draw_pixels", "test_pixel_map",
+   "test_bitmap", "test_tex_image", "test_tex_sub_image",
+   "test_polygon_stip", "test_error_handling"
+   );
 }
 
 static void
-- 
git-series 0.9.1


Re: [Piglit] [PATCH] framework: Handle tests with subtests crashing in totals

2018-01-19 Thread Dylan Baker
I meant to CC Fabian as well...

Quoting Dylan Baker (2018-01-19 14:04:13)
> Currently piglit doesn't account for a test with subtests crashing when
> it calculates the total number of tests of each status. The result is
> that if a test with subtests runs no subtests before crashing, it is
> handled correctly (since it goes down the non-subtest path), but if one
> or more subtests are run, and those subtests return a better result than
> crash, then the test will be marked as that status instead.
> 
> The real problem is that the python framework has no idea how many
> subtests a test binary is going to run, so if the test crashes it has
> no idea whether some subtests weren't run. To paper over that, if the
> result of a test is not the same as the worst result of its subtests,
> we'll treat the test as a single test rather than a group; this results
> in the summaries generating the expected results.
> 
> A better fix would be to have tests with subtests inform the framework
> (preferably via JSON) of all of the subtests they will run before they
> start running, so that the python framework can pre-populate the
> subtests and generate the right result.
> 
> This solution is better in the short term because it makes the results
> consistent: whether or not a test crashes, it will produce the same
> results.
> 
> Signed-off-by: Dylan Baker 
> ---
>  framework/results.py | 7 ++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/framework/results.py b/framework/results.py
> index 99dd3735b..4c7266208 100644
> --- a/framework/results.py
> +++ b/framework/results.py
> @@ -329,7 +329,12 @@ class TestrunResult(object):
>  for name, result in six.iteritems(self.tests):
>  # If there are subtests treat the test as if it is a group 
> instead
>  # of a test.
> -if result.subtests:
> +# FIXME: If the overall test crashed, then don't treat it as a
> +# group; ignore all of the subtests and report that the test
> +# crashed. This is just papering over the fact that the binaries
> +# don't inform the python layer how many subtests (and their
> +# names) it wants to run.
> +if result.subtests and result.result == max(six.itervalues(result.subtests)):
>  for res in six.itervalues(result.subtests):
>  res = str(res)
>  temp = name
> -- 
> 2.15.1
> 




[Piglit] [PATCH 7/9] tests/fbo-storage-formats: print subtest result for skip too

2018-01-19 Thread Dylan Baker
We always want the same number of subtests printed; otherwise they'll show
up as "NOTRUN" instead of "SKIP" in the summary.

Signed-off-by: Dylan Baker 
---
 tests/fbo/fbo-storage-formats.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tests/fbo/fbo-storage-formats.c b/tests/fbo/fbo-storage-formats.c
index cfc8ae0..4db990d 100644
--- a/tests/fbo/fbo-storage-formats.c
+++ b/tests/fbo/fbo-storage-formats.c
@@ -190,8 +190,10 @@ test(void)
for (i = 0; i < ARRAY_SIZE(formats); i++) {
const char *name = piglit_get_gl_enum_name(formats[i].format);
 
-   if (!have_extension[formats[i].extension])
+   if (!have_extension[formats[i].extension]) {
+   piglit_report_subtest_result(PIGLIT_SKIP, "%s", name);
continue;
+   }
 
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, formats[i].format,
 piglit_width, piglit_height);
-- 
git-series 0.9.1


[Piglit] [PATCH 6/9] tests/fbo-storage-formats: Make subtest names predictable

2018-01-19 Thread Dylan Baker
Currently the subtest name relies on checking the GL state to know whether
the framebuffer is complete or incomplete. That is a problem for enumerating
subtests: we would need to know ahead of time whether the framebuffer is
complete or not.

Instead, print the completeness separately from the subtest name, and make
the subtest name just the format name. This also makes the name the same
whether the test fails or passes.

Signed-off-by: Dylan Baker 
---
 tests/fbo/fbo-storage-formats.c |  9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tests/fbo/fbo-storage-formats.c b/tests/fbo/fbo-storage-formats.c
index 91b80e0..cfc8ae0 100644
--- a/tests/fbo/fbo-storage-formats.c
+++ b/tests/fbo/fbo-storage-formats.c
@@ -200,11 +200,10 @@ test(void)
pass = GL_FALSE;
} else {
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
-   piglit_report_subtest_result(PIGLIT_PASS,
-"%s (%s)",
-name,
-(status == GL_FRAMEBUFFER_COMPLETE ?
- "complete" : "incomplete"));
+   printf("%s is %s",
+  name,
+  (status == GL_FRAMEBUFFER_COMPLETE ? "complete" : "incomplete"));
+   piglit_report_subtest_result(PIGLIT_PASS, "%s", name);
}
}
 
-- 
git-series 0.9.1


[Piglit] [PATCH 4/9] tests: enumerate subtests in fbo-incomplete

2018-01-19 Thread Dylan Baker
Signed-off-by: Dylan Baker 
---
 tests/fbo/fbo-incomplete.cpp | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/tests/fbo/fbo-incomplete.cpp b/tests/fbo/fbo-incomplete.cpp
index 8cde6d2..0354615 100644
--- a/tests/fbo/fbo-incomplete.cpp
+++ b/tests/fbo/fbo-incomplete.cpp
@@ -459,6 +459,13 @@ piglit_init(int argc, char **argv)
 {
bool pass = true;
 
+   piglit_enumerate_subtests(
+   8, "0x0 texture", "0x0 renderbuffer", "invalid slice of 3D texture",
+   "invalid layer of a 1D-array texture", "invalid layer of a 2D-array texture",
+   "invalid layer of a cube-array texture", "delete texture of bound FBO",
+   "delete renderbuffer of bound FBO"
+   );
+
piglit_require_extension("GL_ARB_framebuffer_object");
 
pass = incomplete_0_by_0_texture() && pass;
-- 
git-series 0.9.1
___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


[Piglit] [PATCH 1/9] util: Add a function for enumerating subtests

2018-01-19 Thread Dylan Baker
This function takes one or more subtest names as strings and prints a JSON
structure that the python framework can consume.

Signed-off-by: Dylan Baker 
---
 tests/util/piglit-util.c | 29 +
 tests/util/piglit-util.h |  2 ++
 2 files changed, 31 insertions(+)

diff --git a/tests/util/piglit-util.c b/tests/util/piglit-util.c
index e33d055..46b4b75 100644
--- a/tests/util/piglit-util.c
+++ b/tests/util/piglit-util.c
@@ -290,6 +290,35 @@ piglit_report_subtest_result(enum piglit_result result, const char *format, ...)
va_end(ap);
 }
 
+void
+piglit_enumerate_subtests(int num_args, const char *name, ...)
+{
+   va_list ap;
+
+   va_start(ap, name);
+
+   printf("PIGLIT: {\"enumerate subtests\": [\"%s\"", name);
+   for (int i = 1; i < num_args; i++) {
+   printf(", \"%s\"", va_arg(ap, const char *));
+   }
+   printf("]}\n");
+   fflush(stdout);
+
+   va_end(ap);
+}
+
+void
+piglit_enumerate_subtest_list(int length, const char *names[])
+{
+   assert(length > 0);
+   printf("PIGLIT: {\"enumerate subtests\": [\"%s\"", names[0]);
+   for (int i = 1; i < length; i++) {
+   printf(", \"%s\"", names[i]);
+   }
+   printf("]}\n");
+   fflush(stdout);
+}
+
 
 static void
 piglit_disable_error_message_boxes(void)
diff --git a/tests/util/piglit-util.h b/tests/util/piglit-util.h
index 3757f86..c5adb11 100644
--- a/tests/util/piglit-util.h
+++ b/tests/util/piglit-util.h
@@ -360,6 +360,8 @@ NORETURN void piglit_report_result(enum piglit_result result);
 void piglit_set_timeout(double seconds, enum piglit_result timeout_result);
 void piglit_report_subtest_result(enum piglit_result result,
  const char *format, ...) PRINTFLIKE(2, 3);
+void piglit_enumerate_subtests(int num_args, const char *name, ...);
+void piglit_enumerate_subtest_list(int length, const char *names[]);
 
 void piglit_general_init(void);
 

base-commit: 736496667329bf73a706aebec6f8287078df79ae
-- 
git-series 0.9.1


[Piglit] [PATCH] framework: Handle tests with subtests crashing in totals

2018-01-19 Thread Dylan Baker
Currently piglit doesn't account for a test with subtests crashing when
it calculates the total number of tests of each status. The result is
that if a test with subtests runs no subtests before crashing, it is
handled correctly (since it goes down the non-subtest path), but if one
or more subtests are run, and those subtests return a better result than
crash, then the test will be marked as that status instead.

The real problem is that the python framework has no idea how many
subtests a test binary is going to run, so if the test crashes it has
no idea whether some subtests weren't run. To paper over that, if the
result of a test is not the same as the worst result of its subtests,
we'll treat the test as a single test rather than a group; this results
in the summaries generating the expected results.

A better fix would be to have tests with subtests inform the framework
(preferably via JSON) of all of the subtests they will run before they
start running, so that the python framework can pre-populate the
subtests and generate the right result.

This solution is better in the short term because it makes the results
consistent: whether or not a test crashes, it will produce the same
results.
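
The worst-result comparison in the patch below can be sketched like this (the
severity ranks here are illustrative only; the real framework compares
framework.status objects, which define their own ordering):

```python
# Illustrative severity ranks; max() over these picks the "worst" result.
SEVERITY = {"notrun": 0, "skip": 1, "pass": 2, "warn": 3, "fail": 4, "crash": 5}

def worst(results):
    return max(results, key=SEVERITY.__getitem__)

# A test that crashed after two passing subtests:
overall = "crash"
subtests = {"a": "pass", "b": "pass"}

# Only treat the test as a group when its overall result matches the
# worst subtest result; otherwise count it as a single crashed test.
treat_as_group = bool(subtests) and overall == worst(subtests.values())
assert treat_as_group is False
```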

Signed-off-by: Dylan Baker 
---
 framework/results.py | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/framework/results.py b/framework/results.py
index 99dd3735b..4c7266208 100644
--- a/framework/results.py
+++ b/framework/results.py
@@ -329,7 +329,12 @@ class TestrunResult(object):
 for name, result in six.iteritems(self.tests):
 # If there are subtests treat the test as if it is a group instead
 # of a test.
-if result.subtests:
+# FIXME: If the overall test crashed, then don't treat it as a
+# group; ignore all of the subtests and report that the test
+# crashed. This is just papering over the fact that the binaries
+# don't inform the python layer how many subtests (and their
+# names) it wants to run.
+if result.subtests and result.result == max(six.itervalues(result.subtests)):
 for res in six.itervalues(result.subtests):
 res = str(res)
 temp = name
-- 
2.15.1



[Piglit] [Bug 104700] Crashing tests with subtests don't show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

--- Comment #9 from Dylan Baker  ---
I've sent a patch for this; I cc'd both Jan and Fabian.

-- 
You are receiving this mail because:
You are the QA Contact for the bug.


[Piglit] [Bug 104700] Crashing tests with subtests don't show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

--- Comment #8 from Dylan Baker  ---
Fabian, it's the totalling code that's wrong. It doesn't account for crashes in
tests with subtests. I'm looking at it now.



[Piglit] [Bug 104700] Crashing tests with subtests don't show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

--- Comment #7 from Fabian Bieler  ---
(In reply to Jan Vesely from comment #5)
> (In reply to Fabian Bieler from comment #3)
> > Strange, I can't seem to reproduce that.
> > I reverted 938ec48e2, ran arb_copy_buffer@data-sync once normally and once
> > with a call to abort() after the first call to
> > piglit_report_subtest_result() (tests/spec/arb_copy_buffer/data-sync.c:82).
> > 
> > Neither "piglit summary console" nor "piglit summary html" report a crashing
> > test or a regression.
> > 
> > Am I missing something?
> 
> most of the crashes in OpenCL tests happen during kernel compile (before the
> first subtest). Maybe there's a difference in behaviour in those cases?

You're absolutely right. Aborting before the first call to
piglit_report_subtest_result shows me the behavior you described.

I'll revert 938ec48e2 and see if I can find a solution to get this behavior
for tests that crash after a call to piglit_report_subtest_result, too.



[Piglit] [Bug 104700] Crashing tests with subtests don't show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

--- Comment #6 from Dylan Baker  ---
I think there's actually a bug in the summary code. I reverted this patch and
applied Fabian's patch to the data-sync test. The raw JSON contains what I
expect, but `piglit summary console` reports that all tests passed. I'll look
into it some more.



[Piglit] [Bug 104700] Crashing tests with subtests don't show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

--- Comment #5 from Jan Vesely  ---
(In reply to Fabian Bieler from comment #3)
> Strange, I can't seem to reproduce that.
> I reverted 938ec48e2, ran arb_copy_buffer@data-sync once normally and once
> with a call to abort() after the first call to
> piglit_report_subtest_result() (tests/spec/arb_copy_buffer/data-sync.c:82).
> 
> Neither "piglit summary console" nor "piglit summary html" report a crashing
> test or a regression.
> 
> Am I missing something?

most of the crashes in OpenCL tests happen during kernel compile (before the
first subtest). Maybe there's a difference in behaviour in those cases?



[Piglit] [Bug 104700] Crashing tests with subtests don't show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

--- Comment #4 from Dylan Baker  ---
The behavior Jan sees is what the framework is *supposed* to do. This is
pretty frustrating, actually, because it makes it impossible for us to track
regressions in CI: the output is now inconsistent. If a test crashes and has
subtests, it doesn't behave the same way as other tests.

If the test crashes, just set the overall result to crash; it should be the
"worst" result, and thus the whole test should show crash.



[Piglit] [Bug 104700] Crashing tests with subtests don't show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

--- Comment #3 from Fabian Bieler  ---
Strange, I can't seem to reproduce that.
I reverted 938ec48e2, ran arb_copy_buffer@data-sync once normally and once with
a call to abort() after the first call to piglit_report_subtest_result()
(tests/spec/arb_copy_buffer/data-sync.c:82).

Neither "piglit summary console" nor "piglit summary html" report a crashing
test or a regression.

Am I missing something?



[Piglit] [Bug 104700] Crashing tests with subtests don't show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

--- Comment #2 from Jan Vesely  ---
That's not entirely accurate. The previous behavior would report a crash of the
entire test. So you could see:
test1 pass -> crash
 subtest0 pass -> notrun
 subtest1 pass -> notrun

the new behavior is:
test1 2/2 -> 0/0
 subtest0 pass  -> notrun
 subtest1 pass  -> notrun
 unknown notrun -> crash



Re: [Piglit] [PATCH] cl: Add test for MUBUF access with a negative vaddr

2018-01-19 Thread Matt Arsenault


> On Jan 18, 2018, at 15:02, Jan Vesely  wrote:
> 
> Why is this necessary? can't you just pass the offset argument as a
> kernel input?
> 
> Jan

It needs to specifically be in a VGPR.


[Piglit] [Bug 104700] Crashing tests with subtests don't show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

Fabian Bieler  changed:

What changed: Summary
Removed: Crashing tests with subtests no longer show in a list of regressions
Added:   Crashing tests with subtests don't show in a list of regressions



[Piglit] [Bug 104700] Crashing tests with subtests no longer show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

--- Comment #1 from Fabian Bieler  ---
Created attachment 136858
  --> https://bugs.freedesktop.org/attachment.cgi?id=136858&action=edit
Framework/summary: Include crashing subtests in fixes and regressions.

If a subtest crashed before 938ec48e2, no crash was recorded at all. A subtest
that regressed from pass to crash was likewise recorded as pass and notrun,
respectively.

The commit in question attempts to improve on the situation by at least
recording some crash, albeit only of subtest "unknown". This is no proper fix
for bug #74642 as noted in the commit message.

Crashing subtests not showing up in the regressions tab of summaries is a
problem, but that has been the case since at least 2014 (see bug #74642).

The attached patch tries to remedy that by at least including the subtest named
"unknown" in the list of regressions (and fixes, if applicable).



Re: [Piglit] [PATCH] gl-1.0-blend-func: skip some blend tests when using LLVM 3.8

2018-01-19 Thread Roland Scheidegger
On 2018-01-19 at 06:35, Eric Anholt wrote:
> Brian Paul  writes:
> 
>> On 01/18/2018 01:27 PM, Eric Anholt wrote:
>>> Brian Paul  writes:
>>>
 To avoid an infinite loop.  See code comments for details.
>>>
>>> Skipping a failing test and returning pass is wrong to me.
>>
>> It's not ideal.  But the bug is in LLVM and cannot readily be fixed in 
>> llvmpipe.
>>
>> I could have the test return a WARN result in this situation.  Would 
>> that be better?
> 
> It's still a bug in the driver, even if it's because the driver's using
> a buggy external library.  It should be a fail.
> 

Albeit it's just a guess that it will hang. With a fixed llvm 3.8 from the
stable branch, it would not hang and would pass. Or IIRC if you don't have an
avx-capable cpu, it also would not hang (and with a non-x86 cpu it won't
hang either).
But I don't really care either way if it just reports fail in this case.
(It would be nice if we could determine the hang empirically, I suppose: if
the testcase runs for more than a second, kill it and call it a fail, but
that doesn't work easily.)

Roland


[Piglit] [Bug 104700] New: Crashing tests with subtests no longer show in a list of regressions

2018-01-19 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=104700

Bug ID: 104700
Summary: Crashing tests with subtests no longer show in a list of regressions
Product: piglit
Version: unspecified
Hardware: Other
OS: All
Status: NEW
Severity: normal
Priority: medium
Component: infrastructure
Assignee: fabianbie...@fastmail.fm
Reporter: pavel.ondra...@email.cz
QA Contact: piglit@lists.freedesktop.org

When a test with subtests starts crashing, it will not show in a piglit
summary as a regression: the actual subtests go from pass to notrun, and the
failure is only reported for the "unknown" subtest, which goes from notrun to
crash, so neither ends up in the list of regressions. It can still be found
in the changes summary, but the list of regressions is no longer reliable
because of this. I'm not sure if this is a bug or a feature, but it seems
unfortunate, since it can lead to missed regressions.

Was probably introduced by 
commit 938ec48e2575b78defd06d169f704ed8d4f11bce
Author: Fabian Bieler 
Date:   Sat Jan 6 23:36:02 2018 +0100

framework: Handle crashing subtest.

Piglit silently ignored crashes in subtests.

It's impossible to know which subtest crashed. Theoretically it might
even be possible that a test crashes after the last call to
piglit_report_subtest_result and thus no subtest crashed. Though the
odds of that happening are probably pretty long.

If a test with subtests crashes, this commit adds an additional subtest
named "unknown" with result "crash".

The issue that subsequent subtests are not run is not touched upon by
this commit.

This is more of a work-around. A proper fix would modify the C subtest
framework to print a list of subtests it intends to run at the start of
the test. This would enable the python subtest framework to know which
subtest crashed.
But that's a lot of work (it would require modifying more than 100
tests) and this is better than nothing.

See also: https://bugs.freedesktop.org/show_bug.cgi?id=74642

Reviewed-by: Brian Paul 
