Re: [Piglit] [PATCH V3 1/4] shader_runner: Add basic SSO support to shader runner

2015-12-05 Thread Kenneth Graunke
On Sunday, December 06, 2015 01:58:31 PM Timothy Arceri wrote:
> This sets up the basics for using SSO with shader runner. This will
> only support vertex and fragment shaders but is easily extended.
> 
> V2: delete pipeline in cleanup code rather than calling gen again,
> output error message when SSO fails to link
> 
> V3: add new option to [require] to allow separate shader objects to be
> enabled for the entire test.
> 
> Example use:
> [require]
> SSO ENABLED
> 
> Adding the ENABLED field rather than just using SSO will allow us to use
> DISABLED in future should we ever add the ability to automatically run
> all tests as SSO.
> 
> Example shader:
> 
> [require]
> GLSL >= 1.50
> SSO ENABLED
> 
> [vertex shader]
> 
> layout(location = 0) in vec4 piglit_vertex;
> 
> layout(location = 2) out vec3 a;
> layout(location = 3) out vec3 b;
> 
> void main()
> {
> gl_Position = piglit_vertex;
> a = vec3(0, 0, 1);
> b = vec3(1, 0, 0);
> }
> 
> [fragment shader]
> 
> layout(location = 0) out vec4 out_color;
> 
> layout(location = 2) in vec3 b; /* should get vec3(0, 0, 1) */
> layout(location = 3) in vec3 a; /* should get vec3(1, 0, 0) */
> 
> void main()
> {
> out_color = vec4(cross(b, a), 1);
> }
> 
> [test]
> draw rect -1 -1 2 2
> probe all rgb 0 1 0
> 
> Cc: Kenneth Graunke 
> ---
>  tests/shaders/shader_runner.c | 90 +--
>  1 file changed, 86 insertions(+), 4 deletions(-)

I think you could simplify the code substantially by adding this to
the top of compile_glsl():

if (sso_in_use) {
        create_sso(target, shader_string, shader_string_size);
        return;
}

Then you could...

> diff --git a/tests/shaders/shader_runner.c b/tests/shaders/shader_runner.c
> index eeb1aac..a5f456f 100644
> --- a/tests/shaders/shader_runner.c
> +++ b/tests/shaders/shader_runner.c
> @@ -123,10 +123,12 @@ GLint shader_string_size;
>  const char *vertex_data_start = NULL;
>  const char *vertex_data_end = NULL;
>  GLuint prog;
> +GLuint pipeline;
>  size_t num_vbo_rows = 0;
>  bool vbo_present = false;
>  bool link_ok = false;
>  bool prog_in_use = false;
> +bool sso_in_use = false;
>  GLchar *prog_err_info = NULL;
>  GLuint vao = 0;
>  GLuint fbo = 0;
> @@ -137,12 +139,14 @@ enum states {
>   requirements,
>   vertex_shader,
>   vertex_shader_passthrough,
> + vertex_sso,

drop this

>   vertex_program,
>   tess_ctrl_shader,
>   tess_eval_shader,
>   geometry_shader,
>   geometry_layout,
>   fragment_shader,
> + fragment_sso,

and this

>   fragment_program,
>   compute_shader,
>   vertex_data,
> @@ -480,6 +484,55 @@ compile_and_bind_program(GLenum target, const char *start, int len)
>   prog_in_use = true;
>  }
>  
> +void
> +create_sso(GLenum target, const char *start, int len)
> +{
> + GLuint prog;
> + GLint ok;
> + char *source;
> +
> + piglit_require_extension("GL_ARB_separate_shader_objects");
> +
> + source = malloc(len + 1);
> + memcpy(source, start, len);
> + source[len] = 0;
> + prog = glCreateShaderProgramv(target, 1,
> + (const GLchar *const *) &source);
> +
> + glGetProgramiv(prog, GL_LINK_STATUS, &ok);
> + if (ok) {
> + link_ok = true;
> + } else {
> + GLint size;
> +
> + glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &size);
> + prog_err_info = malloc(size);
> +
> + glGetProgramInfoLog(prog, size, NULL, prog_err_info);
> +
> + fprintf(stderr, "glCreateShaderProgramv(%s) failed: %s\n",
> + target_to_short_name(target),
> + prog_err_info);
> +
> + free(prog_err_info);
> + piglit_report_result(PIGLIT_FAIL);
> +
> + return;
> + }
> +
> + switch (target) {
> + case GL_VERTEX_SHADER:
> + glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, prog);
> + break;
> + case GL_FRAGMENT_SHADER:
> + glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, prog);
> + break;
> + }
> +
> + sso_in_use = true;

This doesn't seem useful now that we have a global option

> + prog_in_use = true;
> +}
> +
>  /**
>   * Compare two values given a specified comparison operator
>   */
> @@ -705,13 +758,14 @@ process_requirement(const char *line)
>   return;
>   }
>  
> - /* There are four types of requirements that a test can currently
> + /* There are five types of requirements that a test can currently
>* have:
>*
>** Require that some GL extension be supported
>** Require some particular versions of GL
>** Require some particular versions of GLSL
>** Require some particular number of uniform components
> +  ** Require shaders be built as separate shader objects
>*
>* The tests for GL 

Re: [Piglit] [PATCH 2/2] framework/backends/junit.py: Handle tests with subtests as testsuite elements

2015-12-05 Thread Jose Fonseca

On 05/12/15 00:21, baker.dyla...@gmail.com wrote:

From: Dylan Baker 

Currently the JUnit backend has no way to represent subtests in such a
way that they can be understood by jenkins and by the summary tools.

Mark, Nanley, and I consulted and came up with several approaches,
each with serious drawbacks:
1. Print the subtest statuses into stdout or stderr nodes in the JUnit.
This has the advantage of being simple, but has a problem with being
shadowed by an expected-fail. If subtest A fails, an expected
fail can be entered. If subtest B also starts failing, no one will
notice. This won't work.
2. Treat each subtest as a full test, and the test as a group. I have
two reservations about this approach. It's different than the JSON
for one, and there isn't a good way to turn the JUnit back into the
piglit internal representation using this approach, which would make
running JUnit results through the piglit status tools difficult. This
would also massively inflate the size of the JSON results, and that's
already becoming a problem for us.
3. Create a main test entry, and then subtest entries as well, which
pointed back to the original test as the parent in their stdout. This
also has shadowing problems, and would still make the XML very large.

The final approach taken was suggested by Nanley: turn tests with
subtests into a testsuite element, which could represent the shared
values (stdout, stderr), and could hold individual testcase elements
for each subtest. This solves the shadowing issue, and introduces less
file size increase than the other ideas floated.


Having the sub-tests be a testsuite is OK, but I'm not sure nesting a
testsuite inside a testsuite is reliable.


I don't know if JUnit generated by other frameworks ever has that, or
what exactly the semantics for names are.


I wonder if it wouldn't be better to ensure that the `testsuite` tag is 
always a child of the root `testsuites` tag.


Jose



Also adds test for the new feature.

cc: mark.a.ja...@intel.com
cc: jfons...@vmware.com
Signed-off-by: Dylan Baker 
---
  framework/backends/junit.py | 174 +--
  framework/tests/junit_backends_tests.py | 206 +++-
  2 files changed, 343 insertions(+), 37 deletions(-)

diff --git a/framework/backends/junit.py b/framework/backends/junit.py
index 34df300..329fc4b 100644
--- a/framework/backends/junit.py
+++ b/framework/backends/junit.py
@@ -117,8 +117,6 @@ class JUnitBackend(FileBackend):

         shutil.rmtree(os.path.join(self._dest, 'tests'))

-
-
     def _write(self, f, name, data):
         # Split the name of the test and the group (what junit refers to as
         # classname), and replace piglits '/' separated groups with '.', after
@@ -135,11 +133,41 @@ class JUnitBackend(FileBackend):
         # set different root names.
         classname = 'piglit.' + classname

-        element = self.__make_case(testname, classname, data)
+        if data.subtests:
+            # If there are subtests treat the test as a suite instead of a
+            # test, set system-out, system-err, and time on the suite rather
+            # than on the testcase
+            name='{}.{}'.format(classname, testname)
+            element = etree.Element(
+                'testsuite',
+                name=name,
+                time=str(data.time.total))
+
+            out = etree.SubElement(element, 'system-out')
+            out.text = data.command + '\n' + data.out
+            err = etree.SubElement(element, 'system-err')
+            err.text = data.err
+            err.text += '\n\nstart time: {}\nend time: {}\n'.format(
+                data.time.start, data.time.end)
+
+            for name, result in data.subtests.iteritems():
+                sub = self.__make_subcase(name, result, err)
+                out = etree.SubElement(sub, 'system-out')
+                out.text = 'I am a subtest of {}'.format(name)
+                element.append(sub)
+
+            for attrib, xpath in [('failures', './/testcase/failure'),
+                                  ('errors', './/testcase/error'),
+                                  ('skipped', './/testcase/skipped'),
+                                  ('tests', './/testcase')]:
+                element.attrib[attrib] = str(len(element.findall(xpath)))
+
+        else:
+            element = self.__make_case(testname, classname, data)
+
         f.write(etree.tostring(element))

-    def __make_case(self, testname, classname, data):
-        """Create a test case element and return it."""
+    def __make_name(self, testname):
         # Jenkins will display special pages when the test has certain names,
         # so add '_' so the tests don't match those names
         # https://jenkins-ci.org/issue/18062
@@ -148,17 +176,68 @@ class JUnitBackend(FileBackend):
  if full_test_name in 

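To make the layout under discussion concrete, below is a minimal standalone
sketch (not part of the patch) built with Python's xml.etree, using the same
element names the patch emits; the test name 'piglit.spec.arb_foo.a-test'
and the subtest statuses are invented for illustration.

import xml.etree.ElementTree as etree

# The root <testsuites> with the usual flat <testsuite> under it ...
root = etree.Element('testsuites')
suite = etree.SubElement(root, 'testsuite', name='piglit')

# ... and a test with subtests as its own nested <testsuite>, holding one
# <testcase> per subtest (the nesting Jose is asking about).
group = etree.SubElement(suite, 'testsuite',
                         name='piglit.spec.arb_foo.a-test', time='1.5')
for sub, status in [('subtest-a', 'pass'), ('subtest-b', 'fail')]:
    case = etree.SubElement(group, 'testcase', name=sub,
                            classname='piglit.spec.arb_foo.a-test')
    if status == 'fail':
        etree.SubElement(case, 'failure')

# Summary attributes counted via findall(), mirroring the patch.
group.attrib['tests'] = str(len(group.findall('.//testcase')))
group.attrib['failures'] = str(len(group.findall('.//testcase/failure')))

print(etree.tostring(root).decode())
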
Re: [Piglit] [PATCH 2/2] framework/backends/junit.py: Handle tests with subtests as testsuite elements

2015-12-05 Thread Dylan Baker
According to the schema it's allowed. The schema is in the piglit repo at
"framework/tests/schema/junit-7.xsd"

On Sat, Dec 5, 2015 at 4:36 AM, Jose Fonseca  wrote:

> On 05/12/15 00:21, baker.dyla...@gmail.com wrote:
>
>> From: Dylan Baker 
>>
>> Currently the JUnit backend has no way to represent subtests in such a
>> way that they can be understood by jenkins and by the summary tools.
>>
>> Mark, Nanley, and I consulted and came up with several approaches,
>> each with serious drawbacks:
>> 1. Print the subtest statuses into stdout or stderr nodes in the JUnit.
>> This has the advantage of being simple, but has a problem with being
>> shadowed by an expected-fail. If subtest A fails, an expected
>> fail can be entered. If subtest B also starts failing, no one will
>> notice. This won't work.
>> 2. Treat each subtest as a full test, and the test as a group. I have
>> two reservations about this approach. It's different than the JSON
>> for one, and there isn't a good way to turn the JUnit back into the
>> piglit internal representation using this approach, which would make
>> running JUnit results through the piglit status tools difficult. This
>> would also massively inflate the size of the JSON results, and that's
>> already becoming a problem for us.
>> 3. Create a main test entry, and then subtest entries as well, which
>> pointed back to the original test as the parent in their stdout. This
>> also has shadowing problems, and would still make the XML very large.
>>
>> The final approach taken was suggested by Nanley: turn tests with
>> subtests into a testsuite element, which could represent the shared
>> values (stdout, stderr), and could hold individual testcase elements
>> for each subtest. This solves the shadowing issue, and introduces less
>> file size increase than the other ideas floated.
>>
>
> Having the sub-tests be a testsuite is OK, but I'm not sure nesting a
> testsuite inside a testsuite is reliable.
>
> I don't know if JUnit generated by other frameworks ever has that, or
> what exactly the semantics for names are.
>
> I wonder if it wouldn't be better to ensure that the `testsuite` tag is
> always a child of the root `testsuites` tag.
>
> Jose
>
>
>
>> Also adds test for the new feature.
>>
>> cc: mark.a.ja...@intel.com
>> cc: jfons...@vmware.com
>> Signed-off-by: Dylan Baker 
>> ---
>>   framework/backends/junit.py | 174 +--
>>   framework/tests/junit_backends_tests.py | 206 +++-
>>   2 files changed, 343 insertions(+), 37 deletions(-)
>>
>> diff --git a/framework/backends/junit.py b/framework/backends/junit.py
>> index 34df300..329fc4b 100644
>> --- a/framework/backends/junit.py
>> +++ b/framework/backends/junit.py
>> @@ -117,8 +117,6 @@ class JUnitBackend(FileBackend):
>>
>>          shutil.rmtree(os.path.join(self._dest, 'tests'))
>>
>> -
>> -
>>      def _write(self, f, name, data):
>>          # Split the name of the test and the group (what junit refers to as
>>          # classname), and replace piglits '/' separated groups with '.', after
>> @@ -135,11 +133,41 @@ class JUnitBackend(FileBackend):
>>          # set different root names.
>>          classname = 'piglit.' + classname
>>
>> -        element = self.__make_case(testname, classname, data)
>> +        if data.subtests:
>> +            # If there are subtests treat the test as a suite instead of a
>> +            # test, set system-out, system-err, and time on the suite rather
>> +            # than on the testcase
>> +            name='{}.{}'.format(classname, testname)
>> +            element = etree.Element(
>> +                'testsuite',
>> +                name=name,
>> +                time=str(data.time.total))
>> +
>> +            out = etree.SubElement(element, 'system-out')
>> +            out.text = data.command + '\n' + data.out
>> +            err = etree.SubElement(element, 'system-err')
>> +            err.text = data.err
>> +            err.text += '\n\nstart time: {}\nend time: {}\n'.format(
>> +                data.time.start, data.time.end)
>> +
>> +            for name, result in data.subtests.iteritems():
>> +                sub = self.__make_subcase(name, result, err)
>> +                out = etree.SubElement(sub, 'system-out')
>> +                out.text = 'I am a subtest of {}'.format(name)
>> +                element.append(sub)
>> +
>> +            for attrib, xpath in [('failures', './/testcase/failure'),
>> +                                  ('errors', './/testcase/error'),
>> +                                  ('skipped', './/testcase/skipped'),
>> +                                  ('tests', './/testcase')]:
>> +                element.attrib[attrib] = str(len(element.findall(xpath)))
>> +
>> +        else:
>> +            element = self.__make_case(testname, classname, data)

[Piglit] [PATCH V3 1/4] shader_runner: Add basic SSO support to shader runner

2015-12-05 Thread Timothy Arceri
This sets up the basics for using SSO with shader runner. This will
only support vertex and fragment shaders but is easily extended.

V2: delete pipeline in cleanup code rather than calling gen again,
output error message when SSO fails to link

V3: add new option to [require] to allow separate shader objects to be
enabled for the entire test.

Example use:
[require]
SSO ENABLED

Adding the ENABLED field rather than just using SSO will allow us to use
DISABLED in future should we ever add the ability to automatically run
all tests as SSO.

Example shader:

[require]
GLSL >= 1.50
SSO ENABLED

[vertex shader]

layout(location = 0) in vec4 piglit_vertex;

layout(location = 2) out vec3 a;
layout(location = 3) out vec3 b;

void main()
{
gl_Position = piglit_vertex;
a = vec3(0, 0, 1);
b = vec3(1, 0, 0);
}

[fragment shader]

layout(location = 0) out vec4 out_color;

layout(location = 2) in vec3 b; /* should get vec3(0, 0, 1) */
layout(location = 3) in vec3 a; /* should get vec3(1, 0, 0) */

void main()
{
out_color = vec4(cross(b, a), 1);
}

[test]
draw rect -1 -1 2 2
probe all rgb 0 1 0

Cc: Kenneth Graunke 
---
 tests/shaders/shader_runner.c | 90 +--
 1 file changed, 86 insertions(+), 4 deletions(-)

diff --git a/tests/shaders/shader_runner.c b/tests/shaders/shader_runner.c
index eeb1aac..a5f456f 100644
--- a/tests/shaders/shader_runner.c
+++ b/tests/shaders/shader_runner.c
@@ -123,10 +123,12 @@ GLint shader_string_size;
 const char *vertex_data_start = NULL;
 const char *vertex_data_end = NULL;
 GLuint prog;
+GLuint pipeline;
 size_t num_vbo_rows = 0;
 bool vbo_present = false;
 bool link_ok = false;
 bool prog_in_use = false;
+bool sso_in_use = false;
 GLchar *prog_err_info = NULL;
 GLuint vao = 0;
 GLuint fbo = 0;
@@ -137,12 +139,14 @@ enum states {
requirements,
vertex_shader,
vertex_shader_passthrough,
+   vertex_sso,
vertex_program,
tess_ctrl_shader,
tess_eval_shader,
geometry_shader,
geometry_layout,
fragment_shader,
+   fragment_sso,
fragment_program,
compute_shader,
vertex_data,
@@ -480,6 +484,55 @@ compile_and_bind_program(GLenum target, const char *start, int len)
prog_in_use = true;
 }
 
+void
+create_sso(GLenum target, const char *start, int len)
+{
+   GLuint prog;
+   GLint ok;
+   char *source;
+
+   piglit_require_extension("GL_ARB_separate_shader_objects");
+
+   source = malloc(len + 1);
+   memcpy(source, start, len);
+   source[len] = 0;
+   prog = glCreateShaderProgramv(target, 1,
+   (const GLchar *const *) &source);
+
+   glGetProgramiv(prog, GL_LINK_STATUS, &ok);
+   if (ok) {
+   link_ok = true;
+   } else {
+   GLint size;
+
+   glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &size);
+   prog_err_info = malloc(size);
+
+   glGetProgramInfoLog(prog, size, NULL, prog_err_info);
+
+   fprintf(stderr, "glCreateShaderProgramv(%s) failed: %s\n",
+   target_to_short_name(target),
+   prog_err_info);
+
+   free(prog_err_info);
+   piglit_report_result(PIGLIT_FAIL);
+
+   return;
+   }
+
+   switch (target) {
+   case GL_VERTEX_SHADER:
+   glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, prog);
+   break;
+   case GL_FRAGMENT_SHADER:
+   glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, prog);
+   break;
+   }
+
+   sso_in_use = true;
+   prog_in_use = true;
+}
+
 /**
  * Compare two values given a specified comparison operator
  */
@@ -705,13 +758,14 @@ process_requirement(const char *line)
return;
}
 
-   /* There are four types of requirements that a test can currently
+   /* There are five types of requirements that a test can currently
 * have:
 *
 ** Require that some GL extension be supported
 ** Require some particular versions of GL
 ** Require some particular versions of GLSL
 ** Require some particular number of uniform components
+** Require shaders be built as separate shader objects
 *
 * The tests for GL and GLSL versions can be equal, not equal,
 * less, less-or-equal, greater, or greater-or-equal.  Extension tests
@@ -797,6 +851,10 @@ process_requirement(const char *line)
}
 
piglit_set_rlimit(lim);
+   } else if (string_match("SSO", line)) {
+   line = eat_whitespace(line + 3);
+   if (string_match("ENABLED", line))
+   sso_in_use = true;
}
 }
 
@@ -846,6 +904,13 @@ leave_state(enum states state, const char *line)
compile_glsl(GL_VERTEX_SHADER);
break;
 
+