Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-11 Thread Michel Dänzer
On 2018-05-10 03:08 AM, Matt Turner wrote:
> On Tue, May 8, 2018 at 7:07 AM, Michel Dänzer  wrote:
>> On 2018-05-07 06:49 PM, Michel Dänzer wrote:
>>> On 2018-05-07 06:44 PM, Dylan Baker wrote:
 Quoting Tomi Sarvela (2018-05-07 01:20:46)
>
> piglit/framework$ diff -c profile.py.orig profile.py
> *** profile.py.orig 2018-05-07 19:11:37.649994643 +0300
> --- profile.py  2018-05-07 19:11:46.880994608 +0300
> ***
> *** 584,591 
># more code, and adding side-effects
>test_list = (x for x in test_list if filterby(x))
>
> ! pool.imap(lambda pair: test(pair[0], pair[1], profile, pool),
> !   test_list, chunksize)
>
>def run_profile(profile, test_list):
>"""Run an individual profile."""
> --- 584,591 
># more code, and adding side-effects
>test_list = (x for x in test_list if filterby(x))
>
> ! pool.map(lambda pair: test(pair[0], pair[1], profile, pool),
> !  test_list, chunksize)
>
>def run_profile(profile, test_list):
>"""Run an individual profile."""
>
>
> Tomi

 Juan, can you test this patch and see if it resolves your issue as well? 
 I'm not
 sure why this is fixing things, but if it does I'm happy to merge it and 
 deal
 with any performance problems it introduces later.
>>>
>>> FWIW, this patch doesn't fix the gpu profile running a lot fewer tests
>>> now than it did before 9461d92301e72807eba4776a16a05207e3a16477. I'm
>>> also using -x.
>>
>> I just bisected another problem to
>> 9461d92301e72807eba4776a16a05207e3a16477: The xts-render profile doesn't
>> work anymore. Most of the time, it doesn't even start:
>>
>> [000/480]
>> Traceback (most recent call last):
>>   File "./piglit", line 178, in 
>> main()
>>   File "./piglit", line 174, in main
>> sys.exit(runner(args))
>>   File "/home/daenzer/src/piglit-git/piglit/framework/exceptions.py", line 
>> 51, in _inner
>> func(*args, **kwargs)
>>   File "/home/daenzer/src/piglit-git/piglit/framework/programs/run.py", line 
>> 370, in run
>> backend.finalize({'time_elapsed': time_elapsed.to_json()})
>>   File "/home/daenzer/src/piglit-git/piglit/framework/backends/json.py", 
>> line 163, in finalize
>> assert data['tests']
>> AssertionError
>>
>> Sometimes it doesn't fail like this, but runs only some number of tests
>> between 0 and 480. Very rarely, it manages to run all of them.
>>
>> (I'm using python 3.6 now)
>>
>>
>> Dylan, since a number of issues have been reported to have started with
>> this commit, and you don't seem to have an idea what's wrong with it,
>> can you revert it and anything depending on it for the time being? I'll
>> be happy to test against the issues I've run into when you're ready to
>> try again.
> 
> Do you think that is a good workflow? (Serious question)

Who said anything about "workflow"? It's just damage control. I did say
it would suck.

Would a single change causing multiple regressions be tolerated for so
long if any of the regressions affected the Intel CI?


> Dylan's patches were on the list for three weeks and I think only one
> person (Rafael) tested them. It doesn't make sense to me to
> significantly increase the burden on the person writing the code (by
> reverting all the patches when a problem is found) in exchange for a
> promise to test the patches... which you or anyone else could have
> done during the three weeks Dylan was practically begging for testers.
> 
> It's frustrating for me, just as an observer, to see that not even the
> people who have so loudly complained about the lack of this very
> feature could be bothered to try it out.

I don't know what you mean by "this very feature", but I don't remember
ever "complaining loudly" about the lack of any feature in piglit, so I
guess I'm not one of those people.

Anyway, these patches weren't on my radar; I had no idea they might
cause such issues, and I certainly don't have the bandwidth to even look
at all piglit patches (not that it would make any difference when it
comes to Python), let alone test them. Does that mean I can't report any
issues I run into?


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-09 Thread Matt Turner
On Tue, May 8, 2018 at 7:07 AM, Michel Dänzer  wrote:
> On 2018-05-07 06:49 PM, Michel Dänzer wrote:
>> On 2018-05-07 06:44 PM, Dylan Baker wrote:
>>> Quoting Tomi Sarvela (2018-05-07 01:20:46)

 piglit/framework$ diff -c profile.py.orig profile.py
 *** profile.py.orig 2018-05-07 19:11:37.649994643 +0300
 --- profile.py  2018-05-07 19:11:46.880994608 +0300
 ***
 *** 584,591 
# more code, and adding side-effects
test_list = (x for x in test_list if filterby(x))

 ! pool.imap(lambda pair: test(pair[0], pair[1], profile, pool),
 !   test_list, chunksize)

def run_profile(profile, test_list):
"""Run an individual profile."""
 --- 584,591 
# more code, and adding side-effects
test_list = (x for x in test_list if filterby(x))

 ! pool.map(lambda pair: test(pair[0], pair[1], profile, pool),
 !  test_list, chunksize)

def run_profile(profile, test_list):
"""Run an individual profile."""


 Tomi
>>>
>>> Juan, can you test this patch and see if it resolves your issue as well? 
>>> I'm not
>>> sure why this is fixing things, but if it does I'm happy to merge it and 
>>> deal
>>> with any performance problems it introduces later.
>>
>> FWIW, this patch doesn't fix the gpu profile running a lot fewer tests
>> now than it did before 9461d92301e72807eba4776a16a05207e3a16477. I'm
>> also using -x.
>
> I just bisected another problem to
> 9461d92301e72807eba4776a16a05207e3a16477: The xts-render profile doesn't
> work anymore. Most of the time, it doesn't even start:
>
> [000/480]
> Traceback (most recent call last):
>   File "./piglit", line 178, in 
> main()
>   File "./piglit", line 174, in main
> sys.exit(runner(args))
>   File "/home/daenzer/src/piglit-git/piglit/framework/exceptions.py", line 
> 51, in _inner
> func(*args, **kwargs)
>   File "/home/daenzer/src/piglit-git/piglit/framework/programs/run.py", line 
> 370, in run
> backend.finalize({'time_elapsed': time_elapsed.to_json()})
>   File "/home/daenzer/src/piglit-git/piglit/framework/backends/json.py", line 
> 163, in finalize
> assert data['tests']
> AssertionError
>
> Sometimes it doesn't fail like this, but runs only some number of tests
> between 0 and 480. Very rarely, it manages to run all of them.
>
> (I'm using python 3.6 now)
>
>
> Dylan, since a number of issues have been reported to have started with
> this commit, and you don't seem to have an idea what's wrong with it,
> can you revert it and anything depending on it for the time being? I'll
> be happy to test against the issues I've run into when you're ready to
> try again.

Do you think that is a good workflow? (Serious question)

Dylan's patches were on the list for three weeks and I think only one
person (Rafael) tested them. It doesn't make sense to me to
significantly increase the burden on the person writing the code (by
reverting all the patches when a problem is found) in exchange for a
promise to test the patches... which you or anyone else could have
done during the three weeks Dylan was practically begging for testers.

It's frustrating for me, just as an observer, to see that not even the
people who have so loudly complained about the lack of this very
feature could be bothered to try it out.
___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-09 Thread Michel Dänzer
On 2018-05-08 07:19 PM, Dylan Baker wrote:
> Quoting Michel Dänzer (2018-05-08 07:07:34)
>>
>> I just bisected another problem to
>> 9461d92301e72807eba4776a16a05207e3a16477: The xts-render profile doesn't
>> work anymore. Most of the time, it doesn't even start:
>>
>> [000/480]  
>> Traceback (most recent call last):
>>   File "./piglit", line 178, in 
>> main()
>>   File "./piglit", line 174, in main
>> sys.exit(runner(args))
>>   File "/home/daenzer/src/piglit-git/piglit/framework/exceptions.py", line 
>> 51, in _inner
>> func(*args, **kwargs)
>>   File "/home/daenzer/src/piglit-git/piglit/framework/programs/run.py", line 
>> 370, in run
>> backend.finalize({'time_elapsed': time_elapsed.to_json()})
>>   File "/home/daenzer/src/piglit-git/piglit/framework/backends/json.py", 
>> line 163, in finalize
>> assert data['tests']
>> AssertionError
>>
>> Sometimes it doesn't fail like this, but runs only some number of tests
>> between 0 and 480. Very rarely, it manages to run all of them.
>>
>> (I'm using python 3.6 now)
>>
>>
>> Dylan, since a number of issues have been reported to have started with
>> this commit, and you don't seem to have an idea what's wrong with it,
>> can you revert it and anything depending on it for the time being? I'll
>> be happy to test against the issues I've run into when you're ready to
>> try again.
> 
> That would mean reverting nearly 20 commits

I appreciate that would suck, but so does having piglit in a regressed
state for a week, with slow progress towards fixing all the regressions.

> and giving up considerable runtime reductions for us

The thing is, while I can see that the time until running the first test
has been greatly reduced,

 ./piglit run -x basic-arithmetic-uvec2-texture2d -x
glx-multithread-texture --process-isolation false gpu

takes significantly longer to finish now (~8 minutes) than before
9461d92301e72807eba4776a16a05207e3a16477 (~4.5 minutes), with radeonsi
on an 8-core Ryzen CPU.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer



___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-08 Thread Dylan Baker
Quoting Michel Dänzer (2018-05-08 07:07:34)
> On 2018-05-07 06:49 PM, Michel Dänzer wrote:
> > On 2018-05-07 06:44 PM, Dylan Baker wrote:
> >> Quoting Tomi Sarvela (2018-05-07 01:20:46)
> >>>
> >>> piglit/framework$ diff -c profile.py.orig profile.py
> >>> *** profile.py.orig 2018-05-07 19:11:37.649994643 +0300
> >>> --- profile.py  2018-05-07 19:11:46.880994608 +0300
> >>> ***
> >>> *** 584,591 
> >>># more code, and adding side-effects
> >>>test_list = (x for x in test_list if filterby(x))
> >>>
> >>> ! pool.imap(lambda pair: test(pair[0], pair[1], profile, pool),
> >>> !   test_list, chunksize)
> >>>
> >>>def run_profile(profile, test_list):
> >>>"""Run an individual profile."""
> >>> --- 584,591 
> >>># more code, and adding side-effects
> >>>test_list = (x for x in test_list if filterby(x))
> >>>
> >>> ! pool.map(lambda pair: test(pair[0], pair[1], profile, pool),
> >>> !  test_list, chunksize)
> >>>
> >>>def run_profile(profile, test_list):
> >>>"""Run an individual profile."""
> >>>
> >>>
> >>> Tomi
> >>
> >> Juan, can you test this patch and see if it resolves your issue as well? 
> >> I'm not
> >> sure why this is fixing things, but if it does I'm happy to merge it and 
> >> deal
> >> with any performance problems it introduces later.
> > 
> > FWIW, this patch doesn't fix the gpu profile running a lot fewer tests
> > now than it did before 9461d92301e72807eba4776a16a05207e3a16477. I'm
> > also using -x.
> 
> I just bisected another problem to
> 9461d92301e72807eba4776a16a05207e3a16477: The xts-render profile doesn't
> work anymore. Most of the time, it doesn't even start:
> 
> [000/480]  
> Traceback (most recent call last):
>   File "./piglit", line 178, in 
> main()
>   File "./piglit", line 174, in main
> sys.exit(runner(args))
>   File "/home/daenzer/src/piglit-git/piglit/framework/exceptions.py", line 
> 51, in _inner
> func(*args, **kwargs)
>   File "/home/daenzer/src/piglit-git/piglit/framework/programs/run.py", line 
> 370, in run
> backend.finalize({'time_elapsed': time_elapsed.to_json()})
>   File "/home/daenzer/src/piglit-git/piglit/framework/backends/json.py", line 
> 163, in finalize
> assert data['tests']
> AssertionError
> 
> Sometimes it doesn't fail like this, but runs only some number of tests
> between 0 and 480. Very rarely, it manages to run all of them.
> 
> (I'm using python 3.6 now)
> 

I have a patch for this on the list (I CC'd you). I've identified the other
problem, and I have a fix for part of it; the second part is obnoxious.

The first part is that pool.imap with an iterator isn't reliable; I've got a
patch for that. The second is that quick_shader has a filter that doesn't do
the same thing if called multiple times. I'm trying to decide how best to
solve that right now.
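
A minimal sketch of the difference being discussed (illustrative only, not
piglit code; it assumes a thread pool from multiprocessing.dummy, which is
presumably what profile.py uses given the lambda, and the record() helper is
invented for the demonstration):

    from multiprocessing.dummy import Pool  # thread pool: lambdas/closures are fine
    import time

    calls = []

    def record(n):
        time.sleep(0.001)   # stand-in for running one test
        calls.append(n)

    if __name__ == '__main__':
        # pool.map blocks until every item from the generator has been processed.
        with Pool(4) as pool:
            pool.map(record, (i for i in range(1000)), 10)
        print(len(calls))   # always 1000

        calls.clear()
        # pool.imap returns a lazy iterator immediately; if nothing drains it
        # before the pool is torn down (Pool.__exit__ calls terminate()), the
        # still-queued work is simply dropped, which matches the intermittent
        # early exits described in this thread.
        with Pool(4) as pool:
            pool.imap(record, (i for i in range(1000)), 10)
        print(len(calls))   # usually far fewer than 1000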

Dylan


___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-08 Thread Dylan Baker
Quoting Michel Dänzer (2018-05-08 07:07:34)
> On 2018-05-07 06:49 PM, Michel Dänzer wrote:
> > On 2018-05-07 06:44 PM, Dylan Baker wrote:
> >> Quoting Tomi Sarvela (2018-05-07 01:20:46)
> >>>
> >>> piglit/framework$ diff -c profile.py.orig profile.py
> >>> *** profile.py.orig 2018-05-07 19:11:37.649994643 +0300
> >>> --- profile.py  2018-05-07 19:11:46.880994608 +0300
> >>> ***
> >>> *** 584,591 
> >>># more code, and adding side-effects
> >>>test_list = (x for x in test_list if filterby(x))
> >>>
> >>> ! pool.imap(lambda pair: test(pair[0], pair[1], profile, pool),
> >>> !   test_list, chunksize)
> >>>
> >>>def run_profile(profile, test_list):
> >>>"""Run an individual profile."""
> >>> --- 584,591 
> >>># more code, and adding side-effects
> >>>test_list = (x for x in test_list if filterby(x))
> >>>
> >>> ! pool.map(lambda pair: test(pair[0], pair[1], profile, pool),
> >>> !  test_list, chunksize)
> >>>
> >>>def run_profile(profile, test_list):
> >>>"""Run an individual profile."""
> >>>
> >>>
> >>> Tomi
> >>
> >> Juan, can you test this patch and see if it resolves your issue as well? 
> >> I'm not
> >> sure why this is fixing things, but if it does I'm happy to merge it and 
> >> deal
> >> with any performance problems it introduces later.
> > 
> > FWIW, this patch doesn't fix the gpu profile running a lot fewer tests
> > now than it did before 9461d92301e72807eba4776a16a05207e3a16477. I'm
> > also using -x.
> 
> I just bisected another problem to
> 9461d92301e72807eba4776a16a05207e3a16477: The xts-render profile doesn't
> work anymore. Most of the time, it doesn't even start:
> 
> [000/480]  
> Traceback (most recent call last):
>   File "./piglit", line 178, in 
> main()
>   File "./piglit", line 174, in main
> sys.exit(runner(args))
>   File "/home/daenzer/src/piglit-git/piglit/framework/exceptions.py", line 
> 51, in _inner
> func(*args, **kwargs)
>   File "/home/daenzer/src/piglit-git/piglit/framework/programs/run.py", line 
> 370, in run
> backend.finalize({'time_elapsed': time_elapsed.to_json()})
>   File "/home/daenzer/src/piglit-git/piglit/framework/backends/json.py", line 
> 163, in finalize
> assert data['tests']
> AssertionError
> 
> Sometimes it doesn't fail like this, but runs only some number of tests
> between 0 and 480. Very rarely, it manages to run all of them.
> 
> (I'm using python 3.6 now)
> 
> 
> Dylan, since a number of issues have been reported to have started with
> this commit, and you don't seem to have an idea what's wrong with it,
> can you revert it and anything depending on it for the time being? I'll
> be happy to test against the issues I've run into when you're ready to
> try again.

That would mean reverting nearly 20 commits and giving up considerable runtime
reductions for us; I'd like to avoid that if possible. Juan also ran into this
problem, and I'm trying to see if it's all the same problem that Tomi narrowed
down. Do you run with -1 or -c? If you don't, can you try running with either
of those and see if it fixes your problems? If it doesn't, then I'll start
reverting.

Dylan

> 
> 
> -- 
> Earthling Michel Dänzer   |   http://www.amd.com
> Libre software enthusiast | Mesa and X developer
> 


___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-08 Thread Michel Dänzer
On 2018-05-07 06:49 PM, Michel Dänzer wrote:
> On 2018-05-07 06:44 PM, Dylan Baker wrote:
>> Quoting Tomi Sarvela (2018-05-07 01:20:46)
>>>
>>> piglit/framework$ diff -c profile.py.orig profile.py
>>> *** profile.py.orig 2018-05-07 19:11:37.649994643 +0300
>>> --- profile.py  2018-05-07 19:11:46.880994608 +0300
>>> ***
>>> *** 584,591 
>>># more code, and adding side-effects
>>>test_list = (x for x in test_list if filterby(x))
>>>
>>> ! pool.imap(lambda pair: test(pair[0], pair[1], profile, pool),
>>> !   test_list, chunksize)
>>>
>>>def run_profile(profile, test_list):
>>>"""Run an individual profile."""
>>> --- 584,591 
>>># more code, and adding side-effects
>>>test_list = (x for x in test_list if filterby(x))
>>>
>>> ! pool.map(lambda pair: test(pair[0], pair[1], profile, pool),
>>> !  test_list, chunksize)
>>>
>>>def run_profile(profile, test_list):
>>>"""Run an individual profile."""
>>>
>>>
>>> Tomi
>>
>> Juan, can you test this patch and see if it resolves your issue as well? I'm 
>> not
>> sure why this is fixing things, but if it does I'm happy to merge it and deal
>> with any performance problems it introduces later.
> 
> FWIW, this patch doesn't fix the gpu profile running a lot fewer tests
> now than it did before 9461d92301e72807eba4776a16a05207e3a16477. I'm
> also using -x.

I just bisected another problem to
9461d92301e72807eba4776a16a05207e3a16477: The xts-render profile doesn't
work anymore. Most of the time, it doesn't even start:

[000/480]  
Traceback (most recent call last):
  File "./piglit", line 178, in 
main()
  File "./piglit", line 174, in main
sys.exit(runner(args))
  File "/home/daenzer/src/piglit-git/piglit/framework/exceptions.py", line 51, 
in _inner
func(*args, **kwargs)
  File "/home/daenzer/src/piglit-git/piglit/framework/programs/run.py", line 
370, in run
backend.finalize({'time_elapsed': time_elapsed.to_json()})
  File "/home/daenzer/src/piglit-git/piglit/framework/backends/json.py", line 
163, in finalize
assert data['tests']
AssertionError

Sometimes it doesn't fail like this, but runs only some number of tests
between 0 and 480. Very rarely, it manages to run all of them.

(I'm using python 3.6 now)


Dylan, since a number of issues have been reported to have started with
this commit, and you don't seem to have an idea what's wrong with it,
can you revert it and anything depending on it for the time being? I'll
be happy to test against the issues I've run into when you're ready to
try again.


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer



___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-08 Thread Juan A. Suarez Romero
On Mon, 2018-05-07 at 09:44 -0700, Dylan Baker wrote:
> Juan, can you test this patch and see if it resolves your issue as well? I'm 
> not
> sure why this is fixing things, but if it does I'm happy to merge it and deal
> with any performance problems it introduces later.

Unfortunately, it still doesn't solve the problem:

$ ./piglit-run.py tests/crucible.py output
/home/igalia/jasuarez/piglit/framework/test/base.py:78: UserWarning: Timeouts
are not available
  warnings.warn('Timeouts are not available')
[000/553]  
Traceback (most recent call last):
  File "./piglit-run.py", line 39, in 
run([i.decode('utf-8') for i in sys.argv[1:]])
  File "/home/igalia/jasuarez/piglit/framework/exceptions.py", line 51, in
_inner
func(*args, **kwargs)
  File "/home/igalia/jasuarez/piglit/framework/programs/run.py", line 370, in
run
backend.finalize({'time_elapsed': time_elapsed.to_json()})
  File "/home/igalia/jasuarez/piglit/framework/backends/json.py", line 163, in
finalize
assert data['tests']
AssertionError


___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-07 Thread Michel Dänzer
On 2018-05-07 06:44 PM, Dylan Baker wrote:
> Quoting Tomi Sarvela (2018-05-07 01:20:46)
>> On 05/07/2018 10:17 AM, Tomi Sarvela wrote:
>>> On 05/04/2018 07:57 PM, Dylan Baker wrote:
 Quoting Juan A. Suarez Romero (2018-05-04 04:50:27)
> On Fri, 2018-05-04 at 12:03 +0200, Juan A. Suarez Romero wrote:
>> On Wed, 2018-05-02 at 13:57 -0700, Dylan Baker wrote:
>>> Quoting Juan A. Suarez Romero (2018-05-02 09:49:08)
 Hi, Dylan.

 I see you've pushed this series.

 Now, when I'm trying to run some profiles (mainly, tests/crucible and
 tests/khr_gl* ), seems they are broken:

 [/7776]
 Traceback (most recent call last):
    File "./piglit", line 178, in 
  main()
    File "./piglit", line 174, in main
  sys.exit(runner(args))
    File "/home/igalia/jasuarez/piglit/framework/exceptions.py", 
 line 51, in
 _inner
  func(*args, **kwargs)
    File "/home/igalia/jasuarez/piglit/framework/programs/run.py", 
 line 370, in
 run
  backend.finalize({'time_elapsed': time_elapsed.to_json()})
    File "/home/igalia/jasuarez/piglit/framework/backends/json.py", 
 line 163, in
 finalize
  assert data['tests']
 AssertionError

  J.A.

>>>
>>> Dang.
>>>
>>> I can't reproduce any failures with crucible, though I did make it
>>> thread safe and fix the use of a config file :)
>>>
>>> I can't get the glcts binary to run; no matter what target I build for, I
>>> run into either EGL or GL errors.
>>>
>>
>> More info on this issue.
>>
>> It seems it happens with the profiles that require an external runner
>> (crucible, vk-gl-cts, deqp, ...).
>>
>>
>> When executing, it says it will run all the tests, but sometimes it just
>> executes one test, other times 2, and other times none. It is in the last
>> case that the error above is shown.
>>
>> Still don't know why.
>>
>
>
> Found the problem in this commit:
>
> commit 9461d92301e72807eba4776a16a05207e3a16477
> Author: Dylan Baker 
> Date:   Mon Mar 26 15:23:17 2018 -0700
>
>  framework/profile: Add a __len__ method to TestProfile
>  This exposes a standard interface for getting the number of 
> tests in a
>  profile, which is itself nice. It will also allow us to 
> encapsulate the
>  differences between the various profiles added in this series.
>  Tested-by: Rafael Antognolli 
>
>

 I'm really having trouble reproducing this, the vulkan cts and 
 crucible both run
 fine for me, no matter how many times I stop and start them. I even 
 tried with
 python2 and couldn't reproduce. Can you give me some more information 
 about your
 system?
>>>
>>> I think I've hit this same issue on our CI.
>>>
>>> Symptoms match so that we sometimes run the whole 25k piglit gbm 
>>> testset, sometimes we stop around the test 400-600. This behaviour can 
>>> change with subsequent runs without rebooting the machine. Test where 
>>> run is stopped is usually the same, and changes if filters change.
>>>
>>> I can reproduce this with -d / --dry-run so the tests themselves are not 
>>> an issue. Filtering with large -x / --exclude-tests might play a part. 
>>> The command line is max 25kB, so there shouldn't be cutoff point with 
>>> partial regex, which then would match too much.
>>>
>>> I'm just starting to investigate where does the test list size drop so 
>>> dramatically, probably by inserting testlist size debugs around to see 
>>> where it takes me.
>>>
>>> Environment: Ubuntu 18.04 LTS with default mesa
>>> Kernel: DRM-Tip HEAD or Ubuntu default.
>>>
>>> Commandline is built with bash array from blacklist. This looks correct, 
>>> and sometimes works correctly. Eg
>>>
>>> ./piglit run tests/gpu ~/results -d -o -l verbose "${OPTIONS[@]}"
>>>
>>> where $OPTIONS is an array of
>>> '-x', 'timestamp-get',
>>> '-x', 'glsl-routing', ...
>>>
>>> Successful CI runlog:
>>> http://gfx-ci.fi.intel.com/tree/drm-tip/CI_DRM_4148/pig-glk-j5005/run0.log
>>>
>>> Unsuccessful CI runlog:
>>> http://gfx-ci.fi.intel.com/tree/drm-tip/CI_DRM_4149/pig-glk-j5005/run0.log
>>>
>>> Between those two runs, only kernel has changed.
>>>
>>> The issue is easiest to reproduce with GLK. HSW seems to be somewhat 
>>> affected too, so the host speed might play a part.
>>
>> Patch below makes the issue disappear for my GLK testrig.
>>
>> With multiprocessing.pool.imap I'm getting roughly 50% correct behaviour
>> and 50% early exits on dry-runs.
>>
>> With multiprocessing.pool.map I'm not getting early exits at all.
>>
>> Sample size is ~50 runs for both setups.

Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-07 Thread Dylan Baker
Quoting Tomi Sarvela (2018-05-07 01:20:46)
> On 05/07/2018 10:17 AM, Tomi Sarvela wrote:
> > On 05/04/2018 07:57 PM, Dylan Baker wrote:
> >> Quoting Juan A. Suarez Romero (2018-05-04 04:50:27)
> >>> On Fri, 2018-05-04 at 12:03 +0200, Juan A. Suarez Romero wrote:
>  On Wed, 2018-05-02 at 13:57 -0700, Dylan Baker wrote:
> > Quoting Juan A. Suarez Romero (2018-05-02 09:49:08)
> >> Hi, Dylan.
> >>
> >> I see you've pushed this series.
> >>
> >> Now, when I'm trying to run some profiles (mainly, tests/crucible and
> >> tests/khr_gl* ), seems they are broken:
> >>
> >> [/7776]
> >> Traceback (most recent call last):
> >>    File "./piglit", line 178, in <module>
> >>  main()
> >>    File "./piglit", line 174, in main
> >>  sys.exit(runner(args))
> >>    File "/home/igalia/jasuarez/piglit/framework/exceptions.py", 
> >> line 51, in
> >> _inner
> >>  func(*args, **kwargs)
> >>    File "/home/igalia/jasuarez/piglit/framework/programs/run.py", 
> >> line 370, in
> >> run
> >>  backend.finalize({'time_elapsed': time_elapsed.to_json()})
> >>    File "/home/igalia/jasuarez/piglit/framework/backends/json.py", 
> >> line 163, in
> >> finalize
> >>  assert data['tests']
> >> AssertionError
> >>
> >>  J.A.
> >>
> >
> > Dang.
> >
> > I can't reproduce any failures with crucible, though I did make it
> > thread safe and fix the use of a config file :)
> >
> > I can't get the glcts binary to run; no matter what target I build for, I
> > run into either EGL or GL errors.
> >
> 
>  More info on this issue.
> 
>  It seems it happens with the profiles that require an external runner
>  (crucible, vk-gl-cts, deqp, ...).
> 
> 
>  When executing, it says it will run all the tests, but sometimes it just
>  executes one test, other times 2, and other times none. It is in the last
>  case that the error above is shown.
> 
>  Still don't know why.
> 
> >>>
> >>>
> >>> Found the problem in this commit:
> >>>
> >>> commit 9461d92301e72807eba4776a16a05207e3a16477
> >>> Author: Dylan Baker 
> >>> Date:   Mon Mar 26 15:23:17 2018 -0700
> >>>
> >>>  framework/profile: Add a __len__ method to TestProfile
> >>>  This exposes a standard interface for getting the number of 
> >>> tests in a
> >>>  profile, which is itself nice. It will also allow us to 
> >>> encapsulate the
> >>>  differences between the various profiles added in this series.
> >>>  Tested-by: Rafael Antognolli 
> >>>
> >>>
> >>
> >> I'm really having trouble reproducing this, the vulkan cts and 
> >> crucible both run
> >> fine for me, no matter how many times I stop and start them. I even 
> >> tried with
> >> python2 and couldn't reproduce. Can you give me some more information 
> >> about your
> >> system?
> > 
> > I think I've hit this same issue on our CI.
> > 
> > Symptoms match so that we sometimes run the whole 25k piglit gbm 
> > testset, sometimes we stop around the test 400-600. This behaviour can 
> > change with subsequent runs without rebooting the machine. Test where 
> > run is stopped is usually the same, and changes if filters change.
> > 
> > I can reproduce this with -d / --dry-run so the tests themselves are not 
> > an issue. Filtering with large -x / --exclude-tests might play a part. 
> > The command line is max 25kB, so there shouldn't be cutoff point with 
> > partial regex, which then would match too much.
> > 
> > I'm just starting to investigate where does the test list size drop so 
> > dramatically, probably by inserting testlist size debugs around to see 
> > where it takes me.
> > 
> > Environment: Ubuntu 18.04 LTS with default mesa
> > Kernel: DRM-Tip HEAD or Ubuntu default.
> > 
> > Commandline is built with bash array from blacklist. This looks correct, 
> > and sometimes works correctly. Eg
> > 
> > ./piglit run tests/gpu ~/results -d -o -l verbose "${OPTIONS[@]}"
> > 
> > where $OPTIONS is an array of
> > '-x', 'timestamp-get',
> > '-x', 'glsl-routing', ...
> > 
> > Successful CI runlog:
> > http://gfx-ci.fi.intel.com/tree/drm-tip/CI_DRM_4148/pig-glk-j5005/run0.log
> > 
> > Unsuccessful CI runlog:
> > http://gfx-ci.fi.intel.com/tree/drm-tip/CI_DRM_4149/pig-glk-j5005/run0.log
> > 
> > Between those two runs, only kernel has changed.
> > 
> > The issue is easiest to reproduce with GLK. HSW seems to be somewhat 
> > affected too, so the host speed might play a part.
> 
> Patch below makes the issue disappear for my GLK testrig.
> 
> With multiprocessing.pool.imap I'm getting roughly 50% correct behaviour
> and 50% early exits on dry-runs.
> 
> With multiprocessing.pool.map I'm not getting early exits at all.
> 
> Sample size is ~50 runs for both setups.
> 
> With the testset of 26179 on GLK dry-run, the runtime difference is
> negligible: pool.map 49s vs pool.imap 50s

Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-07 Thread Tomi Sarvela

On 05/07/2018 10:17 AM, Tomi Sarvela wrote:

On 05/04/2018 07:57 PM, Dylan Baker wrote:

Quoting Juan A. Suarez Romero (2018-05-04 04:50:27)

On Fri, 2018-05-04 at 12:03 +0200, Juan A. Suarez Romero wrote:

On Wed, 2018-05-02 at 13:57 -0700, Dylan Baker wrote:

Quoting Juan A. Suarez Romero (2018-05-02 09:49:08)

Hi, Dylan.

I see you've pushed this series.

Now, when I'm trying to run some profiles (mainly, tests/crucible and
tests/khr_gl* ), seems they are broken:

[/7776]
Traceback (most recent call last):
   File "./piglit", line 178, in 
 main()
   File "./piglit", line 174, in main
 sys.exit(runner(args))
   File "/home/igalia/jasuarez/piglit/framework/exceptions.py", 
line 51, in

_inner
 func(*args, **kwargs)
   File "/home/igalia/jasuarez/piglit/framework/programs/run.py", 
line 370, in

run
 backend.finalize({'time_elapsed': time_elapsed.to_json()})
   File "/home/igalia/jasuarez/piglit/framework/backends/json.py", 
line 163, in

finalize
 assert data['tests']
AssertionError

 J.A.



Dang.

I can't reproduce any failures with crucible, though I did make it
thread safe and fix the use of a config file :)

I can't get the glcts binary to run; no matter what target I build for, I
run into either EGL or GL errors.



More info on this issue.

It seems it happens with the profiles that require an external runner
(crucible, vk-gl-cts, deqp, ...).


When executing, it says it will run all the tests, but sometimes it just
executes one test, other times 2, and other times none. It is in the last
case that the error above is shown.

Still don't know why.




Found the problem in this commit:

commit 9461d92301e72807eba4776a16a05207e3a16477
Author: Dylan Baker 
Date:   Mon Mar 26 15:23:17 2018 -0700

 framework/profile: Add a __len__ method to TestProfile
 This exposes a standard interface for getting the number of 
tests in a
 profile, which is itself nice. It will also allow us to 
encapsulate the

 differences between the various profiles added in this series.
 Tested-by: Rafael Antognolli 




I'm really having trouble reproducing this, the vulkan cts and 
crucible both run
fine for me, no matter how many times I stop and start them. I even 
tried with
python2 and couldn't reproduce. Can you give me some more information 
about your

system?


I think I've hit this same issue on our CI.

Symptoms match so that we sometimes run the whole 25k piglit gbm 
testset, sometimes we stop around the test 400-600. This behaviour can 
change with subsequent runs without rebooting the machine. Test where 
run is stopped is usually the same, and changes if filters change.


I can reproduce this with -d / --dry-run so the tests themselves are not 
an issue. Filtering with large -x / --exclude-tests might play a part. 
The command line is max 25kB, so there shouldn't be cutoff point with 
partial regex, which then would match too much.


I'm just starting to investigate where does the test list size drop so 
dramatically, probably by inserting testlist size debugs around to see 
where it takes me.


Environment: Ubuntu 18.04 LTS with default mesa
Kernel: DRM-Tip HEAD or Ubuntu default.

Commandline is built with bash array from blacklist. This looks correct, 
and sometimes works correctly. Eg


./piglit run tests/gpu ~/results -d -o -l verbose "${OPTIONS[@]}"

where $OPTIONS is an array of
'-x', 'timestamp-get',
'-x', 'glsl-routing', ...

Successful CI runlog:
http://gfx-ci.fi.intel.com/tree/drm-tip/CI_DRM_4148/pig-glk-j5005/run0.log

Unsuccessful CI runlog:
http://gfx-ci.fi.intel.com/tree/drm-tip/CI_DRM_4149/pig-glk-j5005/run0.log

Between those two runs, only kernel has changed.

The issue is easiest to reproduce with GLK. HSW seems to be somewhat 
affected too, so the host speed might play a part.


Patch below makes the issue disappear for my GLK testrig.

With multiprocessing.pool.imap I'm getting roughly 50% correct behaviour
and 50% early exits on dry-runs.


With multiprocessing.pool.map I'm not getting early exits at all.

Sample size is ~50 runs for both setups.

With the testset of 26179 on GLK dry-run, the runtime difference is 
negligible: pool.map 49s vs pool.imap 50s




piglit/framework$ diff -c profile.py.orig profile.py
*** profile.py.orig 2018-05-07 19:11:37.649994643 +0300
--- profile.py  2018-05-07 19:11:46.880994608 +0300
***
*** 584,591 
  # more code, and adding side-effects
  test_list = (x for x in test_list if filterby(x))

! pool.imap(lambda pair: test(pair[0], pair[1], profile, pool),
!   test_list, chunksize)

  def run_profile(profile, test_list):
  """Run an individual profile."""
--- 584,591 
  # more code, and adding side-effects
  test_list = (x for x in test_list if filterby(x))

! pool.map(lambda pair: test(pair[0], pair[1], profile, pool),
!  test_list, chunksize)

Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-07 Thread Tomi Sarvela

On 05/04/2018 07:57 PM, Dylan Baker wrote:

Quoting Juan A. Suarez Romero (2018-05-04 04:50:27)

On Fri, 2018-05-04 at 12:03 +0200, Juan A. Suarez Romero wrote:

On Wed, 2018-05-02 at 13:57 -0700, Dylan Baker wrote:

Quoting Juan A. Suarez Romero (2018-05-02 09:49:08)

Hi, Dylan.

I see you've pushed this series.

Now, when I'm trying to run some profiles (mainly, tests/crucible and
tests/khr_gl* ), seems they are broken:

[/7776]
Traceback (most recent call last):
   File "./piglit", line 178, in 
 main()
   File "./piglit", line 174, in main
 sys.exit(runner(args))
   File "/home/igalia/jasuarez/piglit/framework/exceptions.py", line 51, in
_inner
 func(*args, **kwargs)
   File "/home/igalia/jasuarez/piglit/framework/programs/run.py", line 370, in
run
 backend.finalize({'time_elapsed': time_elapsed.to_json()})
   File "/home/igalia/jasuarez/piglit/framework/backends/json.py", line 163, in
finalize
 assert data['tests']
AssertionError

 J.A.



Dang.

I can't reproduce any failures with crucible, though I did make it thread safe
and fix the use of a config file :)

I can't get the glcts binary to run; no matter what target I build for, I run
into either EGL or GL errors.



More info on this issue.

It seems it happens with the profiles that require an external runner
(crucible, vk-gl-cts, deqp, ...).


When executing, it says it will run all the tests, but sometimes it just
executes one test, other times 2, and other times none. It is in the last
case that the error above is shown.

Still don't know why.




Found the problem in this commit:

commit 9461d92301e72807eba4776a16a05207e3a16477
Author: Dylan Baker 
Date:   Mon Mar 26 15:23:17 2018 -0700

 framework/profile: Add a __len__ method to TestProfile
 
 This exposes a standard interface for getting the number of tests in a

 profile, which is itself nice. It will also allow us to encapsulate the
 differences between the various profiles added in this series.
 
 Tested-by: Rafael Antognolli 





I'm really having trouble reproducing this, the vulkan cts and crucible both run
fine for me, no matter how many times I stop and start them. I even tried with
python2 and couldn't reproduce. Can you give me some more information about your
system?


I think I've hit this same issue on our CI.

Symptoms match so that we sometimes run the whole 25k piglit gbm 
testset, sometimes we stop around the test 400-600. This behaviour can 
change with subsequent runs without rebooting the machine. Test where 
run is stopped is usually the same, and changes if filters change.


I can reproduce this with -d / --dry-run so the tests themselves are not 
an issue. Filtering with large -x / --exclude-tests might play a part. 
The command line is max 25kB, so there shouldn't be cutoff point with 
partial regex, which then would match too much.


I'm just starting to investigate where does the test list size drop so 
dramatically, probably by inserting testlist size debugs around to see 
where it takes me.


Environment: Ubuntu 18.04 LTS with default mesa
Kernel: DRM-Tip HEAD or Ubuntu default.

Commandline is built with bash array from blacklist. This looks correct, 
and sometimes works correctly. Eg


./piglit run tests/gpu ~/results -d -o -l verbose "${OPTIONS[@]}"

where $OPTIONS is an array of
'-x', 'timestamp-get',
'-x', 'glsl-routing', ...

Successful CI runlog:
http://gfx-ci.fi.intel.com/tree/drm-tip/CI_DRM_4148/pig-glk-j5005/run0.log

Unsuccessful CI runlog:
http://gfx-ci.fi.intel.com/tree/drm-tip/CI_DRM_4149/pig-glk-j5005/run0.log

Between those two runs, only kernel has changed.

The issue is easiest to reproduce with GLK. HSW seems to be somewhat 
affected too, so the host speed might play a part.


Tomi
--
Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo
___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-04 Thread Dylan Baker
Quoting Juan A. Suarez Romero (2018-05-04 04:50:27)
> On Fri, 2018-05-04 at 12:03 +0200, Juan A. Suarez Romero wrote:
> > On Wed, 2018-05-02 at 13:57 -0700, Dylan Baker wrote:
> > > Quoting Juan A. Suarez Romero (2018-05-02 09:49:08)
> > > > Hi, Dylan.
> > > > 
> > > > I see you've pushed this series.
> > > > 
> > > > Now, when I'm trying to run some profiles (mainly, tests/crucible and
> > > > tests/khr_gl* ), seems they are broken:
> > > > 
> > > > [/7776]
> > > > Traceback (most recent call last):
> > > >   File "./piglit", line 178, in <module>
> > > > main()
> > > >   File "./piglit", line 174, in main
> > > > sys.exit(runner(args))
> > > >   File "/home/igalia/jasuarez/piglit/framework/exceptions.py", line 51, 
> > > > in
> > > > _inner
> > > > func(*args, **kwargs)
> > > >   File "/home/igalia/jasuarez/piglit/framework/programs/run.py", line 
> > > > 370, in
> > > > run
> > > > backend.finalize({'time_elapsed': time_elapsed.to_json()})
> > > >   File "/home/igalia/jasuarez/piglit/framework/backends/json.py", line 
> > > > 163, in
> > > > finalize
> > > > assert data['tests']
> > > > AssertionError
> > > > 
> > > > J.A.
> > > > 
> > > 
> > > Dang.
> > > 
> > > I can't reproduce any failures with crucible, though I did make it thread
> > > safe and fix the use of a config file :)
> > > 
> > > I can't get the glcts binary to run; no matter what target I build for, I
> > > run into either EGL or GL errors.
> > > 
> > 
> > More info on this issue.
> > 
> > It seems it happens with the profiles that require an external runner
> > (crucible, vk-gl-cts, deqp, ...).
> > 
> > 
> > When executing, it says it will run all the tests, but sometimes it just
> > executes one test, other times 2, and other times none. It is in the last
> > case that the error above is shown.
> > 
> > Still don't know why.
> > 
> 
> 
> Found the problem in this commit:
> 
> commit 9461d92301e72807eba4776a16a05207e3a16477
> Author: Dylan Baker 
> Date:   Mon Mar 26 15:23:17 2018 -0700
> 
> framework/profile: Add a __len__ method to TestProfile
> 
> This exposes a standard interface for getting the number of tests in a
> profile, which is itself nice. It will also allow us to encapsulate the
> differences between the various profiles added in this series.
> 
> Tested-by: Rafael Antognolli 
> 
> 

I'm really having trouble reproducing this, the vulkan cts and crucible both run
fine for me, no matter how many times I stop and start them. I even tried with
python2 and couldn't reproduce. Can you give me some more information about your
system?

Dylan


___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-04 Thread Juan A. Suarez Romero
On Fri, 2018-05-04 at 12:03 +0200, Juan A. Suarez Romero wrote:
> On Wed, 2018-05-02 at 13:57 -0700, Dylan Baker wrote:
> > Quoting Juan A. Suarez Romero (2018-05-02 09:49:08)
> > > Hi, Dylan.
> > > 
> > > I see you've pushed this series.
> > > 
> > > Now, when I'm trying to run some profiles (mainly, tests/crucible and
> > > tests/khr_gl* ), seems they are broken:
> > > 
> > > [/7776]
> > > Traceback (most recent call last):
> > >   File "./piglit", line 178, in <module>
> > > main()
> > >   File "./piglit", line 174, in main
> > > sys.exit(runner(args))
> > >   File "/home/igalia/jasuarez/piglit/framework/exceptions.py", line 51, in
> > > _inner
> > > func(*args, **kwargs)
> > >   File "/home/igalia/jasuarez/piglit/framework/programs/run.py", line 
> > > 370, in
> > > run
> > > backend.finalize({'time_elapsed': time_elapsed.to_json()})
> > >   File "/home/igalia/jasuarez/piglit/framework/backends/json.py", line 
> > > 163, in
> > > finalize
> > > assert data['tests']
> > > AssertionError
> > > 
> > > J.A.
> > > 
> > 
> > Dang.
> > 
> > I can't reproduce any failures with crucible, though I did make it thread
> > safe and fix the use of a config file :)
> > 
> > I can't get the glcts binary to run; no matter what target I build for, I
> > run into either EGL or GL errors.
> > 
> 
> More info on this issue.
> 
> It seems it happens with the profiles that require an external runner
> (crucible, vk-gl-cts, deqp, ...).
> 
> 
> When executing, it says it will run all the tests, but sometimes it just
> executes one test, other times 2, and other times none. It is in the last
> case that the error above is shown.
> 
> Still don't know why.
> 


Found the problem in this commit:

commit 9461d92301e72807eba4776a16a05207e3a16477
Author: Dylan Baker 
Date:   Mon Mar 26 15:23:17 2018 -0700

framework/profile: Add a __len__ method to TestProfile

This exposes a standard interface for getting the number of tests in a
profile, which is itself nice. It will also allow us to encapsulate the
differences between the various profiles added in this series.

Tested-by: Rafael Antognolli 


>   J.A.
> 
> > Dylan
> 
> ___
> Piglit mailing list
> Piglit@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/piglit
___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-04 Thread Juan A. Suarez Romero
On Wed, 2018-05-02 at 13:57 -0700, Dylan Baker wrote:
> Quoting Juan A. Suarez Romero (2018-05-02 09:49:08)
> > Hi, Dylan.
> > 
> > I see you've pushed this series.
> > 
> > Now, when I'm trying to run some profiles (mainly, tests/crucible and
> > tests/khr_gl* ), seems they are broken:
> > 
> > [/7776]
> > Traceback (most recent call last):
> >   File "./piglit", line 178, in <module>
> > main()
> >   File "./piglit", line 174, in main
> > sys.exit(runner(args))
> >   File "/home/igalia/jasuarez/piglit/framework/exceptions.py", line 51, in
> > _inner
> > func(*args, **kwargs)
> >   File "/home/igalia/jasuarez/piglit/framework/programs/run.py", line 370, 
> > in
> > run
> > backend.finalize({'time_elapsed': time_elapsed.to_json()})
> >   File "/home/igalia/jasuarez/piglit/framework/backends/json.py", line 163, 
> > in
> > finalize
> > assert data['tests']
> > AssertionError
> > 
> > J.A.
> > 
> 
> Dang.
> 
> I can't reproduce any failures with crucible, though I did make it thread safe
> and fix the use of a config file :)
> 
> I can't get the glcts binary to run; no matter what target I build for, I run
> into either EGL or GL errors.
> 

More info on this issue.

It seems it happens with the profiles that require an external runner
(crucible, vk-gl-cts, deqp, ...).


When executing, it says it will run all the tests, but sometimes it just
executes one test, other times 2, and other times none. It is in the last
case that the error above is shown.

Still don't know why.

J.A.

> Dylan
___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-02 Thread Dylan Baker
Quoting Juan A. Suarez Romero (2018-05-02 09:49:08)
> Hi, Dylan.
> 
> I see you've pushed this series.
> 
> Now, when I'm trying to run some profiles (mainly, tests/crucible and
> tests/khr_gl* ), seems they are broken:
> 
> [/7776]
> Traceback (most recent call last):
>   File "./piglit", line 178, in 
> main()
>   File "./piglit", line 174, in main
> sys.exit(runner(args))
>   File "/home/igalia/jasuarez/piglit/framework/exceptions.py", line 51, in
> _inner
> func(*args, **kwargs)
>   File "/home/igalia/jasuarez/piglit/framework/programs/run.py", line 370, in
> run
> backend.finalize({'time_elapsed': time_elapsed.to_json()})
>   File "/home/igalia/jasuarez/piglit/framework/backends/json.py", line 163, in
> finalize
> assert data['tests']
> AssertionError
> 
> J.A.
> 

Dang.

I can't reproduce any failures with crucible, though I did make it thread safe
and fix the use of a config file :)

I can't get the glcts binary to run; no matter what target I build for, I run
into either EGL or GL errors.

Dylan


___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-05-02 Thread Juan A. Suarez Romero
Hi, Dylan.

I see you've pushed this series.

Now, when I'm trying to run some profiles (mainly, tests/crucible and
tests/khr_gl* ), seems they are broken:

[/7776]
Traceback (most recent call last):
  File "./piglit", line 178, in 
main()
  File "./piglit", line 174, in main
sys.exit(runner(args))
  File "/home/igalia/jasuarez/piglit/framework/exceptions.py", line 51, in
_inner
func(*args, **kwargs)
  File "/home/igalia/jasuarez/piglit/framework/programs/run.py", line 370, in
run
backend.finalize({'time_elapsed': time_elapsed.to_json()})
  File "/home/igalia/jasuarez/piglit/framework/backends/json.py", line 163, in
finalize
assert data['tests']
AssertionError

J.A.

On Tue, 2018-04-17 at 08:30 -0700, Dylan Baker wrote:
> Quoting Dylan Baker (2018-04-04 15:26:48)
> > I don't expect everyone I've CC'd to give thorough review (or any
> > review), I've mostly CC'd people who I think would be interested in this
> > work, or whose workflow might be altered by it.
> > 
> > Piglit has struggled to cope with the growing number of tests that it
> > contains, especially with startup time. Piglit has always calculated
> > tests at runtime, which was not a problem when there were only a few
> > hundred or even thousand tests. Piglit now has roughly 55,000
> > OpenGL/OpenGL ES tests, which is a lot to calculate at start up. It also
> > means that piglit needs to keep a python object for each of those tests
> > in memory, which has sent the resident memory usage soaring. We've also
> > moved to automatic test discovery for glslparser, asmparser, and shader
> > tests, which is very convenient and reduces typing, but further
> > increases the amount of time spent starting up. This has even made
> > features which decrease runtime, like fast skipping, hurt startup
> > performance, making it a less than desirable tradeoff in some cases.
> > Even on a relatively fast machine with an nvme disk 15-20 seconds is not
> > an unheard of startup time. That might be okay to run 55,000 tests, but
> > not if you only need a dozen, such as when bisecting.
> > 
> > This series is my proposal to fix that, mainly by moving much of that
> > cost to build time. This series creates the infrastructure to build XML
> > base profiles at build time, which are installed with piglit instead of
> > the python profiles. These profiles are lazily iterated over to ease
> > memory usage, test objects are created as they are run, and python can
> > garbage collect them as soon as they are done running. Along with that
> > any filters applied to profiles (like removing 80% of the vs_in shader
> > tests in quick) are done before the profile is serialized, and all fast
> > skipping information is collected at build time as well, and encoded in
> > the XML. All this means that start times are vastly reduced.
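
A hedged illustration of the lazy-iteration idea described above (not the
loader from this series; the <Test> element and its attributes are invented
for the example):

    import xml.etree.ElementTree as ET

    def iter_tests(profile_xml):
        """Yield one (name, command) pair at a time from a serialized profile."""
        for _, elem in ET.iterparse(profile_xml, events=('end',)):
            if elem.tag == 'Test':     # assumed element name
                yield elem.get('name'), elem.get('command')
                elem.clear()           # free the element so memory usage stays flat

    # A runner can then build each test object only when it is about to run and
    # let Python garbage-collect it afterwards, e.g.:
    #     for name, cmd in iter_tests('quick.xml'):   # 'quick.xml' is illustrative
    #         run_one(name, cmd)                      # run_one() is hypothetical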
> > 
> > For example:
> > XML profiles
> > quick: 0.5
> > shader: 0.5
> > 
> > master
> > quick: 11.6
> > shader: 7.3
> > 
> > This series also implements some optimizations for running without
> > filters or test-lists, if you add a filter quick would take 2.5
> > seconds, because that is necessary to calculate the total number of
> > tests before starting.
> > 
> > To keep classic profiles like all, quick, quick_cl, gpu, cpu, and
> > llvmpipe working this series adds meta profiles, small XML snippets that
> > list other profiles. These can contain other meta profiles, xml
> > profiles, or python profiles. This means that for most use cases your
> > existing command line will still work, `./piglit run quick out -c` will
> > still do exactly the same thing as before, just faster.
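
For illustration only, a meta profile could be resolved along these lines (the
MetaProfile/Profile tags and the .meta.xml suffix are assumptions made for this
sketch, not the schema the series actually defines):

    import xml.etree.ElementTree as ET

    def resolve_profile(path):
        """Recursively expand a (hypothetical) meta profile into concrete profiles."""
        root = ET.parse(path).getroot()
        if root.tag != 'MetaProfile':          # already a concrete XML/python profile
            yield path
            return
        for child in root.findall('Profile'):  # each child names another profile
            ref = child.get('path')
            if ref.endswith('.meta.xml'):      # meta profiles may reference other metas
                yield from resolve_profile(ref)
            else:
                yield ref

    # e.g. resolving a hypothetical 'gpu.meta.xml' might yield
    # ['quick_gl.xml', 'glslparser.xml', 'shader.xml'] before the runner loads them.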
> > 
> > The XML generated is dumb, there is no encoding of options or logic. An
> > early version of this series did contain logic and options, but the
> > result was pretty terrible. It was very hard to read, and the code to
> > handle it was very complicated. I've chosen not to go down that path.
> > There are drawbacks: some things that relied on run-time generation
> > cannot be handled the same way, among them the "multi shader" concept,
> > where shader_runner consumes a directory of shader_tests at a time. This
> > was previously handled via a --process-isolation=false flag; now it's
> > encoded in profiles, "shader_multi" and "quick_shader_multi"; there
> > was also an option to use glslparsertest with ES shaders and
> > ARB_ES_compatibility, that is now "glslparser_arb_compat". I haven't
> > added metaprofiles for these cases, although we certainly could (or you
> > can write your own, the schema is dead simple), so `./piglit run quick
> > out --process-isolation=false` is now `./piglit run quick_gl glslparser
> > quick_shader_multi out`.
> > 
> > I've run this through our CI extensively, and gotten green results out
> > of it across the board.
> > 
> > I know this is a big series, but piglit makes a lot of assumptions about the
> > test profiles being created at runtime, and we've had to change those
> > assumptions.
> 

Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-25 Thread Dylan Baker
Quoting Rafael Antognolli (2018-04-25 11:01:29)
> On Mon, Apr 23, 2018 at 10:47:08AM -0700, Dylan Baker wrote:
> > I'm planning to just push this Wednesday if no one expresses any more 
> > concerns,
> > or signals that they want time to test or review this.
> 
> I've been running some tests this morning (regular runs, using filters,
> testing print-cmd and summary), and it all worked just fine for me. It's
> not in-depth testing, but at least I can say it works for my common
> use cases. So if you want, feel free to add
> 
> Tested-by: Rafael Antognolli 

Thank you for testing this!

Dylan


___
Piglit mailing list
Piglit@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/piglit


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-25 Thread Rafael Antognolli
On Mon, Apr 23, 2018 at 10:47:08AM -0700, Dylan Baker wrote:
> I'm planning to just push this Wednesday if no one expresses any more 
> concerns,
> or signals that they want time to test or review this.

I've been running some tests this morning (regular runs, using filters,
testing print-cmd and summary), and it all worked just fine for me. It's
not in-depth testing, but at least I can say it works for my common
use cases. So if you want, feel free to add

Tested-by: Rafael Antognolli 

> Dylan
> 
> Quoting Dylan Baker (2018-04-04 15:26:48)
> > I don't expect everyone I've CC'd to give thorough review (or any
> > review), I've mostly CC'd people who I think would be interested in this
> > work, or whose workflow might be altered by it.
> > 
> > Piglit has struggled to cope with the growing number of tests that it
> > contains, especially with startup time. Piglit has always calculated
> > tests at runtime, which was not a problem when there were only a few
> > hundred or even thousand tests. Piglit now has roughly 55,000
> > OpenGL/OpenGL ES tests, which is a lot to calculate at start up. It also
> > means that piglit needs to keep a python object for each of those tests
> > in memory, which has sent the resident memory usage soaring. We've also
> > moved to automatic test discovery for glslparser, asmparser, and shader
> > tests, which is very convenient and reduces typing, but further
> > increases the amount of time spent starting up. This has even made
> > features which decrease runtime, like fast skipping, hurt startup
> > performance, making it a less than desirable tradeoff in some cases.
> > Even on a relatively fast machine with an nvme disk 15-20 seconds is not
> > an unheard of startup time. That might be okay to run 55,000 tests, but
> > not if you only need a dozen, such as when bisecting.
> > 
> > This series is my proposal to fix that, mainly by moving much of that
> > cost to build time. This series creates the infrastructure to build XML
> > base profiles at build time, which are installed with piglit instead of
> > the python profiles. These profiles are lazily iterated over to ease
> > memory usage, test objects are created as they are run, and python can
> > garbage collect them as soon as they are done running. Along with that
> > any filters applied to profiles (like removing 80% of the vs_in shader
> > tests in quick) are done before the profile is serialized, and all fast
> > skipping information is collected at build time as well, and encoded in
> > the XML. All this means that start times are vastly reduced.
> > 
> > For example:
> > XML profiles
> > quick: 0.5
> > shader: 0.5
> > 
> > master
> > quick: 11.6
> > shader: 7.3
> > 
> > This series also implements some optimizations for running without
> > filters or test-lists, if you add a filter quick would take 2.5
> > seconds, because that is necessary to calculate the total number of
> > tests before starting.
> > 
> > To keep classic profiles like all, quick, quick_cl, gpu, cpu, and
> > llvmpipe working this series adds meta profiles, small XML snippets that
> > list other profiles. These can contain other meta profiles, xml
> > profiles, or python profiles. This means that for most use cases your
> > existing command line will still work, `./piglit run quick out -c` will
> > still do exactly the same thing as before, just faster.
> > 
> > The XML generated is dumb, there is no encoding of options or logic. An
> > early version of this series did contain logic and options, but the
> > result was pretty terrible. It was very hard to read, and the code to
> > handle it was very complicated. I've chosen not to go down that path.
> > There are drawbacks: some things that relied on run-time generation
> > cannot be handled the same way, among them the "multi shader" concept,
> > where shader_runner consumes a directory of shader_tests at a time. This
> > was previously handled via a --process-isolation=false flag; now it's
> > encoded in profiles, "shader_multi" and "quick_shader_multi"; there
> > was also an option to use glslparsertest with ES shaders and
> > ARB_ES_compatibility, that is now "glslparser_arb_compat". I haven't
> > added metaprofiles for these cases, although we certainly could (or you
> > can write your own, the schema is dead simple), so `./piglit run quick
> > out --process-isolation=false` is now `./piglit run quick_gl glslparser
> > quick_shader_multi out`.
> > 
> > I've run this through our CI extensively, and gotten green results out
> > of it across the board.
> > 
> > I know this is a big series, but piglit makes a lot of assumptions about the
> > test profiles being created at runtime, and we've had to changes those
> > assumptions.
> > 
> > 
> > Dylan Baker (35):
> >   update git ignore for this series
> >   test/piglit_test: add ROOT_DIR variable
> >   framework/profile: Allow a group manager class to be overwritten
> >   framework/test: 

Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-23 Thread Dylan Baker
I'm planning to just push this Wednesday if no one expresses any more concerns,
or signals that they want time to test or review this.

Dylan

Quoting Dylan Baker (2018-04-04 15:26:48)
> [...]

Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-17 Thread Dylan Baker
Quoting Dylan Baker (2018-04-04 15:26:48)
> [...]

Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-11 Thread Marek Olšák
On Tue, Apr 10, 2018 at 9:54 PM, Mark Janes  wrote:

> Dylan Baker  writes:
>
> > Quoting Marek Olšák (2018-04-10 14:22:10)
> >> On Tue, Apr 10, 2018 at 2:15 PM, Dylan Baker 
> wrote:
> >>
> >> Quoting Eric Anholt (2018-04-09 17:10:35)
> >> > Marek Olšák  writes:
> >> >
> >> > > Is this use case affected?
> >> > >
> >> > > piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c
> quick
> >> > > cts_gl45 deqp_gles2 deqp_gles3 deqp_gles31
> >> > >
> >> > > Yes, that is just 1 command to run all those test suites at the
> same
> >> time.
> >> > >
> >> > > I use my personal "deqp" piglit branch that also disables
> process
> >> isolation
> >> > > for glcts and deqp, and parses the deqp mustpass lists which
> are in txt
> >> > > files.
> >> >
> >> > Parsing the mustpass lists sounds really useful.  Trying to
> construct an
> >> > appropriate command line otherwise has been quite a challenge.
> >>
> >> That option is already in core piglit, you just need to configure
> your
> >> piglit.conf appropriately. I guess we really should have that in the
> >> piglit.conf.example file...
> >>
> >>
> >> I doubt it. Why would I have these then:
> >>
> >> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
> >> 0b11344e0c18b9bf07ad12381b94f308f362eb88
> >> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
> >> b086d8f82d41338055ab48bdda78c4a0c1ee02d0
> >> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
> >> 90beefa825cda792eaa72bff2cefac463af6d08a
> >>
> >> Marek
> >
> > Because in late 2016 (there is some internal stuff that happened so we
> stopped
> > using piglit to run deqp I'm not entirely happy about) they changed the
> mustpass
> > list from xml to txt and no one ever updated it in master.
>
> I'm not sure what you are referring to here.  We stopped using piglit
> for dEQP because the run-time was 10X faster with a custom runner.  We
> don't have process isolation anymore, but we recover from
> crashes/assertions without paying the penalty of iterating the dEQP test
> list for each test run.
>
> Google dEQP authors recommended this path.
>

My deqp piglit branch might be just as fast or nearly as fast as your
custom runner if process isolation is explicitly disabled.

Marek


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-11 Thread Dylan Baker
Quoting Mark Janes (2018-04-10 18:54:46)
> Dylan Baker  writes:
> 
> > Quoting Marek Olšák (2018-04-10 14:22:10)
> >> On Tue, Apr 10, 2018 at 2:15 PM, Dylan Baker  wrote:
> >> 
> >> Quoting Eric Anholt (2018-04-09 17:10:35)
> >> > Marek Olšák  writes:
> >> >
> >> > > Is this use case affected?
> >> > >
> >> > > piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c 
> >> quick
> >> > > cts_gl45 deqp_gles2 deqp_gles3 deqp_gles31
> >> > >
> >> > > Yes, that is just 1 command to run all those test suites at the 
> >> same
> >> time.
> >> > >
> >> > > I use my personal "deqp" piglit branch that also disables process
> >> isolation
> >> > > for glcts and deqp, and parses the deqp mustpass lists which are 
> >> in txt
> >> > > files.
> >> >
> >> > Parsing the mustpass lists sounds really useful.  Trying to 
> >> construct an
> >> > appropriate command line otherwise has been quite a challenge.
> >> 
> >> That option is already in core piglit, you just need to configure your
> >> piglit.conf appropriately. I guess we really should have that in the
> >> piglit.conf.example file...
> >> 
> >> 
> >> I doubt it. Why would I have these then:
> >> 
> >> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
> >> 0b11344e0c18b9bf07ad12381b94f308f362eb88
> >> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
> >> b086d8f82d41338055ab48bdda78c4a0c1ee02d0
> >> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
> >> 90beefa825cda792eaa72bff2cefac463af6d08a
> >> 
> >> Marek
> >
> > Because in late 2016 (there is some internal stuff that happened so we 
> > stopped
> > using piglit to run deqp I'm not entirely happy about) they changed the 
> > mustpass
> > list from xml to txt and no one ever updated it in master.
> 
> I'm not sure what you are referring to here.  We stopped using piglit
> for dEQP because the run-time was 10X faster with a custom runner.  We
> don't have process isolation anymore, but we recover from
> crashes/assertions without paying the penalty of iterating the dEQP test
> list for each test run.
> 
> Google dEQP authors recommended this path.
> 
> > Dylan

I was concerned that we would end up re-implementing so much of piglit that we
would invest way more time in it than fixing piglit, and not have something as
well tested or as widely deployed. Anyway, that's in the past at this point.

Dylan




Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-10 Thread Mark Janes
Dylan Baker  writes:

> Quoting Marek Olšák (2018-04-10 14:22:10)
>> On Tue, Apr 10, 2018 at 2:15 PM, Dylan Baker  wrote:
>> 
>> Quoting Eric Anholt (2018-04-09 17:10:35)
>> > Marek Olšák  writes:
>> >
>> > > Is this use case affected?
>> > >
>> > > piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c quick
>> > > cts_gl45 deqp_gles2 deqp_gles3 deqp_gles31
>> > >
>> > > Yes, that is just 1 command to run all those test suites at the same
>> time.
>> > >
>> > > I use my personal "deqp" piglit branch that also disables process
>> isolation
>> > > for glcts and deqp, and parses the deqp mustpass lists which are in 
>> txt
>> > > files.
>> >
>> > Parsing the mustpass lists sounds really useful.  Trying to construct 
>> an
>> > appropriate command line otherwise has been quite a challenge.
>> 
>> That option is already in core piglit, you just need to configure your
>> piglit.conf appropriately. I guess we really should have that in the
>> piglit.conf.example file...
>> 
>> 
>> I doubt it. Why would I have these then:
>> 
>> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
>> 0b11344e0c18b9bf07ad12381b94f308f362eb88
>> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
>> b086d8f82d41338055ab48bdda78c4a0c1ee02d0
>> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
>> 90beefa825cda792eaa72bff2cefac463af6d08a
>> 
>> Marek
>
> Because in late 2016 (there is some internal stuff that happened so we stopped
> using piglit to run deqp I'm not entirely happy about) they changed the 
> mustpass
> list from xml to txt and no one ever updated it in master.

I'm not sure what you are referring to here.  We stopped using piglit
for dEQP because the run-time was 10X faster with a custom runner.  We
don't have process isolation anymore, but we recover from
crashes/assertions without paying the penalty of iterating the dEQP test
list for each test run.

Google dEQP authors recommended this path.
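
(For context, a rough sketch of the batched approach -- not our actual runner,
and it omits the log parsing needed to attribute a crash to a specific case:
feed dEQP chunks of the caselist via --deqp-caselist-file, and when a chunk
dies just keep going, so the full test list only has to be enumerated once.)

import subprocess
import tempfile

def run_in_batches(deqp_binary, cases, batch_size=500):
    pending = list(cases)
    while pending:
        batch, pending = pending[:batch_size], pending[batch_size:]
        with tempfile.NamedTemporaryFile('w', suffix='.txt') as caselist:
            caselist.write('\n'.join(batch) + '\n')
            caselist.flush()
            proc = subprocess.run(
                [deqp_binary, '--deqp-caselist-file=' + caselist.name])
        if proc.returncode != 0:
            # A real runner would parse the partial log here, mark the
            # crashing case, and re-queue the untested remainder of the batch.
            print('batch exited with code', proc.returncode)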

> Dylan


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-10 Thread Dylan Baker
Quoting Marek Olšák (2018-04-10 14:22:10)
> On Tue, Apr 10, 2018 at 2:15 PM, Dylan Baker  wrote:
> 
> Quoting Eric Anholt (2018-04-09 17:10:35)
> > Marek Olšák  writes:
> >
> > > Is this use case affected?
> > >
> > > piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c quick
> > > cts_gl45 deqp_gles2 deqp_gles3 deqp_gles31
> > >
> > > Yes, that is just 1 command to run all those test suites at the same
> time.
> > >
> > > I use my personal "deqp" piglit branch that also disables process
> isolation
> > > for glcts and deqp, and parses the deqp mustpass lists which are in 
> txt
> > > files.
> >
> > Parsing the mustpass lists sounds really useful.  Trying to construct an
> > appropriate command line otherwise has been quite a challenge.
> 
> That option is already in core piglit, you just need to configure your
> piglit.conf appropriately. I guess we really should have that in the
> piglit.conf.example file...
> 
> 
> I doubt it. Why would I have these then:
> 
> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
> 0b11344e0c18b9bf07ad12381b94f308f362eb88
> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
> b086d8f82d41338055ab48bdda78c4a0c1ee02d0
> https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp=
> 90beefa825cda792eaa72bff2cefac463af6d08a
> 
> Marek

Because in late 2016 they changed the mustpass list from xml to txt and no one
ever updated it in master (there is some internal stuff that happened, which
I'm not entirely happy about, so we stopped using piglit to run deqp).

Dylan




Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-10 Thread Marek Olšák
On Tue, Apr 10, 2018 at 2:15 PM, Dylan Baker  wrote:

> Quoting Eric Anholt (2018-04-09 17:10:35)
> > Marek Olšák  writes:
> >
> > > Is this use case affected?
> > >
> > > piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c quick
> > > cts_gl45 deqp_gles2 deqp_gles3 deqp_gles31
> > >
> > > Yes, that is just 1 command to run all those test suites at the same
> time.
> > >
> > > I use my personal "deqp" piglit branch that also disables process
> isolation
> > > for glcts and deqp, and parses the deqp mustpass lists which are in txt
> > > files.
> >
> > Parsing the mustpass lists sounds really useful.  Trying to construct an
> > appropriate command line otherwise has been quite a challenge.
>
> That option is already in core piglit, you just need to configure your
> piglit.conf appropriately. I guess we really should have that in the
> piglit.conf.example file...
>

I doubt it. Why would I have these then:

https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp&id=0b11344e0c18b9bf07ad12381b94f308f362eb88
https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp&id=b086d8f82d41338055ab48bdda78c4a0c1ee02d0
https://cgit.freedesktop.org/~mareko/piglit/commit/?h=deqp&id=90beefa825cda792eaa72bff2cefac463af6d08a

Marek


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-10 Thread Dylan Baker
Quoting Eric Anholt (2018-04-09 17:10:35)
> Marek Olšák  writes:
> 
> > Is this use case affected?
> >
> > piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c quick
> > cts_gl45 deqp_gles2 deqp_gles3 deqp_gles31
> >
> > Yes, that is just 1 command to run all those test suites at the same time.
> >
> > I use my personal "deqp" piglit branch that also disables process isolation
> > for glcts and deqp, and parses the deqp mustpass lists which are in txt
> > files.
> 
> Parsing the mustpass lists sounds really useful.  Trying to construct an
> appropriate command line otherwise has been quite a challenge.

That option is already in core piglit, you just need to configure your
piglit.conf appropriately. I guess we really should have that in the
piglit.conf.example file...
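
piglit.conf is an ordinary INI file read with Python's configparser, so the
configuration amounts to something like the sketch below; the section and key
names here are placeholders for illustration, not the real options:

import configparser

conf = configparser.ConfigParser()
conf.read('piglit.conf')
# 'deqp-gles2' / 'mustpasslist' are hypothetical names used for illustration.
mustpass = conf.get('deqp-gles2', 'mustpasslist', fallback=None)
if mustpass:
    print('restricting the run to the cases listed in', mustpass)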

Dylan




Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-09 Thread Eric Anholt
Marek Olšák  writes:

> Is this use case affected?
>
> piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c quick
> cts_gl45 deqp_gles2 deqp_gles3 deqp_gles31
>
> Yes, that is just 1 command to run all those test suites at the same time.
>
> I use my personal "deqp" piglit branch that also disables process isolation
> for glcts and deqp, and parses the deqp mustpass lists which are in txt
> files.

Parsing the mustpass lists sounds really useful.  Trying to construct an
appropriate command line otherwise has been quite a challenge.
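
For reference, the txt mustpass lists are just one full case name per line, so
reading one takes only a few lines of Python (the file name below is just an
example):

def read_mustpass(path):
    # Each non-empty line is a complete dEQP case name, e.g.
    # dEQP-GLES2.functional.shaders.preprocessor.basic.correct_phases_vertex
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

cases = read_mustpass('gles2-master.txt')
print(len(cases), 'cases in the mustpass list')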




Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-05 Thread Marek Olšák
On Thu, Apr 5, 2018 at 2:05 PM, Dylan Baker  wrote:

> Quoting Marek Olšák (2018-04-04 16:37:47)
> > Is this use case affected?
> >
> > piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c quick
> cts_gl45
> > deqp_gles2 deqp_gles3 deqp_gles31
>
> You could either run that as:
>
> piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c quick_gl
> quick_shader_multi glslparser cts_gl45 deqp_gles2 deqp_gles3 deqp_gles31
>
> or you cold write yourself a metaprofile that would look something like:
>
> 
> 
>   quick_gl
>   quick_shader_multi
>   glslparser
>   cts
>   gl45
>   deqp
>   ...
> 
>
> We could probably put metaprofiles for the multiple-shader at a time
> profiles
> in piglit, I just didn't want to get too over board with them.
>
> >
> > Yes, that is just 1 command to run all those test suites at the same
> time.
> >
> > I use my personal "deqp" piglit branch that also disables process
> isolation for
> > glcts and deqp, and parses the deqp mustpass lists which are in txt
> files.
> >
> > Marek
>
> I have not touched the --process-isolation flag since I knew that you had
> that
> branch (and I have a branch for dEQP as well that I might try to dust off
> and
> get working again).
>

Nicolai wrote the deqp and glcts patches removing process isolation. I've
never looked at them and I probably wouldn't understand them either.
Anybody is free to clean them up and merge.

Marek


Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-05 Thread Dylan Baker
Quoting Marek Olšák (2018-04-04 16:37:47)
> Is this use case affected?
> 
> piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c quick cts_gl45
> deqp_gles2 deqp_gles3 deqp_gles31

You could either run that as:

piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c quick_gl
quick_shader_multi glslparser cts_gl45 deqp_gles2 deqp_gles3 deqp_gles31

or you could write yourself a metaprofile that would look something like:



  quick_gl
  quick_shader_multi
  glslparser
  cts
  gl45
  deqp
  ...


We could probably put metaprofiles for the multiple-shaders-at-a-time profiles
in piglit; I just didn't want to go too overboard with them.
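
A rough sketch of producing such a file with ElementTree -- the
'PiglitMetaProfile'/'Profile' element names are guesses for illustration, not
necessarily the real schema:

import xml.etree.ElementTree as ET

def write_metaprofile(path, profiles):
    root = ET.Element('PiglitMetaProfile')  # assumed element name
    for name in profiles:
        ET.SubElement(root, 'Profile').text = name  # assumed element name
    ET.ElementTree(root).write(path, encoding='utf-8', xml_declaration=True)

write_metaprofile('my_quick.meta.xml',
                  ['quick_gl', 'quick_shader_multi', 'glslparser',
                   'cts_gl45', 'deqp_gles2', 'deqp_gles3', 'deqp_gles31'])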

> 
> Yes, that is just 1 command to run all those test suites at the same time.
> 
> I use my personal "deqp" piglit branch that also disables process isolation 
> for
> glcts and deqp, and parses the deqp mustpass lists which are in txt files.
> 
> Marek

I have not touched the --process-isolation flag since I knew that you had that
branch (and I have a branch for dEQP as well that I might try to dust off and
get working again).

Dylan




Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-04 Thread Marek Olšák
Is this use case affected?

piglit run --deqp-mustpass-list --process-isolation 0 -p gbm -c quick
cts_gl45 deqp_gles2 deqp_gles3 deqp_gles31

Yes, that is just 1 command to run all those test suites at the same time.

I use my personal "deqp" piglit branch that also disables process isolation
for glcts and deqp, and parses the deqp mustpass lists which are in txt
files.

Marek


On Wed, Apr 4, 2018 at 6:26 PM, Dylan Baker  wrote:

> [...]

Re: [Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-04 Thread Dylan Baker
Quoting Dylan Baker (2018-04-04 15:26:48)
> [...]

[Piglit] [PATCH 00/35] Serialize profiles into XML at build time

2018-04-04 Thread Dylan Baker
I don't expect everyone I've CC'd to give thorough review (or any
review); I've mostly CC'd people who I think would be interested in this
work, or whose workflow might be altered by it.

Piglit has struggled to cope with the growing number of tests that it
contains, especially with startup time. Piglit has always calculated
tests at runtime, which was not a problem when there were only a few
hundred or even thousand tests. Piglit now has roughly 55,000
OpenGL/OpenGL ES tests, which is a lot to calculate at start up. It also
means that piglit needs to keep a python object for each of those tests
in memory, which has sent the resident memory usage soaring. We've also
moved to automatic test discovery for glslparser, asmparser, and shader
tests, which is very convenient and reduces typing, but further
increases the amount of time spent starting up. This has even made
features which decrease runtime, like fast skipping, hurt startup
performance, making it a less than desirable tradeoff in some cases.
Even on a relatively fast machine with an NVMe disk, a 15-20 second startup
time is not unheard of. That might be okay to run 55,000 tests, but not if
you only need a dozen, such as when bisecting.

This series is my proposal to fix that, mainly by moving much of that
cost to build time. This series creates the infrastructure to build XML
base profiles at build time, which are installed with piglit instead of
the python profiles. These profiles are lazily iterated over to ease
memory usage, test objects are created as they are run, and python can
garbage collect them as soon as they are done running. Along with that
any filters applied to profiles (like removing 80% of the vs_in shader
tests in quick) are done before the profile is serialized, and all fast
skipping information is collected at build time as well, and encoded in
the XML. All this means that start times are vastly reduced.
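
To make the lazy iteration concrete, here is a minimal sketch of the idea
(not the actual framework code; the element and attribute names are
assumptions, not the real serialized schema):

import subprocess
import xml.etree.ElementTree as ET

def iter_tests(profile_xml):
    # Stream test descriptions out of the serialized profile. iterparse
    # walks the file without building the whole tree, and clearing each
    # element after use keeps resident memory flat no matter how many
    # tests the profile contains.
    for _, elem in ET.iterparse(profile_xml, events=('end',)):
        if elem.tag == 'Test':  # assumed tag name
            yield elem.get('name'), (elem.get('command') or '').split()
            elem.clear()

def run_serialized_profile(profile_xml):
    # Each test object exists only while it runs, so python can garbage
    # collect it as soon as the next one is yielded.
    for name, command in iter_tests(profile_xml):
        result = subprocess.run(command, capture_output=True)
        print(name, 'pass' if result.returncode == 0 else 'fail')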

For example (startup time, in seconds):

XML profiles
  quick:  0.5
  shader: 0.5

master
  quick:  11.6
  shader: 7.3

This series also implements some optimizations for running without
filters or test-lists; if you add a filter, quick takes about 2.5
seconds, because the total number of tests has to be calculated
before the run starts.
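
Roughly speaking (the 'Test' element name below is an assumption, not the
real schema), the unfiltered count can be read straight from the serialized
file, but a filtered run has to walk the whole profile once just to learn the
total:

import re
import xml.etree.ElementTree as ET

def count_filtered(profile_xml, pattern):
    # Counting the survivors of a filter forces a full pass over the profile
    # before any test can start, which is where the extra couple of seconds go.
    regex = re.compile(pattern)
    return sum(1 for _, elem in ET.iterparse(profile_xml, events=('end',))
               if elem.tag == 'Test' and regex.search(elem.get('name', '')))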

To keep classic profiles like all, quick, quick_cl, gpu, cpu, and
llvmpipe working, this series adds meta profiles, small XML snippets that
list other profiles. These can contain other meta profiles, xml
profiles, or python profiles. This means that for most use cases your
existing command line will still work: `./piglit run quick out -c` will
still do exactly the same thing as before, just faster.
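
As a sketch of how a metaprofile could be flattened into concrete profiles
(the file layout and the '.meta.xml' suffix are assumptions for illustration,
not the actual format):

import os
import xml.etree.ElementTree as ET

def resolve_profiles(name, profile_dir):
    # Expand a profile name into a flat list, recursing into metaprofiles;
    # anything without a metaprofile file is a plain XML or python profile.
    meta_path = os.path.join(profile_dir, name + '.meta.xml')
    if not os.path.exists(meta_path):
        return [name]
    resolved = []
    for elem in ET.parse(meta_path).getroot():
        resolved.extend(resolve_profiles(elem.text.strip(), profile_dir))
    return resolved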

The XML generated is dumb: there is no encoding of options or logic. An
early version of this series did contain logic and options, but the
result was pretty terrible. It was very hard to read, and the code to
handle it was very complicated. I've chosen not to go down that path.
There are drawbacks: some things that relied on run-time generation
cannot be handled the same way, among them the "multi shader" concept,
where shader_runner consumes a directory of shader_tests at a time. This
was previously handled via a --process-isolation=false flag; now it's
encoded into profiles, "shader_multi" and "quick_shader_multi"; there
was also an option to use glslparsertest with ES shaders and
ARB_ES_compatibility, that is now "glslparser_arb_compat". I haven't
added metaprofiles for these cases, although we certainly could (or you
can write your own, the schema is dead simple), so `./piglit run quick
out --process-isolation=false` is now `./piglit run quick_gl glslparser
quick_shader_multi out`.

I've run this through our CI extensively, and gotten green results out
of it across the board.

I know this is a big series, but piglit makes a lot of assumptions about the
test profiles being created at runtime, and we've had to change those
assumptions.


Dylan Baker (35):
  update git ignore for this series
  test/piglit_test: add ROOT_DIR variable
  framework/profile: Allow a group manager class to be overwritten
  framework/test: Use getter for altering PiglitBaseTest Command
  framework/test: expose required and excluded platforms
  framework/profile: Add a __len__ method to TestProfile
  framework: Use custom class for ASM parser tests
  framework/test: add a test class for built-in constants
  tests: use BuiltInConstantsClass
  framework: use a class method for building test via parsing
  framework: do the same for shader test
  framework/test: Split multishader too
  framework/test/piglit_test: make cl_concurrency always a boolean
  framework/test: Add class for cl-program-tester
  framework/test: Make shader paths relative
  framework/test: use relative paths for GLSLParser tests
  tests/all: Make asmparser tests path relative
  framework/test: make BuiltInConstantTest files relative
  framework/test: make CLProgramTester take relative paths
  profile: Add support for loading xml based profiles
  profile: allow