Re: [Intel-gfx] [PATCH igt] igt/gem_exec_schedule: Exercise preemption timeout
Quoting Antonio Argenziano (2018-04-13 18:20:02)
> On 13/04/18 08:59, Chris Wilson wrote:
> > die. What we expect to happen is spin[0] is (more or less, there is still
> > dmesg) silently killed by the preempt timeout. If that timeout doesn't
>
> The silent part is interesting, how do we make sure that during normal
> preemption operations (e.g. preempt on an ARB_CHECK) we didn't silently
> discard the preempted batch? Do we care?

Not particularly. From our point of view, the goal is that the high
priority spin[2] runs, no matter what. If the other requests cooperate,
that works out best for them.

The challenge for the test itself is detecting when the timeout was hit.
We aren't particularly good at demonstrating that the spinner doesn't
block preemption; that is demonstrated in other tests, but we don't
assert that it is so.
-Chris
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
Re: [Intel-gfx] [PATCH igt] igt/gem_exec_schedule: Exercise preemption timeout
On 13/04/18 08:59, Chris Wilson wrote:
> Quoting Antonio Argenziano (2018-04-13 16:54:27)
>> On 13/04/18 07:14, Chris Wilson wrote:
>>> Set up an unpreemptible spinner such that the only way we can inject a
>>> high priority request onto the GPU is by resetting the spinner. The test
>>> fails if we trigger hangcheck rather than the fast timeout mechanism.
>>>
>>> Signed-off-by: Chris Wilson
>>> ---
>>>  lib/i915/gem_context.c    | 72 +++
>>>  lib/i915/gem_context.h    |  3 ++
>>>  lib/igt_dummyload.c       | 12 +--
>>>  lib/igt_dummyload.h       |  3 ++
>>>  tests/gem_exec_schedule.c | 34 ++
>>>  5 files changed, 106 insertions(+), 18 deletions(-)
>>>
>>> ...
>>>
>>> @@ -449,8 +457,6 @@ void igt_spin_batch_end(igt_spin_t *spin)
>>>  	if (!spin)
>>>  		return;
>>>
>>> -	igt_assert(*spin->batch == MI_ARB_CHK ||
>>> -		   *spin->batch == MI_BATCH_BUFFER_END);
>>
>> I am not sure why we needed this, but it seems safe to remove.
>>
>>>  	*spin->batch = MI_BATCH_BUFFER_END;
>>>  	__sync_synchronize();
>>>  }
>>>
>>> diff --git a/tests/gem_exec_schedule.c b/tests/gem_exec_schedule.c
>>> index 6ff15b6ef..93254945b 100644
>>> --- a/tests/gem_exec_schedule.c
>>> +++ b/tests/gem_exec_schedule.c
>>> @@ -656,6 +656,37 @@ static void preemptive_hang(int fd, unsigned ring)
>>>  	gem_context_destroy(fd, ctx[HI]);
>>>  }
>>>
>>> +static void preempt_timeout(int fd, unsigned ring)
>>> +{
>>> +	igt_spin_t *spin[3];
>>> +	uint32_t ctx;
>>> +
>>> +	igt_require(__gem_context_set_preempt_timeout(fd, 0, 0));
>>> +
>>> +	ctx = gem_context_create(fd);
>>> +	gem_context_set_priority(fd, ctx, MIN_PRIO);
>>> +	spin[0] = __igt_spin_batch_new_hang(fd, ctx, ring);
>>> +	spin[1] = __igt_spin_batch_new_hang(fd, ctx, ring);

Should we send MAX_ELSP_QLEN batches to match other preemption tests?

>>> +	gem_context_destroy(fd, ctx);
>>> +
>>> +	ctx = gem_context_create(fd);
>>> +	gem_context_set_priority(fd, ctx, MAX_PRIO);
>>> +	gem_context_set_preempt_timeout(fd, ctx, 1000 * 1000);
>>> +	spin[2] = __igt_spin_batch_new(fd, ctx, ring, 0);
>>> +	gem_context_destroy(fd, ctx);
>>> +
>>> +	igt_spin_batch_end(spin[2]);
>>> +	gem_sync(fd, spin[2]->handle);
>>
>> Does this guarantee that spin[1] did not overtake spin[2]?
>
> It does as well. Neither spin[0] nor spin[1] can complete without being
> reset at this point. If they are reset (by hangcheck) we detect that and

Cool.

> die. What we expect to happen is spin[0] is (more or less, there is still
> dmesg) silently killed by the preempt timeout. If that timeout doesn't

The silent part is interesting, how do we make sure that during normal
preemption operations (e.g. preempt on an ARB_CHECK) we didn't silently
discard the preempted batch? Do we care?

> happen, more hangcheck.
>
> What we don't check here is how quick. Now we could reasonably assert
> that the spin[2] -> gem_sync takes less than 2ms.
> -Chris

Test looks good,
Reviewed-by: Antonio Argenziano

Thanks,
Antonio
Re: [Intel-gfx] [PATCH igt] igt/gem_exec_schedule: Exercise preemption timeout
Quoting Antonio Argenziano (2018-04-13 16:54:27)
> On 13/04/18 07:14, Chris Wilson wrote:
> > Set up an unpreemptible spinner such that the only way we can inject a
> > high priority request onto the GPU is by resetting the spinner. The test
> > fails if we trigger hangcheck rather than the fast timeout mechanism.
> >
> > Signed-off-by: Chris Wilson
> > ---
> >  lib/i915/gem_context.c    | 72 +++
> >  lib/i915/gem_context.h    |  3 ++
> >  lib/igt_dummyload.c       | 12 +--
> >  lib/igt_dummyload.h       |  3 ++
> >  tests/gem_exec_schedule.c | 34 ++
> >  5 files changed, 106 insertions(+), 18 deletions(-)
> >
> > ...
> >
> > @@ -449,8 +457,6 @@ void igt_spin_batch_end(igt_spin_t *spin)
> >  	if (!spin)
> >  		return;
> >
> > -	igt_assert(*spin->batch == MI_ARB_CHK ||
> > -		   *spin->batch == MI_BATCH_BUFFER_END);
>
> I am not sure why we needed this, but it seems safe to remove.
>
> >  	*spin->batch = MI_BATCH_BUFFER_END;
> >  	__sync_synchronize();
> >  }
> >
> > diff --git a/tests/gem_exec_schedule.c b/tests/gem_exec_schedule.c
> > index 6ff15b6ef..93254945b 100644
> > --- a/tests/gem_exec_schedule.c
> > +++ b/tests/gem_exec_schedule.c
> > @@ -656,6 +656,37 @@ static void preemptive_hang(int fd, unsigned ring)
> >  	gem_context_destroy(fd, ctx[HI]);
> >  }
> >
> > +static void preempt_timeout(int fd, unsigned ring)
> > +{
> > +	igt_spin_t *spin[3];
> > +	uint32_t ctx;
> > +
> > +	igt_require(__gem_context_set_preempt_timeout(fd, 0, 0));
> > +
> > +	ctx = gem_context_create(fd);
> > +	gem_context_set_priority(fd, ctx, MIN_PRIO);
> > +	spin[0] = __igt_spin_batch_new_hang(fd, ctx, ring);
> > +	spin[1] = __igt_spin_batch_new_hang(fd, ctx, ring);
> > +	gem_context_destroy(fd, ctx);
> > +
> > +	ctx = gem_context_create(fd);
> > +	gem_context_set_priority(fd, ctx, MAX_PRIO);
> > +	gem_context_set_preempt_timeout(fd, ctx, 1000 * 1000);
> > +	spin[2] = __igt_spin_batch_new(fd, ctx, ring, 0);
> > +	gem_context_destroy(fd, ctx);
> > +
> > +	igt_spin_batch_end(spin[2]);
> > +	gem_sync(fd, spin[2]->handle);
>
> Does this guarantee that spin[1] did not overtake spin[2]?

It does as well. Neither spin[0] nor spin[1] can complete without being
reset at this point. If they are reset (by hangcheck) we detect that and
die. What we expect to happen is spin[0] is (more or less, there is still
dmesg) silently killed by the preempt timeout. If that timeout doesn't
happen, more hangcheck.

What we don't check here is how quick. Now we could reasonably assert
that the spin[2] -> gem_sync takes less than 2ms.
-Chris
Re: [Intel-gfx] [PATCH igt] igt/gem_exec_schedule: Exercise preemption timeout
On 13/04/18 07:14, Chris Wilson wrote:
> Set up an unpreemptible spinner such that the only way we can inject a
> high priority request onto the GPU is by resetting the spinner. The test
> fails if we trigger hangcheck rather than the fast timeout mechanism.
>
> Signed-off-by: Chris Wilson
> ---
>  lib/i915/gem_context.c    | 72 +++
>  lib/i915/gem_context.h    |  3 ++
>  lib/igt_dummyload.c       | 12 +--
>  lib/igt_dummyload.h       |  3 ++
>  tests/gem_exec_schedule.c | 34 ++
>  5 files changed, 106 insertions(+), 18 deletions(-)
>
> ...
>
> @@ -449,8 +457,6 @@ void igt_spin_batch_end(igt_spin_t *spin)
>  	if (!spin)
>  		return;
>
> -	igt_assert(*spin->batch == MI_ARB_CHK ||
> -		   *spin->batch == MI_BATCH_BUFFER_END);

I am not sure why we needed this, but it seems safe to remove.

>  	*spin->batch = MI_BATCH_BUFFER_END;
>  	__sync_synchronize();
>  }
>
> diff --git a/tests/gem_exec_schedule.c b/tests/gem_exec_schedule.c
> index 6ff15b6ef..93254945b 100644
> --- a/tests/gem_exec_schedule.c
> +++ b/tests/gem_exec_schedule.c
> @@ -656,6 +656,37 @@ static void preemptive_hang(int fd, unsigned ring)
>  	gem_context_destroy(fd, ctx[HI]);
>  }
>
> +static void preempt_timeout(int fd, unsigned ring)
> +{
> +	igt_spin_t *spin[3];
> +	uint32_t ctx;
> +
> +	igt_require(__gem_context_set_preempt_timeout(fd, 0, 0));
> +
> +	ctx = gem_context_create(fd);
> +	gem_context_set_priority(fd, ctx, MIN_PRIO);
> +	spin[0] = __igt_spin_batch_new_hang(fd, ctx, ring);
> +	spin[1] = __igt_spin_batch_new_hang(fd, ctx, ring);
> +	gem_context_destroy(fd, ctx);
> +
> +	ctx = gem_context_create(fd);
> +	gem_context_set_priority(fd, ctx, MAX_PRIO);
> +	gem_context_set_preempt_timeout(fd, ctx, 1000 * 1000);
> +	spin[2] = __igt_spin_batch_new(fd, ctx, ring, 0);
> +	gem_context_destroy(fd, ctx);
> +
> +	igt_spin_batch_end(spin[2]);
> +	gem_sync(fd, spin[2]->handle);

Does this guarantee that spin[1] did not overtake spin[2]?

> +
> +	/* spin[0] is kicked, leaving spin[1] running */
> +
> +	igt_assert(gem_bo_busy(fd, spin[1]->handle));
> +
> +	igt_spin_batch_free(fd, spin[2]);
> +	igt_spin_batch_free(fd, spin[1]);
> +	igt_spin_batch_free(fd, spin[0]);
> +}
> +
>  static void deep(int fd, unsigned ring)
>  {
>  #define XS 8
> @@ -1120,6 +1151,9 @@ igt_main
>  			igt_subtest_f("preempt-self-%s", e->name)
>  				preempt_self(fd, e->exec_id | e->flags);
>
> +			igt_subtest_f("preempt-timeout-%s", e->name)
> +				preempt_timeout(fd, e->exec_id | e->flags);
> +
>  			igt_subtest_f("preempt-other-%s", e->name)
>  				preempt_other(fd, e->exec_id | e->flags, 0);

Thanks,
Antonio