On Fri, Aug 9, 2019 at 6:45 AM Steven Price <steven.pr...@arm.com> wrote:
>
> On 09/08/2019 04:01, Rob Herring wrote:
> [...]
> > I was worried too. It seems to be working pretty well so far, but
> > more testing would be good. I don't think there are a lot of
> > usecases that use more ASes than the h/w has (8 on T860), but I'm
> > not sure.
>
> Yeah, 8 is overkill. Some GPUs only have 4, which is a little tight
> and might come back to bite us when supporting queueing on the GPU.
> In this patch panfrost_mmu_as_get() will simply WARN() and then
> crash (dereferencing the NULL lru_mmu) if there isn't a free AS:
>
> >               WARN_ON(!lru_mmu);
> >
> >               list_del_init(&lru_mmu->list);
> >               as = lru_mmu->as;
>
> This isn't a problem right now (there's a maximum of 2 jobs on the
> GPU at the moment). But when you start queueing jobs it's possible
> for each job to belong to a different address space. With three
> slots, each of which can have one job running and one waiting,
> that's a minimum of 6 ASes, plus you might want one configured to
> dump counters. So a total of 7 are needed to avoid ever having to
> wait. Hardware designers like powers of 2, so we have 8.
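
Agreed that WARN() followed by a NULL dereference isn't a great
failure mode once exhaustion becomes reachable. One option would be
to sleep until an AS is put back instead of crashing. A rough,
untested sketch (assuming the as_lock/as_lru_list from this patch;
the as_wait waitqueue is hypothetical and would need a matching
wake_up() added in panfrost_mmu_as_put()):

	lru_mmu = list_first_entry_or_null(&pfdev->as_lru_list,
					   struct panfrost_mmu, list);
	while (!lru_mmu) {
		/* Every AS is in use: drop the lock and wait for
		 * as_put() to return one to the LRU list. */
		spin_unlock(&pfdev->as_lock);
		wait_event(pfdev->as_wait,
			   !list_empty(&pfdev->as_lru_list));
		spin_lock(&pfdev->as_lock);
		/* Re-check under the lock; another thread may have
		 * raced us and taken the AS that was just freed. */
		lru_mmu = list_first_entry_or_null(&pfdev->as_lru_list,
						   struct panfrost_mmu, list);
	}
	list_del_init(&lru_mmu->list);
	as = lru_mmu->as;

Blocking that late isn't great either, which is part of why reserving
the AS earlier, before the job reaches the hardware, looks attractive.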

I think this could be solved by acquiring the AS in the job dependency
hook instead, so a job only gets an AS once it's actually about to
run. That may make the timeout handling more complicated, though, as
I'm not sure whether dependencies are re-evaluated when jobs are
resubmitted. Tomeu is more familiar with the scheduler code, so I'll
let him chime in.
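
For illustration, here's roughly what I mean (untested sketch:
panfrost_job_dependency() is the driver's existing drm_sched
dependency callback, and panfrost_job_get_next_dep() is just a
placeholder for its current implicit/explicit fence walking):

static struct dma_fence *
panfrost_job_dependency(struct drm_sched_job *sched_job,
			struct drm_sched_entity *s_entity)
{
	struct panfrost_job *job = to_panfrost_job(sched_job);
	struct dma_fence *fence;

	/* Existing behaviour: hand the scheduler the next fence it
	 * must wait on, or NULL once the job is runnable. */
	fence = panfrost_job_get_next_dep(job);
	if (fence)
		return fence;

	/* All dependencies have signalled, so this job is about to
	 * be queued to a slot: reserve its AS now instead of in the
	 * run_job/hw_submit path. */
	panfrost_mmu_as_get(job->pfdev, &job->file_priv->mmu);

	return NULL;
}

The part I'm unsure about is the error/timeout path: if the job is
later killed or resubmitted, the AS would need to be released (or
re-acquired) somewhere, and I don't know the scheduler well enough
to say where.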

Rob