Re: [PATCH v3 8/8] drm/etnaviv: implement per-process address spaces on MMUv2

2019-08-14 Thread Guido Günther
Hi,
On Fri, Aug 09, 2019 at 02:04:24PM +0200, Lucas Stach wrote:
> This builds on top of the MMU contexts introduced earlier. Instead of having
> one context per GPU core, each GPU client receives its own context.
>
> On MMUv1 this still means a single shared pagetable set is used by all

Re: [PATCH v3 8/8] drm/etnaviv: implement per-process address spaces on MMUv2

2019-08-09 Thread Lucas Stach
On Friday, 2019-08-09 at 14:04 +0200, Lucas Stach wrote:
> This builds on top of the MMU contexts introduced earlier. Instead of having
> one context per GPU core, each GPU client receives its own context.
>
> On MMUv1 this still means a single shared pagetable set is used by all
> clients,

Re: [PATCH v3 8/8] drm/etnaviv: implement per-process address spaces on MMUv2

2019-08-09 Thread Philipp Zabel
On Fri, 2019-08-09 at 14:04 +0200, Lucas Stach wrote:
> This builds on top of the MMU contexts introduced earlier. Instead of having
> one context per GPU core, each GPU client receives its own context.
>
> On MMUv1 this still means a single shared pagetable set is used by all
> clients, but on

[PATCH v3 8/8] drm/etnaviv: implement per-process address spaces on MMUv2

2019-08-09 Thread Lucas Stach
This builds on top of the MMU contexts introduced earlier. Instead of having one context per GPU core, each GPU client receives its own context. On MMUv1 this still means a single shared pagetable set is used by all clients, but on MMUv2 there is now a distinct set of pagetables for each client.
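The description above covers the key design change: MMU context allocation moves from one context per GPU core to one context per GPU client. The following self-contained C sketch is only a rough illustration of that allocation policy, not the driver's actual code; every name in it (mmu_global, mmu_context, client_open, ...) is invented for the example. It models how, on MMUv1, each client takes a reference on a single shared pagetable set, while on MMUv2 each client receives its own set and therefore its own address space.

/*
 * Simplified, hypothetical model of per-client MMU context allocation.
 * Names below (mmu_global, mmu_context, client_open, ...) are invented
 * for illustration and do not match the etnaviv driver's real API.
 */
#include <stdio.h>
#include <stdlib.h>

enum mmu_version { MMU_V1, MMU_V2 };

struct mmu_context {
	enum mmu_version version;
	int refcount;
	/* Stand-in for the pagetable set backing this context. */
	unsigned long pgtable_id;
};

struct mmu_global {
	enum mmu_version version;
	struct mmu_context *shared_ctx;   /* only used on MMUv1 */
	unsigned long next_pgtable_id;
};

static struct mmu_context *mmu_context_alloc(struct mmu_global *g)
{
	struct mmu_context *ctx = calloc(1, sizeof(*ctx));

	if (!ctx)
		return NULL;
	ctx->version = g->version;
	ctx->refcount = 1;
	ctx->pgtable_id = g->next_pgtable_id++;
	return ctx;
}

/*
 * Called when a GPU client opens the device: on MMUv1 all clients share
 * one refcounted pagetable set, on MMUv2 each client gets a distinct one.
 */
static struct mmu_context *client_open(struct mmu_global *g)
{
	if (g->version == MMU_V1) {
		if (!g->shared_ctx)
			g->shared_ctx = mmu_context_alloc(g);
		else
			g->shared_ctx->refcount++;
		return g->shared_ctx;
	}
	return mmu_context_alloc(g);
}

static void client_close(struct mmu_global *g, struct mmu_context *ctx)
{
	if (!ctx || --ctx->refcount)
		return;
	if (g->shared_ctx == ctx)
		g->shared_ctx = NULL;
	free(ctx);
}

int main(void)
{
	struct mmu_global v2 = { .version = MMU_V2 };
	struct mmu_context *a = client_open(&v2);
	struct mmu_context *b = client_open(&v2);

	/* Distinct pagetable sets -> distinct per-client address spaces. */
	printf("client A pgtables: %lu, client B pgtables: %lu\n",
	       a->pgtable_id, b->pgtable_id);

	client_close(&v2, a);
	client_close(&v2, b);
	return 0;
}

In this sketch the shared-versus-private distinction is hidden behind the same open/close pair, which mirrors the idea in the patch summary: clients always operate on an MMU context, regardless of whether the pagetables behind it are shared (MMUv1) or private (MMUv2).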