Re: [Linaro-mm-sig] [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-02-09 Thread Felix Kuehling
Am 2021-02-09 um 9:08 a.m. schrieb Daniel Vetter:
> On Tue, Feb 9, 2021 at 12:15 PM Felix Kuehling  wrote:
>> Am 2021-02-09 um 1:37 a.m. schrieb Daniel Vetter:
>>> On Tue, Feb 9, 2021 at 4:13 AM Bas Nieuwenhuizen
>>>  wrote:
 On Thu, Jan 28, 2021 at 4:40 PM Felix Kuehling  
 wrote:
> Am 2021-01-28 um 2:39 a.m. schrieb Christian König:
>> Am 27.01.21 um 23:00 schrieb Felix Kuehling:
>>> Am 2021-01-27 um 7:16 a.m. schrieb Christian König:
 Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:
> Op 27-01-2021 om 01:22 schreef Felix Kuehling:
>> Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:
>>> Recently there was a fairly long thread about recoverable hardware
>>> page
>>> faults, how they can deadlock, and what to do about that.
>>>
>>> While the discussion is still fresh I figured good time to try and
>>> document the conclusions a bit.
>>>
>>> References:
>>> https://lore.kernel.org/dri-devel/20210107030127.20393-1-felix.kuehl...@amd.com/
>>>
>>> Cc: Maarten Lankhorst 
>>> Cc: Thomas Hellström 
>>> Cc: "Christian König" 
>>> Cc: Jerome Glisse 
>>> Cc: Felix Kuehling 
>>> Signed-off-by: Daniel Vetter 
>>> Cc: Sumit Semwal 
>>> Cc: linux-me...@vger.kernel.org
>>> Cc: linaro-mm-...@lists.linaro.org
>>> --
>>> I'll be away next week, but figured I'll type this up quickly for
>>> some
>>> comments and to check whether I got this all roughly right.
>>>
>>> Critique very much wanted on this, so that we can make sure hw which
>>> can't preempt (with pagefaults pending) like gfx10 has a clear
>>> path to
>>> support page faults in upstream. So anything I missed, got wrong or
>>> like that would be good.
>>> -Daniel
>>> ---
>>>Documentation/driver-api/dma-buf.rst | 66
>>> 
>>>1 file changed, 66 insertions(+)
>>>
>>> diff --git a/Documentation/driver-api/dma-buf.rst
>>> b/Documentation/driver-api/dma-buf.rst
>>> index a2133d69872c..e924c1e4f7a3 100644
>>> --- a/Documentation/driver-api/dma-buf.rst
>>> +++ b/Documentation/driver-api/dma-buf.rst
>>> @@ -257,3 +257,69 @@ fences in the kernel. This means:
>>>  userspace is allowed to use userspace fencing or long running
>>> compute
>>>  workloads. This also means no implicit fencing for shared
>>> buffers in these
>>>  cases.
>>> +
>>> +Recoverable Hardware Page Faults Implications
>>> +~
>>> +
>>> +Modern hardware supports recoverable page faults, which has a
>>> lot of
>>> +implications for DMA fences.
>>> +
>>> +First, a pending page fault obviously holds up the work that's
>>> running on the
>>> +accelerator and a memory allocation is usually required to resolve
>>> the fault.
>>> +But memory allocations are not allowed to gate completion of DMA
>>> fences, which
>>> +means any workload using recoverable page faults cannot use DMA
>>> fences for
>>> +synchronization. Synchronization fences controlled by userspace
>>> must be used
>>> +instead.
>>> +
>>> +On GPUs this poses a problem, because current desktop compositor
>>> protocols on
>>> +Linux rely on DMA fences, which means without an entirely new
>>> userspace stack
>>> +built on top of userspace fences, they cannot benefit from
>>> recoverable page
>>> +faults. The exception is when page faults are only used as
>>> migration hints and
>>> +never to on-demand fill a memory request. For now this means
>>> recoverable page
>>> +faults on GPUs are limited to pure compute workloads.
>>> +
>>> +Furthermore GPUs usually have shared resources between the 3D
>>> rendering and
>>> +compute side, like compute units or command submission engines. If
>>> both a 3D
>>> +job with a DMA fence and a compute workload using recoverable page
>>> faults are
>>> +pending they could deadlock:
>>> +
>>> +- The 3D workload might need to wait for the compute job to finish
>>> and release
>>> +  hardware resources first.
>>> +
>>> +- The compute workload might be stuck in a page fault, because the
>>> memory
>>> +  allocation is waiting for the DMA fence of the 3D workload to
>>> complete.
>>> +
>>> +There are a few ways to prevent this problem:
>>> +
>>> +- Compute workloads can always be preempted, even when a page
>>> fault is 

Re: [Linaro-mm-sig] [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-02-09 Thread Daniel Vetter
On Tue, Feb 9, 2021 at 12:15 PM Felix Kuehling  wrote:
> Am 2021-02-09 um 1:37 a.m. schrieb Daniel Vetter:
> > On Tue, Feb 9, 2021 at 4:13 AM Bas Nieuwenhuizen
> >  wrote:
> >> On Thu, Jan 28, 2021 at 4:40 PM Felix Kuehling  
> >> wrote:
> >>> Am 2021-01-28 um 2:39 a.m. schrieb Christian König:
>  Am 27.01.21 um 23:00 schrieb Felix Kuehling:
> > Am 2021-01-27 um 7:16 a.m. schrieb Christian König:
> >> Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:
> >>> Op 27-01-2021 om 01:22 schreef Felix Kuehling:
>  Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:
> > Recently there was a fairly long thread about recoverable hardware
> > page
> > faults, how they can deadlock, and what to do about that.
> >
> > While the discussion is still fresh I figured good time to try and
> > document the conclusions a bit.
> >
> > References:
> > https://lore.kernel.org/dri-devel/20210107030127.20393-1-felix.kuehl...@amd.com/
> >
> > Cc: Maarten Lankhorst 
> > Cc: Thomas Hellström 
> > Cc: "Christian König" 
> > Cc: Jerome Glisse 
> > Cc: Felix Kuehling 
> > Signed-off-by: Daniel Vetter 
> > Cc: Sumit Semwal 
> > Cc: linux-me...@vger.kernel.org
> > Cc: linaro-mm-...@lists.linaro.org
> > --
> > I'll be away next week, but figured I'll type this up quickly for
> > some
> > comments and to check whether I got this all roughly right.
> >
> > Critique very much wanted on this, so that we can make sure hw which
> > can't preempt (with pagefaults pending) like gfx10 has a clear
> > path to
> > support page faults in upstream. So anything I missed, got wrong or
> > like that would be good.
> > -Daniel
> > ---
> >Documentation/driver-api/dma-buf.rst | 66
> > 
> >1 file changed, 66 insertions(+)
> >
> > diff --git a/Documentation/driver-api/dma-buf.rst
> > b/Documentation/driver-api/dma-buf.rst
> > index a2133d69872c..e924c1e4f7a3 100644
> > --- a/Documentation/driver-api/dma-buf.rst
> > +++ b/Documentation/driver-api/dma-buf.rst
> > @@ -257,3 +257,69 @@ fences in the kernel. This means:
> >  userspace is allowed to use userspace fencing or long running
> > compute
> >  workloads. This also means no implicit fencing for shared
> > buffers in these
> >  cases.
> > +
> > +Recoverable Hardware Page Faults Implications
> > +~
> > +
> > +Modern hardware supports recoverable page faults, which has a
> > lot of
> > +implications for DMA fences.
> > +
> > +First, a pending page fault obviously holds up the work that's
> > running on the
> > +accelerator and a memory allocation is usually required to resolve
> > the fault.
> > +But memory allocations are not allowed to gate completion of DMA
> > fences, which
> > +means any workload using recoverable page faults cannot use DMA
> > fences for
> > +synchronization. Synchronization fences controlled by userspace
> > must be used
> > +instead.
> > +
> > +On GPUs this poses a problem, because current desktop compositor
> > protocols on
> > +Linux rely on DMA fences, which means without an entirely new
> > userspace stack
> > +built on top of userspace fences, they cannot benefit from
> > recoverable page
> > +faults. The exception is when page faults are only used as
> > migration hints and
> > +never to on-demand fill a memory request. For now this means
> > recoverable page
> > +faults on GPUs are limited to pure compute workloads.
> > +
> > +Furthermore GPUs usually have shared resources between the 3D
> > rendering and
> > +compute side, like compute units or command submission engines. If
> > both a 3D
> > +job with a DMA fence and a compute workload using recoverable page
> > faults are
> > +pending they could deadlock:
> > +
> > +- The 3D workload might need to wait for the compute job to finish
> > and release
> > +  hardware resources first.
> > +
> > +- The compute workload might be stuck in a page fault, because the
> > memory
> > +  allocation is waiting for the DMA fence of the 3D workload to
> > complete.
> > +
> > +There are a few ways to prevent this problem:
> > +
> > +- Compute workloads can always be preempted, even when a page
> > fault is pending
> > +  and not yet repaired. Not all 

Re: [Linaro-mm-sig] [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-02-09 Thread Felix Kuehling

Am 2021-02-09 um 1:37 a.m. schrieb Daniel Vetter:
> On Tue, Feb 9, 2021 at 4:13 AM Bas Nieuwenhuizen
>  wrote:
>> On Thu, Jan 28, 2021 at 4:40 PM Felix Kuehling  
>> wrote:
>>> Am 2021-01-28 um 2:39 a.m. schrieb Christian König:
 Am 27.01.21 um 23:00 schrieb Felix Kuehling:
> Am 2021-01-27 um 7:16 a.m. schrieb Christian König:
>> Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:
>>> Op 27-01-2021 om 01:22 schreef Felix Kuehling:
 Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:
> Recently there was a fairly long thread about recoverable hardware
> page
> faults, how they can deadlock, and what to do about that.
>
> While the discussion is still fresh I figured good time to try and
> document the conclusions a bit.
>
> References:
> https://lore.kernel.org/dri-devel/20210107030127.20393-1-felix.kuehl...@amd.com/
>
> Cc: Maarten Lankhorst 
> Cc: Thomas Hellström 
> Cc: "Christian König" 
> Cc: Jerome Glisse 
> Cc: Felix Kuehling 
> Signed-off-by: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> --
> I'll be away next week, but figured I'll type this up quickly for
> some
> comments and to check whether I got this all roughly right.
>
> Critique very much wanted on this, so that we can make sure hw which
> can't preempt (with pagefaults pending) like gfx10 has a clear
> path to
> support page faults in upstream. So anything I missed, got wrong or
> like that would be good.
> -Daniel
> ---
>Documentation/driver-api/dma-buf.rst | 66
> 
>1 file changed, 66 insertions(+)
>
> diff --git a/Documentation/driver-api/dma-buf.rst
> b/Documentation/driver-api/dma-buf.rst
> index a2133d69872c..e924c1e4f7a3 100644
> --- a/Documentation/driver-api/dma-buf.rst
> +++ b/Documentation/driver-api/dma-buf.rst
> @@ -257,3 +257,69 @@ fences in the kernel. This means:
>  userspace is allowed to use userspace fencing or long running
> compute
>  workloads. This also means no implicit fencing for shared
> buffers in these
>  cases.
> +
> +Recoverable Hardware Page Faults Implications
> +~
> +
> +Modern hardware supports recoverable page faults, which has a
> lot of
> +implications for DMA fences.
> +
> +First, a pending page fault obviously holds up the work that's
> running on the
> +accelerator and a memory allocation is usually required to resolve
> the fault.
> +But memory allocations are not allowed to gate completion of DMA
> fences, which
> +means any workload using recoverable page faults cannot use DMA
> fences for
> +synchronization. Synchronization fences controlled by userspace
> must be used
> +instead.
> +
> +On GPUs this poses a problem, because current desktop compositor
> protocols on
> +Linux rely on DMA fences, which means without an entirely new
> userspace stack
> +built on top of userspace fences, they cannot benefit from
> recoverable page
> +faults. The exception is when page faults are only used as
> migration hints and
> +never to on-demand fill a memory request. For now this means
> recoverable page
> +faults on GPUs are limited to pure compute workloads.
> +
> +Furthermore GPUs usually have shared resources between the 3D
> rendering and
> +compute side, like compute units or command submission engines. If
> both a 3D
> +job with a DMA fence and a compute workload using recoverable page
> faults are
> +pending they could deadlock:
> +
> +- The 3D workload might need to wait for the compute job to finish
> and release
> +  hardware resources first.
> +
> +- The compute workload might be stuck in a page fault, because the
> memory
> +  allocation is waiting for the DMA fence of the 3D workload to
> complete.
> +
> +There are a few ways to prevent this problem:
> +
> +- Compute workloads can always be preempted, even when a page
> fault is pending
> +  and not yet repaired. Not all hardware supports this.
> +
> +- DMA fence workloads and workloads which need page fault handling
> have
> +  independent hardware resources to guarantee forward progress.
> This could be
> +  achieved e.g. through 

Re: [Linaro-mm-sig] [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-02-08 Thread Daniel Vetter
On Tue, Feb 9, 2021 at 4:13 AM Bas Nieuwenhuizen
 wrote:
>
> On Thu, Jan 28, 2021 at 4:40 PM Felix Kuehling  wrote:
> >
> > Am 2021-01-28 um 2:39 a.m. schrieb Christian König:
> > > Am 27.01.21 um 23:00 schrieb Felix Kuehling:
> > >> Am 2021-01-27 um 7:16 a.m. schrieb Christian König:
> > >>> Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:
> >  Op 27-01-2021 om 01:22 schreef Felix Kuehling:
> > > Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:
> > >> Recently there was a fairly long thread about recoverable hardware
> > >> page
> > >> faults, how they can deadlock, and what to do about that.
> > >>
> > >> While the discussion is still fresh I figured good time to try and
> > >> document the conclusions a bit.
> > >>
> > >> References:
> > >> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20210107030127.20393-1-Felix.Kuehling%40amd.com%2Fdata=04%7C01%7Cfelix.kuehling%40amd.com%7C4e4884be55d74c4dda1408d8c35fd0ab%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637474163592260552%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000sdata=y2VzC4vbfMi0ctyerAHfqODZ6tthz1FUDwpMCp0PIrQ%3Dreserved=0
> > >>
> > >> Cc: Maarten Lankhorst 
> > >> Cc: Thomas Hellström 
> > >> Cc: "Christian König" 
> > >> Cc: Jerome Glisse 
> > >> Cc: Felix Kuehling 
> > >> Signed-off-by: Daniel Vetter 
> > >> Cc: Sumit Semwal 
> > >> Cc: linux-me...@vger.kernel.org
> > >> Cc: linaro-mm-...@lists.linaro.org
> > >> --
> > >> I'll be away next week, but figured I'll type this up quickly for
> > >> some
> > >> comments and to check whether I got this all roughly right.
> > >>
> > >> Critique very much wanted on this, so that we can make sure hw which
> > >> can't preempt (with pagefaults pending) like gfx10 has a clear
> > >> path to
> > >> support page faults in upstream. So anything I missed, got wrong or
> > >> like that would be good.
> > >> -Daniel
> > >> ---
> > >>Documentation/driver-api/dma-buf.rst | 66
> > >> 
> > >>1 file changed, 66 insertions(+)
> > >>
> > >> diff --git a/Documentation/driver-api/dma-buf.rst
> > >> b/Documentation/driver-api/dma-buf.rst
> > >> index a2133d69872c..e924c1e4f7a3 100644
> > >> --- a/Documentation/driver-api/dma-buf.rst
> > >> +++ b/Documentation/driver-api/dma-buf.rst
> > >> @@ -257,3 +257,69 @@ fences in the kernel. This means:
> > >>  userspace is allowed to use userspace fencing or long running
> > >> compute
> > >>  workloads. This also means no implicit fencing for shared
> > >> buffers in these
> > >>  cases.
> > >> +
> > >> +Recoverable Hardware Page Faults Implications
> > >> +~
> > >> +
> > >> +Modern hardware supports recoverable page faults, which has a
> > >> lot of
> > >> +implications for DMA fences.
> > >> +
> > >> +First, a pending page fault obviously holds up the work that's
> > >> running on the
> > >> +accelerator and a memory allocation is usually required to resolve
> > >> the fault.
> > >> +But memory allocations are not allowed to gate completion of DMA
> > >> fences, which
> > >> +means any workload using recoverable page faults cannot use DMA
> > >> fences for
> > >> +synchronization. Synchronization fences controlled by userspace
> > >> must be used
> > >> +instead.
> > >> +
> > >> +On GPUs this poses a problem, because current desktop compositor
> > >> protocols on
> > >> +Linux rely on DMA fences, which means without an entirely new
> > >> userspace stack
> > >> +built on top of userspace fences, they cannot benefit from
> > >> recoverable page
> > >> +faults. The exception is when page faults are only used as
> > >> migration hints and
> > >> +never to on-demand fill a memory request. For now this means
> > >> recoverable page
> > >> +faults on GPUs are limited to pure compute workloads.
> > >> +
> > >> +Furthermore GPUs usually have shared resources between the 3D
> > >> rendering and
> > >> +compute side, like compute units or command submission engines. If
> > >> both a 3D
> > >> +job with a DMA fence and a compute workload using recoverable page
> > >> faults are
> > >> +pending they could deadlock:
> > >> +
> > >> +- The 3D workload might need to wait for the compute job to finish
> > >> and release
> > >> +  hardware resources first.
> > >> +
> > >> +- The compute workload might be stuck in a page fault, because the
> > >> memory
> > >> +  allocation is waiting for the DMA fence of the 3D workload to
> > >> complete.
> > >> +
> > >> +There are a few ways to prevent this problem:
> > >> +
> > 

Re: [Linaro-mm-sig] [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-02-08 Thread Bas Nieuwenhuizen
On Thu, Jan 28, 2021 at 4:40 PM Felix Kuehling  wrote:
>
> Am 2021-01-28 um 2:39 a.m. schrieb Christian König:
> > Am 27.01.21 um 23:00 schrieb Felix Kuehling:
> >> Am 2021-01-27 um 7:16 a.m. schrieb Christian König:
> >>> Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:
>  Op 27-01-2021 om 01:22 schreef Felix Kuehling:
> > Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:
> >> Recently there was a fairly long thread about recoverable hardware
> >> page
> >> faults, how they can deadlock, and what to do about that.
> >>
> >> While the discussion is still fresh I figured good time to try and
> >> document the conclusions a bit.
> >>
> >> References:
> >> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20210107030127.20393-1-Felix.Kuehling%40amd.com%2Fdata=04%7C01%7Cfelix.kuehling%40amd.com%7C4e4884be55d74c4dda1408d8c35fd0ab%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637474163592260552%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000sdata=y2VzC4vbfMi0ctyerAHfqODZ6tthz1FUDwpMCp0PIrQ%3Dreserved=0
> >>
> >> Cc: Maarten Lankhorst 
> >> Cc: Thomas Hellström 
> >> Cc: "Christian König" 
> >> Cc: Jerome Glisse 
> >> Cc: Felix Kuehling 
> >> Signed-off-by: Daniel Vetter 
> >> Cc: Sumit Semwal 
> >> Cc: linux-me...@vger.kernel.org
> >> Cc: linaro-mm-...@lists.linaro.org
> >> --
> >> I'll be away next week, but figured I'll type this up quickly for
> >> some
> >> comments and to check whether I got this all roughly right.
> >>
> >> Critique very much wanted on this, so that we can make sure hw which
> >> can't preempt (with pagefaults pending) like gfx10 has a clear
> >> path to
> >> support page faults in upstream. So anything I missed, got wrong or
> >> like that would be good.
> >> -Daniel
> >> ---
> >>Documentation/driver-api/dma-buf.rst | 66
> >> 
> >>1 file changed, 66 insertions(+)
> >>
> >> diff --git a/Documentation/driver-api/dma-buf.rst
> >> b/Documentation/driver-api/dma-buf.rst
> >> index a2133d69872c..e924c1e4f7a3 100644
> >> --- a/Documentation/driver-api/dma-buf.rst
> >> +++ b/Documentation/driver-api/dma-buf.rst
> >> @@ -257,3 +257,69 @@ fences in the kernel. This means:
> >>  userspace is allowed to use userspace fencing or long running
> >> compute
> >>  workloads. This also means no implicit fencing for shared
> >> buffers in these
> >>  cases.
> >> +
> >> +Recoverable Hardware Page Faults Implications
> >> +~
> >> +
> >> +Modern hardware supports recoverable page faults, which has a
> >> lot of
> >> +implications for DMA fences.
> >> +
> >> +First, a pending page fault obviously holds up the work that's
> >> running on the
> >> +accelerator and a memory allocation is usually required to resolve
> >> the fault.
> >> +But memory allocations are not allowed to gate completion of DMA
> >> fences, which
> >> +means any workload using recoverable page faults cannot use DMA
> >> fences for
> >> +synchronization. Synchronization fences controlled by userspace
> >> must be used
> >> +instead.
> >> +
> >> +On GPUs this poses a problem, because current desktop compositor
> >> protocols on
> >> +Linux rely on DMA fences, which means without an entirely new
> >> userspace stack
> >> +built on top of userspace fences, they cannot benefit from
> >> recoverable page
> >> +faults. The exception is when page faults are only used as
> >> migration hints and
> >> +never to on-demand fill a memory request. For now this means
> >> recoverable page
> >> +faults on GPUs are limited to pure compute workloads.
> >> +
> >> +Furthermore GPUs usually have shared resources between the 3D
> >> rendering and
> >> +compute side, like compute units or command submission engines. If
> >> both a 3D
> >> +job with a DMA fence and a compute workload using recoverable page
> >> faults are
> >> +pending they could deadlock:
> >> +
> >> +- The 3D workload might need to wait for the compute job to finish
> >> and release
> >> +  hardware resources first.
> >> +
> >> +- The compute workload might be stuck in a page fault, because the
> >> memory
> >> +  allocation is waiting for the DMA fence of the 3D workload to
> >> complete.
> >> +
> >> +There are a few ways to prevent this problem:
> >> +
> >> +- Compute workloads can always be preempted, even when a page
> >> fault is pending
> >> +  and not yet repaired. Not all hardware supports this.
> >> +
> >> +- DMA fence workloads and workloads which need page fault handling
> >> have
> >> + 

Re: [Linaro-mm-sig] [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-02-02 Thread Daniel Vetter
Back from vacation.

On Thu, Jan 28, 2021 at 04:46:55PM +0100, Christian König wrote:
> Am 28.01.21 um 16:39 schrieb Felix Kuehling:
> > Am 2021-01-28 um 2:39 a.m. schrieb Christian König:
> > > Am 27.01.21 um 23:00 schrieb Felix Kuehling:
> > > > Am 2021-01-27 um 7:16 a.m. schrieb Christian König:
> > > > > Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:
> > > > > > Op 27-01-2021 om 01:22 schreef Felix Kuehling:
> > > > > > > Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:
> > > > > > > > Recently there was a fairly long thread about recoverable 
> > > > > > > > hardware
> > > > > > > > page
> > > > > > > > faults, how they can deadlock, and what to do about that.
> > > > > > > > 
> > > > > > > > While the discussion is still fresh I figured good time to try 
> > > > > > > > and
> > > > > > > > document the conclusions a bit.
> > > > > > > > 
> > > > > > > > References:
> > > > > > > > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20210107030127.20393-1-Felix.Kuehling%40amd.com%2Fdata=04%7C01%7Cfelix.kuehling%40amd.com%7C4e4884be55d74c4dda1408d8c35fd0ab%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637474163592260552%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000sdata=y2VzC4vbfMi0ctyerAHfqODZ6tthz1FUDwpMCp0PIrQ%3Dreserved=0
> > > > > > > > 
> > > > > > > > Cc: Maarten Lankhorst 
> > > > > > > > Cc: Thomas Hellström 
> > > > > > > > Cc: "Christian König" 
> > > > > > > > Cc: Jerome Glisse 
> > > > > > > > Cc: Felix Kuehling 
> > > > > > > > Signed-off-by: Daniel Vetter 
> > > > > > > > Cc: Sumit Semwal 
> > > > > > > > Cc: linux-me...@vger.kernel.org
> > > > > > > > Cc: linaro-mm-...@lists.linaro.org
> > > > > > > > -- 
> > > > > > > > I'll be away next week, but figured I'll type this up quickly 
> > > > > > > > for
> > > > > > > > some
> > > > > > > > comments and to check whether I got this all roughly right.
> > > > > > > > 
> > > > > > > > Critique very much wanted on this, so that we can make sure hw 
> > > > > > > > which
> > > > > > > > can't preempt (with pagefaults pending) like gfx10 has a clear
> > > > > > > > path to
> > > > > > > > support page faults in upstream. So anything I missed, got 
> > > > > > > > wrong or
> > > > > > > > like that would be good.
> > > > > > > > -Daniel
> > > > > > > > ---
> > > > > > > >     Documentation/driver-api/dma-buf.rst | 66
> > > > > > > > 
> > > > > > > >     1 file changed, 66 insertions(+)
> > > > > > > > 
> > > > > > > > diff --git a/Documentation/driver-api/dma-buf.rst
> > > > > > > > b/Documentation/driver-api/dma-buf.rst
> > > > > > > > index a2133d69872c..e924c1e4f7a3 100644
> > > > > > > > --- a/Documentation/driver-api/dma-buf.rst
> > > > > > > > +++ b/Documentation/driver-api/dma-buf.rst
> > > > > > > > @@ -257,3 +257,69 @@ fences in the kernel. This means:
> > > > > > > >   userspace is allowed to use userspace fencing or long 
> > > > > > > > running
> > > > > > > > compute
> > > > > > > >   workloads. This also means no implicit fencing for shared
> > > > > > > > buffers in these
> > > > > > > >   cases.
> > > > > > > > +
> > > > > > > > +Recoverable Hardware Page Faults Implications
> > > > > > > > +~
> > > > > > > > +
> > > > > > > > +Modern hardware supports recoverable page faults, which has a
> > > > > > > > lot of
> > > > > > > > +implications for DMA fences.
> > > > > > > > +
> > > > > > > > +First, a pending page fault obviously holds up the work that's
> > > > > > > > running on the
> > > > > > > > +accelerator and a memory allocation is usually required to 
> > > > > > > > resolve
> > > > > > > > the fault.
> > > > > > > > +But memory allocations are not allowed to gate completion of 
> > > > > > > > DMA
> > > > > > > > fences, which
> > > > > > > > +means any workload using recoverable page faults cannot use DMA
> > > > > > > > fences for
> > > > > > > > +synchronization. Synchronization fences controlled by userspace
> > > > > > > > must be used
> > > > > > > > +instead.
> > > > > > > > +
> > > > > > > > +On GPUs this poses a problem, because current desktop 
> > > > > > > > compositor
> > > > > > > > protocols on
> > > > > > > > +Linux rely on DMA fences, which means without an entirely new
> > > > > > > > userspace stack
> > > > > > > > +built on top of userspace fences, they cannot benefit from
> > > > > > > > recoverable page
> > > > > > > > +faults. The exception is when page faults are only used as
> > > > > > > > migration hints and
> > > > > > > > +never to on-demand fill a memory request. For now this means
> > > > > > > > recoverable page
> > > > > > > > +faults on GPUs are limited to pure compute workloads.
> > > > > > > > +
> > > > > > > > +Furthermore GPUs usually have shared resources between the 3D
> > > > > > > > rendering and
> > > > > > > > +compute side, like compute units or command submission 
> > > > > > > > engines. If
> > > 

Re: [Linaro-mm-sig] [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-28 Thread Christian König

Am 28.01.21 um 16:39 schrieb Felix Kuehling:

Am 2021-01-28 um 2:39 a.m. schrieb Christian König:

Am 27.01.21 um 23:00 schrieb Felix Kuehling:

Am 2021-01-27 um 7:16 a.m. schrieb Christian König:

Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:

Op 27-01-2021 om 01:22 schreef Felix Kuehling:

Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:

Recently there was a fairly long thread about recoverable hardware
page
faults, how they can deadlock, and what to do about that.

While the discussion is still fresh I figured good time to try and
document the conclusions a bit.

References:
https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20210107030127.20393-1-Felix.Kuehling%40amd.com%2Fdata=04%7C01%7Cfelix.kuehling%40amd.com%7C4e4884be55d74c4dda1408d8c35fd0ab%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637474163592260552%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000sdata=y2VzC4vbfMi0ctyerAHfqODZ6tthz1FUDwpMCp0PIrQ%3Dreserved=0

Cc: Maarten Lankhorst 
Cc: Thomas Hellström 
Cc: "Christian König" 
Cc: Jerome Glisse 
Cc: Felix Kuehling 
Signed-off-by: Daniel Vetter 
Cc: Sumit Semwal 
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
--
I'll be away next week, but figured I'll type this up quickly for
some
comments and to check whether I got this all roughly right.

Critique very much wanted on this, so that we can make sure hw which
can't preempt (with pagefaults pending) like gfx10 has a clear
path to
support page faults in upstream. So anything I missed, got wrong or
like that would be good.
-Daniel
---
    Documentation/driver-api/dma-buf.rst | 66

    1 file changed, 66 insertions(+)

diff --git a/Documentation/driver-api/dma-buf.rst
b/Documentation/driver-api/dma-buf.rst
index a2133d69872c..e924c1e4f7a3 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -257,3 +257,69 @@ fences in the kernel. This means:
  userspace is allowed to use userspace fencing or long running
compute
  workloads. This also means no implicit fencing for shared
buffers in these
  cases.
+
+Recoverable Hardware Page Faults Implications
+~
+
+Modern hardware supports recoverable page faults, which has a
lot of
+implications for DMA fences.
+
+First, a pending page fault obviously holds up the work that's
running on the
+accelerator and a memory allocation is usually required to resolve
the fault.
+But memory allocations are not allowed to gate completion of DMA
fences, which
+means any workload using recoverable page faults cannot use DMA
fences for
+synchronization. Synchronization fences controlled by userspace
must be used
+instead.
+
+On GPUs this poses a problem, because current desktop compositor
protocols on
+Linux rely on DMA fences, which means without an entirely new
userspace stack
+built on top of userspace fences, they cannot benefit from
recoverable page
+faults. The exception is when page faults are only used as
migration hints and
+never to on-demand fill a memory request. For now this means
recoverable page
+faults on GPUs are limited to pure compute workloads.
+
+Furthermore GPUs usually have shared resources between the 3D
rendering and
+compute side, like compute units or command submission engines. If
both a 3D
+job with a DMA fence and a compute workload using recoverable page
faults are
+pending they could deadlock:
+
+- The 3D workload might need to wait for the compute job to finish
and release
+  hardware resources first.
+
+- The compute workload might be stuck in a page fault, because the
memory
+  allocation is waiting for the DMA fence of the 3D workload to
complete.
+
+There are a few ways to prevent this problem:
+
+- Compute workloads can always be preempted, even when a page
fault is pending
+  and not yet repaired. Not all hardware supports this.
+
+- DMA fence workloads and workloads which need page fault handling
have
+  independent hardware resources to guarantee forward progress.
This could be
+  achieved e.g. through dedicated engines and minimal
compute unit
+  reservations for DMA fence workloads.
+
+- The reservation approach could be further refined by only
reserving the
+  hardware resources for DMA fence workloads when they are
in-flight. This must
+  cover the time from when the DMA fence is visible to other
threads up to
+  the moment when the fence is completed through dma_fence_signal().
+
+- As a last resort, if the hardware provides no useful reservation
mechanics,
+  all workloads must be flushed from the GPU when switching
between jobs
+  requiring DMA fences or jobs requiring page fault handling: This
means all DMA
+  fences must complete before a compute job with page fault
handling can be
+  inserted into the scheduler queue. And vice versa, before a DMA
fence can be
+  made visible anywhere in the system, all compute workloads must
be 

Re: [Linaro-mm-sig] [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-28 Thread Felix Kuehling
Am 2021-01-28 um 2:39 a.m. schrieb Christian König:
> Am 27.01.21 um 23:00 schrieb Felix Kuehling:
>> Am 2021-01-27 um 7:16 a.m. schrieb Christian König:
>>> Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:
 Op 27-01-2021 om 01:22 schreef Felix Kuehling:
> Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:
>> Recently there was a fairly long thread about recoverable hardware
>> page
>> faults, how they can deadlock, and what to do about that.
>>
>> While the discussion is still fresh I figured good time to try and
>> document the conclusions a bit.
>>
>> References:
>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20210107030127.20393-1-Felix.Kuehling%40amd.com%2Fdata=04%7C01%7Cfelix.kuehling%40amd.com%7C4e4884be55d74c4dda1408d8c35fd0ab%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637474163592260552%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000sdata=y2VzC4vbfMi0ctyerAHfqODZ6tthz1FUDwpMCp0PIrQ%3Dreserved=0
>>
>> Cc: Maarten Lankhorst 
>> Cc: Thomas Hellström 
>> Cc: "Christian König" 
>> Cc: Jerome Glisse 
>> Cc: Felix Kuehling 
>> Signed-off-by: Daniel Vetter 
>> Cc: Sumit Semwal 
>> Cc: linux-me...@vger.kernel.org
>> Cc: linaro-mm-...@lists.linaro.org
>> -- 
>> I'll be away next week, but figured I'll type this up quickly for
>> some
>> comments and to check whether I got this all roughly right.
>>
>> Critique very much wanted on this, so that we can make sure hw which
>> can't preempt (with pagefaults pending) like gfx10 has a clear
>> path to
>> support page faults in upstream. So anything I missed, got wrong or
>> like that would be good.
>> -Daniel
>> ---
>>    Documentation/driver-api/dma-buf.rst | 66
>> 
>>    1 file changed, 66 insertions(+)
>>
>> diff --git a/Documentation/driver-api/dma-buf.rst
>> b/Documentation/driver-api/dma-buf.rst
>> index a2133d69872c..e924c1e4f7a3 100644
>> --- a/Documentation/driver-api/dma-buf.rst
>> +++ b/Documentation/driver-api/dma-buf.rst
>> @@ -257,3 +257,69 @@ fences in the kernel. This means:
>>  userspace is allowed to use userspace fencing or long running
>> compute
>>  workloads. This also means no implicit fencing for shared
>> buffers in these
>>  cases.
>> +
>> +Recoverable Hardware Page Faults Implications
>> +~
>> +
>> +Modern hardware supports recoverable page faults, which has a
>> lot of
>> +implications for DMA fences.
>> +
>> +First, a pending page fault obviously holds up the work that's
>> running on the
>> +accelerator and a memory allocation is usually required to resolve
>> the fault.
>> +But memory allocations are not allowed to gate completion of DMA
>> fences, which
>> +means any workload using recoverable page faults cannot use DMA
>> fences for
>> +synchronization. Synchronization fences controlled by userspace
>> must be used
>> +instead.
>> +
>> +On GPUs this poses a problem, because current desktop compositor
>> protocols on
>> +Linux rely on DMA fences, which means without an entirely new
>> userspace stack
>> +built on top of userspace fences, they cannot benefit from
>> recoverable page
>> +faults. The exception is when page faults are only used as
>> migration hints and
>> +never to on-demand fill a memory request. For now this means
>> recoverable page
>> +faults on GPUs are limited to pure compute workloads.
>> +
>> +Furthermore GPUs usually have shared resources between the 3D
>> rendering and
>> +compute side, like compute units or command submission engines. If
>> both a 3D
>> +job with a DMA fence and a compute workload using recoverable page
>> faults are
>> +pending they could deadlock:
>> +
>> +- The 3D workload might need to wait for the compute job to finish
>> and release
>> +  hardware resources first.
>> +
>> +- The compute workload might be stuck in a page fault, because the
>> memory
>> +  allocation is waiting for the DMA fence of the 3D workload to
>> complete.
>> +
>> +There are a few ways to prevent this problem:
>> +
>> +- Compute workloads can always be preempted, even when a page
>> fault is pending
>> +  and not yet repaired. Not all hardware supports this.
>> +
>> +- DMA fence workloads and workloads which need page fault handling
>> have
>> +  independent hardware resources to guarantee forward progress.
>> This could be
>> +  achieved e.g. through dedicated engines and minimal
>> compute unit
>> +  reservations for DMA fence workloads.
>> +
>> +- The reservation approach could be 

Re: [Linaro-mm-sig] [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-27 Thread Christian König

Am 27.01.21 um 23:00 schrieb Felix Kuehling:

Am 2021-01-27 um 7:16 a.m. schrieb Christian König:

Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:

Op 27-01-2021 om 01:22 schreef Felix Kuehling:

Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:

Recently there was a fairly long thread about recoverable hardware page
faults, how they can deadlock, and what to do about that.

While the discussion is still fresh I figured good time to try and
document the conclusions a bit.

References:
https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20210107030127.20393-1-Felix.Kuehling%40amd.com%2Fdata=04%7C01%7Cchristian.koenig%40amd.com%7Cbee0aeff80f440bcc52108d8c2bcc11f%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637473463245588199%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000sdata=ncr%2Fqv5lw0ONrYxFvfdcFAXAZ%2BXcJJa6UY%2BxGfcKGVM%3Dreserved=0
Cc: Maarten Lankhorst 
Cc: Thomas Hellström 
Cc: "Christian König" 
Cc: Jerome Glisse 
Cc: Felix Kuehling 
Signed-off-by: Daniel Vetter 
Cc: Sumit Semwal 
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
--
I'll be away next week, but figured I'll type this up quickly for some
comments and to check whether I got this all roughly right.

Critique very much wanted on this, so that we can make sure hw which
can't preempt (with pagefaults pending) like gfx10 has a clear path to
support page faults in upstream. So anything I missed, got wrong or
like that would be good.
-Daniel
---
   Documentation/driver-api/dma-buf.rst | 66

   1 file changed, 66 insertions(+)

diff --git a/Documentation/driver-api/dma-buf.rst
b/Documentation/driver-api/dma-buf.rst
index a2133d69872c..e924c1e4f7a3 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -257,3 +257,69 @@ fences in the kernel. This means:
     userspace is allowed to use userspace fencing or long running
compute
     workloads. This also means no implicit fencing for shared
buffers in these
     cases.
+
+Recoverable Hardware Page Faults Implications
+~
+
+Modern hardware supports recoverable page faults, which has a lot of
+implications for DMA fences.
+
+First, a pending page fault obviously holds up the work that's
running on the
+accelerator and a memory allocation is usually required to resolve
the fault.
+But memory allocations are not allowed to gate completion of DMA
fences, which
+means any workload using recoverable page faults cannot use DMA
fences for
+synchronization. Synchronization fences controlled by userspace
must be used
+instead.
+
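
As a rough illustration of why such an allocation must never sit on the
fence completion path, consider a hypothetical fault worker; the struct and
function names below are made up, only the dma_fence_*() and alloc_page()
calls are real kernel API:

    #include <linux/dma-fence.h>
    #include <linux/gfp.h>

    /* Hypothetical types, for illustration only. */
    struct hypothetical_fault {
            struct page *backing_page;
            struct dma_fence *job_fence;
    };

    static void hypothetical_fault_worker(struct hypothetical_fault *f)
    {
            /* Everything up to dma_fence_end_signalling() is on the path
             * that completes f->job_fence, so lockdep treats it as a fence
             * signalling critical section. */
            bool cookie = dma_fence_begin_signalling();

            /*
             * BAD: GFP_KERNEL may enter reclaim, and reclaim can end up
             * waiting on other DMA fences (through shrinkers and MMU
             * notifiers), so an allocation like this must not gate
             * dma_fence_signal().
             */
            f->backing_page = alloc_page(GFP_KERNEL);

            /* ...repair the GPU page tables and resume the job... */

            dma_fence_signal(f->job_fence);
            dma_fence_end_signalling(cookie);
    }

The dma_fence_begin_signalling()/dma_fence_end_signalling() lockdep
annotations exist precisely so that this kind of dependency is caught.
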
+On GPUs this poses a problem, because current desktop compositor
protocols on
+Linux rely on DMA fences, which means without an entirely new
userspace stack
+built on top of userspace fences, they cannot benefit from
recoverable page
+faults. The exception is when page faults are only used as
migration hints and
+never to on-demand fill a memory request. For now this means
recoverable page
+faults on GPUs are limited to pure compute workloads.
+
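
For reference, a userspace-controlled synchronization fence can be as simple
as a monotonically increasing 64-bit value in memory shared with the GPU.
The sketch below is schematic and not any particular driver's API:

    #include <sched.h>
    #include <stdatomic.h>
    #include <stdint.h>

    /* The GPU (or a user-mode driver thread) stores a new, larger value
     * into *fence_addr when the corresponding work completes.  Waiters
     * simply poll until the value they need shows up; no kernel DMA fence
     * is involved, so a pending page fault can stall completion
     * indefinitely without deadlocking anything in the kernel. */
    static void userspace_fence_wait(_Atomic uint64_t *fence_addr,
                                     uint64_t wait_value)
    {
            while (atomic_load_explicit(fence_addr,
                                        memory_order_acquire) < wait_value)
                    sched_yield();  /* real code: futex or driver wait ioctl */
    }

Teaching compositors and window-system protocols to consume such fences is
exactly the new-userspace-stack problem mentioned above.
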
+Furthermore GPUs usually have shared resources between the 3D
rendering and
+compute side, like compute units or command submission engines. If
both a 3D
+job with a DMA fence and a compute workload using recoverable page
faults are
+pending they could deadlock:
+
+- The 3D workload might need to wait for the compute job to finish
and release
+  hardware resources first.
+
+- The compute workload might be stuck in a page fault, because the
memory
+  allocation is waiting for the DMA fence of the 3D workload to
complete.
+
+There are a few ways to prevent this problem:
+
+- Compute workloads can always be preempted, even when a page
fault is pending
+  and not yet repaired. Not all hardware supports this.
+
+- DMA fence workloads and workloads which need page fault handling
have
+  independent hardware resources to guarantee forward progress.
This could be
+  achieved e.g. through dedicated engines and minimal
compute unit
+  reservations for DMA fence workloads.
+
+- The reservation approach could be further refined by only
reserving the
+  hardware resources for DMA fence workloads when they are
in-flight. This must
+  cover the time from when the DMA fence is visible to other
threads up to
+  the moment when the fence is completed through dma_fence_signal() (see the
+  sketch after this list).
+
+- As a last resort, if the hardware provides no useful reservation
mechanics,
+  all workloads must be flushed from the GPU when switching
between jobs
+  requiring DMA fences or jobs requiring page fault handling: This
means all DMA
+  fences must complete before a compute job with page fault
handling can be
+  inserted into the scheduler queue. And vice versa, before a DMA
fence can be
+  made visible anywhere in the system, all compute workloads must
be preempted
+  to guarantee all pending GPU page faults are flushed.
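
A rough sketch of the in-flight reservation refinement, with made-up driver
helpers (hypothetical_*); only dma_fence_add_callback() and the allocator
calls are real kernel API.  The reservation is taken before the fence
becomes visible to other threads and dropped from the fence callback once
the fence has signalled:

    #include <linux/dma-fence.h>
    #include <linux/slab.h>

    struct hypothetical_resv {
            struct dma_fence_cb cb;
            struct hypothetical_gpu *gpu;   /* made-up device struct */
    };

    static void hypothetical_resv_release(struct dma_fence *fence,
                                          struct dma_fence_cb *cb)
    {
            struct hypothetical_resv *resv =
                    container_of(cb, struct hypothetical_resv, cb);

            /* The fence completed through dma_fence_signal(): hand the
             * reserved engines/compute units back to page-fault users. */
            hypothetical_release_reserved_cus(resv->gpu);
            kfree(resv);
    }

    static int hypothetical_submit_fence_job(struct hypothetical_gpu *gpu,
                                             struct dma_fence *fence)
    {
            /* Allocating here is fine: the fence is not yet visible to
             * other threads, so nothing can depend on it. */
            struct hypothetical_resv *resv = kzalloc(sizeof(*resv), GFP_KERNEL);

            if (!resv)
                    return -ENOMEM;
            resv->gpu = gpu;

            /* Reserve before publishing the fence anywhere. */
            hypothetical_reserve_cus(gpu);

            /* ...publish the fence, submit the job to the hardware... */

            if (dma_fence_add_callback(fence, &resv->cb,
                                       hypothetical_resv_release))
                    /* Fence already signalled, release immediately. */
                    hypothetical_resv_release(fence, &resv->cb);
            return 0;
    }
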

I thought of another possible 

Re: [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-27 Thread Felix Kuehling
Am 2021-01-27 um 7:16 a.m. schrieb Christian König:
> Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:
>> Op 27-01-2021 om 01:22 schreef Felix Kuehling:
>>> Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:
 Recently there was a fairly long thread about recoverable hardware page
 faults, how they can deadlock, and what to do about that.

 While the discussion is still fresh I figured good time to try and
 document the conclusions a bit.

 References:
 https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20210107030127.20393-1-Felix.Kuehling%40amd.com%2Fdata=04%7C01%7Cchristian.koenig%40amd.com%7Cbee0aeff80f440bcc52108d8c2bcc11f%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637473463245588199%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000sdata=ncr%2Fqv5lw0ONrYxFvfdcFAXAZ%2BXcJJa6UY%2BxGfcKGVM%3Dreserved=0
 Cc: Maarten Lankhorst 
 Cc: Thomas Hellström 
 Cc: "Christian König" 
 Cc: Jerome Glisse 
 Cc: Felix Kuehling 
 Signed-off-by: Daniel Vetter 
 Cc: Sumit Semwal 
 Cc: linux-me...@vger.kernel.org
 Cc: linaro-mm-...@lists.linaro.org
 -- 
 I'll be away next week, but figured I'll type this up quickly for some
 comments and to check whether I got this all roughly right.

 Critique very much wanted on this, so that we can make sure hw which
 can't preempt (with pagefaults pending) like gfx10 has a clear path to
 support page faults in upstream. So anything I missed, got wrong or
 like that would be good.
 -Daniel
 ---
   Documentation/driver-api/dma-buf.rst | 66
 
   1 file changed, 66 insertions(+)

 diff --git a/Documentation/driver-api/dma-buf.rst
 b/Documentation/driver-api/dma-buf.rst
 index a2133d69872c..e924c1e4f7a3 100644
 --- a/Documentation/driver-api/dma-buf.rst
 +++ b/Documentation/driver-api/dma-buf.rst
 @@ -257,3 +257,69 @@ fences in the kernel. This means:
     userspace is allowed to use userspace fencing or long running
 compute
     workloads. This also means no implicit fencing for shared
 buffers in these
     cases.
 +
 +Recoverable Hardware Page Faults Implications
 +~
 +
 +Modern hardware supports recoverable page faults, which has a lot of
 +implications for DMA fences.
 +
 +First, a pending page fault obviously holds up the work that's
 running on the
 +accelerator and a memory allocation is usually required to resolve
 the fault.
 +But memory allocations are not allowed to gate completion of DMA
 fences, which
 +means any workload using recoverable page faults cannot use DMA
 fences for
 +synchronization. Synchronization fences controlled by userspace
 must be used
 +instead.
 +
 +On GPUs this poses a problem, because current desktop compositor
 protocols on
 +Linux rely on DMA fences, which means without an entirely new
 userspace stack
 +built on top of userspace fences, they cannot benefit from
 recoverable page
 +faults. The exception is when page faults are only used as
 migration hints and
 +never to on-demand fill a memory request. For now this means
 recoverable page
 +faults on GPUs are limited to pure compute workloads.
 +
 +Furthermore GPUs usually have shared resources between the 3D
 rendering and
 +compute side, like compute units or command submission engines. If
 both a 3D
 +job with a DMA fence and a compute workload using recoverable page
 faults are
 +pending they could deadlock:
 +
 +- The 3D workload might need to wait for the compute job to finish
 and release
 +  hardware resources first.
 +
 +- The compute workload might be stuck in a page fault, because the
 memory
 +  allocation is waiting for the DMA fence of the 3D workload to
 complete.
 +
 +There are a few ways to prevent this problem:
 +
 +- Compute workloads can always be preempted, even when a page
 fault is pending
 +  and not yet repaired. Not all hardware supports this.
 +
 +- DMA fence workloads and workloads which need page fault handling
 have
 +  independent hardware resources to guarantee forward progress.
 This could be
 +  achieved e.g. through dedicated engines and minimal
 compute unit
 +  reservations for DMA fence workloads.
 +
 +- The reservation approach could be further refined by only
 reserving the
 +  hardware resources for DMA fence workloads when they are
 in-flight. This must
 +  cover the time from when the DMA fence is visible to other
 threads up to
 +  the moment when the fence is completed through dma_fence_signal().
 +
 +- As a last resort, if the hardware provides no 

Re: [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-27 Thread Christian König

Am 27.01.21 um 13:11 schrieb Maarten Lankhorst:

Op 27-01-2021 om 01:22 schreef Felix Kuehling:

Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:

Recently there was a fairly long thread about recoverable hardware page
faults, how they can deadlock, and what to do about that.

While the discussion is still fresh I figured good time to try and
document the conclusions a bit.

References: 
https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kernel.org%2Fdri-devel%2F20210107030127.20393-1-Felix.Kuehling%40amd.com%2Fdata=04%7C01%7Cchristian.koenig%40amd.com%7Cbee0aeff80f440bcc52108d8c2bcc11f%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637473463245588199%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000sdata=ncr%2Fqv5lw0ONrYxFvfdcFAXAZ%2BXcJJa6UY%2BxGfcKGVM%3Dreserved=0
Cc: Maarten Lankhorst 
Cc: Thomas Hellström 
Cc: "Christian König" 
Cc: Jerome Glisse 
Cc: Felix Kuehling 
Signed-off-by: Daniel Vetter 
Cc: Sumit Semwal 
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
--
I'll be away next week, but figured I'll type this up quickly for some
comments and to check whether I got this all roughly right.

Critique very much wanted on this, so that we can make sure hw which
can't preempt (with pagefaults pending) like gfx10 has a clear path to
support page faults in upstream. So anything I missed, got wrong or
like that would be good.
-Daniel
---
  Documentation/driver-api/dma-buf.rst | 66 
  1 file changed, 66 insertions(+)

diff --git a/Documentation/driver-api/dma-buf.rst 
b/Documentation/driver-api/dma-buf.rst
index a2133d69872c..e924c1e4f7a3 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -257,3 +257,69 @@ fences in the kernel. This means:
userspace is allowed to use userspace fencing or long running compute
workloads. This also means no implicit fencing for shared buffers in these
cases.
+
+Recoverable Hardware Page Faults Implications
+~
+
+Modern hardware supports recoverable page faults, which has a lot of
+implications for DMA fences.
+
+First, a pending page fault obviously holds up the work that's running on the
+accelerator and a memory allocation is usually required to resolve the fault.
+But memory allocations are not allowed to gate completion of DMA fences, which
+means any workload using recoverable page faults cannot use DMA fences for
+synchronization. Synchronization fences controlled by userspace must be used
+instead.
+
+On GPUs this poses a problem, because current desktop compositor protocols on
+Linux rely on DMA fences, which means without an entirely new userspace stack
+built on top of userspace fences, they cannot benefit from recoverable page
+faults. The exception is when page faults are only used as migration hints and
+never to on-demand fill a memory request. For now this means recoverable page
+faults on GPUs are limited to pure compute workloads.
+
+Furthermore GPUs usually have shared resources between the 3D rendering and
+compute side, like compute units or command submission engines. If both a 3D
+job with a DMA fence and a compute workload using recoverable page faults are
+pending they could deadlock:
+
+- The 3D workload might need to wait for the compute job to finish and release
+  hardware resources first.
+
+- The compute workload might be stuck in a page fault, because the memory
+  allocation is waiting for the DMA fence of the 3D workload to complete.
+
+There are a few ways to prevent this problem:
+
+- Compute workloads can always be preempted, even when a page fault is pending
+  and not yet repaired. Not all hardware supports this.
+
+- DMA fence workloads and workloads which need page fault handling have
+  independent hardware resources to guarantee forward progress. This could be
+  achieved e.g. through dedicated engines and minimal compute unit
+  reservations for DMA fence workloads.
+
+- The reservation approach could be further refined by only reserving the
+  hardware resources for DMA fence workloads when they are in-flight. This must
+  cover the time from when the DMA fence is visible to other threads up to
+  the moment when the fence is completed through dma_fence_signal().
+
+- As a last resort, if the hardware provides no useful reservation mechanics,
+  all workloads must be flushed from the GPU when switching between jobs
+  requiring DMA fences or jobs requiring page fault handling: This means all 
DMA
+  fences must complete before a compute job with page fault handling can be
+  inserted into the scheduler queue. And vice versa, before a DMA fence can be
+  made visible anywhere in the system, all compute workloads must be preempted
+  to guarantee all pending GPU page faults are flushed.
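
The last-resort variant boils down to making DMA fence jobs and page fault
jobs mutually exclusive on the hardware.  A minimal sketch of such a gate,
again with made-up names; wait_event() and the spinlock/waitqueue primitives
are real kernel API, and the mirror-image path (preempting all fault-based
jobs before a DMA fence is published) is omitted for brevity:

    #include <linux/compiler.h>
    #include <linux/spinlock.h>
    #include <linux/wait.h>

    struct hypothetical_gate {
            spinlock_t lock;
            unsigned int fence_jobs;  /* in-flight jobs that signal DMA fences */
            wait_queue_head_t idle;
    };

    /* All DMA fences must have completed before a job relying on
     * recoverable page faults may even enter the scheduler queue. */
    static void hypothetical_queue_fault_job(struct hypothetical_gate *g)
    {
            spin_lock(&g->lock);
            while (g->fence_jobs) {
                    spin_unlock(&g->lock);
                    wait_event(g->idle, READ_ONCE(g->fence_jobs) == 0);
                    spin_lock(&g->lock);
            }
            /* ...insert the page fault job into the hardware queue... */
            spin_unlock(&g->lock);
    }

    static void hypothetical_fence_job_done(struct hypothetical_gate *g)
    {
            spin_lock(&g->lock);
            if (--g->fence_jobs == 0)
                    wake_up_all(&g->idle);
            spin_unlock(&g->lock);
    }
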

I thought of another possible workaround:

   * Partition the memory. Servicing of page faults will use a separate
 memory pool that 

Re: [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-27 Thread Maarten Lankhorst
Op 27-01-2021 om 01:22 schreef Felix Kuehling:
> Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:
>> Recently there was a fairly long thread about recoverable hardware page
>> faults, how they can deadlock, and what to do about that.
>>
>> While the discussion is still fresh I figured good time to try and
>> document the conclusions a bit.
>>
>> References: 
>> https://lore.kernel.org/dri-devel/20210107030127.20393-1-felix.kuehl...@amd.com/
>> Cc: Maarten Lankhorst 
>> Cc: Thomas Hellström 
>> Cc: "Christian König" 
>> Cc: Jerome Glisse 
>> Cc: Felix Kuehling 
>> Signed-off-by: Daniel Vetter 
>> Cc: Sumit Semwal 
>> Cc: linux-me...@vger.kernel.org
>> Cc: linaro-mm-...@lists.linaro.org
>> --
>> I'll be away next week, but figured I'll type this up quickly for some
>> comments and to check whether I got this all roughly right.
>>
>> Critique very much wanted on this, so that we can make sure hw which
>> can't preempt (with pagefaults pending) like gfx10 has a clear path to
>> support page faults in upstream. So anything I missed, got wrong or
>> like that would be good.
>> -Daniel
>> ---
>>  Documentation/driver-api/dma-buf.rst | 66 
>>  1 file changed, 66 insertions(+)
>>
>> diff --git a/Documentation/driver-api/dma-buf.rst 
>> b/Documentation/driver-api/dma-buf.rst
>> index a2133d69872c..e924c1e4f7a3 100644
>> --- a/Documentation/driver-api/dma-buf.rst
>> +++ b/Documentation/driver-api/dma-buf.rst
>> @@ -257,3 +257,69 @@ fences in the kernel. This means:
>>userspace is allowed to use userspace fencing or long running compute
>>workloads. This also means no implicit fencing for shared buffers in these
>>cases.
>> +
>> +Recoverable Hardware Page Faults Implications
>> +~
>> +
>> +Modern hardware supports recoverable page faults, which has a lot of
>> +implications for DMA fences.
>> +
>> +First, a pending page fault obviously holds up the work that's running on 
>> the
>> +accelerator and a memory allocation is usually required to resolve the 
>> fault.
>> +But memory allocations are not allowed to gate completion of DMA fences, 
>> which
>> +means any workload using recoverable page faults cannot use DMA fences for
>> +synchronization. Synchronization fences controlled by userspace must be used
>> +instead.
>> +
>> +On GPUs this poses a problem, because current desktop compositor protocols 
>> on
>> +Linux rely on DMA fences, which means without an entirely new userspace 
>> stack
>> +built on top of userspace fences, they cannot benefit from recoverable page
>> +faults. The exception is when page faults are only used as migration hints 
>> and
>> +never to on-demand fill a memory request. For now this means recoverable 
>> page
>> +faults on GPUs are limited to pure compute workloads.
>> +
>> +Furthermore GPUs usually have shared resources between the 3D rendering and
>> +compute side, like compute units or command submission engines. If both a 3D
>> +job with a DMA fence and a compute workload using recoverable page faults 
>> are
>> +pending they could deadlock:
>> +
>> +- The 3D workload might need to wait for the compute job to finish and 
>> release
>> +  hardware resources first.
>> +
>> +- The compute workload might be stuck in a page fault, because the memory
>> +  allocation is waiting for the DMA fence of the 3D workload to complete.
>> +
>> +There are a few ways to prevent this problem:
>> +
>> +- Compute workloads can always be preempted, even when a page fault is 
>> pending
>> +  and not yet repaired. Not all hardware supports this.
>> +
>> +- DMA fence workloads and workloads which need page fault handling have
>> +  independent hardware resources to guarantee forward progress. This could 
>> be
>> +  achieved e.g. through dedicated engines and minimal compute unit
>> +  reservations for DMA fence workloads.
>> +
>> +- The reservation approach could be further refined by only reserving the
>> +  hardware resources for DMA fence workloads when they are in-flight. This 
>> must
>> +  cover the time from when the DMA fence is visible to other threads up to
>> +  the moment when the fence is completed through dma_fence_signal().
>> +
>> +- As a last resort, if the hardware provides no useful reservation 
>> mechanics,
>> +  all workloads must be flushed from the GPU when switching between jobs
>> +  requiring DMA fences or jobs requiring page fault handling: This means 
>> all DMA
>> +  fences must complete before a compute job with page fault handling can be
>> +  inserted into the scheduler queue. And vice versa, before a DMA fence can 
>> be
>> +  made visible anywhere in the system, all compute workloads must be 
>> preempted
>> +  to guarantee all pending GPU page faults are flushed.
> I thought of another possible workaround:
>
>   * Partition the memory. Servicing of page faults will use a separate
> memory pool that can always be allocated from without waiting for
> fences. This includes memory 
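
One way to read that suggestion is a page pool sized at driver init for the
worst case number of in-flight faults, which the fault path draws from with
GFP_NOWAIT so it never enters reclaim (and therefore never waits on anyone
else's DMA fence).  A rough sketch, assuming made-up function names around
the real mempool API:

    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/mempool.h>

    static mempool_t *fault_page_pool;

    static int hypothetical_fault_pool_init(void)
    {
            /* 256 pages is an arbitrary worst-case figure for this sketch. */
            fault_page_pool = mempool_create_page_pool(256, 0);
            return fault_page_pool ? 0 : -ENOMEM;
    }

    static struct page *hypothetical_alloc_fault_page(void)
    {
            /* GFP_NOWAIT never recurses into reclaim; if the page allocator
             * cannot satisfy it, mempool falls back to the reserved
             * elements.  NULL here means the pool was sized too small. */
            return mempool_alloc(fault_page_pool, GFP_NOWAIT);
    }

How the pool gets refilled without ever blocking on a fence is where the
hard part of this approach would start.
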

Re: [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-26 Thread Felix Kuehling
Am 2021-01-21 um 2:40 p.m. schrieb Daniel Vetter:
> Recently there was a fairly long thread about recoverable hardware page
> faults, how they can deadlock, and what to do about that.
>
> While the discussion is still fresh I figured good time to try and
> document the conclusions a bit.
>
> References: 
> https://lore.kernel.org/dri-devel/20210107030127.20393-1-felix.kuehl...@amd.com/
> Cc: Maarten Lankhorst 
> Cc: Thomas Hellström 
> Cc: "Christian König" 
> Cc: Jerome Glisse 
> Cc: Felix Kuehling 
> Signed-off-by: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> --
> I'll be away next week, but figured I'll type this up quickly for some
> comments and to check whether I got this all roughly right.
>
> Critique very much wanted on this, so that we can make sure hw which
> can't preempt (with pagefaults pending) like gfx10 has a clear path to
> support page faults in upstream. So anything I missed, got wrong or
> like that would be good.
> -Daniel
> ---
>  Documentation/driver-api/dma-buf.rst | 66 
>  1 file changed, 66 insertions(+)
>
> diff --git a/Documentation/driver-api/dma-buf.rst 
> b/Documentation/driver-api/dma-buf.rst
> index a2133d69872c..e924c1e4f7a3 100644
> --- a/Documentation/driver-api/dma-buf.rst
> +++ b/Documentation/driver-api/dma-buf.rst
> @@ -257,3 +257,69 @@ fences in the kernel. This means:
>userspace is allowed to use userspace fencing or long running compute
>workloads. This also means no implicit fencing for shared buffers in these
>cases.
> +
> +Recoverable Hardware Page Faults Implications
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Modern hardware supports recoverable page faults, which has a lot of
> +implications for DMA fences.
> +
> +First, a pending page fault obviously holds up the work that's running on the
> +accelerator and a memory allocation is usually required to resolve the fault.
> +But memory allocations are not allowed to gate completion of DMA fences, 
> which
> +means any workload using recoverable page faults cannot use DMA fences for
> +synchronization. Synchronization fences controlled by userspace must be used
> +instead.
> +
> +On GPUs this poses a problem, because current desktop compositor protocols on
> +Linux rely on DMA fences, which means without an entirely new userspace stack
> +built on top of userspace fences, they cannot benefit from recoverable page
> +faults. The exception is when page faults are only used as migration hints 
> and
> +never to on-demand fill a memory request. For now this means recoverable page
> +faults on GPUs are limited to pure compute workloads.
> +
> +Furthermore GPUs usually have shared resources between the 3D rendering and
> +compute side, like compute units or command submission engines. If both a 3D
> +job with a DMA fence and a compute workload using recoverable page faults are
> +pending they could deadlock:
> +
> +- The 3D workload might need to wait for the compute job to finish and 
> release
> +  hardware resources first.
> +
> +- The compute workload might be stuck in a page fault, because the memory
> +  allocation is waiting for the DMA fence of the 3D workload to complete.
> +
> +There are a few ways to prevent this problem:
> +
> +- Compute workloads can always be preempted, even when a page fault is 
> pending
> +  and not yet repaired. Not all hardware supports this.
> +
> +- DMA fence workloads and workloads which need page fault handling have
> +  independent hardware resources to guarantee forward progress. This could be
> +  achieved e.g. through dedicated engines and minimal compute unit
> +  reservations for DMA fence workloads.
> +
> +- The reservation approach could be further refined by only reserving the
> +  hardware resources for DMA fence workloads when they are in-flight. This 
> must
> +  cover the time from when the DMA fence is visible to other threads up to
> +  the moment when the fence is completed through dma_fence_signal().
> +
> +- As a last resort, if the hardware provides no useful reservation mechanics,
> +  all workloads must be flushed from the GPU when switching between jobs
> +  requiring DMA fences or jobs requiring page fault handling: This means all 
> DMA
> +  fences must complete before a compute job with page fault handling can be
> +  inserted into the scheduler queue. And vice versa, before a DMA fence can 
> be
> +  made visible anywhere in the system, all compute workloads must be 
> preempted
> +  to guarantee all pending GPU page faults are flushed.

I thought of another possible workaround:

  * Partition the memory. Servicing of page faults will use a separate
memory pool that can always be allocated from without waiting for
fences. This includes memory for page tables and memory for
migrating data to. You may steal memory from other processes that
can page fault, so no fence waiting is necessary. Being able to
steal memory at 
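
A minimal sketch of what the allocator side of that partitioning could look
like, assuming a hypothetical driver (the my_* names are made up; only the
genalloc calls are real kernel API). The point is just that the fault path
allocates from a carve-out that is never handed to fence-bound work, so it can
never end up waiting on a DMA fence:

#include <linux/genalloc.h>

#define MY_FAULT_POOL_SIZE      (256 << 20)     /* reserved once at driver load */

static struct gen_pool *my_fault_pool;

static int my_fault_pool_init(unsigned long vram_carveout_base)
{
        my_fault_pool = gen_pool_create(PAGE_SHIFT, -1);
        if (!my_fault_pool)
                return -ENOMEM;
        /* This region is never given to GEM/TTM, so nothing that signals
         * DMA fences ever owns memory from it. */
        return gen_pool_add(my_fault_pool, vram_carveout_base,
                            MY_FAULT_POOL_SIZE, -1);
}

/* Page fault worker: page tables and migration targets come from here and
 * therefore never have to wait for a fence before memory frees up. */
static unsigned long my_fault_alloc(size_t size)
{
        return gen_pool_alloc(my_fault_pool, size);
}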

Re: [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-22 Thread Felix Kuehling
On 2021-01-21 at 2:40 p.m., Daniel Vetter wrote:
> Recently there was a fairly long thread about recoverable hardware page
> faults, how they can deadlock, and what to do about that.
>
> While the discussion is still fresh I figured good time to try and
> document the conclusions a bit.
Thank you Daniel. This is a good summary of our discussion. It's also an
external reference I can point our HW engineers at when they're
wondering about what "real software" does.

Regards,
  Felix


>
> References: 
> https://lore.kernel.org/dri-devel/20210107030127.20393-1-felix.kuehl...@amd.com/
> Cc: Maarten Lankhorst 
> Cc: Thomas Hellström 
> Cc: "Christian König" 
> Cc: Jerome Glisse 
> Cc: Felix Kuehling 
> Signed-off-by: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> --
> I'll be away next week, but figured I'll type this up quickly for some
> comments and to check whether I got this all roughly right.
>
> Critique very much wanted on this, so that we can make sure hw which
> can't preempt (with pagefaults pending) like gfx10 has a clear path to
> support page faults in upstream. So anything I missed, got wrong or
> like that would be good.
> -Daniel
> ---
>  Documentation/driver-api/dma-buf.rst | 66 
>  1 file changed, 66 insertions(+)
>
> diff --git a/Documentation/driver-api/dma-buf.rst 
> b/Documentation/driver-api/dma-buf.rst
> index a2133d69872c..e924c1e4f7a3 100644
> --- a/Documentation/driver-api/dma-buf.rst
> +++ b/Documentation/driver-api/dma-buf.rst
> @@ -257,3 +257,69 @@ fences in the kernel. This means:
>userspace is allowed to use userspace fencing or long running compute
>workloads. This also means no implicit fencing for shared buffers in these
>cases.
> +
> +Recoverable Hardware Page Faults Implications
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Modern hardware supports recoverable page faults, which has a lot of
> +implications for DMA fences.
> +
> +First, a pending page fault obviously holds up the work that's running on the
> +accelerator and a memory allocation is usually required to resolve the fault.
> +But memory allocations are not allowed to gate completion of DMA fences, 
> which
> +means any workload using recoverable page faults cannot use DMA fences for
> +synchronization. Synchronization fences controlled by userspace must be used
> +instead.
> +
> +On GPUs this poses a problem, because current desktop compositor protocols on
> +Linux rely on DMA fences, which means without an entirely new userspace stack
> +built on top of userspace fences, they cannot benefit from recoverable page
> +faults. The exception is when page faults are only used as migration hints 
> and
> +never to on-demand fill a memory request. For now this means recoverable page
> +faults on GPUs are limited to pure compute workloads.
> +
> +Furthermore GPUs usually have shared resources between the 3D rendering and
> +compute side, like compute units or command submission engines. If both a 3D
> +job with a DMA fence and a compute workload using recoverable page faults are
> +pending they could deadlock:
> +
> +- The 3D workload might need to wait for the compute job to finish and 
> release
> +  hardware resources first.
> +
> +- The compute workload might be stuck in a page fault, because the memory
> +  allocation is waiting for the DMA fence of the 3D workload to complete.
> +
> +There are a few ways to prevent this problem:
> +
> +- Compute workloads can always be preempted, even when a page fault is 
> pending
> +  and not yet repaired. Not all hardware supports this.
> +
> +- DMA fence workloads and workloads which need page fault handling have
> +  independent hardware resources to guarantee forward progress. This could be
> +  achieved e.g. through dedicated engines and minimal compute unit
> +  reservations for DMA fence workloads.
> +
> +- The reservation approach could be further refined by only reserving the
> +  hardware resources for DMA fence workloads when they are in-flight. This 
> must
> +  cover the time from when the DMA fence is visible to other threads up to
> +  the moment when the fence is completed through dma_fence_signal().
> +
> +- As a last resort, if the hardware provides no useful reservation mechanics,
> +  all workloads must be flushed from the GPU when switching between jobs
> +  requiring DMA fences or jobs requiring page fault handling: This means all 
> DMA
> +  fences must complete before a compute job with page fault handling can be
> +  inserted into the scheduler queue. And vice versa, before a DMA fence can 
> be
> +  made visible anywhere in the system, all compute workloads must be 
> preempted
> +  to guarantee all pending GPU page faults are flushed.
> +
> +Note that workloads that run on independent hardware like copy engines or 
> other
> +GPUs do not have any impact. This allows us to keep using DMA fences 
> internally
> +in the kernel even for 

Re: [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-22 Thread Daniel Vetter
On Fri, Jan 22, 2021 at 2:24 PM Christian König
 wrote:
>
> > On 22.01.21 at 14:18, Daniel Vetter wrote:
> > On Fri, Jan 22, 2021 at 2:10 PM Christian König
> >  wrote:
> >> On 21.01.21 at 20:40, Daniel Vetter wrote:
> >>> Recently there was a fairly long thread about recoverable hardware page
> >>> faults, how they can deadlock, and what to do about that.
> >>>
> >>> While the discussion is still fresh I figured good time to try and
> >>> document the conclusions a bit.
> >>>
> >>> References: 
> >>> https://lore.kernel.org/dri-devel/20210107030127.20393-1-Felix.Kuehling@amd.com/
> >>> Cc: Maarten Lankhorst 
> >>> Cc: Thomas Hellström 
> >>> Cc: "Christian König" 
> >>> Cc: Jerome Glisse 
> >>> Cc: Felix Kuehling 
> >>> Signed-off-by: Daniel Vetter 
> >>> Cc: Sumit Semwal 
> >>> Cc: linux-me...@vger.kernel.org
> >>> Cc: linaro-mm-...@lists.linaro.org
> >>> --
> >>> I'll be away next week, but figured I'll type this up quickly for some
> >>> comments and to check whether I got this all roughly right.
> >>>
> >>> Critique very much wanted on this, so that we can make sure hw which
> >>> can't preempt (with pagefaults pending) like gfx10 has a clear path to
>
> One more comment here: You should probably mention that gfx10 is
> referring to AMD GPUs.

Oh that was just the single-patch cover letter. I'll drop it for the
next round since that's not going to be part of the real patch.

> >>> support page faults in upstream. So anything I missed, got wrong or
> >>> like that would be good.
> >>> -Daniel
> >>> ---
> >>>Documentation/driver-api/dma-buf.rst | 66 
> >>>1 file changed, 66 insertions(+)
> >>>
> >>> diff --git a/Documentation/driver-api/dma-buf.rst 
> >>> b/Documentation/driver-api/dma-buf.rst
> >>> index a2133d69872c..e924c1e4f7a3 100644
> >>> --- a/Documentation/driver-api/dma-buf.rst
> >>> +++ b/Documentation/driver-api/dma-buf.rst
> >>> @@ -257,3 +257,69 @@ fences in the kernel. This means:
> >>>  userspace is allowed to use userspace fencing or long running compute
> >>>  workloads. This also means no implicit fencing for shared buffers in 
> >>> these
> >>>  cases.
> >>> +
> >>> +Recoverable Hardware Page Faults Implications
> >>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >>> +
> >>> +Modern hardware supports recoverable page faults, which has a lot of
> >>> +implications for DMA fences.
> >>> +
> >>> +First, a pending page fault obviously holds up the work that's running 
> >>> on the
> >>> +accelerator and a memory allocation is usually required to resolve the 
> >>> fault.
> >>> +But memory allocations are not allowed to gate completion of DMA fences, 
> >>> which
> >>> +means any workload using recoverable page faults cannot use DMA fences 
> >>> for
> >>> +synchronization. Synchronization fences controlled by userspace must be 
> >>> used
> >>> +instead.
> >>> +
> >>> +On GPUs this poses a problem, because current desktop compositor 
> >>> protocols on
> >>> +Linux rely on DMA fences, which means without an entirely new userspace 
> >>> stack
> >>> +built on top of userspace fences, they cannot benefit from recoverable 
> >>> page
> >>> +faults. The exception is when page faults are only used as migration 
> >>> hints and
> >>> +never to on-demand fill a memory request. For now this means recoverable 
> >>> page
> >>> +faults on GPUs are limited to pure compute workloads.
> >>> +
> >>> +Furthermore GPUs usually have shared resources between the 3D rendering 
> >>> and
> >>> +compute side, like compute units or command submission engines. If both 
> >>> a 3D
> >>> +job with a DMA fence and a compute workload using recoverable page 
> >>> faults are
> >>> +pending they could deadlock:
> >>> +
> >>> +- The 3D workload might need to wait for the compute job to finish and 
> >>> release
> >>> +  hardware resources first.
> >>> +
> >>> +- The compute workload might be stuck in a page fault, because the memory
> >>> +  allocation is waiting for the DMA fence of the 3D workload to complete.
> >>> +
> >>> +There are a few ways to prevent this problem:
> >>> +
> >>> +- Compute workloads can always be preempted, even when a page fault is 
> >>> pending
> >>> +  and not yet repaired. Not all hardware supports this.
> >>> +
> >>> +- DMA fence workloads and workloads which need page fault handling have
> >>> +  independent hardware resources to guarantee forward progress. This 
> >>> could be
> >>> +  achieved e.g. through dedicated engines and minimal compute 
> >>> unit
> >>> +  reservations for DMA fence workloads.
> >>> +
> >>> +- The reservation approach could be further refined 

Re: [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-22 Thread Christian König

On 22.01.21 at 14:18, Daniel Vetter wrote:

On Fri, Jan 22, 2021 at 2:10 PM Christian König
 wrote:

On 21.01.21 at 20:40, Daniel Vetter wrote:

Recently there was a fairly long thread about recoverable hardware page
faults, how they can deadlock, and what to do about that.

While the discussion is still fresh I figured good time to try and
document the conclusions a bit.

References: 
https://lore.kernel.org/dri-devel/20210107030127.20393-1-Felix.Kuehling@amd.com/
Cc: Maarten Lankhorst 
Cc: Thomas Hellström 
Cc: "Christian König" 
Cc: Jerome Glisse 
Cc: Felix Kuehling 
Signed-off-by: Daniel Vetter 
Cc: Sumit Semwal 
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
--
I'll be away next week, but figured I'll type this up quickly for some
comments and to check whether I got this all roughly right.

Critique very much wanted on this, so that we can make sure hw which
can't preempt (with pagefaults pending) like gfx10 has a clear path to


One more comment here: You should probably mention that gfx10 is 
referring to AMD GPUs.



support page faults in upstream. So anything I missed, got wrong or
like that would be good.
-Daniel
---
   Documentation/driver-api/dma-buf.rst | 66 
   1 file changed, 66 insertions(+)

diff --git a/Documentation/driver-api/dma-buf.rst 
b/Documentation/driver-api/dma-buf.rst
index a2133d69872c..e924c1e4f7a3 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -257,3 +257,69 @@ fences in the kernel. This means:
 userspace is allowed to use userspace fencing or long running compute
 workloads. This also means no implicit fencing for shared buffers in these
 cases.
+
+Recoverable Hardware Page Faults Implications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Modern hardware supports recoverable page faults, which has a lot of
+implications for DMA fences.
+
+First, a pending page fault obviously holds up the work that's running on the
+accelerator and a memory allocation is usually required to resolve the fault.
+But memory allocations are not allowed to gate completion of DMA fences, which
+means any workload using recoverable page faults cannot use DMA fences for
+synchronization. Synchronization fences controlled by userspace must be used
+instead.
+
+On GPUs this poses a problem, because current desktop compositor protocols on
+Linux rely on DMA fences, which means without an entirely new userspace stack
+built on top of userspace fences, they cannot benefit from recoverable page
+faults. The exception is when page faults are only used as migration hints and
+never to on-demand fill a memory request. For now this means recoverable page
+faults on GPUs are limited to pure compute workloads.
+
+Furthermore GPUs usually have shared resources between the 3D rendering and
+compute side, like compute units or command submission engines. If both a 3D
+job with a DMA fence and a compute workload using recoverable page faults are
+pending they could deadlock:
+
+- The 3D workload might need to wait for the compute job to finish and release
+  hardware resources first.
+
+- The compute workload might be stuck in a page fault, because the memory
+  allocation is waiting for the DMA fence of the 3D workload to complete.
+
+There are a few ways to prevent this problem:
+
+- Compute workloads can always be preempted, even when a page fault is pending
+  and not yet repaired. Not all hardware supports this.
+
+- DMA fence workloads and workloads which need page fault handling have
+  independent hardware resources to guarantee forward progress. This could be
+  achieved e.g. through dedicated engines and minimal compute unit
+  reservations for DMA fence workloads.
+
+- The reservation approach could be further refined by only reserving the
+  hardware resources for DMA fence workloads when they are in-flight. This must
+  cover the time from when the DMA fence is visible to other threads up to
+  the moment when the fence is completed through dma_fence_signal().

Up till here it makes perfect sense, but what should this paragraph mean?

Instead of reserving a few CUs at driver load to guarantee that
dma-fence workloads can always complete, we only do the reservation
while a problematic dma_fence is in the system and not yet
signalled. Of course that approach needs to be very careful, to really
make sure you can't ever deadlock by accident because the dynamic
reservation at runtime was done a notch too late.

This allows us to use all CUs on pure compute workloads (on servers,
and on desktop as long as nothing gets 
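
A sketch of that dynamic reservation, assuming hypothetical
my_hw_reserve_cus()/my_hw_release_cus() hooks into the hardware partitioning
(they do not exist in any real driver; only the dma-fence calls are real API).
The reservation is taken before the fence becomes visible and dropped from the
fence's signal callback, so it covers the whole in-flight window:

#include <linux/atomic.h>
#include <linux/dma-fence.h>

void my_hw_reserve_cus(void);   /* hypothetical, may preempt compute */
void my_hw_release_cus(void);   /* hypothetical */

static atomic_t my_fences_in_flight = ATOMIC_INIT(0);

static void my_fence_retired(struct dma_fence *fence, struct dma_fence_cb *cb)
{
        /* Last in-flight kernel fence has signalled: give the CUs back. */
        if (atomic_dec_return(&my_fences_in_flight) == 0)
                my_hw_release_cus();
}

/* Must run before the fence is published anywhere (dma_resv, sync_file, ...).
 * Serialisation of reserve vs. release is omitted for brevity. */
static void my_fence_published(struct dma_fence *fence, struct dma_fence_cb *cb)
{
        if (atomic_inc_return(&my_fences_in_flight) == 1)
                my_hw_reserve_cus();

        /* If the fence already signalled, drop the reservation right away. */
        if (dma_fence_add_callback(fence, cb, my_fence_retired))
                my_fence_retired(fence, cb);
}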

Re: [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-22 Thread Daniel Vetter
On Fri, Jan 22, 2021 at 2:10 PM Christian König
 wrote:
>
> > On 21.01.21 at 20:40, Daniel Vetter wrote:
> > Recently there was a fairly long thread about recoverable hardware page
> > faults, how they can deadlock, and what to do about that.
> >
> > While the discussion is still fresh I figured good time to try and
> > document the conclusions a bit.
> >
> > References: 
> > https://lore.kernel.org/dri-devel/20210107030127.20393-1-Felix.Kuehling@amd.com/
> > Cc: Maarten Lankhorst 
> > Cc: Thomas Hellström 
> > Cc: "Christian König" 
> > Cc: Jerome Glisse 
> > Cc: Felix Kuehling 
> > Signed-off-by: Daniel Vetter 
> > Cc: Sumit Semwal 
> > Cc: linux-me...@vger.kernel.org
> > Cc: linaro-mm-...@lists.linaro.org
> > --
> > I'll be away next week, but figured I'll type this up quickly for some
> > comments and to check whether I got this all roughly right.
> >
> > Critique very much wanted on this, so that we can make sure hw which
> > can't preempt (with pagefaults pending) like gfx10 has a clear path to
> > support page faults in upstream. So anything I missed, got wrong or
> > like that would be good.
> > -Daniel
> > ---
> >   Documentation/driver-api/dma-buf.rst | 66 
> >   1 file changed, 66 insertions(+)
> >
> > diff --git a/Documentation/driver-api/dma-buf.rst 
> > b/Documentation/driver-api/dma-buf.rst
> > index a2133d69872c..e924c1e4f7a3 100644
> > --- a/Documentation/driver-api/dma-buf.rst
> > +++ b/Documentation/driver-api/dma-buf.rst
> > @@ -257,3 +257,69 @@ fences in the kernel. This means:
> > userspace is allowed to use userspace fencing or long running compute
> > workloads. This also means no implicit fencing for shared buffers in 
> > these
> > cases.
> > +
> > +Recoverable Hardware Page Faults Implications
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Modern hardware supports recoverable page faults, which has a lot of
> > +implications for DMA fences.
> > +
> > +First, a pending page fault obviously holds up the work that's running on 
> > the
> > +accelerator and a memory allocation is usually required to resolve the 
> > fault.
> > +But memory allocations are not allowed to gate completion of DMA fences, 
> > which
> > +means any workload using recoverable page faults cannot use DMA fences for
> > +synchronization. Synchronization fences controlled by userspace must be 
> > used
> > +instead.
> > +
> > +On GPUs this poses a problem, because current desktop compositor protocols 
> > on
> > +Linux rely on DMA fences, which means without an entirely new userspace 
> > stack
> > +built on top of userspace fences, they cannot benefit from recoverable page
> > +faults. The exception is when page faults are only used as migration hints 
> > and
> > +never to on-demand fill a memory request. For now this means recoverable 
> > page
> > +faults on GPUs are limited to pure compute workloads.
> > +
> > +Furthermore GPUs usually have shared resources between the 3D rendering and
> > +compute side, like compute units or command submission engines. If both a 
> > 3D
> > +job with a DMA fence and a compute workload using recoverable page faults 
> > are
> > +pending they could deadlock:
> > +
> > +- The 3D workload might need to wait for the compute job to finish and 
> > release
> > +  hardware resources first.
> > +
> > +- The compute workload might be stuck in a page fault, because the memory
> > +  allocation is waiting for the DMA fence of the 3D workload to complete.
> > +
> > +There are a few ways to prevent this problem:
> > +
> > +- Compute workloads can always be preempted, even when a page fault is 
> > pending
> > +  and not yet repaired. Not all hardware supports this.
> > +
> > +- DMA fence workloads and workloads which need page fault handling have
> > +  independent hardware resources to guarantee forward progress. This could 
> > be
> > +  achieved e.g. through dedicated engines and minimal compute unit
> > +  reservations for DMA fence workloads.
> > +
>
> > +- The reservation approach could be further refined by only reserving the
> > +  hardware resources for DMA fence workloads when they are in-flight. This 
> > must
> > +  cover the time from when the DMA fence is visible to other threads up to
> > +  the moment when the fence is completed through dma_fence_signal().
>
> Up till here it makes perfect sense, but what should this paragraph mean?

Instead of reserving a few CUs at driver load to guarantee that
dma-fence workloads can always complete, we only do the reservation
while a problematic dma_fence is in the system and not yet
signalled. Of course that 

Re: [PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-22 Thread Christian König

On 21.01.21 at 20:40, Daniel Vetter wrote:

Recently there was a fairly long thread about recoverable hardware page
faults, how they can deadlock, and what to do about that.

While the discussion is still fresh I figured good time to try and
document the conclusions a bit.

References: 
https://lore.kernel.org/dri-devel/20210107030127.20393-1-Felix.Kuehling@amd.com/
Cc: Maarten Lankhorst 
Cc: Thomas Hellström 
Cc: "Christian König" 
Cc: Jerome Glisse 
Cc: Felix Kuehling 
Signed-off-by: Daniel Vetter 
Cc: Sumit Semwal 
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
--
I'll be away next week, but figured I'll type this up quickly for some
comments and to check whether I got this all roughly right.

Critique very much wanted on this, so that we can make sure hw which
can't preempt (with pagefaults pending) like gfx10 has a clear path to
support page faults in upstream. So anything I missed, got wrong or
like that would be good.
-Daniel
---
  Documentation/driver-api/dma-buf.rst | 66 
  1 file changed, 66 insertions(+)

diff --git a/Documentation/driver-api/dma-buf.rst 
b/Documentation/driver-api/dma-buf.rst
index a2133d69872c..e924c1e4f7a3 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -257,3 +257,69 @@ fences in the kernel. This means:
userspace is allowed to use userspace fencing or long running compute
workloads. This also means no implicit fencing for shared buffers in these
cases.
+
+Recoverable Hardware Page Faults Implications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Modern hardware supports recoverable page faults, which has a lot of
+implications for DMA fences.
+
+First, a pending page fault obviously holds up the work that's running on the
+accelerator and a memory allocation is usually required to resolve the fault.
+But memory allocations are not allowed to gate completion of DMA fences, which
+means any workload using recoverable page faults cannot use DMA fences for
+synchronization. Synchronization fences controlled by userspace must be used
+instead.
+
+On GPUs this poses a problem, because current desktop compositor protocols on
+Linux rely on DMA fences, which means without an entirely new userspace stack
+built on top of userspace fences, they cannot benefit from recoverable page
+faults. The exception is when page faults are only used as migration hints and
+never to on-demand fill a memory request. For now this means recoverable page
+faults on GPUs are limited to pure compute workloads.
+
+Furthermore GPUs usually have shared resources between the 3D rendering and
+compute side, like compute units or command submission engines. If both a 3D
+job with a DMA fence and a compute workload using recoverable page faults are
+pending they could deadlock:
+
+- The 3D workload might need to wait for the compute job to finish and release
+  hardware resources first.
+
+- The compute workload might be stuck in a page fault, because the memory
+  allocation is waiting for the DMA fence of the 3D workload to complete.
+
+There are a few ways to prevent this problem:
+
+- Compute workloads can always be preempted, even when a page fault is pending
+  and not yet repaired. Not all hardware supports this.
+
+- DMA fence workloads and workloads which need page fault handling have
+  independent hardware resources to guarantee forward progress. This could be
+  achieved through e.g. through dedicated engines and minimal compute unit
+  reservations for DMA fence workloads.
+



+- The reservation approach could be further refined by only reserving the
+  hardware resources for DMA fence workloads when they are in-flight. This must
+  cover the time from when the DMA fence is visible to other threads up to
+  the moment when the fence is completed through dma_fence_signal().


Up till here it makes perfect sense, but what should this paragraph mean?


+
+- As a last resort, if the hardware provides no useful reservation mechanics,
+  all workloads must be flushed from the GPU when switching between jobs
+  requiring DMA fences or jobs requiring page fault handling: This means all 
DMA
+  fences must complete before a compute job with page fault handling can be
+  inserted into the scheduler queue. And vice versa, before a DMA fence can be
+  made visible anywhere in the system, all compute workloads must be preempted
+  to guarantee all pending GPU page faults are flushed.
+
+Note that workloads that run on independent hardware like copy engines or other
+GPUs do not have any impact. This allows us to keep using DMA fences internally

[PATCH] RFC: dma-fence: Document recoverable page fault implications

2021-01-21 Thread Daniel Vetter
Recently there was a fairly long thread about recoverable hardware page
faults, how they can deadlock, and what to do about that.

While the discussion is still fresh I figured good time to try and
document the conclusions a bit.

References: 
https://lore.kernel.org/dri-devel/20210107030127.20393-1-felix.kuehl...@amd.com/
Cc: Maarten Lankhorst 
Cc: Thomas Hellström 
Cc: "Christian König" 
Cc: Jerome Glisse 
Cc: Felix Kuehling 
Signed-off-by: Daniel Vetter 
Cc: Sumit Semwal 
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
--
I'll be away next week, but figured I'll type this up quickly for some
comments and to check whether I got this all roughly right.

Critique very much wanted on this, so that we can make sure hw which
can't preempt (with pagefaults pending) like gfx10 has a clear path to
support page faults in upstream. So anything I missed, got wrong or
like that would be good.
-Daniel
---
 Documentation/driver-api/dma-buf.rst | 66 
 1 file changed, 66 insertions(+)

diff --git a/Documentation/driver-api/dma-buf.rst 
b/Documentation/driver-api/dma-buf.rst
index a2133d69872c..e924c1e4f7a3 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -257,3 +257,69 @@ fences in the kernel. This means:
   userspace is allowed to use userspace fencing or long running compute
   workloads. This also means no implicit fencing for shared buffers in these
   cases.
+
+Recoverable Hardware Page Faults Implications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Modern hardware supports recoverable page faults, which has a lot of
+implications for DMA fences.
+
+First, a pending page fault obviously holds up the work that's running on the
+accelerator and a memory allocation is usually required to resolve the fault.
+But memory allocations are not allowed to gate completion of DMA fences, which
+means any workload using recoverable page faults cannot use DMA fences for
+synchronization. Synchronization fences controlled by userspace must be used
+instead.
+
+On GPUs this poses a problem, because current desktop compositor protocols on
+Linux rely on DMA fences, which means without an entirely new userspace stack
+built on top of userspace fences, they cannot benefit from recoverable page
+faults. The exception is when page faults are only used as migration hints and
+never to on-demand fill a memory request. For now this means recoverable page
+faults on GPUs are limited to pure compute workloads.
+
+Furthermore GPUs usually have shared resources between the 3D rendering and
+compute side, like compute units or command submission engines. If both a 3D
+job with a DMA fence and a compute workload using recoverable page faults are
+pending they could deadlock:
+
+- The 3D workload might need to wait for the compute job to finish and release
+  hardware resources first.
+
+- The compute workload might be stuck in a page fault, because the memory
+  allocation is waiting for the DMA fence of the 3D workload to complete.
+
+There are a few ways to prevent this problem:
+
+- Compute workloads can always be preempted, even when a page fault is pending
+  and not yet repaired. Not all hardware supports this.
+
+- DMA fence workloads and workloads which need page fault handling have
+  independent hardware resources to guarantee forward progress. This could be
+  achieved e.g. through dedicated engines and minimal compute unit
+  reservations for DMA fence workloads.
+
+- The reservation approach could be further refined by only reserving the
+  hardware resources for DMA fence workloads when they are in-flight. This must
+  cover the time from when the DMA fence is visible to other threads up to
+  the moment when the fence is completed through dma_fence_signal().
+
+- As a last resort, if the hardware provides no useful reservation mechanics,
+  all workloads must be flushed from the GPU when switching between jobs
+  requiring DMA fences or jobs requiring page fault handling: This means all 
DMA
+  fences must complete before a compute job with page fault handling can be
+  inserted into the scheduler queue. And vice versa, before a DMA fence can be
+  made visible anywhere in the system, all compute workloads must be preempted
+  to guarantee all pending GPU page faults are flushed.
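
A rough sketch of what such a mode switch could look like at the scheduler
level. The my_* helpers are hypothetical placeholders for "wait for every
outstanding DMA fence" and "preempt every page-faulting compute queue"; this
is not an existing interface, just an illustration of the switching rule:

#include <linux/mutex.h>

int my_wait_all_dma_fences(void);               /* hypothetical */
int my_preempt_all_compute_queues(void);        /* hypothetical */

enum my_gpu_mode {
        MY_MODE_DMA_FENCE,      /* jobs that signal kernel DMA fences  */
        MY_MODE_PAGE_FAULT,     /* compute jobs relying on page faults */
};

static enum my_gpu_mode my_mode = MY_MODE_DMA_FENCE;
static DEFINE_MUTEX(my_mode_lock);

static int my_switch_mode(enum my_gpu_mode new_mode)
{
        int ret = 0;

        mutex_lock(&my_mode_lock);
        if (my_mode != new_mode) {
                if (new_mode == MY_MODE_PAGE_FAULT)
                        /* all previously published DMA fences must have signalled */
                        ret = my_wait_all_dma_fences();
                else
                        /* flush pending page faults by draining compute queues */
                        ret = my_preempt_all_compute_queues();
                if (!ret)
                        my_mode = new_mode;
        }
        mutex_unlock(&my_mode_lock);
        return ret;
}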
+
+Note that workloads that run on independent hardware like copy engines or other
+GPUs do not have any impact. This allows us to keep using DMA fences internally
+in the kernel even for resolving hardware page faults, e.g. by using copy
+engines to clear or copy memory needed to resolve the page fault.
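
As an illustration of that last point, a fault handler may block on a DMA
fence as long as the fence comes from independent hardware. Sketch only, with
a hypothetical my_copy_engine_clear() that submits a clear job on a dedicated
copy engine sharing nothing with the faulting compute queues:

#include <linux/dma-fence.h>
#include <linux/err.h>

struct dma_fence *my_copy_engine_clear(u64 dst, size_t size);  /* hypothetical */

static int my_clear_fault_backing(u64 dst, size_t size)
{
        struct dma_fence *fence;

        fence = my_copy_engine_clear(dst, size);
        if (IS_ERR(fence))
                return PTR_ERR(fence);

        /* Waiting here is fine: this fence only depends on the independent
         * copy engine, never on the compute job stuck in the fault. */
        dma_fence_wait(fence, false);
        dma_fence_put(fence);

        return 0;
}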
+
+In some ways this page fault problem is a special case of the `Infinite DMA
+Fences` discussions: Infinite fences from compute workloads are allowed to
+depend on DMA fences, but not the other way around. And not even the page fault
+problem is new, because some other CPU thread in userspace might
+hit a page fault which holds up a userspace fence -