Re: [PATCH v3 0/2] drm: Add GPU reset sysfs

2023-01-05 Thread Daniel Vetter
On Thu, 8 Dec 2022 at 05:54, Alex Deucher wrote:
>
> On Wed, Nov 30, 2022 at 6:11 AM Daniel Vetter wrote:
> >
> > On Fri, Nov 25, 2022 at 02:52:01PM -0300, André Almeida wrote:
> > > This patchset adds a udev event for DRM device resets.
> > >
> > > Userspace apps can trigger GPU resets by misuse of graphical APIs or
> > > driver bugs. Either way, the GPU reset might leave the system in a
> > > broken state[1] that might be recovered if the user has access to a
> > > tty or a remote shell. Arguably, this recovery could happen
> > > automatically by the system itself, and that is the goal of this
> > > patchset.
> > >
> > > For debugging and reporting purposes, device coredump support was
> > > already added for amdgpu[2], but it's not suitable for programmatic
> > > usage like this one, given that the uAPI is not stable and needs
> > > parsing.
> > >
> > > GL/VK is out of scope for this use, given that we are dealing with
> > > device resets regardless of API.
> > >
> > > A basic userspace daemon is provided at [3] showing how the interface
> > > is used to recover from resets.
> > >
> > > [1] A search for "reset" in the DRM/AMD issue tracker shows reports
> > > of resets making the system unusable:
> > > https://gitlab.freedesktop.org/drm/amd/-/issues/?search=reset
> > >
> > > [2] https://lore.kernel.org/amd-gfx/20220602081538.1652842-2-amaranath.somalapu...@amd.com/
> > >
> > > [3] https://gitlab.freedesktop.org/andrealmeid/gpu-resetd
> > >
> > > v2: https://lore.kernel.org/dri-devel/20220308180403.75566-1-contactshashanksha...@gmail.com/
> > >
> > > André Almeida (1):
> > >   drm/amdgpu: Add work function for GPU reset event
> > >
> > > Shashank Sharma (1):
> > >   drm: Add GPU reset sysfs event
> >
> > This seems a bit too amd-specific, and a bit too much like an ad-hoc
> > stopgap.
> >
> > On the amd-specific piece:
> >
> > - amd's gpus suck the most for gpu hangs, because aside from the shader
> >   unblock, there's only device reset, which thrashes vram and display
> >   and absolutely everything. Which is terrible. Everyone else has had
> >   engine-only reset for years (i.e. it doesn't thrash display or vram),
> >   and very often even just context reset (i.e. unless the driver is
> >   busted somehow or there's a hw bug, malicious userspace will _only_
> >   ever impact itself).
> >
> > - robustness extensions for gl/vk already have very clear specifications
> >   of all cases of reset, and this work here just ignores that. Yes, on
> >   amd you only have device reset, but this is drm infra, so you need to
> >   be able to cope with ctx reset or a reset which only affected a
> >   limited set of contexts. If this is for compute and compute apis lack
> >   robustness extensions, then those apis need to be fixed to fill that
> >   gap.
> >
> > - the entire daemon thing feels a bit like overkill and I'm not sure why
> >   it exists. I think for a start it would be much simpler if we just had
> >   a (per-device, maybe) sysfs knob to enable automatic killing of
> >   processes that hit a reset and don't have arb robustness enabled (for
> >   the gl case; for the vk case the assumption is that _every_ app
> >   supports VK_ERROR_DEVICE_LOST and can recover).
>
> Thinking about this a bit more, I think there are useful cases for the
> GPU reset event and a daemon.  When I refer to a daemon here, it could
> be a standalone thing or integrated into the desktop manager, logind,
> or whatever.
> 1. For APIs that don't have robustness support (e.g., video
> encode/decode APIs).  I could go either way on this one, since it
> probably makes sense to just kill the app if there is no robustness
> mechanism in the API.

I think transcode might also be a case where the userspace driver can
recover, at least on the decode side. But that would most likely
require some extension to make it clear to the app what's going on.

Or people could just use vk video and be done with it; reset support comes built-in there :-)
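
For reference, the vk contract is simple enough to sketch from the app side; a minimal sketch, assuming the app can tear down and rebuild all device-level state (the two helpers are hypothetical app code, not Vulkan API):

/* Minimal sketch of VK_ERROR_DEVICE_LOST handling. On device loss every
 * device-level object must be destroyed and recreated; the VkInstance
 * itself survives. recreate_device() and rebuild_resources() are
 * hypothetical app helpers, not Vulkan API. */
#include <vulkan/vulkan.h>

extern void recreate_device(void);    /* hypothetical app helper */
extern void rebuild_resources(void);  /* hypothetical app helper */

VkResult submit_with_recovery(VkQueue queue, const VkSubmitInfo *info,
                              VkFence fence)
{
    VkResult res = vkQueueSubmit(queue, 1, info, fence);

    if (res == VK_ERROR_DEVICE_LOST) {
        recreate_device();
        rebuild_resources();
    }
    return res;
}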

> 2. Telemetry collection.  It would be useful to have a central place
> to collect telemetry information about which apps seem to be
> problematic, etc.

Yeah I think standardizing reset messages and maybe device state dumps
makes sense. But that's telemetry, not making decisions about what to
kill.
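
For the dump side specifically, devcoredump is already scrapeable today; a rough sketch, assuming a dump node is present (the devcd0 name is illustrative — a real collector would react to devcoredump uevents instead of hardcoding a node, and the dump format is driver-specific):

/* Sketch: read a device coredump from sysfs and discard it afterwards.
 * Writing anything to the data file frees the dump in the kernel. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/devcoredump/devcd0/data";
    FILE *f = fopen(path, "rb");
    char buf[4096];
    size_t n;

    if (!f) { perror(path); return 1; }
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        fwrite(buf, 1, n, stdout);  /* forward dump to a collector */
    fclose(f);

    /* Discard the dump so the kernel can free it. */
    f = fopen(path, "w");
    if (f) { fputc('1', f); fclose(f); }
    return 0;
}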

> 3. A policy manager in userspace.  If you want to make some decision
> about what to do about repeat offenders, or about apps that claim to
> support robustness but really don't.

Imo we should have something for this in the kernel first. Kinda like
oom killer vs userspace oom killer. Sure, eventually a userspace one
makes sense for very specific use cases. But for a baseline I think we
need an in-kernel gpu offender killer first that's a bit standardized
across the drivers. Otherwise we're just guaranteed to build the wrong
uapi; with an in-kernel-first solution we can experiment around a bit
first.
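
To make the comparison concrete, the in-kernel baseline could be as dumb as the following; a purely hypothetical sketch — struct drm_ctx and its fields don't exist, they're the kind of common bookkeeping being argued for here:

/* Hypothetical sketch only: struct drm_ctx stands in for the
 * cross-driver context structure this thread proposes. */
#include <linux/pid.h>
#include <linux/sched/signal.h>
#include <linux/sched/task.h>

struct drm_ctx {
	struct pid *pid;         /* owner, taken at context creation */
	bool robustness;         /* userspace opted into a robust context */
	unsigned int hang_count; /* for repeat-offender policy */
};

static void drm_ctx_kill_offender(struct drm_ctx *ctx)
{
	struct task_struct *task;

	/* Robust contexts are expected to recover on their own. */
	if (ctx->robustness)
		return;

	task = get_pid_task(ctx->pid, PIDTYPE_PID);
	if (!task)
		return; /* already gone */

	send_sig(SIGKILL, task, 1);
	put_task_struct(task);
}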

> 4. Apps that don't use a UMD.  E.g., unit tests and IGT.  If they
> don't use a UMD, who kills them?

CI framework. They have 

Re: [PATCH v3 0/2] drm: Add GPU reset sysfs

2022-12-07 Thread Alex Deucher
On Wed, Nov 30, 2022 at 6:11 AM Daniel Vetter wrote:
>
> On Fri, Nov 25, 2022 at 02:52:01PM -0300, André Almeida wrote:
> > This patchset adds a udev event for DRM device resets.
> >
> > Userspace apps can trigger GPU resets by misuse of graphical APIs or
> > driver bugs. Either way, the GPU reset might leave the system in a
> > broken state[1] that might be recovered if the user has access to a
> > tty or a remote shell. Arguably, this recovery could happen
> > automatically by the system itself, and that is the goal of this
> > patchset.
> >
> > For debugging and reporting purposes, device coredump support was
> > already added for amdgpu[2], but it's not suitable for programmatic
> > usage like this one, given that the uAPI is not stable and needs
> > parsing.
> >
> > GL/VK is out of scope for this use, given that we are dealing with
> > device resets regardless of API.
> >
> > A basic userspace daemon is provided at [3] showing how the interface
> > is used to recover from resets.
> >
> > [1] A search for "reset" in the DRM/AMD issue tracker shows reports
> > of resets making the system unusable:
> > https://gitlab.freedesktop.org/drm/amd/-/issues/?search=reset
> >
> > [2] https://lore.kernel.org/amd-gfx/20220602081538.1652842-2-amaranath.somalapu...@amd.com/
> >
> > [3] https://gitlab.freedesktop.org/andrealmeid/gpu-resetd
> >
> > v2: https://lore.kernel.org/dri-devel/20220308180403.75566-1-contactshashanksha...@gmail.com/
> >
> > André Almeida (1):
> >   drm/amdgpu: Add work function for GPU reset event
> >
> > Shashank Sharma (1):
> >   drm: Add GPU reset sysfs event
>
> This seems a bit too amd-specific, and a bit too much like an ad-hoc
> stopgap.
>
> On the amd-specific piece:
>
> - amd's gpus suck the most for gpu hangs, because aside from the shader
>   unblock, there's only device reset, which thrashes vram and display
>   and absolutely everything. Which is terrible. Everyone else has had
>   engine-only reset for years (i.e. it doesn't thrash display or vram),
>   and very often even just context reset (i.e. unless the driver is
>   busted somehow or there's a hw bug, malicious userspace will _only_
>   ever impact itself).
>
> - robustness extensions for gl/vk already have very clear specifications
>   of all cases of reset, and this work here just ignores that. Yes, on
>   amd you only have device reset, but this is drm infra, so you need to
>   be able to cope with ctx reset or a reset which only affected a
>   limited set of contexts. If this is for compute and compute apis lack
>   robustness extensions, then those apis need to be fixed to fill that
>   gap.
>
> - the entire daemon thing feels a bit like overkill and I'm not sure why
>   it exists. I think for a start it would be much simpler if we just had
>   a (per-device, maybe) sysfs knob to enable automatic killing of
>   processes that hit a reset and don't have arb robustness enabled (for
>   the gl case; for the vk case the assumption is that _every_ app
>   supports VK_ERROR_DEVICE_LOST and can recover).

Thinking about this a bit more, I think there are useful cases for the
GPU reset event and a daemon.  When I refer to a daemon here, it could
be a standalone thing or integrated into the desktop manager, logind,
or whatever.
1. For APIs that don't have robustness support (e.g., video
encode/decode APIs).  I could go either way on this one, since it
probably makes sense to just kill the app if there is no robustness
mechanism in the API.
2. Telemetry collection.  It would be useful to have a central place
to collect telemetry information about which apps seem to be
problematic, etc.
3. A policy manager in userspace.  If you want to make some decision
about what to do about repeat offenders, or about apps that claim to
support robustness but really don't.
4. Apps that don't use a UMD.  E.g., unit tests and IGT.  If they
don't use a UMD, who kills them?
5. Server use cases where you have multiple GPU apps running in
containers and you want some sort of policy control, or a hand in what
to do when an app causes a hang. (A minimal listener sketch follows
below.)

Alex
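
The listener sketch referenced in point 5 above: a hedged sketch of such a daemon, assuming the reset event arrives as a drm-subsystem uevent carrying the culprit PID as a property — the "PID" property name is an assumption from this series, not settled uAPI:

/* Sketch: a daemon waiting for DRM reset uevents via libudev.
 * Build with: cc resetd.c -ludev */
#include <poll.h>
#include <stdio.h>
#include <libudev.h>

int main(void)
{
    struct udev *udev = udev_new();
    struct udev_monitor *mon =
        udev_monitor_new_from_netlink(udev, "kernel");
    struct pollfd pfd;

    udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
    udev_monitor_enable_receiving(mon);
    pfd.fd = udev_monitor_get_fd(mon);
    pfd.events = POLLIN;

    for (;;) {
        struct udev_device *dev;
        const char *pid;

        if (poll(&pfd, 1, -1) <= 0)
            continue;
        dev = udev_monitor_receive_device(mon);
        if (!dev)
            continue;
        pid = udev_device_get_property_value(dev, "PID");
        if (pid)
            printf("GPU reset on %s, culprit pid %s\n",
                   udev_device_get_sysname(dev), pid);
        /* Policy decisions (kill, log, telemetry) would go here. */
        udev_device_unref(dev);
    }
}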

>
> Now onto the ad-hoc part:
>
> - Everyone hand-rolls ad-hoc gpu context structures and ad-hoc ways to
>   associate them with a pid. I think we need to stop doing that, because
>   it's just endless pain and prevents us from building useful management
>   stuff, like cgroups for gpus that work across drivers (driver/vendor
>   specific cgroups won't be accepted by the upstream cgroup maintainers),
>   or gpu reset events and dumps like here. This is going to be some work,
>   unfortunately.
>
> - I think the best starting point is the context structure drm/scheduler
>   already has, but it needs some work:
>   * untangling it from the scheduler part, so it can also be used for
>     compute contexts that are directly scheduled by hw
>   * (amd specific) moving amdkfd over to that context structure, at least
>     internally
>   * tracking the pid in there
>
> - I think the error dump facility should also be integrated into this.

Re: [PATCH v3 0/2] drm: Add GPU reset sysfs

2022-11-30 Thread Simon Ser
On Wednesday, November 30th, 2022 at 16:23, André Almeida wrote:

> On 11/28/22 06:30, Simon Ser wrote:
> 
> > The PID is racy, the user-space daemon could end up killing an
> > unrelated process… Is there any way we could use a pidfd instead?
> 
> Is the PID race condition something that really happens or rather
> something theoretical?

A PID race can happen in practice if many PIDs get spawned. On Linux
PIDs wrap around pretty quickly.

Note, even a sandboxed program inside its own PID namespace can trigger
the wrap-around.

> Anyway, I can't see how pidfd and uevent would work together. Since a
> uevent is kind of a broadcast and a pidfd is an anon file, it wouldn't
> be possible to tell userspace which fd to use, given that file
> descriptors are per-process resources.

Yeah, I can see how this can be difficult to integrate with uevent.

> On the other hand, this interface could be converted into an ioctl that
> userspace blocks on waiting for a reset notification; the kernel could
> then create a pidfd and hand the blocked process the right fd. We would
> probably need a queue to make sure no event is lost.

A blocking IOCTL wouldn't be very nice, you can't integrate that into
an event loop for instance…
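
For what it's worth, once a daemon does hold a pidfd, the kill itself is race-free; a sketch using the raw syscalls (pidfd_open is Linux 5.3+, pidfd_send_signal 5.1+). The hard part, as discussed above, is getting the pidfd delivered at all — opening one from a PID received over uevent is still subject to the same reuse race:

/* Sketch: kill a process through a pidfd so a recycled PID can't be
 * hit once the fd is held. */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int kill_via_pidfd(pid_t pid)
{
    int pidfd = syscall(SYS_pidfd_open, pid, 0);

    if (pidfd < 0) {
        perror("pidfd_open"); /* process already gone, or old kernel */
        return -1;
    }
    /* Fails with ESRCH if the process exited; never hits PID reuse. */
    if (syscall(SYS_pidfd_send_signal, pidfd, SIGKILL, NULL, 0) < 0) {
        perror("pidfd_send_signal");
        close(pidfd);
        return -1;
    }
    close(pidfd);
    return 0;
}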


Re: [PATCH v3 0/2] drm: Add GPU reset sysfs

2022-11-30 Thread André Almeida

On 11/28/22 06:30, Simon Ser wrote:

The PID is racy, the user-space daemon could end up killing an
unrelated process… Is there any way we could use a pidfd instead?


Is the PID race condition something that really happens or rather 
something theoretical?


Anyway, I can't see how pidfd and uevent would work together. Since a 
uevent is kind of a broadcast and a pidfd is an anon file, it wouldn't 
be possible to tell userspace which fd to use, given that file 
descriptors are per-process resources.


On the other hand, this interface could be converted into an ioctl that 
userspace blocks on waiting for a reset notification; the kernel could 
then create a pidfd and hand the blocked process the right fd. We would 
probably need a queue to make sure no event is lost.


Thanks
André


Re: [PATCH v3 0/2] drm: Add GPU reset sysfs

2022-11-30 Thread Daniel Vetter
On Fri, Nov 25, 2022 at 02:52:01PM -0300, André Almeida wrote:
> This patchset adds a udev event for DRM device resets.
> 
> Userspace apps can trigger GPU resets by misuse of graphical APIs or
> driver bugs. Either way, the GPU reset might leave the system in a
> broken state[1] that might be recovered if the user has access to a
> tty or a remote shell. Arguably, this recovery could happen
> automatically by the system itself, and that is the goal of this
> patchset.
> 
> For debugging and reporting purposes, device coredump support was
> already added for amdgpu[2], but it's not suitable for programmatic
> usage like this one, given that the uAPI is not stable and needs
> parsing.
> 
> GL/VK is out of scope for this use, given that we are dealing with
> device resets regardless of API.
> 
> A basic userspace daemon is provided at [3] showing how the interface
> is used to recover from resets.
> 
> [1] A search for "reset" in the DRM/AMD issue tracker shows reports
> of resets making the system unusable:
> https://gitlab.freedesktop.org/drm/amd/-/issues/?search=reset
> 
> [2] https://lore.kernel.org/amd-gfx/20220602081538.1652842-2-amaranath.somalapu...@amd.com/
> 
> [3] https://gitlab.freedesktop.org/andrealmeid/gpu-resetd
> 
> v2: https://lore.kernel.org/dri-devel/20220308180403.75566-1-contactshashanksha...@gmail.com/
> 
> André Almeida (1):
>   drm/amdgpu: Add work function for GPU reset event
> 
> Shashank Sharma (1):
>   drm: Add GPU reset sysfs event

This seems a bit too amd-specific, and a bit too much like an ad-hoc
stopgap.

On the amd-specific piece:

- amd's gpus suck the most for gpu hangs, because aside from the shader
  unblock, there's only device reset, which thrashes vram and display
  and absolutely everything. Which is terrible. Everyone else has had
  engine-only reset for years (i.e. it doesn't thrash display or vram),
  and very often even just context reset (i.e. unless the driver is
  busted somehow or there's a hw bug, malicious userspace will _only_
  ever impact itself).

- robustness extensions for gl/vk already have very clear specifications
  of all cases of reset, and this work here just ignores that. Yes, on
  amd you only have device reset, but this is drm infra, so you need to
  be able to cope with ctx reset or a reset which only affected a
  limited set of contexts. If this is for compute and compute apis lack
  robustness extensions, then those apis need to be fixed to fill that
  gap.

- the entire daemon thing feels a bit like overkill and I'm not sure why
  it exists. I think for a start it would be much simpler if we just had
  a (per-device, maybe) sysfs knob to enable automatic killing of
  processes that hit a reset and don't have arb robustness enabled (for
  the gl case; for the vk case the assumption is that _every_ app
  supports VK_ERROR_DEVICE_LOST and can recover).

Now onto the ad-hoc part:

- Everyone hand-rolls ad-hoc gpu context structures and ad-hoc ways to
  associate them with a pid. I think we need to stop doing that, because
  it's just endless pain and prevents us from building useful management
  stuff, like cgroups for gpus that work across drivers (driver/vendor
  specific cgroups won't be accepted by the upstream cgroup maintainers),
  or gpu reset events and dumps like here. This is going to be some work,
  unfortunately.

- I think the best starting point is the context structure drm/scheduler
  already has, but it needs some work:
  * untangling it from the scheduler part, so it can also be used for
    compute contexts that are directly scheduled by hw
  * (amd specific) moving amdkfd over to that context structure, at least
    internally
  * tracking the pid in there

- I think the error dump facility should also be integrated into this.
  Userspace needs to know which dump is associated with which reset event,
  so that remote crash reporting works correctly.

- Ideally this framework can keep track of impacted contexts, so that
  drivers don't have to reinvent the "which contexts are impacted"
  robustness ioctl book-keeping all on their own. For amd gpus it's kinda
  easy, since the impact is "everything", but for other gpus the impact
  can be anything from "only one context" to "only contexts actively
  running on $set_of_engines" to "all the contexts actively running" to
  "we thrashed vram, everything is gone".

- i915 has a bunch of this already, but I honestly have no idea whether
  it's of any use, because i915-gem is terminally not switching over to
  drm/scheduler (it needs a full rewrite, which is happening somewhere).
  So it might only be useful to look at to make sure we're not building
  something which only works for full-device-reset gpus and nothing else.
  Over the various generations i915 has pretty much every possible gpu
  reset option you can think of, with resulting different reporting
  requirements to make sure robustness extensions work correctly.

- pid isn't enough once you have engine/context reset, you need pid (well
  drm_file really, but I guess we 
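
As a strawman for the common context structure discussed above, this is roughly the shape being argued for; every name here is hypothetical, nothing like it exists in drm today:

/* Hypothetical sketch of a cross-driver gpu context structure with the
 * pid tracking and reset book-keeping proposed in this thread. */
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/pid.h>
#include <linux/types.h>

enum drm_gpu_ctx_reset {
	DRM_GPU_CTX_RESET_NONE,      /* context untouched */
	DRM_GPU_CTX_RESET_GUILTY,    /* this context caused the reset */
	DRM_GPU_CTX_RESET_INNOCENT,  /* hit by a wider engine/device reset */
	DRM_GPU_CTX_RESET_VRAM_LOST, /* memory contents gone as well */
};

struct drm_gpu_ctx {
	struct kref ref;
	struct pid *pid;              /* owner, for reset events and policy */
	struct list_head node;        /* per-device list, for impact walks */
	enum drm_gpu_ctx_reset reset; /* what robustness ioctls would report */
	u64 guilty_count;             /* repeat-offender bookkeeping */
};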

Re: [PATCH v3 0/2] drm: Add GPU reset sysfs

2022-11-28 Thread Simon Ser
The PID is racy, the user-space daemon could end up killing an
unrelated process… Is there any way we could use a pidfd instead?


Re: [PATCH v3 0/2] drm: Add GPU reset sysfs

2022-11-28 Thread Pekka Paalanen
On Fri, 25 Nov 2022 14:52:01 -0300, André Almeida wrote:

> This patchset adds a udev event for DRM device resets.

Hi,

this seems a good idea to me.

> Userspace apps can trigger GPU resets by misuse of graphical APIs or
> driver bugs. Either way, the GPU reset might leave the system in a
> broken state[1] that might be recovered if the user has access to a
> tty or a remote shell. Arguably, this recovery could happen
> automatically by the system itself, and that is the goal of this
> patchset.
> 
> For debugging and reporting purposes, device coredump support was
> already added for amdgpu[2], but it's not suitable for programmatic
> usage like this one, given that the uAPI is not stable and needs
> parsing.
> 
> GL/VK is out of scope for this use, given that we are dealing with
> device resets regardless of API.

I see that the reported PID is intended to be the culprit, the process
that caused the GPU to crash or hang, if identified. Hence, killing
that process perhaps makes sense, even if it could recover on its own
through the GL/VK "device lost" mechanism.

"VRAM lost" is interesting. Innocent processes essentially lost the GPU
in that case, I suppose, but that's no reason to kill them and restart
the whole graphics stack outright. Those that actually handle GL/VK
device lost should theoretically be fine, right?
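
For reference, an app that handles it would poll the robustness status; a minimal sketch, assuming a GL context created with reset notification enabled:

/* Sketch: how a robust GL context observes a reset. Needs GL 4.5
 * headers or a loader (with GL_ARB_robustness the enums carry an _ARB
 * suffix); non-robust contexts always return GL_NO_ERROR here. */
#include <GL/gl.h>

static int context_needs_recreate(void)
{
    switch (glGetGraphicsResetStatus()) {
    case GL_NO_ERROR:
        return 0; /* no reset observed */
    case GL_GUILTY_CONTEXT_RESET:   /* this context caused it */
    case GL_INNOCENT_CONTEXT_RESET: /* someone else caused it */
    case GL_UNKNOWN_CONTEXT_RESET:  /* cause not attributable */
    default:
        /* Context and its objects are gone; recreate them all. */
        return 1;
    }
}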

Display servers can make more enlightened decisions on whether they
need to restart or not, if they are implemented to handle that.

The example gpu-resetd [3] behaviour in that case seems sub-optimal.
Could it do better? How would it know, or avoid knowing, which
processes handled the GPU reset fine and which need external restarting?

Maybe gpu-resetd should kill the culprit only if it causes resets
repeatedly? But if the culprit does not handle device lost and also
does not die... how do you know you need to kill it?


Thanks,
pq

> 
> A basic userspace daemon is provided at [3] showing how the interface
> is used to recover from resets.
> 
> [1] A search for "reset" in the DRM/AMD issue tracker shows reports
> of resets making the system unusable:
> https://gitlab.freedesktop.org/drm/amd/-/issues/?search=reset
> 
> [2] https://lore.kernel.org/amd-gfx/20220602081538.1652842-2-amaranath.somalapu...@amd.com/
> 
> [3] https://gitlab.freedesktop.org/andrealmeid/gpu-resetd
> 
> v2: https://lore.kernel.org/dri-devel/20220308180403.75566-1-contactshashanksha...@gmail.com/
> 
> André Almeida (1):
>   drm/amdgpu: Add work function for GPU reset event
> 
> Shashank Sharma (1):
>   drm: Add GPU reset sysfs event
> 
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h        |  4 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 30 ++
>  drivers/gpu/drm/drm_sysfs.c                | 26 +++
>  include/drm/drm_sysfs.h                    | 13 ++
>  4 files changed, 73 insertions(+)
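
Judging by the diffstat above, the kernel-side emission presumably reduces to a kobject_uevent_env() call on the drm device; a hedged sketch — the actual function name and env keys in the series may differ:

/* Hedged sketch of the drm_sysfs.c side: send a uevent when a device
 * reset happens. The function name and "RESET=1" key are guesses at
 * what the series does, not confirmed uAPI. */
#include <drm/drm_device.h>
#include <drm/drm_file.h>
#include <linux/kobject.h>

void drm_sysfs_reset_event(struct drm_device *dev)
{
	char *envp[] = { "RESET=1", NULL };

	/* Emitted on the primary node so udev rules can match on drm. */
	kobject_uevent_env(&dev->primary->kdev->kobj, KOBJ_CHANGE, envp);
}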
> 


