Re: [PATCH] drm/amdgpu: fix amdgpu_ras_block_late_init error handler

2022-02-22 Thread Kenny Ho
On Thu, Feb 17, 2022 at 2:06 PM Alex Deucher wrote: > > On Thu, Feb 17, 2022 at 2:04 PM Nick Desaulniers > wrote: > > > > > > Alex, > > Has AMD been able to set up clang builds, yet? > > No. I think some individual teams do, but it's never been integrated > into our larger CI systems as of yet

Re: [RFC] Add BPF_PROG_TYPE_CGROUP_IOCTL

2021-05-07 Thread Kenny Ho
On Fri, May 7, 2021 at 12:54 PM Daniel Vetter wrote: > > SRIOV is kinda by design vendor specific. You set up the VF endpoint, it > shows up, it's all hw+fw magic. Nothing for cgroups to manage here at all. Right, so in theory you just use the device cgroup with the VF endpoints. > All I meant

Re: [RFC] Add BPF_PROG_TYPE_CGROUP_IOCTL

2021-05-07 Thread Kenny Ho
On Fri, May 7, 2021 at 4:59 AM Daniel Vetter wrote: > > Hm I missed that. I feel like time-sliced-of-a-whole gpu is the easier gpu > cgroups controller to get started, since it's much closer to other cgroups > that control bandwidth of some kind. Whether it's i/o bandwidth or compute > bandwidth

Re: [RFC] Add BPF_PROG_TYPE_CGROUP_IOCTL

2021-05-06 Thread Kenny Ho
Sorry for the late reply (I have been working on other stuff.) On Fri, Feb 5, 2021 at 8:49 AM Daniel Vetter wrote: > > So I agree that on one side CU mask can be used for low-level quality > of service guarantees (like the CLOS cache stuff on intel cpus as an > example), and that's going to be

Re: [RFC] Add BPF_PROG_TYPE_CGROUP_IOCTL

2021-02-03 Thread Kenny Ho
n, Feb 01, 2021 at 11:51:07AM -0500, Kenny Ho wrote: > > On Mon, Feb 1, 2021 at 9:49 AM Daniel Vetter wrote: > > > - there's been a pile of cgroups proposal to manage gpus at the drm > > > subsystem level, some by Kenny, and frankly this at least looks a bit

Re: [RFC] Add BPF_PROG_TYPE_CGROUP_IOCTL

2021-02-01 Thread Kenny Ho
No Daniel, this is a quick *draft* to get a conversation going. Bpf was actually a path suggested by Tejun back in 2018, so I think you are mischaracterizing this quite a bit. "2018-11-20 Kenny Ho: To put the questions in more concrete terms, let say a user wants to expose certain

Re: [RFC] Add BPF_PROG_TYPE_CGROUP_IOCTL

2021-02-01 Thread Kenny Ho
, this is a quick *draft* to get a conversation going. Bpf was actually a path suggested by Tejun back in 2018, so I think you are mischaracterizing this quite a bit. "2018-11-20 Kenny Ho: To put the questions in more concrete terms, let say a user wants to expose certain part of a gpu to a parti

Re: [PATCH v2 00/11] new cgroup controller for gpu/drm subsystem

2020-04-14 Thread Kenny Ho
ow-jitter/low-latency sharing of a single GPU if you have whatever hardware support you need today? Regards, Kenny > > On Tue, Apr 14, 2020 at 9:26 AM Daniel Vetter wrote: > > > > > > On Tue, Apr 14, 2020 at 3:14 PM Kenny Ho wrote: > > > > > > > > Ok

Re: [PATCH v2 00/11] new cgroup controller for gpu/drm subsystem

2020-04-14 Thread Kenny Ho
suggestion, if not...question 2.) 2) If spatial sharing is required to support GPU HPC use cases, what would you implement if you have the hardware support today? Regards, Kenny On Tue, Apr 14, 2020 at 9:26 AM Daniel Vetter wrote: > > On Tue, Apr 14, 2020 at 3:14 PM Kenny Ho wrote: > > >

Re: [PATCH v2 00/11] new cgroup controller for gpu/drm subsystem

2020-04-14 Thread Kenny Ho
itching cost is zero.) As a drm co-maintainer, are you suggesting GPU has no place in the HPC use case? Regards, Kenny On Tue, Apr 14, 2020 at 8:52 AM Daniel Vetter wrote: > > On Tue, Apr 14, 2020 at 2:47 PM Kenny Ho wrote: > > On Tue, Apr 14, 2020 at 8:20 AM Daniel Vetter wrote: >

Re: [PATCH v2 00/11] new cgroup controller for gpu/drm subsystem

2020-04-14 Thread Kenny Ho
Hi Daniel, On Tue, Apr 14, 2020 at 8:20 AM Daniel Vetter wrote: > My understanding from talking with a few other folks is that > the cpumask-style CU-weight thing is not something any other gpu can > reasonably support (and we have about 6+ of those in-tree) How does Intel plan to support the

Re: [PATCH v2 00/11] new cgroup controller for gpu/drm subsystem

2020-04-13 Thread Kenny Ho
Hi, On Mon, Apr 13, 2020 at 4:54 PM Tejun Heo wrote: > > Allocations definitely are acceptable and it's not a pre-requisite to have > work-conserving control first either. Here, given the lack of consensus in > terms of what even constitute resource units, I don't think it'd be a good > idea to

Re: [PATCH v2 00/11] new cgroup controller for gpu/drm subsystem

2020-04-13 Thread Kenny Ho
conserving implementation first, especially when we have users asking for such functionality? Regards, Kenny On Mon, Apr 13, 2020 at 3:11 PM Tejun Heo wrote: > > Hello, Kenny. > > On Tue, Mar 24, 2020 at 02:49:27PM -0400, Kenny Ho wrote: > > Can you elaborate more on what are the mis

Re: [PATCH v2 00/11] new cgroup controller for gpu/drm subsystem

2020-03-24 Thread Kenny Ho
Hi Tejun, Can you elaborate on what the missing pieces are? Regards, Kenny On Tue, Mar 24, 2020 at 2:46 PM Tejun Heo wrote: > > On Tue, Mar 17, 2020 at 12:03:20PM -0400, Kenny Ho wrote: > > What are your thoughts on this latest series? > > My overall impression is that th

Re: [PATCH v2 00/11] new cgroup controller for gpu/drm subsystem

2020-03-17 Thread Kenny Ho
Hi Tejun, What are your thoughts on this latest series? Regards, Kenny On Wed, Feb 26, 2020 at 2:02 PM Kenny Ho wrote: > > This is a submission for the introduction of a new cgroup controller for the > drm subsystem following a series of RFCs [v1, v2, v3, v4] > > Changes from PR

[PATCH v2 10/11] drm, cgroup: add update trigger after limit change

2020-02-26 Thread Kenny Ho
type for the migrated task. Change-Id: I0ce7c4e5a04c31bd0f8d9853a383575d4bc9a3fa Signed-off-by: Kenny Ho --- include/drm/drm_drv.h | 10 kernel/cgroup/drm.c | 58 +++ 2 files changed, 68 insertions(+) diff --git a/include/drm/drm_drv.h b/include

[PATCH v2 07/11] drm, cgroup: Add total GEM buffer allocation limit

2020-02-26 Thread Kenny Ho
allocation limit for /dev/dri/card1 to 1GB echo "226:1 1g" > gpu.buffer.total.max Set allocation limit for /dev/dri/card0 to 512MB echo "226:0 512m" > gpu.buffer.total.max Change-Id: Id3265bbd0fafe84a16b59617df79bd32196160be Signed-off-by: Kenny Ho ---
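
As a rough shell sketch of the knob described in this patch preview, assuming the v2 controller is enabled and the limits are written from a cgroup directory (the /sys/fs/cgroup/gpu-jobs path is illustrative and not from the patch; 226:1 and 226:0 are the major:minor numbers of /dev/dri/card1 and card0 used in the patch example):

    # cap total GEM buffer allocations for card1 (226:1) at 1 GiB in this cgroup
    echo "226:1 1g" > /sys/fs/cgroup/gpu-jobs/gpu.buffer.total.max
    # cap card0 (226:0) at 512 MiB
    echo "226:0 512m" > /sys/fs/cgroup/gpu-jobs/gpu.buffer.total.max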

[PATCH v2 08/11] drm, cgroup: Add peak GEM buffer allocation limit

2020-02-26 Thread Kenny Ho
(such as k, m, g) can be used. Set largest allocation for /dev/dri/card1 to 4MB echo "226:1 4m" > gpu.buffer.peak.max Change-Id: I5ab3fb4a442b6cbd5db346be595897c90217da69 Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 18 +++
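
Similarly, a minimal sketch of the peak knob described above, using the same illustrative cgroup path (the memory suffixes k, m, g are the ones mentioned in the patch text):

    # limit the largest single GEM buffer allocation on card1 (226:1) to 4 MiB
    echo "226:1 4m" > /sys/fs/cgroup/gpu-jobs/gpu.buffer.peak.max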

[PATCH v2 09/11] drm, cgroup: Add compute as gpu cgroup resource

2020-02-26 Thread Kenny Ho
Enumeration of the subdevices = == Change-Id: Idde0ef9a331fd67bb9c7eb8ef9978439e6452488 Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 21 +++ include/drm/drm_cgroup.h| 3 + include/linux/cgroup_drm.h

[PATCH v2 11/11] drm/amdgpu: Integrate with DRM cgroup

2020-02-26 Thread Kenny Ho
as defined by the drmcg the kfd process belongs to. Change-Id: I2930e76ef9ac6d36d0feb81f604c89a4208e6614 Signed-off-by: Kenny Ho --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h| 4 + drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 29 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 7

[PATCH v2 01/11] cgroup: Introduce cgroup for drm subsystem

2020-02-26 Thread Kenny Ho
virtualization.) Change-Id: Ia90aed8c4cb89ff20d8216a903a765655b44fc9a Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 18 - Documentation/cgroup-v1/drm.rst | 1 + include/linux/cgroup_drm.h | 92 + include/linux/cgroup_subsys.h

[PATCH v2 06/11] drm, cgroup: Add GEM buffer allocation count stats

2020-02-26 Thread Kenny Ho
gpu.buffer.count.stats A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Total number of GEM buffer allocated. Change-Id: Iad29bdf44390dbcee07b1e72ea0ff811aa3b9dcd Signed-off-by: Kenny Ho --- Documentation
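
Reading the flat-keyed stats file described above would look roughly like this; the cgroup path and the counts in the sample output are made up for illustration, and only the major:minor keying comes from the patch:

    $ cat /sys/fs/cgroup/gpu-jobs/gpu.buffer.count.stats
    226:0 37
    226:1 12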

[PATCH v2 05/11] drm, cgroup: Add peak GEM buffer allocation stats

2020-02-26 Thread Kenny Ho
gpu.buffer.peak.stats A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Largest (high water mark) GEM buffer allocated in bytes. Change-Id: I40fe4c13c1cea8613b3e04b802f3e1f19eaab4fc Signed-off-by: Kenny Ho

[PATCH v2 03/11] drm, cgroup: Initialize drmcg properties

2020-02-26 Thread Kenny Ho
applies to the root cgroup since it can be created before DRM devices are available. The drmcg controller will go through all existing drm cgroups and initialize them with the new device accordingly. Change-Id: I64e421d8dfcc22ee8282cc1305960e20c2704db7 Signed-off-by: Kenny Ho --- drivers/gpu/drm

[PATCH v2 04/11] drm, cgroup: Add total GEM buffer allocation stats

2020-02-26 Thread Kenny Ho
by the drm device's major:minor. Total GEM buffer allocation in bytes. Change-Id: Ibc1f646ca7dbc588e2d11802b156b524696a23e7 Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 50 +- drivers/gpu/drm/drm_gem.c | 9 ++ include/drm/drm_cgroup.h

[PATCH v2 02/11] drm, cgroup: Bind drm and cgroup subsystem

2020-02-26 Thread Kenny Ho
Since the drm subsystem can be compiled as a module and drm devices can be added and removed during run time, add several functions to bind the drm subsystem as well as drm devices with drmcg. Two pairs of functions: drmcg_bind/drmcg_unbind - used to bind/unbind the drm subsystem to the cgroup

[PATCH v2 00/11] new cgroup controller for gpu/drm subsystem

2020-02-26 Thread Kenny Ho
] https://github.com/kubernetes/kubernetes/issues/52757 Kenny Ho (11): cgroup: Introduce cgroup for drm subsystem drm, cgroup: Bind drm and cgroup subsystem drm, cgroup: Initialize drmcg properties drm, cgroup: Add total GEM buffer allocation stats drm, cgroup: Add peak GEM buffer

Re: [PATCH 09/11] drm, cgroup: Introduce lgpu as DRM cgroup resource

2020-02-20 Thread Kenny Ho
Thanks, I will take a look. Regards, Kenny On Wed, Feb 19, 2020 at 1:38 PM Johannes Weiner wrote: > > On Wed, Feb 19, 2020 at 11:28:48AM -0500, Kenny Ho wrote: > > On Wed, Feb 19, 2020 at 11:18 AM Johannes Weiner wrote: > > > > > > Yes, I'd go with absolute

Re: [PATCH 09/11] drm, cgroup: Introduce lgpu as DRM cgroup resource

2020-02-19 Thread Kenny Ho
On Wed, Feb 19, 2020 at 11:18 AM Johannes Weiner wrote: > > Yes, I'd go with absolute units when it comes to memory, because it's > not a renewable resource like CPU and IO, and so we do have cliff > behavior around the edge where you transition from ok to not-enough. > > memory.low is a bit in

Re: [PATCH 09/11] drm, cgroup: Introduce lgpu as DRM cgroup resource

2020-02-14 Thread Kenny Ho
Hi Tejun, On Fri, Feb 14, 2020 at 2:17 PM Tejun Heo wrote: > > I have to agree with Daniel here. My apologies if I weren't clear > enough. Here's one interface I can think of: > > * compute weight: The same format as io.weight. Proportional control >of gpu compute. > > * memory low: Please

Re: [PATCH 09/11] drm, cgroup: Introduce lgpu as DRM cgroup resource

2020-02-14 Thread Kenny Ho
he documentation in this patch: "Some DRM > > devices may only support lgpu as anonymous resources. In such case, > > the significance of the position of the set bits in list will be > > ignored." What Intel does with the user expressed configuration of

Re: [PATCH 09/11] drm, cgroup: Introduce lgpu as DRM cgroup resource

2020-02-14 Thread Kenny Ho
e user expressed configuration of "5 out of 100" is entirely up to Intel (time slice if you like, change to specific EUs later if you like, or make it driver configurable to support both if you like.) Regards, Kenny > > On Fri, Feb 14, 2020 at 9:57 AM Kenny Ho wrote: >>

[PATCH 01/11] cgroup: Introduce cgroup for drm subsystem

2020-02-14 Thread Kenny Ho
virtualization.) Change-Id: Ia90aed8c4cb89ff20d8216a903a765655b44fc9a Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 18 - Documentation/cgroup-v1/drm.rst | 1 + include/linux/cgroup_drm.h | 92 + include/linux/cgroup_subsys.h

[PATCH 02/11] drm, cgroup: Bind drm and cgroup subsystem

2020-02-14 Thread Kenny Ho
Since the drm subsystem can be compiled as a module and drm devices can be added and removed during run time, add several functions to bind the drm subsystem as well as drm devices with drmcg. Two pairs of functions: drmcg_bind/drmcg_unbind - used to bind/unbind the drm subsystem to the cgroup

[PATCH 11/11] drm/amdgpu: Integrate with DRM cgroup

2020-02-14 Thread Kenny Ho
by the drmcg the kfd process belongs to. Change-Id: I2930e76ef9ac6d36d0feb81f604c89a4208e6614 Signed-off-by: Kenny Ho --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h| 4 + drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 29 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 6 + drivers

[PATCH 04/11] drm, cgroup: Add total GEM buffer allocation stats

2020-02-14 Thread Kenny Ho
by the drm device's major:minor. Total GEM buffer allocation in bytes. Change-Id: Ibc1f646ca7dbc588e2d11802b156b524696a23e7 Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 50 +- drivers/gpu/drm/drm_gem.c | 9 ++ include/drm/drm_cgroup.h

[PATCH 09/11] drm, cgroup: Introduce lgpu as DRM cgroup resource

2020-02-14 Thread Kenny Ho
ing the relationship between the cgroups and their configurations in drm.lgpu. Change-Id: Idde0ef9a331fd67bb9c7eb8ef9978439e6452488 Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 80 ++ include/drm/drm_cgroup.h| 3 + include/linux/cgroup_dr

[PATCH 07/11] drm, cgroup: Add total GEM buffer allocation limit

2020-02-14 Thread Kenny Ho
allocation limit for /dev/dri/card1 to 1GB echo "226:1 1g" > drm.buffer.total.max Set allocation limit for /dev/dri/card0 to 512MB echo "226:0 512m" > drm.buffer.total.max Change-Id: Id3265bbd0fafe84a16b59617df79bd32196160be Signed-off-by: Kenny Ho ---

[PATCH 10/11] drm, cgroup: add update trigger after limit change

2020-02-14 Thread Kenny Ho
type for the migrated task. Change-Id: I0ce7c4e5a04c31bd0f8d9853a383575d4bc9a3fa Signed-off-by: Kenny Ho --- include/drm/drm_drv.h | 10 kernel/cgroup/drm.c | 59 ++- 2 files changed, 68 insertions(+), 1 deletion(-) diff --git a/include/drm

[PATCH 05/11] drm, cgroup: Add peak GEM buffer allocation stats

2020-02-14 Thread Kenny Ho
drm.buffer.peak.stats A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Largest (high water mark) GEM buffer allocated in bytes. Change-Id: I40fe4c13c1cea8613b3e04b802f3e1f19eaab4fc Signed-off-by: Kenny Ho

[PATCH 08/11] drm, cgroup: Add peak GEM buffer allocation limit

2020-02-14 Thread Kenny Ho
(such as k, m, g) can be used. Set largest allocation for /dev/dri/card1 to 4MB echo "226:1 4m" > drm.buffer.peak.max Change-Id: I5ab3fb4a442b6cbd5db346be595897c90217da69 Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 18 +++

[PATCH 06/11] drm, cgroup: Add GEM buffer allocation count stats

2020-02-14 Thread Kenny Ho
drm.buffer.count.stats A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Total number of GEM buffer allocated. Change-Id: Iad29bdf44390dbcee07b1e72ea0ff811aa3b9dcd Signed-off-by: Kenny Ho --- Documentation

[PATCH 03/11] drm, cgroup: Initialize drmcg properties

2020-02-14 Thread Kenny Ho
applies to the root cgroup since it can be created before DRM devices are available. The drmcg controller will go through all existing drm cgroups and initialize them with the new device accordingly. Change-Id: I64e421d8dfcc22ee8282cc1305960e20c2704db7 Signed-off-by: Kenny Ho --- drivers/gpu/drm

[PATCH 00/11] new cgroup controller for gpu/drm subsystem

2020-02-14 Thread Kenny Ho
://github.com/RadeonOpenCompute/k8s-device-plugin [8] https://github.com/kubernetes/kubernetes/issues/52757 Kenny Ho (11): cgroup: Introduce cgroup for drm subsystem drm, cgroup: Bind drm and cgroup subsystem drm, cgroup: Initialize drmcg properties drm, cgroup: Add total GEM buffer allocation

Re: [PATCH RFC v4 07/16] drm, cgroup: Add total GEM buffer allocation limit

2019-11-28 Thread Kenny Ho
On Tue, Oct 1, 2019 at 10:30 AM Michal Koutný wrote: > On Thu, Aug 29, 2019 at 02:05:24AM -0400, Kenny Ho wrote: > > drm.buffer.default > > A read-only flat-keyed file which exists on the root cgroup. > > Each entry is keyed by the drm

Re: [PATCH RFC v4 02/16] cgroup: Introduce cgroup for drm subsystem

2019-11-28 Thread Kenny Ho
On Tue, Oct 1, 2019 at 10:31 AM Michal Koutný wrote: > On Thu, Aug 29, 2019 at 02:05:19AM -0400, Kenny Ho wrote: > > +struct cgroup_subsys drm_cgrp_subsys = { > > + .css_alloc = drmcg_css_alloc, > > + .css_free = drmcg_css_free, > > +

Re: Proposal to report GPU private memory allocations with sysfs nodes [plain text version]

2019-10-31 Thread Kenny Ho
ussion to me or > have me cc'ed in that thread? > > Best, > Yiwei > > On Wed, Oct 30, 2019 at 10:23 PM Kenny Ho wrote: >> >> Hi Yiwei, >> >> I am not sure if you are aware, there is an ongoing RFC on adding drm >> support in cgroup for the purpose of res

Re: Proposal to report GPU private memory allocations with sysfs nodes [plain text version]

2019-10-30 Thread Kenny Ho
Hi Yiwei, I am not sure if you are aware, but there is an ongoing RFC on adding drm support in cgroup for the purpose of resource tracking. One of the resources is GPU memory. It's not exactly the same as what you are proposing (it doesn't track API usage, but it tracks the type of GPU memory from

Re: [PATCH RFC v4 14/16] drm, cgroup: Introduce lgpu as DRM cgroup resource

2019-10-09 Thread Kenny Ho
: > > On 2019-08-29 2:05 a.m., Kenny Ho wrote: > > > drm.lgpu > > > A read-write nested-keyed file which exists on all cgroups. > > > Each entry is keyed by the DRM device's major:minor. > > > > > > lgpu stands for

Re: [PATCH RFC v4 01/16] drm: Add drm_minor_for_each

2019-09-05 Thread Kenny Ho
On Thu, Sep 5, 2019 at 4:32 PM Daniel Vetter wrote: > *snip* > drm_dev_unregister gets called on hotunplug, so your cgroup-internal > tracking won't get out of sync any more than the drm_minor list gets > out of sync with drm_devices. The trouble with drm_minor is just that > cgroup doesn't track

Re: [PATCH RFC v4 01/16] drm: Add drm_minor_for_each

2019-09-05 Thread Kenny Ho
On Thu, Sep 5, 2019 at 4:06 PM Daniel Vetter wrote: > > On Thu, Sep 5, 2019 at 8:28 PM Kenny Ho wrote: > > > > (resent in plain text mode) > > > > Hi Daniel, > > > > This is the previous patch relevant to this discussion: > > https://patchwork.

Re: [PATCH RFC v4 01/16] drm: Add drm_minor_for_each

2019-09-05 Thread Kenny Ho
ter wrote: > > On Tue, Sep 03, 2019 at 04:43:45PM -0400, Kenny Ho wrote: > > On Tue, Sep 3, 2019 at 4:12 PM Daniel Vetter wrote: > > > On Tue, Sep 3, 2019 at 9:45 PM Kenny Ho wrote: > > > > On Tue, Sep 3, 2019 at 3:57 AM Daniel Vetter wrote: > > > > >

Re: [PATCH RFC v4 01/16] drm: Add drm_minor_for_each

2019-09-05 Thread Kenny Ho
2019 at 04:43:45PM -0400, Kenny Ho wrote: > > On Tue, Sep 3, 2019 at 4:12 PM Daniel Vetter wrote: > > > On Tue, Sep 3, 2019 at 9:45 PM Kenny Ho wrote: > > > > On Tue, Sep 3, 2019 at 3:57 AM Daniel Vetter > wrote: > > > > > Iterating over mi

Re: [PATCH RFC v4 01/16] drm: Add drm_minor_for_each

2019-09-03 Thread Kenny Ho
On Tue, Sep 3, 2019 at 4:12 PM Daniel Vetter wrote: > On Tue, Sep 3, 2019 at 9:45 PM Kenny Ho wrote: > > On Tue, Sep 3, 2019 at 3:57 AM Daniel Vetter wrote: > > > Iterating over minors for cgroups sounds very, very wrong. Why do we care > > > whether a buffer was al

Re: [PATCH RFC v4 01/16] drm: Add drm_minor_for_each

2019-09-03 Thread Kenny Ho
On Tue, Sep 3, 2019 at 3:57 AM Daniel Vetter wrote: > > On Thu, Aug 29, 2019 at 02:05:18AM -0400, Kenny Ho wrote: > > To allow other subsystems to iterate through all stored DRM minors and > > act upon them. > > > > Also exposes drm_minor_acquire and drm_min

Re: [PATCH RFC v4 00/16] new cgroup controller for gpu/drm subsystem

2019-09-03 Thread Kenny Ho
On Tue, Sep 3, 2019 at 5:20 AM Daniel Vetter wrote: > > On Tue, Sep 3, 2019 at 10:24 AM Koenig, Christian > wrote: > > > > Am 03.09.19 um 10:02 schrieb Daniel Vetter: > > > On Thu, Aug 29, 2019 at 02:05:17AM -0400, Kenny Ho wrote: > > >> With this

Re: [PATCH RFC v4 00/16] new cgroup controller for gpu/drm subsystem

2019-09-03 Thread Kenny Ho
Hi Tejun, Thanks for looking into this. I can definitely help where I can, and I am sure other experts will jump in if I start misrepresenting reality :) (as Daniel has already done.) Regarding your points, my understanding is that there isn't really a TTM vs GEM situation anymore (there is

Re: [PATCH RFC v4 13/16] drm, cgroup: Allow more aggressive memory reclaim

2019-08-29 Thread Kenny Ho
istinction which domain you need to evict stuff from. > > Regards, > Christian. > > Am 29.08.19 um 16:07 schrieb Kenny Ho: > > Thanks for the feedback Christian. I am still digging into this one. Daniel > suggested leveraging the Shrinker API for the functionality of th

Re: [PATCH RFC v4 13/16] drm, cgroup: Allow more aggressive memory reclaim

2019-08-29 Thread Kenny Ho
straightforward as far as I understand it currently.) Regards, Kenny On Thu, Aug 29, 2019 at 3:08 AM Koenig, Christian wrote: > Am 29.08.19 um 08:05 schrieb Kenny Ho: > > Allow DRM TTM memory manager to register a work_struct, such that, when > > a drmcgrp is under memory pressure, memory

[PATCH RFC v4 13/16] drm, cgroup: Allow more aggressive memory reclaim

2019-08-29 Thread Kenny Ho
Allow DRM TTM memory manager to register a work_struct, such that, when a drmcgrp is under memory pressure, memory reclaiming can be triggered immediately. Change-Id: I25ac04e2db9c19ff12652b88ebff18b44b2706d8 Signed-off-by: Kenny Ho --- drivers/gpu/drm/ttm/ttm_bo.c| 49

[PATCH RFC v4 15/16] drm, cgroup: add update trigger after limit change

2019-08-29 Thread Kenny Ho
type for the migrated task. Change-Id: I68187a72818b855b5f295aefcb241cda8ab63b00 Signed-off-by: Kenny Ho --- include/drm/drm_drv.h | 10 kernel/cgroup/drm.c | 57 +++ 2 files changed, 67 insertions(+) diff --git a/include/drm/drm_drv.h b/include

[PATCH RFC v4 16/16] drm/amdgpu: Integrate with DRM cgroup

2019-08-29 Thread Kenny Ho
by the drmcg the kfd process belongs to. Change-Id: I69a57452c549173a1cd623c30dc57195b3b6563e Signed-off-by: Kenny Ho --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h| 4 + drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 21 +++ drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 6 + drivers/gpu

[PATCH RFC v4 05/16] drm, cgroup: Add peak GEM buffer allocation stats

2019-08-29 Thread Kenny Ho
drm.buffer.peak.stats A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Largest (high water mark) GEM buffer allocated in bytes. Change-Id: I79e56222151a3d33a76a61ba0097fe93ebb3449f Signed-off-by: Kenny Ho

[PATCH RFC v4 10/16] drm, cgroup: Add TTM buffer peak usage stats

2019-08-29 Thread Kenny Ho
== == Reading returns the following:: 226:0 system=0 tt=0 vram=0 priv=0 226:1 system=0 tt=9035776 vram=17768448 priv=16809984 226:2 system=0 tt=9035776 vram=17768448 priv=16809984 Change-Id: I986e44533848f66411465bdd52105e78105a709a Signed-off-by: Kenny Ho --- include
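
A hedged sketch of reading this nested-keyed file; the file name drm.memory.peak.stats is an assumption (the preview does not show it; the v3 review thread further down refers to a drm.memory.stats file for the companion allocation-stats patch), the cgroup path is illustrative, and the per-device keys and sample values are the ones quoted above:

    $ cat /sys/fs/cgroup/gpu-jobs/drm.memory.peak.stats
    226:0 system=0 tt=0 vram=0 priv=0
    226:1 system=0 tt=9035776 vram=17768448 priv=16809984
    226:2 system=0 tt=9035776 vram=17768448 priv=16809984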

[PATCH RFC v4 09/16] drm, cgroup: Add TTM buffer allocation stats

2019-08-29 Thread Kenny Ho
A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Total number of evictions. Change-Id: Ice2c4cc845051229549bebeb6aa2d7d6153bdf6a Signed-off-by: Kenny Ho --- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 3 +- drivers/gpu

[PATCH RFC v4 11/16] drm, cgroup: Add per cgroup bw measure and control

2019-08-29 Thread Kenny Ho
=9223372036854775807 avg_bytes_per_us=65536 Change-Id: Ie573491325ccc16535bb943e7857f43bd0962add Signed-off-by: Kenny Ho --- drivers/gpu/drm/ttm/ttm_bo.c | 7 + include/drm/drm_cgroup.h | 19 +++ include/linux/cgroup_drm.h | 16 ++ kernel/cgroup/drm.c | 319 ++- 4

[PATCH RFC v4 14/16] drm, cgroup: Introduce lgpu as DRM cgroup resource

2019-08-29 Thread Kenny Ho
its in list will be ignored. This lgpu resource supports the 'allocation' resource distribution model. Change-Id: I1afcacf356770930c7f925df043e51ad06ceb98e Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 46 include/drm/drm_cgrou

[PATCH RFC v4 02/16] cgroup: Introduce cgroup for drm subsystem

2019-08-29 Thread Kenny Ho
virtualization.) Change-Id: I6830d3990f63f0c13abeba29b1d330cf28882831 Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 18 - Documentation/cgroup-v1/drm.rst | 1 + include/linux/cgroup_drm.h | 92 + include/linux/cgroup_subsys.h

[PATCH RFC v4 07/16] drm, cgroup: Add total GEM buffer allocation limit

2019-08-29 Thread Kenny Ho
allocation limit for /dev/dri/card1 to 1GB echo "226:1 1g" > drm.buffer.total.max Set allocation limit for /dev/dri/card0 to 512MB echo "226:0 512m" > drm.buffer.total.max Change-Id: I96e0b7add4d331ed8bb267b3c9243d360c6e9903 Signed-off-by: Kenny Ho ---

[PATCH RFC v4 12/16] drm, cgroup: Add soft VRAM limit

2019-08-29 Thread Kenny Ho
: I7988e28a453b53140b40a28c176239acbc81d491 Signed-off-by: Kenny Ho --- drivers/gpu/drm/ttm/ttm_bo.c | 7 ++ include/drm/drm_cgroup.h | 17 + include/linux/cgroup_drm.h | 2 + kernel/cgroup/drm.c | 135 +++ 4 files changed, 161 insertions

[PATCH RFC v4 08/16] drm, cgroup: Add peak GEM buffer allocation limit

2019-08-29 Thread Kenny Ho
(such as k, m, g) can be used. Set largest allocation for /dev/dri/card1 to 4MB echo "226:1 4m" > drm.buffer.peak.max Change-Id: I0830d56775568e1cf215b56cc892d5e7945e9f25 Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 18 ++

[PATCH RFC v4 03/16] drm, cgroup: Initialize drmcg properties

2019-08-29 Thread Kenny Ho
to the root cgroup since it can be created before DRM devices are available. The drmcg controller will go through all existing drm cgroups and initialize them with the new device accordingly. Change-Id: I908ee6975ea0585e4c30eafde4599f87094d8c65 Signed-off-by: Kenny Ho --- drivers/gpu/drm

[PATCH RFC v4 01/16] drm: Add drm_minor_for_each

2019-08-29 Thread Kenny Ho
: I7c4b67ce6b31f06d1037b03435386ff5b8144ca5 Signed-off-by: Kenny Ho --- drivers/gpu/drm/drm_drv.c | 19 +++ drivers/gpu/drm/drm_internal.h | 4 include/drm/drm_drv.h | 4 3 files changed, 23 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c index

[PATCH RFC v4 00/16] new cgroup controller for gpu/drm subsystem

2019-08-29 Thread Kenny Ho
/kubernetes/kubernetes/issues/52757 Kenny Ho (16): drm: Add drm_minor_for_each cgroup: Introduce cgroup for drm subsystem drm, cgroup: Initialize drmcg properties drm, cgroup: Add total GEM buffer allocation stats drm, cgroup: Add peak GEM buffer allocation stats drm, cgroup: Add GEM buffer

[PATCH RFC v4 04/16] drm, cgroup: Add total GEM buffer allocation stats

2019-08-29 Thread Kenny Ho
by the drm device's major:minor. Total GEM buffer allocation in bytes. Change-Id: I9d662ec50d64bb40a37dbf47f018b2f3a1c033ad Signed-off-by: Kenny Ho --- Documentation/admin-guide/cgroup-v2.rst | 50 +- drivers/gpu/drm/drm_gem.c | 9 ++ include/drm/drm_cgroup.h

[PATCH RFC v4 06/16] drm, cgroup: Add GEM buffer allocation count stats

2019-08-29 Thread Kenny Ho
drm.buffer.count.stats A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Total number of GEM buffer allocated. Change-Id: Id3e1809d5fee8562e47a7d2b961688956d844ec6 Signed-off-by: Kenny Ho --- Documentation

Re: [RFC PATCH v3 00/11] new cgroup controller for gpu/drm subsystem

2019-06-29 Thread Kenny Ho
On Thu, Jun 27, 2019 at 3:24 AM Daniel Vetter wrote: > Another question I have: What about HMM? With the device memory zone > the core mm will be a lot more involved in managing that, but I also > expect that we'll have classic buffer-based management for a long time > still. So these need to

Re: [RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-28 Thread Kenny Ho
On Thu, Jun 27, 2019 at 2:11 AM Daniel Vetter wrote: > I feel like a better approach would be to add a cgroup for the various > engines on the gpu, and then also account all the sdma (or whatever the > name of the amd copy engines is again) usage by ttm_bo moves to the right > cgroup. I think

Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-28 Thread Kenny Ho
On Thu, Jun 27, 2019 at 5:24 PM Daniel Vetter wrote: > On Thu, Jun 27, 2019 at 02:42:43PM -0400, Kenny Ho wrote: > > Um... I am going to get a bit philosophical here and suggest that the > > idea of sharing (especially uncontrolled sharing) is inherently at odds > > with co

Re: [RFC PATCH v3 07/11] drm, cgroup: Add TTM buffer allocation stats

2019-06-27 Thread Kenny Ho
On Thu, Jun 27, 2019 at 2:01 AM Daniel Vetter wrote: > > btw reminds me: I guess it would be good to have a per-type .total > read-only exposed, so that userspace has an idea of how much there is? > ttm is trying to be agnostic to the allocator that's used to manage a > memory type/resource, so

Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-27 Thread Kenny Ho
On Thu, Jun 27, 2019 at 1:43 AM Daniel Vetter wrote: > > On Wed, Jun 26, 2019 at 06:41:32PM -0400, Kenny Ho wrote: > > So without the sharing restriction and some kind of ownership > > structure, we will have to migrate/change the owner of the buffer when > > the cgroup

Re: [RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 12:25 PM Daniel Vetter wrote: > > On Wed, Jun 26, 2019 at 11:05:20AM -0400, Kenny Ho wrote: > > The bandwidth is measured by keeping track of the amount of bytes moved > > by ttm within a time period. We defined two type of bandwidth: burst > &g

Re: [RFC PATCH v3 07/11] drm, cgroup: Add TTM buffer allocation stats

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 12:12 PM Daniel Vetter wrote: > > On Wed, Jun 26, 2019 at 11:05:18AM -0400, Kenny Ho wrote: > > drm.memory.stats > > A read-only nested-keyed file which exists on all cgroups. > > Each entry is keyed by the

Re: [RFC PATCH v3 11/11] drm, cgroup: Allow more aggressive memory reclaim

2019-06-26 Thread Kenny Ho
11:05:22AM -0400, Kenny Ho wrote: > > Allow DRM TTM memory manager to register a work_struct, such that, when > > a drmcgrp is under memory pressure, memory reclaiming can be triggered > > immediately. > > > > Change-Id: I25ac04e2db9c19ff12652b88ebff18b4

Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 5:41 PM Daniel Vetter wrote: > On Wed, Jun 26, 2019 at 05:27:48PM -0400, Kenny Ho wrote: > > On Wed, Jun 26, 2019 at 12:05 PM Daniel Vetter wrote: > > > So what happens when you start a lot of threads all at the same time, > > > allocating gem b

Re: [RFC PATCH v3 02/11] cgroup: Add mechanism to register DRM devices

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 5:04 PM Daniel Vetter wrote: > On Wed, Jun 26, 2019 at 10:37 PM Kenny Ho wrote: > > (sending again, I keep missing the reply-all in gmail.) > You can make it the default somewhere in the gmail options. Um... interesting, my option was actually not set (n

Re: [RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 12:05 PM Daniel Vetter wrote: > > > drm.buffer.default > > A read-only flat-keyed file which exists on the root cgroup. > > Each entry is keyed by the drm device's major:minor. > > > > Default limits on the total GEM buffer allocation in bytes. > >

Re: [RFC PATCH v3 02/11] cgroup: Add mechanism to register DRM devices

2019-06-26 Thread Kenny Ho
(sending again, I keep missing the reply-all in gmail.) On Wed, Jun 26, 2019 at 11:56 AM Daniel Vetter wrote: > > Why the separate, explicit registration step? I think a simpler design for > drivers would be that we set up cgroups if there's anything to be > controlled, and then for GEM drivers

Re: [RFC PATCH v3 01/11] cgroup: Introduce cgroup for drm subsystem

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 11:49 AM Daniel Vetter wrote: > > Bunch of naming bikesheds I appreciate the suggestions, naming is hard :). > > +#include > > + > > +struct drmcgrp { > > drm_cgroup for more consistency how we usually call these things. I was hoping to keep the symbol short if

[RFC PATCH v3 08/11] drm, cgroup: Add TTM buffer peak usage stats

2019-06-26 Thread Kenny Ho
== == Reading returns the following:: 226:0 system=0 tt=0 vram=0 priv=0 226:1 system=0 tt=9035776 vram=17768448 priv=16809984 226:2 system=0 tt=9035776 vram=17768448 priv=16809984 Change-Id: I986e44533848f66411465bdd52105e78105a709a Signed-off-by: Kenny Ho --- include

[RFC PATCH v3 11/11] drm, cgroup: Allow more aggressive memory reclaim

2019-06-26 Thread Kenny Ho
Allow DRM TTM memory manager to register a work_struct, such that, when a drmcgrp is under memory pressure, memory reclaiming can be triggered immediately. Change-Id: I25ac04e2db9c19ff12652b88ebff18b44b2706d8 Signed-off-by: Kenny Ho --- drivers/gpu/drm/ttm/ttm_bo.c| 47

[RFC PATCH v3 10/11] drm, cgroup: Add soft VRAM limit

2019-06-26 Thread Kenny Ho
: I7988e28a453b53140b40a28c176239acbc81d491 Signed-off-by: Kenny Ho --- drivers/gpu/drm/ttm/ttm_bo.c | 7 ++ include/drm/drm_cgroup.h | 15 include/linux/cgroup_drm.h | 2 + kernel/cgroup/drm.c | 145 +++ 4 files changed, 169 insertions

[RFC PATCH v3 02/11] cgroup: Add mechanism to register DRM devices

2019-06-26 Thread Kenny Ho
Change-Id: I908ee6975ea0585e4c30eafde4599f87094d8c65 Signed-off-by: Kenny Ho --- include/drm/drm_cgroup.h | 24 include/linux/cgroup_drm.h | 10 kernel/cgroup/drm.c| 116 + 3 files changed, 150 insertions(+) create mode 100644

[RFC PATCH v3 05/11] drm, cgroup: Add peak GEM buffer allocation limit

2019-06-26 Thread Kenny Ho
"226:1 4m" > drm.buffer.peak.max Change-Id: I0830d56775568e1cf215b56cc892d5e7945e9f25 Signed-off-by: Kenny Ho --- include/linux/cgroup_drm.h | 3 ++ kernel/cgroup/drm.c| 61 ++ 2 files changed, 64 insertions(+) diff --git a/include/linux

[RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-26 Thread Kenny Ho
=9223372036854775807 avg_bytes_per_us=65536 Change-Id: Ie573491325ccc16535bb943e7857f43bd0962add Signed-off-by: Kenny Ho --- drivers/gpu/drm/ttm/ttm_bo.c | 7 + include/drm/drm_cgroup.h | 13 ++ include/linux/cgroup_drm.h | 14 ++ kernel/cgroup/drm.c | 309 ++- 4

[RFC PATCH v3 07/11] drm, cgroup: Add TTM buffer allocation stats

2019-06-26 Thread Kenny Ho
A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Total number of evictions. Change-Id: Ice2c4cc845051229549bebeb6aa2d7d6153bdf6a Signed-off-by: Kenny Ho --- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 3 +- drivers/gpu

[RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

2019-06-26 Thread Kenny Ho
echo "226:0 512m" > drm.buffer.total.max Change-Id: I4c249d06d45ec709d6481d4cbe87c5168545c5d0 Signed-off-by: Kenny Ho --- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 4 + drivers/gpu/drm/drm_gem.c | 8 + drivers/gpu/drm/drm_prime.c| 9 +

[RFC PATCH v3 06/11] drm, cgroup: Add GEM buffer allocation count stats

2019-06-26 Thread Kenny Ho
drm.buffer.count.stats A read-only flat-keyed file which exists on all cgroups. Each entry is keyed by the drm device's major:minor. Total number of GEM buffer allocated. Change-Id: Id3e1809d5fee8562e47a7d2b961688956d844ec6 Signed-off-by: Kenny Ho --- include/linux

[RFC PATCH v3 03/11] drm/amdgpu: Register AMD devices for DRM cgroup

2019-06-26 Thread Kenny Ho
Change-Id: I3750fc657b956b52750a36cb303c54fa6a265b44 Signed-off-by: Kenny Ho --- drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 1 file changed, 4 insertions(+) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c index da7b4fe8ade3..2568fd730161

[RFC PATCH v3 00/11] new cgroup controller for gpu/drm subsystem

2019-06-26 Thread Kenny Ho
-queries-with-postgresql-pg-strom-in-openshift-3-10/ [7] https://github.com/RadeonOpenCompute/k8s-device-plugin [8] https://github.com/kubernetes/kubernetes/issues/52757 Kenny Ho (11): cgroup: Introduce cgroup for drm subsystem cgroup: Add mechanism to register DRM devices drm/amdgpu: Register AMD
