Re: [PATCH RFC 5/5] drm/amdgpu: Add accounting of buffer object creation request via DRM cgroup

2018-11-27 Thread Kenny Ho
Ah I see.  Thank you for the clarification.

Regards,
Kenny
On Tue, Nov 27, 2018 at 3:31 PM Christian König
 wrote:
>
> Am 27.11.18 um 19:15 schrieb Kenny Ho:
> > Hey Christian,
> >
> > Sorry for the late reply, I missed this for some reason.
> >
> > On Wed, Nov 21, 2018 at 5:00 AM Christian König
> >  wrote:
> >>> diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
> >>> index 370e9a5536ef..531726443104 100644
> >>> --- a/include/uapi/drm/amdgpu_drm.h
> >>> +++ b/include/uapi/drm/amdgpu_drm.h
> >>> @@ -72,6 +72,18 @@ extern "C" {
> >>>#define DRM_IOCTL_AMDGPU_FENCE_TO_HANDLE DRM_IOWR(DRM_COMMAND_BASE + 
> >>> DRM_AMDGPU_FENCE_TO_HANDLE, union drm_amdgpu_fence_to_handle)
> >>>#define DRM_IOCTL_AMDGPU_SCHED  DRM_IOW(DRM_COMMAND_BASE + 
> >>> DRM_AMDGPU_SCHED, union drm_amdgpu_sched)
> >>>
> >>> +enum amdgpu_mem_domain {
> >>> + AMDGPU_MEM_DOMAIN_CPU,
> >>> + AMDGPU_MEM_DOMAIN_GTT,
> >>> + AMDGPU_MEM_DOMAIN_VRAM,
> >>> + AMDGPU_MEM_DOMAIN_GDS,
> >>> + AMDGPU_MEM_DOMAIN_GWS,
> >>> + AMDGPU_MEM_DOMAIN_OA,
> >>> + __MAX_AMDGPU_MEM_DOMAIN
> >>> +};
> >> Well that is a clear NAK since it duplicates the TTM defines. Please use
> >> that one instead and don't make this UAPI.
> > This is defined to help with the chunk of changes below.  The
> > AMDGPU_GEM_DOMAIN* already exists and this is similar to how TTM has
> > TTM_PL_* to help with the creation of TTM_PL_FLAG_*:
> > https://elixir.bootlin.com/linux/v4.20-rc4/source/include/drm/ttm/ttm_placement.h#L36
> >
> > I don't disagree that there is duplication here, but it's
> > pre-existing, so if you can help clarify my confusion that would be
> > much appreciated.
>
> The AMDGPU_GEM_DOMAIN_* values are masks used in the frontend IOCTL
> interface to create BOs.
>
> TTM defines the backend pools where the memory is then allocated from to
> fill the BOs.
>
> So you are mixing frontend and backend here.
>
> In other words, for the whole cgroup interface you should not make a
> single change to amdgpu_drm.h; otherwise you are doing something wrong.
>
> Regards,
> Christian.
>
> >
> > Regards,
> > Kenny
> >
> >>> +
> >>> +extern char const *amdgpu_mem_domain_names[];
> >>> +
> >>>/**
> >>> * DOC: memory domains
> >>> *
> >>> @@ -95,12 +107,12 @@ extern "C" {
> >>> * %AMDGPU_GEM_DOMAIN_OAOrdered append, used by 3D or Compute 
> >>> engines
> >>> * for appending data.
> >>> */
> >>> -#define AMDGPU_GEM_DOMAIN_CPU0x1
> >>> -#define AMDGPU_GEM_DOMAIN_GTT0x2
> >>> -#define AMDGPU_GEM_DOMAIN_VRAM   0x4
> >>> -#define AMDGPU_GEM_DOMAIN_GDS0x8
> >>> -#define AMDGPU_GEM_DOMAIN_GWS0x10
> >>> -#define AMDGPU_GEM_DOMAIN_OA 0x20
> >>> +#define AMDGPU_GEM_DOMAIN_CPU(1 << AMDGPU_MEM_DOMAIN_CPU)
> >>> +#define AMDGPU_GEM_DOMAIN_GTT(1 << AMDGPU_MEM_DOMAIN_GTT)
> >>> +#define AMDGPU_GEM_DOMAIN_VRAM   (1 << 
> >>> AMDGPU_MEM_DOMAIN_VRAM)
> >>> +#define AMDGPU_GEM_DOMAIN_GDS(1 << AMDGPU_MEM_DOMAIN_GDS)
> >>> +#define AMDGPU_GEM_DOMAIN_GWS(1 << AMDGPU_MEM_DOMAIN_GWS)
> >>> +#define AMDGPU_GEM_DOMAIN_OA (1 << AMDGPU_MEM_DOMAIN_OA)
> >>>#define AMDGPU_GEM_DOMAIN_MASK  (AMDGPU_GEM_DOMAIN_CPU | \
> >>> AMDGPU_GEM_DOMAIN_GTT | \
> >>> AMDGPU_GEM_DOMAIN_VRAM | \
>
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx




Re: [PATCH RFC 5/5] drm/amdgpu: Add accounting of buffer object creation request via DRM cgroup

2018-11-21 Thread Christian König

Am 20.11.18 um 19:58 schrieb Kenny Ho:

Account for the total size of buffer objects requested from amdgpu, by
buffer type, on a per-cgroup basis.

The x prefix in the control file name x.bo_requested.amd.stat signifies
that the interface is experimental.

Change-Id: Ifb680c4bcf3652879a7a659510e25680c2465cf6
Signed-off-by: Kenny Ho 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_drmcgrp.c | 56 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_drmcgrp.h |  3 ++
  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 13 +
  include/uapi/drm/amdgpu_drm.h   | 24 ++---
  4 files changed, 90 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drmcgrp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drmcgrp.c
index 853b77532428..e3d98ed01b79 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drmcgrp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drmcgrp.c
@@ -7,6 +7,57 @@
  #include "amdgpu_ring.h"
  #include "amdgpu_drmcgrp.h"
  
+void amdgpu_drmcgrp_count_bo_req(struct task_struct *task, struct drm_device *dev,
+		u32 domain, unsigned long size)
+{
+	struct drmcgrp *drmcgrp = get_drmcgrp(task);
+	struct drmcgrp_device_resource *ddr;
+	struct drmcgrp *p;
+	struct amd_drmcgrp_dev_resource *a_ddr;
+	int i;
+
+	if (drmcgrp == NULL)
+		return;
+
+	ddr = drmcgrp->dev_resources[dev->primary->index];
+
+	mutex_lock(&ddr->ddev->mutex);
+	for (p = drmcgrp; p != NULL; p = parent_drmcgrp(p)) {
+		a_ddr = ddr_amdddr(p->dev_resources[dev->primary->index]);
+
+		for (i = 0; i < __MAX_AMDGPU_MEM_DOMAIN; i++)
+			if ((1 << i) & domain)
+				a_ddr->bo_req_count[i] += size;
+	}
+	mutex_unlock(&ddr->ddev->mutex);
+}
+
+int amd_drmcgrp_bo_req_stat_read(struct seq_file *sf, void *v)
+{
+   struct drmcgrp *drmcgrp = css_drmcgrp(seq_css(sf));
+   struct drmcgrp_device_resource *ddr = NULL;
+   struct amd_drmcgrp_dev_resource *a_ddr = NULL;
+   int i, j;
+
+   seq_puts(sf, "---\n");
+   for (i = 0; i < MAX_DRM_DEV; i++) {
+   ddr = drmcgrp->dev_resources[i];
+
+   if (ddr == NULL || ddr->ddev->vid != amd_drmcgrp_vendor_id)
+   continue;
+
+   a_ddr = ddr_amdddr(ddr);
+
+   seq_printf(sf, "card%d:\n", i);
+   for (j = 0; j < __MAX_AMDGPU_MEM_DOMAIN; j++)
+			seq_printf(sf, "  %s: %llu\n", amdgpu_mem_domain_names[j], a_ddr->bo_req_count[j]);
+   }
+
+   return 0;
+}
+
+
+
  void amdgpu_drmcgrp_count_cs(struct task_struct *task, struct drm_device *dev,
enum amdgpu_ring_type r_type)
  {
@@ -55,6 +106,11 @@ int amd_drmcgrp_cmd_submit_accounting_read(struct seq_file *sf, void *v)
  
  
  struct cftype files[] = {

+   {
+   .name = "x.bo_requested.amd.stat",
+   .seq_show = amd_drmcgrp_bo_req_stat_read,
+   .flags = CFTYPE_NOT_ON_ROOT,
+   },
{
.name = "x.cmd_submitted.amd.stat",
.seq_show = amd_drmcgrp_cmd_submit_accounting_read,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drmcgrp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_drmcgrp.h
index f894a9a1059f..8b9d61e47dde 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drmcgrp.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drmcgrp.h
@@ -11,10 +11,13 @@
  struct amd_drmcgrp_dev_resource {
struct drmcgrp_device_resource ddr;
u64 cs_count[__MAX_AMDGPU_RING_TYPE];
+   u64 bo_req_count[__MAX_AMDGPU_MEM_DOMAIN];
  };
  
  void amdgpu_drmcgrp_count_cs(struct task_struct *task, struct drm_device *dev,
 		enum amdgpu_ring_type r_type);
+void amdgpu_drmcgrp_count_bo_req(struct task_struct *task, struct drm_device *dev,
+		u32 domain, unsigned long size);
  
  static inline struct amd_drmcgrp_dev_resource *ddr_amdddr(struct drmcgrp_device_resource *ddr)

  {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 7b3d1ebda9df..339e1d3edad8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -31,6 +31,17 @@
  #include 
  #include "amdgpu.h"
  #include "amdgpu_display.h"
+#include "amdgpu_drmcgrp.h"
+
+char const *amdgpu_mem_domain_names[] = {
+   [AMDGPU_MEM_DOMAIN_CPU] = "cpu",
+   [AMDGPU_MEM_DOMAIN_GTT] = "gtt",
+   [AMDGPU_MEM_DOMAIN_VRAM]= "vram",
+   [AMDGPU_MEM_DOMAIN_GDS] = "gds",
+   [AMDGPU_MEM_DOMAIN_GWS] = "gws",
+   [AMDGPU_MEM_DOMAIN_OA]  = "oa",
+   [__MAX_AMDGPU_MEM_DOMAIN]   = "_max"
+};
  
  void amdgpu_gem_object_free(struct drm_gem_object *gobj)

  {
@@ -52,6 +63,8 @@ int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
struct amdgpu_bo_param bp;
int r;
  
+	amdgpu_drmcgrp_count_bo_req(current, adev->ddev, initial_domain, size);
+
 	memset(&bp, 0, sizeof(bp));

Re: [PATCH RFC 5/5] drm/amdgpu: Add accounting of buffer object creation request via DRM cgroup

2018-11-20 Thread Eric Anholt
Kenny Ho  writes:

> Account for the total size of buffer objects requested from amdgpu, by
> buffer type, on a per-cgroup basis.
>
> The x prefix in the control file name x.bo_requested.amd.stat signifies
> that the interface is experimental.

Why is counting the total size of buffer objects ever allocated useful,
as opposed to the current size of buffer objects allocated?

And, really, why is this stat in cgroups, instead of a debugfs entry?

