Re: [RFC PATCH 7/9] drmcg: Add initial support for tracking gpu time usage

2021-02-03 Thread Joonas Lahtinen
Quoting Brian Welty (2021-01-26 23:46:24)
> A single control, described below, is added to the DRM cgroup controller to track
> user execution time for GPU devices.  It is up to device drivers to
> charge execution time to the cgroup via drm_cgroup_try_charge().
> 
>   sched.runtime
>   Read-only value, displays current user execution time for each DRM
>   device. The expectation is that this is incremented by the DRM device
>   driver's scheduler upon user context completion or context switch.
>   Units of time are in microseconds, for consistency with cpu.stat.
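
For illustration only (not part of the patch): a minimal sketch of the driver-side charging described above, assuming the drm_cgroup_try_charge() prototype from this series. The context structure and its drmcg field are hypothetical driver state, and return-value handling is omitted.

  #include <drm/drm_cgroup.h>
  #include <linux/ktime.h>

  /* Hypothetical per-context driver state; drmcg would be resolved from
   * the owning process when the context is created. */
  struct example_ctx {
          struct drmcg *drmcg;
  };

  /* Called by the driver's scheduler when a user context completes or
   * is switched out. */
  static void example_charge_runtime(struct example_ctx *ctx,
                                     struct drm_device *dev,
                                     ktime_t start, ktime_t end)
  {
          /*
           * Charge the raw elapsed time; the sched.runtime read side in
           * this patch reports it in microseconds via ktime_to_us().
           */
          drm_cgroup_try_charge(ctx->drmcg, dev, DRMCG_TYPE_SCHED_RUNTIME,
                                ktime_sub(end, start));
  }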

Weren't we also planning for percentage-style budgeting?

Capping the maximum runtime is definitely useful, but in order to
configure a system for peaceful co-existence of two or more workloads,
we must also impose a limit on how big a portion of the instantaneous
capacity can be used.
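
Purely as a hypothetical sketch of the kind of limit meant here (none of
these controls or fields exist in the series as posted): budget each
cgroup's runtime within a short window and throttle once its configured
share of that window is used up.

  #include <linux/ktime.h>
  #include <linux/math64.h>

  #define EXAMPLE_PERIOD_NS (10 * NSEC_PER_MSEC)  /* arbitrary 10 ms window */

  /* period_start/period_runtime_ns would be hypothetical additions to
   * struct drmcg_device_resource; window reset handling is omitted. */
  static bool example_over_share(u64 period_runtime_ns, ktime_t period_start,
                                 ktime_t now, u32 share_pct)
  {
          /* Outside the current window: not over budget. */
          if (ktime_after(now, ktime_add_ns(period_start, EXAMPLE_PERIOD_NS)))
                  return false;

          /* Throttle once more than share_pct of the window was consumed. */
          return period_runtime_ns >
                 div_u64((u64)EXAMPLE_PERIOD_NS * share_pct, 100);
  }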

Regards, Joonas

> Signed-off-by: Brian Welty 
> ---
>  Documentation/admin-guide/cgroup-v2.rst |  9 +
>  include/drm/drm_cgroup.h|  2 ++
>  include/linux/cgroup_drm.h  |  2 ++
>  kernel/cgroup/drm.c | 20 
>  4 files changed, 33 insertions(+)
> 
> diff --git a/Documentation/admin-guide/cgroup-v2.rst 
> b/Documentation/admin-guide/cgroup-v2.rst
> index ccc25f03a898..f1d0f333a49e 100644
> --- a/Documentation/admin-guide/cgroup-v2.rst
> +++ b/Documentation/admin-guide/cgroup-v2.rst
> @@ -2205,6 +2205,15 @@ thresholds are hit, this would then allow the DRM 
> device driver to invoke
>  some equivalent to OOM-killer or forced memory eviction for the device
>  backed memory in order to attempt to free additional space.
>  
> +The below set of control files is for time accounting of DRM devices. Units
> +of time are in microseconds.
> +
> +  sched.runtime
> +Read-only value, displays current user execution time for each DRM
> +device. The expectation is that this is incremented by the DRM device
> +driver's scheduler upon user context completion or context switch.
> +
> +
>  Misc
>  
>  
> diff --git a/include/drm/drm_cgroup.h b/include/drm/drm_cgroup.h
> index 9ba0e372..315dab8a93b8 100644
> --- a/include/drm/drm_cgroup.h
> +++ b/include/drm/drm_cgroup.h
> @@ -22,6 +22,7 @@ enum drmcg_res_type {
> DRMCG_TYPE_MEM_CURRENT,
> DRMCG_TYPE_MEM_MAX,
> DRMCG_TYPE_MEM_TOTAL,
> +   DRMCG_TYPE_SCHED_RUNTIME,
> __DRMCG_TYPE_LAST,
>  };
>  
> @@ -79,5 +80,6 @@ void drm_cgroup_uncharge(struct drmcg *drmcg,struct 
> drm_device *dev,
>  enum drmcg_res_type type, u64 usage)
>  {
>  }
> +
>  #endif /* CONFIG_CGROUP_DRM */
>  #endif /* __DRM_CGROUP_H__ */
> diff --git a/include/linux/cgroup_drm.h b/include/linux/cgroup_drm.h
> index 3570636473cf..0fafa663321e 100644
> --- a/include/linux/cgroup_drm.h
> +++ b/include/linux/cgroup_drm.h
> @@ -19,6 +19,8 @@
>   */
>  struct drmcg_device_resource {
> struct page_counter memory;
> +   seqlock_t sched_lock;
> +   u64 exec_runtime;
>  };
>  
>  /**
> diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c
> index 08e75eb67593..64e9d0dbe8c8 100644
> --- a/kernel/cgroup/drm.c
> +++ b/kernel/cgroup/drm.c
> @@ -81,6 +81,7 @@ static inline int init_drmcg_single(struct drmcg *drmcg, 
> struct drm_device *dev)
> /* set defaults here */
> page_counter_init(&ddr->memory,
>   parent_ddr ? &parent_ddr->memory : NULL);
> +   seqlock_init(&ddr->sched_lock);
> drmcg->dev_resources[minor] = ddr;
>  
> return 0;
> @@ -287,6 +288,10 @@ static int drmcg_seq_show_fn(int id, void *ptr, void 
> *data)
> seq_printf(sf, "%d:%d %llu\n", DRM_MAJOR, minor->index,
>minor->dev->drmcg_props.memory_total);
> break;
> +   case DRMCG_TYPE_SCHED_RUNTIME:
> +   seq_printf(sf, "%d:%d %llu\n", DRM_MAJOR, minor->index,
> +  ktime_to_us(ddr->exec_runtime));
> +   break;
> default:
> seq_printf(sf, "%d:%d\n", DRM_MAJOR, minor->index);
> break;
> @@ -384,6 +389,12 @@ struct cftype files[] = {
> .private = DRMCG_TYPE_MEM_TOTAL,
> .flags = CFTYPE_ONLY_ON_ROOT,
> },
> +   {
> +   .name = "sched.runtime",
> +   .seq_show = drmcg_seq_show,
> +   .private = DRMCG_TYPE_SCHED_RUNTIME,
> +   .flags = CFTYPE_NOT_ON_ROOT,
> +   },
> { } /* terminate */
>  };
>  
> @@ -440,6 +451,10 @@ EXPORT_SYMBOL(drmcg_device_early_init);
>   * choose to enact some form of memory reclaim, but the exact behavior is 
> left
>   * to the DRM device driver to define.
>   *
> + * For @res type of DRMCG_TYPE_SCHED_RUNTIME:
> + * For GPU time accounting, add @usage amount of GPU time to @drmcg for
> + * the given device.
> + *
>   * Returns 0 on success.  Otherwise, an error code is returned.
>   */
>  int drm_cgroup_try_charge(struct 
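
The drm_cgroup_try_charge() body is truncated above; as a sketch only, the
sched_lock/exec_runtime pair added by this patch could be updated by the
charge path and read back consistently along these lines:

  #include <linux/cgroup_drm.h>
  #include <linux/seqlock.h>

  /* Writer side: called from the charge path with the elapsed time. */
  static void example_add_runtime(struct drmcg_device_resource *ddr, u64 delta)
  {
          write_seqlock(&ddr->sched_lock);
          ddr->exec_runtime += delta;
          write_sequnlock(&ddr->sched_lock);
  }

  /* Reader side: retry until a consistent snapshot is observed. */
  static u64 example_read_runtime(struct drmcg_device_resource *ddr)
  {
          unsigned int seq;
          u64 runtime;

          do {
                  seq = read_seqbegin(&ddr->sched_lock);
                  runtime = ddr->exec_runtime;
          } while (read_seqretry(&ddr->sched_lock, seq));

          return runtime;
  }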

Re: [PATCH 07/15] drm/i915: Remove references to struct drm_device.pdev

2020-11-27 Thread Joonas Lahtinen
Quoting Thomas Zimmermann (2020-11-24 13:38:16)
> Using struct drm_device.pdev is deprecated. Convert i915 to struct
> drm_device.dev. No functional changes.
> 
> Signed-off-by: Thomas Zimmermann 
> Cc: Jani Nikula 
> Cc: Joonas Lahtinen 
> Cc: Rodrigo Vivi 

Any chance of sharing the cocci script(s) used? I think this will
hit many in-flight series, so life would be made easier :)

Or is this done manually? I notice a few places hoist the pdev
variable and others repeat the call. Regardless, using the cocci
script as a baseline would make review a bit more comforting.
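
For reference, the manual conversion in the hunks below boils down to this
pattern (the helper names here are invented for illustration):

  #include <linux/pci.h>

  /* Before: read the deprecated drm_device.pdev field directly. */
  static struct pci_dev *old_get_pdev(struct drm_i915_private *dev_priv)
  {
          return dev_priv->drm.pdev;
  }

  /* After: recover the pci_dev from the generic struct device. */
  static struct pci_dev *new_get_pdev(struct drm_i915_private *dev_priv)
  {
          return to_pci_dev(dev_priv->drm.dev);
  }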

The gvt changes would go in through the gvt tree, and we probably
also need to split between drm-intel-next and drm-intel-gt-next.

Jani or Rodrigo, any thoughts?

Regards, Joonas

> ---
>  drivers/gpu/drm/i915/display/intel_bios.c |  2 +-
>  drivers/gpu/drm/i915/display/intel_cdclk.c| 14 ++---
>  drivers/gpu/drm/i915/display/intel_csr.c  |  2 +-
>  drivers/gpu/drm/i915/display/intel_dsi_vbt.c  |  2 +-
>  drivers/gpu/drm/i915/display/intel_fbdev.c|  2 +-
>  drivers/gpu/drm/i915/display/intel_gmbus.c|  2 +-
>  .../gpu/drm/i915/display/intel_lpe_audio.c|  5 +++--
>  drivers/gpu/drm/i915/display/intel_opregion.c |  6 +++---
>  drivers/gpu/drm/i915/display/intel_overlay.c  |  2 +-
>  drivers/gpu/drm/i915/display/intel_panel.c|  4 ++--
>  drivers/gpu/drm/i915/display/intel_quirks.c   |  2 +-
>  drivers/gpu/drm/i915/display/intel_sdvo.c |  2 +-
>  drivers/gpu/drm/i915/display/intel_vga.c  |  8 
>  drivers/gpu/drm/i915/gem/i915_gem_phys.c  |  6 +++---
>  drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  2 +-
>  drivers/gpu/drm/i915/gt/intel_engine_cs.c |  2 +-
>  drivers/gpu/drm/i915/gt/intel_ggtt.c  | 10 +-
>  drivers/gpu/drm/i915/gt/intel_ppgtt.c |  2 +-
>  drivers/gpu/drm/i915/gt/intel_rc6.c   |  4 ++--
>  drivers/gpu/drm/i915/gt/intel_reset.c |  6 +++---
>  drivers/gpu/drm/i915/gvt/cfg_space.c  |  5 +++--
>  drivers/gpu/drm/i915/gvt/firmware.c   | 10 +-
>  drivers/gpu/drm/i915/gvt/gtt.c| 12 +--
>  drivers/gpu/drm/i915/gvt/gvt.c|  6 +++---
>  drivers/gpu/drm/i915/gvt/kvmgt.c  |  4 ++--
>  drivers/gpu/drm/i915/i915_debugfs.c   |  2 +-
>  drivers/gpu/drm/i915/i915_drv.c   | 20 +--
>  drivers/gpu/drm/i915/i915_drv.h   |  2 +-
>  drivers/gpu/drm/i915/i915_gem_gtt.c   |  4 ++--
>  drivers/gpu/drm/i915/i915_getparam.c  |  5 +++--
>  drivers/gpu/drm/i915/i915_gpu_error.c |  2 +-
>  drivers/gpu/drm/i915/i915_irq.c   |  6 +++---
>  drivers/gpu/drm/i915/i915_pmu.c   |  5 +++--
>  drivers/gpu/drm/i915/i915_suspend.c   |  4 ++--
>  drivers/gpu/drm/i915/i915_switcheroo.c|  4 ++--
>  drivers/gpu/drm/i915/i915_vgpu.c  |  2 +-
>  drivers/gpu/drm/i915/intel_device_info.c  |  2 +-
>  drivers/gpu/drm/i915/intel_region_lmem.c  |  8 
>  drivers/gpu/drm/i915/intel_runtime_pm.c   |  2 +-
>  drivers/gpu/drm/i915/intel_uncore.c   |  4 ++--
>  .../gpu/drm/i915/selftests/mock_gem_device.c  |  1 -
>  drivers/gpu/drm/i915/selftests/mock_gtt.c |  2 +-
>  42 files changed, 99 insertions(+), 98 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_bios.c 
> b/drivers/gpu/drm/i915/display/intel_bios.c
> index 4cc949b228f2..8879676372a3 100644
> --- a/drivers/gpu/drm/i915/display/intel_bios.c
> +++ b/drivers/gpu/drm/i915/display/intel_bios.c
> @@ -2088,7 +2088,7 @@ bool intel_bios_is_valid_vbt(const void *buf, size_t 
> size)
>  
>  static struct vbt_header *oprom_get_vbt(struct drm_i915_private *dev_priv)
>  {
> -   struct pci_dev *pdev = dev_priv->drm.pdev;
> +   struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
> void __iomem *p = NULL, *oprom;
> struct vbt_header *vbt;
> u16 vbt_size;
> diff --git a/drivers/gpu/drm/i915/display/intel_cdclk.c 
> b/drivers/gpu/drm/i915/display/intel_cdclk.c
> index c449d28d0560..a6e13208dc50 100644
> --- a/drivers/gpu/drm/i915/display/intel_cdclk.c
> +++ b/drivers/gpu/drm/i915/display/intel_cdclk.c
> @@ -96,7 +96,7 @@ static void fixed_450mhz_get_cdclk(struct drm_i915_private 
> *dev_priv,
>  static void i85x_get_cdclk(struct drm_i915_private *dev_priv,
>struct intel_cdclk_config *cdclk_config)
>  {
> -   struct pci_dev *pdev = dev_priv->drm.pdev;
> +   struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
> u16 hpllcc = 0;
>  
> /*
> @@ -138,7 +138,7 @@ static void i85x_get_cdclk(struct drm_i915_private 
> *dev_priv,
>  static void i9

Re: [PATCH 06/34] drm/i915: convert put_page() to put_user_page*()

2019-08-02 Thread Joonas Lahtinen
Quoting john.hubb...@gmail.com (2019-08-02 05:19:37)
> From: John Hubbard 
> 
> For pages that were retained via get_user_pages*(), release those pages
> via the new put_user_page*() routines, instead of via put_page() or
> release_pages().
> 
> This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
> ("mm: introduce put_user_page*(), placeholder versions").
> 
> Note that this effectively changes the code's behavior in
> i915_gem_userptr_put_pages(): it now calls set_page_dirty_lock(),
> instead of set_page_dirty(). This is probably more accurate.
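
A minimal sketch of the shape of that conversion (not the exact i915 hunk):
pages obtained with get_user_pages*() are dirtied with set_page_dirty_lock()
and released with put_user_page() rather than set_page_dirty()/put_page().

  #include <linux/mm.h>

  static void example_release_user_pages(struct page **pages,
                                         unsigned long npages, bool dirty)
  {
          unsigned long i;

          for (i = 0; i < npages; i++) {
                  if (dirty)
                          set_page_dirty_lock(pages[i]);
                  put_user_page(pages[i]);  /* was: put_page(pages[i]) */
          }
  }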

We've already fixed this in drm-tip where the current code uses
set_page_dirty_lock().

This would conflict with our tree. Rodrigo is handling
drm-intel-next for 5.4, so you guys will want to coordinate how
to merge.

Regards, Joonas

Re: [Intel-gfx] [PATCH RFC 2/5] cgroup: Add mechanism to register vendor specific DRM devices

2018-12-05 Thread Joonas Lahtinen
Quoting Kuehling, Felix (2018-12-03 22:55:16)
> 
> On 2018-11-28 4:14 a.m., Joonas Lahtinen wrote:
> > Quoting Ho, Kenny (2018-11-27 17:41:17)
> >> On Tue, Nov 27, 2018 at 4:46 AM Joonas Lahtinen 
> >>  wrote:
> >>> I think a more abstract property "% of GPU (processing power)" might
> >>> be a more universal approach. One can then implement that through
> >>> subdividing the resources or timeslicing them, depending on the GPU
> >>> topology.
> >>>
> >>> Leasing 1/8th, 1/4th or 1/2 of the GPU would probably be the most
> >>> applicable to cloud provider usecases, too. At least that's what I
> >>> see done for the CPUs today.
> >> I think there are opportunities to slice the gpu in more than one way 
> >> (similar to the way it is done for cpu.)  We can potentially frame 
> >> resources as continuous or discrete.  Percentage definitely fits well for 
> >> continuous measurements such as time/time slices but I think there are 
> >> places for discrete units such as core counts as well.
> > I think the ask in response to the earlier series from Intel was to agree
> > on the variables that could be common to all of the DRM subsystem.
> >
> > So we can only choose the lowest common denominator, right?
> >
> > Any core count out of total core count should translate nicely into a
> > fraction, so what would be the problem with percentage amounts?
> How would you handle overcommitment with a percentage? That is, more
> than 100% of the GPU cores assigned to cgroups. Which cgroups end up
> sharing cores would be up to chance.

I see your point. With time-slicing, you really can't overcommit. So I
would assume that there would have to be a second level of detail
provided for overcommitting (and deciding which cgroups are to share GPU
cores).

> If we allow specifying a set of GPU cores, we can be more specific in
> assigning and sharing resources between cgroups.

As Matt outlined in the other reply to this thread, we don't really have
the concept of GPU cores. We do have the command streamers, but the
granularity is a bit low.

In your architecture, does it matter which specific cores are shared, or
is it just a question of which specific cgroups would share some cores
in case of overcommit?

If we tack priority on in addition to the percentage, you could choose
to share cores only at an identical priority level. That'd mean that in
the case of overcommit, you'd aim to keep as many high-priority levels
free of overcommit as possible, and only start overcommitting for the
lower-priority cgroups.

Would that even partially address the concern?
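
Purely as an illustration of that idea (nothing like this exists in any
posted series): with a (priority, share) pair per cgroup, cores would be
shared only between cgroups at the same priority level once the device is
overcommitted, and a placement policy would then push the overcommit onto
the lowest priority levels first.

  #include <linux/types.h>

  /* Hypothetical per-cgroup GPU scheduling parameters. */
  struct example_gpu_params {
          int prio;       /* lower value = higher priority */
          u32 share_pct;  /* requested % of the GPU */
  };

  /* May cgroups a and b be placed on the same core/engine? */
  static bool example_may_share(const struct example_gpu_params *a,
                                const struct example_gpu_params *b,
                                u32 total_committed_pct)
  {
          /* No sharing is needed while the device is not overcommitted. */
          if (total_committed_pct <= 100)
                  return true;

          /* Under overcommit, allow sharing only within one priority level. */
          return a->prio == b->prio;
  }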

Regards, Joonas

> 
> Regards,
>   Felix
> 
> 
> >
> > Regards, Joonas
> >
> >> Regards,
> >> Kenny
> >>
> >>> That combined with the "GPU memory usable" property should be a good
> >>> starting point to start subdividing the GPU resources for multiple
> >>> users.
> >>>
> >>> Regards, Joonas
> >>>
> >>>> Your feedback is highly appreciated.
> >>>>
> >>>> Best Regards,
> >>>> Harish
> >>>>
> >>>>
> >>>>
> >>>> From: amd-gfx  on behalf of Tejun 
> >>>> Heo 
> >>>> Sent: Tuesday, November 20, 2018 5:30 PM
> >>>> To: Ho, Kenny
> >>>> Cc: cgro...@vger.kernel.org; intel-...@lists.freedesktop.org; 
> >>>> y2ke...@gmail.com; amd-gfx@lists.freedesktop.org; 
> >>>> dri-de...@lists.freedesktop.org
> >>>> Subject: Re: [PATCH RFC 2/5] cgroup: Add mechanism to register vendor 
> >>>> specific DRM devices
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>> On Tue, Nov 20, 2018 at 10:21:14PM +, Ho, Kenny wrote:
> >>>>> By this reply, are you suggesting that vendor specific resources
> >>>>> will never be acceptable to be managed under cgroup?  Let's say a user
> >>>> I wouldn't say never, but whatever gets included as a cgroup
> >>>> controller should have clearly defined resource abstractions and the
> >>>> control schemes around them including support for delegation.  AFAICS,
> >>>> gpu side still seems to have a long way to go (and it's not clear
> >>>> whether that's somewhere it will or needs to end up).
> >>>>
> >>>>> want to have similar functionality as what cgroup is offering but to
> >>>>> manage vendor specific r

RE: [Intel-gfx] [PATCH RFC 2/5] cgroup: Add mechanism to register vendor specific DRM devices

2018-11-28 Thread Joonas Lahtinen
Quoting Ho, Kenny (2018-11-27 17:41:17)
> On Tue, Nov 27, 2018 at 4:46 AM Joonas Lahtinen 
>  wrote:
> > I think a more abstract property "% of GPU (processing power)" might
> > be a more universal approach. One can then implement that through
> > subdividing the resources or timeslicing them, depending on the GPU
> > topology.
> >
> > Leasing 1/8th, 1/4th or 1/2 of the GPU would probably be the most
> > applicable to cloud provider usecases, too. At least that's what I
> > see done for the CPUs today.
> I think there are opportunities to slice the gpu in more than one way 
> (similar to the way it is done for cpu.)  We can potentially frame resources 
> as continuous or discrete.  Percentage definitely fits well for continuous 
> measurements such as time/time slices but I think there are places for 
> discrete units such as core counts as well.

I think the ask in response to the earlier series from Intel was to agree
on the variables that could be common to all of the DRM subsystem.

So we can only choose the lowest common denominator, right?

Any core count out of total core count should translate nicely into a
fraction, so what would be the problem with percentage amounts?

Regards, Joonas

> 
> Regards,
> Kenny
> 
> > That combined with the "GPU memory usable" property should be a good
> > starting point to start subdividing the GPU resources for multiple
> > users.
> >
> > Regards, Joonas
> >
> > >
> > > Your feedback is highly appreciated.
> > >
> > > Best Regards,
> > > Harish
> > >
> > >
> > >
> > > From: amd-gfx  on behalf of Tejun 
> > > Heo 
> > > Sent: Tuesday, November 20, 2018 5:30 PM
> > > To: Ho, Kenny
> > > Cc: cgro...@vger.kernel.org; intel-...@lists.freedesktop.org; 
> > > y2ke...@gmail.com; amd-gfx@lists.freedesktop.org; 
> > > dri-de...@lists.freedesktop.org
> > > Subject: Re: [PATCH RFC 2/5] cgroup: Add mechanism to register vendor 
> > > specific DRM devices
> > >
> > >
> > > Hello,
> > >
> > > On Tue, Nov 20, 2018 at 10:21:14PM +, Ho, Kenny wrote:
> > > > By this reply, are you suggesting that vendor specific resources
> > > > will never be acceptable to be managed under cgroup?  Let's say a user
> > >
> > > I wouldn't say never, but whatever gets included as a cgroup
> > > controller should have clearly defined resource abstractions and the
> > > control schemes around them including support for delegation.  AFAICS,
> > > gpu side still seems to have a long way to go (and it's not clear
> > > whether that's somewhere it will or needs to end up).
> > >
> > > > want to have similar functionality as what cgroup is offering but to
> > > > manage vendor specific resources, what would you suggest as a
> > > > solution?  When you say keeping vendor specific resource regulation
> > > > inside drm or specific drivers, do you mean we should replicate the
> > > > cgroup infrastructure there or do you mean either drm or specific
> > > > driver should query existing hierarchy (such as device or perhaps
> > > > cpu) for the process organization information?
> > > >
> > > > To put the questions in more concrete terms, let's say a user wants to
> > > > expose certain part of a gpu to a particular cgroup similar to the
> > > > way selective cpu cores are exposed to a cgroup via cpuset, how
> > > > should we go about enabling such functionality?
> > >
> > > Do what the intel driver or bpf is doing?  It's not difficult to hook
> > > into cgroup for identification purposes.
> > >
> > > Thanks.
> > >
> > > --
> > > tejun


Re: [Intel-gfx] [PATCH RFC 2/5] cgroup: Add mechanism to register vendor specific DRM devices

2018-11-27 Thread Joonas Lahtinen
Quoting Kasiviswanathan, Harish (2018-11-26 22:59:30)
> Thanks Tejun, Eric and Christian for your replies.
> 
> We want GPU resource management to work seamlessly with containers and 
> container orchestration. With the Intel / bpf based approach this is not 
> possible. 
> 
> From your response we gather the following: GPU resources need to be 
> abstracted. We will send a new proposal in the same vein. Our current thinking 
> is to start with a single abstracted resource and build a framework that can 
> be expanded to include additional resources. We plan to start with “GPU 
> cores”. We believe all GPUs have some concept of cores or compute units.

I think a more abstract property "% of GPU (processing power)" might
be a more universal approach. One can then implement that through
subdividing the resources or timeslicing them, depending on the GPU
topology.
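
As a hypothetical sketch of the timeslicing variant: a driver could
translate such a percentage into a timeslice budget per scheduling period
(the period length and helper below are invented for illustration).

  #include <linux/math64.h>

  #define EXAMPLE_SCHED_PERIOD_US 10000  /* 10 ms scheduling period */

  /* Convert an abstract "% of GPU" share into a per-period budget. */
  static u64 example_share_to_budget_us(u32 share_pct)
  {
          if (share_pct > 100)
                  share_pct = 100;
          return div_u64((u64)EXAMPLE_SCHED_PERIOD_US * share_pct, 100);
  }

A 25% share would then map to roughly 2.5 ms of GPU time per 10 ms period
on a timeslicing implementation.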

Leasing 1/8th, 1/4th or 1/2 of the GPU would probably be the most
applicable to cloud provider usecases, too. At least that's what I
see done for the CPUs today.

That combined with the "GPU memory usable" property should be a good
starting point to start subdividing the GPU resources for multiple
users.

Regards, Joonas

> 
> Your feedback is highly appreciated.
> 
> Best Regards,
> Harish
> 
> 
> 
> From: amd-gfx  on behalf of Tejun Heo 
> 
> Sent: Tuesday, November 20, 2018 5:30 PM
> To: Ho, Kenny
> Cc: cgro...@vger.kernel.org; intel-...@lists.freedesktop.org; 
> y2ke...@gmail.com; amd-gfx@lists.freedesktop.org; 
> dri-de...@lists.freedesktop.org
> Subject: Re: [PATCH RFC 2/5] cgroup: Add mechanism to register vendor 
> specific DRM devices
>   
> 
> Hello,
> 
> On Tue, Nov 20, 2018 at 10:21:14PM +, Ho, Kenny wrote:
> > By this reply, are you suggesting that vendor specific resources
> > will never be acceptable to be managed under cgroup?  Let's say a user
> 
> I wouldn't say never, but whatever gets included as a cgroup
> controller should have clearly defined resource abstractions and the
> control schemes around them including support for delegation.  AFAICS,
> gpu side still seems to have a long way to go (and it's not clear
> whether that's somewhere it will or needs to end up).
> 
> > want to have similar functionality as what cgroup is offering but to
> > manage vendor specific resources, what would you suggest as a
> > solution?  When you say keeping vendor specific resource regulation
> > inside drm or specific drivers, do you mean we should replicate the
> > cgroup infrastructure there or do you mean either drm or specific
> > driver should query existing hierarchy (such as device or perhaps
> > cpu) for the process organization information?
> > 
> > To put the questions in more concrete terms, let's say a user wants to
> > expose certain part of a gpu to a particular cgroup similar to the
> > way selective cpu cores are exposed to a cgroup via cpuset, how
> > should we go about enabling such functionality?
> 
> Do what the intel driver or bpf is doing?  It's not difficult to hook
> into cgroup for identification purposes.
> 
> Thanks.
> 
> -- 
> tejun