Re: [Intel-gfx] [PATCH 3/4] drm/i915: expose engine availability through sysfs

2017-11-29 Thread Tvrtko Ursulin


On 28/11/2017 21:39, Chris Wilson wrote:

Quoting Chris Wilson (2017-11-28 20:56:23)

Quoting Tvrtko Ursulin (2017-11-28 18:17:54)


On 20/11/2017 12:23, Lionel Landwerlin wrote:

This enables userspace to discover the engines available on the GPU.
Here is the layout on a Skylake GT4:

/sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt


On this one I think Joonas had a concern that it is difficult for
userspace to get to the sysfs root from the drm file descriptor.

Lionel pointed out that for master nodes it is quite easy:

fstat(fd, &st);
sprintf(sysfs_root, "/sys/dev/char/%u:%u",
        major(st.st_rdev), minor(st.st_rdev));

For render nodes it is trickier in that they would additionally have to
resolve an unknown master drm card number. For instance:

/sys/dev/char/%u:%u/device/drm/cardX

Where the "X" is unknown.

But is it even a use case for render nodes to try to get to the
sysfs root of the master card? Are they allowed to do so?


Yes. Mesa uses render nodes, and mesa needs the topology for its
performance monitoring api.


So /sys/dev/char/226:128/ does not link to /sys/dev/char/226:0/. Maybe
we should just add a card symlink from each minor back to our sysfs
root? That seems doable.

Then we just need open("/sys/dev/char/%u:%u/card");


There is a link via what I wrote above - 
"/sys/dev/char/%u:%u/device/drm/cardX", but the "X" part is cumbersome.


Adding a link with a constant name would indeed be better, but I wasn't
sure whether it is conceptually allowed, and I forgot to mention it.


Regards,

Tvrtko
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



Re: [Intel-gfx] [PATCH 3/4] drm/i915: expose engine availability through sysfs

2017-11-28 Thread Tvrtko Ursulin

On 20/11/2017 12:23, Lionel Landwerlin wrote:
> This enables userspace to discover the engines available on the GPU.
> Here is the layout on a Skylake GT4:
> 
> /sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt

On this one I think Joonas had a concern that it is difficult for
userspace to get to the sysfs root from the drm file descriptor.

Lionel pointed out that for master nodes it is quite easy:

fstat(fd, &st);
sprintf(sysfs_root, "/sys/dev/char/%u:%u",
        major(st.st_rdev), minor(st.st_rdev));

For render nodes it is trickier in that they would additionally have to
resolve an unknown master drm card number. For instance:

/sys/dev/char/%u:%u/device/drm/cardX

Where the "X" is unknown.

But is it even a use case for render nodes to try to get to the
sysfs root of the master card? Are they allowed to do so?

Regards,

Tvrtko


> ├── bcs
> │   └── 0
> │   ├── capabilities
> │   ├── class
> │   └── id
> ├── rcs
> │   └── 0
> │   ├── capabilities
> │   ├── class
> │   └── id
> ├── vcs
> │   ├── 0
> │   │   ├── capabilities
> │   │   │   └── hevc
> │   │   ├── class
> │   │   └── id
> │   └── 1
> │   ├── capabilities
> │   ├── class
> │   └── id
> └── vecs
>  └── 0
>  ├── capabilities
>  ├── class
>  └── id
> 
> Further capabilities can be added later as attributes of each engine.
> 
> v2: Add capabilities sub directory (Tvrtko)
>  Move engines directory to drm/card/gt (Chris)
> 
> Signed-off-by: Lionel Landwerlin 
> ---
>   drivers/gpu/drm/i915/i915_drv.h |   5 +
>   drivers/gpu/drm/i915/i915_reg.h |   1 +
>   drivers/gpu/drm/i915/i915_sysfs.c   | 160 
> 
>   drivers/gpu/drm/i915/intel_engine_cs.c  |  12 +++
>   drivers/gpu/drm/i915/intel_ringbuffer.h |   4 +
>   5 files changed, 182 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 78c49db4280a..db550322207c 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2729,6 +2729,11 @@ struct drm_i915_private {
>   } oa;
>   } perf;
>   
> + struct {
> + struct kobject kobj;
> + struct kobject classes_kobjs[MAX_ENGINE_CLASS];
> + } gt_topology;
> +
>   /* Abstract the submission mechanism (legacy ringbuffer or execlists) 
> away */
>   struct {
>   void (*resume)(struct drm_i915_private *);
> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> index 96c80fa0fcac..17aecd4fc6aa 100644
> --- a/drivers/gpu/drm/i915/i915_reg.h
> +++ b/drivers/gpu/drm/i915/i915_reg.h
> @@ -186,6 +186,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
>   #define VIDEO_ENHANCEMENT_CLASS 2
>   #define COPY_ENGINE_CLASS   3
>   #define OTHER_CLASS 4
> +#define MAX_ENGINE_CLASS 5
>   
>   /* PCI config space */
>   
> diff --git a/drivers/gpu/drm/i915/i915_sysfs.c 
> b/drivers/gpu/drm/i915/i915_sysfs.c
> index 791759f632e1..fd04d0b93eaf 100644
> --- a/drivers/gpu/drm/i915/i915_sysfs.c
> +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> @@ -559,6 +559,160 @@ static void i915_setup_error_capture(struct device 
> *kdev) {}
>   static void i915_teardown_error_capture(struct device *kdev) {}
>   #endif
>   
> +static struct attribute engine_id_attr = {
> + .name = "id",
> + .mode = 0444,
> +};
> +
> +static struct attribute engine_class_attr = {
> + .name = "class",
> + .mode = 0444,
> +};
> +
> +static ssize_t
> +show_engine_attr(struct kobject *kobj, struct attribute *attr, char *buf)
> +{
> + struct intel_engine_cs *engine =
> + container_of(kobj, struct intel_engine_cs, instance_kobj);
> +
> + if (attr == &engine_id_attr)
> + return sprintf(buf, "%hhu\n", engine->uabi_id);
> + if (attr == &engine_class_attr)
> + return sprintf(buf, "%hhu\n", engine->uabi_class);
> + return sprintf(buf, "\n");
> +}
> +
> +static const struct sysfs_ops engine_ops = {
> + .show = show_engine_attr,
> +};
> +
> +static struct kobj_type engine_type = {
> + .sysfs_ops = &engine_ops,
> +};
> +
> +static struct attribute engine_capability_hevc_attr = {
> + .name = "hevc",
> + .mode = 0444,
> +};
> +
> +static ssize_t
> +show_engine_capabilities_attr(struct kobject *kobj, struct attribute *attr, 
> char *buf)
> +{
> + struct intel_engine_cs *engine =
> + container_of(kobj, struct intel_engine_cs, capabilities_kobj);
> +
> + if (attr == &engine_capability_hevc_attr)
> + return sprintf(buf, "%i\n", INTEL_GEN(engine->i915) >= 8);
> + return sprintf(buf, "\n");
> +}
> +
> +static const struct sysfs_ops engine_capabilities_ops = {
> + .show = show_engine_capabilities_attr,
> +};
> +
> +static struct kobj_type engine_capabilities_type = {
> + .sysfs_ops = &engine_capabilities_ops,
> +};
> +
> +static int i915_setup_engines_sysfs(struct drm_i915_private *dev_priv,
> +  

Re: [Intel-gfx] [PATCH 3/4] drm/i915: expose engine availability through sysfs

2017-11-21 Thread Ewelina Musial
On Mon, Nov 20, 2017 at 03:57:49PM +, Tvrtko Ursulin wrote:
> 
> On 20/11/2017 12:23, Lionel Landwerlin wrote:
> > This enables userspace to discover the engines available on the GPU.
> > Here is the layout on a Skylake GT4:
> > 
> > /sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt
> > ├── bcs
> 
> It niggles me a bit that engine class names are directly under gt. If we end
> up having to add something else under gt then it will be a little bit of an
> enumeration trouble for userspace.
> 
> So I wanted to suggest gt/engines/rcs if that makes sense to people?
> 
> Regards,
> 
> Tvrtko
> 
Good suggestion. With that, adding anything else under gt/ later will be
cleaner.

--
Cheers,
Ewelina
> > │   └── 0
> > │   ├── capabilities
> > │   ├── class
> > │   └── id
> > ├── rcs
> > │   └── 0
> > │   ├── capabilities
> > │   ├── class
> > │   └── id
> > ├── vcs
> > │   ├── 0
> > │   │   ├── capabilities
> > │   │   │   └── hevc
> > │   │   ├── class
> > │   │   └── id
> > │   └── 1
> > │   ├── capabilities
> > │   ├── class
> > │   └── id
> > └── vecs
> >  └── 0
> >  ├── capabilities
> >  ├── class
> >  └── id
> > 
> > Further capabilities can be added later as attributes of each engine.
> > 
> > v2: Add capabilities sub directory (Tvrtko)
> >  Move engines directory to drm/card/gt (Chris)
> > 
> > Signed-off-by: Lionel Landwerlin 
> > ---
> >   drivers/gpu/drm/i915/i915_drv.h |   5 +
> >   drivers/gpu/drm/i915/i915_reg.h |   1 +
> >   drivers/gpu/drm/i915/i915_sysfs.c   | 160 
> > 
> >   drivers/gpu/drm/i915/intel_engine_cs.c  |  12 +++
> >   drivers/gpu/drm/i915/intel_ringbuffer.h |   4 +
> >   5 files changed, 182 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h 
> > b/drivers/gpu/drm/i915/i915_drv.h
> > index 78c49db4280a..db550322207c 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -2729,6 +2729,11 @@ struct drm_i915_private {
> > } oa;
> > } perf;
> > +   struct {
> > +   struct kobject kobj;
> > +   struct kobject classes_kobjs[MAX_ENGINE_CLASS];
> > +   } gt_topology;
> > +
> > /* Abstract the submission mechanism (legacy ringbuffer or execlists) 
> > away */
> > struct {
> > void (*resume)(struct drm_i915_private *);
> > diff --git a/drivers/gpu/drm/i915/i915_reg.h 
> > b/drivers/gpu/drm/i915/i915_reg.h
> > index 96c80fa0fcac..17aecd4fc6aa 100644
> > --- a/drivers/gpu/drm/i915/i915_reg.h
> > +++ b/drivers/gpu/drm/i915/i915_reg.h
> > @@ -186,6 +186,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
> >   #define VIDEO_ENHANCEMENT_CLASS   2
> >   #define COPY_ENGINE_CLASS 3
> >   #define OTHER_CLASS   4
> > +#define MAX_ENGINE_CLASS   5
> >   /* PCI config space */
> > diff --git a/drivers/gpu/drm/i915/i915_sysfs.c 
> > b/drivers/gpu/drm/i915/i915_sysfs.c
> > index 791759f632e1..fd04d0b93eaf 100644
> > --- a/drivers/gpu/drm/i915/i915_sysfs.c
> > +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> > @@ -559,6 +559,160 @@ static void i915_setup_error_capture(struct device 
> > *kdev) {}
> >   static void i915_teardown_error_capture(struct device *kdev) {}
> >   #endif
> > +static struct attribute engine_id_attr = {
> > +   .name = "id",
> > +   .mode = 0444,
> > +};
> > +
> > +static struct attribute engine_class_attr = {
> > +   .name = "class",
> > +   .mode = 0444,
> > +};
> > +
> > +static ssize_t
> > +show_engine_attr(struct kobject *kobj, struct attribute *attr, char *buf)
> > +{
> > +   struct intel_engine_cs *engine =
> > +   container_of(kobj, struct intel_engine_cs, instance_kobj);
> > +
> > +   if (attr == &engine_id_attr)
> > +   return sprintf(buf, "%hhu\n", engine->uabi_id);
> > +   if (attr == &engine_class_attr)
> > +   return sprintf(buf, "%hhu\n", engine->uabi_class);
> > +   return sprintf(buf, "\n");
> > +}
> > +
> > +static const struct sysfs_ops engine_ops = {
> > +   .show = show_engine_attr,
> > +};
> > +
> > +static struct kobj_type engine_type = {
> > +   .sysfs_ops = &engine_ops,
> > +};
> > +
> > +static struct attribute engine_capability_hevc_attr = {
> > +   .name = "hevc",
> > +   .mode = 0444,
> > +};
> > +
> > +static ssize_t
> > +show_engine_capabilities_attr(struct kobject *kobj, struct attribute 
> > *attr, char *buf)
> > +{
> > +   struct intel_engine_cs *engine =
> > +   container_of(kobj, struct intel_engine_cs, capabilities_kobj);
> > +
> > +   if (attr == &engine_capability_hevc_attr)
> > +   return sprintf(buf, "%i\n", INTEL_GEN(engine->i915) >= 8);
> > +   return sprintf(buf, "\n");
> > +}
> > +
> > +static const struct sysfs_ops engine_capabilities_ops = {
> > +   .show = show_engine_capabilities_attr,
> > +};
> > +
> > +static struct kobj_type engine_capabilities_type = {
> > +   .sysfs_ops = &engine_capabilities_ops,
> > +};
> > +
> > +static int i915_setup_engines_sysfs(struct 

Re: [Intel-gfx] [PATCH 3/4] drm/i915: expose engine availability through sysfs

2017-11-20 Thread Tvrtko Ursulin



On 20/11/2017 16:33, Lionel Landwerlin wrote:

On 20/11/17 16:03, Tvrtko Ursulin wrote:
- I would need engine->instance here as well. 

Sure.
I thought that wasn't uabi though.


Then again engine->instance is already there as the containing directory 
name. So do I need it in a separate file, not sure.. so many questions, 
so little time.. :)
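Indeed, a consumer can recover the instance from the relative <class>/<instance> directory name without a separate attribute file. A minimal userspace sketch (the helper name is hypothetical):

```c
#include <stdio.h>
#include <string.h>

/* Split a relative engine directory path like "vcs/1" into its class
 * name and instance number. Returns 0 on success, -1 on malformed input. */
static int parse_engine_dir(const char *path, char *cls, size_t cls_len,
			    unsigned int *instance)
{
	const char *slash = strrchr(path, '/');

	if (!slash || slash == path)
		return -1;
	if ((size_t)(slash - path) >= cls_len)
		return -1;
	memcpy(cls, path, slash - path);
	cls[slash - path] = '\0';
	return sscanf(slash + 1, "%u", instance) == 1 ? 0 : -1;
}
```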


Regards,

Tvrtko



Re: [Intel-gfx] [PATCH 3/4] drm/i915: expose engine availability through sysfs

2017-11-20 Thread Tvrtko Ursulin


On 20/11/2017 12:23, Lionel Landwerlin wrote:

This enables userspace to discover the engines available on the GPU.
Here is the layout on a Skylake GT4:

/sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt
├── bcs
│   └── 0
│   ├── capabilities
│   ├── class
│   └── id


One more thing - I would need engine->instance here as well.

Regards,

Tvrtko


├── rcs
│   └── 0
│   ├── capabilities
│   ├── class
│   └── id
├── vcs
│   ├── 0
│   │   ├── capabilities
│   │   │   └── hevc
│   │   ├── class
│   │   └── id
│   └── 1
│   ├── capabilities
│   ├── class
│   └── id
└── vecs
 └── 0
 ├── capabilities
 ├── class
 └── id

Further capabilities can be added later as attributes of each engine.

v2: Add capabilities sub directory (Tvrtko)
 Move engines directory to drm/card/gt (Chris)

Signed-off-by: Lionel Landwerlin 
---
  drivers/gpu/drm/i915/i915_drv.h |   5 +
  drivers/gpu/drm/i915/i915_reg.h |   1 +
  drivers/gpu/drm/i915/i915_sysfs.c   | 160 
  drivers/gpu/drm/i915/intel_engine_cs.c  |  12 +++
  drivers/gpu/drm/i915/intel_ringbuffer.h |   4 +
  5 files changed, 182 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 78c49db4280a..db550322207c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2729,6 +2729,11 @@ struct drm_i915_private {
} oa;
} perf;
  
+	struct {

+   struct kobject kobj;
+   struct kobject classes_kobjs[MAX_ENGINE_CLASS];
+   } gt_topology;
+
/* Abstract the submission mechanism (legacy ringbuffer or execlists) 
away */
struct {
void (*resume)(struct drm_i915_private *);
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 96c80fa0fcac..17aecd4fc6aa 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -186,6 +186,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
  #define VIDEO_ENHANCEMENT_CLASS   2
  #define COPY_ENGINE_CLASS 3
  #define OTHER_CLASS   4
+#define MAX_ENGINE_CLASS   5
  
  /* PCI config space */
  
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c

index 791759f632e1..fd04d0b93eaf 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -559,6 +559,160 @@ static void i915_setup_error_capture(struct device *kdev) 
{}
  static void i915_teardown_error_capture(struct device *kdev) {}
  #endif
  
+static struct attribute engine_id_attr = {

+   .name = "id",
+   .mode = 0444,
+};
+
+static struct attribute engine_class_attr = {
+   .name = "class",
+   .mode = 0444,
+};
+
+static ssize_t
+show_engine_attr(struct kobject *kobj, struct attribute *attr, char *buf)
+{
+   struct intel_engine_cs *engine =
+   container_of(kobj, struct intel_engine_cs, instance_kobj);
+
+   if (attr == &engine_id_attr)
+   return sprintf(buf, "%hhu\n", engine->uabi_id);
+   if (attr == &engine_class_attr)
+   return sprintf(buf, "%hhu\n", engine->uabi_class);
+   return sprintf(buf, "\n");
+}
+
+static const struct sysfs_ops engine_ops = {
+   .show = show_engine_attr,
+};
+
+static struct kobj_type engine_type = {
+   .sysfs_ops = &engine_ops,
+};
+
+static struct attribute engine_capability_hevc_attr = {
+   .name = "hevc",
+   .mode = 0444,
+};
+
+static ssize_t
+show_engine_capabilities_attr(struct kobject *kobj, struct attribute *attr, 
char *buf)
+{
+   struct intel_engine_cs *engine =
+   container_of(kobj, struct intel_engine_cs, capabilities_kobj);
+
+   if (attr == &engine_capability_hevc_attr)
+   return sprintf(buf, "%i\n", INTEL_GEN(engine->i915) >= 8);
+   return sprintf(buf, "\n");
+}
+
+static const struct sysfs_ops engine_capabilities_ops = {
+   .show = show_engine_capabilities_attr,
+};
+
+static struct kobj_type engine_capabilities_type = {
+   .sysfs_ops = &engine_capabilities_ops,
+};
+
+static int i915_setup_engines_sysfs(struct drm_i915_private *dev_priv,
+   struct kobject *gt_kobj)
+{
+   struct intel_engine_cs *engine_for_class, *engine;
+   enum intel_engine_id id_for_class, id;
+   bool registered[MAX_ENGINE_CLASS] = { false, };
+   int ret;
+
+   for_each_engine(engine_for_class, dev_priv, id_for_class) {
+   struct kobject *engine_class_kobj =
+   &dev_priv->gt_topology.classes_kobjs[engine_for_class->class];
+
+   if (registered[engine_for_class->class])
+   continue;
+
+   registered[engine_for_class->class] = true;
+
+   ret = kobject_init_and_add(engine_class_kobj,
+  gt_kobj->ktype,
+  

Re: [Intel-gfx] [PATCH 3/4] drm/i915: expose engine availability through sysfs

2017-11-20 Thread Tvrtko Ursulin


On 20/11/2017 12:23, Lionel Landwerlin wrote:

This enables userspace to discover the engines available on the GPU.
Here is the layout on a Skylake GT4:

/sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt
├── bcs


It niggles me a bit that engine class names are directly under gt. If we 
end up having to add something else under gt then it will be a little 
bit of an enumeration trouble for userspace.


So I wanted to suggest gt/engines/rcs if that makes sense to people?

Regards,

Tvrtko


│   └── 0
│   ├── capabilities
│   ├── class
│   └── id
├── rcs
│   └── 0
│   ├── capabilities
│   ├── class
│   └── id
├── vcs
│   ├── 0
│   │   ├── capabilities
│   │   │   └── hevc
│   │   ├── class
│   │   └── id
│   └── 1
│   ├── capabilities
│   ├── class
│   └── id
└── vecs
 └── 0
 ├── capabilities
 ├── class
 └── id

Further capabilities can be added later as attributes of each engine.

v2: Add capabilities sub directory (Tvrtko)
 Move engines directory to drm/card/gt (Chris)

Signed-off-by: Lionel Landwerlin 
---
  drivers/gpu/drm/i915/i915_drv.h |   5 +
  drivers/gpu/drm/i915/i915_reg.h |   1 +
  drivers/gpu/drm/i915/i915_sysfs.c   | 160 
  drivers/gpu/drm/i915/intel_engine_cs.c  |  12 +++
  drivers/gpu/drm/i915/intel_ringbuffer.h |   4 +
  5 files changed, 182 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 78c49db4280a..db550322207c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2729,6 +2729,11 @@ struct drm_i915_private {
} oa;
} perf;
  
+	struct {

+   struct kobject kobj;
+   struct kobject classes_kobjs[MAX_ENGINE_CLASS];
+   } gt_topology;
+
/* Abstract the submission mechanism (legacy ringbuffer or execlists) 
away */
struct {
void (*resume)(struct drm_i915_private *);
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 96c80fa0fcac..17aecd4fc6aa 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -186,6 +186,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
  #define VIDEO_ENHANCEMENT_CLASS   2
  #define COPY_ENGINE_CLASS 3
  #define OTHER_CLASS   4
+#define MAX_ENGINE_CLASS   5
  
  /* PCI config space */
  
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c

index 791759f632e1..fd04d0b93eaf 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -559,6 +559,160 @@ static void i915_setup_error_capture(struct device *kdev) 
{}
  static void i915_teardown_error_capture(struct device *kdev) {}
  #endif
  
+static struct attribute engine_id_attr = {

+   .name = "id",
+   .mode = 0444,
+};
+
+static struct attribute engine_class_attr = {
+   .name = "class",
+   .mode = 0444,
+};
+
+static ssize_t
+show_engine_attr(struct kobject *kobj, struct attribute *attr, char *buf)
+{
+   struct intel_engine_cs *engine =
+   container_of(kobj, struct intel_engine_cs, instance_kobj);
+
+   if (attr == &engine_id_attr)
+   return sprintf(buf, "%hhu\n", engine->uabi_id);
+   if (attr == &engine_class_attr)
+   return sprintf(buf, "%hhu\n", engine->uabi_class);
+   return sprintf(buf, "\n");
+}
+
+static const struct sysfs_ops engine_ops = {
+   .show = show_engine_attr,
+};
+
+static struct kobj_type engine_type = {
+   .sysfs_ops = &engine_ops,
+};
+
+static struct attribute engine_capability_hevc_attr = {
+   .name = "hevc",
+   .mode = 0444,
+};
+
+static ssize_t
+show_engine_capabilities_attr(struct kobject *kobj, struct attribute *attr, 
char *buf)
+{
+   struct intel_engine_cs *engine =
+   container_of(kobj, struct intel_engine_cs, capabilities_kobj);
+
+   if (attr == &engine_capability_hevc_attr)
+   return sprintf(buf, "%i\n", INTEL_GEN(engine->i915) >= 8);
+   return sprintf(buf, "\n");
+}
+
+static const struct sysfs_ops engine_capabilities_ops = {
+   .show = show_engine_capabilities_attr,
+};
+
+static struct kobj_type engine_capabilities_type = {
+   .sysfs_ops = &engine_capabilities_ops,
+};
+
+static int i915_setup_engines_sysfs(struct drm_i915_private *dev_priv,
+   struct kobject *gt_kobj)
+{
+   struct intel_engine_cs *engine_for_class, *engine;
+   enum intel_engine_id id_for_class, id;
+   bool registered[MAX_ENGINE_CLASS] = { false, };
+   int ret;
+
+   for_each_engine(engine_for_class, dev_priv, id_for_class) {
+   struct kobject *engine_class_kobj =
+   &dev_priv->gt_topology.classes_kobjs[engine_for_class->class];
+
+   if (registered[engine_for_class->class])
+   continue;
+
+