On 11/12/2017 10:50, Joonas Lahtinen wrote:
+ Daniel, Chris

On Thu, 2017-12-07 at 09:21 +0000, Tvrtko Ursulin wrote:
On 04/12/2017 15:02, Lionel Landwerlin wrote:
Hi,

After discussion with Chris, Joonas & Tvrtko, this series adds an
additional commit to link the render node back to the card through a
symlink. This makes it obvious to an application using a render node
where to get the information it needs.

Also worth mentioning is that it is trivial to get from the master drm
fd to the sysfs root, via fstat() and opendir() on
/sys/dev/char/<major>:<minor>. With the addition of the card symlink to
render nodes, the same becomes trivial for a render node fd as well.
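Roughly like this (a minimal sketch of the fd-to-sysfs lookup described
above, error handling trimmed for brevity):

#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

static DIR *drm_fd_to_sysfs_dir(int drm_fd)
{
	struct stat st;
	char path[128];

	if (fstat(drm_fd, &st) || !S_ISCHR(st.st_mode))
		return NULL;

	/* /sys/dev/char/<major>:<minor> points at the device's sysfs node. */
	snprintf(path, sizeof(path), "/sys/dev/char/%u:%u",
		 major(st.st_rdev), minor(st.st_rdev));

	return opendir(path);
}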

I am happy with this approach - it is extensible, flexible and avoids
issues with ioctl versioning or whatnot. With one value per file it is
trivial for userspace to access.
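Something along these lines (a minimal sketch; the "topology/slice_mask"
attribute name below is only a placeholder, the real layout comes from
the series):

#include <stdio.h>

/* Read a single one-value-per-file sysfs attribute as an integer. */
static int read_sysfs_u64(const char *sysfs_root, const char *attr,
			  unsigned long long *val)
{
	char path[256];
	FILE *f;
	int ret;

	/* e.g. attr = "topology/slice_mask" (placeholder name) */
	snprintf(path, sizeof(path), "%s/%s", sysfs_root, attr);
	f = fopen(path, "r");
	if (!f)
		return -1;

	ret = fscanf(f, "%llu", val) == 1 ? 0 : -1;
	fclose(f);
	return ret;
}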

So as far as I'm concerned, with gputop as the userspace consumer of all
of this, if everyone else is happy, I think we could do a detailed
review and perhaps also think about getting gputop into a distribution
to make the userspace case 100% straightforward.

For the GPU topology I agree this is the right choice; it is about
topology after all, and a directory tree is the perfect candidate. And
if a new platform appears and changes the layout, then that is fine,
because the hardware topology itself has changed.

For the engine enumeration, I'm not equally sold on exposing it through
sysfs. Userspace is going to look at it as a linear list of engine
instances with flags. It is also information about what to pass to an
IOCTL as arguments once a decision has been made, at which point you
already have the FD you know you will be dealing with at hand. So
another IOCTL for that seems more convenient.
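Purely to illustrate the shape of it, a hypothetical sketch of what such
a linear list looks like to userspace, regardless of whether it is
filled in from an IOCTL or parsed out of sysfs (none of these names are
from an actual uAPI):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-engine record: class, instance within the class,
 * and capability flags. */
struct engine_info {
	uint16_t engine_class;     /* e.g. render, copy, video */
	uint16_t engine_instance;  /* instance within the class */
	uint64_t flags;            /* capability bits */
};

static void print_engines(const struct engine_info *engines,
			  unsigned int count)
{
	for (unsigned int i = 0; i < count; i++)
		printf("class %u instance %u flags 0x%llx\n",
		       engines[i].engine_class, engines[i].engine_instance,
		       (unsigned long long)engines[i].flags);
}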

Apart from being more flexible and easier to extend, sysfs might be a better fit for applications which do not otherwise need a drm fd. Say a top-like tool which shows engine utilisation, or those patches I RFC-ed recently which do the same but per DRM client.

Okay, these stats are now also available via the PMU, so I admit the argument is not the strongest, but I still find it quite neat. It might also allow us to define our own policy with regards to the privilege needed to access these stats, rather than being governed by the perf API rules.

So I'd say for the GPU topology part, we go forward with the review and
make sure we don't expose driver internal bits that could break when
refactoring code. If the exposed N bits of information are strictly
tied to the underlying hardware, we should have no trouble maintaining
that for the foreseeable future.

Then we can continue on about the engine discovery in parallel, not
blocking GPU topology discovery.

I can live with that, but would like to keep the gt/engines/ namespace reserved for the eventuality that we go with engine info in sysfs at a later stage.

Also, Lionel, did you have plans to use the engine info straight away in gputop, or did you only need the topology? I think you were drawing a nice block diagram of a GPU, so do you need it for that?

Regards,

Tvrtko