On 29/09/2025 15:07, Danilo Krummrich wrote:
On Wed Sep 3, 2025 at 5:23 PM CEST, Tvrtko Ursulin wrote:
This is another respin of this old work^1 which since v7 is a total rewrite and
completely changes how the control is done.

I only got some of the patches of the series, can you please send all of them
for subsequent submissions? You may also want to consider resending if you're
not getting a lot of feedback due to that. :)

There are so many Cc's across the series that I am reluctant to copy everyone on all patches, so I am counting on people being subscribed to the mailing lists and being able to look into the archives if all else fails.

Regarding the lukewarm response, here is a short video showing it in action:

https://people.igalia.com/tursulin/drm_cgroup.mp4

Please ignore the typos in the video commentary, but I would say it is worth a watch.

Let's see if that helps paint a picture for people of what this can do. With a minimum of imagination other use cases become obvious as well; for example, starting a compute job in the background while the UI stays responsive.

On the userspace interface side of things it is the same as before. We have
drm.weight as an interface, taking integers from 1 to 10000, the same as CPU and
IO cgroup controllers.

In general, I think it would be good to get GPU vendors to speak up to what kind
of interfaces they're heading to with firmware schedulers and potential firmware
APIs to control scheduling; especially given that this will be a uAPI.

(Adding a couple of folks to Cc.)

Having that said, I think the basic drm.weight interface is fine and should work
in any case; i.e. with the existing DRM GPU scheduler in both modes, the
upcoming DRM Jobqueue efforts and should be generic enough to work with
potential firmware interfaces we may see in the future.

Yes, basic drm.weight should not be controversial at all.
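
To illustrate the interface, here is a minimal userspace sketch which assigns different weights to two cgroups. It assumes cgroup v2 is mounted at /sys/fs/cgroup and that the "ui" and "batch" groups already exist with the controller enabled; the paths and values are only examples, not part of the series:

/*
 * Minimal sketch of setting drm.weight from userspace.  Weights follow
 * the same 1..10000 range as cpu.weight and io.weight.
 */
#include <stdio.h>
#include <stdlib.h>

static int set_weight(const char *cgroup, unsigned int weight)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/drm.weight", cgroup);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%u\n", weight);
	return fclose(f);
}

int main(void)
{
	/* Favour the interactive group roughly 10:1 over the batch group. */
	if (set_weight("ui", 1000) || set_weight("batch", 100)) {
		perror("drm.weight");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}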

For all drivers which use the DRM scheduler in the 1:N mode it is trivial to wire the support up once the "fair" DRM scheduler lands. It is trivial because the scheduling weight is directly compatible with the virtual GPU time accounting the fair scheduler implements. This series has an example of how to do it for amdgpu, and many other simple drivers could do it in exactly the same way with a few lines of boilerplate code.
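
To make the "trivial" claim a bit more concrete, the core of the idea can be sketched like this (all identifiers below are made up for illustration and are not the actual symbols from the series): the GPU time a client consumes is scaled inversely by its cgroup weight before being added to its virtual time, so higher-weight groups accrue virtual time more slowly and therefore get picked more often by the scheduler.

/*
 * Illustrative sketch only: how a cgroup weight could feed into the fair
 * scheduler's virtual GPU time accounting.
 */
#include <stdint.h>

#define DRM_CGROUP_WEIGHT_DFL	100	/* assumed default weight */

struct drm_sched_client {
	uint64_t vruntime_ns;	/* weighted virtual GPU time */
	uint32_t weight;	/* 1..10000, from drm.weight */
};

/* Charge 'delta_ns' of real GPU time, scaled by the client's weight. */
static void drm_client_charge_time(struct drm_sched_client *client,
				   uint64_t delta_ns)
{
	/*
	 * Same trick as CFS vruntime: a client with twice the weight
	 * accumulates virtual time at half the rate, so a scheduler which
	 * always picks the smallest vruntime runs it twice as much.
	 */
	client->vruntime_ns += delta_ns * DRM_CGROUP_WEIGHT_DFL / client->weight;
}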

For some 1:1 firmware scheduling drivers, like xe for example, the patch series also includes a sketch of how drm.weight could be used by giving the firmware a hint about what is most important and what is least important. In practice that is also usable for some use cases. (In fact the demo video above was made with xe! Results with amdgpu are pretty similar, but I hit some snags with screen recording on that device.)
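
Roughly speaking, the idea there is to quantise the 1..10000 weight range into the handful of priority levels the firmware understands. A sketch of such a mapping, with invented level names and thresholds (this is not xe's actual mapping), could look like:

/*
 * Illustration only: collapsing the 1..10000 drm.weight range into a
 * small set of firmware priority hints.
 */
enum fw_sched_priority {
	FW_SCHED_PRIO_LOW,
	FW_SCHED_PRIO_NORMAL,
	FW_SCHED_PRIO_HIGH,
};

static enum fw_sched_priority weight_to_fw_priority(unsigned int weight)
{
	if (weight < 100)		/* below the default weight */
		return FW_SCHED_PRIO_LOW;
	else if (weight > 1000)		/* well above the default */
		return FW_SCHED_PRIO_HIGH;
	return FW_SCHED_PRIO_NORMAL;
}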

Possibly the main problem causing the lukewarm response, as far as I understood during XDC last month, is what to do about the drivers where seemingly neither approach can be implemented.

Take nouveau for example. The thinking seems to be that it couldn't be wired up. I don't know that driver, nor the hardware (firmware), so I cannot say.

To address that concern, one idea I had is that perhaps I could expose a new control file like drm.weight_supported or something, with semantics along the lines of the list below (a rough sketch of how it could be computed follows the list):

 + - all active clients/drivers in the cgroup support the feature
 ? - a mix of supported and unsupported
 - - none support the drm.weight feature
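
A rough sketch of how the controller could compute that value by walking the DRM clients in a cgroup (the client list and the supports_weight flag are hypothetical, not taken from the series):

/*
 * Hypothetical sketch of computing the drm.weight_supported value by
 * walking a cgroup's DRM clients.
 */
#include <stdbool.h>

struct drm_cgroup_client {
	struct drm_cgroup_client *next;
	bool supports_weight;	/* driver wired up drm.weight? */
};

static char drm_weight_supported(const struct drm_cgroup_client *clients)
{
	bool any_yes = false, any_no = false;

	for (const struct drm_cgroup_client *c = clients; c; c = c->next) {
		if (c->supports_weight)
			any_yes = true;
		else
			any_no = true;
	}

	if (any_yes && any_no)
		return '?';	/* mix of supported and unsupported */
	return any_yes ? '+' : '-';	/* empty cgroup reads as '-' here */
}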

That would give visibility into the "Why is this thing not doing anything on my system?" question. Then, over time, solutions for how to support even those problematic drivers with closed firmware could be found. There will certainly be motivation not to be the one with the worse user experience.

Regards,

Tvrtko
