On 1/12/2023 7:15 PM, Dixit, Ashutosh wrote:
On Thu, 12 Jan 2023 18:27:52 -0800, Vinay Belgaumkar wrote:
Reading the current root-level sysfs entries gives a min/max across all
GTs. Update this so we return the default (GT0) values when root-level
sysfs entries are accessed, instead of the min/max for the card. Without
this change, tests that are not multi-GT capable will read incorrect
sysfs values on multi-GT platforms like MTL.

Fixes: a8a4f0467d70 ("drm/i915: Fix CFI violations in gt_sysfs")
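
To illustrate the two policies being discussed, here is a minimal
userspace-style sketch; the names (NUM_GT, gt_max_mhz[], etc.) are
hypothetical and not taken from the actual i915 code:

#include <stdio.h>

#define NUM_GT 2

/* Hypothetical per-GT max frequencies in MHz on a two-tile part. */
static const unsigned int gt_max_mhz[NUM_GT] = { 1600, 1000 };

/* Old behavior: the root entry reports the max across all GTs. */
static unsigned int root_max_aggregated(void)
{
	unsigned int max = 0;

	for (int i = 0; i < NUM_GT; i++)
		if (gt_max_mhz[i] > max)
			max = gt_max_mhz[i];
	return max;
}

/* Proposed behavior: the root entry simply mirrors GT0. */
static unsigned int root_max_gt0(void)
{
	return gt_max_mhz[0];
}

int main(void)
{
	printf("aggregated: %u MHz, GT0: %u MHz\n",
	       root_max_aggregated(), root_max_gt0());
	return 0;
}
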
We seem to be proposing to change the previous sysfs ABI with this patch?
But even then it doesn't seem correct to use GT0 values for device-level
sysfs. In fact, I received the following comment about using the max
frequency across GTs for device-level frequencies (gt_act_freq_mhz etc.)
from one of our users:

I think the ABI was changed by the patch mentioned in the commit (a8a4f0467d70). If I am not mistaken, the original behavior was to return the GT0 values (I will double-check this).

IMO, if that patch changed the behavior, it should have been accompanied by patches that update all the tests to use the proper per-GT sysfs as well.
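
For example, a test that must behave correctly on multi-GT parts would
read the per-GT entry directly rather than the root-level one. A minimal
sketch, assuming a gt/gt0/rps_act_freq_mhz layout (the exact per-GT path
here is my assumption; the root-level gt_act_freq_mhz name comes from the
discussion above):

#include <stdio.h>

static long read_freq(const char *path)
{
	FILE *f = fopen(path, "r");
	long mhz = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &mhz) != 1)
		mhz = -1;
	fclose(f);
	return mhz;
}

int main(void)
{
	/* Legacy root-level entry: aggregate (or, with this patch, GT0). */
	long root = read_freq("/sys/class/drm/card0/gt_act_freq_mhz");
	/* Per-GT entry: unambiguous on multi-GT parts (path assumed). */
	long gt0 = read_freq("/sys/class/drm/card0/gt/gt0/rps_act_freq_mhz");

	printf("root: %ld MHz, gt0: %ld MHz\n", root, gt0);
	return 0;
}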

Thanks,

Vinay.


-----
On Sun, 06 Nov 2022 08:54:04 -0800, Lawson, Lowren H wrote:

Why show the maximum? Wouldn’t an average be a more accurate reflection of the user experience?

As a user, I expect the ‘card’ frequency to be reasonably representative of
the entire card. If I see 1.6 GHz, but the card is actually running at 1.0
and 1.6 GHz on different compute tiles, I’m going to see a massive decrease
in compute workload performance while at ‘maximum’ frequency.
-----

So I am not sure why max/min were previously chosen. Why not the average?
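
For comparison, an average policy over the same hypothetical per-GT array
from the earlier sketch would be:

/* Average policy, reusing the hypothetical gt_max_mhz[] from above. */
static unsigned int root_max_average(void)
{
	unsigned int sum = 0;

	for (int i = 0; i < NUM_GT; i++)
		sum += gt_max_mhz[i];
	return sum / NUM_GT; /* 1300 MHz for { 1600, 1000 } */
}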

Thanks.
--
Ashutosh
