On 08.09.2025 13:28, Penny, Zheng wrote:
> [Public]
>
>> -----Original Message-----
>> From: Jan Beulich <jbeul...@suse.com>
>> Sent: Monday, September 8, 2025 6:02 PM
>> To: Penny, Zheng <penny.zh...@amd.com>
>> Cc: Andryuk, Jason <jason.andr...@amd.com>; xen-devel@lists.xenproject.org
>> Subject: Re: [PATCH v9 1/8] xen/cpufreq: embed hwp into struct
>> cpufreq_policy{}
>>
>> (re-adding the list)
>>
>> On 05.09.2025 06:58, Penny, Zheng wrote:
>>> [Public]
>>>
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeul...@suse.com>
>>>> Sent: Thursday, September 4, 2025 7:51 PM
>>>> To: Penny, Zheng <penny.zh...@amd.com>; Andryuk, Jason
>>>> <jason.andr...@amd.com>
>>>> Cc: Andrew Cooper <andrew.coop...@citrix.com>; Roger Pau Monné
>>>> <roger....@citrix.com>; xen-devel@lists.xenproject.org
>>>> Subject: Re: [PATCH v9 1/8] xen/cpufreq: embed hwp into struct
>>>> cpufreq_policy{}
>>>>
>>>> On 04.09.2025 08:35, Penny Zheng wrote:
>>>>> For cpus sharing one cpufreq domain, cpufreq_driver.init() is only
>>>>> invoked on the firstcpu, so current per-CPU hwp driver data struct
>>>>> hwp_drv_data{} actually fails to be allocated for cpus other than
>>>>> the first one. There is no need to make it per-CPU.
>>>>> We embed struct hwp_drv_data{} into struct cpufreq_policy{}, then
>>>>> cpus could share the hwp driver data allocated for the firstcpu,
>>>>> like the way they share struct cpufreq_policy{}. We also make it a
>>>>> union, with "hwp", and later "amd-cppc" as a sub-struct.
>>>>
>>>> And ACPI, as per my patch (which then will need re-basing).
>>>>
>>>>> Suggested-by: Jan Beulich <jbeul...@suse.com>
>>>>
>>>> Not quite, this really is Reported-by: as it's a bug you fix, and in
>>>> turn it also wants to gain a Fixes: tag. This also will need backporting.
>>>>
>>>> It would also have been nice if you had Cc-ed Jason right away,
>>>> seeing that this code was all written by him.
>>>>
>>>>> @@ -259,7 +258,7 @@ static int cf_check hwp_cpufreq_target(struct cpufreq_policy *policy,
>>>>>                                         unsigned int relation)
>>>>>  {
>>>>>      unsigned int cpu = policy->cpu;
>>>>> -    struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
>>>>> +    struct hwp_drv_data *data = policy->u.hwp;
>>>>>      /* Zero everything to ensure reserved bits are zero... */
>>>>>      union hwp_request hwp_req = { .raw = 0 };
>>>>
>>>> Further down in this same function we have
>>>>
>>>>     on_selected_cpus(cpumask_of(cpu), hwp_write_request, policy, 1);
>>>>
>>>> That's similarly problematic when the CPU denoted by policy->cpu
>>>> isn't online anymore. (It's not quite clear whether all related
>>>> issues would want fixing together, or in multiple patches.)
>>>
>>> Checking the logic in cpufreq_del_cpu(), once any processor in the
>>> domain gets offline, the governor will stop.
>>
>> Yet with HWP and CPPC drivers being governor-less, how would that matter?
>
> In CPPC passive mode, we are still relying on the governor to do
> performance tuning.
> In CPPC active mode, yes, it is governor-less. How about we disable the
> CPPC-enable bit for the offline CPUs?
Didn't you say that's a sticky bit? Plus how would this help with the
issue at hand?

Jan
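For reference, a minimal sketch of the arrangement the patch describes:
the driver-private data hangs off the domain-shared struct
cpufreq_policy{}, so every CPU in a cpufreq domain reaches the
allocation made for the first CPU via policy->u.hwp. Apart from the
identifiers quoted in the thread (cpufreq_policy, hwp_drv_data,
policy->u.hwp), the field set and the amd-cppc member name below are
assumptions, not the actual Xen declarations:

    /*
     * Illustrative sketch only -- not the real Xen definition.
     * The union member points at driver data allocated once by
     * cpufreq_driver.init() on the first CPU of the domain.
     */
    struct hwp_drv_data;                  /* defined by the HWP driver */

    struct cpufreq_policy {
        unsigned int cpu;                 /* first CPU of the domain */
        /* ... fields shared by all CPUs in the domain ... */
        union {
            struct hwp_drv_data *hwp;     /* Intel HWP driver data */
            /* struct amd_cppc_drv_data *cppc; -- "amd-cppc", later */
        } u;
    };

Since the policy object itself is already shared by all CPUs of the
domain, any CPU dereferencing policy->u.hwp sees the same data, which
is what the fix relies on.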
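On the on_selected_cpus() hazard Jan raises (policy->cpu may have gone
offline), one possible direction -- purely illustrative, and not
something the thread settles on -- would be to target any still-online
CPU of the domain instead of policy->cpu. This assumes Xen's
cpumask_any() / nr_cpu_ids semantics and that policy->cpus holds the
domain's online CPUs:

    /* Hypothetical variant of the quoted call -- not the agreed fix. */
    unsigned int target = cpumask_any(policy->cpus);

    if ( target < nr_cpu_ids )       /* any CPU of the domain online? */
        on_selected_cpus(cpumask_of(target), hwp_write_request,
                         policy, 1);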