Eric Saxe wrote:
> Mark Haywood wrote:
>   
>> Beautiful. Thanks for doing that. Anup and I just took a quick look at 
>> the webrev and had a question about cpupm_num_groups() in cpu_pm.c. We 
>> didn't understand the comment at the top of the routine and weren't 
>> sure how to provide you with the "guts" for the routine.
>>     
> I was initially trying to provide for the possibility that there are 
> multiple levels of CPU power management for a given CPU. For example, if 
> we're able to quiesce an entire core, perhaps we'll drop the 
> frequency...but if we can quiesce the socket, we (or at least the CPU) 
> can also drop the voltage. This interface returns the number of 
> "levels"...since the PG framework could create multiple levels of 
> groupings to implement scheduling policy against...
>
> But I actually think I can get away with just having the second 
> interface alone...please see below..
>
> Does ACPI define distinct domains for logical CPUs which can share a 
> frequency change, and logical CPUs which can share a voltage change?
>   

I assume that you are referring to P-state domains. If so, no; the 
P-state domains combine both frequency and voltage in a single domain.

>> Offhand, do you know of any other interfaces you want support from in 
>> there?
>>     

> I was thinking:
> - Frequency domain definition
> - Voltage domain definition
> - Or if the above two aren't separately definable, a unified DVFS domain 
> for both.
>
> cpupm_num_groups() returns the number of domains (which for the above 
> would be 2, 1, or 0).
> cpupm_get_groupid() returns a group "id" identifying the group that the 
> CPU belongs to. Returning the same "id" for two CPUs would mean that the 
> two CPUs belong in the same group.
>
> If I make cpupm_get_groupid() return (id_t)-1 if the given domain isn't 
> defined, then I can eliminate cpupm_num_groups(). (See why by looking at 
> how it's used at the top of mp_machdep.c). :)
>   
I see you changed it. ;-)
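
Just to make sure we're reading it the same way, here's roughly how we 
understood the sentinel working -- the types and names below are only 
placeholders for discussion, not what's in the webrev:

/*
 * Hypothetical sketch -- placeholder names only.
 * With a (id_t)-1 sentinel, the caller can probe each domain type
 * and simply skip the ones the platform doesn't define, so a
 * separate cpupm_num_groups() isn't needed.
 */
typedef enum {
    CPUPM_DOMAIN_ACTIVE,        /* e.g. a DVFS/P-state domain */
    CPUPM_DOMAIN_IDLE,          /* e.g. a C-state domain */
    CPUPM_NUM_DOMAIN_TYPES
} cpupm_domain_type_t;

id_t    cpupm_get_groupid(cpu_t *, cpupm_domain_type_t);

static void
enumerate_pm_groups(cpu_t *cp)
{
    cpupm_domain_type_t type;
    id_t id;

    for (type = 0; type < CPUPM_NUM_DOMAIN_TYPES; type++) {
        id = cpupm_get_groupid(cp, type);
        if (id == (id_t)-1)
            continue;   /* domain not defined on this platform */
        /* hand the (cpu, type, id) tuple to the PG framework */
    }
}

If that matches what you had in mind, we're in good shape.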

> So the above would be:
> Interface from the Processor Groups Framework to the CPU power manager:
> - Enumerate processor group power domains for a given CPU
>
> ...beyond this, I would guess we need:
>
> Interfaces from the dispatcher to the CPU power manager:
> - Indicate that a previously utilized resource has become non-utilized.
> - Indicate that a previously non-utilized resource has become utilized.
>
> Interfaces from cpu_pm to the platform driver:
> - Enumerate power domains for a given CPU
> - Enumerate power states for a given domain
> - Change the power state for a given domain
>   

We'll start working on these.
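
To make sure we're on the same page, here's a strawman of what those 
prototypes might look like -- all of the names below are made up for 
discussion, not committed interfaces:

/* Strawman prototypes only -- placeholder names. */

/* Dispatcher -> CPU power manager */
void    cpupm_utilization_event(cpu_t *cp, boolean_t utilized);

/* cpu_pm -> platform driver */
int     cpupm_plat_domain_id(cpu_t *cp, cpupm_domain_type_t type,
            id_t *domainp);                     /* enumerate domains */
int     cpupm_plat_state_enumerate(id_t domain, uint_t *nstatesp,
            uint32_t *states);                  /* enumerate states */
int     cpupm_plat_change_state(id_t domain, uint_t state);
                                                /* change a domain's state */

Let us know if that's roughly the shape you're after.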

> I was also thinking about domain enumeration, and when that could/should 
> happen, and was thinking there are at least a couple of ways to go:
> - When a CPU is configured into the system, have the processor groups 
> subsystem ask the CPU power manager about any power domains the CPU is 
> involved in...and have the power manager, in turn ask the platform 
> driver, which talks to ACPI or the MD. At that time, have the CPU power 
> manager also ask the driver to enumerate power states for the domain.
>
> - Have all this be triggered by the addition of the platform driver, 
> which upon loading does all the domain/state enumeration, and then calls 
> into the CPU power manager to notify it of the domains/states; the power 
> manager would then call into the processor groups subsystem to set 
> things up.
>
> The former is pretty much the way it's going now...the latter would 
> require more work on the processor groups side...but it could be done if 
> there's a good reason for it. One reason I like the former is that it 
> works well with DR, since all the right plumbing happens as a result of 
> a new CPU being configured into (or one being de-configured out of) the 
> running system.
>   

If I understand your proposals, then I think the former is the way to 
go. It is hard for us to get the domain information until the CPU is 
configured; in other words, I'm not quite sure how to read the domain 
information from ACPI before that point.
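
Given that, the configure-time flow we'd picture is roughly the 
following (pseudo-C only, reusing the placeholder names from the 
sketches above and hand-waving over the details):

/*
 * Rough illustration only -- not real code.  Sketch of the CPU power
 * manager's handler, run as a CPU is configured into the system.
 */
static void
cpupm_cpu_configured(cpu_t *cp)
{
    id_t domain;
    uint_t nstates;

    /* Ask the platform driver (ACPI/MD) which domain this CPU is in. */
    if (cpupm_plat_domain_id(cp, CPUPM_DOMAIN_ACTIVE, &domain) != 0)
        return;     /* no DVFS domain defined for this CPU */

    /* Ask the driver to enumerate the domain's power states. */
    if (cpupm_plat_state_enumerate(domain, &nstates, NULL) != 0)
        return;

    /*
     * The processor groups subsystem can then pick up the domain via
     * cpupm_get_groupid() and set up its groupings/policy.
     */
}

This should also work naturally for DR, since the same path runs 
whenever a CPU is configured into the running system.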

> With the latter, we could potentially unload/load a new platform driver 
> and/or CPU power manager on a running system...which would make 
> "dropping" in new/better CPU PM support or policy on a running system a 
> possibility.
>
>
>   
>> I think for the immediate support, we'll try to provide support from 
>> the existing driver, but I'd like for us to very quickly try to work 
>> our way towards a replacement (that will provide [CTP]-state support).
>>     
> Makes sense. Do you think we could/should code all that into the 
> platform-independent CPU power manager, or into a new platform-specific 
> driver?
>   

Not sure at this point.

>   
>> Also, are you ok with me syncing the gate up with a Nevada build? It 
>> does seem less necessary since you can use the hg changesets to get a 
>> base, but I still like having a known build to use as a base.
>>     
> I'm fine with it...Thanks Mark...
>
> -Eric
>

