Hi Gerry,

Since we were a little busy getting the PAD work done, sorry we could not
proceed with the discussion about CPU hotplug and cpupm. Just wanted to
let you know that we will start looking into making x86 cpupm hotplug
compliant. Since there are now two different modes of CPU power management
(event mode and poll mode), we may have to think of a solution that can be
used for both. Is there a pointer to the x86 CPU hotplug design document,
requirements spec, or other links that you think would be useful for us to
look at to get an idea of how x86 CPU hotplug works? We will also have to
look for a system here that supports x86 CPU hotplug.

We will work together to get this issue solved. Your comments will be greatly
appreciated.

Thanks
Anup


Liu, Jiang wrote:
> Hi Mark,
>       Please refer to comments below.
>
> Mark.Haywood at Sun.COM wrote:
>> Liu, Jiang wrote:
>>> Hi Mark and Anup,
>>>     Currently I'm working on a project related to CPU
>>> hotplug on x86 systems. The new design is much friendlier to
>>> CPU hotplug than the current one in the onnv tree; I really appreciate
>>> your work. I still have several questions related to CPU hotplug.
>>>     1) Could you please help turn on support for driver
>>> detach in cpudrv.c? CPU hotplug has a dependency on that.
>> I'll look into it to see what's involved and get back to you.
>>
>>>     2) It seems the cpupm subsystem still needs the configuration item
>>> 'domain_cpu-devices="/cpus/c...@*"' in ppm.conf to catch all
>>> cpus at boot time. We are discussing some sort of device tree
>>> reorganization for x86 systems, which may break the current CPU
>>> domain support code in the ppm driver. A sample device tree is below:
>>>     /devices/sysbus/processor@0/cpu@0
>>>     /devices/sysbus/processor@0/cpu@1
>>>     /devices/sysbus/processor@1/cpu@2
>>>     /devices/sysbus/processor@1/cpu@3
>>>     I think it's not easy to fit the above device tree into the
>>> current ppm driver on x86 systems; any suggestions here?
>> Well, what you really want is something like:
>> /devices/sysbus/proces...@*/c...@*
>>
>> but the ppm parsing code doesn't support that. I don't have any good
>> suggestions at the moment other than to modify the ppm parsing code.
>>
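For illustration, a hedged sketch of the two ppm.conf shapes under discussion. The property name comes from the thread; the exact paths and ppm.conf grammar shown here are assumptions, not verified syntax:

```text
# Current style: a single wildcard component matching cpus directly
# under /cpus (illustrative path)
domain_cpu-devices="/cpus/cpu@*";

# What the reorganized tree would need: wildcards at two levels of the
# path, which the current ppm parsing code does not support
domain_cpu-devices="/sysbus/processor@*/cpu@*";
```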
>>>     3) Should lines 876 and 896 in cpu_idle.c be removed? It seems they're
>>> not used any more.
>>>
>> Yep. Thanks!
>>
>>>     4) Should we add reference count support to the CPU domain
>>> data structure? In the current implementation, all P/T/C domains
>>> will be freed if cpupm_free is called once for any cpu, which
>>> will make all cma_domain fields in cpupm_mach_state_t invalid.
>>> I think that may cause an access violation. It will also be needed
>>> to support CPU hotplug.
>> Yes, I think you are right. The domain code needs a bit more work. I'm
>> not really happy with it anyway. I'll see what I can do to make it a
>> little more robust. 
>>
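To make the reference-counting idea concrete, here is a minimal userland sketch. The type and field names (cpupm_domain_t, cpd_refcnt) are hypothetical, invented for illustration; they are not from the actual cpupm code:

```c
#include <assert.h>

/* Hypothetical domain structure; field names are illustrative only. */
typedef struct cpupm_domain {
	int	cpd_refcnt;	/* number of CPUs referencing this domain */
} cpupm_domain_t;

/* Take a reference when a CPU joins the domain. */
void
cpupm_domain_hold(cpupm_domain_t *dom)
{
	dom->cpd_refcnt++;
}

/*
 * Drop a reference when a CPU leaves (e.g. on hot-remove).
 * Returns 1 if this was the last reference and the domain may be
 * freed, 0 if other CPUs still point at it.
 */
int
cpupm_domain_rele(cpupm_domain_t *dom)
{
	assert(dom->cpd_refcnt > 0);
	dom->cpd_refcnt--;
	return (dom->cpd_refcnt == 0);
}
```

With something along these lines, cpupm_free would drop one reference per CPU and tear the domain down only on the final release, instead of unconditionally freeing all P/T/C domains on the first call.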
>>>     5) It seems the current CPU domain code doesn't
>>> support CPU hot add/remove; is that true?
>> That's true. We should really identify what we need to do to
>> support it.
>> I know that detach() is high on the list, but I think that filters
>> down into being able to cleanup/disable domains. I'm not sure what
>> else it means. If you know of anything, let us know.
>
> That would be great, and I hope we can cooperate on that.
>
>> As an aside, I've been thinking that we might just want to do away
>> with the CPU driver for x86 once PAD is available. I just don't know
>> that I see a reason that anyone would prefer it over PAD. And doing
>> away with the driver alleviates some problems (like the ppm stuff).
>> Anyone have an opinion on this?
>
> Yes, doing away with the CPU driver may simplify the implementation.
> One benefit of keeping a driver for CPU power management is that a special
> driver optimized for specific hardware could be provided for CPUs. But this
> benefit is not so attractive because there is no real demand currently.
> Some thoughts for fun: imagine that one day memory devices provide a CPU
> P-state-like power management capability; would it be beneficial to keep
> the same device driver model for CPU and memory "P-state" support?
> Thanks!
>
>> Thanks,
>> Mark
>>
>>>     Thanks!
>>>
>>>> -----Original Message-----
>>>> From: tesla-dev-bounces at opensolaris.org
>>>> [mailto:tesla-dev-bounces at opensolaris.org] On Behalf Of Mark
>>>> Haywood
>>>> Sent: December 9, 2008 10:14
>>>> To: tesla-dev at opensolaris.org
>>>> Subject: [tesla-dev] CPUPM support in the kernel
>>>>
>>>> Anup and I have been working on moving the core CPUPM support from
>>>> the CPU driver - into the kernel. Our goal is to make the CPU driver
>>>> specific to polling CPU power management and not have PAD
>>>> depend on the
>>>> driver at all. That means moving a fair bit of the i86pc specific
>>>> CPU power management support (ACPI parsing and caching, speedstep,
>>>> pwrnow, cstate and tstate handling) into the kernel. This
>>>> eliminates the need for a callback mechanism into the CPU driver.
>>>> Unfortunately, 
>>>> since acpica
>>>> is a module, it does require callbacks for that. But those have been
>>>> centralized into the existing uts/i86pc/os/acpi_stubs.c file.
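The centralized-stub pattern described above can be sketched roughly as follows. Only the file name acpi_stubs.c comes from the thread; every function, type, and field name below is invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative callback table: the kernel exports wrapper entry points,
 * and a loadable module (standing in for acpica here) fills in the real
 * implementations at load time. All names are hypothetical.
 */
typedef struct acpi_ops {
	int (*parse_pstates)(int cpu_id);
} acpi_ops_t;

static acpi_ops_t acpi_ops;	/* empty until the module registers */

/* Called by the module when it loads. */
void
acpi_register_ops(const acpi_ops_t *ops)
{
	acpi_ops = *ops;
}

/* Kernel-side wrapper: fails gracefully if the module isn't loaded. */
int
acpi_parse_pstates(int cpu_id)
{
	if (acpi_ops.parse_pstates == NULL)
		return (-1);
	return (acpi_ops.parse_pstates(cpu_id));
}

/* A stand-in implementation the "module" would provide. */
static int
demo_parse_pstates(int cpu_id)
{
	return (cpu_id);	/* pretend we parsed something */
}
```

The point of the sketch is that callers in the kernel only ever see the wrapper, so the set of module callbacks stays confined to one file rather than being scattered through the CPU driver.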
>>>>
>>>> We've posted a webrev of our effort at:
>>>>
>>>> http://cr.opensolaris.org/~mhaywood/cpupm-move/
>>>>
>>>> We'd appreciate any comments.
>>>>
>>>> Thanks!
>>>> Mark
>>>>
>>>> _______________________________________________
>>>> tesla-dev mailing list
>>>> tesla-dev at opensolaris.org
>>>> http://mail.opensolaris.org/mailman/listinfo/tesla-dev

