Hi Kevin, Mark, all,

Yes, from our brief discussions at ELC, and all the ensuing discussion that 
has happened in the last few weeks, it certainly seems like a good time to 
think about:
- what a good model is for tying device idleness, latencies, and constraints 
into the cpuidle infrastructure - extensions to PM_QOS, part of what is being 
discussed, especially Kevin's earlier mail about a QoS parameter per 
subsystem/device that may have independent clock/power-domain control.

- what a good infrastructure is for subsequently allowing a platform-specific 
low-power state - extensions to the cpuidle infrastructure to allow entry into 
a platform-wide low-power state. The exact conditions for entry/exit 
(latency, wake sources, etc.) could be platform specific.

Is it a good idea to discuss a model that could be applicable to other 
SoCs/platforms as well?

Thanks
Rajeev


-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of Kevin Hilman
Sent: Thursday, June 03, 2010 10:28 PM
To: Gross, Mark
Cc: Neil Brown; [email protected]; Peter Zijlstra; [email protected]; LKML; 
Florian Mickler; James Bottomley; Thomas Gleixner; Linux OMAP Mailing List; 
Linux PM; Alan Cox
Subject: Re: [linux-pm] [PATCH 0/8] Suspend block api (version 8)

"Gross, Mark" <[email protected]> writes:

>>-----Original Message-----
>>From: Kevin Hilman [mailto:[email protected]]
>>Sent: Thursday, June 03, 2010 7:43 AM
>>To: Peter Zijlstra
>>Cc: Alan Cox; Gross, Mark; Florian Mickler; James Bottomley; Arve
>>Hjønnevåg; Neil Brown; [email protected]; LKML; Thomas Gleixner; Linux OMAP
>>Mailing List; Linux PM; [email protected]
>>Subject: Re: [linux-pm] [PATCH 0/8] Suspend block api (version 8)
>>
>>Peter Zijlstra <[email protected]> writes:
>>
>>> On Thu, 2010-06-03 at 11:03 +0100, Alan Cox wrote:
>>>> > [mtg: ] This has been a pain point for the PM_QOS implementation.
>>>> They change the constraint back and forth at the transaction level of
>>>> the i2c driver.  The pm_qos code really wasn't made to deal with such
>>>> hot path use, as each such change triggers a re-computation of what
>>>> the aggregate qos request is.
>>>>
>>>> That should be trivial in the usual case because 99% of the time you can
>>>> take the hot path
>>>>
>>>>    the QoS entry changing is the latest one
>>>>    there have been no other changes
>>>>    If it is valid I can use the cached previous aggregate I cunningly
>>>>            saved in the top QoS entry when I computed the new one
>>>>
>>>> (ie most of the time from the kernel side you have a QoS stack)
>>>
>>> Why would the kernel change the QoS state of a task? Why not have two
>>> interacting QoS variables, one for the task, one for the subsystem in
>>> question, and the action depends on their relative value?
>>
>>Yes, having a QoS parameter per-subsystem (or even per-device) is very
>>important for SoCs that have independently controlled powerdomains.
>>If all devices/subsystems in a particular powerdomain have QoS
>>parameters that permit, the power state of that powerdomain can be
>>lowered independently from system-wide power state and power states of
>>other power domains.
>>
> This seems similar to the pm_qos generalization into bus drivers we were 
> waving our hands at during the collab summit in April?  We never did get 
> into meaningful detail at that time.

The hand-waving was around how to generalize it into the driver model,
or into PM QoS.  We're already doing this for OMAP, but in an OMAP-specific
way; it's become clear that this is something useful to generalize.

Kevin
_______________________________________________
linux-pm mailing list
[email protected]
https://lists.linux-foundation.org/mailman/listinfo/linux-pm
--
To unsubscribe from this list: send the line "unsubscribe linux-omap" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
