Hi Terry,
        Some information addressing your concerns below.

Terry.Whatley at Sun.COM wrote:
> Liu, Jiang wrote:
>> Hi Stan,
>>      Currently the Intel 5000 series MCH supports two types of power
>> saving: putting memory into a low-power state, and shutting down
>> unused memory. The first type, putting memory into a low-power state,
>> is based on the memory controller's thermal protection mechanism and
>> works as follows. When the entire system is predicted to be idle for
>> a relatively long time, the driver triggers the hardware thermal
>> protection mechanism to throttle memory access requests, which has an
>> effect similar to a frequency reduction.
> I do not understand how throttling accesses on an idle system saves
> power, since by definition an
> idle system has already effectively throttled access by being idle. 
> One would hope that a system
> which is only taking clock interrupts would mostly be operating out of
> the cache anyhow, as well
> as not making many memory references because it is spending most of
> its time in c-state.
On the Intel 5000 series MCH, the memory controller still generates the
clock for the AMB (Advanced Memory Buffer) and DIMM chips at its normal
rate when the system is idle, even with no memory accesses in flight.
Enabling the thermal throttling mechanism throttles the clock generated
by the memory controller, which in turn decreases the effective clock
frequency and reduces power consumption.

>> When the system is in the thermal
>> protection state, all memory contents are preserved and the system
>> can access all memory, but with reduced bandwidth and power
>> consumption. When the system wakes up from idle, the driver restores
>> the normal memory operation mode. The transition takes a few
>> microseconds. This kind of memory power saving operates on the whole
>> memory rather than part of it, and is a feature of the Intel 5000
>> series MCH. Future memory controllers may support this type of power
>> saving automatically in hardware, without software assistance.
>>      The second type of memory power saving mechanism is shutting
>> down unused memory components. All memory contents are lost after
>> shutdown, so it needs the OS's help to free up all memory in a
>> specific range before shutting down the corresponding memory
>> components. To enable this type of power saving, much more work is
>> needed. The OS must monitor memory usage and provide a mechanism to
>> free/move memory in a specific range; this ability may also be needed
>> by memory hotplug. By adopting this type of memory power saving, full
>> power management of memory components conforming to the Solaris PM
>> framework becomes possible. It will get more important and useful in
>> NUMA systems, where some nodes with CPUs and a memory controller may
>> be shut down entirely when the system is under low load.
>> 
> What is the granularity of memory that can be shut down (DIMM, rank?).
> How does this granularity
> interact with memory interleave?  Does the MCH have the ability to do
> this now?  Can you give a reference
> to a section of the spec?  Is self-refresh supported?
The Intel 5000 series MCH can enable/disable memory components at the
branch/channel level in hardware. But most BIOSes enable aggressive
interleaving across branches/channels/ranks whenever possible, to get
better performance. It becomes very hard, or even impossible, to disable
any part of the memory when aggressive interleaving is enabled. So most
products currently on the market cannot adopt this type of memory power
saving without BIOS changes. But the situation may change with future
architectures and chipsets, especially when coping with NUMA and memory
hotplug.
Please refer to "Intel(r) 5000P/5000V/5000Z Chipset Memory Controller
Hub (MCH) - Datasheet" at
http://www.intel.com/design/chipsets/datashts/313071.htm in section 3.9
and 5.2.

> 
> thanks,
> sarito
> 
>>      To sum up, the first type of memory power saving mechanism is
>> much easier to implement and takes effect when the entire system is
>> idle. The second type could provide a cpupm-like "mempm" with full
>> power management capability on current and future systems, but at the
>> cost of much more work.
>> 
>> 
>> Stan Studzinski <stan.studzinski at sun.com> wrote:
>> 
>>> On Wed, 12 Mar 2008, Liu, Jiang wrote:
>>> 
>>> 
>>>> All,
>>>>
>>>>         I would like to propose a project to enable FB-DIMM idle
>>>> power saving on Intel 5000P/5000V/5000Z MCH chipsets.
>>>>         Intel 5000 series MCH chipsets have a thermal protection
>>>> mechanism which can be used to reduce FB-DIMM power consumption
>>>> when the system is idle. This power saving mechanism has been
>>>> proven to be very useful in real IDCs. On the other hand, platforms
>>>> based on Intel 5000 series MCH will continue to ship until the end
>>>> of 2009, plus a 4-5 year lifetime. So enabling the FB-DIMM power
>>>> saving feature can make systems based on the 5000 series MCH more
>>>> power efficient and save money for customers.
>>>>         Any thoughts or suggestions?
>>> 
>>> Hi Gerry,
>>> 
>>> From your proposal it sounds like FB-DIMM power consumption can
>>> be reduced when the system is idle.
>>> How does this work? From your previous emails this feature applies
>>> to the whole memory, which means that one cannot turn off specific
>>> chunks of memory. Is this correct? Does this mean that for memory
>>> in a "lower power state" (or LPS) all data is preserved? From what
>>> I understand so far, if the system tries to access LPS memory,
>>> the software has to enable the memory (via a register) and then the
>>> memory can be accessed again.
>>> 
>>> When the system is idle and the memory is off, we can't execute
>>> code from RAM?
>>> 
>>> Stan
>>> 
>>>
>>> Liu Jiang (Gerry)
>>> Senior Software Engineer
>>> OpenSolaris, OTC
>>> Tel: (8610)82611515-1643
>>> iNet: 8758-1643
>>> 
>> _______________________________________________
>> tesla-dev mailing list
>> tesla-dev at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/tesla-dev
