On Mon, 23 Aug 2010 16:56:31 -0600, Paul Gilmartin wrote:
>
>How big is cache, typically, nowadays? Will Hiperdispatch preserve
>cache contents across context switches?

From Harv Emery's presentation at SHARE, level 1 cache on a
z10 and z196 is 64K for instruction cache and 128K for data
cache per core.

A z10 has an additional 3 MB of I+D cache per core ("level 1.5")
on the CPU chip and 48 MB per book (level 2).

A z196 has level 2 cache of 1.5 MB I+D cache per core and
level 3 cache of 24 MB on each chip that is shared by the
processors on that chip.  There is also 192 MB level 4 cache
per book.

In all cases, the level 1 cache is fastest and the time
required to access the cache is longer the farther away it is.

Cache contents are managed dynamically by the hardware that
accesses memory.  The cache design isn't described, but cache
designs are typically "n-way" meaning that for any memory
location that a processor can reference, there are "n" cache
lines at each level that can contain that memory location.
It is not necessary that the "n" be the same at each level
of cache.

I do not know what the "n-way" is for any of these caches.
If you have a 4-way cache that is 64K in size, memory can be
considered to be composed of 16K sections, such that the
first byte of each 16K section maps to the first location
in each of the 4 cache segments, and any of those 4 segments
can hold that memory location.
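To make that mapping concrete, here is a minimal sketch of how a 4-way set-associative cache maps addresses to sets. This is not the actual z10/z196 design (which IBM has not described); the cache size, line size, and associativity are illustrative assumptions.

```python
# Illustrative 4-way set-associative mapping (assumed parameters,
# not the real z10/z196 cache geometry).
CACHE_SIZE = 64 * 1024   # 64K total cache
WAYS = 4                 # 4-way set associative
LINE_SIZE = 256          # assumed cache line size in bytes

SETS = CACHE_SIZE // (WAYS * LINE_SIZE)  # number of sets

def cache_set(address):
    """Return the set index a memory address maps to.

    Addresses that are CACHE_SIZE / WAYS (here 16K) apart map to
    the same set, so at most WAYS lines with that alignment can
    be resident at once.
    """
    return (address // LINE_SIZE) % SETS

# Two addresses exactly 16K apart land in the same set and
# compete for the same 4 cache lines.
a, b = 0x0000, 0x4000
print(cache_set(a) == cache_set(b))  # True
```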

As a result, programs that reference large areas of storage
in a random fashion would tend to frequently go to higher
levels of cache, or even to main storage.

Hiperdispatch is aware of the cache architecture and attempts
to dispatch the LPAR on the same book as it was previously
dispatched, increasing the probability that the data that it
will need to reference will still be in that book's cache.
It has no way of "preserving" the cache.  If an LPAR is
dispatched on a different book than it has ever been
dispatched on, it is certain that none of its memory will
be in cache.

--
Tom Marchant
