As a threshold question, how many 4K, 1MB, and 2GB frames (for example)
are, and remain, bit-identical duplicates of one another in a "typical"
operating machine in the real world? What percentage of total memory does
that duplication represent? Do IBM's processor cache algorithms already
"de-duplicate"? How much paging overhead does memory duplication cause?

I'd have to imagine IBM has at least given some thought to these questions
and (probably) has run some performance assessments along these lines,
though I'd also assume this is something of a moving target as cache
hierarchies, processor designs, memory sizes, and workload mixes change
over time. Ceteris paribus, to the extent anyone worries about this issue
it'd be with "relatively big" consumers of memory -- "memory hogs," if one
prefers. I don't think anybody ought to worry about a handful of
duplicate 4K frames in today's "typical" machine, for example. Also,
duplicated memory might actually be a good thing from a performance point
of view -- one copy per processor book, for example.

If there are real-world benefits to doing something more in this area
-- non-trivial performance benefits, notably -- then at least in principle
I'd be in favor, if I get a vote.

--------------------------------------------------------------------------------------------------------
Timothy Sipples
IT Architect Executive, zEnterprise Industry Solutions, AP/GCG/MEA
E-Mail: [email protected]
