The 64-bit address space available to memory objects opens up a whole range of 
algorithms that use lots of VIRTUAL memory, but relatively modest amounts of 
REAL memory. The data is "clumped" into a relatively small number of pages; 
it's not like sprinkling a few bytes all over the object, which would drive the 
real storage utilization through the roof.  I've used such algorithms 
successfully on both Windows and Linux, and I'm wondering if there are issues 
peculiar to z/OS that would make this approach infeasible.
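
For concreteness, here is a rough sketch of the pattern as it looks on Linux
(a minimal illustration, not my actual code; the region size and the clump
layout are made up for the example):

  /* Reserve a huge virtual region, then touch only a few clumped
   * pages so real storage stays small. Linux flavor of the pattern. */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
      size_t vsize = 1ULL << 40;   /* 1 TB of virtual space */

      /* MAP_NORESERVE asks the kernel not to account swap for the
       * whole region up front; pages are backed on first touch. */
      char *base = mmap(NULL, vsize, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                        -1, 0);
      if (base == MAP_FAILED) {
          perror("mmap");
          return 1;
      }

      /* Touch a handful of widely separated but contiguous clumps;
       * real storage grows by only a few megabytes in total. */
      for (int i = 0; i < 8; i++) {
          size_t offset = (size_t)i * (vsize / 8);
          memset(base + offset, 0xAB, 1 << 20);   /* 1 MB clump */
      }

      munmap(base, vsize);
      return 0;
  }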

As an example: Does z/OS require that there be sufficient page space available
to back all of the space requested for a large memory object? I might have an
object whose virtual size is several terabytes but which requires only a few
gigabytes of real storage.
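
On Linux that gap is easy to observe directly: VmSize (virtual) and VmRSS
(resident) in /proc/self/status tell the story. Something like this
hypothetical helper, called before and after the touch loop above, shows
VmSize in the terabytes while VmRSS stays tiny:

  /* Linux-only sketch: print the process's virtual size (VmSize)
   * and resident set size (VmRSS) from /proc/self/status. */
  #include <stdio.h>
  #include <string.h>

  void report_footprint(void)
  {
      FILE *f = fopen("/proc/self/status", "r");
      char line[256];

      if (f == NULL)
          return;
      while (fgets(line, sizeof line, f) != NULL) {
          if (strncmp(line, "VmSize:", 7) == 0 ||
              strncmp(line, "VmRSS:", 6) == 0)
              fputs(line, stdout);
      }
      fclose(f);
  }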

With the amount of memory available on machines like the z13, this is a
perfectly sensible way to use memory, but if z/OS insists on having enough page
space available for the whole object, this is not going to work.

I've done some testing on both z/OS and Linux on Z, with good results, but 
that's on LPARs specifically designated for experimentation. What might I run 
into on a production system?

-- Jerry
