Hmm. How about something like this: Allocate a bit string that has a lower and an upper limit. Whenever a new value comes in that is outside the range, reallocate, move the existing string into the new space, and free the old.
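Something like this, in C terms (a minimal sketch of the resizable bit string idea; the names bitmap_t, bm_ensure, etc. are mine, and I'm keeping the bounds byte-aligned so the grow-and-copy is a plain memcpy):

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    long lo;             /* lowest key currently covered (multiple of 8)  */
    long hi;             /* one past the highest key covered (mult. of 8) */
    unsigned char *bits; /* the bit string itself                         */
} bitmap_t;

static size_t bm_bytes(long lo, long hi) { return (size_t)(hi - lo) / 8; }

/* Grow the string if key falls outside [lo, hi): allocate a new, larger
   string, move the existing one into the new space, free the old. */
static int bm_ensure(bitmap_t *bm, long key)
{
    if (bm->bits && key >= bm->lo && key < bm->hi)
        return 0;                          /* already covered */
    long lo = bm->bits ? bm->lo : (key & ~7L);
    long hi = bm->bits ? bm->hi : ((key & ~7L) + 8);
    if (key <  lo) lo = key & ~7L;
    if (key >= hi) hi = (key & ~7L) + 8;
    unsigned char *nb = calloc(bm_bytes(lo, hi), 1);
    if (!nb) return -1;
    if (bm->bits) {
        memcpy(nb + bm_bytes(lo, bm->lo), bm->bits, bm_bytes(bm->lo, bm->hi));
        free(bm->bits);
    }
    bm->lo = lo; bm->hi = hi; bm->bits = nb;
    return 0;
}

static void bm_set(bitmap_t *bm, long key)
{
    bm->bits[(key - bm->lo) / 8] |= 1u << ((key - bm->lo) % 8);
}

static int bm_test(const bitmap_t *bm, long key)
{
    if (!bm->bits || key < bm->lo || key >= bm->hi) return 0;
    return (bm->bits[(key - bm->lo) / 8] >> ((key - bm->lo) % 8)) & 1;
}
```

So the memory footprint tracks the spread between the smallest and largest key seen so far, not the theoretical key range.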
In this case, the variable cost is how much memory is used, and that could change drastically with the key values: if the spread is small, the memory allocated will also be small.

It's been a long time since I've done mainframe programming, but isn't there a way to say "only allocate pages that I actually use" and let the virtual storage manager allocate a real page upon write access? In the very worst case, you would have .5GB allocated. Alternatively, do a single-level index of 4K pages -- .5MB -- and GETMAIN the pages as needed. Max memory needed = .5GB + .5MB for the index.

My last real mainframe programming was in the early 80's, so I don't know if .5GB is a lot these days. Back then... Fuggedaboutit. :-)

Old and rusty...

Bob

On Mon, Aug 23, 2010 at 4:11 PM, Bill Fairchild <[email protected]> wrote:
> If the input data is seriously random, then a sparse matrix/array would
> certainly cut way down on the paging load that might result from my previous
> suggestion, but at the cost of having to do many more small-length GETMAINs
> instead of one huge GETMAIN for 1/2 gigabyte of pageable, virtual storage.
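P.S. The single-level index variant can be sketched the same way (again just an illustration in C, with malloc/calloc standing in for GETMAIN; the names are mine). Each 4K page covers 32768 keys, so a 32-bit key space needs 131072 pages, and only the pages actually touched are ever allocated:

```c
#include <stdlib.h>
#include <stdint.h>

#define PAGE_BYTES    4096
#define BITS_PER_PAGE (PAGE_BYTES * 8)              /* 32768 keys per page */
#define NPAGES        (0x100000000ULL / BITS_PER_PAGE)  /* 131072 pages    */

/* The single-level index: one pointer slot per 4K bit page. */
typedef struct { unsigned char *page[NPAGES]; } paged_bits_t;

/* Record key and return its previous bit (0 or 1), or -1 on alloc failure.
   A page is allocated ("GETMAINed") only the first time it is touched. */
static int pb_test_and_set(paged_bits_t *pb, uint32_t key)
{
    uint32_t p = key / BITS_PER_PAGE;
    uint32_t b = key % BITS_PER_PAGE;
    if (!pb->page[p]) {
        pb->page[p] = calloc(PAGE_BYTES, 1);  /* allocate page on demand */
        if (!pb->page[p]) return -1;
    }
    int old = (pb->page[p][b / 8] >> (b % 8)) & 1;
    pb->page[p][b / 8] |= 1u << (b % 8);
    return old;
}
```

With 4-byte pointers the index is the .5MB mentioned above; on a 64-bit machine it doubles to 1MB, still trivial next to the .5GB worst case for the pages themselves.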
