Bob Flanders asks:

> As a separate question, what if he did a 128K set of pointers
> to a set of 4K pages, and allocated them as he found he
> needed a new page? It would be a max of 1 MB (with 64-bit
> pointers) overhead and would incur no more paging than if
> he allocated a single-level array, right?

Correct, except that you would have the added cost of what
you went on to say next:

> The main cost would be the GETMAIN()s.

Doing one STORAGE OBTAIN and then simply letting RSM assign
a free page frame when, and only when, it is needed is -- I
would have thought -- obviously the most efficient technique
possible.  When it is possible to "GETMAIN" the entire area
that could ever be needed up front, once, for a certain fixed
cost in CPU, I do not understand how "GETMAINing" a subset of
that area, one 4K page at a time, only when it is needed, and
paying that same fixed cost EACH TIME a 4K page is
"GETMAINed", could ever be better.

Then, in addition, you have to allocate (and format, in
advance, by zeroing) this "128K set of pointers." That is
extra work (CPU) and extra storage.
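
If it helps to see that argument outside of MVS terms, here is
a minimal C sketch using POSIX mmap() as a rough stand-in for
one STORAGE OBTAIN; the sizes and the particular page touched
are illustration values only, not anything from the OP's
program. One up-front reservation is one fixed cost, and a
real frame appears only for a page that is actually
referenced:

    /* Minimal sketch: reserve everything once, pay for pages lazily. */
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* 128K pages of 4K each: 512 MB of virtual storage. */
        size_t area_size = (size_t)128 * 1024 * 4096;

        /* One reservation, one fixed cost.  Anonymous pages are
           demand-paged: no real frame is assigned yet. */
        unsigned char *area = mmap(NULL, area_size,
                                   PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (area == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* First reference to a page faults in exactly one 4K frame;
           the untouched pages never acquire real storage. */
        area[123 * 4096] = 1;

        munmap(area, area_size);
        return 0;
    }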

The bit array technique is the most CPU-efficient, at the
cost of real storage (where have we heard that before?).
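
By way of illustration -- and assuming, purely for the sketch,
that the job is to record and test which keys have already
been seen, over an integer key space small enough to cover
with one bit per key (the 16M figure below is made up, not the
OP's) -- the bit array technique in C is little more than this:

    #include <stdlib.h>

    #define NKEYS  (16UL * 1024 * 1024)          /* assumed key space      */
    #define WORDS  ((NKEYS + 63) / 64)

    static unsigned long long *bits;

    /* Test-and-set one key: a shift, a mask, an OR -- a handful of
       instructions, independent of how many keys have been seen. */
    int seen_before(unsigned long key)
    {
        unsigned long long mask = 1ULL << (key & 63);
        unsigned long long *word = &bits[key >> 6];
        int was_set = (*word & mask) != 0;
        *word |= mask;
        return was_set;
    }

    int main(void)
    {
        bits = calloc(WORDS, sizeof *bits);      /* zeroed up front (2 MB) */
        if (bits == NULL)
            return 1;
        /* ... feed each key through seen_before() ... */
        free(bits);
        return 0;
    }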

The hash table approach that I illustrated is the fastest
approach in a reasonable amount of storage, and it has the
advantage that its cost in CPU time, for the "typical case"
as described by the OP, is essentially as low as that of the
bit array technique. However, it is harder to code, and
therefore to debug and document. In addition, a good hash
algorithm needs to be selected after some amount of testing
on actual keys. Hence it is not trivial, as the bit array
technique would be.
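
For the general shape only -- this is not the code from my
earlier post, just a generic open-addressing sketch in C with
a placeholder hash, under the same assumed "have we seen this
key" job as the bit array sketch above:

    /* Assumes nonzero integer keys (0 marks an empty slot) and a
       power-of-two table sized well above the number of distinct
       keys expected, so the probe loop always finds a slot. */
    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SIZE 4096                      /* assumed; power of two */

    static uint64_t table[TABLE_SIZE];           /* 0 = empty slot        */

    static size_t hash_key(uint64_t key)
    {
        /* Placeholder hash; the real one should be chosen after
           testing against actual keys, as noted above. */
        return (size_t)((key * 0x9E3779B97F4A7C15ULL) & (TABLE_SIZE - 1));
    }

    /* Insert key if absent; return 1 if it was already present. */
    int seen_before(uint64_t key)
    {
        size_t i = hash_key(key);
        for (;;) {
            if (table[i] == key)
                return 1;                        /* already seen          */
            if (table[i] == 0) {
                table[i] = key;                  /* claim the empty slot  */
                return 0;
            }
            i = (i + 1) & (TABLE_SIZE - 1);      /* linear probe          */
        }
    }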

--
WB
