Blocksize/pagesize defines how much one disk read brings into memory at the OS level. The application requesting the read (UniVerse in this case) then looks in that memory for the data it's after; if it doesn't find it, it requests another disk read from the OS, and so on until the particular bit of data is found.

So... (this being one of the overwhelmingly elegant things about the Pickuverse) in a properly sized hashed file, NO MATTER HOW BIG, it takes only one disk read to get to any record given a known key: the key hashes straight to its group, and the whole group comes in on that single read. Ask your local Oracle/Sybase/Informix/SQL Server DBA if they can do that. Stand back, though; they tend to sputter a lot.
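To make that concrete, here is a minimal Python sketch of the lookup path. The additive hash and the function names are mine for illustration only (the real UniVerse file types, 2 through 18, each use their own hashing algorithm), and I'm skipping the header block a real hashed file carries before group 0. The point is the arithmetic: group number times separation times 512 bytes gives an offset, and one read at that offset brings in the whole group.

    BLOCK = 512                       # bytes per disk block; separation is counted in these

    def group_for_key(key, modulo):
        # Illustrative additive hash only -- NOT the actual
        # UniVerse algorithm for any file type.
        return sum(ord(c) for c in key) % modulo

    def read_group(f, key, modulo, separation):
        # The key gives us the group, the group gives us the
        # byte offset, and ONE read at that offset pulls in the
        # whole group, no matter how large the file is.
        # (A real file also has a header before group 0,
        # ignored here for simplicity.)
        group = group_for_key(key, modulo)
        f.seek(group * separation * BLOCK)
        return f.read(separation * BLOCK)   # the single disk read

Open the file with open(path, 'rb') and call read_group(f, 'CUSTREC', modulo, separation) and that's the whole lookup: hash, seek, one read.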

[EMAIL PROTECTED] wrote:
[Large Snip]

So, in a nutshell, if your separation matches the blocksize, then one
group will be read at a time. How much of a file is read into memory at
once, then, depends on tunables, I think. (Help, someone correct me
now...)

Karl