To complicate performance analysis further: if the table is updatable, you have to store the address or offset of the associated row adjacent to each key, unless your update logic keeps all the rows physically in key order (a large expense for a large table). Carrying that address reduces the number of keys that fit in a cache line.

And if you wish to generalize the key comparison (e.g., to a length not fixed at assembly time), you have to switch to CLCL or an executed (EX) CLC, both of which have always been significantly slower than a straight CLC.

In general, binary searches lose out to hash searches not long after binary searches beat out sequential searches. But the critical performance factor in a hash search is the average length of the hash-clash (collision) chains, and the hash algorithms that are better at minimizing chain length tend to be more CPU-intensive.

Ideally, the decision to keep a table in memory is at least partly based on the program referencing it frequently enough to avoid paging on modern systems, even though cache misses will still be a factor. And of course, if the table is used concurrently by multiple applications on the same LPAR, sharing it (in shared common memory or a data space) increases the likelihood of its being in memory and in the cache.


Gary Weinhold
Senior Application Architect
DATAKINETICS | Data Performance & Optimization
Phone   +1.613.523.5500 x216
Email:  [email protected]

Visit us online at www.DKL.com
