Yes, and this is where the art and work of the DBA comes in, as well as why
one of the biggest (imho) vulnerabilities of the MV market has been the lack
of conscious architecting, at least in legacy systems. Lumpy files are very
often the result of a programmer creating a record that can grow needlessly
large; think of a transaction batch file with the transaction ids
multivalued in a single attribute, and now imagine 100,000 keys in that
field. So, if you can redesign the file so that this kind of thing doesn't
happen, great.
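For the redesign route, here's a minimal UniVerse BASIC sketch of one way to
split that kind of batch record apart. The file names (BATCH.HEADER,
BATCH.XREF), the attribute layout, and the batch*seq key scheme are all made
up for illustration, not a prescription:

    * Break the oversized multivalued list out into a cross-reference file,
    * one slim record per transaction, keyed BATCH.ID*SEQ.
    OPEN 'BATCH.HEADER' TO F.HDR ELSE STOP 201, 'BATCH.HEADER'
    OPEN 'BATCH.XREF' TO F.XREF ELSE STOP 201, 'BATCH.XREF'
    BATCH.ID = 'B1001'
    READ HDR FROM F.HDR, BATCH.ID ELSE STOP 202, BATCH.ID
    TXN.LIST = HDR<1>                 ;* the oversized multivalued field
    NTXNS = DCOUNT(TXN.LIST, @VM)
    FOR SEQ = 1 TO NTXNS
       REMOVE TXN.ID FROM TXN.LIST SETTING DELIM   ;* no rescanning 100,000 values
       WRITE TXN.ID ON F.XREF, BATCH.ID:'*':SEQ    ;* one slim record per transaction
    NEXT SEQ
    HDR<1> = NTXNS                    ;* header keeps just the count
    WRITE HDR ON F.HDR, BATCH.ID

Each transaction then hashes as its own small record, so no single group has
to carry a 100,000-value monster and the header stays tiny.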
If not, you'll probably want to get an idea of what's actually going on with
the file I/O. Turn FILE.USAGE on for a period of maybe 24 hours of typical
usage (remember to turn it off again when you're done) and see what
percentage of the I/O is oversized buffer reads. If you're hammering the
large items, it makes sense to go with a larger sep (Mark Baldridge's recent
excellent articles on this are a good resource for describing why you want
to wince when you set sep to 16, and why there's a big hit when you go to
32), and take the hit of making multiple physical disk reads on the more
rarely accessed smaller items.
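To put rough numbers on it, the call mostly comes down to weighted arithmetic
like the sketch below (UniVerse BASIC again). It just counts chained
group-buffer reads per access; the 10% oversized-read mix and the 200 KB
typical big item are placeholders for your own FILE.USAGE figures, and it
deliberately ignores the extra per-buffer cost once the buffer outgrows the
OS block size, which is the effect Mark's articles cover:

    * Back-of-the-envelope only: all numbers here are assumptions to be
    * replaced with what FILE.USAGE actually shows you.
    PCT.BIG = 0.10                ;* share of reads that hit oversized items
    BIG.BYTES = 200000            ;* typical oversized item size in bytes
    SEPS = 4:@VM:8:@VM:16:@VM:32
    FOR I = 1 TO DCOUNT(SEPS, @VM)
       SEP = SEPS<1,I>
       BUF = SEP * 512                              ;* group buffer in bytes
       BIG.READS = INT((BIG.BYTES + BUF - 1) / BUF) ;* ceiling(BIG.BYTES / BUF)
       AVG.READS = PCT.BIG * BIG.READS + (1 - PCT.BIG)
       CRT 'SEP ':SEP:': roughly ':AVG.READS:' buffer reads per access'
    NEXT I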
But if you don't access those oversized items much, you may want to tune for
the rest of the file instead. Definitely a judgement call, and <plug> Steve
O'Neill will be happy to take your call if you need help with this </plug>.
But won't this only work if your data fits into the modulo that
matches your page size? If your data is lumpy and doesn't nicely fit
into the page size/file modulo selected, you get level 1 overflow and
more disk IO.