Jeffrey Butera <[EMAIL PROTECTED]> wrote on 08/24/2005 10:18:15 AM:
> 1) Can anyone shed insight on 'reasonable' split/merge loads? By
> reasonable, I guess I mean I'd like something a little more aggressive
> than the defaults: I've had someone in the past suggest a split/merge
> load of 20/10, but have no basis for that suggestion.
In my experience, the default values of 60/40 rarely work properly,
although it definitely varies by file. I usually go much lower, and 20/10
is not a bad place to start. See below for a common algorithm for
determining a KEYONLY dynamic file configuration. Run guide on the file,
preferably with the -r option to send the output to a hashed file. Point
the dictionary of the guide output file to @UDTHOME/sys/D_UDT_GUIDE and
plug the values into the formula. This will give you a reasonable place
to start, but it's always best to experiment a bit with a small file
before working with the entire population of records. Note also that, if
this file will be recoverable under RFS, you probably want to gravitate
towards a smaller block size than indicated by the formula: if most of
the records in the file are less than 1K in size, look at a 1K block size.
> 2) Thoughts about KEYDATA/KEYONLY? I've read the documentation, but I'm
> looking for real-world insight from those who have used dynamic files in
> unidata.
I agree with the previous poster that you probably want to use KEYONLY in
this case. Widely divergent record sizes will probably not be served well
by KEYDATA. However, it may be interesting to experiment with it, just
in case.
> 3) Anything else I'm not thinking about but should? (Arguments to not
> use a dynamic file are welcome.)
I generally don't go overboard with making files dynamic, but if the file
will continue to grow and you don't have the downtime to resize (although
I have clients with files containing many millions of records who would
kill for your 45-second resize time), go ahead and make this file dynamic.
A word of advice - use memresize to rebuild the file instead of
REBUILD.FILE. The former works much more efficiently, especially if you
give it a large MEMORY value, and the latter doesn't always work properly.
Formula for determining base modulo, block size, SPLIT_LOAD, and
MERGE_LOAD for UniData KEYONLY Dynamic Files
Step 1: Determine the block size. (Use 4096 unless the items per group is
larger than 35 or less than 2.)
A) If MAXSIZ < 1K:
ITEMSIZE = 10 * MAXSIZ
B) If 1K <= MAXSIZ < 3K:
ITEMSIZE = 5 * MAXSIZ
C) If MAXSIZ >= 3K:
ITEMSIZE = 5 * (AVGSIZ + DEVSIZ)
Once you determine the item size, use it to determine the NEWBLOCKSIZE.
A) If ITEMSIZE < 1024: NEWBLOCKSIZE = 1024
B) If 1024 <= ITEMSIZE < 2048: NEWBLOCKSIZE = 2048
C) If 2048 <= ITEMSIZE < 4096: NEWBLOCKSIZE = 4096
D) If 4096 <= ITEMSIZE < 8192: NEWBLOCKSIZE = 8192
E) If 8192 <= ITEMSIZE < 16384: NEWBLOCKSIZE = 16384
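As a sketch, Step 1 can be expressed in code. The inputs MAXSIZ, AVGSIZ, and
DEVSIZ are the GUIDE statistics named in the formula above; the function name
and the exact rounding behavior at the boundaries are my own assumptions:

```python
def new_block_size(maxsiz, avgsiz, devsiz):
    """Step 1 sketch: pick a dynamic-file block size from GUIDE statistics.

    maxsiz - largest record size in bytes (MAXSIZ)
    avgsiz - average record size in bytes (AVGSIZ)
    devsiz - standard deviation of record sizes in bytes (DEVSIZ)
    """
    # Estimate a working item size from the record-size statistics.
    if maxsiz < 1024:
        itemsize = 10 * maxsiz
    elif maxsiz < 3072:
        itemsize = 5 * maxsiz
    else:
        itemsize = 5 * (avgsiz + devsiz)

    # Round the item size up to the next supported block size.
    for blocksize in (1024, 2048, 4096, 8192, 16384):
        if itemsize < blocksize:
            return blocksize
    return 16384  # assumption: cap at the largest block size listed

# Example: 600-byte maximum record -> 6000-byte item size -> 8192
print(new_block_size(600, 300, 100))
```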
Step 2: Determine the actual number of items per group.
ITEMS_PER_GROUP = (NEWBLOCKSIZE - 32) / AVGSIZ
Step 3: Determine the base modulo
BASEMODULO = COUNT / ITEMS_PER_GROUP
Step 4: Determine SPLIT_LOAD
SPLIT_LOAD = INT((((AVGKEY + 9) * ITEMS_PER_GROUP) /
NEWBLOCKSIZE) * 100) + 1
If SPLIT_LOAD is less than ten, then: SPLIT_LOAD = 10
Step 5: Determine MERGE_LOAD
MERGE_LOAD = SPLIT_LOAD / 2 ( Rounded up )
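Steps 2 through 5 can be put together in one short sketch. COUNT, AVGSIZ, and
AVGKEY come from the GUIDE output as above; the function name is mine, and I
have assumed ITEMS_PER_GROUP rounds down and BASEMODULO rounds up, since the
formula does not say:

```python
import math

def dynamic_file_params(count, avgsiz, avgkey, blocksize):
    """Steps 2-5 sketch: derive base modulo, SPLIT_LOAD, and MERGE_LOAD.

    count     - number of records in the file (COUNT)
    avgsiz    - average record size in bytes (AVGSIZ)
    avgkey    - average key size in bytes (AVGKEY)
    blocksize - block size chosen in Step 1 (NEWBLOCKSIZE)
    """
    # Step 2: usable bytes per group over the average record size
    # (rounding down is an assumption).
    items_per_group = (blocksize - 32) // avgsiz

    # Step 3: base modulo (rounding up is an assumption).
    base_modulo = math.ceil(count / items_per_group)

    # Step 4: split load, floored at 10 percent.
    split_load = int((((avgkey + 9) * items_per_group) / blocksize) * 100) + 1
    split_load = max(split_load, 10)

    # Step 5: merge load is half the split load, rounded up.
    merge_load = math.ceil(split_load / 2)

    return base_modulo, split_load, merge_load

# Example: 100,000 records, 300-byte average record, 20-byte average key,
# 4096-byte block size.
print(dynamic_file_params(100_000, 300, 20, 4096))  # (7693, 10, 5)
```

Note that with these example numbers the raw split load works out to 10, i.e.
the floor from Step 4 is already in effect; larger keys or more items per
group push it higher.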
Tim Snyder
Consulting I/T Specialist , U2 Professional Services
North American Lab Services
DB2 Information Management, IBM Software Group
717-545-6403
[EMAIL PROTECTED]
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/