Laura,

!guide gives you an indication of the proper size at the current number of records in the file. Someone from IBM can correct me if I am wrong, but it always looks to me like the assumption is that you want 10 records in each group as a goal, and with files that tend to have larger records, that goal will put every group into overflow no matter how big you set the block size. Whether that matters depends on the database you are resizing, and on whether the files are expected to grow significantly before the next resize.
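
To make that concrete, here is roughly the arithmetic a 10-records-per-group goal implies. This is a back-of-the-envelope UniBasic sketch; the sample numbers, the rounding, and the target itself are my assumptions for illustration, not guide's actual algorithm:

    * Sketch only: the 10-per-group target and the sample figures are
    * assumptions for illustration, not guide's internal logic.
    REC.COUNT = 250000            ;* records currently in the file
    PER.GROUP = 10                ;* assumed goal of ~10 records/group
    NEW.MOD = INT(REC.COUNT / PER.GROUP) + 1
    * In practice you would round NEW.MOD up to the next prime number.
    * The overflow trap with large records:
    AVG.REC.BYTES = 2500          ;* hypothetical average record size
    GROUP.BYTES = PER.GROUP * AVG.REC.BYTES  ;* 25,000 bytes per group
    * No block size you can set holds 25K, so every group overflows.

With an average record that size, no legal block size holds ten of them, which is exactly why those files end up entirely in overflow.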

FAST lets you choose options that allow you to size for expected growth, and it does not make the 10-records-per-group assumption. It allows you to tailor the percentages for different lists of files, so that if you know you have files that hash badly due to the structure of the record keys, or that have extremely large records, you can allow additional room to minimize overflow. It has decent reporting options and a stats file that you can write your own reports against if you don't want to use theirs. And you don't have to write your own routines to issue the memresize commands, or to parse the guide_advice records to issue them automatically.
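
For anyone weighing that trade-off, here is a minimal sketch of the kind of home-grown routine FAST replaces: walk the guide advice and issue memresize for each file. I am assuming the attribute positions in GUIDE_ADVICE and the memresize argument format; verify both against your release's documentation before running anything like this:

    * Sketch only: the GUIDE_ADVICE layout and the exact memresize
    * syntax below are assumptions; check them for your release.
    OPEN "GUIDE_ADVICE" TO F.ADVICE ELSE STOP "Cannot open GUIDE_ADVICE"
    SELECT F.ADVICE
    LOOP
       READNEXT FILE.ID ELSE EXIT
       READ REC FROM F.ADVICE, FILE.ID ELSE CONTINUE
       NEW.MOD = REC<1>          ;* assumed: recommended modulo
       NEW.BLK = REC<2>          ;* assumed: recommended block size
       EXECUTE "memresize ":FILE.ID:" ":NEW.MOD:",":NEW.BLK
    REPEAT

Even a skeleton like this still needs error handling, logging, and scheduling wrapped around it before you would trust it across thousands of files, and that wrapper is a large part of what you are buying.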

Of course, you can start from ground zero and write your own (there are people who have done so) and have all the features and custom reporting that you want, but I have found that for sites with large numbers of files, FAST does a good job for me and is worth purchasing. Besides, as IBM adopts new technologies, if the factors going into the file-sizing decisions change, Fitzgerald & Long is going to do the work necessary to keep up with those changes, whereas an individual company might not have the resources available to adapt custom file-sizing software.

And no, folks, I do not work for Fitzgerald & Long, so no [AD] brackets required! I just resize large numbers of files every other weekend and use their product. I used to do file sizing by hand, and I would make some decisions differently than what is automatically generated, but on a Saturday when I am resizing 13,000-14,000 files, I can adjust what I need to with FAST to get good database performance and not take all weekend to do it.

Susan Lynch
F.W. Davison & Company
----- Original Message -----
From: "Dave" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Friday, February 15, 2008 10:42 PM
Subject: Re: [U2] File Sizing for Unidata on Windows


FAST is a great product and will save you time in resizing files.

!guide is a great tool, too.

You don't need to purchase FAST and can use !guide instead; that's your choice.

Laura Hirsh <[EMAIL PROTECTED]> wrote:
 Hi all,

I'm working on a project and wanted to get some feedback regarding
others' experiences.

The issue is resizing files for a substantial database. I'm curious about
what tools people use and what their experiences have been when trying to
do the same thing.
What rules of thumb are being used to calculate modulo and block size?
How often do people schedule file resizes? Is it system wide, or on a
subset of files? How do folks manage scheduling resizes in a 24x7 shop?

Some folks recommend FAST, other folks have suggested using the
information available via !guide or file.stats, and then do a !memresize.
The interesting thing is that each of these methods seems to come up with
a different new size recommendation, and as a result, there is a lot of
trial and error. Anyone want to share their experiences? I'd love to hear
them. Thanks in advance,

Laura
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/
