We are on UNIX, but this gives you an idea of what we do. We
have a cron job that looks at the date/time stamp on files at the UNIX
level (UNIX 'find' command using -mtime -1). If a file was updated in
the last 24 hours, a Unidata program goes through and figures out
whether it is a Unidata file. If it is, its name is put in a UNIX flat
file called 'guide.input'. Then 'guide -i guide.input -d3 -r
/usr/ud/RAW/STAT-FILE' is run to build a database of Unidata file stats
in STAT-FILE. Once a month we have an 8-hour outage for server
maintenance, and any file resizing is done at that point based on what
'guide' recommends.
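
        If it helps, here is roughly what that nightly cron job looks
like as a shell sketch. The account path and the CHECK.UDT.FILE program
name are just placeholders, not our real setup; the find options and the
guide command line are the ones described above, and piping ECL into
'udt' is just one common way to drive Unidata from a script:

#!/bin/sh
# Nightly job: find files touched in the last 24 hours and feed the
# Unidata ones to 'guide'.  /usr/ud/PROD and CHECK.UDT.FILE are
# illustrative names only.
cd /usr/ud/PROD || exit 1

# Files updated in the last 24 hours
find . -mtime -1 -type f -print > /tmp/changed.files

# A Unidata program filters that list down to actual Unidata files and
# writes their names, one per line, to guide.input.
# (One way to run an ECL command non-interactively; adjust to taste.)
echo "RUN BP CHECK.UDT.FILE /tmp/changed.files guide.input" | udt

# Build the database of file stats in STAT-FILE
guide -i guide.input -d3 -r /usr/ud/RAW/STAT-FILE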

        That works fine for static files. Dynamic files are a different
issue, as 'guide' does not help much with recommendations for dynamic
files. I usually look at the UNIX level to see how many 'overxxx' parts
there are in the file directory. If there are more than 1 or 2, I look
at recreating the file.
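
        A quick way to eyeball that from the shell (the account path is
illustrative; each dynamic file is a directory holding its 'datxxx' and
'overxxx' parts):

#!/bin/sh
# Report dynamic files that have more than 1 or 2 overflow parts.
for f in /usr/ud/PROD/*; do
    [ -d "$f" ] || continue                 # dynamic files are directories
    n=`ls "$f" | grep -c '^over'`           # count over001, over002, ...
    [ "$n" -gt 2 ] && echo "$f: $n overxxx parts"
done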

        I don't use 'memresize' for dynamic files, as it has a number of
shortcomings. It works, but it does things like create an 'overxxx'
part for every 'datxxx'. I have one file that has 31 'dat' segments and
just one 'overxxx' part, over001. If I used memresize, I would end up
with 31 'overxxx' parts as well.

        memresize also does not allow for use of the TMPPATH parameter.
So you need to be able to hold 2 copies of the file you are resizing in
the file system where it lives. 

        So instead I create a new file in another file system with the
modulo and block size that I want. Then we have a verb called
PHANTOM.COPY. PHANTOM.COPY uses a Unidata select list to do the copy.
It breaks the list up into however many PHANTOMs I specify. So the
statement in Unidata would look like:

PHANTOM.COPY FILEA FILEB LIST.NAME 12

        This would take LIST.NAME, break it up into 12 lists and pass
each list to another PHANTOM that copies 1/12 of FILEA into FILEB. So I
have 12 simultaneous copies running to build the new file. It turns out
to be almost as fast as 'memresize', and I don't have all the extra
'overxxx' parts in the file directory, which in turn makes it easier for
me to monitor whether my dynamic files need attention.
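
        PHANTOM.COPY is an in-house verb, so I can't post it, but the
idea is easy to sketch at the shell level: write the select list out to
a flat file, split it into N chunks, and start N background workers that
each copy the records in their chunk. Everything below is illustrative;
'copy_records' is just a stand-in for whatever does the record-by-record
copy at your site (in our case a catalogued program run as a PHANTOM):

#!/bin/sh
# Sketch of the "split the list, run N parallel copies" pattern.
# LIST.FLAT is assumed to already hold the select list, one record ID
# per line; copy_records is a placeholder for the real copy step.
N=12
total=`wc -l < LIST.FLAT`
per=$(( (total + N - 1) / N ))      # IDs per chunk, rounded up
split -l "$per" LIST.FLAT chunk.

for c in chunk.*; do
    ./copy_records FILEA FILEB "$c" &   # one worker per chunk
done
wait                                    # wait for all N copies to finish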

        Since you are on Windows, this may not help much directly. But
maybe there are ways on Windows to do something similar. - Rod

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Laura Hirsh
Sent: Friday, February 15, 2008 11:24 AM
To: [email protected]
Subject: [U2] File Sizing for Unidata on Windows

Hi all,

I'm working on a project, and wanted to get some feedback regarding
others' experiences.

The issue is resizing files for a substantial database. I'm curious
about what tools people use and what their experience has been when
trying to do the same thing.
What rules of thumb are being used to calculate modulo and block size?
How often do people schedule file resizes? Is it system wide, or on a
subset of files? How do folks manage scheduling resizes in a 24x7 shop?

Some folks recommend FAST, other folks have suggested using the
information available via !guide or file.stats, and then doing a
!memresize. The interesting thing is that each of these methods seems to
come up with a different new size recommendation, and as a result, there
is a lot of trial and error. Anyone want to share their experiences? I'd
love to hear them. Thanks in advance,

Laura
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/
-------