We had the unfortunate experience of having a batch process update a Unidata
static file until it hit the 2 gigabyte UNIX file limit. The programmer's
estimate of how much data we were archiving was around a million records.
Unfortunately, the estimate was way low. In hindsight I should have created
the file as dynamic and I would not have gotten into this pickle. But the
good news is that I was able to recover all but 992 bytes of the file. So I
thought I would document what I did in case anyone else on the list ever
encounters this unfortunate situation.

I would not have been able to do this without some tidbits about Unidata
files that I learned from Wally many years ago. Unidata will not open a
normally created (static) file whose size is not evenly divisible by 1024.
When the process blew past 2 gig, this was the size of the file:

-rw-rw-r--   1 root     mcc      2147483616 Aug  9 22:35
INSURED.ARCHIVE.01.5

The file size is not an even multiple of 1024: 2147483616 / 1024 =
2097151.96875, i.e. 2097151 whole 1024-byte blocks plus 992 leftover bytes.
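
To work the numbers out for a different file, here is a minimal shell sketch
(assuming a POSIX shell; substitute your own file name):

SIZE=$(wc -c < INSURED.ARCHIVE.01.5)  # 2147483616 in this case
echo $((SIZE / 1024))                 # whole 1024-byte blocks to keep: 2097151
echo $((SIZE % 1024))                 # leftover bytes past the last block: 992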

The solution on a UNIX box is to use 'dd' to copy only the whole 1024-byte
blocks into a new file:

dd if=INSURED.ARCHIVE.01.5 bs=1024 count=2097151 \
   of=INSURED.ARCHIVE.REPAIR.01

The new file looks like this after the 'dd':

-rw-rw-r--   1 rabaak   tech     2147482624 Aug 10 10:29
INSURED.ARCHIVE.REPAIR.01
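
As an illustrative sanity check (my arithmetic, not Unidata output), the new
size divides evenly by 1024, which is why Unidata will now open it:

echo $((2147482624 % 1024))           # 0 -- an exact multiple of 1024
echo $((2147483616 - 2147482624))     # 992 -- the bytes that were dropped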

After creating a VOC pointer for the new file, Unidata successfully opens
the file, and I lost only 992 bytes. Better than losing the whole file anyway.
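
For anyone who has not built one by hand before, here is a sketch of the kind
of F-type VOC record involved (the record ID and paths are illustrative;
point them at wherever your repaired file and its dictionary actually live):

001: F
002: INSURED.ARCHIVE.REPAIR.01
003: D_INSURED.ARCHIVE.REPAIR.01

Attribute 2 is the path to the data file and attribute 3 the path to its
dictionary, relative to the account directory.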

COUNT INSURED.ARCHIVE.REPAIR.01

1799394 record(s) counted.

Hopefully you never encounter this problem, but if you do on UNIX, try 'dd'
to get back what you can. - Rod