> >(IMHO: Anything that needs to create a single 6-gig file is probably
> >broken, and should split the file into multiple parts.)
> 
> Some of our customers' data sets _start_ with that kind of size.

You imply that a single dataset MUST reside in one file. Why? I find it
difficult to believe that anything that big is being processed all in
one chunk. The overhead of dividing it into 1 GB chunks seems minimal
to me; intelligent handling of the chunks could even improve speed.
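For illustration (a minimal sketch of my own, not something from this
thread): a reader can walk a set of numbered chunk files, say data.000,
data.001, and so on (hypothetical names), in a few lines of C. The
process() stub stands in for whatever the application does with each
block of data.

    /* Sketch: process a dataset split across numbered chunk files.
     * Filenames and process() are hypothetical placeholders. */
    #include <stdio.h>

    static void process(const char *buf, size_t n)
    {
        /* placeholder: consume one block of the dataset */
        (void)buf; (void)n;
    }

    int main(void)
    {
        char name[64];
        char buf[65536];
        int i;

        for (i = 0; ; i++) {
            FILE *fp;
            size_t n;

            sprintf(name, "data.%03d", i);   /* data.000, data.001, ... */
            fp = fopen(name, "rb");
            if (fp == NULL)
                break;                       /* no more chunks */
            while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
                process(buf, n);
            fclose(fp);
        }
        return 0;
    }

Nothing here depends on any single file exceeding 1 GB, and the chunks
could just as easily be read in parallel by separate processes.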

Peter Lister                                    [EMAIL PROTECTED]
Computer Centre,
Cranfield Institute of Technology,        Voice: +44 234 754200 ext 2828
Cranfield, Bedfordshire MK43 0AL UK         Fax: +44 234 750875
