In <9307010839.AA29561@xdm039> [EMAIL PROTECTED] (Peter Lister, 
Cranfield Computer Centre) writes:


>> >(IMHO: Anything that needs to create a single 6-gig file is probably
>> >broken, and should split the file into multiple parts.)
>> 
>> Some of our customers data sets _start_ with that kind of size.

>You imply that a single dataset MUST reside in one file. Why? I find it
>difficult to believe that anything that big is being processed all in
>one chunk. The overhead of dividing it into 1Gb chunks seems
>minimal to me; intelligent handling of the chunks could even improve speed.

If you have a Fortran program that simply makes a linear pass through
the file, switching files is pure overhead.  We know of several
algorithms like that, and we don't believe in imposing limits on what
our customers can do.  They buy our machines because we scale and can
handle large data sets; at the same time, it's easy to see that this
isn't something every workstation has to support tomorrow.
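
To make the overhead concrete, here is a rough sketch in C of the same
linear pass done over one big file and over a data set split into
pieces.  The file names, chunk count and buffer size are invented for
illustration; they aren't anything a customer actually runs.

#include <stdio.h>

#define NCHUNKS 6                        /* assumed: six ~1 GB pieces */

static void process(char *buf, size_t n)
{
    (void) buf; (void) n;                /* stand-in for the real work */
}

/* One file: a single open and one read loop. */
void pass_single(void)
{
    char buf[65536];
    size_t n;
    FILE *fp = fopen("data", "r");       /* hypothetical single file */

    if (fp == NULL)
        return;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        process(buf, n);
    fclose(fp);
}

/* Chunked: the same loop, plus the naming and open/close
 * bookkeeping for each piece -- the overhead in question. */
void pass_chunked(void)
{
    char buf[65536], name[32];
    size_t n;
    int i;

    for (i = 0; i < NCHUNKS; i++) {
        FILE *fp;

        sprintf(name, "data.%03d", i);   /* hypothetical chunk names */
        fp = fopen(name, "r");
        if (fp == NULL)
            continue;
        while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
            process(buf, n);
        fclose(fp);
    }
}

The inner loop is identical in both; the chunked version just adds the
file-switching bookkeeping, which buys you nothing if all you ever do
is read straight through.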

It's just another case of "Nobody could ever want more than 16K of
main memory."  Some markets have demonstrated that they can and do
want more than the current convenient limit, so we gave it to them.

Rob T
--
Rob Thurlow, Convex AFS project leader
Convex Computer Corporation, Richardson, TX
(214) 497-4405          [EMAIL PROTECTED]
