Bob Crandell wrote:

> I'm talking to a guy about a new workstation running Redhat.  It
> might have a striped, mirrored set of drives totaling 320 gigs for
> capacity and throughput.  He does some heavy arithmetic and uses
> most of this to hold temporary files that could be 5 - 10 gigs in
> size.  The computer he has now runs into the 2 gig limit.  Has this
> limit been eliminated?  Has anyone generated a 3 gig file and run
> less, cat or more on it?

I successfully created an 8 TB file on OpenBSD just now.  Kind of
irrelevant, but fun nonetheless.

RedHat 7.2, using an ext2 filesystem, maxed out at 4 GB.  Debian Woody
and a ReiserFS filesystem only went to 1 GB (but the kernel is about
a year old on that box).
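
In case anyone wants to reproduce the test: what I did amounts to
seeking past the 4 GB mark and writing one byte.  A minimal sketch of
that follows (the file name and the 5 GB offset are arbitrary, and it
assumes glibc with large-file support compiled in, as in point 3 below):

    /* bigfile.c -- rough sketch: seek well past the 4 GB mark and
     * write one byte, leaving a sparse file just over 5 GB.
     * Compile with:  gcc -D_FILE_OFFSET_BITS=64 -o bigfile bigfile.c
     */
    #include <stdio.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : "bigfile.tmp";
        off_t target = (off_t)5 * 1024 * 1024 * 1024;   /* 5 GB */
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* With a 64-bit off_t this seek succeeds only if the kernel
         * and the filesystem both accept offsets past 4 GB. */
        if (lseek(fd, target, SEEK_SET) == (off_t)-1 ||
            write(fd, "x", 1) != 1) {
            perror("lseek/write");
            return 1;
        }
        close(fd);
        printf("created %s, size just over 5 GB\n", path);
        return 0;
    }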

There are at least four things that have to be right to have
64-bit file support:

1. The file system.  The on-disk data structures have to have room to
   store a file size bigger than 32 bits and have to have a way to
   point to enough data blocks in each file.

2. The kernel.  It has to use 64-bit file offsets and sizes
   throughout.  No problem there with the 2.4 kernels.

3. libc.  Needs versions of the system calls that use 64-bit offsets.
   With glibc, that means using lseek64(), stat64(), and off64_t,
   or compiling with -D_FILE_OFFSET_BITS=64.

4. Applications.  Those that seek or look at files' sizes need to be
   compiled with step 3 in mind (see the sketch after this list).
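
To make points 3 and 4 concrete, here's a hypothetical example (not
code from any of the boxes above) of the kind of program that has to
be rebuilt: a plain stat() on a file bigger than 2 GB fails with
EOVERFLOW when off_t is 32 bits, but the same source works when
compiled with -D_FILE_OFFSET_BITS=64.

    /* filesize.c -- print a file's size.  Built the ordinary way on
     * 32-bit glibc, stat() returns EOVERFLOW for files over 2 GB;
     * built with 64-bit offsets, the same source just works.
     * Compile with:  gcc -D_FILE_OFFSET_BITS=64 -o filesize filesize.c
     */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        struct stat st;

        if (argc != 2) {
            fprintf(stderr, "usage: filesize file\n");
            return 1;
        }
        if (stat(argv[1], &st) != 0) {
            perror("stat");          /* EOVERFLOW => 32-bit off_t */
            return 1;
        }
        printf("%s: %lld bytes\n", argv[1], (long long)st.st_size);
        return 0;
    }

Whether less, cat, or more cope with a 3 gig file comes down to the
same thing: they only handle it if they were built this way.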

So far as I can tell, I had parts 2-4 correct when I tried to
create large files on RedHat and Woody.  So we need a better
file system.

Maybe the best bet for large files would be Gentoo Linux, which
includes the XFS patches.  XFS is a real 64-bit filesystem.

Those of us who remember the 16- to 32-bit transition of the late '70s
and early '80s find this all eerily familiar.

-- 
Bob Miller                              K<bob>
kbobsoft software consulting
http://kbobsoft.com                     [EMAIL PROTECTED]
