eric, first, let me tell you the reason for the limit in the first place.

the 2GB limit isn't linux's fault.  it's the fault of the ISO/ANSI C team.
many operating systems use (what i call) "strange" data types like
fpos_t and off_t.  functions that use these types *can* break when you
deal with files larger than 2GB.  the answer is to make the weird data
types 64 bits wide.  to wit:

ssize_t write(int fd, const void *buf, size_t count);
 ^                                      ^
 |                                      |
weird                                 weird

i think you can find the definitions for these weird types in <sys/types.h>.

however, there are functions like fseek() and ftell() which use a good old
fashioned "long int" for the offset, and you can't play around with that.


AFAIK, 2.2.18 does not support files larger than 2GB without a kernel patch.
i do believe, though, that the patch made it into the 2.3 kernel.

i think you have 4 options:
1- use 64 bit machines
2- upgrade to 2.4
3- install reiserfs
4- install 2.2.18 with a patch

note, i've /heard/ (don't take my word for it) that reiserfs simply
processes the request without producing an error if you try to seek past
2GB in a file.

with all these options, you're treading on thin ice.  i don't know much
about files larger than 2 GB.

eric, the best advice i can give you is to contact va linux.  you paid a
big premium for their products.  i'm sure they'd be more than happy to
answer your question (they better be!).

pete

ps- redirected to vox-tech so this email can be archived.

On Tue 27 Mar 01, 11:02 AM, Eric Engelhard said: 
> As I understand it, kernel 2.2.18 supports large files (>2GB). Does
> anyone have direct experience upgrading RedHat 6.2 and/or VA Linux
> tweaked Redhat 6.2 to the 2.2.18 kernel? These are production machines
> and I would like to hear about any issues BEFORE I start upgrading. And
> no, I won't even consider the 2.4 kernel at this point.
> 
> More details for the interested:
> 
> I am running tweaked RedHat 6.2 with the 2.2.14 kernel across 6
> clustered servers. This kernel is unable to support large files (>2GB),
> but so far, I have been able to avoid the issue by uncompressing with
> pipes, such as "uncompress -c dbEST.FASTA.dailydump.Z | formatdb -i
> stdin -p F -o T -n dbEST". This works because both the compressed files
> and the formatted database files are (at the moment) smaller than 2GB,
> even though the flat file would be ~4.2GB. Also, I am not sure how large
> the temporary file is getting. How would I go about checking?
> 
> I now have two compelling reasons to store large files. First, we may
> purchase a proprietary database distributed in 10GB chunks. Second, I am
> testing a proprietary accelerated hardware system (massively parallel
> FPGA) which can use my linux boxes to "spool" the uncompressed databases
> in order to update without interrupting real time interface and batch
> jobs.
> --
> Eric Engelhard
> 

-- 
"Coffee... I've conquered the Borg on coffee!"               [EMAIL PROTECTED]
       -- Kathryn Janeway on the virtues of coffee           www.dirac.org/p
