On Sun, 2 Feb 2003, Tom Oehser wrote:

> Well, I have mentioned before that dd-lfs is the only thing
> on tomsrtbt that can handle files over 2GB.  This is fine
> for piping archives to and from a big archive file.  But what
> about archiving and restoring when files in the filesystems
> are over 2GB?  Short of upgrading to glibc2 or NewLib, or

Just some other warnings about cpio and tar you will need:

1) The old/standard binary cpio format has a 32-bit file length.
2) The cpio -c format is limited to 8589934591-byte files.
3) Depending on the implementation, the tar file format might be
   limited to just short of 1TB files or 100GB files.

Obviously the cpio format limit of just under 8GiB isn't enough. The
higher tar limit should be enough for a little while, but 100GB could
(possibly) be reached for a single file now.

BTW:
   Creating or extracting these files needs just two extra system calls
   (assuming the kernel copes): the open64() that disables the 2GiB limit
   (just an extra flag IIRC), and the stat64 or llseek call to find the
   file length for the header. The 64-bit "long long" type has been
   available in GCC since before vsn 1.40, and I believe the *64 syscalls
   were added in later libc5 versions.

   Do you need anything except tar (pax) to look at big files?

Later: Okay, I've just looked, and yes, it's in the kernel's asm/fcntl.h:

 #define O_LARGEFILE     0100000

And under Debian 1.3 (libc5, 162Mb chroot image) "man llseek" gives:

_syscall5(int, _llseek, uint, fd, ulong, hi, ulong, lo,
            loff_t *, res, uint, wh);

int _llseek(unsigned int fd,  unsigned long
            offset_high, unsigned long offset_low,
            loff_t * result, unsigned int whence);


-- 
Rob.                          (Robert de Bath <robert$ @ debath.co.uk>)
                                       <http://www.cix.co.uk/~mayday>
Google Homepage:   http://www.google.com/search?btnI&q=Robert+de+Bath
