> Hmm... Perhaps you're still missing my original point?  I'm talking about
> a file with 96GB in addressable bytes (well, probably a bunch of files,
> given logical filesize limitations, but let's say for simplicity's sake
> that we have a single file).  Its actual "size" (in terms of allocated
> blocks) will be only a bit larger than 2.1GB, directly proportional to
> the used size of the dataset.  (Discrepancies only come into play when
> record size != block size, but that can be worked around somewhat.)
>
> In other words, ls -ls will report the "size" as some ridiculously large
> number, but will show a much smaller block count.  So, assuming four records
> are added to the file on block boundaries, the file will actually only use
> four blocks... nowhere near 96GB!
>
> In the UNIX filesystem (ya, I know.. just pick one :-), size of file !=
> space allocated for file.  Thus, my original questions were centered
> around filesystem holes.  I.e., non-allocated chunks in the middle of a
> file.  When trying to READ from within a hole, the kernel just sends back
> a buffer of zeros... which is enough to show that the record is not
> initialized.  Actually, something like an "exists" function for a record
> wouldn't touch the disk at all!  When writing to a hole, the kernel simply
> allocates the necessary block(s).  This is really fast, too, for creation,
> as the empty set can be written to disk with touch(1), and uses far less
> memory than virtual initialization or memory structures ;-)
>
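A minimal sketch of the behaviour described above, in C.  The file name,
record contents, and offsets here are invented for illustration; pwrite(2),
pread(2), and fstat(2) are standard calls:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    main(void)
    {
            char record[512] = "record #4";  /* invented record */
            char buf[512];
            struct stat sb;
            int fd = open("dataset", O_RDWR | O_CREAT | O_TRUNC, 0644);

            if (fd == -1)
                    return 1;

            /* One record written 40GB in: the logical size jumps,
               but blocks are allocated only for the record itself. */
            pwrite(fd, record, sizeof(record),
                (off_t)40 * 1024 * 1024 * 1024);

            /* Reading from inside the hole just returns zeros. */
            pread(fd, buf, sizeof(buf), (off_t)1024 * 1024);

            /* st_size vs. st_blocks is exactly what ls -ls shows:
               a huge "size" next to a tiny block count. */
            fstat(fd, &sb);
            printf("size %lld bytes, %lld blocks allocated\n",
                (long long)sb.st_size, (long long)sb.st_blocks);

            close(fd);
            return 0;
    }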
What will happen if somebody (possibly you, as majordomo says) tries to
make a backup of that file?

Will the copy also have holes, or will the file suddenly use all 96GB?
It will at least do so if one does cat file > file.bak.
Tar will probably do the same.

I'd be afraid to create something that could easily blow up when normal
operations are applied to it.
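
For what it's worth, a copy can keep the holes by noticing all-zero blocks
and seeking over them instead of writing them out; that is roughly the
trick hole-aware archivers use, while cat(1) writes every zero byte and
balloons the copy to the full logical size.  A sketch (the function name,
block size, and helper are invented):

    #include <unistd.h>

    #define BLK 8192                /* invented copy block size */

    /* Return 1 if the buffer contains only zero bytes. */
    static int
    all_zero(const char *p, ssize_t n)
    {
            while (n--)
                    if (*p++)
                            return 0;
            return 1;
    }

    /* Copy 'from' to 'to', leaving holes where the source reads
       back as zeros.  (A hole in the source reads back as an
       all-zero buffer, so it is detected here like any zero run.) */
    int
    sparse_copy(int from, int to)
    {
            char buf[BLK];
            ssize_t n;
            off_t end = 0;

            while ((n = read(from, buf, sizeof(buf))) > 0) {
                    end += n;
                    if (all_zero(buf, n))
                            lseek(to, n, SEEK_CUR); /* leave a hole */
                    else if (write(to, buf, n) != n)
                            return -1;
            }
            /* Extend the copy if it ends in a hole. */
            if (n == 0 && ftruncate(to, end) == 0)
                    return 0;
            return -1;
    }

Reading through the holes still costs a sequential pass over all 96GB of
zeros, though, so this saves disk space in the copy, not backup time.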

Leif




