________________________________________
On Mon, 1 Jan 2001, Clarence Verge wrote:
> And no.
>
> That situation only exists at the moment you have just created the compressed
> disk. As soon as you manipulate a few files, delete, etc. you create the
> same holes in the "big" file that would have been made on the UNcompressed
> disk. These holes obviously "map" to the disk, but with a changed effective
> cluster size due to the compression.
>
> Eventually, your "big" file becomes just as fragmented as your HD would be.
No - but yes, I think. As a simple example, say you put 30 little batch
files in a zip archive, and you have some utility, ``run'', that runs
each batch file by extracting it from the archive (batches.zip),
passing arguments to it, e.g.:
Run.bat-------------------------------
rem %1 is the name of a batch file inside batches.zip,
rem e.g., run program.bat /A /B
pkunzip batches.zip %1
call %1 %2 %3 %4 %5 %6 %7 %8 %9
del %1
--------------------------------------
Ordinarily each of these batch files would take say 2K of space on
the disk (cluster size = 2K) even though each of them is perhaps only
80 bytes in size. By putting them together in the file, batches.zip
you get (assume) a file that is over 2K in size. Then the file can
be stored in 2 clusters, and you waste only the space from the second
cluster that is not fully taken up by the file.
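The arithmetic above can be sketched out quickly. This is a hypothetical illustration in Python, not anything from the original post; the 2K cluster size, 80-byte files, and ~2.1K archive size are all the text's assumed numbers:

```python
import math

CLUSTER = 2048       # assumed cluster size (2K)
FILE_SIZE = 80       # each little batch file, per the text
N_FILES = 30
ARCHIVE_SIZE = 2150  # hypothetical zipped size, "just over 2K"

# Stored separately: each 80-byte file still consumes a whole cluster.
separate = N_FILES * CLUSTER

# Stored together: the archive occupies only as many clusters as it needs.
archived = math.ceil(ARCHIVE_SIZE / CLUSTER) * CLUSTER

print("separate:", separate, "bytes")   # 30 clusters
print("archived:", archived, "bytes")   # 2 clusters
print("saved:   ", separate - archived, "bytes")
```

Under these assumptions the 30 separate files tie up 30 clusters (61,440 bytes) while the single archive ties up only 2 (4,096 bytes), which is the space saving the post is describing.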
Now you start editing, adding, and deleting batch files in batches.zip,
but the size of batches.zip remains roughly the same. Will not batches.zip
still be stored in SOME combination of 2 clusters? The particular clusters
on the disk may change, but I don't see how the fragmentation process can
result in batches.zip being stored in more than 2 clusters, thus wasting
more and more space. After each change, the disk must still store the
entire (say 2.1K) file.
It is true that the 2 clusters can become widely separated on the disk,
thus increasing read access time. But I think the wasted disk space problem
remains solved.