One more thing: the big deal about DoubleSpace/DriveSpace is not just
the compression itself, which of course increases your free space,
but mostly the variable-sectors-per-cluster allocation: you gain a lot
of extra free space by cutting the slack occupied by the many, say,
sub-1KB text files on an 8KB/cluster FAT filesystem.
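
As a rough example: a 300-byte file on an 8KB/cluster volume still
occupies a full 8KB cluster, so about 7.7KB per file is slack; a
thousand such files waste roughly 7.5MB, which allocation at 512-byte
sector granularity would cut to well under 0.5MB.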

Just my 2c...
AItor



2008/3/18, Eric Auer <[EMAIL PROTECTED]>:
>
> Hi!
>
> > Maybe by adding compression it could become a replacement for
> > DoubleSpace? (running it through the network redirector, obviously)
>
> Nope, doublespace compresses the actual data on your disk,
> not the space which is free anyway ;-). I think it would be
> nice to have a driver for one of those free / open compressed
> embedded filesystems for DOS. Some ad-hoc suggestions for a
> new filesystem:
>
> Keep a normal FAT and boot sector and stuff, but wrap all the
> access to the data clusters. For each data cluster, keep some
> pointer to where it ACTUALLY starts (granularity: cluster, sector
> or byte) and allow the clusters to be compressed. You would need
> some offline "make compressed version of this filesystem" tool
> and you would not be able to write to the filesystem later.
> To keep the implementation relatively simple, you can assume that
> each cluster is compressed separately, for example with LZSS.
> Then you only have to look at one cluster at a time when decompressing.
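
To make that offline tool concrete, the core loop could look roughly
like the C sketch below. Everything here is made up for illustration
(the cluster size, the lzss_compress() helper, the start[] table) and
is not taken from any existing driver: each cluster is compressed on
its own, stored verbatim when LZSS does not shrink it, and its start
offset is recorded so the read side can find it again.

    #include <stdint.h>
    #include <stdio.h>

    #define CLUSTER_SIZE 8192u   /* example: 8 KB clusters */

    /* Hypothetical helper: LZSS-compress src into dst and return the
     * compressed size, or 0 if the result would not be smaller than
     * CLUSTER_SIZE (i.e. the cluster is incompressible).              */
    extern uint32_t lzss_compress(const uint8_t *src, uint32_t srclen,
                                  uint8_t *dst, uint32_t dstlen);

    /* Write the n data clusters of `in` in compressed form to `out`
     * and fill start[0..n]: start[i] is the byte offset of cluster i
     * in the output, start[n] marks the end of the last cluster.     */
    int compress_clusters(FILE *in, FILE *out, uint32_t n, uint32_t *start)
    {
        uint8_t raw[CLUSTER_SIZE], packed[CLUSTER_SIZE];
        uint32_t pos = 0;

        for (uint32_t i = 0; i < n; i++) {
            if (fread(raw, 1, CLUSTER_SIZE, in) != CLUSTER_SIZE)
                return -1;
            uint32_t len = lzss_compress(raw, CLUSTER_SIZE,
                                         packed, CLUSTER_SIZE);
            start[i] = pos;
            if (len == 0) {               /* incompressible: store as-is */
                fwrite(raw, 1, CLUSTER_SIZE, out);
                pos += CLUSTER_SIZE;
            } else {
                fwrite(packed, 1, len, out);
                pos += len;
            }
        }
        start[n] = pos;                   /* end marker for the last cluster */
        return 0;
    }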
>
> Max amount of extra metadata: 32 bits per cluster. Note that
> you do not store the size of the compressed cluster nor any
> "this cluster is not compressed" flag - you can easily derive
> both from the difference between this and the next cluster start.
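
And the matching read side, again only a sketch with invented names:
the compressed length of cluster i is simply start[i+1] - start[i],
and a length equal to the full cluster size means "stored, not
compressed", so no separate flag is needed.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define CLUSTER_SIZE 8192u

    /* Hypothetical helper: decompress LZSS data, return bytes written. */
    extern size_t lzss_decompress(const uint8_t *src, size_t srclen,
                                  uint8_t *dst, size_t dstlen);

    /* start[] is the per-cluster offset table from the offline tool.   */
    int read_cluster(FILE *img, const uint32_t *start, uint32_t i,
                     uint8_t out[CLUSTER_SIZE])
    {
        uint32_t len = start[i + 1] - start[i];   /* derived, not stored */
        uint8_t buf[CLUSTER_SIZE];

        if (fseek(img, (long)start[i], SEEK_SET) != 0 ||
            fread(buf, 1, len, img) != len)
            return -1;
        if (len == CLUSTER_SIZE) {                /* uncompressed cluster */
            memcpy(out, buf, CLUSTER_SIZE);
            return 0;
        }
        return lzss_decompress(buf, len, out, CLUSTER_SIZE) == CLUSTER_SIZE
               ? 0 : -1;
    }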
>
> To reduce the space taken by the extra metadata, you can use
> a more "granular" offset for the actual data location, and
> you can compress the offset array by storing only the size of
> each compressed cluster instead of its offset. To avoid too
> much speed loss, you could still keep the offset of every Nth
> cluster in some array in RAM to reduce the needed "counting".
> You can even store the size of each compressed cluster in a
> "granular" way, e.g. by requiring each compressed cluster to
> start at a sector boundary, so that its compressed size is a
> multiple of the sector size (ignore the trailing garbage).
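
A sketch of that sector-granular scheme (again, names and numbers are
just my illustration): store each compressed cluster's size in whole
sectors, keep a full offset only for every Nth cluster, and count
sectors from the nearest checkpoint to locate any cluster.

    #include <stdint.h>

    #define SECTOR_SIZE     512u
    #define CHECKPOINT_STEP 64u   /* one full offset kept per 64 clusters */

    /* size_in_sectors[i]: compressed size of cluster i, rounded up to
     * whole sectors - one byte covers clusters up to 127.5 KB.
     * checkpoint[j]: sector where cluster j*CHECKPOINT_STEP begins.     */
    uint32_t cluster_start_sector(const uint8_t *size_in_sectors,
                                  const uint32_t *checkpoint, uint32_t i)
    {
        uint32_t base = i / CHECKPOINT_STEP;
        uint32_t sec  = checkpoint[base];

        /* the "counting" Eric mentions, bounded by CHECKPOINT_STEP */
        for (uint32_t k = base * CHECKPOINT_STEP; k < i; k++)
            sec += size_in_sectors[k];
        return sec;
    }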
>
> That way, you can get by with, for example, 1 byte of extra
> metadata per cluster. Actually, the whole "special tables for
> the compressed filesystem" stuff could be stored in the space
> taken by the second FAT :-). Then a compressed filesystem
> cannot take more space than the uncompressed version, even
> in the worst case of "all filled with incompressible files".
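
A quick sanity check on that (my numbers, not Eric's): a FAT16 entry
is 2 bytes, so the second FAT occupies 2 bytes per data cluster.
One byte per cluster for the compressed sizes plus, say, a 4-byte
checkpoint offset per 64 clusters comes to about 1.1 bytes per
cluster, which fits comfortably; FAT32's 4-byte entries leave even
more room, and even FAT12's 1.5 bytes per cluster would be enough.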
>
> Just some inspiration for the compressed FAT FS dreamers :-D.
>
> Eric
>

