> The negative effects are when you need to store large files,
> essentially the file needs to be broken up into smaller chunks, and so
> takes longer to store/retrieve.  I don't think the size of your FAT
> table matters anymore; way back when, it was limited, and so you were
> limited by disk size vs cluster size vs FAT table size.
>
> Or something like that.

Something like that.  The three FAT addressing schemes I know of in wide use
are FAT12, FAT16 and FAT32, whose table entries are 12, 16 and 32 bits wide.
The usable cluster counts are slightly less than 2^12, 2^16 and 2^28 (each
scheme reserves a handful of entries, and FAT32 only actually uses 28 of its
32 bits).  FAT32 is only marginally slower than FAT16 in practice, and
FAT32 is measurably *faster* than NTFS.
Generally you would not want to use anything less than FAT32 for a freenet
datastore, and if you want a freenet datastore > 2GB you will need to use
FAT32, because the largest cluster size allowed by FAT12 and FAT16 is 32768
bytes.  The clincher is obviously that not all operating systems support
FAT32 (e.g. Windows NT does not out of the box, but www.sysinternals.com
have a FAT32 driver for NT).
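
To put rough numbers on that, here is a quick sketch in Python (the exact
cluster-count limits are approximate, since each scheme reserves a few
entries):

    # Rough upper bounds on FAT volume size: usable clusters times the
    # largest cluster size.  Cluster-count limits are approximate.
    MAX_CLUSTER_SIZE = 32768  # 32 KB, the largest cluster FAT12/16 allow

    fat_limits = {
        "FAT12": 2**12 - 16,   # ~4,080 clusters (approximate)
        "FAT16": 2**16 - 16,   # ~65,520 clusters (approximate)
        "FAT32": 2**28 - 16,   # only 28 of the 32 bits are usable
    }

    for name, clusters in fat_limits.items():
        print(f"{name}: ~{clusters * MAX_CLUSTER_SIZE / 2**30:.1f} GB max volume")

which gives roughly 0.1 GB for FAT12, 2.0 GB for FAT16 and several TB for
FAT32 - hence the 2GB cutoff above.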

Not that this should make a big difference - if you're planning to use your
datastore under both Linux and Windows then any old FAT partition should do.
You may run into problems with an NTFS partition (I don't know how reliable
the NTFS support in Linux is).

If you were planning to use Windows 2000 or XP exclusively, I might suggest
using a compressed directory as your freenet datastore - not because of the
disk space this saves (it would save very little, probably - see the end of
this message), but because compression reduces the number of clusters each
file actually occupies, so less space is 'wasted' at the end of each file's
last cluster.  Ideal if you care about space but don't care so much about
extra processing time.  It is obviously an NTFSv5 feature only, though (and
so not applicable to the original FAT32 posting).

"OS-level" compressing of directory trees in pre-NTFSv5 operating systems
(i.e. before Windows 2000) can be achieved by using commercial software like
www.zipmagic.com/zipmagic, which (among other things) makes zip files appear
to the operating system as regular 'explorable' folders, or by using the
built-in DriveSpace / DoubleSpace utilities to set up a virtual compressed
drive.  Actually I would make a personal recommendation for zipmagic, it
really is rather good, and I'm sure there must be a less expensive
alternative available
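
Zipmagic itself is commercial and closed, but the basic idea - treating a
zip archive as a browsable folder - is easy to sketch with Python's
standard zipfile module (store.zip is a hypothetical archive standing in
for a datastore, not anything zipmagic actually creates):

    import zipfile

    # Browse and read a zip archive as if it were a directory, without
    # extracting it to disk first.  Assumes the archive is non-empty.
    with zipfile.ZipFile("store.zip") as archive:
        for info in archive.infolist():
            print(info.filename, info.file_size, "bytes")
        # Read a single member straight out of the archive
        first = archive.infolist()[0]
        data = archive.read(first.filename)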


On my NTFS partition, my store currently has about 3% wasted space from
cluster allocation.  Setting the NTFS Compress flag shrinks it so that my
store uses only .5% more space 'on disk'.  (Yes - the compressed datastore
still uses more than the 'on paper' amount of disk space, mainly because the
datastore is encrypted and therefore inherently incompressible, coupled with
the still-necessary cluster allocation.)
I would therefore expect similar results under FAT32 - that is, a .zip
datastore with zipmagic or similar using only about .5% more space than the
datastore size on paper.  However, that is only likely to be true if you can
keep the .zip file fragments together...  I don't know what zipmagic's
fragmentation guarantees are, and I wouldn't be surprised if there were no
guarantees whatsoever.
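
For what it's worth, that ~3% figure is about what you'd predict: on
average each file wastes half a cluster at its tail, so the overhead is
roughly (cluster size / 2) / (average file size).  A quick sketch - the
cluster and file sizes here are assumptions for illustration, not
measurements from my store:

    # Estimate slack from cluster allocation: each file wastes, on
    # average, half a cluster at its tail.
    cluster_size = 4096        # bytes; a typical NTFS default
    avg_file_size = 64 * 1024  # assumed average file size in the store

    slack = (cluster_size / 2) / avg_file_size
    print(f"expected wasted space: {slack:.1%}")  # ~3.1%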

It is possible to preallocate files under NTFS using tools such as Contig
(www.sysinternals.com) to ensure that they do not fragment - however this
only really works in practice for files which do not grow and shrink
unpredictably, such as files which are written to rarely but read often.
I'm guessing such a tool would be of only limited value for a freenet
datastore.  I have no idea if comparable utilities are available for use
with FAT partitions.
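
The naive, portable version of preallocation is just to extend a file to
its final size before using it, something like the sketch below (this is
only seek-and-write preallocation - Contig goes further and uses the NTFS
defragmentation APIs to pick a contiguous run; the file name and size here
are made up):

    # Sketch: grow a file to its final size up front, giving the
    # filesystem a chance to allocate it contiguously.
    def preallocate(path, size):
        with open(path, "wb") as f:
            f.seek(size - 1)
            f.write(b"\0")  # file is now 'size' bytes long

    preallocate("datastore.bin", 256 * 2**20)  # hypothetical 256 MB file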

dave

