Hi Clarence,
Quite a test scheme, but are you in fact comparing drives or cache
programs?
Could it be that the optimum hardware disk-cache size is simply related
to bytes/cyl (= bytes read per rotation), and is affected by
fragmentation only because a fragmented drive would stuff the cache with
useless bytes, while software caches follow the FATs and pull from the
hardware cache only those bytes that belong to the file being read, and
are only slowed down by the extra head moves?
(Linux takes the trouble to display hard-disk caches during startup, but
never defrags as far as I know; it just scans the disk every now and
then and reports free space and fragmentation. But then again, why would
a filesystem start to fragment unless the drive is nearly full (or at
least the last cylinder has been reached)? In DOS it would then make
sense to use some delete protection in order to prevent fragmentation,
since deletes can then be made permanent only when needed and
immediately followed by a defrag.)
The best way to fragment a drive seems to be to make a lot of small
files (copy your e-mail), then delete all the even-numbered ones, then
make one big file to fill up the gaps. For the last step I regularly use
fill.bat:
echo nn>fill
:rep
type fill>>fill
goto rep
(to be interrupted with Ctrl-C, but only after a "type" has finished)
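For what it's worth, the whole recipe could be sketched on a Unix box
roughly like this (the directory name and file counts are just picked
for illustration, not anything from your test):

```shell
# Sketch of the fragmentation recipe:
# 1. make a lot of small files, 2. delete the even-numbered ones,
# 3. grow one big file to fill the gaps left behind.
mkdir -p fragtest
for i in $(seq 1 100); do
  echo "small file $i" > "fragtest/small$i"
done
for i in $(seq 2 2 100); do
  rm "fragtest/small$i"            # free every second slot
done
# grow the filler by doubling, like fill.bat does;
# 10 passes turn the 3-byte seed into 3072 bytes
echo nn > fragtest/fill
for pass in $(seq 1 10); do
  cat fragtest/fill fragtest/fill > fragtest/fill.tmp
  mv fragtest/fill.tmp fragtest/fill
done
```

Note that the DOS trick "type fill>>fill" appends a file to itself; the
Unix equivalent "cat fill >> fill" can run away (cat may keep reading
what it just wrote), so the sketch doubles via a temporary file and a
fixed number of passes instead of an endless loop you break with Ctrl-C.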
How do you do this?
I'm just not sure whether a cache program would select blocks of free
space in order of size, to postpone the worst till last on the
assumption that the drive might get defragged sometime. Norton Speedisk
allows clicking on a block on the disk map to see which file it is; PC
Tools gives a map per file, and Compress gives file names while running;
MS never gives any info to play with (and the rest sits on my PC in the
other town).
I guess this won't help much, but I'd like to understand it better
myself. (I also have a drive that gives trouble, but I count myself
lucky that I don't really need it for the moment.)
Bart