Flip ter Biecht wrote:
> 
> Hi Clarence,
> 
> Quite a test scheme, but are you in fact comparing drives or cache
> programs?

Hi Bart;
Well, the intent is to test/optimize cache programs - but I ran into 
unreasonable and unresolvable difficulties with a Quantum ProDrive, so 
I began a side trip into drive comparison, looking for a way to 
understand the strange response of this dumb drive. <G>

> Could it be that hardware diskcache optimum sizes are just related to
> bytes/cyl (=bytes read per rotation), and are affected by fragmentation,
> only because a fragmented drive would stuff the cache with useless bytes,
> while software caches follow the FATs and just derive from the hardware
> cache those bytes that belong to the to-be-read file, and are only
> delayed by extra head moves?

That is pretty much my logic, with the added factors introduced by multiple
heads and various sector sizes.  When you read a sector, I assume that you
actually read n sectors, where n is the number of heads.
If you have 15 heads and 2k clusters (4 sectors each), every cluster you
DO need should leave 11 extra sectors in the HD's internal buffers that
you didn't ask for.  If the cluster you want is at the beginning, this
gives a potential lookahead of 2+3/4 clusters for free, i.e. NO
rotational time !

If the cache software is smart enough to NEVER look in the next rotational 
sector, then setting the lookahead to MAX=12 means that sometimes you will
get three complete clusters in advance and other times fewer, or none. 
But then the software must also be smart enough to discard any UNWANTED
advance clusters, or the cache and its index fill with crap. :(
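Back-of-the-envelope, the arithmetic above works out like this (a sketch
only, assuming 512-byte sectors and one sector read per head per rotation):

```python
# Hypothetical sketch of the lookahead arithmetic, NOT measured behaviour.
# Assumes 512-byte sectors and that one read pulls one sector per head.
SECTOR_BYTES = 512
HEADS = 15
CLUSTER_BYTES = 2048  # "2k clusters"

sectors_per_cluster = CLUSTER_BYTES // SECTOR_BYTES       # 4
sectors_per_read = HEADS                                  # 15, one per head
surplus_sectors = sectors_per_read - sectors_per_cluster  # 11 extra sectors

# Best case: the wanted cluster sits at the start of the read, so the
# surplus is free lookahead with no extra rotational delay.
free_lookahead_clusters = surplus_sectors / sectors_per_cluster  # 2.75

# With software lookahead capped at MAX=12 sectors, you can get at most
# three whole clusters in advance - sometimes fewer, sometimes none.
max_advance_clusters = 12 // sectors_per_cluster          # 3

print(surplus_sectors, free_lookahead_clusters, max_advance_clusters)
# prints: 11 2.75 3
```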

BTW, the above is based on logic, not known fact.  If I owned a company
that made Hard Disks with multiple heads, I'd fire anyone dumb enough to
design in a switching mechanism that read ONE sector into ONE buffer from
ONE head. ;-)

> Best way to fragment a drive, seems to be the making of a lot of small
> files (copy your e-mail) and the delete all with even numbers, then make
> one big file to fill up the spaces. For the last step I regularly use
> 
> fill.bat:
> 
> echo nn>fill
> :rep
> type fill>>fill
> goto rep
> 
> (to be interrupted with Ctrl-C, but only after a type has ended)
> How do you do this?

My .bat writes a different sentence into each of 5 sequential files, thereby
using one cluster per sentence.  The last of the 5 files has an extension of
.MTx.  I repeat the loop with different filenames and then copy the result
into several different directories.  After using up about 3500 clusters, I
erase all files with the .MT? extension, leaving 714 empty clusters spread
uniformly and fairly thinly. (every 5th cluster empty)

For anyone wacky enough to want to create a standardized fragmented drive,
the necessary can be found at:
http://www.hwcn.org/~ao773/myfiles/download/Clarences/fraggit.zip
Unfragging is just a matter of DELTREE /y .\!FRAGGED

> I'm just not sure whether a cache program would select blocks of free
> space in order of size to postpone the worst till last, assuming the
> drive might get defragged sometime.

Not likely. They don't seem to be particularly smart. <G>

-  Clarence Verge
--
-  Help stamp out FATWARE.  As a start visit: http://home.arachne.cz/
--
