On Wednesday 17 October 2001 02:59, Net Llama wrote:
> This brings up three points:
> 1) Is this only applicable to IDE drives?

No.

> 2) Does this suggest that having multiple smaller drives is better than
> having 1 large drive?

Maybe. The answer isn't as obvious as it looks.

> 3) How does this play with RAID, where you have multipled physical disks
> acting as one large logical drive?  And is there a difference between
> SCSI RAID vs. IDE RAID?
>

SCSI is hot-pluggable. IDE is not. (Yes, technically you can hot-plug IDE. Try it sometime.)

SCSI hard drives and IDE hard drives are maintained by separate 
author-groups, ergo, one will be 'better' than the other. In theory at 
least, the IDE module will perform better simply because it does not carry the 
overhead implicit in yet another author group maintaining the separate SCSI 
generic code. SCSI requests pass through several code layers; IDE requests do not.

Theory and reality are not always the same thing. SCSI has been the preferred 
choice for Linux since year one, hence more development effort has gone into 
it. Tanenbaum, in his seminal Minix OS, the true father of Linux, pointed to 
code layers, which he called shells, as the big Unix bugbear. Since the 1980s 
much development effort has gone into all the *nix flavours to defeat this 
effect, largely by passing a pointer to the packet down through those layers 
so that each intermediary 'layer' does very little processing. (The TCP/IP 
stack is another example.)
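
A toy C sketch of that pointer-passing idea (the struct and layer names are 
invented for illustration; this is not actual kernel code):

/* Each "layer" receives a pointer to the same packet and touches
 * only a little metadata, instead of copying the payload at every
 * boundary. All names here are made up for illustration. */
#include <stdio.h>

struct packet {
    const char *payload;   /* data lives in one place...          */
    size_t      len;
    int         flags;     /* ...layers only poke small metadata  */
};

static void layer_transport(struct packet *p) { p->flags |= 0x1; }
static void layer_network(struct packet *p)   { p->flags |= 0x2; }
static void layer_driver(struct packet *p)
{
    printf("driver sees %zu bytes, flags=0x%x\n", p->len, p->flags);
}

int main(void)
{
    struct packet p = { "some payload", 12, 0 };
    /* The same pointer is handed down every layer: no copies. */
    layer_transport(&p);
    layer_network(&p);
    layer_driver(&p);
    return 0;
}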

The balance is likely to keep shifting in IDE's favour because of the ever-
increasing UDMA-xxx modes, which force the developers to keep updating their 
code. (SCSI is stale by comparison.)

For some time there was a problem, with no way around it, where IDE DMA 
caused those drives (actually the channel) to flake out under continuous load: 
not heavy load, just continuous load. So far SCSI DMA has been better at its 
job and is more refined. But it too has had problems, with 16-bit ISA channels 
causing SCSI development to waver much like IDE DMA's. In summary, though, the 
IDE code is 'tighter'. (For anyone interested, imagine getting this sort of 
information out of 'Windoze'.)

As for one large drive versus multiple small drives. First glance says lots of 
small ones = fast. This holds true for the search algorithms used, e.g. the 
ones that predict which sectors _might_ be required next. As someone noted, 
these algorithms operate on the partition, not the drive. Bad (tm). Their 
effect is questionable and not repeatable across hardware differences.
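
You can see that the kernel tracks readahead per block device node, partition 
or whole disk, via the BLKRAGET ioctl. A quick sketch (the device paths are 
examples, adjust to taste, and run it as root):

/* Print the readahead window, in 512-byte sectors, for a whole
 * disk and for one of its partitions. */
#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static void show_ra(const char *dev)
{
    long ra = 0;
    int fd = open(dev, O_RDONLY);
    if (fd < 0) { perror(dev); return; }
    if (ioctl(fd, BLKRAGET, &ra) == 0)
        printf("%s: readahead = %ld sectors\n", dev, ra);
    else
        perror("BLKRAGET");
    close(fd);
}

int main(void)
{
    show_ra("/dev/hda");    /* whole disk    (example path) */
    show_ra("/dev/hda1");   /* one partition (example path) */
    return 0;
}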

However, the nature of very large drives (e.g. 20 GB and up) is that there is 
no such thing as a physical 'cylinder', 'head', or 'track'. The drive is one 
large amorphous mass which _might_ have one head or a thousand, and a single 
platter. The drive parameters as reported are simply there to conform to what 
a BIOS (at least) can understand, and typically read as 255 'tracks' x 255 
'heads' x think-of-a-number. That alone should alert anyone that the drive is 
telling a few white lies.
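
Back-of-envelope arithmetic shows why those numbers smell synthetic. The 
classic translated geometry of 1024 cylinders x 255 heads x 63 sectors of 512 
bytes lands exactly on the old ~8.4 GB BIOS limit, regardless of what the 
hardware actually looks like:

#include <stdio.h>

int main(void)
{
    long long cyls = 1024, heads = 255, sectors = 63, bytes = 512;
    long long total = cyls * heads * sectors * bytes;
    printf("%lld bytes (~%.1f GB)\n", total, total / 1e9);
    /* prints: 8422686720 bytes (~8.4 GB), the old BIOS ceiling,
     * no matter how many platters the drive really has */
    return 0;
}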

The result is that the 'seek time' to move from cylinder 1 to cylinder 1 
million may be instantaneous, simply because the drive isn't seeking anywhere, 
let alone moving a very large chunk of metal at great velocity across a 
rusty iron 'platter'. Then again, it might be.
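
Easy enough to test, if you're curious. A rough sketch that times one read at 
the start of the disk and one a long way in (the device path and far offset 
are placeholders; run as root, and note a second run will mostly hit the 
caches):

#define _FILE_OFFSET_BITS 64   /* 64-bit off_t on 32-bit boxes */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

static double timed_read_ms(int fd, off_t offset)
{
    char buf[512];
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    if (pread(fd, buf, sizeof buf, offset) < 0) {
        perror("pread");
        exit(1);
    }
    gettimeofday(&t1, NULL);
    return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_usec - t0.tv_usec) / 1e3;
}

int main(void)
{
    int fd = open("/dev/hda", O_RDONLY);  /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }
    printf("start of disk: %.2f ms\n", timed_read_ms(fd, 0));
    /* ~8 GB in; adjust for your disk's size */
    printf("far offset:    %.2f ms\n", timed_read_ms(fd, (off_t)1 << 33));
    close(fd);
    return 0;
}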

For instance, it is (or was) common practice for drive manufacturers to 
place cylinder 1++ at the CENTER of the drive, simply because they calculated 
that DOS systems would spend most of their time travelling back and forth to 
the FAT directories.

So, bottom line? 

1) Almost all algorithms to improve search speed put the F into 'utility'. 
The phrase YMMV is most apt.

2) The nature of large drives == instantaneous access. This is of course not 
true in specifics, but true enough in general to say that the larger the 
drive you can put in your box, the faster the access is going to be.


-- 
http://linux.nf -- [EMAIL PROTECTED]
