> >IIRC the 2940's device driver should print out
> > what speeds it gets to each device (recent versions say MB/sec
> > where previous would say 20 MHz, 16-bit instead of just 40MB/sec).
> > What's the driver say?
>
> What Linux command should I use to force it to speak?
dmesg should get it (I think, not sure), but wherever you send
kern.* in syslog should have it too. I send it to /var/log/kernel,
so here's a chunk of the Adaptec section from one of the machines.
Notice the "Synchronous at 20.0 Mbyte/sec" lines.
scsi0 : Adaptec AHA274x/284x/294x (EISA/VLB/PCI-Fast SCSI) 5.1.10/3.2.4
<Adaptec AIC-7895 Ultra SCSI host adapter>
scsi1 : Adaptec AHA274x/284x/294x (EISA/VLB/PCI-Fast SCSI) 5.1.10/3.2.4
<Adaptec AIC-7895 Ultra SCSI host adapter>
scsi : 2 hosts.
Vendor: IBM Model: DFHSS2W Rev: 4141
Type: Direct-Access ANSI SCSI revision: 02
Detected scsi disk sda at scsi0, channel 0, id 1, lun 0
(scsi0:0:1:0) Synchronous at 20.0 Mbyte/sec, offset 8.
Vendor: IBM Model: DFHSS2W Rev: 4141
Type: Direct-Access ANSI SCSI revision: 02
Detected scsi disk sdb at scsi0, channel 0, id 2, lun 0
(scsi0:0:2:0) Synchronous at 20.0 Mbyte/sec, offset 8.
scsi : detected 2 SCSI disks total.
SCSI device sda: hdwr sector= 512 bytes. Sectors= 4404489 [2150 MB] [2.2 GB]
SCSI device sdb: hdwr sector= 512 bytes. Sectors= 4404489 [2150 MB] [2.2 GB]
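If you just want the negotiation lines without reading the whole log,
a grep along these lines should pull them out (the /var/log/kernel path
is just my setup; adjust to wherever your syslog.conf sends kern.*):

  # from the kernel ring buffer
  dmesg | grep -i synchronous

  # or from the syslog file (path assumes my setup above)
  grep -i synchronous /var/log/kernel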
> Will RAID 0+1 do READs much faster than RAID 5 using a Mylex 150
> hardware RAID adapter?
First, make sure you fully understand my point about higher rpm drives
being better performers. Ideally, the different raid set-ups should get
access to the SAME model drives. If that's not possible, at least use
drives with the same rpm, or the results don't mean much (imho).
That being said, raid 0+1 vs. raid5 read performance should be very
similar (I would think; real testing would be required). Neither is
going to get all the way to raid0 read capabilities, for these two
reasons:
- raid 0+1: While raid1 becomes essentially raid0 in the read case due
            to the ability to stripe reads across the drives, this means
            a) you are doing multi-level raid, which will add to latency,
            code path, etc., and b) you may be doing *some* writes for
            things like atime updates (mount option "noatime" could
            help; see the sketch after this list). Neither should
            matter too much compared to drive latencies, though, so
            pure reads should at least approach raid0.
- raid 5:   Single-level raid, but we lose a drive of read parallelism,
            since we will be skipping the parity drive on each pass.
            For a large enough number of drives the (n/(n-1)) multiplier
            should be pretty close to 1, though, so again there should
            theoretically not be much of a gap up to raid0 for the read
            case (worked numbers below).
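For the noatime bit, it's just a mount option; something like the
following should do it (the device and mount point are made up for
the example, substitute your own):

  # one-time, on a live system
  mount -o remount,noatime /data

  # or permanently, as the options field in /etc/fstab
  /dev/sda1  /data  ext2  defaults,noatime  1 2

And to put rough numbers on that raid5 multiplier: with n=5 drives you
read useful data off only 4 per pass, so raid0 is n/(n-1) = 5/4 = 1.25x
faster for reads; at n=10 it's 10/9 = ~1.11x, so the gap shrinks as you
add spindles.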
All this is predicated on firmware capabilities, and trust me when I say
*lots* of hardware raid controllers have performance bottlenecks in their
firmware (I know, since it's one of the things I do to help out IBM :),
so YMMV. Try to keep your controller's firmware up to date to help
alleviate this.
Hope this helps, and good luck with your DB performance improvement efforts,
James Manning
--
Miscellaneous Engineer --- IBM Netfinity Performance Development