In case someone else here wants to build a larger IDE software raid5 in the near
future, here is what works for me very well right now:
- single processor P3
- 4 or more IBM IDE drives (I use 4)
- linux 2.2.13pre15 (but probably better: 2.2.13final)
- the raid 2.2.11 patch (just press enter a few times...)
- if you use >32GB drives, a patch for that or the UnifiedIDE patch
- if you use >32GB drives, the hd[e,g,i,k etc.]=4560,255,63 boot parameter (or a future
UnifiedIDE) - lilo.conf example after this list
- another small patch to get the promise66 going, or the UnifiedIDE patch
- "hdparm -d1 -X66 <dev>" in some startup script is needed without UnifiedIDE patch.
Later "hdparm -d1 -X66 -k1 -K1 -W1<dev>" can be used.
- use UDMA66 cables (80 wires), expensive but should improve signal quality
- but it is probably better to operate the array in UDMA33 mode anyway
- you have fewer physical problems with cable length if you use one drive per
controller (masters only, no slaves). This may also have advantages if a drive fails.
In my experience, however, reliability & data integrity don't suffer if you use
master+slave (as long as no drive fails).
- you probably need to use mknod to create /dev/hdi etc. (example after this list)
- stresstest the machine for >=24h (this is also a very good idea for SCSI arrays)
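The >32GB geometry override can go straight into lilo.conf (assuming you boot with
LILO; adjust the list to the drives that actually need it), e.g.:

  append="hde=4560,255,63 hdg=4560,255,63 hdi=4560,255,63 hdk=4560,255,63"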
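The hdparm calls are easiest as a small loop in whatever startup script you use
(sketch, device names assuming masters only on the two Promise cards):

  for d in /dev/hde /dev/hdg /dev/hdi /dev/hdk ; do
      hdparm -d1 -X66 $d
  done

Once the setup has proven itself, add -k1 -K1 -W1 as mentioned above.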
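As for the missing device nodes: the block majors are 56 for hdi/hdj and 57 for
hdk/hdl (see Documentation/devices.txt if in doubt), minor 0 for the whole disk and
1..63 for the partitions, e.g.:

  mknod /dev/hdi  b 56 0
  mknod /dev/hdi1 b 56 1
  mknod /dev/hdk  b 57 0
  mknod /dev/hdk1 b 57 1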
For me, this configuration (currently without UnifiedIDE, 4 IBM 37GB drives on 2
promise controllers, one non-raid drive on onboard controller) survived an intensive
40h stresstest without problems (high bandwidth random data with read-back
validation). Also 30h in normal operation now. I don't expect it to give me future
problems.
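For reference, the /etc/raidtab for an array like this looks roughly as follows (a
sketch for the 0.90 raidtools; partition names, chunk size etc. are just an example,
adapt to your setup):

  raiddev /dev/md0
      raid-level              5
      nr-raid-disks           4
      nr-spare-disks          0
      persistent-superblock   1
      parity-algorithm        left-symmetric
      chunk-size              32
      device                  /dev/hde1
      raid-disk               0
      device                  /dev/hdg1
      raid-disk               1
      device                  /dev/hdi1
      raid-disk               2
      device                  /dev/hdk1
      raid-disk               3

"mkraid /dev/md0" then creates the array.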
You don�t want to mix IDE+SMP currently, since kernels up to 2.2.13pre14 have an SMP
issue in the IDE driver, and pre15 is unproven right now.
You need very solid hardware. I had to replace mainboard+cpu in a different raid
server, because the old ones couldn't cope with the stress (they preferred to deliver
bit errors).
You usually cannot use all drive bays; you need enough space between the drives (or
they will run very, very hot...).
Performance:
- overkill read bandwidth, very good write bandwidth. But bandwidth is more or less
irrelevant these days.
- very good (average) seek performance, roughly a factor of (number_of_drives + X)
over a single disk. The extra X comes from better locality: each drive holds only part
of the data, so its head travels shorter distances. This makes your database happy.
- the performance of IDE master/slave configurations is somewhat lower, but not much.
I don't see performance reasons to avoid master/slave. It's more a matter of cable
length and potential problems due to failing drives.
kernel patches:
Special ones are needed if UnifiedIDE is not used. Available on request.
stresstester:
If there is interest, I can clean it up a little and release it.
In order to enable you to happily crash your IDE+SMP machines too...
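Until then, the basic idea can be improvised with standard tools (a crude sketch,
assuming the array is mounted on /raid; /dev/urandom is slow, so this is nowhere near
the bandwidth of the real thing): write random data, sync, read it back and compare
checksums, forever:

  cd /raid
  rm -f chunk.* sums
  while true ; do
      for i in 0 1 2 3 4 5 6 7 ; do
          dd if=/dev/urandom of=chunk.$i bs=1024k count=64 2>/dev/null
          md5sum chunk.$i >> sums
      done
      sync
      md5sum -c sums || echo "*** data mismatch at `date`"
      rm -f chunk.* sums
  done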
--
the online community service for gamers & friends - http://www.rivalnet.com
* supports over 50 PC games in multiplayer mode
* send & receive files up to 500 MB at a time
* newsgroups, mail, chat & more