I'm trying to build a raid5 using kernel 2.2.7 and the Debian raidtools
(version 0.42), and I'm not having the best of luck. I'm able to build
the raid itself without any problems, but as soon as I try to mke2fs the
md device (or even dd if=/dev/zero to it) the whole system locks up.
I'm trying to use 8 IDE disks (on the 2 onboard controllers plus a PCI
IDE card).
Here's my bootup stuff:
PIIX4: IDE controller on PCI bus 00 dev 21
PIIX4: not 100% native mode: will probe irqs later
ide0: BM-DMA at 0xd800-0xd807, BIOS settings: hda:DMA, hdb:DMA
ide1: BM-DMA at 0xd808-0xd80f, BIOS settings: hdc:DMA, hdd:DMA
AEC6210: IDE controller on PCI bus 00 dev 58
AEC6210: not 100% native mode: will probe irqs later
ide2: BM-DMA at 0xa000-0xa007, BIOS settings: hde:pio, hdf:pio
ide3: BM-DMA at 0xa008-0xa00f, BIOS settings: hdg:pio, hdh:pio
hda: WDC AC418000D, ATA DISK drive
hdb: WDC AC418000D, ATA DISK drive
hdc: WDC AC418000D, ATA DISK drive
hdd: WDC AC418000D, ATA DISK drive
hde: WDC AC418000D, ATA DISK drive
hdf: WDC AC418000D, ATA DISK drive
hdg: WDC AC418000D, ATA DISK drive
hdh: WDC AC418000D, ATA DISK drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
ide2 at 0xb400-0xb407,0xb002 on irq 10
ide3 at 0xa800-0xa807,0xa402 on irq 10 (shared with ide2)
hda: WDC AC418000D, 17206MB w/1966kB Cache, CHS=2193/255/63, UDMA
hdb: WDC AC418000D, 17206MB w/1966kB Cache, CHS=2193/255/63, UDMA
hdc: WDC AC418000D, 17206MB w/1966kB Cache, CHS=34960/16/63, UDMA
hdd: WDC AC418000D, 17206MB w/1966kB Cache, CHS=34960/16/63, UDMA
hde: WDC AC418000D, 17206MB w/1966kB Cache, CHS=34960/16/63, (U)DMA
hdf: WDC AC418000D, 17206MB w/1966kB Cache, CHS=34960/16/63, (U)DMA
hdg: WDC AC418000D, 17206MB w/1966kB Cache, CHS=34960/16/63, (U)DMA
hdh: WDC AC418000D, 17206MB w/1966kB Cache, CHS=34960/16/63, (U)DMA
which is admittedly a little strange, but I haven't played with the
controller settings yet, and the PIO mode shouldn't matter. In any
case, I tried using just 3 disks (all connected to the onboard
controllers) and I get the same problem. I also created a raid0 with
all 8 disks and that works just fine, so it's just the raid4/5 that
refuses to behave. It crashes when I do a mke2fs AND when I do a dd
if=/dev/zero of=/dev/md0, somewhere in the first 4 to 5MB.
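Concretely, either of these is enough to hang the box (the mke2fs
options and the dd block size/count here are just illustrative, nothing
special about them):

mke2fs /dev/md0
dd if=/dev/zero of=/dev/md0 bs=4k count=2048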
Here's what mkraid does with my raid5.conf (a sketch of the config
itself follows the dmesg output below):
glom:/etc/raid# mkraid -f raid5.conf
mkraid version 0.36.4
parsing configuration file
handling MD device /dev/md2
analyzing super-block
disk 0: /dev/hdc1, 716656kB, raid superblock at 716544kB
disk 1: /dev/hdd1, 716656kB, raid superblock at 716544kB
disk 2: /dev/hde1, 716656kB, raid superblock at 716544kB
disk 3: /dev/hdf1, 716656kB, raid superblock at 716544kB
disk 4: /dev/hdg1, 716656kB, raid superblock at 716544kB
initializing raid set
clearing device /dev/hdc1
clearing device /dev/hdd1
clearing device /dev/hde1
clearing device /dev/hdf1
clearing device /dev/hdg1
writing raid superblock
MD ID: a92b4efc
Conforms to MD version: 0.36.4
Raid set ID: 9eb939a0
Creation time: Thu May 6 22:19:35 1999
Update time: Thu May 6 22:26:42 1999
State: 1 (clean)
Raid level: 5
Individual disk size: 699MB (716544kB)
Chunk size: 32kB
Parity algorithm: 1 (right-asymmetric)
Total number of disks: 5
Number of raid disks: 5
Number of active disks: 5
Number of working disks: 5
Number of failed disks: 0
Number of spare disks: 0
Disk 0: raid_disk 0, state: 6 (operational, active, sync)
Disk 1: raid_disk 1, state: 6 (operational, active, sync)
Disk 2: raid_disk 2, state: 6 (operational, active, sync)
Disk 3: raid_disk 3, state: 6 (operational, active, sync)
Disk 4: raid_disk 4, state: 6 (operational, active, sync)
mkraid: completed
glom:/etc/raid# mdadd /dev/md2 /dev/hdc1 /dev/hdd1 /dev/hde1 /dev/hdf1 /dev/hdg1
glom:/etc/raid# mdrun -p5 /dev/md2
glom:/etc/raid# dmesg
[...]
REGISTER_DEV hdc1 to md2 done
REGISTER_DEV hdd1 to md2 done
REGISTER_DEV hde1 to md2 done
REGISTER_DEV hdf1 to md2 done
REGISTER_DEV hdg1 to md2 done
raid5: device 16:01 operational as raid disk 0
raid5: device 16:41 operational as raid disk 1
raid5: device 21:01 operational as raid disk 2
raid5: device 21:41 operational as raid disk 3
raid5: device 22:01 operational as raid disk 4
raid5: allocated 5299kB for 09:02
raid5: raid level 5 set 09:02 active with 5 out of 5 devices, algorithm 1
md: updating raid superblock on device 16:01, sb_offset == 716544
md: updating raid superblock on device 16:41, sb_offset == 716544
md: updating raid superblock on device 21:01, sb_offset == 716544
md: updating raid superblock on device 21:41, sb_offset == 716544
md: updating raid superblock on device 22:01, sb_offset == 716544
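For reference, a raid5.conf matching the parameters mkraid reports
above would look roughly like this (just a sketch in the raidtools 0.42
keyword style, not a verbatim paste of my file):

# raid5.conf sketch -- matches the mkraid parameters above, not a verbatim copy
raiddev           /dev/md2
raid-level        5
nr-raid-disks     5
chunk-size        32
parity-algorithm  right-asymmetric
device            /dev/hdc1
raid-disk         0
device            /dev/hdd1
raid-disk         1
device            /dev/hde1
raid-disk         2
device            /dev/hdf1
raid-disk         3
device            /dev/hdg1
raid-disk         4

The parity-algorithm line is spelled to match the "right-asymmetric"
mkraid reports; the rest (5 raid disks, 32kB chunks, hdc1 through hdg1)
comes straight from the output above.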
I'm pretty sure it's not the PCI IDE controller, since it still craps
out using just the onboard controllers, and the raid0 stuff works fine.
Anyone have any ideas where I should go from here?
thanks--
sage