Sorry, RAID-ers, if this is an RTFM question (I did read the docs) ... I just
subscribed to this list.
I am using raidtools-19990421-0.90 and running kernel 2.2.9 with (I believe)
proper support for md and its personalities:
root@tecra-brew:~ $ [bash] cat /proc/mdstat
Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive
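Since RAID0 needs the raid0 personality present in the running kernel, the Personalities line above can be sanity-checked mechanically. A minimal sketch (assuming the /proc/mdstat format shown above; the helper name is mine):

```python
# Minimal sketch: check that a given personality appears in the
# /proc/mdstat "Personalities" line (format as shown above).
def has_personality(mdstat_text, name):
    for line in mdstat_text.splitlines():
        if line.startswith("Personalities"):
            # entries look like "[2 raid0]"; strip brackets and split
            entries = line.split(":", 1)[1]
            return name in entries.replace("[", " ").replace("]", " ").split()
    return False

mdstat = """Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
read_ahead not set
md0 : inactive
"""
print(has_personality(mdstat, "raid0"))   # True
```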
I have an old Sun SCSI 4-pack (4 x 1 GB) which looks to me like
a great candidate for trying out RAID0:
root@tecra-brew:~ $ [bash] cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: SEAGATE Model: ST11200N SUN1.05 Rev: 9500
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 01 Lun: 00
Vendor: SEAGATE Model: ST11200N SUN1.05 Rev: 9500
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 02 Lun: 00
Vendor: SEAGATE Model: ST11200N SUN1.05 Rev: 9500
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 05 Lun: 00
Vendor: SEAGATE Model: ST11200N SUN1.05 Rev: 9500
Type: Direct-Access ANSI SCSI revision: 02
The disks work well (over PCMCIA as sd{a,b,c,d}). Here is the raidtab:
root@tecra-brew:~ $ [bash] cat /etc/raidtab
raiddev /dev/md0
        raid-level              0
        persistent-superblock   0
        chunk-size              16
        nr-raid-disks           4
        nr-spare-disks          0
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        raid-disk               2
        device                  /dev/sdd1
        raid-disk               3
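One thing worth ruling out before blaming mkraid is a simple raidtab inconsistency, e.g. nr-raid-disks not matching the number of device lines. A throwaway check of that could look like this (a sketch over the raidtab text above; the function name is mine):

```python
# Minimal sketch: verify that the number of "device" entries in a
# raidtab matches its nr-raid-disks setting.
def check_raidtab(text):
    nr = None
    devices = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 2:
            continue
        if fields[0] == "nr-raid-disks":
            nr = int(fields[1])
        elif fields[0] == "device":
            devices.append(fields[1])
    return nr, devices

raidtab = """raiddev /dev/md0
raid-level 0
persistent-superblock 0
chunk-size 16
nr-raid-disks 4
nr-spare-disks 0
device /dev/sda1
raid-disk 0
device /dev/sdb1
raid-disk 1
device /dev/sdc1
raid-disk 2
device /dev/sdd1
raid-disk 3
"""
nr, devices = check_raidtab(raidtab)
print(nr == len(devices))   # True
```

Here the counts agree, so the raidtab itself looks self-consistent.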
But mkraid complains ...
root@tecra-brew:~ $ [bash] mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
I ran it under gdb and strace, and both show that the SET_ARRAY_INFO ioctl
on the /dev/md0 file descriptor fails with EINVAL. strace prints:
open("/dev/md0", O_RDONLY) = 4
ioctl(4, 0x40480923, 0x804fa90) = -1 EINVAL (Invalid argument)
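For what it's worth, the request number 0x40480923 can be decoded with the conventional Linux _IOC bit layout (a sketch, assuming the usual encoding: bits 0-7 command nr, 8-15 type/magic, 16-29 argument size, 30-31 direction):

```python
# Decode a Linux ioctl request number using the usual _IOC bit layout
# (bits 0-7: nr, 8-15: type/magic, 16-29: size, 30-31: direction).
def decode_ioctl(req):
    return {
        "nr":   req & 0xff,
        "type": (req >> 8) & 0xff,
        "size": (req >> 16) & 0x3fff,
        "dir":  (req >> 30) & 0x3,   # 1 = _IOC_WRITE
    }

info = decode_ioctl(0x40480923)
print("type=%#x nr=%#x size=%#x dir=%d"
      % (info["type"], info["nr"], info["size"], info["dir"]))
# type=0x9 nr=0x23 size=0x48 dir=1
```

Type 0x09 is the md major number, and the direction/size bits describe a 72-byte write argument, so the request itself looks like a plausible md ioctl; the EINVAL presumably comes from the kernel-side md driver rejecting it rather than from a malformed request.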
Is this a known issue?
Thanks for your valuable time spent reading this.
--
Greetings,
-- Cisco Systems CATS Team TAC Brussels --
Marc Duponcheel [EMAIL PROTECTED] tel: +32 2 704 52 40