Re: /dev/md* Device Files

2005-01-26 Thread Andrew Walrond
On Wednesday 26 January 2005 07:28, Gordon Henderson wrote:
 On Tue, 25 Jan 2005, Steve Witt wrote:
  I'm installing a software raid system on a new server that I've just
  installed Debian 3.1 (sarge) on. It will be a raid5 on 5 IDE disks using
  mdadm. I'm trying to create the array with 'mdadm --create /dev/md0 ...'
  and am getting an error: 'mdadm: error opening /dev/md0: No such file or
  directory'. There are no /dev/md* devices in /dev at the present time. I
  do have the md and raid5 kernel modules loaded. My question is: how do
  the /dev/md* files get created? Are they normal device files that are
  created with MAKEDEV?

 It's odd that they aren't there - they are with Debian 3.0, and have
 remained there when I've upgraded a few test servers to testing/Sarge.

  # cd /dev
  # ./MAKEDEV md

 should do the business.


A useful trick I discovered yesterday: Add --auto to your mdadm commandline 
and it will create the device for you if it is missing :)
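
For example (just a sketch; pick the level, device count and device names to
match your own setup, and check which --auto argument forms your mdadm
version accepts):

   mdadm --create /dev/md0 --auto=yes --level=5 --raid-devices=5 \
       /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 /dev/hdi1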

Andrew Walrond


RE: Fun with onboard raid in dell poweredge 2650

2005-01-26 Thread Mauricio
At 09:07 +0100 1/25/05, Bene Martin wrote:
 	We have here a dell poweredge 2650 with the dell perc3/di
 onboard raid running suse enterprise linux 9. The install process
 went smoothly but we would like to have a way to check on the raid
 without having to physically see if a hard drive has crashed and
 thrown up the error lights. Initially, we have installed and tried
 multiple versions of afacli including 2.7, 2.8, and 4.1.  None of
 them see the array.  What else could we try on that?
Hi Mauricio,
Fairly similar setup here - except we're still running RH 8.0 on the
box, with a new kernel (2.4.27) though.
afaapps-2.7-1.i386.rpm works OK on the system.
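
(On an RPM-based system it goes on in the usual way, e.g.

   rpm -ivh afaapps-2.7-1.i386.rpm

with the filename adjusted to whatever version you end up using.)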
	Thanks for the suggestion!  I will try it out today and keep 
you all updated.


Re: Software RAID 0+1 with mdadm.

2005-01-26 Thread Luca Berra
On Tue, Jan 25, 2005 at 02:28:21PM -0800, Brad Dameron wrote:
Everything seems ok after boot. But again no /dev/md0 in /proc/mdstat.
But then if I do a mdadm --assemble --scan it will then load /dev/md0. 
There is a bug in mdadm; see my mail with patches for mdadm 1.8.0, or wait
for 1.9.0.
L.
--
Luca Berra -- [EMAIL PROTECTED]
   Communication Media & Services S.r.l.
/\
\ /  ASCII RIBBON CAMPAIGN
 X   AGAINST HTML MAIL
/ \


Re: /dev/md* Device Files

2005-01-26 Thread Steve Witt
On Wed, 26 Jan 2005, Andrew Walrond wrote:
On Wednesday 26 January 2005 07:28, Gordon Henderson wrote:
On Tue, 25 Jan 2005, Steve Witt wrote:
I'm installing a software raid system on a new server that I've just
installed Debian 3.1 (sarge) on. It will be a raid5 on 5 IDE disks using
mdadm. I'm trying to create the array with 'mdadm --create /dev/md0 ...'
and am getting an error: 'mdadm: error opening /dev/md0: No such file or
directory'. There are no /dev/md* devices in /dev at the present time. I
do have the md and raid5 kernel modules loaded. My question is: how do
the /dev/md* files get created? Are they normal device files that are
created with MAKEDEV?
It's odd that they aren't there - they are with Debian 3.0, and have
remained there when I've upgraded a few test servers to testing/Sarge.
 # cd /dev
 # ./MAKEDEV md
should do the business.
A useful trick I discovered yesterday: Add --auto to your mdadm commandline
and it will create the device for you if it is missing :)
Well, it seems that this machine is using the udev scheme for managing 
device files. I didn't realize this as udev is new to me, but I probably 
should have mentioned the kernel version (2.6.8) I was using. So I need to 
research udev and how one causes devices to be created, etc.

Thanks for the help!!


booting from a HW RAID volume

2005-01-26 Thread Carlos Knowlton
Hello,
I'm using a 3Ware 9500 12-port hardware RAID controller with twelve 250GB
S-ATA drives (total storage = 2.75TB).  To my 64bit FC3 box, this looks
like a single huge SCSI disk (/dev/sda).  I used parted to create GPT
partitions on it (because nothing else would work on a volume that
big).  This seems to work fine, except that grub doesn't seem to
recognize GPT partitions.
So here's my question: does anyone know a way to boot from huge volumes
(where huge means over 2TB)?  Even if it doesn't involve grub or GPT, I'm
open to suggestions.  Any clues?

Thanks!
-Carlos Knowlton


Re: /dev/md* Device Files

2005-01-26 Thread Neil Brown
On Wednesday January 26, [EMAIL PROTECTED] wrote:
  A useful trick I discovered yesterday: Add --auto to your mdadm commandline
  and it will create the device for you if it is missing :)
 
 
 Well, it seems that this machine is using the udev scheme for managing 
 device files. I didn't realize this as udev is new to me, but I probably 
 should have mentioned the kernel version (2.6.8) I was using. So I need to 
 research udev and how one causes devices to be created, etc.

Beware: udev's understanding of how device files are meant to
work is quite different from how md actually works.

udev thinks that devices should appear in /dev after the device is
actually known to exist in the kernel.  md needs a device to exist in
/dev before the kernel can be told that it exists.

This is one of the reasons that --auto was added to mdadm - to bypass
udev.
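
If you ever need to create a node by hand instead, md devices are block
major 9 with the minor matching the array number, so something like

   mknod /dev/md0 b 9 0

(which is essentially what --auto does for you) will also work.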

NeilBrown


RE: booting from a HW RAID volume

2005-01-26 Thread Guy
Many hardware-based RAID systems allow you to create more than one virtual
disk.  This is done with LUNs.  If your hardware supports it, you could
split your monster disk (2.75TB) into 2 or more virtual disks.  The first
would be very small, just for boot, or maybe the OS.

Or you could split the 2.75TB into virtual disks that are all smaller than
2TB.
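
For example, if the controller ended up exposing a small first virtual disk
as /dev/sda and the big one as /dev/sdb (names and sizes here are purely
illustrative), you could keep a plain msdos label on the small boot disk for
grub and leave GPT on the large one:

   parted /dev/sda mklabel msdos
   parted /dev/sda mkpart primary ext3 0 200
   parted /dev/sdb mklabel gpt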

Just some ideas.

Guy

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Carlos Knowlton
Sent: Wednesday, January 26, 2005 12:46 PM
To: linux-raid@vger.kernel.org
Subject: booting from a HW RAID volume

Hello,


I'm using a 3Ware 9500 12-port hardware RAID controller with twelve 250GB
S-ATA drives (total storage = 2.75TB).  To my 64bit FC3 box, this looks
like a single huge SCSI disk (/dev/sda).  I used parted to create GPT
partitions on it (because nothing else would work on a volume that
big).  This seems to work fine, except that grub doesn't seem to
recognize GPT partitions.
So here's my question: does anyone know a way to boot from huge volumes
(where huge means over 2TB)?  Even if it doesn't involve grub or GPT, I'm
open to suggestions.  Any clues?


Thanks!
-Carlos Knowlton


RE: Software RAID 0+1 with mdadm.

2005-01-26 Thread J. Ryan Earl
Is this bug that's fixed in 1.9.0 a bug when you create the array?  I.e., do
we need to use 1.9.0 to create the array?  I'm looking to do the same, but my
bootdisk currently only has 1.7.something on it.  Do I need to make a custom
bootcd with 1.9.0 on it?

Thanks,
-ryan

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Luca Berra
Sent: Wednesday, January 26, 2005 9:17 AM
To: linux-raid@vger.kernel.org
Subject: Re: Software RAID 0+1 with mdadm.


On Tue, Jan 25, 2005 at 02:28:21PM -0800, Brad Dameron wrote:
Everything seems ok after boot. But again no /dev/md0 in /proc/mdstat.
But then if I do a mdadm --assemble --scan it will then load /dev/md0.
There is a bug in mdadm; see my mail with patches for mdadm 1.8.0, or wait
for 1.9.0.

L.



RE: Software RAID 0+1 with mdadm.

2005-01-26 Thread Brad Dameron
On Tue, 2005-01-25 at 15:04, Guy wrote:
 For a more stable array, build a RAID0 out of 2 RAID1 arrays.
 
 Like this:
 
 mdadm --create /dev/md1 --level=1 --chunk=4 --raid-devices=2 /dev/sdb1
 /dev/sdc1
 mdadm --create /dev/md2 --level=1 --chunk=4 --raid-devices=2 /dev/sdd1
 /dev/sde1
 mdadm --create /dev/md0 --level=0 --chunk=4 --raid-devices=2 /dev/md1
 /dev/md2
 
 You can put a file system directly on /dev/md0
 
 Are all of the disks on the same cable?
 
 Not sure about your booting issue.
 
 Guy
 


Ya I did this setup as well. Still the same booting issue. Once it's
booted I can run mdadm --assemble --scan and it will find just the
stripe and then add it. I saw several people having this issue on a
google search. But never any solutions.

Brad Dameron
SeaTab Software
www.seatab.com





Re: megaraid mbox: critical hardware error on new dell poweredge 1850, suse 9.2, kernel 2.6.8

2005-01-26 Thread Reggie Dugard
Hi Olivier,

 I'm trying to get a quite standard suse linux 9.2 setup working
 on a brand new dell poweredge 1850 with 2 scsi disks in raid1 setup.
 
 Installation went completely fine, everything is working. But now (and
 every time), after 2-3h of uptime and some high disk I/O load (rsync of
 some GB of data), it badly crashes with the following messages:

We're seeing something similar here on an 1850 with 2 disks under
hardware raid1 running RHEL rel. 3 with a 2.4.21-27 kernel.  It has
happened twice so far for us (about once a week or so).  It may have
been a backup of the raid (high disk i/o) that caused it to fail the
most recent time.  Below I've included data from our system
corresponding to what you've included, for comparison purposes.

Unfortunately, we have no leads as to the cause, but I thought I'd let you
know that you're not alone :) and we can share anything we find out.


megaraid: aborting-5781469 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781520 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781529 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781527 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781470 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781498 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781524 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781525 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781507 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781526 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781514 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781509 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781530 cmd=2a c=0 t=0 l=0
megaraid: 5781530:81, driver owner.
megaraid: aborting-5781530 cmd=2a c=0 t=0 l=0
megaraid: 5781530:81, driver owner.
megaraid: aborting-5781537 cmd=2a c=0 t=0 l=0
megaraid: 5781537:94, driver owner.
megaraid: aborting-5781537 cmd=2a c=0 t=0 l=0
megaraid: 5781537:94, driver owner.
megaraid: aborting-5781506 cmd=28 c=0 t=0 l=0
megaraid: aborting-5781532 cmd=2a c=0 t=0 l=0
megaraid: 5781532:98, driver owner.
megaraid: aborting-5781532 cmd=2a c=0 t=0 l=0
megaraid: 5781532:98, driver owner.
megaraid: reset-5781504 cmd=28 c=0 t=0 l=0
megaraid: 49 pending cmds; max wait 180 seconds
megaraid: pending 49; remaining 180 seconds
megaraid: pending 49; remaining 175 seconds
megaraid: pending 49; remaining 170 seconds
megaraid: pending 49; remaining 165 seconds
megaraid: pending 49; remaining 160 seconds
megaraid: pending 49; remaining 155 seconds
megaraid: pending 49; remaining 150 seconds
megaraid: pending 49; remaining 145 seconds
megaraid: pending 49; remaining 140 seconds
megaraid: pending 49; remaining 135 seconds
megaraid: pending 49; remaining 130 seconds
megaraid: pending 49; remaining 125 seconds
megaraid: pending 49; remaining 120 seconds
megaraid: pending 49; remaining 115 seconds
megaraid: pending 49; remaining 110 seconds
megaraid: pending 49; remaining 105 seconds
megaraid: pending 49; remaining 100 seconds
megaraid: pending 49; remaining 95 seconds
megaraid: pending 49; remaining 90 seconds
megaraid: pending 49; remaining 85 seconds
megaraid: pending 49; remaining 80 seconds
megaraid: pending 49; remaining 75 seconds
megaraid: pending 49; remaining 70 seconds
megaraid: pending 49; remaining 65 seconds
megaraid: pending 49; remaining 60 seconds
megaraid: pending 49; remaining 55 seconds
megaraid: pending 49; remaining 50 seconds
megaraid: pending 49; remaining 45 seconds
megaraid: pending 49; remaining 40 seconds
megaraid: pending 49; remaining 35 seconds
megaraid: pending 49; remaining 30 seconds
megaraid: pending 49; remaining 25 seconds
megaraid: pending 49; remaining 20 seconds
megaraid: pending 49; remaining 15 seconds
megaraid: pending 49; remaining 10 seconds
megaraid: pending 49; remaining 5 seconds
megaraid: critical hardware error!
megaraid: reset-5781504 cmd=28 c=0 t=0 l=0
megaraid: hw error, cannot reset
megaraid: reset-5781473 cmd=28 c=0 t=0 l=0
megaraid: hw error, cannot reset
megaraid: reset-5781472 cmd=28 c=0 t=0 l=0
megaraid: hw error, cannot reset
megaraid: reset-5781512 cmd=28 c=0 t=0 l=0
megaraid: hw error, cannot reset
megaraid: reset-5781471 cmd=28 c=0 t=0 l=0
megaraid: hw error, cannot reset
megaraid: reset-5781535 cmd=2a c=0 t=0 l=0
megaraid: hw error, cannot reset
megaraid: reset-5781490 cmd=28 c=0 t=0 l=0
megaraid: hw error, cannot reset

Loaded modules:

sg 37388   0 (autoclean)
ext3   89992   2
jbd55092   2 [ext3]
megaraid2  38376   3
diskdumplib 5260   0 [megaraid2]
sd_mod 13936   6
scsi_mod  115240   3 [sg megaraid2 sd_mod]

$ uname -a
Linux kijang 2.4.21-27.0.1.ELsmp #1 SMP Mon Dec 20 18:47:45 EST 2004
i686 i686 i386 GNU/Linux

SCSI output from dmesg:

SCSI subsystem driver Revision: 1.00
megaraid: v2.10.8.2-RH1 (Release Date: Mon Jul 26 12:15:51 EDT 2004)
megaraid: found 0x1028:0x0013:bus 2:slot 14:func 0
scsi0:Found MegaRAID controller at 0xf8846000, IRQ:38
megaraid: [513O:H418] detected 1 logical drives.
megaraid: supports extended CDBs.
megaraid: channel[0] is raid.
scsi0 : LSI Logic MegaRAID 

hda: irq timeout: status=0xd0 { Busy }

2005-01-26 Thread Carlos Knowlton
Hi
I'm new to this list, but I have a lot of projects that involve RAID and 
Linux.  And consequently, a lot of questions.  (But maybe a few answers 
too :)

I have a 3 disk RAID5 array, and one of the members was recently
rejected, and I'm trying to get to the bottom of it.  I reformatted the 
failed member, and started an fsck on it.  The fsck came up clean, but 
dmesg pulled up the following error:

hda: irq timeout: status=0xd0 { Busy }
ide: failed opcode was: 0xb0
hda: status error: status=0x58 { DriveReady SeekComplete DataRequest }
ide: failed opcode was: unknown
hda: drive not ready for command
Is this a hardware issue, configuration, or driver?
I'm running kernel version 2.6.9-1.667 (FC3) on a 2.4GHz P4 Celeron.  Here 
are more details:
---
Dev1 ~]# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 hdc3[2] hdb3[1]
 156376576 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

unused devices: <none>
---
Dev1 ~]# dmesg
...
Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
...
   ide0: BM-DMA at 0xf000-0xf007, BIOS settings: hda:DMA, hdb:DMA
   ide1: BM-DMA at 0xf008-0xf00f, BIOS settings: hdc:DMA, hdd:pio
Probing IDE interface ide0...
hda: IC35L080AVVA07-0, ATA DISK drive
hdb: IC35L080AVVA07-0, ATA DISK drive
Using cfq io scheduler
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
Probing IDE interface ide1...
hdc: IC35L080AVVA07-0, ATA DISK drive
ide1 at 0x170-0x177,0x376 on irq 15
Probing IDE interface ide2...
ide2: Wait for ready failed before probe !
Probing IDE interface ide3...
ide3: Wait for ready failed before probe !
Probing IDE interface ide4...
ide4: Wait for ready failed before probe !
Probing IDE interface ide5...
ide5: Wait for ready failed before probe !
hda: max request size: 128KiB
hda: 160836480 sectors (82348 MB) w/1863KiB Cache, CHS=65535/16/63, 
UDMA(100)
hda: cache flushes supported
hda: hda1 hda2 hda3
hdb: max request size: 128KiB
hdb: 160836480 sectors (82348 MB) w/1863KiB Cache, CHS=65535/16/63, 
UDMA(100)
hdb: cache flushes supported
hdb: hdb1 hdb2 hdb3
hdc: max request size: 128KiB
hdc: 160836480 sectors (82348 MB) w/1863KiB Cache, CHS=65535/16/63, 
UDMA(100)
hdc: cache flushes supported
hdc: hdc1 hdc2 hdc3
ide-floppy driver 0.99.newide
...
md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
...
Freeing unused kernel memory: 184k freed
SCSI subsystem initialized
libata version 1.02 loaded.
device-mapper: 4.1.0-ioctl (2003-12-10) initialised: [EMAIL PROTECTED]
EXT3-fs: INFO: recovery required on readonly filesystem.
EXT3-fs: write access will be enabled during recovery.
kjournald starting.  Commit interval 5 seconds
EXT3-fs: recovery complete.
EXT3-fs: mounted filesystem with ordered data mode.
...
inserting floppy driver for 2.6.9-1.667smp
floppy0: no floppy controllers found
...
md: Autodetecting RAID arrays.
md: autorun ...
md: considering hdc3 ...
md:  adding hdc3 ...
md: hdc1 has different UUID to hdc3
md:  adding hdb3 ...
md:  adding hda3 ...
md: created md0
md: bind<hda3>
md: bind<hdb3>
md: bind<hdc3>
md: running: <hdc3><hdb3><hda3>
md: kicking non-fresh hda3 from array!
md: unbindhda3
md: export_rdev(hda3)
raid5: automatically using best checksumming function: pIII_sse
  pIII_sse  :  2240.000 MB/sec
raid5: using function: pIII_sse (2240.000 MB/sec)
md: raid5 personality registered as nr 4
raid5: device hdc3 operational as raid disk 2
raid5: device hdb3 operational as raid disk 1
raid5: allocated 3162kB for md0
raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2
RAID5 conf printout:
--- rd:3 wd:2 fd:1
disk 1, o:1, dev:hdb3
disk 2, o:1, dev:hdc3
md: considering hdc1 ...
md:  adding hdc1 ...
md: md0 already running, cannot run hdc1
md: export_rdev(hdc1)
md: ... autorun DONE.
...
EXT3 FS on hda1, internal journal
SGI XFS with ACLs, security attributes, large block numbers, no debug 
enabled
SGI XFS Quota Management subsystem
XFS mounting filesystem md0
Ending clean XFS mount for filesystem: md0
Adding 128512k swap on /dev/hda2.  Priority:-1 extents:1
...
hda: irq timeout: status=0xd0 { Busy }

ide: failed opcode was: 0xb0
hda: status error: status=0x58 { DriveReady SeekComplete DataRequest }
ide: failed opcode was: unknown
hda: drive not ready for command
-
Thanks,
Carlos Knowlton


RE: Software RAID 0+1 with mdadm.

2005-01-26 Thread Neil Brown
On Wednesday January 26, [EMAIL PROTECTED] wrote:
 Is this bug that's fixed in 1.9.0 a bug when you create the array?  I.e.,
 do we need to use 1.9.0 to create the array?  I'm looking to do the same, but
 my bootdisk currently only has 1.7.something on it.  Do I need to make a
 custom bootcd with 1.9.0 on it?

This issue that will be fixed in 1.9.0 has nothing to do with creating
the array.

It is only relevant for stacked arrays (e.g. a raid0 made out of 2 or
more raid1 arrays), and only if you are using
   mdadm --assemble --scan
(or similar) to assemble your arrays, and you specify the devices to
scan in mdadm.conf as
   DEVICES partitions
(i.e. don't list actual devices, just say to get them from the list of
known partitions).

So, no: no need for a custom bootcd.
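
And as described above, the issue only arises when the device list comes
from the bare partitions keyword, so an mdadm.conf that names the devices
to scan explicitly (device names here are illustrative) would not be in
that case:

   DEVICE /dev/sd[b-e]1 /dev/md1 /dev/md2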

NeilBrown


Re: Software RAID 0+1 with mdadm.

2005-01-26 Thread Neil Brown
On Tuesday January 25, [EMAIL PROTECTED] wrote:
 Been trying for days to get a software RAID 0+1 setup. This is on SuSe
 9.2 with kernel 2.6.8-24.11-smp x86_64.
 
 I am trying to set up a RAID 0+1 with 4 250GB SATA drives. I do the
 following:
 
 mdadm --create /dev/md1 --level=0 --chunk=4 --raid-devices=2 /dev/sdb1
 /dev/sdc1
 mdadm --create /dev/md2 --level=0 --chunk=4 --raid-devices=2 /dev/sdd1
 /dev/sde1
 mdadm --create /dev/md0 --level=1 --chunk=4 --raid-devices=2 /dev/md1
 /dev/md2
 
 This all works fine and I can mkreiserfs /dev/md0 and mount it. If I
 then reboot, /dev/md1 and /dev/md2 will show up in /proc/mdstat
 but not /dev/md0. So I create an /etc/mdadm.conf like so to see if this
 will work:
 
 DEVICE partitions
 DEVICE /dev/md*
 ARRAY /dev/md2 level=raid0 num-devices=2
 UUID=5e6efe7d:6f5de80b:82ef7843:148cd518
devices=/dev/sdd1,/dev/sde1
 ARRAY /dev/md1 level=raid0 num-devices=2
 UUID=e81e74f9:1cf84f87:7747c1c9:b3f08a81
devices=/dev/sdb1,/dev/sdc1
 ARRAY /dev/md0 level=raid1 num-devices=2  devices=/dev/md2,/dev/md1
 
 
 Everything seems ok after boot. But again no /dev/md0 in /proc/mdstat.
 But then if I do a mdadm --assemble --scan it will then load
 /dev/md0. 

My guess is that you are (or SuSE is) relying on autodetect to
assemble the arrays.  Autodetect cannot assemble an array made of
other arrays.  Just an array made of partitions.

If you disable the autodetect stuff and make sure 
  mdadm --assemble --scan
is in a boot-script somewhere, it should just work.
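
On SuSE, for example, a single line in the local boot script should be
enough (the exact file depends on your init setup; this is only a sketch):

   # e.g. appended to /etc/init.d/boot.local
   mdadm --assemble --scan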

Also, you don't really want the devices=/dev/sdd1,... entries in
mdadm.conf.
They tell mdadm to require the devices to have those names.  If you
add or remove scsi drives at all, the names can change.  Just rely on
the UUID.
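
A trimmed-down mdadm.conf along those lines, reusing the UUIDs from your
mail (md0's UUID isn't shown in this thread; take it from mdadm --detail
/dev/md0), would look something like:

   DEVICE partitions
   DEVICE /dev/md*
   ARRAY /dev/md1 UUID=e81e74f9:1cf84f87:7747c1c9:b3f08a81
   ARRAY /dev/md2 UUID=5e6efe7d:6f5de80b:82ef7843:148cd518
   ARRAY /dev/md0 UUID=<uuid reported by mdadm --detail /dev/md0>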

 
 Also do I need to create partitions? Or can I setup the whole drives as
 the array?

You don't need partitions.

 
 I have since upgraded to mdadm 1.8 and setup a RAID10. However I need
 something that is production worthy. Is a RAID10 something I could rely
 on as well? Also under a RAID10 how do you tell it which drives you want
 mirrored?

raid10 is 2.6 only, but should be quite stable.
You cannot tell it which drives to mirror because you shouldn't care.
You just give it a bunch of identical drives and let it put the data
where it wants.
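
Creation looks just like the earlier examples, only with --level=10
(device names as in your original mail; layout options left at their
defaults):

   mdadm --create /dev/md0 --level=10 --raid-devices=4 \
       /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1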

If you really want to care (and I cannot imagine why you would - all
drives in a raid10 are likely to get similar load) then you have to
build it by hand - a raid0 of multiple raid1s.

NeilBrown


RE: Software RAID 0+1 with mdadm.

2005-01-26 Thread Guy
Sorry, I did not intend this to be the solution to your problem.  Just a
much more stable method for creating the 1+0 array.  With this method,
losing 1 disk only requires re-syncing 1 disk.  With the array as a 0+1, if
you lose 1 disk, you lose the whole RAID0 array, which then requires
re-syncing 2 disks of data.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brad Dameron
Sent: Wednesday, January 26, 2005 3:33 PM
To: linux-raid@vger.kernel.org
Subject: RE: Software RAID 0+1 with mdadm.

On Tue, 2005-01-25 at 15:04, Guy wrote:
 For a more stable array, build a RAID0 out of 2 RAID1 arrays.
 
 Like this:
 
 mdadm --create /dev/md1 --level=1 --chunk=4 --raid-devices=2 /dev/sdb1
 /dev/sdc1
 mdadm --create /dev/md2 --level=1 --chunk=4 --raid-devices=2 /dev/sdd1
 /dev/sde1
 mdadm --create /dev/md0 --level=0 --chunk=4 --raid-devices=2 /dev/md1
 /dev/md2
 
 You can put a file system directly on /dev/md0
 
 Are all of the disks on the same cable?
 
 Not sure about your booting issue.
 
 Guy
 


Ya I did this setup as well. Still the same booting issue. Once it's
booted I can run mdadm --assemble --scan and it will find just the
stripe and then add it. I saw several people having this issue on a
google search. But never any solutions.

Brad Dameron
SeaTab Software
www.seatab.com





RE: irq timeout: status=0xd0 { Busy }

2005-01-26 Thread Guy
Why would you fsck the failed member of a RAID5?
You said format, please elaborate!

You should verify the disk is readable.

It looks like your disk is bad.  But a read test would be reasonable.

Try this:
dd if=/dev/hda of=/dev/null bs=64k

It should complete without errors.  It will do a full read test.
I expect it will fail.

Do you have 2 disks on the same data cable?  If so, can you re-configure so
that each disk has a dedicated data cable?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Carlos Knowlton
Sent: Wednesday, January 26, 2005 2:52 PM
To: linux-raid@vger.kernel.org
Subject: hda: irq timeout: status=0xd0 { Busy }

Hi

I'm new to this list, but I have a lot of projects that involve RAID and 
Linux.  And consequently, a lot of questions.  (But maybe a few answers 
too :)

I have a 3 disk RAID5 array, and one of the members was recently 
rejected, and I'm trying to get to the bottom of it.  I reformatted the 
failed member, and started an fsck on it.  The fsck came up clean, but 
dmesg pulled up the following error:

hda: irq timeout: status=0xd0 { Busy }
ide: failed opcode was: 0xb0
hda: status error: status=0x58 { DriveReady SeekComplete DataRequest }
ide: failed opcode was: unknown
hda: drive not ready for command

Is this a hardware issue, configuration, or driver?
