Re: raid issues after power failure

2006-07-03 Thread Ákos Maróy
Francois Barre wrote: Well, Neil, I'm wondering. It seemed to me from Akos' description of the problem that re-adding the drive (with mdadm not complaining about anything) would trigger a resync that would not even start. But as your '--force' does the trick, it implies that the resync
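A minimal sketch of the forced re-assembly being discussed; the member devices /dev/sd[abcd]1 are taken from later in the thread, so the exact names on Akos' system are an assumption here:

    # stop the partially assembled array, then force assembly from all four members
    mdadm --stop /dev/md0
    mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # watch whether the resync actually starts
    cat /proc/mdstat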

Re: [PATCH] enable auto=yes by default when using udev

2006-07-03 Thread Mario 'BitKoenig' Holbe
Jason Lunz [EMAIL PROTECTED] wrote: there's a udevd you can check for. I don't know whether that's a better test or not. It's not. For example, the Debian package also starts this udevd on package upgrades, even when a 2.4 kernel is running, which definitely has no udev support :) regards

Re: raid issues after power failure

2006-07-03 Thread Francois Barre
# mdadm --stop /dev/md0
# mdadm -A /dev/md0
will result in the array started with 3 drives out of 4 again. What am I doing wrong? Akos

AFAIK, mdadm -A <raid device> will use /etc/mdadm.conf to know which underlying partitions you mean with your /dev/md0. So, try
# mdadm --stop /dev/md0 #
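For reference, a sketch of how the /etc/mdadm.conf that Francois is pointing at could be (re)generated; the DEVICE pattern matches the partitions named later in the thread, and the redirection targets are illustrative:

    # record the member devices so a bare `mdadm -A /dev/md0` can find all of them
    echo 'DEVICE /dev/sd[abcd]1' > /etc/mdadm.conf
    # append an ARRAY line carrying the array's UUID
    mdadm --detail --scan >> /etc/mdadm.conf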

Re: [PATCH] enable auto=yes by default when using udev

2006-07-03 Thread David Greaves
Neil Brown wrote: I guess I could test for both, but then udev might change again. I'd really like a more robust check. Maybe I could test if /dev was a mount point? IIRC you can have diskless machines with a shared root and an NFS-mounted static /dev/. David

Re: [PATCH] enable auto=yes by default when using udev

2006-07-03 Thread Frank Blendinger
On Mon, Jul 03, 2006 at 09:14:38AM +1000, Neil Brown wrote: I'm worried that this test is not very robust. On my Debian/unstable system running udev, there is no /dev/.udevdb, though there is a /dev/.udev/db. I guess I could test for both, but then udev might change again. I'd really
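A sketch, in shell, of the detection question being debated here: checking both udev database locations that come up in the thread. Which path exists depends on the udev version, which is exactly the fragility Neil is worried about:

    # two locations used by different udev versions for its device database
    if [ -e /dev/.udevdb ] || [ -e /dev/.udev/db ]; then
        echo "udev appears to be managing /dev"
    fi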

Re: raid issues after power failure

2006-07-03 Thread Ákos Maróy
Francois Barre wrote: AFAIK, mdadm -A <raid device> will use /etc/mdadm.conf to know which underlying partitions you mean with your /dev/md0. So, try
# mdadm --stop /dev/md0
# mdadm -A /dev/md0 /dev/sd[abcd]1
And then have a look at your /etc/mdadm.conf, especially the line starting by

RAID 5 crash, is there any way to recover some data ?

2006-07-03 Thread Sevrin Robstad
I *had* a RAID 5 consisting of 6 x 200GB drives. After a power failure, my motherboard failed and I replaced it with some old crap I had. After some lockups on this platform, I suddenly had more than one disk marked as bad. And then, after some googling around, I tried mdadm --assemble
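Before forcing anything on an array in this state, a common first step is to see what each superblock thinks happened. A sketch only; the six member names /dev/sd[a-f]1 are an assumption, since the message does not list them:

    # inspect each member's md superblock: event counts and state show
    # which disks fell out of the array, and when
    mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1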

Linux SATA Support Question - Is the ULI M1575 chip supported?

2006-07-03 Thread Justin Piszcz
In the source:

enum {
	uli_5289	= 0,
	uli_5287	= 1,
	uli_5281	= 2,
	uli_max_ports	= 4,

	/* PCI configuration registers */
	ULI5287_BASE	= 0x90,	/* sata0 phy SCR registers */

Re: [PATCH] enable auto=yes by default when using udev

2006-07-03 Thread Luca Berra
On Mon, Jul 03, 2006 at 09:14:38AM +1000, Neil Brown wrote: However
+
+	/* if we are using udev and auto is not set, mdadm will almost
+	 * certainly fail, so we force it here.
+	 */
+	if (autof == 0 && access("/dev/.udevdb", F_OK) == 0)
+		autof = 2;
+
I'm

Re: How does md determine which partitions to use in RAID1 when DEVICE partitions is specified

2006-07-03 Thread Neil Brown
On Monday July 3, [EMAIL PROTECTED] wrote: I have Fedora Core 5 installed with mirroring on the boot partition and root partition. I created a Logical Volume Group on the mirrored root partition. How does md figure out which partitions are actually specified? It says it stores the uuid in
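The UUID the message refers to can be read straight from each component's md superblock; a sketch, with device names chosen only for illustration:

    # each RAID member records the array UUID in its md superblock;
    # mdadm matches members to arrays by comparing these UUIDs
    mdadm --examine /dev/sda1 | grep -i uuid
    mdadm --examine /dev/sdb1 | grep -i uuid
    # print the assembled arrays with their UUIDs in mdadm.conf ARRAY-line form
    mdadm --detail --scan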

Re: Linux SATA Support Question - Is the ULI M1575 chip supported?

2006-07-03 Thread Jeff Garzik
Justin Piszcz wrote: In the source:

enum {
	uli_5289	= 0,
	uli_5287	= 1,
	uli_5281	= 2,
	uli_max_ports	= 4,

	/* PCI configuration registers */
	ULI5287_BASE	= 0x90,	/* sata0 phy SCR