On 25/08/2014 10:22, Peter Humphrey wrote:
On Sunday 24 August 2014 19:22:40 Kerin Millar wrote:
On 24/08/2014 14:51, Peter Humphrey wrote:
--->8
So I decided to clean up /etc/mdadm.conf by adding these lines:

DEVICE /dev/sda* /dev/sdb*
ARRAY /dev/md5 devices=/dev/sda5,/dev/sdb5
ARRAY /dev/md7 devices=/dev/sda7,/dev/sdb7
ARRAY /dev/md9 devices=/dev/sda9,/dev/sdb9
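
Incidentally, ARRAY lines like these can also be generated from the arrays'
own metadata rather than typed out by hand, assuming the arrays are already
assembled at the time:

    mdadm --detail --scan >> /etc/mdadm.conf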

Perhaps you should not include /dev/md5 here.

I wondered about that.

As you have made a point of building the array containing the root
filesystem with 0.90 metadata, ...

...as was instructed in the howto at the time...

I would assume that it is being assembled in kernelspace as a result of
CONFIG_MD_AUTODETECT being enabled.

Yes, I think that's what's happening.
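
One quick way to confirm that, assuming the kernel exposes /proc/config.gz
and the boot messages are still in the ring buffer, would be:

    zgrep CONFIG_MD_AUTODETECT /proc/config.gz
    dmesg | grep -i 'md:.*autodetect'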

Alternatively, perhaps you are using an initramfs.

Nope.

Either way, by the time the mdraid init.d script executes, the /dev/md5
array must - by definition - be up and mounted. Does it make a
difference if you add the following line to the config?

    AUTO +1.x homehost -all

That will prevent it from considering arrays with 0.90 metadata.

No, I get the same result. Just a red asterisk at the left end of the line
after "Starting up RAID devices..."

It has since dawned on me that defining AUTO as such won't help, because you define the arrays explicitly. Can you try again with the mdraid script in the default runlevel, but without the line defining /dev/md5?


Now that I look at /etc/init.d/mdraid I see a few things that aren't quite
kosher. The first is that it runs "mdadm -As 2>&1", which produces no output
after booting is finished (whence the empty line before the asterisk). Then it tests

Interesting. I think that you should file a bug because the implication is that mdadm is returning a non-zero exit status in the case of arrays that have already been assembled. Here's a post from the Arch forums suggesting the same:

https://bbs.archlinux.org/viewtopic.php?pid=706175#p706175

Is the exit status something other than 1? Try inserting eerror "$?" immediately after the call to mdadm -As. Granted, it's just an annoyance, but it looks silly, not to mention unduly worrying.
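
By way of illustration, and assuming the script captures the output roughly
as you describe, the check would look something like this (a sketch only,
not necessarily the actual script):

    output="$(mdadm -As 2>&1)"
    eerror "mdadm exit status: $?"

Whatever number that prints for the already-assembled case would be worth
including in the bug report.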

for the existence of /dev/md_d*. That also doesn't exist, though /dev/md*
does:

# ls -l /dev/md*
brw-rw---- 1 root disk 9, 0 Aug 25 10:03 /dev/md0
brw-rw---- 1 root disk 9, 5 Aug 25 10:03 /dev/md5
brw-rw---- 1 root disk 9, 7 Aug 25 10:03 /dev/md7
brw-rw---- 1 root disk 9, 9 Aug 25 10:03 /dev/md9

/dev/md:
total 0
lrwxrwxrwx 1 root root 6 Aug 25 10:03 5_0 -> ../md5
lrwxrwxrwx 1 root root 6 Aug 25 10:03 7_0 -> ../md7
lrwxrwxrwx 1 root root 6 Aug 25 10:03 9_0 -> ../md9


I think this has something to do with partitionable RAID. Yes, it is possible to superimpose partitions upon an md device, though I have never seen fit to do so myself. For those who do not, the md_d* device nodes won't exist.
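
By way of example (the device names here are placeholders), a partitionable
array would be created with something along these lines:

    mdadm --create /dev/md_d0 --auto=mdp --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1

Only arrays created that way get the md_d* nodes, which is presumably why
the script's test finds nothing on your system.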

Looks like I have some experimenting to do.

I forgot to mention in my first post that, on shutdown, when the script runs
"mdadm -Ss 2>&1", I always get "Cannot get exclusive access to /dev/md5...".
I've always just ignored it until now, but perhaps it's important?

I would guess that it's because a) the array hosts the root filesystem and b) you have the array explicitly defined in mdadm.conf, and mdadm is being called with -s/--scan again.
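
If you want to confirm that it really is the mounted root keeping the array
busy, something like the following should do (the first command may report
/dev/root rather than /dev/md5 when booting without an initramfs):

    findmnt -n -o SOURCE /
    cat /proc/mdstat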


On a related note, despite upstream's efforts to make this as awkward as
possible, it is possible to mimic the kernel's autodetect functionality
in userspace with a config such as this:

    HOMEHOST <ignore>
    DEVICE partitions
    AUTO +1.x -all

Bear in mind that the mdraid script runs `mdadm --assemble --scan`.
There is no need to specifically map out the properties of each array.
This is what the metadata is for.
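
If in doubt as to which arrays such an AUTO line will match, the metadata
version recorded in each member's superblock can be inspected directly, for
example:

    mdadm --examine /dev/sda5 | grep -i version

A 0.90 array will report itself as such; the 1.x arrays are the ones that
the AUTO line above admits.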

Thanks for the info, and the help. The fog is dispersing a bit...

