IMHO, this is the *new* expected behavior.  If both RAID members left
the array in a good state (i.e. you unplugged one while the system was
off), then you need to zero the superblock on the removed member to get
it back into the array.
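
For reference, a rough sketch of that recovery path; the device names
(/dev/md0, /dev/sdb1) are just placeholders for whatever your setup uses:

    mdadm --examine /dev/sdb1                  # confirm the stale superblock is present
    mdadm --zero-superblock /dev/sdb1          # wipe the old metadata
    mdadm --manage /dev/md0 --add /dev/sdb1    # re-add; the member is then resynced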

I suspect your test case would work with a disk that only had the RAID
structures on it, and not clean data inside.  Perhaps try a live pull of
the cable (to simulate a controller failure) for your test, in an
environment where you don't care about the data.
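
If a physical pull is awkward, a rough software approximation (not quite
the same thing as a controller failure, and again with placeholder
device names) is to fail and remove the member while the array is running:

    mdadm --manage /dev/md0 --fail /dev/sdb1      # mark the member faulty
    mdadm --manage /dev/md0 --remove /dev/sdb1    # detach it from the array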

In that case, upon restart, I would expect the "dirty" and "old" md disk
to be automatically rebuilt.
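
Whether that rebuild actually kicks in is easy to check with the usual
status commands (/dev/md0 again being a placeholder):

    cat /proc/mdstat           # shows resync/recovery progress, if any
    mdadm --detail /dev/md0    # per-member state: active, rebuilding, spare, faulty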

In one of my use cases, where I use mdadm slightly differently across
two computers, the new behavior solves a problem where the older disk is
sometimes mounted when both md members are clean.  In that situation the
new data is overwritten by the old, which was a real issue with the old
behavior.

The factors that previously let old data overwrite new data are
individual disk spin-up times and the availability of disks at boot
(especially with remote block devices), which is probably the reason for
this 'feature'.  My observations are in the duplicate bug linked below.

Your test case should probably include a real 'spare' rather than an old
member in a good state (which probably should not be overwritten by
default).
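
Adding a spare ahead of time is just an --add against a non-degraded
array; mdadm keeps it as a hot spare until a member fails (device names
are placeholders):

    mdadm --manage /dev/md0 --add /dev/sdc1    # joins as a spare since the array is complete
    mdadm --detail /dev/md0                    # the new device should be listed as "spare"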

https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/945786

-- 
https://bugs.launchpad.net/bugs/925280

Title:
  Software RAID fails to rebuild after testing degraded cold boot
