** Description changed:

12.04.1 regression (it worked before, maybe without the mentioned safety check, but it worked). ("re-add" refers to speeding up the sync using a bitmap.)

The --incremental (udev) call refuses to (re)add a temporarily disconnected member back to an already restarted (active) raid, even though the event count clearly shows its state is equal to or older than at the time the array was started (run) degraded.

This seems to have come from an attempt to "fix" bug #557429 without considering the discussion beyond comment 68:
https://bugs.launchpad.net/mdadm/+bug/557429/comments/68

And when attempting to do it manually:

# mdadm /dev/md2 --re-add /dev/sda6
mdadm: --re-add for /dev/sda6 to /dev/md2 is not possible
(missing info: "because the array has no write-intent bitmap")

# mdadm /dev/md2 --add /dev/sda6
mdadm: /dev/sda6 reports being an active member for /dev/md2, but a --re-add fails.
mdadm: not performing --add as that would convert /dev/sda6 in to a spare.
mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sda6" first.

This is not how things are supposed to work in times of hotplugging. A warning/question whether the device that is to be added contains newer data than at the time of its failure is appropriate, but otherwise, get the job of adding that device back to its raid done!

To avoid regression: Let the "mdadm --incremental" command (re)add members to running arrays again. Not doing so does not guard against running from alternating (diverging) parts of an array anyway (if they come up in random order).

True fix to prevent diverging array parts:
- Store the state of the event counter at the time of degradation in the superblock.
+ Store the state of the event counter at the time of the degradation for each degraded device in the superblocks on the remaining member devices.

--incremental should continue to (re)add a device automatically (only) if the event count shows the state of the member device that is to be (re)added is equal to or older than at the time the array degraded. (Otherwise, fail and print an error message that the device contains conflicting changes.)
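The check the report asks --incremental to keep performing can be sketched in shell. This is an illustration only, not mdadm's actual implementation: the device names are the ones from the report, the events_of helper shown in the comment is a hypothetical way to read the counter via "mdadm --examine", and the sample values stand in for real superblock data so the logic runs standalone.

```shell
# Sketch of the proposed event-count comparison (illustrative, not mdadm code).
# On a real system the counter could be read from the superblock, e.g.:
#   events_of() { mdadm --examine "$1" | awk '/Events/ {print $NF}'; }
# Sample values stand in here so the decision logic can run standalone.

member_events=4711      # e.g. events_of /dev/sda6 (the returning member)
degrade_events=4711     # counter recorded when the array degraded (the value
                        # the report proposes storing in the superblocks of
                        # the remaining member devices)

if [ "$member_events" -le "$degrade_events" ]; then
    decision="re-add"   # state is equal or older: safe to (re)add automatically
    # mdadm /dev/md2 --re-add /dev/sda6
else
    decision="refuse"   # member has newer, conflicting changes: abort loudly
    echo "refusing: device contains conflicting changes" >&2
fi
echo "$decision"
```

With equal counters the sketch prints "re-add"; a member whose Events counter is higher than the recorded degradation value would be refused instead of silently converted to a spare.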
** Description changed:

To avoid regression:
- Let the "mdadm --incremental" command (re)add members to running arrays again. Not doing so does not guard against running from alternating (diverging) parts of an array anyway (if they come up in random order).
+ Let the "mdadm --incremental" command (re)add members to running arrays automatically again. Not doing so does not guard against running from alternating (diverging) parts of an array anyway (if the devices come up in random order, both parts get started degraded, whichever comes up first).

True fix to prevent diverging array parts:
- Store the state of the event counter at the time of the degradation for each degraded device in the superblocks on the remaining member devices.
+ Store the state of the event counter at the time of the degradation for each missing device in the superblocks on the remaining member devices.

** Description changed:
- --incremental should continue to (re)add a device automatically (only) if the event count shows the state of the member device that is to be (re)added is equal or older than at the time the array degraded. (Otherwise, fail and print an error message that the device contains conflicting changes.)
+ --incremental should continue to (re)add a device automatically (only) if the event count shows the state of the member device that is to be (re)added is equal or older than at the time the device failed. (Otherwise, abort and print an error message that the device contains conflicting changes.)

--
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1088532

Title:
  plugging in a missing raid member does not (re)add it to array

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1088532/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs