Roger Searle wrote, On 29/10/09 11:34:
Roger Searle wrote, On 29/10/09 10:47:
My reading of man mdadm suggests doing a fail and remove of the
faulty drive, possibly at the same time as adding a new device,
like:
mdadm /dev/md0 --add /dev/sda1 --fail /dev/sdb1 --remove /dev/sdb1
Is this a good process to follow or is it redundant/unnecessary?
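As I read it, that one-liner just chains three manage-mode operations,
which mdadm applies in the order given, so it should be equivalent to
running them one at a time (device names as in the man page's example):

mdadm /dev/md0 --fail /dev/sdb1     # mark the failed member as faulty
mdadm /dev/md0 --remove /dev/sdb1   # detach it from the array
mdadm /dev/md0 --add /dev/sda1      # add the replacement; resync begins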
Craig Falconer wrote:
Sounds silly actually - remove the only good drive as you add the
blank one?
Perhaps I have confused things by quoting that line directly from the man
page rather than changing it to reflect my actual devices. It is just
saying that in one line you can add a new device (sda1 in the example)
and remove a failed one (sdb1). I'd be adding sdd. Does that sound
better? The question is really more about whether I need to fail and
remove the bad drive.
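With your actual devices that would be something like this (check the
names against /proc/mdstat first; I'm assuming the array is md0, the dead
member is sdb1, and the new disk gets partitioned as sdd1):

mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # drop the bad member
mdadm /dev/md0 --add /dev/sdd1                       # add the new one

The fail/remove is only needed while the kernel still lists the old
member as part of the array; if it has already been kicked out, --add on
its own is enough.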
You'll have to power off the box to change the drive anyway, unless you
are feeling really adventurous and want to hot swap.
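If you did fancy the hot-swap route on hardware that supports it, the
rough shape would be the following (the sdb and host0 names are guesses;
check dmesg for the real ones):

mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # drop the member first
echo 1 > /sys/block/sdb/device/delete                # tell the kernel to detach the disk
# physically swap the drive, then rescan the controller:
echo "- - -" > /sys/class/scsi_host/host0/scan       # host0 is a guess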
I suggest you down the box, swap out the drive, then bring it all back
up. The RAID will assemble degraded and then you can go from there.
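After the swap, recovery would go roughly like this (assuming sda is the
surviving member and the new disk comes up as sdd):

cat /proc/mdstat                       # confirm the array assembled degraded
sfdisk -d /dev/sda | sfdisk /dev/sdd   # copy the partition table to the new disk
mdadm /dev/md0 --add /dev/sdd1         # add the new partition; rebuild starts
watch cat /proc/mdstat                 # keep an eye on the resync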
--
Craig Falconer