Michael Tokarev wrote:
> Neil Brown wrote:
>> On Monday December 31, [EMAIL PROTECTED] wrote:
>>> I'm hoping that if I can get raid5 to continue despite the errors, I
>>> can bring back up enough of the server to continue, a bit like the
>>> remount-ro option in ext2/ext3.
>>>
>>> If not, oh well...
>> Sorry, but it is "oh well".
>
> And another thought around all this.  Linux sw raid definitely needs
> a way to proactively replace a (probably failing) drive without removing
> it from the array first.  Something like
>   mdadm --add /dev/md0 /dev/sdNEW --inplace /dev/sdFAILING
> so that sdNEW becomes a mirror of sdFAILING.  Once the "recovery"
> finishes (and it may use data from the other drives on an I/O error
> reading sdFAILING - unlike the described scenario of making a
> superblock-less mirror of sdNEW and sdFAILING), follow with
>   mdadm --remove /dev/md0 /dev/sdFAILING
> which involves no further reconstruction.

I really like that idea; it addresses the same problem as the various posts about creating little raid1 arrays out of the old and new drives.
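
For reference, the little-raid1 trick from those threads goes roughly like this (a sketch from memory, with placeholder device names; --build makes the mirror superblock-less, and as Michael points out, a read error on the failing drive aborts the copy instead of falling back to parity):

  # stop the array so the clone picks up a consistent superblock
  mdadm --stop /dev/md0
  # build a superblock-less raid1; the resync copies the first
  # device (sdFAILING) onto the second (sdNEW)
  mdadm --build /dev/md1 --level=1 --raid-devices=2 /dev/sdFAILING /dev/sdNEW
  # wait for the copy to finish, then tear the mirror down
  mdadm --wait /dev/md1
  mdadm --stop /dev/md1
  # sdNEW now carries a byte-for-byte clone, superblock included,
  # and md0 can be assembled with it in sdFAILING's slot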

I would like an option to keep a drive with bad sectors in an array when removing it would prevent the array from running (or starting). That shouldn't be the default, but there are times when some data is far better than none. The choices, I would think, are: fail the drive, set the array read-only, or return an error and keep going.
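
The read-only half of that can at least be done by hand today, if you catch it in time - assuming I'm reading the mdadm man page right, --readonly in misc mode freezes the array against writes:

  mdadm --readonly /dev/md0

It's a manual action after the fact, though, not the automatic "go r/o instead of kicking the drive" policy described above.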

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck

