I'm going to go ahead and close this since it doesn't appear to be a bug
in dmraid. If you can reproduce this starting with clean disks that
have not had any errors, please reopen it.
** Changed in: dmraid (Ubuntu)
Status: New => Invalid
--
dmraid is unable to disable a disk after faillure
https://bugs.launchpad.net/bugs/562962
Yes, the only reason to use it is to dual boot with Windows.
The dmsetup status output lists the state of the disks as "RD", which
indicates read errors and write errors respectively.
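For reference, those per-device health characters can be decoded as in this minimal sketch (the character meanings are taken from the kernel's dm-mirror target: A = alive, D = dead after a write error, R = read error, S = sync error, F = flush error; the `decode_mirror_health` helper name is my own):

```shell
#!/bin/sh
# Interpret the per-device health characters reported by `dmsetup status`
# for a dm-mirror target (the field that reads e.g. "RD" below).
decode_mirror_health() {
    chars="$1"
    i=0
    while [ $i -lt ${#chars} ]; do
        c=$(printf '%s' "$chars" | cut -c $((i + 1)))
        case "$c" in
            A) msg="alive" ;;
            D) msg="dead (write error)" ;;
            R) msg="read error" ;;
            S) msg="sync error" ;;
            F) msg="flush error" ;;
            *) msg="unknown" ;;
        esac
        echo "device $i: $msg"
        i=$((i + 1))
    done
}

decode_mirror_health RD
```

For "RD" this reports the first leg with a read error and the second as dead after a write error, matching the diagnosis in this thread.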
--
Hi Phillip,
Where do you see that the other disk is failing? I can only see the part
of the array pdc_cdgeffcac that corresponds to disk sdc failing.
About dmraid: yes, thank you, you are right. I only really realized this
when I saw that the BIOS RAID was not able to handle this as needed.
I'm waiting for
gadLinux, dmraid is handy for people dual-booting with, for instance,
Windows, where it works a bit better and there is no other common
solution. But for Linux-only systems there is no reason to use dmraid.
--
I'm sorry, I forgot to comment that it's strange that dmraid shows the
mirror as ok.
It should work in degraded mode as one disk is missing.
*** Active Set
name : pdc_cdgeffcac
size : 976562432
stride : 128
type : mirror
status : ok
subsets: 0
devs : 1
spares : 0
--
Hi Phillip,
I'm posting the dmsetup output. Note that it also holds the LVM setup info...
data_vg-homes: 0 41943040 linear
pdc_cdgeffcac: 0 976562432 mirror 2 8:48 8:32 7442/7451 1 RD 1 core
pdc_bfhccajjfe: 0 976562432 mirror 2 8:16 8:0 7451/7451 1 AA 1 core
data_vg-www--main: 0 83886080 linear
pdc_bfhccajjfe1:
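As a sketch, the mirror line above can be pulled apart with plain sh, assuming the standard dm-mirror status layout (name, start, length, target type, number of legs, the leg devices, in-sync/total regions, a count, the health characters, then the log arguments):

```shell
#!/bin/sh
# Split one dm-mirror status line from `dmsetup status` into its fields.
line='pdc_cdgeffcac: 0 976562432 mirror 2 8:48 8:32 7442/7451 1 RD 1 core'
set -- $line
nlegs=$5                # number of mirror legs ("2" above)
shift $((5 + nlegs))    # skip name, start, length, target, count, leg devices
regions=$1              # in-sync/total regions, e.g. 7442/7451
health=$3               # per-device health characters, e.g. RD
insync=${regions%/*}
total=${regions#*/}
echo "health=$health in-sync=$insync/$total"
```

For the line above this prints `health=RD in-sync=7442/7451`, i.e. 9 of 7451 regions are out of sync, which is why the array should be treated as degraded despite dmraid reporting "ok".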
It looks like you have had read errors on one disk, and write errors on
the other. There should be warnings to this effect in your syslog. I
suggest that you immediately shut down and back up what you can.
Unfortunately dmraid does not really handle errors, and so is not
recommended for use.