When I reinstalled dmraid for test purposes and rounded up the backup disks
from offsite, the disk resets could not be reproduced. Strange, given that no
updates came in on Saturday or Sunday that should have changed this, and the
hardware configuration matches that of the last tests on Saturday. If nobody
else can reproduce the disk resets on a disk not marked as a chipset RAID
component, perhaps the file was corrupted during unpacking, and the corruption
is therefore not present in the reinstall? The package itself opened fine.

Still had to leave one half of the mdadm RAID out and then insert it in
the initramfs shell to open the RAID, though. (Otherwise I got
cryptsetup's error about not finding the device: "/dev/md0 does not exist
or access denied".) That RAID is on partitions, which I have read
activating chipset RAID would hide under some conditions (I don't know the
defaults, though). Once booted, I was able to capture logs of the process
of activating the mdadm RAID one disk at a time.
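For reference, the workaround above amounts to assembling the array degraded from the initramfs shell and then unlocking it. A rough sketch follows; /dev/md0 is from this report, while the member partition /dev/sda1 and the "home_crypt" mapping name are placeholders, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch only, untested on this hardware: the kind of commands run at the
# initramfs prompt after hot-inserting the second half of the mdadm RAID.
# /dev/md0 is from this report; /dev/sda1 and "home_crypt" are placeholders.
run() { echo "+ $*"; }   # dry-run wrapper: print each command instead of executing it

run mdadm --assemble --run /dev/md0 /dev/sda1   # start the array (degraded is OK)
run cryptsetup luksOpen /dev/md0 home_crypt     # now cryptsetup can find /dev/md0
```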

The portion of the kernel log prior to the password entry that opens the
activated mdadm RAID is attached, as are the contents of dmesg. Also
attached is a photo of the contents of /dev/disk/by-id, taken from the
initramfs with both WDC Caviar Greens present; it shows no partitions,
but does show the entries "dm-name-pdc-dfbfdfiij" and "dm-uuid-DMRAID--
dfbfdfiij". I never saw those entries in /dev when trying to set up a
dmraid chipset RAID earlier in the week. They also do NOT show up in
/dev if I keep half the RAID out at boot and insert it later to enable
the mdadm RAID and proceed with the boot.

Does this mean dmraid is now, in fact, supporting this motherboard? If
so, that's good news for Windows dual-booters. Strange, given that the
BIOS is set to "AHCI" instead of "RAID".

Unfortunately I do not own another pair of hard drives to test chipset
RAID function on this motherboard, and I do not have the time to erase
two disks and restore from the current /home in the encrypted RAID, even
copying RAID to RAID at 120 MB/s for any one instance and a total maximum
of 180 MB/s across all copy processes. I would be extremely leery of
messing with the metadata on my main /home RAID (won't risk it) due to
the five-plus-hour copy time to restore the data if it gets killed. If I
could get dmraid to recognize the chipset RAID without blocking the
partitions from being read, then a read-only benchmark of the volume, a
head command, or opening it in parted would verify that it exists and
that, if these were empty disks, I could use the chipset RAID.
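If dmraid does expose the set, the read-only checks mentioned above might look something like the sketch below. The /dev/mapper/pdc-dfbfdfiij path is inferred from the dm-name-pdc-dfbfdfiij entry seen in /dev/disk/by-id and may differ; the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch only: non-destructive checks to confirm the chipset RAID volume
# exists without touching its metadata. Mapper path inferred from the
# dm-name-pdc-dfbfdfiij entry in the attached photo.
run() { echo "+ $*"; }   # dry-run wrapper: print each command instead of executing it

run blockdev --setro /dev/mapper/pdc-dfbfdfiij       # force the device read-only first
run head -c 512 /dev/mapper/pdc-dfbfdfiij            # read the first sector
run parted -s /dev/mapper/pdc-dfbfdfiij unit s print # read-only partition listing
run hdparm -t /dev/mapper/pdc-dfbfdfiij              # buffered-read benchmark
```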

Will try a boot into just the OD disk and see if I can get the volume up
read-only...


** Attachment added: "Kern_Log_Partial.txt"
   https://bugs.launchpad.net/bugs/958334/+attachment/2900397/+files/Kern_Log_Partial.txt

** Attachment added: "Dmesg.txt"
   https://bugs.launchpad.net/bugs/958334/+attachment/2900398/+files/Dmesg.txt

** Attachment added: "Dev-disk-by-id.png"
   https://bugs.launchpad.net/bugs/958334/+attachment/2900399/+files/Dev-disk-by-id.png

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/958334

Title:
  dmraid_1.0.0.rc16-4.1ubuntu7  blocks reading some disks, resets others

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/dmraid/+bug/958334/+subscriptions

-- 
ubuntu-bugs mailing list
[email protected]
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
