I'm having very similar symptoms after installing Ubuntu 10.04.1 from scratch
with two 500 GB disks (WD and ST).
The system installs and boots correctly if the RAID1 array is created manually
from the CLI before partition detection.
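(For reference, the manual creation is roughly along these lines; the device
names below are only illustrative and not necessarily what the installer ends
up using:)

~$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
~$ sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5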
But after some hours of uptime, errors start appearing in the logs and the array
becomes degraded:

Sep 19 13:36:19 deepthought kernel: [  278.248022] ata3.00: qc timeout (cmd 0x27)
Sep 19 13:36:19 deepthought kernel: [  278.248027] ata3.00: failed to read native max address (err_mask=0x4)
Sep 19 13:36:19 deepthought kernel: [  278.248033] ata3.00: disabled
Sep 19 13:36:19 deepthought kernel: [  278.248039] ata3.00: device reported invalid CHS sector 0
Sep 19 13:36:19 deepthought kernel: [  278.248049] ata3: hard resetting link
Sep 19 13:36:20 deepthought kernel: [  279.128035] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
Sep 19 13:36:20 deepthought kernel: [  279.128048] ata3: EH complete
Sep 19 13:36:20 deepthought kernel: [  279.128057] sd 2:0:0:0: [sdb] Unhandled error code
Sep 19 13:36:20 deepthought kernel: [  279.128059] sd 2:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Sep 19 13:36:20 deepthought kernel: [  279.128062] sd 2:0:0:0: [sdb] CDB: Write(10): 2a 00 3a 38 5f 88 00 00 08 00
Sep 19 13:36:20 deepthought kernel: [  279.128082] md: super_written gets error=-5, uptodate=0
Sep 19 13:36:20 deepthought kernel: [  279.128105] sd 2:0:0:0: [sdb] Unhandled error code
Sep 19 13:36:20 deepthought kernel: [  279.128106] sd 2:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Sep 19 13:36:20 deepthought kernel: [  279.128109] sd 2:0:0:0: [sdb] CDB: Read(10): 28 00 06 a2 3c 80 00 00 20 00
Sep 19 13:36:20 deepthought kernel: [  279.205366] RAID1 conf printout:
Sep 19 13:36:20 deepthought kernel: [  279.205369]  --- wd:1 rd:2
Sep 19 13:36:20 deepthought kernel: [  279.205371]  disk 0, wo:0, o:1, dev:sda
Sep 19 13:36:20 deepthought kernel: [  279.205373]  disk 1, wo:1, o:0, dev:sdb
Sep 19 13:36:20 deepthought kernel: [  279.212009] RAID1 conf printout:
Sep 19 13:36:20 deepthought kernel: [  279.212011]  --- wd:1 rd:2
Sep 19 13:36:20 deepthought kernel: [  279.212013]  disk 0, wo:0, o:1, dev:sda

Also, in dmesg this message is present at every boot:

[    3.022033] md1: p5 size 976269312 exceeds device capacity, limited to end of disk
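In case it's useful, this is how one can cross-check the sizes that message
refers to (just a sketch; /dev/md1p5 is assumed to exist because the kernel
parses sdb's partition table on top of md1):

~$ cat /proc/partitions
~$ sudo blockdev --getsz /dev/md1      # array size in 512-byte sectors
~$ sudo blockdev --getsz /dev/md1p5    # nested partition size, should not exceed the above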

These are the partitions as seen by sfdisk:

~$ sudo sfdisk -l /dev/sda

Disk /dev/sda: 30401 cylinders, 255 heads, 63 sectors/track
Warning: The partition table looks like it was made
  for C/H/S=*/81/63 (instead of 30401/255/63).
For this listing I'll assume that geometry.
Units = cylinders of 2612736 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1          0+  95707-  95708- 244197560   83  Linux
                end: (c,h,s) expected (1023,80,63) found (705,80,63)
/dev/sda2          0       -       0          0    0  Empty
/dev/sda3          0       -       0          0    0  Empty
/dev/sda4          0       -       0          0    0  Empty

~$ sudo sfdisk -l /dev/sdb

Disk /dev/sdb: 60801 cylinders, 255 heads, 63 sectors/track
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1          0+     31-     32-    249856   fd  Linux raid autodetect
/dev/sdb2         31+  60801-  60770- 488134657    5  Extended
/dev/sdb3          0       -       0          0    0  Empty
/dev/sdb4          0       -       0          0    0  Empty
/dev/sdb5         31+  60801-  60770- 488134656   fd  Linux raid autodetect
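Given the geometry warnings above, the cylinder-based listings are hard to
compare between the two disks; dumping the tables in sector units is clearer
(a sketch, same disks assumed):

~$ sudo sfdisk -d /dev/sda    # dump partition table in sectors
~$ sudo sfdisk -d /dev/sdb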

And this is the /proc/mdstat output:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] 
[raid10] 
md0 : active raid1 sdc1[1] md1p1[0]
      249792 blocks [2/2] [UU]
      
md1 : active raid1 sdb[0]
      488134592 blocks [2/1] [U_]
      bitmap: 114/233 pages [456KB], 1024KB chunk
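For completeness, this is roughly what it takes to bring the second member
back once the link recovers after a reboot (only a sketch, assuming /dev/sdb
is the device that got kicked out of md1):

~$ sudo mdadm --detail /dev/md1            # confirm which member was marked failed
~$ sudo mdadm /dev/md1 --remove /dev/sdb   # clear the failed member entry if still present
~$ sudo mdadm /dev/md1 --re-add /dev/sdb   # re-add it; the write-intent bitmap keeps resync short
~$ cat /proc/mdstat                        # watch the recovery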

I'm on my third or fourth reinstall attempt (previously I had the bad idea of
using raid1+luks+lvm; now I've switched to plain, unencrypted partitions), but
I'm still having stability issues.
Can somebody confirm whether I'm hitting this bug?

-- 
mount: mounting /dev/md0 on /root/ failed: Invalid argument
https://bugs.launchpad.net/bugs/569900