Bug#784070: The same bug exists in Dracut

2015-07-05 Thread Alexander E. Patrakov
I have hit this bug in a Jessie-based virtual machine that I created in 
order to teach myself how to deal with MD RAID1.


To work around it, I replaced initramfs-tools with dracut (040+1-1).
Result: the bug is still reproducible. The logic to start a degraded
RAID1 array when, after some timeout, not all of its RAID components
have been found is missing.
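
To illustrate the kind of fallback I mean (a rough sketch only, not the
actual initramfs-tools or dracut code): once the normal wait for the root
device has timed out, the initramfs would need to do something along the
lines of

  # hypothetical last-resort step, run only after the device wait has timed out;
  # --run tells mdadm to start arrays even though some member devices are missing
  mdadm --assemble --scan --run

and only then give up and drop to the emergency shell.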


--
Alexander E. Patrakov




Bug#784070: The same bug exists in Dracut

2015-07-05 Thread Alexander E. Patrakov

05.07.2015 21:43, Alexander E. Patrakov wrote:
> I have hit this bug in a Jessie-based virtual machine that I created in
> order to teach myself how to deal with MD RAID1.
>
> To work around it, I replaced initramfs-tools with dracut (040+1-1).
> Result: the bug is still reproducible. The logic to start a degraded
> RAID1 array when, after some timeout, not all of its RAID components
> have been found is missing.



Attaching the SOS report generated by the dracut-based initramfs.
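
(For context: this is the rdsosreport.txt that dracut's emergency shell
offers to generate; the "+ command" lines below are its shell trace. If I
remember correctly, it can be reproduced from that shell with roughly

  rdsosreport
  cat /run/initramfs/rdsosreport.txt

though the exact path may differ between dracut versions.)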

--
Alexander E. Patrakov
+ cat /lib/dracut/dracut-040-207-g7252cde
dracut-040-207-g7252cde
+ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.16.0-4-amd64 root=UUID=5d55878f-b13e-4061-a773-85a84ea5ea4e ro quiet
+ '[' -f /etc/cmdline ']'
+ for _i in '/etc/cmdline.d/*.conf'
+ '[' -f '/etc/cmdline.d/*.conf' ']'
+ break
+ cat /proc/self/mountinfo
0 0 0:1 / / rw shared:1 - rootfs rootfs rw
14 0 0:14 / /sys rw,nosuid,nodev,noexec,relatime shared:2 - sysfs sysfs rw
15 0 0:3 / /proc rw,nosuid,nodev,noexec,relatime shared:7 - proc proc rw
16 0 0:5 / /dev rw,nosuid shared:8 - devtmpfs devtmpfs rw,size=500116k,nr_inodes=125029,mode=755
17 14 0:15 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:3 - securityfs securityfs rw
18 16 0:16 / /dev/shm rw,nosuid,nodev shared:9 - tmpfs tmpfs rw
19 16 0:11 / /dev/pts rw,nosuid,noexec,relatime shared:10 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
20 0 0:17 / /run rw,nosuid,nodev shared:11 - tmpfs tmpfs rw,mode=755
21 20 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime shared:12 - tmpfs tmpfs rw,size=5120k
22 14 0:19 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:4 - tmpfs tmpfs ro,mode=755
23 22 0:20 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:5 - cgroup cgroup rw,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
24 14 0:21 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:6 - pstore pstore rw
25 22 0:22 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:13 - cgroup cgroup rw,cpuset
26 22 0:23 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:14 - cgroup cgroup rw,cpu,cpuacct
27 22 0:24 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,devices
28 22 0:25 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,freezer
29 22 0:26 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,net_cls,net_prio
30 22 0:27 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:18 - cgroup cgroup rw,blkio
31 22 0:28 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:19 - cgroup cgroup rw,perf_event
+ cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=500116k,nr_inodes=125029,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
+ blkid
/dev/vda1: UUID=d66bdf7d-d6db-1d38-328e-f5f0f8722809 UUID_SUB=a2bfdf6b-45f1-ab7a-fa0e-548995a276e0 LABEL=debian:0 TYPE=linux_raid_member
/dev/vda2: UUID=364c1067-ee45-08dc-9b1a-8a99c30e2aea UUID_SUB=b4652541-2053-3b71-209b-639f68ae80f6 LABEL=debian:1 TYPE=linux_raid_member
+ blkid -o udev
ID_FS_UUID=d66bdf7d-d6db-1d38-328e-f5f0f8722809
ID_FS_UUID_ENC=d66bdf7d-d6db-1d38-328e-f5f0f8722809
ID_FS_UUID_SUB=a2bfdf6b-45f1-ab7a-fa0e-548995a276e0
ID_FS_UUID_SUB_ENC=a2bfdf6b-45f1-ab7a-fa0e-548995a276e0
ID_FS_LABEL=debian:0
ID_FS_LABEL_ENC=debian:0
ID_FS_TYPE=linux_raid_member

ID_FS_UUID=364c1067-ee45-08dc-9b1a-8a99c30e2aea
ID_FS_UUID_ENC=364c1067-ee45-08dc-9b1a-8a99c30e2aea
ID_FS_UUID_SUB=b4652541-2053-3b71-209b-639f68ae80f6
ID_FS_UUID_SUB_ENC=b4652541-2053-3b71-209b-639f68ae80f6
ID_FS_LABEL=debian:1
ID_FS_LABEL_ENC=debian:1
ID_FS_TYPE=linux_raid_member
+ ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root 0 9 Jul  5 16:44 

Bug#784070: The same bug exists in Dracut

2015-07-05 Thread NeilBrown
On Sun, 5 Jul 2015 21:43:33 +0500 Alexander E. Patrakov
<patra...@gmail.com> wrote:

> I have hit this bug in a Jessie-based virtual machine that I created in
> order to teach myself how to deal with MD RAID1.
>
> To work around it, I replaced initramfs-tools with dracut (040+1-1).
> Result: the bug is still reproducible. The logic to start a degraded
> RAID1 array when, after some timeout, not all of its RAID components
> have been found is missing.

Actually the logic is there, but it doesn't work very well.
Dracut-042 contains some important patches - I wouldn't try to debug
booting a degraded md array with dracut without first updating to at
least version 042 - and I notice that 043 is out, so best to start with
that.
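
(A practical note: after upgrading dracut, the initramfs has to be
regenerated before the new version is actually used at boot; roughly,
assuming the usual Debian image path,

  dracut --force /boot/initrd.img-$(uname -r) $(uname -r)

where the image name may differ depending on local configuration.)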

NeilBrown

