close 784874 3.3.4-1
thanks
Hi,
this bug has been fixed in upstream's 3.3.3 release; the next Debian
upload was 3.3.4-1, so I'm closing this bug accordingly.
Regards,
Daniel
The RAID was a degraded RAID 6 array of 5 disks with one disk missing, so
only one parity disk, shown as [_] in /proc/mdstat.
There was a malfunction in the hard drive enclosure so that it momentarily
lost contact with the 4 active disks, and then /proc/mdstat said [_],
meaning no disks at all.
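For reference, a 5-disk RAID 6 running with one member missing typically looks
roughly like this in /proc/mdstat (the array name, device names and sizes below
are placeholders, not values from the affected machine):

Personalities : [raid6] [raid5] [raid4]
md127 : active raid6 sdb1[0] sdc1[1] sdd1[2] sde1[3]
      11720536064 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/4] [UUUU_]

unused devices: <none>

The [5/4] counter and the [UUUU_] slot map are the status fields referred to above.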
On Wed, 13 May 2015 11:22:47 +0200 Christoffer Hammarström
christoffer.hammarst...@linuxgods.com wrote:
> Yes, i'm sorry, i thoughtlessly ran
> mdadm --stop /dev/md/storage
> before reporting the bug.
Well that explains some of it. I'd really like to know what the state of the
array was.
Yes, i'm sorry, i thoughtlessly ran
mdadm --stop /dev/md/storage
before reporting the bug.
I managed to reassemble the raid later with --assemble.
/ C
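Stopping and later reassembling the array as described would have involved
commands along these lines (only a sketch; the member device names are
assumptions, not taken from the report):

mdadm --stop /dev/md/storage
mdadm --assemble /dev/md/storage /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

If the array is listed in /etc/mdadm/mdadm.conf, "mdadm --assemble --scan"
would also bring it back up.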
On Sun, 10 May 2015 01:37:09 +0200 Christoffer Hammarström
christoffer.hammarst...@linuxgods.com wrote:
> Package: mdadm
> Version: 3.3.2-5
> Severity: normal
> I was trying to re-add some disks to a RAID, and mdadm crashed with a
> segmentation fault.
While that is clearly a bug and should be
Package: mdadm
Version: 3.3.2-5
Severity: normal
I was trying to re-add some disks to a RAID, and mdadm crashed with a
segmentation fault.
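Re-adding previously active members is normally done in mdadm's manage mode,
something like the following (the device names are placeholders; the exact
command line that triggered the crash is not in the report):

mdadm /dev/md/storage --re-add /dev/sdb1
mdadm /dev/md/storage --re-add /dev/sdc1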
So i rebuilt mdadm with DEB_BUILD_OPTIONS=nostrip and backtraced the core dump
in gdb, which yielded the following output:
# gdb mdadm core
GNU gdb (Debian
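An unstripped build and a full backtrace can be obtained on Debian roughly like
this (a sketch; the source directory and .deb file names depend on the exact
package version and architecture):

apt-get source mdadm
cd mdadm-3.3.2/
DEB_BUILD_OPTIONS=nostrip dpkg-buildpackage -us -uc -b
dpkg -i ../mdadm_3.3.2-5_amd64.deb
ulimit -c unlimited

Then rerun the failing mdadm command to produce a core file, and load it:

gdb mdadm core
(gdb) bt full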