So yeah, a three-year-old bug cut me off from my headless server today.
Awesome.
The default behaviour of requiring actual user interaction in case of a
degraded RAID array is just ... wrong. It's SO MUCH MORE important to
get the system up and running so we can diagnose and fix the problems.
You
This whole automatic drop to recovery mode by default is such a BUG, NOT A
FEATURE. Please roll it back.
Most of the servers with RAID on this planet are headless. Getting stuck in
the boot process for any reason is asking for trouble. If it doesn't boot,
then there's no easy way to fix anything, not to mention
Had some holiday fun with my Debian server and installed the Trusty 14.04 alpha
(I was curious about the clouds).
Sadly the same bug bit me, albeit on some old lazy 250G disks. My OCZ
RevoDrive X2 is apparently still fast enough to not lose out in this
race condition.
I was dropped to BusyBox on my up-to-date RAID1 Ubuntu 12.04 AMD64 system. cat
/proc/mdstat reported [U_] on all 3 md devices (/, swap, /home), but I was
pretty sure the disk (sdb) was functional.
On reset I pressed ESC to get to the GRUB menu to select rescue; I could see
the message array
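For anyone puzzled by that notation: the bracketed status in /proc/mdstat has
one slot per array member, U for up and _ for missing. A minimal made-up
illustration (device names and sizes are invented, not taken from this report):

  $ cat /proc/mdstat
  Personalities : [raid1]
  md0 : active raid1 sda1[0]
        242187392 blocks [2/1] [U_]
  unused devices: <none>

Here [2/1] [U_] means a two-member mirror running on one disk, i.e. degraded;
a healthy mirror would show [2/2] [UU].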
Still a problem in Saucy (13.10) beta
For what it's worth: Just had the same issue with Ubuntu 12.04:
The main problem for me is that the keyboard is not working there, so I can
not press 'y' to say boot it anyway.
This is very annoying.
I can continue the boot by typing 'return 0' into the maintenance console.
But there is no way to keep the
For those that have this problem with a non-degraded raid mistakenly
marked as degraded, please see/subscribe
https://bugs.launchpad.net/mdadm/+bug/920647
@Marcus Overhagen: Thanks for your message. You may update the
ReliableRaid wiki page.
I found it worthwhile to migrate my RAID systems to Debian; now I am
doing the same with the Debian desktop howto, and I welcome the
Debian Tanglu project.
I just made a full update on a 180-day-old 12.04.2 system. Everything
was fine before, but on reboot I'm dropped into this rescue shell,
because /dev/sdc1 is detected as belonging to a degraded RAID.
However, the RAID is not degraded! cat /proc/mdstat shows it as healthy
(the keyboard is working),
I am very shocked. I just had a disk (part of a RAID 5) crash, and
you are telling me that it is not possible to boot and to access the data?
During the install (Ubuntu 13.10) I selected boot degraded, and
the system does not boot. Black screen, no error message, no shell
prompt after 20
I updated the kernel on my 32-bit 12.10 setup, only to find that it
won't boot due to the symptoms described in this bug report. I tried
compiling my own 3.8.5 from kernel.org, but the results are the same. I
can't use the keyboard, even though kernel messages are displayed about the
USB keyboard
Please fix at least the default behavior. The system must start up with a
degraded RAID whenever possible, without the need to answer questions during
system startup.
As mentioned above several times, the current implementation doesn't work and
compromises the good impression the Ubuntu Server
I'm also facing this bug.
My problem is the same as Michael's:
The main problem for me is that the keyboard is not working there, so I cannot
press 'y' to say boot it anyway.
This is very annoying.
Still facing this bug; see also
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1127947
The main problem for me is that the keyboard is not working there, so I can
not press 'y' to say boot it anyway.
This is completely unbelievable. Somebody not competent enough broke the
Debian RAID setup for Ubuntu years ago, and the issue has still not
been resolved?
Man, fix up Ubuntu mdadm to issue proper notifications (bug #535417).
Get rid of that bogus boot_degraded question (bug #539597), and
Since it has been a year, and nobody has even been assigned to this
problem, please consider marking this severe.
boot_required is simply deduced by interrogating fstab to find mount
points which are 'required' to boot (i.e. have a pass value that is not 0).
The script logs any (and all) of the strings which it thinks might
enable it to identify, at boot time, a device which might subsequently be
required.
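A rough illustration of that fstab interrogation (the field positions are
standard fstab; the awk condition and the UUID resolution are illustrative,
not the actual hook code):

  # list fstab entries whose fsck pass field (6th column) is non-zero,
  # i.e. mounts treated here as required to boot, and resolve
  # UUID=/LABEL= specs to device nodes where possible
  awk '$1 !~ /^#/ && NF >= 6 && $6 != "0" { print $1 }' /etc/fstab |
  while read spec; do
      case "$spec" in
          UUID=*|LABEL=*) blkid -t "$spec" -o device || echo "$spec" ;;
          *)              echo "$spec" ;;
      esac
  done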
OK, I think I've fixed this. I've changed the
/usr/share/initramfs-tools/hooks/mdadm hook, so that as well as putting
mdadm.conf in the /etc/mdadm folder when generating an initramfs, it also
generates an /etc/mdadm/boot_required file, which is a list of devices and
UUIDs which the hook deduced
diff -u -r initramfs-tools/hooks/mdadm initramfs-tools-mdadm-changes/hooks/mdadm
--- initramfs-tools/hooks/mdadm 2012-08-04 07:54:25.0 +0100
+++ initramfs-tools-mdadm-changes/hooks/mdadm 2012-10-03 16:24:49.0
+0100
@@ -61,12 +61,112 @@
done
# copy the mdadm configuration
Please explain how you deduce the boot_required list in the case of LVM on
top of crypt on top of an mdadm RAID device?
@ Comment 59 Leonardo Silva Amaral (leleobhz)
It looks like you are using external metadata (imsm) instead of the Linux
RAID format. Booting of imsm RAID volumes with mdadm is currently not
supported. You can/should use dmraid instead, until mdadm/imsm support
is integrated into initramfs-tools. There
The current design is:
* if a disk is really not present and the array is truly degraded, we should
not boot unless boot_degraded is true
* if disks are actually present and healthy, yet the array is detected
as degraded, please file a new bug about your case; it should be
fixed.
The reason is
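For reference, the boot_degraded setting mentioned in that design can normally
be made persistent instead of being answered at every boot. A sketch from
memory rather than from this thread; verify the paths on your release:

  # re-ask the debconf question that controls degraded boots
  sudo dpkg-reconfigure mdadm

  # or set the flag directly and rebuild the initramfs
  echo "BOOT_DEGRADED=true" | sudo tee /etc/initramfs-tools/conf.d/mdadm
  sudo update-initramfs -u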
A quick hint for those that didn't know (like me): at the (initramfs)
prompt, if the failed RAID array isn't needed to boot, then you can
simply type 'return 0' to have it continue normally with the boot.
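In other words, the interaction looks roughly like this (illustrative;
whether 'return 0' or a plain 'exit' resumes the boot depends on how the
shell was spawned):

  (initramfs) cat /proc/mdstat   # confirm the broken array is not needed for /
  (initramfs) return 0           # hand control back to the init script
  # if 'return 0' is rejected, 'exit' sometimes has the same effect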
I submitted this in reference to 990913, but now think that relates to
the mdadm/udev race condition discussed in 917250. My issue is that
attached, degraded devices which should not be required are
preventing my setup from booting:-
I have a similar problem, but suspect the issue I'm
I thought it helpful to link to
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/917250 which seems
related, although it pertains specifically to race conditions leading to
disks being erroneously marked degraded rather than legitimately degraded,
not to an irrelevant disk causing an inappropriate boot
Sorry, bad link; try
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/990913
My case happened with 12.04: I almost never reboot my machine, and since
I installed it I never rebooted until now. I fell into a degraded
state without the possibility to boot in degraded mode, even using Super
Grub Disk to set bootdegraded=true. The array starts in read-only mode
and nothing in
try mdadm-3.2.5-1ubuntu from PPA. It seems to help me
On 16/07/12 08:40, vak wrote:
try mdadm-3.2.5-1ubuntu from PPA. It seems to help me
Please do not use PPAs, but wait for a fix to be applied in
precise-proposed.
--
Regards,
Dmitrijs.
This is so stupid, it's just the epitome of asshole behaviour.
RAID controller cards go out ALL THE TIME.
This TAKES OUT MULTIPLE DRIVES without a doubt.
Kernel updates frequently don't work with older RAID controllers that were
working perfectly for years.
Stuff breaks, particularly when it's rebooted
I reinstalled, rebooted, and tried the fix given here:
http://ubuntuforums.org/showpost.php?p=11388915&postcount=18
The shit doesn't work. The server still comes up wrong. These kinds of
cavalier games and changes are life-changing and extremely damaging.
After being fucked by the 10.04 LTS fiasco with
Whoever marked this bug:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/917520 as a
duplicate is mistaken.
In Ubuntu 12.04 there is a major regression in mdadm where a delay in
the registration of a RAID component (by udev) during init results in an
array being MISTAKENLY tagged as corrupt. This
I'm experiencing the same problem. I've recently installed a fresh copy
of 12.04 on my home server and created a RAID 1 array with two identical
disks using the disk utility for data storage (boot, system, home
directories on other separate disks).
To try out my RAID array, I shut down and remove
I'm experiencing the same issue: having 2 disks in soft RAID, when one fails
and I try to replace it and boot into normal mode, I'm unable to input 'y'
after the question about booting into degraded mode (strangely enough, I have
boot degraded set to true!).
But I'm able to boot the system and rebuild the RAID when
I agree. The fact that my device has been renumbered is not related to this
bug, and also not directly related to the upgrade.
If I understand correctly, you are complaining about the fact that Ubuntu won't
boot with a degraded array and doesn't allow any possibility to continue the
boot process without mounting the
Oh, so you also have a failed disk?
Not really. I have multiple disks, and as they were renumbered, mdadm tries to
assemble the arrays with the wrong disks, so it can't.
The disks are not marked as faulty, but the arrays are not assembled.
I think mdadm could not do anything more, as the whole system is read-only at
this stage. So when I could
Hi,
In my experience, one device used in an md array was renumbered after
upgrading from Oneiric to Precise.
At the first reboot, I got the message 'Continue boot y/N' but couldn't enter
anything.
After a few seconds, I got a busybox/initramfs prompt on TTY1, but without any
further
draco, you seem to misunderstand the purpose of this bug. It deals with
booting with a failed disk, not problems upgrading distributions.
ApportVersion: 2.0.1-0ubuntu7
Architecture: amd64
DistroRelease: Ubuntu 12.04
MDadmExamine.dev.sda:
/dev/sda:
MBR Magic : aa55
Partition[1] : 65496942 sectors at 63 (type 05)
Partition[2] : 325219860 sectors at 65497005 (type fd)
MDadmExamine.dev.sda2:
/dev/sda2:
Had an extremely frustrating experience with this bug while attempting
to re-install a Debian system with Ubuntu 12.02 x86_64, and while trying
to debug 11.10. The main issue was that, by the time I was dropped to a
shell, the prompt asking if I wanted to boot degraded was long gone
from my
So far, everything I've read beyond the original description shows mdadm
working as intended.
The question of whether or not to boot degraded is not just about the
root filesystem. The idea is to be very careful and not boot with *any*
disks in a degraded array, as we want to avoid the
Clint, the issue is that it does this regardless of whether the device is
a boot device or not, and this makes the system unbootable if you have 1
disk from a RAID 5 array attached to it.
1. Set boot degraded to false; result: you get a prompt asking if you want to
try booting degraded.
1.a Answer no,
I have one HDD with 11.10 installed on it, and it boots fine.
If I now connect a disk which is part of a RAID, the boot fails with
mdadm complaining about a degraded root.
This is not satisfying, as the RAID is not needed to boot the system, and
the system hangs during boot when it was uncontrolled
Passing 'bootdegraded=true' is no solution;
so far I have found no way to boot my system.
As long as a disk which is part of an incomplete RAID is connected, I will
always end up in a BusyBox recovery shell, one way or another.
This is really annoying and can by far not be an intended feature… it
A potential solution to one of the two problems affecting the RAID boot
has been posted on one of the discussion threads on the topic:
http://ubuntuforums.org/showpost.php?p=11388915&postcount=18 I rolled
back to 11.04 on the box with this issue, and since it is a race-type
condition, getting it
I am attaching the dmesg output from a failed boot. Unfortunately the
mdadm messages don't go to the dmesg log file; I have not been able to
find them recorded anywhere but on the screen at this point. The boot
dropped into the shell right after the line:
[4.201060] md0: unknown partition
A follow-up on that comment: I am using the bootdegraded=true parameter
and each of the two arrays has 5 disks. If I get a full boot, all 5 disks
come up; it seems like all 10 have just not responded by the time this
test comes in and decides it is tired of waiting. I will post the dmesg of
this in a
Looks like I didn't grab the output correctly. I can run the test again if
it would be helpful, but it is similar to the last dmesg, only with
different drive counts and only a single array; much of it is in the
screen cap.
I was able to reproduce one of the issues in a VMware instance. I am
using Ubuntu 11.10 Server; both 64-bit and 32-bit work fine. Desktop may
work as well, but I have seen several reports of the initramfs shell
getting blocked by the desktop UI, so I thought it was simpler to just
start without
I also went back and set up the same test on 11.04 and did not
experience any issue with boot with only 1 of 3 drives.
OK, mdadm is the package for software RAID; dmraid is for fake hardware
RAID support. Reassigning to the correct package. Also, I tried to
reproduce the error in a VM and don't get any prompt about continuing to
boot; I just get dropped to the busybox shell, just like in previous
Ubuntu
The message is on the second line visible in the screenshot. Are you
testing this with 1 disk of a 3-disk RAID 5? I am able to reproduce this
reliably in a VMware Fusion instance: if one of the three RAID disks is
attached, the system will not boot; it drops into initramfs if you say N to
start degraded, and if
Just reproduced again, better screen grab of the message is attached.
** Attachment added: Screen shot 2011-10-22 at 10.47.59 PM.png
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/872220/+attachment/2567447/+files/Screen%20shot%202011-10-22%20at%2010.47.59%20PM.png
Some more info, for me at least: just attaching the disk to a Fusion
instance of 11.10 makes it unbootable; the only way I have found
to get it bootable is to remove the disk. Like I said, the effect would
seem to be that it is impossible to put in a disk that has at some time in the
past been a
I am having the same issue, with some additional information: I have 2
arrays. When one or the other is connected, I am getting phantom errors
reported on every boot; the boot degraded option gets me past this, but
when both arrays are connected the system is sometimes able to boot and
sometimes it
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: dmraid (Ubuntu)
Status: New => Confirmed
I don't think any such feature has been added. I certainly can't find
it anywhere. Can you post a screen shot? What normally happens if you
don't use the degraded boot argument is that the array will fail to
activate. If you have filesystems that can't be mounted at boot, the
system asks if
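If you do end up at the BusyBox prompt with an array that failed to activate,
the usual manual recovery is roughly the following (a sketch assuming the root
array is /dev/md0; adjust the device names to your layout):

  (initramfs) mdadm --assemble --scan   # assemble whatever mdadm.conf describes
  (initramfs) mdadm --run /dev/md0      # force-start the array even though it is degraded
  (initramfs) exit                      # let the init script carry on and mount root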
In my case, it is a mirrored setup, and one disk is missing. So it's
fully functional, but degraded.
This is incorrectly detecting a broken softraid system. Not only does it
incorrectly flag a perfectly good working RAID setup, it does so without
disabling the blank screen that is displayed on a standard boot, so it appears
that the system has hung.
Configuring the system to boot with a degraded array
Bill: I'm not sure that you are describing the same problem; I don't
think John's raid was 'a perfectly good working raid' - I think it was a
broken one (please correct me John if you believe your RAID was fully
intact).
Bill: If it is indeed a separate issue with your RAID incorrectly being
You really want to be able to survive when one of your drives dies; it's
not a good time to have to rescue it - so marking high; it'll be a
severe impact to a hopefully small percentage
** Package changed: ubuntu => dmraid (Ubuntu)
** Changed in: dmraid (Ubuntu)
Importance: Undecided => High
As a workaround, you can add bootdegraded=true to the kernel boot
options.
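For a one-off boot the option can be typed at the GRUB menu; to make it stick
across reboots, something like the following works on these releases (a sketch;
check the existing contents of /etc/default/grub first):

  # one-off: at the GRUB menu press 'e', append bootdegraded=true to the
  # line starting with 'linux', then boot with Ctrl+X or F10

  # persistent: add it to the default kernel command line and regenerate grub.cfg
  sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 bootdegraded=true"/' /etc/default/grub
  sudo update-grub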