I just upgraded an Ubuntu Server 10.04 LTS to 12.04 LTS and was affected
by this bug.
# mdadm --version
mdadm - v3.2.5 - 18th May 2012
# uname -a
Linux srv001 3.2.0-70-generic-pae #105-Ubuntu SMP ...
My system is a Dell PowerEdge 2900 with four 250 GB WD hard drives.
Three of them are
Hi all, I have the same problem with a fresh installation of Ubuntu
12.04.3 server, but I found a workaround that seems to work without
reinstallation.
Problem:
I first installed the OS on 2 hard drives configured as RAID1 with swap (md0) and /
(md1) partitions.
Then I added 2 more disks and via
I started experiencing this problem on 12.04; it persisted after upgrading to
12.10 and even persisted after a clean install (on an empty partition) of
13.04. It also happens when I just boot from a USB stick with 13.04.
This is always with the same partitions and I'm guessing that might be
Tom Mercelis,
I have wiped the array all the way down to the individual disk partition
tables multiple times and the issue still persisted for me. I have
finally solved this problem by switching to CentOS last night. Looks
like that's going to be the only solution to this for a while, since
this
Looks like exactly the same issue affects 13.04 with 3.8 kernels. I
have a system with the OS installed on an SSD, plus a 2-disk 1 TB
RAID1 with data. After upgrading to 13.04 I started to get a
message about a failed RAID on every boot, as described in this ticket.
The RAID in fact was not
I was experiencing this issue on 12.04. I rebuilt the system with 12.10
when that came out. The issue persisted throughout all updates during
the period between release of 12.10 and 13.04. I did a dist-upgrade to
13.04 and the issue still persists.
AMD64 platform, server distribution. I am using the
I have also noticed this bug on an Ubuntu 12.04 server. The workaround I've
come up with is:
* install the backported Quantal kernel (3.5.x) by installing the
linux-generic-lts-quantal package
* add the following patch to /usr/share/initramfs-tools/scripts/mdadm-functions:
---
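The patch itself is cut off above. Purely as an illustration (not that poster's
actual patch), the general idea that has been reported to help is to wait for
udev to settle and retry a non-degraded assembly before giving up; a sketch of
such a shell fragment, with the function name chosen here only for illustration:

# hypothetical helper for the initramfs mdadm script; the name is illustrative
wait_for_md_devices()
{
        # give slow controllers (e.g. SAS HBAs) time to present their disks
        # before mdadm decides the arrays are degraded
        for try in 1 2 3 4 5 6 7 8 9 10; do
                udevadm settle --timeout=30 || true
                # attempt a non-degraded assembly; stop as soon as it succeeds
                if mdadm --assemble --scan --no-degraded >/dev/null 2>&1; then
                        return 0
                fi
                sleep 3
        done
        return 1
}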
I added the AUTO -all to my mdadm.conf and updated the initramfs. The
result is that no array is assembled automatically at startup and I
enter the recovery shell. Then I perform an mdadm -A -s and all arrays
are assembled correctly. This assembly takes about 2-3 seconds and
correctly assembles
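For reference, the configuration described above amounts to roughly the
following (a sketch, not an exact copy of that poster's files; the ARRAY line
is a placeholder):

# /etc/mdadm/mdadm.conf
# disable automatic assembly by scanning; keep the existing ARRAY lines
AUTO -all
ARRAY /dev/md0 UUID=...

# rebuild the initramfs so the edited mdadm.conf is used at boot
update-initramfs -u

# at the recovery shell, assemble everything by hand
mdadm -A -s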
That's correct, one of the 3 devices is another array. But this setup
worked in previous Ubuntu releases. If I remember correctly I created
the array in Ubuntu 10.04, and it has since worked in 11.04 and 11.10.
It's always been the same array; I know that at least once it was a
clean new install
FWIW I found a workaround. First I thought it might involve containers,
but I couldn't get those working at all, so I cheated.
e.g. (compressed process; it took place more incrementally):
mdadm --create /dev/md/vol0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md/vol1 --level=0
A couple of minor elaborations to the above:
update-initramfs -u
so the edited mdadm.conf gets into the initramfs
and in /etc/rc.local:
mdadm --assemble --config=partitions --no-degraded --scan
so it doesn't try to start the array if it would be degraded, thus
probably requiring a rebuild - as
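Put together, the rc.local part of that workaround would look something like
this (a sketch assuming an otherwise stock /etc/rc.local):

#!/bin/sh -e
#
# /etc/rc.local - assemble the data arrays late in boot, and only if they
# can be brought up non-degraded ('|| true' keeps a failed assembly from
# aborting the script, since rc.local runs with -e)
mdadm --assemble --config=partitions --no-degraded --scan || true
exit 0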
Just having the arrays listed in mdadm.conf (and updating the initramfs)
is not enough without modifying rc.local?
Fairly sure I tried that, and no, it didn't work. Either the order of
the arrays in mdadm.conf isn't significant, or they initialise in too
quick succession and the one that depends on the others is started too
early, and thus fails. That's how I ended up here. :-)
Thinking about it, when I tried that before I may have missed two
elements, both of which were probably necessary: the AUTO -all line to
prevent auto-assembly by scanning, and the update-initramfs step, as I
learnt about both of those things after I'd given up trying to do it
that way and was
Of course, see attachment.
** Attachment added: md5 info
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/990913/+attachment/3448634/+files/mdadm_blkid_output.txt
You appear to have a RAID5 built out of two disks and another RAID
device, which I think is the problem.
No, adding rootdelay=180 doesn't solve the problem. I have now
configured the system not to start with a degraded array, which results
in the boot process being interrupted by an initramfs/rescue shell. In
that shell I stop the RAID (mdadm --stop /dev/md5) and re-assemble the
arrays: mdadm -A
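In other words, the sequence at the rescue shell is roughly the following
(a sketch; /dev/md5 is the array that comes up degraded on that machine, and
the exact assemble options are cut off above):

# stop the array that was started degraded
mdadm --stop /dev/md5
# re-assemble all arrays now that every disk is present
mdadm -A -s
# then leave the initramfs shell and let the boot continue
exit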
Can you post the output of blkid and mdadm -D /dev/md5?
This would be an issue with the mdadm init scripts, not the kernel.
Does adding rootdelay=180 solve the problem for everyone?
** Package changed: linux (Ubuntu) => mdadm (Ubuntu)
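For anyone who wants to test the rootdelay suggestion, this is the usual way to
add it on an Ubuntu system with GRUB 2 (180 is simply the value suggested
above):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=180"

# regenerate the grub configuration afterwards
sudo update-grub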
What is the status of this bug supposed to be in Ubuntu 12.04 with
kernel 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012
x86_64?
I updated to this version (coming from 11.10) last weekend, and now my
nested raid always starts in degraded mode.
What I can reproduce every time:
Power
Sorry, the previous post wasn't complete. Attached to this post: the dmesg
output. It seems md finds md4 before it starts with md5. Yet
/proc/mdstat reports another order, and most importantly, md5 is started
in degraded mode.
What I tried:
- added rootdelay=6 as a kernel option in grub
- added this
Have you tested with mdadm from precise-updates / quantal? And can you still
reproduce this issue?
2012-08-13 had mdadm 3.2.5-1ubuntu0.2
It now waits for udev to settle before dropping to degraded arrays.
For the past 2 months or so I haven't had a degraded array as a result
of a reboot. In this time I have rebooted my computer at least 25 times
for various reasons. If there was an update, I didn't notice it in the
list of updates from apt. Though, for all intents and purposes, my issues
have gone
I have a similar problem, but I suspect the issue I'm having means it must
be either down to code in the kernel or options unique to my
Ubuntu/kernel config. /proc/version reports: 3.2.0-30-generic.
In my case, I have 5 disks in my system; 4 are on a backplane,
connected directly to the
I thought it helpful to link to
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/872220 which seems
related, although it pertains specifically to inappropriate boot failures
due to irrelevant disks being degraded, rather than race conditions
causing disks to be erroneously degraded.
It's
+1
graemec@tosser:~$ uname -a
Linux tosser 3.0.0-12-generic-pae #20-Ubuntu SMP Fri Oct 7 16:37:17 UTC 2011
i686 athlon i386 GNU/Linux
I have the same issue.
The problem is that /usr/share/initramfs-tools/scripts/mdadm-functions
is called before all drives have been initialized.
I have 6 drives in the RAID array. 2 of them are on onboard SATA and 4 are on an
mpt2sas (SAS2008) card.
Apparently mdadm tries to initialize the array
I'm having this issue too. RAID 5 across 4 disks. All OS partitions
are on a separate USB drive. Every boot since upgrading to Precise will
hang with a degraded RAID and drop to a root shell. I have to rebuild the
array to get it to boot again. I don't reboot that often, so I sometimes
forget that
The dep method from Andreas Heinze (andreas-heinze) didn't help me. As for
the list method, I'm not sure what modules would be right for my case.
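For context, the 'dep' and 'list' methods refer to the MODULES setting used
when building the initramfs; a sketch of the list variant (the module names
below are only placeholders, pick the drivers your controllers actually need,
e.g. from lsmod):

# /etc/initramfs-tools/initramfs.conf
MODULES=list

# /etc/initramfs-tools/modules - placeholder module names
ahci
mpt2sas

# rebuild the initramfs afterwards
update-initramfs -u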
Precise is using a 3.2.0 kernel. There is a known MD bug that affects
some 3.2.x and 3.3.x kernels, which seems like it might be relevant to
this problem. See:
http://www.spinics.net/lists/raid/msg39004.html
and the rest of that thread. Note the mention of possible races in
scripts.
Same for me.
System: Supermicro X9SCL-F, CPU Xeon E31220, RAM 16 GB ECC, 2x Adaptec 1430SA
with 7x WD20EARS in md-RAID5.
Nearly every reboot ends in a degraded RAID with an initramfs prompt. Resuming
the boot appears to work fine.
It seems to me to be a timing problem loading the needed modules.
So what helps for my system is:
Since my last comment, an updated kernel arrived via Update Manager.
Its changelog included the following:
* md: fix possible corruption of array metadata on shutdown.
- LP: #992038
This seems possibly relevant. I updated, and have now rebooted several
times. The RAID degradation is
I have now installed Precise on my system. (I had intended to install
as a multiboot, along with the existing Oneiric, but apparently the
alternate installer could not recognize my existing /boot RAID1
partition, so now I can't boot Oneiric. But that's another story...)
Note that the title of
I am having similar problems. I am running Oneiric. I am NOT using
LUKS or LVM.
Symptoms vary in severity a lot. Sometimes it simply drops a spare, and
it's listed in palimpsest as not attached. One click of the button
and it's reattached, and shown as spare.
But then sometimes it gets
I am also having the same issue, mdadm RAID 5 (no LVM). Mine is compounded by
the fact that the screen is entirely purple... if I reboot while booting and it
asks me to choose a kernel, I see the 'boot degraded y/n' screen and then it
drops to the busybox shell. Interacting with the purple screen
(Oops, I apparently hit the wrong key... continuing the previous
comment)
...shut down with Alt-SysRq REISUB. This has no effect whatsoever. The
screen doesn't change; the drive activity light does nothing.
Finally, after stewing for a while longer, I hold down the power switch
until I hear
I am having the same issue, mdadm RAID 5 degraded on almost every reboot
(90%). I am free to test anything you would like.
I am having the same symptoms, though if the issue is related it is not
described sufficiently here.
What is happening is that the kernel initiates the md components, and then the
init scripts continue before all controllers and disks are up. This happens
about 5 seconds into the initramfs boot
Can you test the following kernel:
http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.3-precise/
** Changed in: linux (Ubuntu)
Status: Triaged => Confirmed
Also, was there a previous kernel version that did not have this issue?
Maybe you can test some prior kernels, such as the Oneiric kernel?
This issue appears to be an upstream bug, since you tested the latest
upstream kernel. Would it be possible for you to open an upstream bug
report at bugzilla.kernel.org [1]? That will allow the upstream
developers to examine the issue, and may provide a quicker resolution to
the bug.
If you
Not sure how this would be an upstream (kernel) bug. My tests do not
indicate that.
I just now tried a Gentoo live CD and the drives come up fine. I do have
to manually run # mdadm -As at the prompt, but that is because the live disk
does not autostart the RAID.
The Gentoo live disk is using kernel
** Package changed: ubuntu => initramfs-tools (Ubuntu)
Do you know if this issue happened in a previous version of Ubuntu, or
is this a new issue?
Would it be possible for you to test the latest upstream kernel? Refer
to https://wiki.ubuntu.com/KernelMainlineBuilds . Please test the latest
v3.4 kernel [1] (not a kernel in the daily directory). Once
Installed the linux-
image-3.4.0-030400rc4-generic_3.4.0-030400rc4.201204230908_amd64.deb
upstream kernel and still have the same issue.
** Tags removed: needs-upstream-testing
** Tags added: kernel-bug-exists-upstream
Today I found a way to bypass this problem, but I know there is still a
problem. On the second R512 I changed the partitioning for the RAID drives
and root / boot. It was something like this (it now boots fine every time
on this box):
/dev/sda1 /boot ext2
/dev/sda2 / ext4
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
I too am experiencing this bug on 12.04 with a SuperMicro LSI2008 card.
This was working perfectly on Ubuntu 10.x and 11.10; it is only with the
upgrade to 12.04 that this problem has occurred.