The LP Janitor did not change the status of the initramfs-tools task. Changing it
to Invalid. If this is wrong, please change it to Fix Released, etc.
Thanks.
** Changed in: initramfs-tools (Ubuntu)
Status: New => Invalid
--
[Expired for initramfs-tools (Ubuntu Feisty) because there has been no
activity for 60 days.]
--
--
There is a problem with the solution outlined by Scott James Remnant above
(https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/75681/comments/84).
What happens when one of the raid devices has failed and isn't getting
detected? Take Scott's example of a raid with sda1 and sdb1:
So, there is something wrong with that patch. Actually it seems to be
working great, but when I disconnect a drive to fail it, it boots up
immediately instead of trying mdadm after the timeout. So I'm guessing
that the mdadm script is getting called without the from-udev parameter
somewhere else.
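(For context, the from-udev guard being discussed works roughly like this - a
sketch, not the verbatim Feisty script; MD_DEGRADED_ARGS is the variable
referred to later in this thread:

    # sketch of /usr/share/initramfs-tools/scripts/local-top/mdadm
    MD_DEGRADED_ARGS=""
    if [ "$1" = "from-udev" ]; then
        # called from a udev rule: refuse to start degraded arrays, so a
        # later retry after the timeout can still pick up slow disks
        MD_DEGRADED_ARGS="--no-degraded"
    fi
    /sbin/mdadm --assemble --scan --run $MD_DEGRADED_ARGS

If the script is also reachable without the from-udev argument, that call
would start a degraded array immediately, which matches the behaviour
described above.)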
den [EMAIL PROTECTED] writes:
Not everything is OK as I expected. So I use /sbin/udevsettle
--timeout=10 at the end of the
/etc/initramfs-tools/scripts/init-premount/udev script! Sometimes the
system boots fine, but once I have rebooted, 'cat /proc/mdstat' gives:
Personalities : [raid1]
As said, this
Hello!
Not everything is OK as I expected. So I use /sbin/udevsettle --timeout=10 at
the end of the /etc/initramfs-tools/scripts/init-premount/udev script! Sometimes
the system boots fine, but once I have rebooted, 'cat /proc/mdstat' gives:
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
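(The workaround described above amounts to appending one line to the initramfs
udev script - a sketch, with the surrounding line illustrative:

    # tail of /etc/initramfs-tools/scripts/init-premount/udev
    /sbin/udevtrigger                # existing: request uevents for all devices
    /sbin/udevsettle --timeout=10    # added: wait up to 10s for the event queue to drain

The initramfs then has to be rebuilt with 'update-initramfs -u' for the change
to take effect.)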
I think I can try, but I have put the system into production. If you could
tell me exactly which packages are involved (kernel, initramfs-tools, ...) and
how to get them from the gutsy repository without a full dist-upgrade, I can
check that on that system and report.
--
Thanks Dan!
That's true, I did manage to solve the problem in just the same way.
But why is /sbin/udevsettle --timeout=10 not in the distribution itself?
Perhaps something is wrong with it?
--
den [EMAIL PROTECTED] writes:
But why is /sbin/udevsettle --timeout=10 not in the distribution
itself? Perhaps something is wrong with it?
That's not the right fix. It is a workaround that seems to work on some
machines, though...
Scott, do you think that workaround is worth an upload to
Okay, I uploaded a package with the udevsettle line to my ppa. Add this
to your /etc/apt/sources.list:
deb http://ppa.dogfood.launchpad.net/siretart/ubuntu/ feisty main
then run 'apt-get update && apt-get dist-upgrade'. Please give feedback if
that package lets your system boot.
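(Spelled out, the steps are roughly - assuming sudo:

    echo 'deb http://ppa.dogfood.launchpad.net/siretart/ubuntu/ feisty main' \
        | sudo tee -a /etc/apt/sources.list   # add the test archive
    sudo apt-get update                       # refresh package lists
    sudo apt-get dist-upgrade                 # pull in the test package
)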
** Also affects:
Hello Reinhard!
My system boots fine with your updated initramfs-tools package!
--
So there is still no cure?! The MD_DEGRADED_ARGS workaround helps to
assemble the array only when all of the raid devices are found by udev. But
there is still very BAD behavior: if we unplug one of the raid disks, the
system hangs during the boot process.
PS. Just installed feisty server, and updated.
I was having a similar issue, but a solution in this bug report fixed it
for me:
https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/99439/comments/1
[Quote]
Try adding /sbin/udevsettle --timeout=10 in /usr/share/initramfs-tools/init
before the line:
log_begin_msg Mounting root file
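(In context, the quoted change looks roughly like this - a sketch; the exact
wording of the log_begin_msg line in the Feisty init script is assumed:

    # excerpt from /usr/share/initramfs-tools/init
    /sbin/udevsettle --timeout=10                 # added: let udev finish creating device nodes
    log_begin_msg "Mounting root file system..."  # existing line
)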
I suspect that this is the issue I'm experiencing with my desktop. My
root partition is /dev/md0, but after upgrading to feisty, my system no
longer boots and throws me to busybox. I noticed in busybox that my
sata drives are missing from /dev, which explains why the array isn't
being
On 3/28/07, Ian Jackson [EMAIL PROTECTED] wrote:
Every hard drive has a very similar partitioning scheme:
* boot partition (two installs, each has a backup of its main
/boot, not exactly up to date as I have been too lazy to write a
small script)
* swap partition or small temporary
Did somebody already report a new bug on this?
If not, there are still two other open bugs with the same issue, one of them
new: bug #83231 and bug #102410
--
Hi everyone,
yesterday evening (here in Germany), I also got into trouble using the
latest lvm2 upgrade available for feisty. I upgraded the feisty install
on my Desktop PC and during the upgrade, everything just ran fine. The
PC is equipped with two 160 GB Samsung SATA drives on an Nforce4 SATA
I forgot to mention that I do NOT use any software RAID beneath the
LVM. The LVM just spans my two disks, which are joined together in one
volume group.
--
could you try removing the evms package and regenerating your initramfs?
This fixed the problem for me.
Scott has a fix for evms in the pipe, but feel free to file another bug
to document this issue.
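(That suggestion amounts to - a sketch:

    sudo apt-get remove evms    # drop the evms package and its initramfs hooks
    sudo update-initramfs -u    # regenerate the initramfs without the evms scripts
)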
--
okay, I can try to remove the evms package and rerun update-initramfs this
evening. I'm not sure if evms is installed, but maybe it is ;)
The biggest problem is that the ubuntu installation on my desktop pc was
dapper at first, where I installed a lot of extra software. Three weeks after
the
My Feisty is up to date. I have the same BusyBox problem at boot-up. Can
you tell me how to track down the issue? I don't have raid, but I have an
Adaptec 19160 SCSI controller on which the root partition resides.
--
This isn't fixed for me. I opened a new report (bug #103177) as
requested by Scott James Remnant.
My problem still has the exact same symptoms that I described above.
--
I don't use LVM, yet I am seeing the same problem with software RAID. I
just dist-upgraded, reran update-initramfs just to be sure, and saw
the failure at boot. The package list below confirms that I match the most
recent versions mentioned above:
dpkg -l dmsetup libdevmapper1.02 lvm-common lvm2 mdadm
Manoj: as noted above, please file a new bug
--
Scott James Remnant wrote:
We believe that this problem has been corrected by a series of uploads
today. Please update to ensure you have the following package versions:
dmsetup, libdevmapper1.02 - 1.02.08-1ubuntu6
lvm-common - 1.5.20ubuntu12
lvm2 - 2.02.06-2ubuntu9
mdadm
Exactly the same problem for me here too - no LVM in sight, just md0 and
md1 as /boot and / respectively; fixed by using break=mount and mounting
manually.
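(For reference, that recovery path looks roughly like this, based on this
poster's layout where md1 is /; /root is where initramfs-tools mounts the
real root:

    # append break=mount to the kernel line in grub, then at the
    # (initramfs) prompt:
    mdadm --assemble --scan    # assemble the arrays listed in mdadm.conf
    mount /dev/md1 /root       # md1 holds / in this setup
    exit                       # leave the shell and resume booting
)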
--
We believe that this problem has been corrected by a series of uploads
today. Please update to ensure you have the following package versions:
dmsetup, libdevmapper1.02 - 1.02.08-1ubuntu6
lvm-common - 1.5.20ubuntu12
lvm2 - 2.02.06-2ubuntu9
mdadm - 2.5.6-7ubuntu5 (not applicable
I think the solution is in:
Bug report: 83231
https://launchpad.net/ubuntu/+source/initramfs-tools/+bug/83231
the udevsettle --timeout 10 at the end of
/usr/share/initramfs-tools/scripts/init-premount/udev works too.
--
Aurelien, sorry to ask you to do this tedious test again, but I've
been looking at this logfile and I think I should have told you to use
`' rather than `' when writing the log. Also, I really need to
know more clearly what the fault was and what you did to fix it (if
anything). And while I'm at
Reinhard Tartler writes (Re: [Bug 75681] Re: boot-time race condition
initializing md):
Yes, I tried that, with the effect that reproducibly none of the raid
devices come up at all :(
I find this puzzling. I've double-checked your initramfs again
following other people's comments
octothorp [EMAIL PROTECTED] writes:
I'm very interested in finally testing 105, and the missing
udev_105.orig.tar.gz is a bit of a challenge, but at least there's a
diff.
I'm terribly sorry, I've just uploaded the forgotten orig.tar.gz
--
OK, I really appreciate that you want to fix this and I really want to help,
but this has been driving me nuts.
I can reproduce it *way too* reliably with a normal boot (i.e. it happens 8
times out of 10) but when trying to get a log it suddenly refused to happen :/
I started to think that the
it doesn't upgrade the system. It shows me the following:
Failed to fetch
http://uz.archive.ubuntu.com/ubuntu/dists/edgy-updates/Release.gpg Connection
broken
Failed to fetch http://uz.archive.ubuntu.com/ubuntu/dists/edgy/Release.gpg
Connection broken
Failed to fetch
my setup (taken from a booted system):
[EMAIL PROTECTED]:~$ cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sda5[0] sdb5[1]
1951744 blocks [2/2] [UU]
md0 : active raid1 sda2[0] sdb2[1]
489856 blocks [2/2] [UU]
md3 : active raid0 sda7[0] sdb7[1]
perhaps this is related to
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=416654 ?
--
Reinhard, I tried your Ubuntu deb (105-4), and it brought the bug back
for me consistently. Then I reverted to debian unstable's version
(0.105-4), and now I'm booting consistently correctly again.
This is interesting. Clearly what is causing my bug must be in the
Ubuntu packaging? I've
Jeff250 [EMAIL PROTECTED] writes:
Reinhard, I tried your Ubuntu deb (105-4), and it brought the bug back
for me consistently. Then I reverted to debian unstable's version
(0.105-4), and now I'm booting consistently correctly again.
This is interesting. Clearly what is causing my bug must
Ian Jackson [EMAIL PROTECTED] writes:
Reinhard, you may remember that on irc I asked you to try moving
/usr/share/initramfs-tools/scripts/local-top/mdrun
aside and rebuilding your initramfs. Did you try this in the end ?
Yes, I tried that with the effect that reproducibly none of the raid
Here is my output for udevd. I needed several boots to get it, so I hope
it helps ;)
** Attachment added: output of udevd --verbose
http://librarian.launchpad.net/7031494/udevd.out
--
Aurelien Naldi writes ([Bug 75681] Re: boot-time race condition initializing
md):
Here is my output for udevd. I needed several boots to get it, so I hope
it helps ;)
Thanks, that's going to help a lot I think. Can you please put up a
copy of your initramfs (/initrd.img) too ? It'll probably
On 3/28/07, Ian Jackson [EMAIL PROTECTED] wrote:
Aurelien Naldi writes ([Bug 75681] Re: boot-time race condition initializing
md):
Here is my output for udevd. I needed several boots to get it, so I hope
it helps ;)
Thanks, that's going to help a lot I think. Can you please put up a
copy
Aurelien Naldi writes (Re: [Bug 75681] Re: boot-time race condition
initializing md):
I am not on this system right now so I can't upload my initramfs yet,
I will probably do it later tonight.
Thanks.
Every hard drive has a very similar partitioning scheme:
* boot partition (two installs
Aurelien Naldi writes (Re: [Bug 75681] Re: boot-time race condition
initializing md):
With older versions (of mdadm, I think) the RAID was assembled, but
degraded, or kind of assembled but not startable (when only 1 or 2 disks
were present at first). A --no-degraded option was added in the
mdadm
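(As a sketch of what --no-degraded changes - illustrative commands, not the
verbatim patch:

    # first attempt, e.g. triggered from udev: only start complete arrays
    mdadm --assemble --scan --run --no-degraded
    # fallback after a timeout: allow a degraded start so the system can boot
    mdadm --assemble --scan --run
)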
One more comment: I have tested some other suggested workarounds:
- removing the mdrun script did not help (the first thing this script
does is check for the presence of the mdadm one and then exit)
I think you must have a newer version than the one I was looking at.
He's right
On 3/28/07, Ian Jackson [EMAIL PROTECTED] wrote:
Aurelien Naldi writes (Re: [Bug 75681] Re: boot-time race condition
initializing md):
With older versions (of mdadm, I think) the RAID was assembled, but
degraded, or kind of assembled but not startable (when only 1 or 2 disks
were present
Oliver Brakmann [EMAIL PROTECTED] writes:
Same here. I added a sleep 3 to the mdadm script and my system now
consistently boots. Previously, I got an error message from mdadm that
it couldn't find the devices that made up my RAID.
Ditto here, running up-to-date Feisty on amd64. I use
my initramfs can be found here:
http://gin.univ-mrs.fr/GINsim/download/initrd.img-2.6.20-13-generic
NOTE: it is not the one I booted from this morning, but I think I reverted
all changes and built it again, so it should be pretty similar ;)
--
Aurelien Naldi writes (Re: [Bug 75681] Re: boot-time race condition
initializing md):
On 3/28/07, Ian Jackson [EMAIL PROTECTED] wrote:
OK, so the main symptom in the log that I'm looking at was that you
got an initramfs prompt ?
a normal boot would have given me a busybox yes, but I added
On Wednesday 28 March 2007, at 22:55, Ian Jackson wrote:
I see. Err, actually, I don't see. In what way was the assembly of
the raid incorrect ? You say it wasn't degraded. Was it assembled at
all ? Was it half-assembled ?
My memory does not serve me right, sorry!
In previous
The fact that I have been able to boot twice since yesterday's updates
is tacit confirmation that something got either mitigated or fixed, and
I believe it to have been a bit of a reorganization of the scripting
ubuntu uses to help udev initialize things.
I noticed an initramfs-tools update just
I'm very interested in finally testing 105, and the missing
udev_105.orig.tar.gz is a bit of a challenge, but at least there's a
diff.
As I recall, from my initial attempt at building a deb for 105, there
was superficially only one reject. I was just unsure as to how to
properly resolve it.
--
Reinhard, you may remember that on irc I asked you to try moving
/usr/share/initramfs-tools/scripts/local-top/mdrun
aside and rebuilding your initramfs. Did you try this in the end ?
(Please move it somewhere not under /usr/share/initramfs-tools/.)
My supposition is that it is this script which
My bugreport might be related to this one, Bug #95280.
(https://launchpad.net/ubuntu/+bug/95280)
--
I have just tested upgrading to the debian udev package and (besides being
painful) it completely broke my system.
I think that people here are experiencing two different bugs, as _for_me_
/dev/sd* files are created normally, just with a few seconds of delay, leaving
mdadm blocked after trying
in order to verify the assumption that a new udev upstream version
would/could fix this issue, I packaged the newer upstream version 105
for edgy. You can find the source package here:
http://siretart.tauware.de/udev-edgy-test/udev_105-0ubuntu1.dsc.
To install it: `dget -x
Reinhard Tartler, am I correct in assuming that you forgot to upload
udev_105.orig.tar.gz? Otherwise, where are we expected to get it?
Thanks.
To mirror others' sentiments, I do suspect that there are a plurality of
bugs with similar symptoms being voiced here.
--
I tried upgrading to 0.105-3 from debian unstable, and it fixed this bug
for me (I used prevu to accomplish this). Devices for my drives are
now created correctly, and /dev/md0 is now created and mounted correctly
at boot-up. Unfortunately, as a side-effect, network-manager is no
longer seeing
I doubt that this is the ideal place to say this, but in case anyone
else goes down this path, getting my wireless working again involved
copying /lib/udev/firmware-helper from the old feisty udev package to
the same location once the new udev from debian unstable is installed.
Apparently the
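(One way to do that copy without reinstalling the old package - a sketch; the
.deb filename and version are illustrative:

    # extract firmware-helper from the old feisty udev .deb
    dpkg-deb --fsys-tarfile udev_103-0ubuntu15_i386.deb \
        | tar -xOf - ./lib/udev/firmware-helper > /lib/udev/firmware-helper
    chmod +x /lib/udev/firmware-helper
)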
after adding a 'sleep 5' after udevtrigger, I noticed that mdadm sometimes
segfaults on my machine, with the result that not all volumes start
up. Starting /scripts/local-top/mdadm from-udev manually brought all 4
raid volumes up, however.
even in the cases where all 4 volumes do come up as
** Changed in: mdadm (Ubuntu Feisty)
Target: 7.04-beta => ubuntu-7.04
--
I also confirm this problem on a non-SATA system.
On booting it dumps me into a busybox. Like other users have written, I do
not see any hard drive devices in /dev - the raid array could not be created.
I also tried the solution above (adding various modules to
/etc/initramfs-tools/modules) but it
-11 and -12 are working for me now (3 SATA drives configured as RAID5 on
a DG965 Intel motherboard), but one thing I did differently that *may*
be relevant is that I enabled the menu and added a 15-second delay in
/boot/grub/menu.lst. I did this so that I have time to change to an
earlier kernel
Additional information that may be interesting for getting the problem solved:
because I use raid1 with LVM, Ubuntu uses lilo as the bootloader. So I don't
think it makes a difference whether grub or lilo is used.
Furthermore, I already used my system with edgy - without additional
delay or something
I guess I'll give it a shot, but the fact that linux does not rely on the
BIOS for SATA or IDE detection and initialization, and the fact that
different versions of software have given me differing results would suggest
your problem might be different, maybe power related (spinup)?
I cannot draw
FWIW, I have the problem, exactly as described, with an IDE-only system.
I attempted this with both an Edgy-Feisty upgrade (mid-February) and
a fresh install of Herd5.
I'm running straight RAID1 for all my volumes:
md0 - hda1/hdc1 - /boot
md1 - hda5/hdc5 - /
md2 - hda6/hdc6 - /usr
md3 -
Just wanted to confirm that the -11 kernel update did not fix the
problem for my black macbook. The boot hang first started with -10 and
-9 still works fine.
As described previously, the boot stops with ATA errors and dumps me
into an initramfs shell.
I hope this is fixed soon :(
--
I also still have the issue. Intel QX6700 on a Gigabyte GA-965P-DS3.
On 3/17/07, John Mark [EMAIL PROTECTED] wrote:
Just wanted to confirm that the -11 kernel update did not fix the
problem for my black macbook. The boot hang first started with -10 and
-9 still works fine.
As described
Just a note that 2.6.20-11-server appears to have solved this issue for
me. The server is booting normally after today's update.
--
Huh. I'll try.
On 3/16/07, Eamonn Sullivan [EMAIL PROTECTED] wrote:
Just a note that 2.6.20-11-server appears to have solved this issue for
me. The server is booting normally after today's update.
--
I got hit with something that sounds very similar to this when upgrading
to the 2.6.20-10-server kernel. My system works fine on -9. I ended up
stuck in busybox with no mounted drives. I'm using a DG965 Intel
motherboard with three SATA hard disks. The following are details on my
RAID5 setup and
pjwigan writes ([Bug 75681] Re: boot-time race condition initializing md):
udevd-event[2029]: run_program: '/sbin/modprobe' abnormal exit
I think this is probably a separate problem. Are you using LILO ?
Can you please email me your lilo.conf ? (Don't attach it to the bug
report since I want
display exactly the same text until X starts.
--- Ian Jackson [EMAIL PROTECTED] wrote:
pjwigan writes ([Bug 75681] Re: boot-time race condition initializing md):
udevd-event[2029]: run_program: '/sbin/modprobe' abnormal exit
I think this is probably a separate problem. Are you using
He has already discovered that his problem likely has a different bug
already in the system.
On 3/14/07, Ian Jackson [EMAIL PROTECTED] wrote:
pjwigan writes ([Bug 75681] Re: boot-time race condition initializing
md):
udevd-event[2029]: run_program: '/sbin/modprobe' abnormal exit
I think
I suggest breaking in and sh -x'ing the script that loads the modules,
tacking -v onto any modprobe lines, and capturing kernel messages
using a serial console or something.
You'll need to identify, since there's a hard failure at a consistent
location, where it's failing.
Probably should
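(A sketch of that debugging session at the (initramfs) prompt; the module
name is just an example:

    sh -x /scripts/local-top/mdadm from-udev   # trace the md assembly script
    modprobe -v md_mod                         # -v shows what modprobe actually runs
    # kernel messages can be captured over a serial console by booting with
    # e.g. console=ttyS0,115200n8 on the kernel command line
)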
** Changed in: mdadm (Ubuntu Feisty)
Target: None => 7.04-beta
--
It seems it's all related to SATA initialization.
udev does not build like other packages, and I'd hate to miss something
about building it and then wreck my system totally.
On 3/13/07, Reinhard Tartler [EMAIL PROTECTED] wrote:
** Changed in: mdadm (Ubuntu Feisty)
Target: None =>
I've just tried Herd 5 x86 and hit a similar issue; only this box has no
RAID capability.
Setup is:
- 1 disk (/dev/sda1)
- 1 DVD+RW (/dev/scd0)
Attempting to boot from the live CD consistently gives me:
udevd-event[2029]: run_program: '/sbin/modprobe' abnormal exit
BusyBox v1.1.3
From what I've seen, I don't think this is the same. In your case it's
modprobe's fault by extension, and really probably caused by a module not
loading for whatever reason.
Caveat: I have not done bug hunting for your bug. I just don't think it's
this one.
On 3/13/07, pjwigan [EMAIL
Thanks for the tip. Having dug deeper, 84964 is a perfect match.
--
I would surely test a newer udev packages if it fixes this problem.
Using apt-get source to rebuild the debian package should be fairly
easy, but I guess ubuntu maintains a set of additional patches, and
merging them might be non-trivial. Are any of the ubuntu devs willing to
upload a test package
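(For the record, rebuilding the Debian package locally goes roughly like
this - a sketch assuming a deb-src line for Debian unstable and the usual
build tools:

    apt-get source udev                    # fetch the Debian source package
    sudo apt-get build-dep udev            # install its build dependencies
    cd udev-*/
    dpkg-buildpackage -rfakeroot -us -uc   # build unsigned binary packages
)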
Is anyone listening???
--
boot-time race condition initializing md
https://launchpad.net/bugs/75681
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
I have been dealing with this for a while now, and it's a udev problem.
Check out the following:
https://launchpad.net/bugs/90657
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=403136
I have four identical SATAs, and sometimes the sd[a,b] device nodes fail
to show up, and sometimes they don't
This is very much udev's fault. A fix is reportedly in debian's 105-2
version.
** Also affects: udev (Ubuntu)
Importance: Undecided
Status: Unconfirmed
--
Can confirm this with a fresh installation from
daily/20070301/feisty-server-amd64.iso.
Setup:
- only one disk
- md0 raid1 mounted as / (/dev/sda1 + other mirror missing, the installation ui
actually permits this)
- md1 raid1 unused (/dev/sda3 + other mirror missing)
On the first boot I got to
** Summary changed:
- initramfs script: race condition between sata and md
+ boot-time race condition initializing md