Not sure this is leading anywhere, but when comparing the console log of a Maverick instance with the failed boots here, I saw the following:
[    0.242692] blkfront: sda1: barriers enabled (drain)
[    0.243443] Setting capacity to 20971520
[    0.243461] sda1: detected capacity change from 0 to 10737418240
[    0.244067] blkfront: sdb: barriers enabled (drain)
[    0.264727]  sdb: unknown partition table
[    0.264928] Setting capacity to 880732160
[    0.264940] sdb: detected capacity change from 0 to 450934865920
[    0.265508] blkfront: sdc: barriers enabled (drain)
[    0.266328]  sdc: unknown partition table
[    0.266507] Setting capacity to 880732160
[    0.266519] sdc: detected capacity change from 0 to 450934865920

I have not looked into this more deeply yet, but there were no such capacity-change messages on Maverick. That could simply mean the message was not printed there, or it could mean the blkfront driver really was changed to initialize the device like a removable block device, starting with size 0 and only then changing to the real size. If so, a udev helper may be needed to recognize the change event and trigger a rescan of the partitions.
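For illustration only, here is a rough, untested sketch of what such a helper could do if a udev rule invoked it on the "change" uevent for these devices: it just asks the kernel to re-read the partition table via the BLKRRPART ioctl. The program itself and how it would be hooked into udev are my assumptions, not something that exists today.

/*
 * Hypothetical helper: re-read the partition table of the block device
 * given as argv[1] (e.g. /dev/sda) after its capacity has changed from
 * 0 to the real size. A udev rule would have to run this on the
 * "change" event; this is only a sketch, not the actual fix.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* BLKRRPART */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Ask the kernel to rescan the partition table of the whole disk. */
    if (ioctl(fd, BLKRRPART) < 0) {
        perror("ioctl(BLKRRPART)");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}

A rule matching ACTION=="change", SUBSYSTEM=="block" could then run it for the blkfront-exported devices, but I have not checked whether that alone is enough to let the root filesystem mount.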
--
natty kernel fails to mount root on ec2
https://bugs.launchpad.net/bugs/669496