The snippet from comment #5 actually has a major flaw: there is no
waiting in the loop after the sfdisk --re-read, so we really do run into
races with the udev-triggered partition checks (blkid/cdrom_id). Adding
a udevadm settle after the sfdisk call prevents all the failures I saw
before.
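
Roughly what I mean, as a minimal sketch (device name and partition data
are placeholders; the real loop is the one in comment #5):

    #!/bin/sh
    DEV=/dev/sda                       # placeholder device
    # rewrite the partition table, then ask the kernel to re-read it
    echo '2048,,83' | sfdisk --force "$DEV"
    sfdisk --re-read "$DEV"
    # wait for the blkid/cdrom_id runs that udev triggers for the re-read;
    # without this, whatever touches $DEV next races against them
    udevadm settle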

Similarly for the go-fsdisk-crazy script: not sure why I had no issues
before, but this morning I saw it fail. Looking closer, the failures
actually started before the grow part. It usually had issues during the
partition setup, and then everything else went wrong as well. That, too,
went away after adding a settle after the resize.
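
What I added is essentially this (just a sketch; the actual resize
invocation in go-fsdisk-crazy may well look different, and the
-N/", +" grow is my assumption):

    DEV=/dev/sda                       # placeholder device
    # hypothetical grow of partition 1 to the end of the disk
    echo ', +' | sfdisk --force -N 1 "$DEV"
    # the important bit: settle before the script does the next partition
    # setup step, so it does not race against the udev-triggered blkid
    udevadm settle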

There was also something very odd about the CHS output, which seemed to
change (even later, on the successful runs). I am not yet sure I
understand what is happening there. Overall, though, I think we may be
hunting in the wrong direction. Looking at the initial failing output,
there is just the message about sfdisk failing; then (and I think that
is in the recovery path) the re-read of the partition table fails. Could
it be that sfdisk reported failure but triggered a partition table
update anyway, and then the call in the recovery fails because we are
still racing against the udev commands triggered by the previous call?
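
If that theory is right, draining the udev queue at the start of the
recovery should make the second failure go away. Something along these
lines (sketch only, the real recovery code obviously differs):

    DEV=/dev/sda                       # placeholder device
    if ! echo '2048,,83' | sfdisk --force "$DEV"; then
        # sfdisk reported failure, but it may still have changed the table
        # and caused udev to fire blkid/cdrom_id; settle before touching
        # the device again in the recovery path
        udevadm settle
        sfdisk --re-read "$DEV"
    fi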
