[Bug 1819869] Re: linux-cloud-tools-common 3.13.0-166.216 in Trusty is missing contents of /usr/sbin

2022-02-16 Thread Tobias Koch
** Changed in: linux (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1819869

Title:
  linux-cloud-tools-common 3.13.0-166.216 in Trusty is missing contents
  of /usr/sbin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1819869/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1936833] Re: postinst fails when cloud-init is installed but never ran

2021-07-27 Thread Tobias Koch
27.2.1 is in bionic, focal and hirsute. According to @raof it was
SRU'ed yesterday. It seems this bug slipped through.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1936833

Title:
  postinst fails when cloud-init is installed but never ran

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/1936833/+subscriptions



[Bug 1837871] Re: Add retry logic to snap-tool to make downloads more resilient

2019-08-30 Thread Tobias Koch
I see two errors, both of which are unrelated to this update:

1) A call to "snap prepare-image" fails because the download of a model
assertion fails.
2) A FileNotFoundError in test_does_not_fit

Neither of these involves a snap-tool invocation.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1837871

Title:
  Add retry logic to snap-tool to make downloads more resilient

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1837871/+subscriptions


[Bug 1837871] Re: Add retry logic to snap-tool to make downloads more resilient

2019-08-28 Thread Tobias Koch
Tested and confirmed functionality on Disco and Bionic.

** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic


[Bug 1837871] Re: Add retry logic to snap-tool to make downloads more resilient

2019-08-28 Thread Tobias Koch
** Tags removed: verification-needed-disco
** Tags added: verification-done-disco


[Bug 1837871] Re: Add retry logic to snap-tool to make downloads more resilient

2019-08-26 Thread Tobias Koch
MPs for disco:
https://code.launchpad.net/~tobijk/livecd-rootfs/+git/livecd-rootfs/+merge/371794

and bionic:
https://code.launchpad.net/~tobijk/livecd-rootfs/+git/livecd-rootfs/+merge/371795


[Bug 1834875] Re: cloud-init growpart race with udev

2019-08-26 Thread Tobias Koch
> (Odds are that whatever causes it to be recreated later in boot would be
> blocked by cloud-init waiting.)

But that's not happening. The instance boots normally; the only
degraded service is cloud-init, and there is no significant delay either.

So, conversely, if I put a loop into cloud-init that simply waits for
the symlink to appear, and that works with minimal delay, would that
refute the above?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1834875

Title:
  cloud-init growpart race with udev

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1834875/+subscriptions


[Bug 1834875] Re: cloud-init growpart race with udev

2019-08-23 Thread Tobias Koch
I may be missing the point, but the symlink in question is eventually
recreated. Does that tell us anything? This here

> Dan had put a udevadm settle in this spot like so
>
> def get_size(filename):
>     util.subp(['udevadm', 'settle'])
>     os.open()

looks to me like the event queue should be empty at that point, but how
do you know userspace has acted on what came out of it? Is it strictly
required that an event is cleared only after the corresponding action
has completed? If yes, we can probably blame udev. If not, cloud-init
should wait for the link to appear.
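If there is no such guarantee, the robust option is the one suggested at the end: have cloud-init wait for the link itself. A rough sketch of what that could look like (illustrative only, not cloud-init's actual code; the function name, path and timeout are made up):

```python
import os
import time

def wait_for_device_link(path, timeout=5.0, interval=0.1):
    """Poll until a udev-managed symlink (re)appears, or give up.

    Returns True as soon as the link exists, False once `timeout`
    seconds have elapsed without it showing up.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.lexists(path):
            return True
        time.sleep(interval)
    return os.path.lexists(path)

# e.g. wait for a by-id link that udev may briefly remove and recreate
# while processing partition events (path is illustrative):
# wait_for_device_link("/dev/disk/by-id/some-disk-part1")
```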


[Bug 1834875] Re: cloud-init growpart race with udev

2019-08-23 Thread Tobias Koch
** Changed in: cloud-init
   Status: Invalid => New


[Bug 1834875] Re: cloud-init growpart race with udev

2019-08-23 Thread Tobias Koch
@daniel-thewatkins, I'm not convinced that this bug is invalid for
cloud-init. After reading through all of this again, I still don't
understand what guarantee there is that, when `udevadm settle` is
called, all relevant events have already been queued.

Copying udevadm over, and with that suppressing the error, suggests that
maybe the event queue handling is at fault, but on the other hand, it
might just be that previous versions were slower at something and the
10ms window discussed above was always exceeded because of that anyway.

I'm not saying there cannot also be a bug somewhere else. But if there
is no specification saying there cannot, under any circumstances, be a
race condition in what cloud-init is doing, then cloud-init should
handle this more robustly.

I'm not an expert on that level, but in a world of multi-core CPUs and
fancy schedulers, invoking command-line tools in a certain order does
not seem to preclude a race in how the event is submitted, routed and
queued, absent an explicit locking mechanism.


[Bug 1834875] Re: cloud-init growpart race with udev

2019-08-07 Thread Tobias Koch
When I say unorthodox, I mean I copied the binary ;) So that should
narrow it down quite a bit. Unless there is more funniness involved.


[Bug 1834875] Re: cloud-init growpart race with udev

2019-08-05 Thread Tobias Koch
Oh, good observation, Dan. Yes, it was happening on Cosmic and later,
but I cannot say when it started. In a moment of despair I tried
something very unorthodox: I copied udevadm from Bionic into the Eoan
image and ran the tests again. Lo and behold, all tests pass.

** No longer affects: linux-azure (Ubuntu)

** Also affects: systemd (Ubuntu)
   Importance: Undecided
   Status: New


[Bug 1834875] Re: cloud-init growpart race with udev

2019-08-02 Thread Tobias Koch
** Also affects: linux-azure (Ubuntu)
   Importance: Undecided
   Status: New


[Bug 1837871] [NEW] Add retry logic to snap-tool to make downloads more resilient

2019-07-25 Thread Tobias Koch
Public bug reported:

[Impact]

* livecd-rootfs builds fail immediately when a snap-tool invocation
cannot contact the snap store because of ephemeral connection problems
or a transient error on the server side.

* The snap-tool script included with livecd-rootfs in Eoan has been
enhanced to retry on connection errors and 5xx server errors, reducing
the likelihood of image builds breaking due to a flaky connection or a
server hiccup.

[Test Cases]

* Download the core and core18 snaps using both `snap download <snap>`
and `snap-tool download <snap>` and make sure the downloads are
identical.

* Invoke `snap-tool info <snap>` for a few snaps, e.g. review-gator,
lpshipit, azure-cli, and verify that all fields carry correct
information.

* Test the backoff/retry logic using the following procedure:

Make netcat listen on port 12345:

    netcat -l -p 12345

Create a symlink from snaptool.py to snap-tool and import the
ExpBackoffHTTPClient class from a Python session:

    ln -s snap-tool snaptool.py

    python3
    >>> from snaptool import ExpBackoffHTTPClient
    >>> http_client = ExpBackoffHTTPClient()
    >>> request = http_client.get("http://127.0.0.1:12345/")
    >>> request.text()

Go back to the terminal where you invoked netcat and stop it. snap-tool
should print the following and then fail:

    WARNING: failed to open URL 'http://127.0.0.1:12345/': Remote end closed
    connection without response
    Retrying HTTP request in 2 seconds...
    WARNING: failed to open URL 'http://127.0.0.1:12345/': 
    Retrying HTTP request in 4 seconds...

Repeat the procedure above, but instead of stopping netcat, paste the
following response:

    HTTP/1.1 503 Error

and hit enter twice. You should see:

    WARNING: failed to open URL 'http://127.0.0.1:12345/': HTTP Error 503: Error
    Retrying HTTP request in 2 seconds...

Repeat the above, pasting "HTTP/1.1 404 Not Found" instead. This time
snap-tool should fail immediately.
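The behavior exercised above (retry with exponentially growing delays on connection errors and 5xx responses, fail fast on 4xx) can be sketched in plain Python as follows. This is a hypothetical stand-in, not the actual ExpBackoffHTTPClient shipped in snap-tool:

```python
import time
import urllib.error
import urllib.request

def get_with_backoff(url, retries=4, first_delay=2.0):
    """Fetch url, retrying on connection errors and HTTP 5xx responses.

    Delays double on each retry (2s, 4s, 8s, ...).  Client errors such
    as 404 are treated as permanent and re-raised immediately.
    """
    delay = first_delay
    for attempt in range(retries):
        try:
            return urllib.request.urlopen(url)
        except urllib.error.HTTPError as err:
            if err.code < 500:
                raise  # permanent client error: do not retry
            last_error = err
        except urllib.error.URLError as err:
            last_error = err  # connection-level problem: retry
        if attempt == retries - 1:
            break
        print("WARNING: failed to open URL '%s': %s" % (url, last_error))
        print("Retrying HTTP request in %d seconds..." % delay)
        time.sleep(delay)
        delay *= 2
    raise last_error
```

The real class presumably wraps this in an object with the get()/text() methods used in the test procedure; the sketch keeps only the retry core.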

[Regression Potential]

 * Tool logic and HTTP request headers/body are unchanged; only the way
connections are established has been modified. The expectation is that
this will be more robust, and testing in the devel series has not
surfaced any bugs, but there is a slight risk that the tool's behavior
has changed in unobvious corner cases missed during testing.

** Affects: livecd-rootfs (Ubuntu)
 Importance: Undecided
 Status: In Progress


** Tags: id-5d0a349876579b42ed84d920

** Summary changed:

- Backport snap-tool backoff/retry logic
+ Add retry logic to snap-tool to make downloads more resilient


[Bug 1837871] Re: Add retry logic to snap-tool to make downloads more resilient

2019-07-25 Thread Tobias Koch
** Patch added: "Patch Bionic"
   
https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1837871/+attachment/5279186/+files/backport-bionic.diff


[Bug 1837871] Re: Add retry logic to snap-tool to make downloads more resilient

2019-07-25 Thread Tobias Koch
** Patch added: "Patch Disco"
   
https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1837871/+attachment/5279187/+files/backport-disco.diff

** Changed in: livecd-rootfs (Ubuntu)
   Status: New => In Progress


[Bug 1828500] Re: snapd fails always in Optimised Ubuntu Desktop images available in Microsoft Hyper-V gallery

2019-07-18 Thread Tobias Koch
I have verified (bionic 2.525.28 and disco 2.578.6) that the changes to
snap_preseed do not break existing functionality and that an error is
thrown when the helper _snap_preseed is called with an invalid snap
name, resulting in snap-info failing to retrieve the base.

** Tags removed: verification-needed-bionic verification-needed-disco
** Tags added: verification-done-bionic verification-done-disco

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1828500

Title:
  snapd fails always in  Optimised Ubuntu Desktop images available in
  Microsoft Hyper-V gallery

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1828500/+subscriptions


[Bug 1828500] Re: snapd fails always in Optimised Ubuntu Desktop images available in Microsoft Hyper-V gallery

2019-07-17 Thread Tobias Koch
I believe that comment #27 is correct, because after it the
autopkgtests for ubuntu-image (triggered by a qemu update) succeeded
using the previous revision of livecd-rootfs on disco. But I am not able
to tell how the changes made to livecd-rootfs affect the
operation/testing of ubuntu-image.


[Bug 1828500] Re: snapd fails always in Optimised Ubuntu Desktop images available in Microsoft Hyper-V gallery

2019-07-16 Thread Tobias Koch
** Patch added: "Backport for Disco"
   
https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1828500/+attachment/5277326/+files/disco.debdiff


[Bug 1828500] Re: snapd fails always in Optimised Ubuntu Desktop images available in Microsoft Hyper-V gallery

2019-07-16 Thread Tobias Koch
** Description changed:

+ [Impact]
+ 
+ * Failing to do error checks on `snap-tool info` calls while determining
+ the bases of snaps can lead to inconsistently seeded images, as reported
+ in the original bug report below.
+ 
+ * This affects all stable releases which use automatic detection of the
+ base snap during image builds.
+ 
+ * The proposed changes introduce extra error checking, which should
+ result in a build error if the `snap-tool info` call fails, thus
+ preventing distribution of a broken image.
+ 
+ [Test Case]
+ 
+ * It may be tricky to reproduce the exact error on demand. But it should
+ be clear by looking at the code path in
+ 
+ https://code.launchpad.net/~tobijk/livecd-rootfs/+git/livecd-rootfs/+merge/370096
+ 
+ that the scenario described below may arise.
+ 
+ * A way to test that the changes have no *negative* impact on
+ functionality is to run a script such as
+ 
+ https://pastebin.ubuntu.com/p/wQknctr6ys/
+ 
+ and compare it against the expected result
+ 
+ https://pastebin.ubuntu.com/p/N8PXRJJV9b/
+ 
+ (In this case both core and core18 should have been seeded.)
+ 
+ [Regression Potential]
+ 
+ * The regression potential for these changes is low, as they only add
+ extra error checking but do not change the existing logic.
+ 
+ --- Original Bug Report ---
+ 
  Always fails in both 18.04.2 and 19.04
  Reporting the bug failed too yesterday.
- As a consequence LivePatch and everything else fails that is based on snapd. 
+ As a consequence LivePatch and everything else fails that is based on snapd.
  This makes system unusable. This is a supported configuration.
  
  
https://blog.ubuntu.com/2018/09/17/optimised-ubuntu-desktop-images-available-in-microsoft-hyper-v-gallery
  
https://blog.ubuntu.com/2019/05/07/19-04-disco-dingo-now-available-as-optimised-desktop-image-for-hyper-v
  
  xx@xenial-Virtual-Machine:~$ sudo snap install vscode
- [sudo] password for peterg: 
+ [sudo] password for peterg:
  error: too early for operation, device not yet seeded or device model not 
acknowledged
  xx@xenial-Virtual-Machine:~$ journalctl -b -u snapd --full --no-pager
  -- Logs begin at Thu 2019-05-09 20:06:58 AEST, end at Fri 2019-05-10 13:20:46 
AEST. --
  May 10 13:19:59 xenial-Virtual-Machine systemd[1]: Starting Snappy daemon...
  May 10 13:20:00 xenial-Virtual-Machine snapd[566]: AppArmor status: apparmor 
is enabled and all features are available
  May 10 13:20:00 xenial-Virtual-Machine snapd[566]: helpers.go:717: cannot 
retrieve info for snap "gnome-calculator": cannot find installed snap 
"gnome-calculator" at revision 352: missing file 
/snap/gnome-calculator/352/meta/snap.yaml
  May 10 13:20:00 xenial-Virtual-Machine snapd[566]: daemon.go:379: started 
snapd/2.38+18.04 (series 16; classic) ubuntu/18.04 (amd64) 
linux/4.15.0-48-generic.
  May 10 13:20:00 xenial-Virtual-Machine systemd[1]: Started Snappy daemon.
  May 10 13:20:46 xenial-Virtual-Machine snapd[566]: api.go:1071: Installing 
snap "vscode" revision unset
  
  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: snapd 2.38+18.04
  ProcVersionSignature: Ubuntu 4.15.0-48.51-generic 4.15.18
  Uname: Linux 4.15.0-48-generic x86_64
  ApportVersion: 2.20.9-0ubuntu7.6
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Fri May 10 13:21:25 2019
  ProcEnviron:
-  LANGUAGE=en_AU:en
-  PATH=(custom, no user)
-  XDG_RUNTIME_DIR=
-  LANG=en_AU.UTF-8
-  SHELL=/bin/bash
+  LANGUAGE=en_AU:en
+  PATH=(custom, no user)
+  XDG_RUNTIME_DIR=
+  LANG=en_AU.UTF-8
+  SHELL=/bin/bash
  SourcePackage: snapd
  UpgradeStatus: No upgrade log present (probably fresh install)

** Patch added: "Backport for Bionic"
   
https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1828500/+attachment/5277325/+files/bionic.debdiff


[Bug 1828500] Re: snapd fails always in Optimised Ubuntu Desktop images available in Microsoft Hyper-V gallery

2019-07-15 Thread Tobias Koch
Maybe also useful: modifications to snap-tool to make it more resilient
to connection failures of all sorts:

https://code.launchpad.net/~tobijk/livecd-rootfs/+git/livecd-rootfs/+merge/370139


[Bug 1828500] Re: snapd fails always in Optimised Ubuntu Desktop images available in Microsoft Hyper-V gallery

2019-07-15 Thread Tobias Koch
My mistake that I didn't do proper error checking while determining the
base of a snap. This here

https://code.launchpad.net/~tobijk/livecd-rootfs/+git/livecd-rootfs/+merge/370096

will hopefully prevent this from happening again.
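For illustration, the pattern the fix introduces (abort instead of proceeding with an unknown base) can be sketched in shell, the language livecd-rootfs is written in. The helper names and the stubbed snap-tool output below are made up for the example:

```shell
#!/bin/bash

# Stub standing in for `snap-tool info <snap>`; the real tool queries
# the store.  Fails for unknown snaps, like a store error would.
fake_snap_tool_info() {
    case "$1" in
        chromium) printf 'name: chromium\nbase: core18\n' ;;
        core)     printf 'name: core\n' ;;
        *)        return 1 ;;
    esac
}

# Determine a snap's base, failing loudly if the info call itself fails
# instead of silently falling through with an empty value.
get_base_snap() {
    local snap_name="$1" info base
    if ! info=$(fake_snap_tool_info "$snap_name"); then
        echo "E: unable to fetch info for snap '$snap_name'" >&2
        return 1
    fi
    base=$(printf '%s\n' "$info" | sed -n 's/^base:[[:space:]]*//p')
    echo "${base:-core}"   # snaps without an explicit base use "core"
}

get_base_snap chromium                       # prints "core18"
get_base_snap core                           # prints "core"
get_base_snap no-such-snap || echo "image build would abort here"
```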


[Bug 1828500] Re: snapd fails always in Optimised Ubuntu Desktop images available in Microsoft Hyper-V gallery

2019-07-12 Thread Tobias Koch
I ran the following test on the current version of livecd-rootfs for
Bionic:

https://pastebin.ubuntu.com/p/wQknctr6ys/

with a positive result:

https://pastebin.ubuntu.com/p/N8PXRJJV9b/

So I think in principle it should work, but maybe there are silent
download failures. We definitely need mvo's sanity checks on seed.yaml.
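Such a sanity check could, for instance, confirm that every snap named in the seed actually arrived on disk. A minimal sketch, assuming the parsed seed has the usual `snaps: [{name, file}, ...]` shape; this is illustrative, not mvo's actual checks:

```python
import os

def check_seed(seed, seed_dir):
    """Return a list of problems found in a parsed seed.yaml structure.

    Every listed snap must name a file that exists under seed_dir/snaps
    and is non-empty; an empty file would indicate a silent download
    failure of the kind suspected above.
    """
    problems = []
    for entry in seed.get("snaps", []):
        name = entry.get("name", "<unnamed>")
        fname = entry.get("file")
        if not fname:
            problems.append("%s: no file recorded in seed" % name)
            continue
        path = os.path.join(seed_dir, "snaps", fname)
        if not os.path.isfile(path):
            problems.append("%s: missing file %s" % (name, path))
        elif os.path.getsize(path) == 0:
            problems.append("%s: zero-length download %s" % (name, path))
    return problems
```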


[Bug 1835909] [NEW] grub-probe fails to recognize ext4 partition

2019-07-09 Thread Tobias Koch
Public bug reported:

On a recent Bionic Azure image, the command

grub-probe --device /dev/sda1 --target=fs_uuid

fails with

grub-probe: error: not a directory.

The reason for this seems to be that grub-probe finds the minixfs
superblock magic 0x138F at offset

0410  8f 13 3a 00 00 00 00 00  02 00 00 00 02 00 00 00
|..:.|

then assumes it has found a minixfs-formatted partition and exits with
an error when it cannot access it, instead of trying other filesystems
first.

I don't know what the ext filesystem stores at that position, nor
whether it is straightforward to fabricate this manually.

To reproduce this, you may launch the Azure Bionic image with the URN

Canonical:UbuntuServer:18.04-LTS:18.04.201906271
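The misdetection is easy to demonstrate directly. The sketch below reads the 16-bit value at offset 0x410 (16 bytes into the superblock block at offset 1024, where minixfs keeps its magic) and compares it against the classic minix magic numbers; the constants are from the kernel's minix_fs.h, the function name is made up:

```python
import struct

# Classic minix superblock magic values (linux/minix_fs.h).
MINIX_MAGICS = {0x137F, 0x138F, 0x2468, 0x2478}

def looks_like_minix(device_path):
    """Return (magic, matches) for the 16-bit LE value at offset 0x410,
    the location where grub-probe found 0x138F on the affected image."""
    with open(device_path, "rb") as f:
        f.seek(0x410)
        (magic,) = struct.unpack("<H", f.read(2))
    return magic, magic in MINIX_MAGICS
```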

** Affects: grub2 (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1835909

Title:
  grub-probe fails to recognize ext4 partition

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1835909/+subscriptions


[Bug 1829944] Re: magic-proxy does not send along headers for errors

2019-05-22 Thread Tobias Koch
** Description changed:

  [Impact]
  
  When there is an error with the apt proxy the headers are not returned
  which makes debugging hard.
  
  [Test Case]
  
  This was tested on xenial when the release lacked the InRelease file by-
- hash.  It's hard to test otherwise unless there are problems with the
- archive.
+ hash.
+ 
+ This can be verified manually as follows:
+ 
+ * Start the proxy via
+ 
+ ./magic-proxy -t `python -c "import time; print(int(time.time() - 300))"`
+ 
+ * In an empty directory do
+ 
+ mkdir -p ubuntu/dists/xenial/by-hash/SHA256
+ touch ubuntu/dists/xenial/InRelease
+ python3 -m http.server
+ 
+ * In a separate terminal telnet to port 8080 and paste
+ 
+ GET /ubuntu/dists/xenial/InRelease HTTP/1.1
+ HOST: localhost:8000
+ 
+ followed by an empty line. Without the patch, the response will be the
+ string "No InRelease file found for given mirror, suite and timestamp".
+ 
+ With the patch applied, the response will start with an HTTP status line
+ "HTTP/1.0 404 Not Found" and a set of HTTP headers.
  
  [Regression Potential]
  
  Low.  This was tested on xenial when the release lacked the InRelease
  file by-hash.  Also, this code is not yet in use.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1829944

Title:
  magic-proxy does not send along headers for errors

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1829944/+subscriptions


[Bug 1822270] [NEW] Debconf readline frontend does not show options

2019-03-29 Thread Tobias Koch
Public bug reported:

AFFECTED RELEASE:

Bionic

PACKAGE VERSION:

debconf - 1.5.66

DESCRIPTION:

When upgrading the kernel on a recent Bionic minimal image, the user is
prompted to resolve a conflict in the file /boot/grub/menu.lst.

The minimal images do not have dialog/whiptail installed, so debconf
falls back to using the readline frontend.

The user sees the prompt: "What would you like to do about menu.lst?"
but is not presented with the list of options to choose from.

If a valid option is typed in, debconf continues processing correctly
and the list of options appears on the screen. See also
https://pastebin.ubuntu.com/p/8xvSn88SKG/

STEPS TO REPRODUCE:

Launch the minimal Bionic image with serial 20190212:

http://cloud-images.ubuntu.com/minimal/releases/bionic/release-20190212/ubuntu-18.04-minimal-cloudimg-amd64.img

for example via multipass, and run `apt-get update` and `apt-get
dist-upgrade`.

** Affects: debconf (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: bionic

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1822270

Title:
  Debconf readline frontend does not show options

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/debconf/+bug/1822270/+subscriptions


[Bug 1820063] Re: [Hyper-V] KVP daemon fails to start on first boot of disco VM

2019-03-27 Thread Tobias Koch
** Tags added: bionic cosmic xenial

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1820063

Title:
  [Hyper-V] KVP daemon fails to start on first boot of disco VM

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1820063/+subscriptions


[Bug 1820063] Re: [Hyper-V] KVP daemon fails to start on first boot of disco VM

2019-03-21 Thread Tobias Koch
This also happens in Bionic minimal images and seems to be a simple race
where the daemon is launched when the device is not yet visible.

It seems Fedora had the same or a similar issue:

https://bugzilla.redhat.com/show_bug.cgi?id=1195029

From the conversation in that bug report, my understanding is that this
daemon supports features that can be enabled/disabled while the system
is live, and that the daemon should therefore be started/stopped
dynamically as the /dev/vmbus/hv_kvp device node comes and goes.

Since whether this race condition occurs may depend on the host, the
hypervisor and boot speed, it could also pop up on existing releases and
should therefore probably be treated with a certain amount of urgency.

** Bug watch added: Red Hat Bugzilla #1195029
   https://bugzilla.redhat.com/show_bug.cgi?id=1195029
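One way to express "start the daemon only while the device node exists", along the lines of the Fedora fix, is to bind the service to the corresponding systemd device unit. A sketch, not the unit Ubuntu ships; the device-unit name assumes udev tags the node for systemd:

```
# hv-kvp-daemon.service (illustrative)
[Unit]
Description=Hyper-V KVP protocol daemon
# Start only once /dev/vmbus/hv_kvp exists, and stop when it goes away.
# Requires a udev rule tagging the node with TAG+="systemd" so that
# systemd creates dev-vmbus-hv_kvp.device.
BindsTo=dev-vmbus-hv_kvp.device
After=dev-vmbus-hv_kvp.device

[Service]
ExecStart=/usr/sbin/hv_kvp_daemon -n

[Install]
WantedBy=multi-user.target
```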


[Bug 1820840] Re: [Disco] Ubuntu Desktop fails to build - snap-tool download: failed to get details for 'core18' in 'stable/ubuntu-19.04' on 'amd64': No revision was found in the Store.

2019-03-20 Thread Tobias Koch
I would like to propose this change to snap-tool to partially mitigate
the problems discussed here:

https://code.launchpad.net/~tobijk/livecd-rootfs/+git/livecd-rootfs/+merge/364800
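The merge proposal itself is linked above; purely as an illustration of the general retry idea (this is not the actual snap-tool code, and the attempt count and delay are made up), a minimal shell retry wrapper might look like:

```shell
# retry CMD...: run CMD, retrying up to 3 attempts with a short pause
# between them, to ride out transient store or network failures.
# Illustrative sketch only.
retry() {
    attempt=0
    until "$@"; do
        attempt=$((attempt + 1))
        [ "$attempt" -ge 3 ] && return 1
        sleep 1
    done
}
```

Usage would be along the lines of `retry some-download-command ...`, where the wrapped command is whatever store operation is flaky.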

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1820840

Title:
  [Disco] Ubuntu Desktop fails to build - snap-tool download: failed to
  get details for 'core18' in 'stable/ubuntu-19.04' on 'amd64': No
  revision was found in the Store.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1820840/+subscriptions


[Bug 1819869] Re: linux-cloud-tools-common 3.13.0-166.216 in Trusty is missing contents of /usr/sbin

2019-03-14 Thread Tobias Koch
** Tags removed: verified-trusty
** Tags added: verified-done-trusty

** Tags removed: verified-done-trusty
** Tags added: verification-done-trusty

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1819869

Title:
  linux-cloud-tools-common 3.13.0-166.216 in Trusty is missing contents
  of /usr/sbin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1819869/+subscriptions


[Bug 1819869] Re: linux-cloud-tools-common 3.13.0-166.216 in Trusty is missing contents of /usr/sbin

2019-03-14 Thread Tobias Koch
I have verified that version 3.13.0-167.217 contains the missing files
and that after installation hv_kvp_daemon and hv_vss_daemon are active.

** Tags removed: verification-needed-trusty
** Tags added: verified-trusty

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1819869

Title:
  linux-cloud-tools-common 3.13.0-166.216 in Trusty is missing contents
  of /usr/sbin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1819869/+subscriptions


[Bug 1819869] Re: linux-cloud-tools-common 3.13.0-166.216 in Trusty is missing contents of /usr/sbin

2019-03-13 Thread Tobias Koch
This is a packaging issue, so logs should be irrelevant.

** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1819869

Title:
  linux-cloud-tools-common 3.13.0-166.216 in Trusty is missing contents
  of /usr/sbin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1819869/+subscriptions


[Bug 1819869] [NEW] linux-cloud-tools-common 3.13.0-166.216 in Trusty is missing contents of /usr/sbin

2019-03-13 Thread Tobias Koch
Public bug reported:

Version 3.13.0-165.215 of linux-cloud-tools-common has the following
contents

/.
/usr
/usr/sbin
/usr/sbin/hv_vss_daemon
/usr/sbin/hv_kvp_daemon
/usr/sbin/hv_fcopy_daemon
/usr/sbin/hv_get_dhcp_info
/usr/sbin/hv_set_ifconfig
/usr/sbin/hv_get_dns_info
/usr/share
/usr/share/man
/usr/share/man/man8
/usr/share/man/man8/hv_kvp_daemon.8.gz
/usr/share/doc
/usr/share/doc/linux-cloud-tools-common
/usr/share/doc/linux-cloud-tools-common/changelog.Debian.gz
/usr/share/doc/linux-cloud-tools-common/copyright
/etc
/etc/init
/etc/init/hv-vss-daemon.conf
/etc/init/hv-fcopy-daemon.conf
/etc/init/hv-kvp-daemon.conf

whereas version 3.13.0-166.216 only contains

/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/linux-cloud-tools-common
/usr/share/doc/linux-cloud-tools-common/copyright
/usr/share/doc/linux-cloud-tools-common/changelog.Debian.gz
/etc
/etc/init
/etc/init/hv-vss-daemon.conf
/etc/init/hv-kvp-daemon.conf
/etc/init/hv-fcopy-daemon.conf

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1819869

Title:
  linux-cloud-tools-common 3.13.0-166.216 in Trusty is missing contents
  of /usr/sbin

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1819869/+subscriptions


[Bug 1804445] Re: Update rax-nova-agent to 2.1.18

2018-12-10 Thread Tobias Koch
Images for Xenial, Bionic and Cosmic have been generated and tested. We
have confirmation that all images are working as expected with this
version of rax-nova-agent.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1804445

Title:
   Update rax-nova-agent to 2.1.18

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rax-nova-agent/+bug/1804445/+subscriptions


[Bug 1792905] Re: [2.5] iSCSI systemd services fails and blocks for 1 min 30 secconds

2018-09-20 Thread Tobias Koch
** Description changed:

  [Impact]
  
   * Affects environments where the base image is read-only but kernel
  modules are copied to a tempfs or other overlay mounted on /lib/modules.
  
   * This affects users of our stable release images available from http
  ://cloud-images.ubuntu.com.
  
   * The attached fixes ensure /lib/modules always exists by creating it
  explicitly instead of relying on it to come from a package.
  
  [Test Case]
  
   * Download http://cloud-images.ubuntu.com/bionic/current/bionic-server-
  cloudimg-amd64.squashfs
  
   * Unpack it via `sudo unsquashfs bionic-server-cloudimg-amd64.squashfs`
  
   * Inspect the unpacked root filesystem and find that '/lib/modules' is
  missing.
  
   * Install local build scripts as described at
  https://github.com/chrisglass/ubuntu-old-fashioned (note: you will need
  ubuntu-old-fashioned master for cosmic)
  
  * Re-build the images using the updated livecd-rootfs package.
  
  * Unpack the resulting livecd.ubuntu-cpc.squashfs artifact using
  unsquashfs again.
  
  * Inspect the unpacked root filesystem and find that '/lib/modules'
  exists.
  
  * It is pure luck that package purges which are done analogously in
  Cosmic image builds do not remove '/lib/modules', hence this fix is
  introduced there, as well.
  
  * Xenial is not affected.
  
  * Test builds were carried out for Cosmic and Bionic with the expected
  results.
  
  [Regression Potential]
  
   * This is a fix to a regression. The existence of the directory had
  previously been ensured, but the mkdir call got lost in recent re-
- factoring.
+ factoring. See also:
+ 
+ https://bazaar.launchpad.net/~ubuntu-core-dev/livecd-rootfs/bionic-
+ proposed/revision/1678
+ 
+ https://bazaar.launchpad.net/~ubuntu-core-dev/livecd-
+ rootfs/trunk/revision/1681
  
   * Packaging tools should not take offense at the existence of a
  directory, even if it was not part of a package. So potential for
  unforseeable regressions is very low.
  
  ===ORIGINAL BUG DESCRIPTION===
  
  Let me first start with saying MAAS is *not* using iSCSI anymore and is
  *NOT* in this case either.
  
  For some reason now using enlistment, commissioning, and deploying the
  ephemeral environment will block for 1 min 30 seconds waiting for the
  iSCSI daemon to succeed, which it never does.
  
  This increases the boot time drastically.

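The fix described in the [Impact] section above boils down to creating the directory unconditionally during the image build; a minimal sketch (the `ROOT` variable and its `chroot` default are assumptions for illustration, not the actual livecd-rootfs hook):

```shell
# Ensure /lib/modules exists in the image root even when no installed
# package ships it, so a tmpfs or other overlay can later be mounted
# there. ROOT defaults to ./chroot here purely for illustration.
mkdir -p "${ROOT:-chroot}/lib/modules"
```

Because `mkdir -p` is idempotent, running it when a package already provides the directory is harmless, which matches the low regression potential argued above.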
-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1792905

Title:
  [2.5] iSCSI systemd services fails and blocks for 1 min 30 secconds

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1792905/+subscriptions


[Bug 1792905] Re: [2.5] iSCSI systemd services fails and blocks for 1 min 30 secconds

2018-09-20 Thread Tobias Koch
debdiff for Cosmic.

** Description changed:

  [Impact]
  
   * Affects environments where the base image is read-only but kernel
  modules are copied to a tempfs or other overlay mounted on /lib/modules.
  
   * This affects users of our stable release images available from http
  ://cloud-images.ubuntu.com.
  
   * The attached fixes ensure /lib/modules always exists by creating it
  explicitly instead of relying on it to come from a package.
  
  [Test Case]
  
   * Download http://cloud-images.ubuntu.com/bionic/current/bionic-server-
  cloudimg-amd64.squashfs
  
   * Unpack it via `sudo unsquashfs bionic-server-cloudimg-amd64.squashfs`
  
   * Inspect the unpacked root filesystem and find that '/lib/modules' is
  missing.
  
   * Install local build scripts as described at
  https://github.com/chrisglass/ubuntu-old-fashioned (note: you will need
  ubuntu-old-fashioned master for cosmic)
  
  * Re-build the images using the updated livecd-rootfs package.
  
  * Unpack the resulting livecd.ubuntu-cpc.squashfs artifact using
  unsquashfs again.
  
  * Inspect the unpacked root filesystem and find that '/lib/modules'
  exists.
  
- * Do the above for Bionic and Cosmic.
+ * It is pure luck that package purges which are done analogously in
+ Cosmic image builds do not remove '/lib/modules', hence this fix is
+ introduced there, as well.
+ 
+ * Xenial is not affected.
  
  [Regression Potential]
  
   * This is a fix to a regression. The existence of the directory had
  previously been ensured, but the mkdir call got lost in recent re-
  factoring.
  
   * Packaging tools should not take offense at the existence of a
  directory, even if it was not part of a package. So potential for
  unforseeable regressions is very low.
  
  ===
  
  Let me first start with saying MAAS is *not* using iSCSI anymore and is
  *NOT* in this case either.
  
  For some reason now using enlistment, commissioning, and deploying the
  ephemeral environment will block for 1 min 30 seconds waiting for the
  iSCSI daemon to succeed, which it never does.
  
  This increases the boot time drastically.

** Description changed:

  [Impact]
  
   * Affects environments where the base image is read-only but kernel
  modules are copied to a tempfs or other overlay mounted on /lib/modules.
  
   * This affects users of our stable release images available from http
  ://cloud-images.ubuntu.com.
  
   * The attached fixes ensure /lib/modules always exists by creating it
  explicitly instead of relying on it to come from a package.
  
  [Test Case]
  
   * Download http://cloud-images.ubuntu.com/bionic/current/bionic-server-
  cloudimg-amd64.squashfs
  
   * Unpack it via `sudo unsquashfs bionic-server-cloudimg-amd64.squashfs`
  
   * Inspect the unpacked root filesystem and find that '/lib/modules' is
  missing.
  
   * Install local build scripts as described at
  https://github.com/chrisglass/ubuntu-old-fashioned (note: you will need
  ubuntu-old-fashioned master for cosmic)
  
  * Re-build the images using the updated livecd-rootfs package.
  
  * Unpack the resulting livecd.ubuntu-cpc.squashfs artifact using
  unsquashfs again.
  
  * Inspect the unpacked root filesystem and find that '/lib/modules'
  exists.
  
  * It is pure luck that package purges which are done analogously in
  Cosmic image builds do not remove '/lib/modules', hence this fix is
  introduced there, as well.
  
  * Xenial is not affected.
  
  [Regression Potential]
  
   * This is a fix to a regression. The existence of the directory had
  previously been ensured, but the mkdir call got lost in recent re-
  factoring.
  
   * Packaging tools should not take offense at the existence of a
  directory, even if it was not part of a package. So potential for
  unforseeable regressions is very low.
  
- ===
+ ===ORIGINAL BUG DESCRIPTION===
  
  Let me first start with saying MAAS is *not* using iSCSI anymore and is
  *NOT* in this case either.
  
  For some reason now using enlistment, commissioning, and deploying the
  ephemeral environment will block for 1 min 30 seconds waiting for the
  iSCSI daemon to succeed, which it never does.
  
  This increases the boot time drastically.

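The inspection step of the test case above can be sketched as a small helper (hypothetical; `squashfs-root` is merely unsquashfs's default output directory):

```shell
# check_modules_dir ROOTDIR: succeed iff ROOTDIR/lib/modules exists,
# i.e. the unpacked image contains the directory the fix restores.
check_modules_dir() {
    [ -d "$1/lib/modules" ]
}

# Example, after `sudo unsquashfs bionic-server-cloudimg-amd64.squashfs`:
#   check_modules_dir squashfs-root && echo "fix present"
```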
** Description changed:

  [Impact]
  
   * Affects environments where the base image is read-only but kernel
  modules are copied to a tempfs or other overlay mounted on /lib/modules.
  
   * This affects users of our stable release images available from http
  ://cloud-images.ubuntu.com.
  
   * The attached fixes ensure /lib/modules always exists by creating it
  explicitly instead of relying on it to come from a package.
  
  [Test Case]
  
   * Download http://cloud-images.ubuntu.com/bionic/current/bionic-server-
  cloudimg-amd64.squashfs
  
   * Unpack it via `sudo unsquashfs bionic-server-cloudimg-amd64.squashfs`
  
   * Inspect the unpacked root filesystem and find that '/lib/modules' is
  missing.
  
   * Install local build scripts as described at
  

[Bug 1792905] Re: [2.5] iSCSI systemd services fails and blocks for 1 min 30 secconds

2018-09-20 Thread Tobias Koch
debdiff for Bionic.

** Patch added: "debdiff for Bionic."
   
https://bugs.launchpad.net/cloud-images/+bug/1792905/+attachment/5190910/+files/livecd-rootfs-bionic.diff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1792905

Title:
  [2.5] iSCSI systemd services fails and blocks for 1 min 30 secconds

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1792905/+subscriptions


[Bug 1792905] Re: [2.5] iSCSI systemd services fails and blocks for 1 min 30 secconds

2018-09-20 Thread Tobias Koch
** Description changed:

  [Impact]
  
   * Affects environments where the base image is read-only but kernel
  modules are copied to a tempfs or other overlay mounted on /lib/modules.
  
   * This affects users of our stable release images.
  
   * The attached fixes ensure /lib/modules always exists by creating it
  explicitly instead of relying on it to come from a package.
  
  [Test Case]
  
-  * TODO...
+  * Download http://cloud-images.ubuntu.com/bionic/current/bionic-server-
+ cloudimg-amd64.squashfs
  
-  * these should allow someone who is not familiar with the affected
-    package to reproduce the bug and verify that the updated package fixes
-    the problem.
+  * Unpack it via `sudo unsquashfs bionic-server-cloudimg-amd64.squashfs`
+ 
+  * Inspect the unpacked root filesystem and find that '/lib/modules' is
+ missing.
+ 
+  * Install local build scripts as described at
+ https://github.com/chrisglass/ubuntu-old-fashioned (note: you will need
+ ubuntu-old-fashioned master for cosmic)
+ 
+ * Re-build the images using the updated livecd-rootfs package.
+ 
+ * Unpack the resulting livecd.ubuntu-cpc.squashfs artifact using
+ unsquashfs again.
+ 
+ * Inspect the unpacked root filesystem and find that '/lib/modules'
+ exists.
+ 
+ * Do the above for Bionic and Cosmic.
  
  [Regression Potential]
  
   * This is a fix to a regression. The existence of the directory had
  previously been ensured, but the mkdir call got lost in recent re-
  factoring.
  
   * Packaging tools should not take offense at the existence of a
  directory, even if it was not part of a package at that time. So
  potential for regressions from this fix is basically zero.
  
  ===
  
  Let me first start with saying MAAS is *not* using iSCSI anymore and is
  *NOT* in this case either.
  
  For some reason now using enlistment, commissioning, and deploying the
  ephemeral environment will block for 1 min 30 seconds waiting for the
  iSCSI daemon to succeed, which it never does.
  
  This increases the boot time drastically.

** Description changed:

  [Impact]
  
   * Affects environments where the base image is read-only but kernel
  modules are copied to a tempfs or other overlay mounted on /lib/modules.
  
   * This affects users of our stable release images.
  
   * The attached fixes ensure /lib/modules always exists by creating it
  explicitly instead of relying on it to come from a package.
  
  [Test Case]
  
   * Download http://cloud-images.ubuntu.com/bionic/current/bionic-server-
  cloudimg-amd64.squashfs
  
-  * Unpack it via `sudo unsquashfs bionic-server-cloudimg-amd64.squashfs`
+  * Unpack it via `sudo unsquashfs bionic-server-cloudimg-amd64.squashfs`
  
-  * Inspect the unpacked root filesystem and find that '/lib/modules' is
+  * Inspect the unpacked root filesystem and find that '/lib/modules' is
  missing.
  
   * Install local build scripts as described at
  https://github.com/chrisglass/ubuntu-old-fashioned (note: you will need
  ubuntu-old-fashioned master for cosmic)
  
  * Re-build the images using the updated livecd-rootfs package.
  
  * Unpack the resulting livecd.ubuntu-cpc.squashfs artifact using
  unsquashfs again.
  
  * Inspect the unpacked root filesystem and find that '/lib/modules'
  exists.
  
  * Do the above for Bionic and Cosmic.
  
  [Regression Potential]
  
   * This is a fix to a regression. The existence of the directory had
  previously been ensured, but the mkdir call got lost in recent re-
  factoring.
  
   * Packaging tools should not take offense at the existence of a
- directory, even if it was not part of a package at that time. So
- potential for regressions from this fix is basically zero.
+ directory, even if it was not part of a package. So potential for
+ unforseeable regressions is very low.
  
  ===
  
  Let me first start with saying MAAS is *not* using iSCSI anymore and is
  *NOT* in this case either.
  
  For some reason now using enlistment, commissioning, and deploying the
  ephemeral environment will block for 1 min 30 seconds waiting for the
  iSCSI daemon to succeed, which it never does.
  
  This increases the boot time drastically.

** Description changed:

  [Impact]
  
   * Affects environments where the base image is read-only but kernel
  modules are copied to a tempfs or other overlay mounted on /lib/modules.
  
-  * This affects users of our stable release images.
+  * This affects users of our stable release images available from http
+ ://cloud-images.ubuntu.com.
  
   * The attached fixes ensure /lib/modules always exists by creating it
  explicitly instead of relying on it to come from a package.
  
  [Test Case]
  
   * Download http://cloud-images.ubuntu.com/bionic/current/bionic-server-
  cloudimg-amd64.squashfs
  
   * Unpack it via `sudo unsquashfs bionic-server-cloudimg-amd64.squashfs`
  
   * Inspect the unpacked root filesystem and find that '/lib/modules' is
  missing.
  
   * Install local build scripts as described at
  

[Bug 1792905] Re: [2.5] iSCSI systemd services fails and blocks for 1 min 30 secconds

2018-09-20 Thread Tobias Koch
** Description changed:

- #! NB Foundations coding day task !#
- 
  [Impact]
  
-  * An explanation of the effects of the bug on users and
+  * Affects environments where the base image is read-only but kernel
+ modules are copied to a tempfs or other overlay mounted on /lib/modules.
  
-  * justification for backporting the fix to the stable release.
+  * This affects users of our stable release images.
  
-  * In addition, it is helpful, but not required, to include an
-explanation of how the upload fixes this bug.
+  * The attached fixes ensure /lib/modules always exists by creating it
+ explicitly instead of relying on it to come from a package.
  
  [Test Case]
  
-  * detailed instructions how to reproduce the bug
+  * TODO...
  
-  * these should allow someone who is not familiar with the affected
-package to reproduce the bug and verify that the updated package fixes
-the problem.
+  * these should allow someone who is not familiar with the affected
+    package to reproduce the bug and verify that the updated package fixes
+    the problem.
  
  [Regression Potential]
  
-  * discussion of how regressions are most likely to manifest as a result
- of this change.
+  * This is a fix to a regression. The existence of the directory had
+ previously been ensured, but the mkdir call got lost in recent re-
+ factoring.
  
-  * It is assumed that any SRU candidate patch is well-tested before
-upload and has a low overall risk of regression, but it's important
-to make the effort to think about what ''could'' happen in the
-event of a regression.
- 
-  * This both shows the SRU team that the risks have been considered,
-and provides guidance to testers in regression-testing the SRU.
- 
- [Other Info]
-  
-  * Anything else you think is useful to include
-  * Anticipate questions from users, SRU, +1 maintenance, security teams and 
the Technical Board
-  * and address these questions in advance
+  * Packaging tools should not take offense at the existence of a
+ directory, even if it was not part of a package at that time. So
+ potential for regressions from this fix is basically zero.
  
  ===
  
  Let me first start with saying MAAS is *not* using iSCSI anymore and is
  *NOT* in this case either.
  
  For some reason now using enlistment, commissioning, and deploying the
  ephemeral environment will block for 1 min 30 seconds waiting for the
  iSCSI daemon to succeed, which it never does.
  
  This increases the boot time drastically.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1792905

Title:
  [2.5] iSCSI systemd services fails and blocks for 1 min 30 secconds

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1792905/+subscriptions
