[Bug 1313550] Re: ping does not work as a normal user on trusty tarball cloud images.

2014-05-08 Thread Jason Gerard DeRose
This also affects the `gnome-keyring` package. The System76 imaging system (Tribble) uses a tar-based approach similar to the MAAS fast-path installer, and we've had to add a work-around for /usr/bin/gnome-keyring-daemon on our desktop images:

$ getcap /usr/bin/gnome-keyring-daemon
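For context, a sketch of the kind of work-around described above: file capabilities are stored in the `security.capability` extended attribute, which a plain tar round-trip drops, so they have to be restored by hand after unpacking. This is a hedged illustration, not the exact Tribble code; the capability value is an assumption based on what the gnome-keyring package normally sets (`cap_ipc_lock`, so the daemon can mlock secret memory), and the path is a placeholder for the unpacked image.

```shell
#!/bin/sh
# Hedged sketch: restore a file capability stripped by a tar-based
# image unpack (requires root).
TARGET=/usr/bin/gnome-keyring-daemon   # path inside the unpacked image

# Show current capabilities; empty output means they were stripped.
getcap "$TARGET"

# Re-apply the capability the package normally sets.
setcap cap_ipc_lock+ep "$TARGET"

# Verify it took effect.
getcap "$TARGET"
```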

[Bug 1313550] Re: ping does not work as a normal user on trusty tarball cloud images.

2014-05-08 Thread Jason Gerard DeRose
Stéphane, Gotcha, thanks for the feedback! So am I correct in thinking that the --xattrs option is currently broken in tar on 14.04? If so, is there any chance this could be fixed in an SRU? -- You received this bug notification because you are a member of Ubuntu Server Team, which is

[Bug 1313550] Re: ping does not work as a normal user on trusty tarball cloud images.

2014-05-08 Thread Jason Gerard DeRose
Clint, Ah, thanks for bringing up --xattrs-include=*, I didn't notice this option! I agree this is really a bug/misfeature in tar... if I use --xattrs both when creating and unpacking a tarball, I expect it to just work.
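A minimal round-trip showing the usage being discussed (GNU tar assumed): `--xattrs` alone applies a restrictive default include filter on extraction, which is why `--xattrs-include='*'` is needed to bring back everything, including `security.capability` (the xattr that file capabilities like ping's live in; restoring security.* xattrs requires root, which this unprivileged sketch doesn't exercise).

```shell
#!/bin/sh
set -e
# Round-trip a tree through tar with extended attributes preserved.
workdir=$(mktemp -d)
cd "$workdir"
mkdir src
echo hello > src/file

# Create the archive, storing all xattrs.
tar --xattrs --xattrs-include='*' -cf image.tar -C src .

# Unpack it the same way, restoring all xattrs.
mkdir dst
tar --xattrs --xattrs-include='*' -xf image.tar -C dst

cat dst/file
```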

[Bug 1429938] [NEW] systemd changes behavior of apt-get remove openssh-server

2015-03-09 Thread Jason Gerard DeRose
Public bug reported: On Trusty and Utopic, when you run `apt-get remove openssh-server` over an SSH connection, your existing SSH connection remains open, so it's possible to run additional commands afterward. However, on Vivid, now that the switch to systemd has been made, `apt-get remove

[Bug 1427654] Re: FFE: switch system init to systemd [not touch] in 15.04

2015-03-09 Thread Jason Gerard DeRose
Being able to run a script like this over SSH:

apt-get -y remove openssh-server
shutdown -h now

can be extremely useful in automation tooling, but the switch to systemd breaks this: https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1429938

[Bug 1429938] Re: systemd changes behavior of apt-get remove openssh-server

2015-03-09 Thread Jason Gerard DeRose
Also, just to clarify, this is definitely a change (or, in my mind, a regression) introduced by systemd. Yesterday, the System76 image master tool worked fine and dandy with an up-to-date Vivid VM, as it has throughout the rest of the previous Vivid dev cycle. Today things broke.

[Bug 1429938] Re: stopping ssh.service closes existing ssh connections

2015-03-11 Thread Jason Gerard DeRose
Hmm, now I'm thinking this has nothing to do with openssh-server. I think the problem is actually that when I run this over SSH:

# shutdown -h now

my ssh client exits with status 255... whereas running the same thing prior to the flip-over to systemd would exit with status 0.

[Bug 1429938] Re: stopping ssh.service closes existing ssh connections

2015-03-11 Thread Jason Gerard DeRose
So interestingly, this isn't happening when I just type these commands into an SSH session. But if you create a script like this in, say, /tmp/test.sh:

#!/bin/bash
apt-get -y purge openssh-server ssh-import-id
apt-get -y autoremove
shutdown -h now

And then execute this through an ssh call like

[Bug 1429938] Re: stopping ssh.service closes existing ssh connections

2015-03-11 Thread Jason Gerard DeRose
Also, on Vivid there will be this error:

Connection to localhost closed by remote host.

[Bug 1429938] Re: stopping ssh.service closes existing ssh connections

2015-03-11 Thread Jason Gerard DeRose
Same problem when running `reboot`, which I'd say is even more important for automation. Port 2204 is forwarding to a qemu VM running Utopic, port 2207 is running Vivid:

jderose@jgd-kudp1:~$ ssh root@localhost -p 2204 reboot
jderose@jgd-kudp1:~$ echo $?
0
jderose@jgd-kudp1:~$ ssh root@localhost

[Bug 1429938] Re: stopping ssh.service closes existing ssh connections

2015-03-11 Thread Jason Gerard DeRose
Okay, here's a simple way to reproduce:

$ ssh root@whatever shutdown -h now
$ echo $?

On Vivid, the exit status from the ssh client will be 255. On Trusty and Utopic it will be 0.
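For automation that has to cope with both behaviours, one hedged approach is to classify the ssh client's exit status instead of treating any non-zero status as a failure: ssh itself exits 255 on connection or protocol errors (including the remote side dropping the connection mid-command), while any other status is the remote command's own exit status. The helper name and labels here are made up for illustration.

```shell
# classify_ssh_status: map an ssh client exit status to a short label.
classify_ssh_status() {
    case "$1" in
        0)   echo "clean exit" ;;
        255) echo "connection closed or ssh error" ;;
        *)   echo "remote command failed with status $1" ;;
    esac
}

# Usage against a real host (hypothetical hostname):
#   ssh root@whatever shutdown -h now
#   classify_ssh_status $?
classify_ssh_status 255
```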

[Bug 1435428] Re: vivid: systemd breaks qemu-nbd mounting

2015-03-24 Thread Jason Gerard DeRose
** Summary changed:

- vivid: mounting with qemu-nbd fails
+ vivid: systemd breaks qemu-nbd mounting

** Description changed:

On Trusty and Utopic, this works:

$ sudo modprobe nbd
$ sudo qemu-nbd --snapshot -c /dev/nbd0 my.qcow2
$ sudo mount /dev/nbd0p1 /mnt
$ sudo umount /mnt

[Bug 1435428] Re: vivid: systemd breaks qemu-nbd mounting

2015-03-25 Thread Jason Gerard DeRose
@didrocks - yup, it's working now! Thank you!

[Bug 1435428] Re: vivid: systemd breaks qemu-nbd mounting

2015-03-25 Thread Jason Gerard DeRose
Hmm, and one more thing: qemu-nbd --disconnect (at least sometimes) doesn't seem to be working when booting with systemd:

$ ls /dev/nbd0*
/dev/nbd0  /dev/nbd0p1  /dev/nbd0p2  /dev/nbd0p5
$ sudo qemu-nbd --disconnect /dev/nbd0
/dev/nbd0 disconnected
$ echo $?
0
$ ls /dev/nbd0*
/dev/nbd0
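One hedged way to make a disconnect problem like this visible in scripts is to poll for the partition nodes going away rather than trusting `--disconnect`'s exit status. This is an illustrative sketch (requires root; the device path and the five-second timeout are arbitrary placeholders), not part of the bug's fix.

```shell
#!/bin/sh
# After qemu-nbd --disconnect reports success, verify that the partition
# nodes actually disappear instead of trusting the exit status.
DEV=/dev/nbd0

qemu-nbd --disconnect "$DEV"

for i in $(seq 1 50); do
    # Partition nodes like /dev/nbd0p1 should vanish on disconnect.
    if ! ls "${DEV}"p* >/dev/null 2>&1; then
        echo "disconnected cleanly"
        exit 0
    fi
    sleep 0.1
done
echo "partition nodes still present after disconnect" >&2
exit 1
```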

[Bug 1435428] Re: vivid: systemd breaks qemu-nbd mounting

2015-03-25 Thread Jason Gerard DeRose
Hmmm, there may still be an issue, as I didn't encounter this yesterday when doing my task multiple times after booting with Upstart. I'm mounting these qcow2 disk images in order to export a tarball of the filesystem. First three tarballs exported swimmingly, but the fourth time it seemed to

[Bug 1435428] Re: vivid: systemd breaks qemu-nbd mounting

2015-03-25 Thread Jason Gerard DeRose
Hmm, maybe something else was going on. In an isolated test script, I haven't reproduced the disconnect problem again yet. I attached the script I'm using in case anyone else wants to give it a go. ** Attachment added: qemu-nbd-test.py

[Bug 1435428] [NEW] vivid: mounting with qemu-nbd fails

2015-03-23 Thread Jason Gerard DeRose
Public bug reported: On Trusty and Utopic, this works:

$ sudo modprobe nbd
$ sudo qemu-nbd --snapshot -c /dev/nbd0 my.qcow2
$ sudo mount /dev/nbd0p1 /mnt
$ sudo umount /mnt

But on Vivid, even though the mount command exits with 0, something goes awry and the mount point gets unmounted
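The failing sequence above can be written a bit more defensively. This is a hedged workaround-style sketch (requires root; not the fix that eventually landed): `udevadm settle` waits for udev to finish creating the partition nodes before mounting, and `mountpoint -q` (from util-linux) catches the symptom reported here, where mount exits 0 but the mount point is unmounted behind the caller's back. Image and device paths are placeholders.

```shell
#!/bin/sh
set -e
# Defensive version of the qemu-nbd mount sequence (requires root).
IMG=my.qcow2
DEV=/dev/nbd0

modprobe nbd
qemu-nbd --snapshot -c "$DEV" "$IMG"

# Wait for /dev/nbd0p1 etc. to appear instead of racing udev.
udevadm settle

mount "${DEV}p1" /mnt

# Verify the mount actually stuck before doing any work under /mnt.
if ! mountpoint -q /mnt; then
    echo "mount of ${DEV}p1 did not stick" >&2
    exit 1
fi

umount /mnt
qemu-nbd --disconnect "$DEV"
```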

[Bug 1435428] Re: vivid: mounting with qemu-nbd fails

2015-03-23 Thread Jason Gerard DeRose
** Description changed:

On Trusty and Utopic, this works:

$ sudo modprobe nbd
$ sudo qemu-nbd --snapshot -c /dev/nbd0 my.qcow2
$ sudo mount /dev/nbd0p1 /mnt
$ sudo umount /mnt

- But on Vivid, even though the mount command exits with 0, something
- goes awry and the mount point

[Bug 1435428] Re: disconnecting qemu-nbd leaves device node behind

2015-04-03 Thread Jason Gerard DeRose
Martin, After a lot more testing, both synthetic and normally using my day-to-day tools, I haven't been able to reproduce the disconnect problem, so I'm writing that off as a fluke or as some silly error on my part. As far as I can tell, the original qemu-nbd mounting bug has been solidly