Yep, fixed.
With wily-proposed enabled,
# apt-get install lxc
installs:
2015-11-15 11:29:28 status installed liblxc1:amd64 1.1.5-0ubuntu0.15.10.2
2015-11-15 11:29:29 status installed python3-lxc:amd64 1.1.5-0ubuntu0.15.10.2
2015-11-15 11:29:30 status installed lxc:amd64 1.1.5-0ubuntu0.15.10.2
I haven't seen this issue for ages, using primarily sid and wily guests.
Trying again on my desktop, which definitely used to have the issue, I
can't reproduce it, using:
2015-11-15 11:29:28 status installed liblxc1:amd64 1.1.5-0ubuntu0.15.10.2
2015-11-15 11:29:29 status installed
Public bug reported:
The package attempts to start itself as part of the post-install.
However, the default config, shipped with the application, prevents it
from starting. This makes apt angry:
Setting up openhpid (2.14.1-1.3ubuntu2) ...
Job for openhpid.service failed. See "systemctl status
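The usual packaging fix is to tolerate a failed first start in the postinst
rather than letting it abort configuration. A minimal sketch of the idiom;
start_daemon here is a stand-in for the real invoke-rc.d /
deb-systemd-invoke call a maintainer script would use, and is made to fail
to simulate the log above:

```shell
#!/bin/sh
# Sketch: don't let a failed daemon start abort package configuration.
# start_daemon is a hypothetical stand-in for invoke-rc.d /
# deb-systemd-invoke; it fails on purpose, like openhpid does with its
# shipped default config.
start_daemon() {
    return 1
}

if ! start_daemon openhpid; then
    echo "openhpid failed to start; continuing anyway" >&2
fi
echo "postinst finished"
```

With the guard in place, dpkg sees a zero exit status and apt stays calm,
even though the daemon is not running.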
Public bug reported:
Once systemd 226 is installed in an unprivileged Debian Sid container,
lxc-attach no longer functions:
% lxc-attach -n siddy
lxc-attach: cgmanager.c: lxc_cgmanager_enter: 698 call to
cgmanager_move_pid_abs_sync failed: invalid request
lxc-attach: cgmanager.c: cgm_attach:
Public bug reported:
Running lxc-stop on a container which doesn't exist actually creates the
container, and messes up its permissions, causing sequences like:
$ lxc-stop -n foo
$ lxc-destroy -n foo
$ lxc-clone clean-machine foo
...to fail with bad errors:
# it definitely doesn't exist to start
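Until that's fixed, one defensive workaround (a sketch, not from the
report: container_exists and safe_stop are hypothetical helpers built on
lxc-info) is to check that the container exists before stopping it, so the
buggy "stop creates the container" path is never reached:

```shell
#!/bin/sh
# Hypothetical guard: only run lxc-stop on names lxc-info recognises.
container_exists() {
    lxc-info -n "$1" >/dev/null 2>&1
}

safe_stop() {
    if container_exists "$1"; then
        lxc-stop -n "$1"
    else
        echo "no such container: $1" >&2
        return 1
    fi
}
```

e.g. `safe_stop foo && lxc-destroy -n foo` refuses to touch a name that
was never created.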
No, that proposed workaround seems to make no difference.
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to lxc in Ubuntu.
https://bugs.launchpad.net/bugs/1452601
Title:
vivid container's networking.service fails on boot with
Public bug reported:
When starting a Vivid container, it fails to get an IP address. It
believes networking.service was successful, but actually it dies with
SIGPIPE. Restarting networking.service gets an IP, as expected.
Starting networking used to work with pre-vivid containers. I'm
This also occurs for limited user containers, without asking for a btrfs
backing store. This is more of a problem as the limited user can't
delete (or even detect) the subvolume themselves.
On a standard whole-partition-as-btrfs system, as set up by the installer, this
looks like:
ID 257 gen
Public bug reported:
lxc is configured for limited user usage, i.e. the backing store is
always dir. Let's create a container, and snapshot it in its pristine
state:
% lxc-create -n restorebug -t download -- -d ubuntu -r utopic -a amd64
% lxc-snapshot -n restorebug
[...]
lxc_container:
Good spot, thanks: /var/lock is on /, not a symlink to /var/run.
These machines are provisioned from OVH.com templates. I have raised a
support request with them to see if they are aware of this or are doing
anything strange on purpose.
--
$ cat /proc/self/mountinfo | fgrep lock
27 20 0:19 /
Public bug reported:
The presence of /var/lock/lxc-net causes "service lxc-net start" to
claim success while actually doing nothing useful.
When the system goes down hard, /var/lock/lxc-net is not removed, fair
enough. This means that systems require manual intervention after
booting.
You can
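A touch-file under /var/lock survives a hard shutdown by design, whereas a
lock taken with flock(1) lives on an open file descriptor and is dropped
by the kernel as soon as the holder dies. A small demonstration of the
difference (the lock file here is a throwaway temp file, not the real
/var/lock/lxc-net):

```shell
#!/bin/sh
# flock(1) locks vanish with the process that held them, so a hard stop
# cannot leave a stale lock behind -- unlike a plain touch-file.
lockfile=$(mktemp)

(
    flock -n 9 || exit 1
    echo "first holder: got lock"
) 9>"$lockfile"

# The subshell is gone, so the lock is free even though the file remains.
flock -n "$lockfile" -c 'echo "second holder: got lock"'

rm -f "$lockfile"
```

Both echoes run: the second acquisition succeeds immediately because the
first holder's lock died with its process.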
Public bug reported:
The OpenSSH client defaults to picking hmac-md5, which is based on the
demonstrably insecure MD5 algorithm:
faux@wilf:~% ssh -v localhost true 2>&1 | grep hmac
debug1: kex: server-client aes128-ctr hmac-md5 none
debug1: kex: client-server aes128-ctr hmac-md5 none
MD5 has had
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openssh in Ubuntu.
https://bugs.launchpad.net/bugs/933480
Title:
Picks hmac-md5 over hmac-sha1
To manage notifications about this bug go to:
For precise, openssh-5.9 supports even more secure algorithms, so the line
should perhaps be:
MACs hmac-sha2-512,hmac-sha2-256,hmac-sha1,hmac-ripemd160,umac...@openssh.com,hmac-md5,hmac-sha1-96,hmac-sha2-512-96,hmac-sha2-256-96,hmac-md5-96
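For reference, that directive goes in /etc/ssh/ssh_config (system-wide) or
~/.ssh/config (per-user) as a single MACs line. A trimmed sketch that
keeps only the SHA-2 and SHA-1 entries and drops MD5 entirely (the
trimming is my assumption, not part of the quoted suggestion):

```
Host *
    MACs hmac-sha2-512,hmac-sha2-256,hmac-sha1
```

OpenSSH negotiates the first MAC in the client's list that the server
also supports, so order expresses preference.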