This bug was fixed in the package lxc - 1.0.0~alpha1-0ubuntu2
---
lxc (1.0.0~alpha1-0ubuntu2) saucy; urgency=low
* Add allow-stderr to autopkgtest restrictions, as the Ubuntu template
uses policy-rc.d to disable some daemons and that causes a message to
be printed on stderr
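For context, that restriction is declared in debian/tests/control; a minimal sketch of such an entry (the test name and dependency line here are assumptions, not the actual lxc packaging):

```
Tests: exercise
Depends: @
Restrictions: allow-stderr
```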
The Go API bindings have been tested with a large number of simultaneous
create/start/destroy cycles, so I believe the upstream API thread-safety
work has fixed this, at least upstream.
** Changed in: lxc (Ubuntu)
       Status: Confirmed => Fix Committed
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to lxc in Ubuntu.
https://bugs.launchpad.net/bugs/1014916
Title:
  simultaneously started lucid containers pause while starting
Note that the workaround helps significantly but is not always
sufficient. If I do not apply it, I can reliably encounter the problem
every single test run.
Another team is working on moving us to run on Precise in production,
which would let us use Precise containers. Precise containers do not
exhibit this problem.
This was working very well for us, and the lxc-start-ephemeral TRIES=60
was working fine. Starting last week, we began seeing a recurrence of
this problem, even with the workarounds applied and TRIES hacked to 180.
I intend to circle around with Serge and see if he has any other ideas.
--
Thanks, Gary, I'll try to reproduce this.
** Changed in: lxc (Ubuntu)
       Importance: Undecided => Medium
--
@Gary,
I wasn't able to reproduce this on a m1.xlarge.
Regarding needing disk space, note that /dev/xvdb should have a large
amount of disk. You can unmount /mnt, then pvcreate /dev/xvdb and
vgcreate lxc /dev/xvdb, and pass -B lvm as an additional lxc-create
flag to create an lvm-backed container.
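Spelled out as a command sequence (run as root; the template and container name below are assumptions, and -B lvm uses a volume group named lxc by default, which is why the vgcreate uses that name):

```shell
umount /mnt                 # free the ephemeral disk mounted at /mnt
pvcreate /dev/xvdb          # turn it into an LVM physical volume
vgcreate lxc /dev/xvdb      # volume group "lxc", the default for -B lvm
lxc-create -t ubuntu -n lucid1 -B lvm   # lvm-backed container (name assumed)
```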
I wasn't able to reproduce this on a cc2.8xlarge either.
--
** Changed in: lxc (Ubuntu)
       Status: New => Confirmed
--
Ok I've been able to reproduce this. I've tried with lvm snapshot
containers - those did NOT do this. I don't understand why, unless it's
simply the timing.
I used the following script:
cat notit.sh
#!/bin/bash
# collect each container's lxc.network.hwaddr and print any duplicates
maclist=()
for c in /var/lib/lxc/*/config; do
    mac=$(awk '/lxc.network.hwaddr/ {print $3}' "$c")
    maclist+=("$mac")
done
printf '%s\n' "${maclist[@]}" | sort | uniq -d
Actually simply disabling udevtrigger.conf doesn't work, because in
lucid eth0 won't then come up. You also need to change the 'start on'
condition in /etc/init/networking.conf to
start on (local-filesystems)
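In script form, the two changes look like this (a minimal sketch; the real target would be something like /var/lib/lxc/<name>/rootfs, but this demo builds a scratch rootfs in a temp directory so the steps can be shown safely, and the original 'start on' line is a stand-in):

```shell
#!/bin/sh
# Build a scratch rootfs standing in for a lucid container's filesystem.
rootfs=$(mktemp -d)
mkdir -p "$rootfs/etc/init"
printf 'start on (local-filesystems and net-device-up IFACE=lo)\n' \
    > "$rootfs/etc/init/networking.conf"
touch "$rootfs/etc/init/udevtrigger.conf"

# 1) disable udevtrigger by renaming its job file out of the way
mv "$rootfs/etc/init/udevtrigger.conf" \
   "$rootfs/etc/init/udevtrigger.conf.disabled"

# 2) rewrite networking.conf's start condition to local-filesystems only
sed -i 's/^start on .*/start on (local-filesystems)/' \
    "$rootfs/etc/init/networking.conf"

cat "$rootfs/etc/init/networking.conf"
```

On a real container you would point rootfs at the container's root filesystem instead of a temp directory.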
--
(lowering priority as there is a workaround)
** Changed in: lxc (Ubuntu)
       Importance: Medium => Low
--
Serge, thank you! The workaround appears to work very well for us. The
containers started quickly, and it should not only give us more reliable
starts but also seems to have taken at least three minutes off our
average run time, as you might expect.
--
** Description changed:
We are gathering more data, but it feels like we have enough to start a
bug report.
On Precise, on a 16 core/32 thread EC2 machine (instance type
cc2.8xlarge), when starting approximately eight or more Lucid containers
simultaneously, after the first seven or