Public bug reported:

I've seen this issue fairly consistently when trying to launch multiple
LXC instances:

euca-run-instances -n 4 -k whatever ami-XXXXXX

Some instances start OK; others fail to start and sit in the 'pending'
state, with this error message in the nova-compute.log file:

>>>>>>>>>>>>>>>>>>>>>

2011-09-06 15:21:07,106 DEBUG nova.utils [-] Running cmd (subprocess): sudo 
mount /dev/nbd12 /var/lib/nova/instances/instance-00000006//rootfs from 
(pid=1229) execute /usr/lib/pymodules/python2.7/nova/utils.py:165
2011-09-06 15:21:07,219 DEBUG nova.virt.libvirt_conn [-] instance 
instance-00000006: is running from (pid=1229) spawn 
/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py:599
2011-09-06 15:21:07,220 DEBUG nova.compute.manager [-] Checking state of 
instance-00000006 from (pid=1229) _get_power_state 
/usr/lib/pymodules/python2.7/nova/compute/manager.py:188
2011-09-06 15:21:07,242 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE:   File 
"/usr/lib/pymodules/python2.7/nova/exception.py", line 98, in wrapped
(nova.exception): TRACE:     return f(*args, **kw)
(nova.exception): TRACE:   File 
"/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 453, in 
run_instance
(nova.exception): TRACE:     self._run_instance(context, instance_id, **kwargs)
(nova.exception): TRACE:   File 
"/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 431, in 
_run_instance
(nova.exception): TRACE:     current_power_state = 
self._get_power_state(context, instance)
(nova.exception): TRACE:   File 
"/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 190, in 
_get_power_state
(nova.exception): TRACE:     return 
self.driver.get_info(instance['name'])["state"]
(nova.exception): TRACE:   File 
"/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 1168, in 
get_info
(nova.exception): TRACE:     (state, max_mem, mem, num_cpu, cpu_time) = 
virt_dom.info()
(nova.exception): TRACE:   File "/usr/lib/python2.7/dist-packages/libvirt.py", 
line 1059, in info
(nova.exception): TRACE:     if ret is None: raise libvirtError 
('virDomainGetInfo() failed', dom=self)
(nova.exception): TRACE: libvirtError: internal error Unable to get cgroup for 
instance-00000006
(nova.exception): TRACE:
2011-09-06 15:21:07,262 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE:   File 
"/usr/lib/pymodules/python2.7/nova/rpc/impl_kombu.py", line 620, in 
_process_data
(nova.rpc): TRACE:     rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE:   File "/usr/lib/pymodules/python2.7/nova/exception.py", 
line 129, in wrapped
(nova.rpc): TRACE:     raise Error(str(e))
(nova.rpc): TRACE: Error: internal error Unable to get cgroup for 
instance-00000006
(nova.rpc): TRACE:
2011-09-06 15:21:07,281 INFO nova.virt.libvirt_conn [-] Instance 
instance-00000006 spawned successfully.
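The traceback shows that _get_power_state calls virt_dom.info() right after spawn, and under concurrent LXC launches that call can hit a transient "Unable to get cgroup" error even though the instance later spawns successfully. A minimal sketch of one possible mitigation (not a patch against the actual nova code; `retry_on_error` and its parameters are hypothetical names) would be to retry the transient call a few times before giving up:

```python
import time


def retry_on_error(fn, retries=3, delay=0.5, exc_types=(Exception,)):
    """Call fn(), retrying up to `retries` times on the given exceptions.

    Sketch of a mitigation for transient races such as libvirt's
    'Unable to get cgroup' seen while several LXC instances spawn
    concurrently: the cgroup may simply not exist yet on the first poll.
    """
    for attempt in range(retries):
        try:
            return fn()
        except exc_types:
            if attempt == retries - 1:
                # Out of retries: re-raise so the caller still sees failures
                # that are not transient.
                raise
            time.sleep(delay)


# Illustrative use: poll a domain's state, tolerating a few transient errors.
# `virt_dom` here stands in for the real libvirt domain object.
# state = retry_on_error(lambda: virt_dom.info()[0],
#                        retries=5, delay=0.5,
#                        exc_types=(Exception,))
```

This only papers over the race rather than fixing it, but it matches the observation in the log that the instance is reported as "spawned successfully" moments after the error.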

ProblemType: Bug
DistroRelease: Ubuntu 11.10
Package: nova-compute-lxc 2011.3~rc~20110901.1523-0ubuntu1
ProcVersionSignature: Ubuntu 3.0.0-10.16-server 3.0.4
Uname: Linux 3.0.0-10-server x86_64
Architecture: amd64
Date: Tue Sep  6 16:30:36 2011
NovaConf: Error: [Errno 13] Permission denied: '/etc/nova/nova.conf'
PackageArchitecture: all
ProcEnviron:
 LANGUAGE=en_GB:
 LANG=en_GB.UTF-8
 SHELL=/bin/bash
SourcePackage: nova
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: nova (Ubuntu)
     Importance: Undecided
         Status: New


** Tags: amd64 apport-bug oneiric

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/842845

Title:
  problems starting multiple lxc instances concurrently


-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com