I'm using the filter scheduler.
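
(For reference, the RAM check the FilterScheduler applies is conceptually
something like the Python sketch below. This is a simplified, standalone
illustration, not the actual nova code; ram_allocation_ratio is named after
the real nova option, but the function and its arguments are made up for this
example.)

    # Simplified sketch of a scheduler-style RAM check (illustrative only,
    # not nova's RamFilter implementation).
    def host_has_enough_ram(free_ram_mb, total_ram_mb, requested_ram_mb,
                            ram_allocation_ratio=1.0):
        """True if the host can accept an instance needing requested_ram_mb."""
        oversubscribed_total = total_ram_mb * ram_allocation_ratio
        used_ram_mb = total_ram_mb - free_ram_mb
        return oversubscribed_total - used_ram_mb >= requested_ram_mb

    # Example: a 16 GB host that is otherwise idle still cannot take a
    # 16 GB flavor unless RAM overcommit is allowed.
    print(host_has_enough_ram(15 * 1024, 16 * 1024, 16 * 1024))        # False
    print(host_has_enough_ram(15 * 1024, 16 * 1024, 16 * 1024, 1.5))   # True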

On Wed, May 30, 2012 at 5:44 AM, Vaze, Mandar <mandar.v...@nttdata.com> wrote:

> > I'm a bit disappointed that the request even went through to the compute
> node to build the instance, as the scheduler *should* already know the
> memory exceeds the available memory on the box.
>
> FilterScheduler (and ChanceScheduler?) checks this condition before
> sending the request to the compute node; SimpleScheduler does not.
>
> Leander: Which scheduler are you using?
>
> -Mandar
>
> -----Original Message-----
> From: openstack-bounces+mandar.vaze=nttdata....@lists.launchpad.net [mailto:
> openstack-bounces+mandar.vaze=nttdata....@lists.launchpad.net] On Behalf
> Of Jay Pipes
> Sent: Wednesday, May 30, 2012 12:58 AM
> To: openstack@lists.launchpad.net
> Subject: Re: [Openstack] [Nova] Instances which use flavors with disk
> space fail to spawn
>
> Leander, I would submit a bug about this. The error message is cryptic (to
> say the least!), and I think it would be better if the scheduler determined
> whether the requested flavor asks for more memory than the total amount
> available on the server. I'm a bit disappointed that the request even went
> through to the compute node to build the instance, as the scheduler
> *should* already know the memory exceeds the available memory on the box.
>
> Best,
> -jay
>
> On 05/29/2012 11:07 AM, Leander Bessa Beernaert wrote:
> > For anyone interested, I've figured out that the instances were not
> > getting spawned because the amount of memory in the flavor was equal to
> > the total physical memory available on the underlying hardware.
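> >
> > (If anyone wants to double-check this on their own compute node, a rough
> > Python sketch like the following compares a flavor's memory_mb against the
> > host's physical RAM; flavor_ram_mb here is just a placeholder value, not
> > something read from nova:)
> >
> >     # Rough, illustrative check: does the flavor's RAM leave any headroom
> >     # on the compute node for the host OS and qemu overhead?
> >     def host_total_ram_mb():
> >         """Read MemTotal (reported in kB) from /proc/meminfo, return MB."""
> >         with open("/proc/meminfo") as meminfo:
> >             for line in meminfo:
> >                 if line.startswith("MemTotal:"):
> >                     return int(line.split()[1]) // 1024
> >         raise RuntimeError("MemTotal not found in /proc/meminfo")
> >
> >     flavor_ram_mb = 16384  # placeholder: memory_mb of the flavor being booted
> >     host_ram_mb = host_total_ram_mb()
> >     if flavor_ram_mb >= host_ram_mb:
> >         print("Flavor RAM (%d MB) >= host RAM (%d MB): nothing left for "
> >               "the host itself, so the guest cannot start."
> >               % (flavor_ram_mb, host_ram_mb))
> >     else:
> >         print("Flavor RAM fits: %d MB of %d MB." % (flavor_ram_mb, host_ram_mb))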
> >
> > On Tue, May 29, 2012 at 11:10 AM, Leander Bessa Beernaert
> > <leande...@gmail.com <mailto:leande...@gmail.com>> wrote:
> >
> >     Hello,
> >
> >     I'm unable to boot any image with a flavor that has disk space
> >     associated with it. It always fails in the spawning state. Below is
> >     the log output of nova-compute:
> >
> >     2012-05-28 16:20:25 ERROR nova.compute.manager [req-1c725f9c-acae-47c4-b5ae-9ed5d2d9830c 9494d025721c4d7bb28a16fa796f9414 04282e9aff474d2383bb4d4417673e0a] [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7] Instance failed to spawn
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7] Traceback (most recent call last):
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 592, in _spawn
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]     self._legacy_nw_info(network_info), block_device_info)
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]     return f(*args, **kw)
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 922, in spawn
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]     self._create_new_domain(xml)
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1575, in _create_new_domain
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]     domain.createWithFlags(launch_flags)
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 581, in createWithFlags
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7] libvirtError: Unable to read from monitor: Connection reset by peer
> >     2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]
> >     2012-05-28 16:20:25 DEBUG nova.compute.manager [req-1c725f9c-acae-47c4-b5ae-9ed5d2d9830c 9494d025721c4d7bb28a16fa796f9414 04282e9aff474d2383bb4d4417673e0a] [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7] Deallocating network for instance from (pid=23518) _deallocate_network /usr/lib/python2.7/dist-packages/nova/compute/manager.py:616
> >     2012-05-28 16:20:25 DEBUG nova.rpc.amqp [req-1c725f9c-acae-47c4-b5ae-9ed5d2d9830c 9494d025721c4d7bb28a16fa796f9414 04282e9aff474d2383bb4d4417673e0a] Making asynchronous cast on network... from (pid=23518) cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:346
> >     2012-05-28 16:20:26 ERROR nova.rpc.amqp [req-1c725f9c-acae-47c4-b5ae-9ed5d2d9830c 9494d025721c4d7bb28a16fa796f9414 04282e9aff474d2383bb4d4417673e0a] Exception during message handling
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp Traceback (most recent call last):
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 252, in _process_data
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     return f(*args, **kw)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     sys.exc_info())
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     self.gen.next()
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 171, in decorated_function
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     return function(self, context, instance_uuid, *args, **kwargs)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 651, in run_instance
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     do_run_instance()
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 945, in inner
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     retval = f(*args, **kwargs)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 650, in do_run_instance
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     self._run_instance(context, instance_uuid, **kwargs)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 451, in _run_instance
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     self._set_instance_error_state(context, instance_uuid)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     self.gen.next()
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 432, in _run_instance
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     self._deallocate_network(context, instance)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     self.gen.next()
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 429, in _run_instance
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     injected_files, admin_password)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 592, in _spawn
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     self._legacy_nw_info(network_info), block_device_info)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     return f(*args, **kw)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 922, in spawn
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     self._create_new_domain(xml)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1575, in _create_new_domain
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     domain.createWithFlags(launch_flags)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 581, in createWithFlags
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
> >     2012-05-28 16:20:26 TRACE nova.rpc.amqp libvirtError: Unable to read from monitor: Connection reset by peer
> >
> >
> >     Any suggestions?
> >
> >
> >     Regards,
> >
> >     Leander
> >
> >
> >
> >
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
