[Openstack] Floating IP is wasting IP resources

2013-05-02
Recently I have been testing floating IPs on Grizzly, and I found that the
floating IP mechanism is a bit wasteful of public IP addresses.

In some circumstances, such as a public cloud environment, there is only one
user per project (tenant). If that user wants to use a floating IP, he has to
create a router and set a gateway on it, and this step occupies one
additional public IP address. So using a floating IP consumes at least two
public addresses in total.
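
For example, on Grizzly the sequence looks roughly like this (the router name
is made up, and the external network 'ext-net' is assumed to already exist):

  quantum router-create demo-router
  quantum router-gateway-set demo-router ext-net   # the gateway port takes the 1st public IP
  quantum floatingip-create ext-net                # the floating IP itself is the 2nd public IP
  quantum floatingip-associate FLOATINGIP_ID INSTANCE_PORT_ID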

So my question is: is there any way to avoid this?

Thanks
Ray


[Openstack] How to hot-plug network interface for a running instance

2013-04-22
Hi, All

I'm testing network interface hot-plugging. I can successfully hot-add an
interface to a running instance with the command 'nova interface-attach
--net-id xx instance_name', but the interface's network must already have
been specified for the instance when it was created. For a new network, I
can't attach it to the instance, and I get these errors in nova-api.log:

==
2013-04-22 13:27:38.484 ERROR nova.api.openstack [req-73c7fcb6-5137-43c0-aa4f-c3ba3509df64 3745e52df7864de79f912d7a9479e182 67b78b4656cf4affa49fc75b847e8914] Caught error: u'e4eca07f-bf8e-435a-84e2-628f89067623' is not in list
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 430, in _process_data
    rval = self.proxy.dispatch(ctxt, version, method, **args)
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 133, in dispatch
    return getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2969, in attach_interface
    self.conductor_api)
  File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 46, in wrapper
    res = f(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 335, in allocate_port_for_instance
    conductor_api=conductor_api)
  File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 46, in wrapper
    res = f(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 285, in allocate_for_instance
    nw_info = self._get_instance_nw_info(context, instance, networks=nets)
  File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 367, in _get_instance_nw_info
    nw_info = self._build_network_info_model(context, instance, networks)
  File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 788, in _build_network_info_model
    [n['id'] for n in networks])
  File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 945, in _ensure_requested_network_ordering
    unordered.sort(key=lambda i: preferred.index(accessor(i)))
  File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 945, in <lambda>
    unordered.sort(key=lambda i: preferred.index(accessor(i)))
ValueError: u'e4eca07f-bf8e-435a-84e2-628f89067623' is not in list
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack Traceback (most recent call last):
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 81, in __call__
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack     return req.get_response(self.application)
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack     application, catch_exc_info=False)
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack     return resp(environ, start_response)
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py", line 450, in __call__
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack     return self.app(env, start_response)
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack     return resp(environ, start_response)
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack     return resp(environ, start_response)
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack     return resp(environ, start_response)
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack     response = self.app(environ, start_response)
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack     return resp(environ, start_response)
2013-04-22 13:27:38.484 22571 TRACE nova.api.openstack   File
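
Reading the trace, _ensure_requested_network_ordering() in quantumv2/api.py
sorts the instance's ports by the list of networks recorded when the instance
was booted, and the UUID of the newly created network
(e4eca07f-bf8e-435a-84e2-628f89067623) is not in that list, hence the
ValueError. One thing that might be worth trying (the port UUID below is a
placeholder, and it may well hit the same ordering code path) is pre-creating
a port on the new network and attaching by port ID instead:

  quantum port-create e4eca07f-bf8e-435a-84e2-628f89067623
  nova interface-attach --port-id PORT_UUID instance_name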

[Openstack] Lost copy-on-write feature when creating volume from image using Ceph RBD

2013-04-12
Hi, Guys

Recently I have been testing the new OpenStack release, Grizzly. I'm using
Ceph RBD as the backend storage for both Glance and Cinder.

In the previous release, Folsom, when the parameter
'show_image_direct_url=True' was set in glance-api.conf, creating a volume
from an image completed almost immediately, because it just created a
copy-on-write clone of the image's snapshot from Ceph's 'images' pool into
the 'volumes' pool.
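
At the Ceph level, that fast path amounts to something like this (the UUIDs
are placeholders; as far as I know the Glance RBD store keeps a protected
snapshot named 'snap' on each image):

  rbd snap ls images/IMAGE_UUID                                # shows the protected 'snap'
  rbd clone images/IMAGE_UUID@snap volumes/volume-VOLUME_UUID  # instant copy-on-write clone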

But now I find that the feature is gone: it always performs a conversion,
downloading the whole image file from Ceph's 'images' pool, converting it to
RAW format, and then uploading it to Ceph's 'volumes' pool, no matter what
the image's original format is. This takes much longer than it did under
Folsom.
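
For reference, these are the settings I believe are involved. The
glance_api_version line is my assumption, since the image's direct URL is
only exposed through the Glance v2 API while Cinder defaults to v1; and as
far as I know the clone path also requires the image to already be stored in
RAW format:

  # glance-api.conf
  show_image_direct_url = True

  # cinder.conf
  glance_api_version = 2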


So is this a bug, or have I misconfigured something?

Thanks
Ray