Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-04 Thread Mohammed Naser
On Fri, Nov 2, 2018 at 9:32 PM Matt Riedemann  wrote:
>
> On 11/2/2018 2:22 PM, Eric Fried wrote:
> > Based on a (long) discussion yesterday [1] I have put up a patch [2]
> > whereby you can set [compute]resource_provider_association_refresh to
> > zero and the resource tracker will never* refresh the report client's
> > provider cache. Philosophically, we're removing the "healing" aspect of
> > the resource tracker's periodic and trusting that placement won't
> > diverge from whatever's in our cache. (If it does, it's because the op
> > hit the CLI, in which case they should SIGHUP - see below.)
> >
> > *except:
> > - When we initially create the compute node record and bootstrap its
> > resource provider.
> > - When the virt driver's update_provider_tree makes a change,
> > update_from_provider_tree reflects them in the cache as well as pushing
> > them back to placement.
> > - If update_from_provider_tree fails, the cache is cleared and gets
> > rebuilt on the next periodic.
> > - If you send SIGHUP to the compute process, the cache is cleared.
> >
> > This should dramatically reduce the number of calls to placement from
> > the compute service. Like, to nearly zero, unless something is actually
> > changing.
> >
> > Can I get some initial feedback as to whether this is worth polishing up
> > into something real? (It will probably need a bp/spec if so.)
> >
> > [1]
> > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03
> > [2]https://review.openstack.org/#/c/614886/
> >
> > ==
> > Background
> > ==
> > In the Queens release, our friends at CERN noticed a serious spike in
> > the number of requests to placement from compute nodes, even in a
> > stable-state cloud. Given that we were in the process of adding a ton of
> > infrastructure to support sharing and nested providers, this was not
> > unexpected. Roughly, what was previously:
> >
> >   @periodic_task:
> >   GET /resource_providers/$compute_uuid
> >   GET /resource_providers/$compute_uuid/inventories
> >
> > became more like:
> >
> >   @periodic_task:
> >   # In Queens/Rocky, this would still just return the compute RP
> >   GET /resource_providers?in_tree=$compute_uuid
> >   # In Queens/Rocky, this would return nothing
> >   GET /resource_providers?member_of=...&required=MISC_SHARES...
> >   for each provider returned above:  # i.e. just one in Q/R
> >   GET /resource_providers/$compute_uuid/inventories
> >   GET /resource_providers/$compute_uuid/traits
> >   GET /resource_providers/$compute_uuid/aggregates
> >
> > In a cloud the size of CERN's, the load wasn't acceptable. But at the
> > time, CERN worked around the problem by disabling refreshing entirely.
> > (The fact that this seems to have worked for them is an encouraging sign
> > for the proposed code change.)
> >
> > We're not actually making use of most of that information, but it sets
> > the stage for things that we're working on in Stein and beyond, like
> > multiple VGPU types, bandwidth resource providers, accelerators, NUMA,
> > etc., so removing/reducing the amount of information we look at isn't
> > really an option strategically.
>
> A few random points from the long discussion that should probably be
> re-posed here for wider thought:
>
> * There was probably a lot of discussion about why we needed to do this
> caching and stuff in the compute in the first place. What has changed
> that we no longer need to aggressively refresh the cache on every
> periodic? I thought initially it came up because people really wanted
> the compute to be fully self-healing to any external changes, including
> hot plugging resources like disk on the host to automatically reflect
> those changes in inventory. Similarly, external user/service
> interactions with the placement API which would then be automatically
> picked up by the next periodic run - is that no longer a desire, and/or
> how was the decision made previously that simply requiring a SIGHUP in
> that case wasn't sufficient/desirable?
>
> * I believe I made the point yesterday that we should probably not
> refresh by default, and let operators opt-in to that behavior if they
> really need it, i.e. they are frequently making changes to the
> environment, potentially by some external service (I could think of
> vCenter doing this to reflect changes from vCenter back into
> nova/placement), but I don't think that should be the assumed behavior
> by most and our defaults should reflect the "normal" use case.
>
> * I think I've noted a few times now that we don't actually use the
> provider aggregates information (yet) in the compute service. Nova host
> aggregate membership is mirrored to placement since Rocky [1] but that
> happens in the API, not the compute. The only thing I can think of
> that relied on resource provider aggregate information in the compute is
> the shared storage providers concept, but that's n

Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-04 Thread Mohammed Naser
Ugh, hit send accidentally.  Please take my comments lightly as I have not
been as involved with the development; I'm just chiming in as an operator
with some ideas.

On Fri, Nov 2, 2018 at 9:32 PM Matt Riedemann  wrote:
>
> On 11/2/2018 2:22 PM, Eric Fried wrote:
> > Based on a (long) discussion yesterday [1] I have put up a patch [2]
> > whereby you can set [compute]resource_provider_association_refresh to
> > zero and the resource tracker will never* refresh the report client's
> > provider cache. Philosophically, we're removing the "healing" aspect of
> > the resource tracker's periodic and trusting that placement won't
> > diverge from whatever's in our cache. (If it does, it's because the op
> > hit the CLI, in which case they should SIGHUP - see below.)
> >
> > *except:
> > - When we initially create the compute node record and bootstrap its
> > resource provider.
> > - When the virt driver's update_provider_tree makes a change,
> > update_from_provider_tree reflects them in the cache as well as pushing
> > them back to placement.
> > - If update_from_provider_tree fails, the cache is cleared and gets
> > rebuilt on the next periodic.
> > - If you send SIGHUP to the compute process, the cache is cleared.
> >
> > This should dramatically reduce the number of calls to placement from
> > the compute service. Like, to nearly zero, unless something is actually
> > changing.
> >
> > Can I get some initial feedback as to whether this is worth polishing up
> > into something real? (It will probably need a bp/spec if so.)
> >
> > [1]
> > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03
> > [2]https://review.openstack.org/#/c/614886/
> >
> > ==
> > Background
> > ==
> > In the Queens release, our friends at CERN noticed a serious spike in
> > the number of requests to placement from compute nodes, even in a
> > stable-state cloud. Given that we were in the process of adding a ton of
> > infrastructure to support sharing and nested providers, this was not
> > unexpected. Roughly, what was previously:
> >
> >   @periodic_task:
> >   GET /resource_providers/$compute_uuid
> >   GET /resource_providers/$compute_uuid/inventories
> >
> > became more like:
> >
> >   @periodic_task:
> >   # In Queens/Rocky, this would still just return the compute RP
> >   GET /resource_providers?in_tree=$compute_uuid
> >   # In Queens/Rocky, this would return nothing
> >   GET /resource_providers?member_of=...&required=MISC_SHARES...
> >   for each provider returned above:  # i.e. just one in Q/R
> >   GET /resource_providers/$compute_uuid/inventories
> >   GET /resource_providers/$compute_uuid/traits
> >   GET /resource_providers/$compute_uuid/aggregates
> >
> > In a cloud the size of CERN's, the load wasn't acceptable. But at the
> > time, CERN worked around the problem by disabling refreshing entirely.
> > (The fact that this seems to have worked for them is an encouraging sign
> > for the proposed code change.)
> >
> > We're not actually making use of most of that information, but it sets
> > the stage for things that we're working on in Stein and beyond, like
> > multiple VGPU types, bandwidth resource providers, accelerators, NUMA,
> > etc., so removing/reducing the amount of information we look at isn't
> > really an option strategically.
>
> A few random points from the long discussion that should probably be
> re-posed here for wider thought:
>
> * There was probably a lot of discussion about why we needed to do this
> caching and stuff in the compute in the first place. What has changed
> that we no longer need to aggressively refresh the cache on every
> periodic? I thought initially it came up because people really wanted
> the compute to be fully self-healing to any external changes, including
> hot plugging resources like disk on the host to automatically reflect
> those changes in inventory. Similarly, external user/service
> interactions with the placement API which would then be automatically
> picked up by the next periodic run - is that no longer a desire, and/or
> how was the decision made previously that simply requiring a SIGHUP in
> that case wasn't sufficient/desirable?

I think that would be nice to have; however, at the moment, from an
operator's perspective, it looks like the placement service can really get
out of sync pretty easily. So I think it'd be good to commit to either
really making it self-heal (delete stale allocations, create the ones that
should be there) or removing all the self-healing stuff.

Also, for the self-healing work, if we take that route and implement it
fully, it might make the placement split much easier, because we could just
switch over and wait for the computes to automagically populate everything;
but that's the type of operation that happens only once in the lifetime of a
cloud.
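
For illustration, a full self-heal pass might look roughly like the sketch
below (purely illustrative Python; get_local_allocations() and the
report_client methods are hypothetical stand-ins, not nova code):

    # Purely illustrative sketch of a "full self-heal" pass -- not nova code.
    # get_local_allocations() and the report_client methods are hypothetical.
    def heal_allocations(compute_uuid, report_client):
        local = get_local_allocations(compute_uuid)           # what the compute thinks runs here
        remote = report_client.get_allocations(compute_uuid)  # what placement has recorded

        # Delete stale allocations placement has but the compute no longer
        # knows about.
        for consumer_uuid in set(remote) - set(local):
            report_client.delete_allocation(consumer_uuid)

        # Create allocations the compute knows about but placement is missing.
        for consumer_uuid in set(local) - set(remote):
            report_client.put_allocation(consumer_uuid, local[consumer_uuid])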

Just for information's sake, a clean-state cloud which had no reported issues
over maybe a 

[openstack-dev] [nova][cinder] Using externally stored keys for encryption

2018-11-04 Thread Mohammed Naser
Hi everyone:

I've been digging around the documentation of Nova, Cinder and the
encrypted disks feature and I've been a bit stumped on something which
I think is a very relevant use case that might not be possible (or it
is and I have totally missed it!)

It seems that both Cinder and Nova assume that secrets are always
stored within the Barbican deployment of the same cloud.  This makes a
lot of sense; however, in scenarios where the consumer of an OpenStack
cloud wants to use it without trusting the cloud's operator, they won't
be able to have encrypted volumes that make sense. An example:

- Create an encrypted volume; keys are stored in Barbican
- Boot a VM using said encrypted volume; Nova pulls the keys from Barbican
and starts the VM.

However, this means that the deployer can at any time pull down the
keys and decrypt things locally to do $bad_things.  But if we had
either of the following two ideas:

- Allow providing the secret at "run time" on boot (maybe something added
to the start/boot VM API?)
- Allow pointing towards an external instance of Barbican

By using either of those two, we would allow OpenStack users to operate
their VMs securely and keep control over their keys.  If they want to
revoke all access, they can shut down all the VMs and cut access to
their key management service, without worrying about someone just
pulling the keys down from the internal Barbican.
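
To make the first idea a bit more concrete, here is a rough sketch of what
passing a key at start time could look like from a client's point of view.
This is purely hypothetical: the "os-start-with-key" action and its
encryption_key parameter do not exist in Nova today; only the generic
/servers/{id}/action endpoint shape is real.

    # Hypothetical sketch only -- this server action does not exist in Nova.
    # The idea: the user supplies the decryption key (or a reference to an
    # external key manager they control) when starting the server, so the
    # cloud itself never has to store it.
    import requests

    def start_server_with_key(nova_endpoint, token, server_id, key_b64):
        body = {"os-start-with-key": {"encryption_key": key_b64}}
        resp = requests.post(
            "%s/servers/%s/action" % (nova_endpoint, server_id),
            json=body,
            headers={"X-Auth-Token": token},
        )
        resp.raise_for_status()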

Hopefully I did a good job explaining this use case and I'm just
wondering if this is a thing that's possible at the moment or if we
perhaps need to look into it.

Thanks,
Mohammed

-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com



Re: [openstack-dev] [nova][placement] Placement requests and caching in the resource tracker

2018-11-04 Thread Jay Pipes

On 11/02/2018 03:22 PM, Eric Fried wrote:

All-

Based on a (long) discussion yesterday [1] I have put up a patch [2]
whereby you can set [compute]resource_provider_association_refresh to
zero and the resource tracker will never* refresh the report client's
provider cache. Philosophically, we're removing the "healing" aspect of
the resource tracker's periodic and trusting that placement won't
diverge from whatever's in our cache. (If it does, it's because the op
hit the CLI, in which case they should SIGHUP - see below.)

*except:
- When we initially create the compute node record and bootstrap its
resource provider.
- When the virt driver's update_provider_tree makes a change,
update_from_provider_tree reflects them in the cache as well as pushing
them back to placement.
- If update_from_provider_tree fails, the cache is cleared and gets
rebuilt on the next periodic.
- If you send SIGHUP to the compute process, the cache is cleared.

This should dramatically reduce the number of calls to placement from
the compute service. Like, to nearly zero, unless something is actually
changing.

Can I get some initial feedback as to whether this is worth polishing up
into something real? (It will probably need a bp/spec if so.)

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-11-01.log.html#t2018-11-01T17:32:03
[2] https://review.openstack.org/#/c/614886/

==
Background
==
In the Queens release, our friends at CERN noticed a serious spike in
the number of requests to placement from compute nodes, even in a
stable-state cloud. Given that we were in the process of adding a ton of
infrastructure to support sharing and nested providers, this was not
unexpected. Roughly, what was previously:

  @periodic_task:
  GET /resource_providers/$compute_uuid
  GET /resource_providers/$compute_uuid/inventories

became more like:

  @periodic_task:
  # In Queens/Rocky, this would still just return the compute RP
  GET /resource_providers?in_tree=$compute_uuid
  # In Queens/Rocky, this would return nothing
  GET /resource_providers?member_of=...&required=MISC_SHARES...
  for each provider returned above:  # i.e. just one in Q/R
  GET /resource_providers/$compute_uuid/inventories
  GET /resource_providers/$compute_uuid/traits
  GET /resource_providers/$compute_uuid/aggregates

In a cloud the size of CERN's, the load wasn't acceptable. But at the
time, CERN worked around the problem by disabling refreshing entirely.
(The fact that this seems to have worked for them is an encouraging sign
for the proposed code change.)

We're not actually making use of most of that information, but it sets
the stage for things that we're working on in Stein and beyond, like
multiple VGPU types, bandwidth resource providers, accelerators, NUMA,
etc., so removing/reducing the amount of information we look at isn't
really an option strategically.
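
As a rough illustration of what the proposal boils down to (this is a sketch,
not the actual resource tracker or report client code), the periodic's
refresh decision with the new option might look like:

    # Illustrative sketch only -- not the actual nova code.
    import time

    def associations_stale(last_refresh_time, refresh_interval):
        # refresh_interval mirrors [compute]resource_provider_association_refresh.
        # With it set to 0, never treat the cached provider associations as
        # stale; rely on the listed exceptions (bootstrap,
        # update_from_provider_tree, SIGHUP) to repopulate the cache.
        if refresh_interval == 0:
            return False
        return time.time() - last_refresh_time > refresh_interval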


I support your idea of getting rid of the periodic refresh of the cache 
in the scheduler report client. Much of that was added in order to 
emulate the original way the resource tracker worked.


Most of the behaviour in the original resource tracker (and some of the 
code still in there for dealing with (surprise!) PCI passthrough devices 
and NUMA topology) was due to doing allocations on the compute node (the 
whole claims stuff). We needed to always be syncing the state of the 
compute_nodes and pci_devices tables in the cell database with whatever 
usage information was being created/modified on the compute nodes [0].


All of the "healing" code that's in the resource tracker was basically 
to deal with "soft delete", migrations that didn't complete or work 
properly, and, again, to handle allocations becoming out-of-sync because 
the compute nodes were responsible for allocating (as opposed to the 
current situation we have where the placement service -- via the 
scheduler's call to claim_resources() -- is responsible for allocating 
resources [1]).


Now that we have generation markers protecting both providers and 
consumers, we can rely on those generations to signal to the scheduler 
report client that it needs to pull fresh information about a provider 
or consumer. So, there's really no need to automatically and blindly 
refresh any more.


Best,
-jay

[0] We always need to be syncing those tables because those tables, 
unlike the placement database's data modeling, couple both inventory AND 
usage in the same table structure...


[1] again, except for PCI devices and NUMA topology, because of the 
tight coupling in place with the different resource trackers those types 
of resources use...
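
For illustration, the generation-based approach described above might boil
down to something like the following sketch (report_client.put and
get_provider_details are hypothetical stand-ins, not the actual scheduler
report client code; the generation-conflict behaviour itself is how the
placement API works):

    # Illustrative sketch of generation-based cache invalidation -- not nova code.
    # Instead of refreshing on a timer, refresh only when placement rejects a
    # write because our cached provider generation is stale (HTTP 409).
    def put_inventories(report_client, cache, provider_uuid, inventories):
        cached = cache[provider_uuid]
        resp = report_client.put(
            "/resource_providers/%s/inventories" % provider_uuid,
            {"resource_provider_generation": cached["generation"],
             "inventories": inventories})
        if resp.status_code == 409:
            # Someone else changed the provider; our cached view is stale.
            # Drop it and pull fresh data before the caller retries.
            cache[provider_uuid] = report_client.get_provider_details(provider_uuid)
        return resp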





[openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir

2018-11-04 Thread Monty Taylor

Heya,

I've floated a half-baked version of this idea to a few people, but 
lemme try again with some new words.


What if we added support for serving vendor data files from the root of 
a primary URL as per RFC 5785? Specifically, support deployers adding a 
JSON file at .well-known/openstack/client that would contain what we 
currently store in the openstacksdk repo and were just discussing 
splitting out.


Then, an end-user could put a url into the 'cloud' parameter.

Using vexxhost as an example, if Mohammed served:

{
  "name": "vexxhost",
  "profile": {
    "auth_type": "v3password",
    "auth": {
      "auth_url": "https://auth.vexxhost.net/v3"
    },
    "regions": [
      "ca-ymq-1",
      "sjc1"
    ],
    "identity_api_version": "3",
    "image_format": "raw",
    "requires_floating_ip": false
  }
}

from https://vexxhost.com/.well-known/openstack/client

And then in my local config I did:

import openstack
conn = openstack.connect(
    cloud='https://vexxhost.com',
    username='my-awesome-user',
    ...)

The client could know to go fetch 
https://vexxhost.com/.well-known/openstack/client to use as the profile 
information needed for that cloud.
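
Under the hood, the fetch could be as simple as the following sketch. The way
the served profile gets merged into openstack.connect() is hand-waved and
hypothetical here; the real SDK would presumably load it through its normal
config machinery:

    # Rough sketch of the client-side fetch; feeding the served profile into
    # the connection like this is hypothetical, not real openstacksdk code.
    import requests
    import openstack

    def connect_from_wellknown(base_url, **auth_kwargs):
        profile = requests.get(
            base_url.rstrip("/") + "/.well-known/openstack/client",
            timeout=10).json()["profile"]
        return openstack.connect(
            auth_url=profile["auth"]["auth_url"],
            region_name=profile["regions"][0],
            **auth_kwargs)

    conn = connect_from_wellknown(
        "https://vexxhost.com", username="my-awesome-user", password="...")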


If I wanted to configure a clouds.yaml entry, it would look like:

clouds:
  mordred-vexxhost:
    profile: https://vexxhost.com
    auth:
      username: my-awesome-user

And I could just

conn = openstack.connect(cloud='mordred-vexxhost')

The most important part being that we define the well-known structure 
and interaction. Then we don't need the info in a git repo managed by 
the publiccloud-wg - each public cloud can manage it itself. But also - 
non-public clouds can take advantage of being able to define such 
information for their users too - and can hand a user a simple global 
entrypoint for discovery. As they add regions - or if they decide to 
switch from global keystone to per-region keystone, they can just update 
their profile file and all will be good with the world.


Of course, it's a convenience, so nothing forces anyone to deploy one.

For backwards compat, as public clouds for which we have vendor profiles 
start deploying a well-known profile, we can update the baked-in named 
profile in openstacksdk to simply reference the remote URL, and over time 
hopefully there will cease to be any information that's useful in the 
openstacksdk repo.


What do people think?

Monty



Re: [openstack-dev] [publiccloud-wg] Serving vendor json from RFC 5785 well-known dir

2018-11-04 Thread Mohammed Naser
On Sun, Nov 4, 2018 at 4:12 PM Monty Taylor  wrote:
>
> Heya,
>
> I've floated a half-baked version of this idea to a few people, but
> lemme try again with some new words.
>
> What if we added support for serving vendor data files from the root of
> a primary URL as per RFC 5785? Specifically, support deployers adding a
> JSON file at .well-known/openstack/client that would contain what we
> currently store in the openstacksdk repo and were just discussing
> splitting out.
>
> Then, an end-user could put a url into the 'cloud' parameter.
>
> Using vexxhost as an example, if Mohammed served:
>
> {
>   "name": "vexxhost",
>   "profile": {
>     "auth_type": "v3password",
>     "auth": {
>       "auth_url": "https://auth.vexxhost.net/v3"
>     },
>     "regions": [
>       "ca-ymq-1",
>       "sjc1"
>     ],
>     "identity_api_version": "3",
>     "image_format": "raw",
>     "requires_floating_ip": false
>   }
> }
>
> from https://vexxhost.com/.well-known/openstack/client
>
> And then in my local config I did:
>
> import openstack
> conn = openstack.connect(
>  cloud='https://vexxhost.com',
>  username='my-awesome-user',
>  ...)
>
> The client could know to go fetch
> https://vexxhost.com/.well-known/openstack/client to use as the profile
> information needed for that cloud.

Mohammed likes this idea and would like to present this for you to hack on:
https://vexxhost.com/.well-known/openstack/client

> If I wanted to configure a clouds.yaml entry, it would look like:
>
> clouds:
>   mordred-vexxhost:
>     profile: https://vexxhost.com
>     auth:
>       username: my-awesome-user
>
> And I could just
>
> conn = openstack.connect(cloud='mordred-vexxhost')
>
> The most important part being that we define the well-known structure
> and interaction. Then we don't need the info in a git repo managed by
> the publiccloud-wg - each public cloud can manage it itself. But also -
> non-public clouds can take advantage of being able to define such
> information for their users too - and can hand a user a simple global
> entrypoint for discovery. As they add regions - or if they decide to
> switch from global keystone to per-region keystone, they can just update
> their profile file and all will be good with the world.
>
> Of course, it's a convenience, so nothing forces anyone to deploy one.
>
> For backwards compat, as public clouds for which we have vendor profiles
> start deploying a well-known profile, we can update the baked-in named
> profile in openstacksdk to simply reference the remote URL, and over time
> hopefully there will cease to be any information that's useful in the
> openstacksdk repo.
>
> What do people think?

I really like this idea; the only concern is fallbacks.  I can imagine that
at some point things might get restructured on the provider's website and
that URL magically disappears, but shifting the responsibility onto the
provider rather than the maintainers of this project is a much cleaner
alternative, IMHO.

> Monty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com



Re: [openstack-dev] [cinder] about use nfs driver to backup the volume snapshot

2018-11-04 Thread Rambo
Sorry, I meant using the NFS driver as the cinder backup driver. I see the 
remotefs code implements create_volume_from_snapshot [1]; in this function 
the snapshot.status must be 'available'. But before this, in the API part, 
the snapshot.status was changed to 'backing_up' [2]. Is there something 
wrong? Can you tell me more about this? Thank you very much.




[1] https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/remotefs.py#L1259
[2] https://github.com/openstack/cinder/blob/master/cinder/backup/api.py#L292
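
A condensed sketch of the mismatch being described (paraphrasing the two code
paths from [1] and [2], not verbatim cinder code):

    # Paraphrased sketch of the two code paths in question -- not verbatim cinder code.

    # cinder/backup/api.py (see [2]): before the backup starts, the snapshot
    # being backed up is flipped to 'backing_up'.
    def start_backup_of_snapshot(snapshot):
        snapshot.status = 'backing_up'
        snapshot.save()

    # cinder/volume/drivers/remotefs.py (see [1]): creating a volume from that
    # snapshot insists the snapshot be 'available', so the two paths conflict.
    def create_volume_from_snapshot(volume, snapshot):
        if snapshot.status != 'available':
            raise RuntimeError("snapshot status must be 'available', got %s"
                               % snapshot.status)
        # ... actually create the volume from the snapshot ...
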
-- Original --
From: "Eric Harney"
Date: Fri, Nov 2, 2018 10:00 PM
To: "jsbryant"; "OpenStack Development"
Subject: Re: [openstack-dev] [cinder] about use nfs driver to backup the
volume snapshot

 
On 11/1/18 4:44 PM, Jay Bryant wrote:
> On Thu, Nov 1, 2018, 10:44 AM Rambo  wrote:
> 
>> Hi,all
>>
>>   Recently, I used the NFS driver as the cinder-backup backend; when I
>> use it to back up a volume snapshot, the result is a
>> NotImplementedError [1], and nfs.py doesn't have the
>> create_volume_from_snapshot function. Does the community plan to implement
>> it for NFS as the cinder-backup backend? Can you tell me about
>> this? Thank you very much!
>>
>> Rambo,
> 
> The NFS driver doesn't have full snapshot support. I am not sure if that
> missing function was an oversight or not. I would reach out to Eric Harney
> as he implemented that code.
> 
> Jay
> 

create_volume_from_snapshot is implemented in the NFS driver.  It is in 
the remotefs code that the NFS driver inherits from.

But, I'm not sure I understand what's being asked here -- how is this 
related to using NFS as the backup backend?


>>
>>
>> [1]
>> https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L2142
>>
>> Best Regards
>> Rambo



[openstack-dev] [nova] about live-resize the instance

2018-11-04 Thread Rambo
Hi, all


  I find it important to be able to live-resize an instance in a production 
environment. We have talked about it for many years and we agreed on this at 
the Rocky PTG; the author then moved the spec to Stein, but there has been no 
further information about it. Is there anyone willing to push the spec forward 
and implement it? Can you tell me more about this? Thank you very much.


[1] https://review.openstack.org/#/c/141219/

Best Regards
Rambo


Re: [openstack-dev] [nova] about live-resize the instance

2018-11-04 Thread Chen CH Ji
Yes, this has been discussed for a long time, and if I remember correctly the Stein PTG also had some discussion on it (maybe in the Public Cloud WG?). Claudiu has been pushing this for several cycles and he actually had some code at [1], but there has been no additional progress there...
 
[1] https://review.openstack.org/#/q/status:abandoned+topic:bp/instance-live-resize
 
- Original message -
From: "Rambo"
To: "OpenStack Development"
Cc:
Subject: [openstack-dev] [nova] about live-resize the instance
Date: Mon, Nov 5, 2018 10:28 AM

Hi, all

I find it important to be able to live-resize an instance in a production environment. We have talked about it for many years and we agreed on this at the Rocky PTG; the author then moved the spec to Stein, but there has been no further information about it. Is there anyone willing to push the spec forward and implement it? Can you tell me more about this? Thank you very much.

[1] https://review.openstack.org/#/c/141219/

Best Regards
Rambo



