[openstack-dev] [Tripleo] os-cloud-config

2014-10-30 Thread LeslieWang
Dear all,
It seems that os-cloud-config is meant to initialise the undercloud or overcloud after Heat 
orchestration, but I cannot find where it is used in either Tuskar or Tuskar-UI. 
Can anyone explain a little how it is used in the TripleO projects? Thanks.
Best Regards,
Leslie Wang


Re: [openstack-dev] [osprofiler] how to report a bug for osprofiler

2014-08-25 Thread LeslieWang
Hi Boris,
Thanks for the reply.
I'm trying TripleO and want to enable the Horizon UI in this environment, but this bug 
prevents Horizon from starting correctly. I think what I found is probably helpful for 
others with the same need, so I wanted to report it back to the community.
I've created bug https://bugs.launchpad.net/osprofiler/+bug/1361235, which describes the 
problem and root cause as well as one possible solution.
Best Regards,
Leslie


[openstack-dev] [osprofiler] how to report a bug for osprofiler

2014-08-25 Thread LeslieWang
Dear all,
I found a bug in osprofiler, but I cannot find the project on launchpad.net, so I would 
like to know how to report the bug and contribute my fix.
Best Regards,
Leslie


Re: [openstack-dev] [TripleO] devtest environment for virtual or true bare metal

2014-08-05 Thread LeslieWang
Hi Ben,
Thanks for your reply. 
Actually I'm a little confused by "virtual environment". I guess it means the following:
  - 1 Seed VM as the deployment starting point.
  - Both undercloud and overcloud images are loaded into the Seed VM's Glance.
  - 15 VMs are created: 1 for the undercloud, 1 for the overcloud controller, and the 
remaining 13 for overcloud compute.
  - 1 host machine acts as the container for all 15 VMs. It can be separate from the 
Seed VM.
  - The Seed VM communicates with the host machine to create the 15 VMs and install the 
corresponding images.
Is this correct? Or could you roughly describe the topology of the devtest virtual 
environment?
Best Regards,
Leslie Wang


[openstack-dev] [TripleO] devtest environment for virtual or true bare metal

2014-08-04 Thread LeslieWang
Dear all,
Looking at the devtest pages on the TripleO wiki 
(http://docs.openstack.org/developer/tripleo-incubator/devtest.html), I thought all the 
variables and configuration of devtest were for true bare metal, because the 
diskimage-builder options for both the overcloud and undercloud don't include the "vm" 
option. But then I see this configuration in 
tripleo-incubator/scripts/devtest_testenv.sh:
## #. Set the default bare metal power manager. By default devtest uses
##    nova.virt.baremetal.virtual_power_driver.VirtualPowerManager to
##    support a fully virtualized TripleO test environment. You may
##    optionally customize this setting if you are using real baremetal
##    hardware with the devtest scripts. This setting controls the
##    power manager used in both the seed VM and undercloud for Nova Baremetal.
##    ::


POWER_MANAGER=${POWER_MANAGER:-'nova.virt.baremetal.virtual_power_driver.VirtualPowerManager'}
Thus it seems that all the settings are for a virtual environment, not for true bare 
metal, so I'm a little confused. Can anyone help clarify this? And what is the right 
setting for POWER_MANAGER when using real bare metal hardware?
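For illustration, my guess at the real-hardware override is to point POWER_MANAGER at 
nova's baremetal IPMI power manager (the class path below is an assumption based on the 
nova baremetal driver; I have not verified it on real hardware):

  # sketch only: override before running the devtest scripts
  export POWER_MANAGER='nova.virt.baremetal.ipmi.IPMI'

Is that the intended setting, or is there a better one?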
Best Regards,
Leslie


[openstack-dev] [TripleO] old cloud apt version of ubuntu at install-dependencies

2014-08-04 Thread LeslieWang
Dear all,
I found that the file "tripleo-incubator/scripts/install-dependencies" contains the following lines:
DEBIAN_FRONTEND=noninteractive sudo -E apt-get install --yes ubuntu-cloud-keyring

(grep -Eqs "precise-updates/grizzly" /etc/apt/sources.list.d/cloud-archive.list) || echo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main' | sudo tee -a /etc/apt/sources.list.d/cloud-archive.list

#adding precise-backports universe repository for jq package
if ! command -v add-apt-repository; then
  DEBIAN_FRONTEND=noninteractive sudo -E apt-get install --yes python-software-properties
fi
sudo add-apt-repository "deb http://us.archive.ubuntu.com/ubuntu/ precise-backports universe"

It seems that this still uses the old grizzly repository. Can anyone please advise 
whether this is a bug that should be updated to the latest icehouse, or not?
BTW, here is the Ubuntu Cloud Archive link: 
https://wiki.ubuntu.com/ServerTeam/CloudArchive
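If it should be updated, my guess is that only the pocket name in the repository line 
changes, along these lines (a sketch only, not tested):

  # same cloud-archive.list mechanism, but pointing at icehouse instead of grizzly
  echo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main' | sudo tee -a /etc/apt/sources.list.d/cloud-archive.list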
Best Regards,
Leslie


[openstack-dev] [TripleO][Fuel] relationship btw TripleO and Fuel

2014-06-09 Thread LeslieWang
Dear all,
It seems that both Fuel and TripleO are designed to solve the problem of complex 
OpenStack installation and deployment. TripleO uses Heat for orchestration. If we can 
define network creation, OS provisioning and deployment in a Heat template, it seems 
they can achieve a similar goal. So can anyone explain the difference between these two 
projects, and the future roadmap of each of them? Thanks!
TripleO is a program aimed at installing, upgrading and operating OpenStack 
clouds using OpenStack's own cloud facilities as the foundations - building on 
nova, neutron and heat to automate fleet management at datacentre scale (and 
scaling down to as few as 2 machines).
Fuel is an all-in-one control plane for automated hardware discovery, network 
verification, operating system provisioning and deployment of OpenStack. It provides a 
user-friendly web interface for installation management, simplifying OpenStack 
installation down to a few clicks.
Best Regards,
Leslie


Re: [openstack-dev] [ironic] Does ironic support ESXi when booting bare metal

2014-06-08 Thread LeslieWang
--
Hi Devananda,
Thanks for your reply. Your link shows how to create a VM through vSphere. What we are 
trying to do is deploy vSphere onto a bare metal server, so that automation covers 
everything from installation and deployment to configuration and VM creation.
Hi Chris,
We do use diskimage-builder to create an Ubuntu image, kernel and initramfs, and we 
deploy Ubuntu through the Ironic API successfully. However, it seems that 
diskimage-builder doesn't support VMware, so we don't know how to extract a VMware 
kernel and initramfs from a VMware image such as 
http://partnerweb.vmware.com/programs/vmdkimage/debian-2.6.32-i686.vmdk. So I wonder 
whether anyone has done this before.
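For reference, the Ubuntu flow that does work for us is roughly the following (element 
names are from diskimage-builder's standard set; the exact output file names may differ):

  # build an image plus the kernel/ramdisk used for deployment
  disk-image-create ubuntu baremetal -o ubuntu-image
  # -> ubuntu-image.qcow2, ubuntu-image.vmlinuz and ubuntu-image.initrd,
  #    which we then register in Glance and deploy through Ironic

As far as I can tell there is no equivalent element for ESXi/VMware, which is the gap 
described above.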
Best Regards,
Leslie
--
Hi Chao,
The Ironic SSH driver does support VMware; see 
https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/ssh.py#L69-L89.
Have you seen the TripleO tools, mainly Disk Image Builder 
(https://github.com/openstack/diskimage-builder)? That is how I build the images I use 
for testing. I have not tested the VMware parts of Ironic myself, as I do not have a 
VMware server to test with, but others have tested it.
Hope this helps.
Chris Krelle--NobodyCam

2014-06-06 1:31 GMT+08:00 Devananda van der Veen :
ChaoYan,

Are you asking about using vmware as a test platform for developing
Ironic, or as a platform on which to run a production workload managed
by Ironic? I do not understand your question -- why would you use
Ironic to manage a VMWare cluster, when there is a separate Nova
driver specifically designed for managing vmware? While I am not
familiar with it, I believe more information may be found here:
  https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide

Best,
Devananda

On Thu, Jun 5, 2014 at 4:39 AM, 严超  wrote:
> Hi, All:
> Does Ironic support ESXi when booting bare metal? If it does, how do we
> make a VMware ESXi AMI bare metal image?
>
> Best Regards!
> Chao Yan
> --
> My twitter:Andy Yan @yanchao727
> My Weibo:http://weibo.com/herewearenow
> --
>


Re: [openstack-dev] [Ironic]Communication between Nova and Ironic

2013-12-29 Thread LeslieWang
It makes a lot of sense. Thanks for your reply!

B.R.
Leslie

Sent from my iPhone

> On Dec 30, 2013, at 3:39 AM, "Oleg Gelbukh"  wrote:
> 
> Leslie,
> 
> This discussion is very interesting indeed :)
> 
> The current approach to auto-scale is that it is decided upon by Heat 
> service. Heat templates have special mechanisms to trigger auto-scaling of 
> resources when certain conditions are met.
> In combination with Ironic, it has powerful potential for 
> OpenStack-on-OpenStack use case you're describing.
> 
> Basically, Heat has all the orchestration functions in OpenStack. I see it as a 
> natural place for other interesting things like auto-migration of workloads 
> and so on.
> 
> --
> Best regards,
> Oleg Gelbukh
> 
> 
>> On Sun, Dec 29, 2013 at 8:03 AM, LeslieWang  wrote:
>> Hi Clint,
>> 
>> The current Ironic call is for adding/deleting a baremetal server, not for auto-scale,
>> as we discussed in another thread. What I'm thinking of is related to auto-scaling
>> baremetal servers. In my mind, the logic could be:
>>   1. The Nova scheduler determines it should scale up by one baremetal server.
>>   2. The Nova scheduler notifies Ironic (or another API?) to power up the server.
>>   3. If Ironic (or another service?) returns success, the Nova scheduler can call
>> Ironic to add the baremetal server into the cluster.
>> 
>> Of course, this is not the only way to auto-scale. As you mentioned in another
>> thread, auto-scale can be triggered from the undercloud or another monitoring
>> service. Just trying to bring up an interesting discussion. :-)
>> 
>> Best Regards
>> Leslie
>> 
>> > From: cl...@fewbar.com
>> > To: openstack-dev@lists.openstack.org
>> > Date: Sat, 28 Dec 2013 13:40:08 -0800
>> > Subject: Re: [openstack-dev] [Ironic]Communication between Nova and Ironic
>> 
>> > 
>> > Excerpts from LeslieWang's message of 2013-12-24 03:01:51 -0800:
>> > > Hi Oleg,
>> > > 
>> > > Thanks for your promptly reply and detail explanation. Merry Christmas 
>> > > and wish you have a happy new year!
>> > > 
>> > > At the same time, I think we can discuss more on Ironic is for backend 
>> > > driver for nova. I'm new in ironic. Per my understanding, the purpose of 
>> > > bare metal as a backend driver is to solve the problem that some 
>> > > appliance systems can not be virtualized, but operator still wants same 
>> > > cloud management system to manage these systems. With the help of 
>> > > ironic, operator can achieve the goal, and use one openstack to manage 
>> > > these systems as VMs, create, delete, deploy image etc. this is one 
>> > > typical use case.
>> > > 
>> > > In addition, actually I'm thinking another interesting use case. 
>> > > Currently openstack requires ops to pre-install all servers. TripleO try 
>> > > to solve this problem and bootstrap openstack using openstack. However, 
>> > > what is missing here is dynamic power on VM/switches/storage only. 
>> > > Imagine initially lab only had one all-in-one openstack controller. The 
>> > > whole work flow can be:
>> > > 1. Users request one VM or baremetal server through portal.
>> > > 2. Horizon sends request to nova-scheduler
>> > > 3. Nova-scheduler finds no server, then invoke ironic api to power on 
>> > > one through IPMI, and install either hyper visor or appliance directly.
>> > > 4. If it need create VM, Nova-scheduler will find one compute node, and 
>> > > send message for further processing.
>> > > 
>> > > Based on this use case, I'm thinking whether it makes sense to embed 
>> > > this ironic invokation logic in nova-scheduler, or another approach is 
>> > > as overall orchestration manager, TripleO project has a 
>> > > TripleO-scheduler to firstly intercept the message, invoke ironic api, 
>> > > then heat api which calls nova api, neutron api, storage api. In this 
>> > > case, TripleO only powers on baremetal server running VM, nova is 
>> > > responsible to power on baremetal server running appliance system. 
>> > > Sounds like latter one is a good solution, however the prior one also 
>> > > works. So can you please comment on it? Thanks!
>> > > 
>> > 
>> > I think this basically already works the way you desire. The scheduler
>> > _does_ decide to call ironic, it just does so through nova-compute RPC
>> > calls. That is important, as this allows the scheduler to hand-off the
>> > entire work-flow of provisioning a machine to nova-compute in the exact
>> > same way as is done for regular cloud workloads.

Re: [openstack-dev] [Ironic]Communication between Nova and Ironic

2013-12-28 Thread LeslieWang
Hi Clint,
The current Ironic call is for adding/deleting a baremetal server, not for auto-scale, 
as we discussed in another thread. What I'm thinking of is related to auto-scaling 
baremetal servers. In my mind, the logic could be:
  1. The Nova scheduler determines it should scale up by one baremetal server.
  2. The Nova scheduler notifies Ironic (or another API?) to power up the server.
  3. If Ironic (or another service?) returns success, the Nova scheduler can call 
Ironic to add the baremetal server into the cluster.
Of course, this is not the only way to auto-scale. As you mentioned in another thread, 
auto-scale can be triggered from the undercloud or another monitoring service. Just 
trying to bring up an interesting discussion. :-)
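Spelled out manually with the ironic CLI, just to illustrate steps 2 and 3 (command 
names assumed from python-ironicclient; the scheduler hook itself is the open question 
here):

  # what the scheduler would effectively ask Ironic to do for one node
  ironic node-set-power-state $NODE_UUID on   # step 2: power the baremetal node up
  ironic node-show $NODE_UUID                 # step 3: confirm it is available before adding it to the cluster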
Best Regards,
Leslie

> From: cl...@fewbar.com
> To: openstack-dev@lists.openstack.org
> Date: Sat, 28 Dec 2013 13:40:08 -0800
> Subject: Re: [openstack-dev] [Ironic]Communication between Nova and Ironic
> 
> Excerpts from LeslieWang's message of 2013-12-24 03:01:51 -0800:
> > Hi Oleg,
> > 
> > Thanks for your promptly reply and detail explanation. Merry Christmas and 
> > wish you have a happy new year!
> > 
> > At the same time, I think we can discuss more on Ironic is for backend 
> > driver for nova. I'm new in ironic. Per my understanding, the purpose of 
> > bare metal as a backend driver is to solve the problem that some appliance 
> > systems can not be virtualized, but operator still wants same cloud 
> > management system to manage these systems. With the help of ironic, 
> > operator can achieve the goal, and use one openstack to manage these 
> > systems as VMs, create, delete, deploy image etc. this is one typical use 
> > case.
> > 
> > In addition, actually I'm thinking another interesting use case. Currently 
> > openstack requires ops to pre-install all servers. TripleO try to solve 
> > this problem and bootstrap openstack using openstack. However, what is 
> > missing here is dynamic power on VM/switches/storage only. Imagine 
> > initially lab only had one all-in-one openstack controller. The whole work 
> > flow can be:
> >   1. Users request one VM or baremetal server through portal.
> >   2. Horizon sends request to nova-scheduler
> >   3. Nova-scheduler finds no server, then invoke ironic api to power on one 
> > through IPMI, and install either hyper visor or appliance directly.
> >   4. If it need create VM, Nova-scheduler will find one compute node, and 
> > send message for further processing.
> > 
> > Based on this use case, I'm thinking whether it makes sense to embed this 
> > ironic invokation logic in nova-scheduler, or another approach is as 
> > overall orchestration manager, TripleO project has a TripleO-scheduler to 
> > firstly intercept the message, invoke ironic api, then heat api which calls 
> > nova api, neutron api, storage api.  In this case, TripleO only powers on 
> > baremetal server running VM, nova is responsible to power on baremetal 
> > server running appliance system. Sounds like latter one is a good solution, 
> > however the prior one also works. So can you please comment on it? Thanks!
> > 
> 
> I think this basically already works the way you desire. The scheduler
> _does_ decide to call ironic, it just does so through nova-compute RPC
> calls. That is important, as this allows the scheduler to hand-off the
> entire work-flow of provisioning a machine to nova-compute in the exact
> same way as is done for regular cloud workloads.
> 


Re: [openstack-dev] [TripleO] UnderCloud & OverCloud

2013-12-28 Thread LeslieWang
Hi Clint,
Thanks for your reply. Please see inline.
Best Regards,
Leslie

> From: cl...@fewbar.com
> To: openstack-dev@lists.openstack.org
> Date: Sat, 28 Dec 2013 08:23:45 -0800
> Subject: Re: [openstack-dev] [Spam]  [TripleO] UnderCloud & OverCloud
> 
> Excerpts from LeslieWang's message of 2013-12-24 19:19:52 -0800:
> > Dear All,
> > Merry Christmas & Happy New Year!
> > I'm new in TripleO. After some investigation, I have one question on 
> > UnderCloud & OverCloud. Per my understanding, UnderCloud will pre-install 
> > and set up all baremetal servers used for OverCloud. Seems like it assumes 
> > all baremetal server should be installed in advance. Then my question is 
> > from green and elasticity point of view. Initially OverCloud should have 
> > zero baremetal server. Per user demands, OverCloud Nova Scheduler should 
> > decide if I need more baremetal server, then talk to UnderCloud to allocate 
> > more baremetal servers, which will use Heat to orchestrate baremetal server 
> > starts. Does it make senses? Does it already plan in the roadmap?
> > If UnderCloud resources are created/deleted elastically, why not OverCloud 
> > talks to Ironic to allocate resource directly? Seems like it can achieve 
> > same goal. What else features UnderCloud will provide? Thanks in advance.
> > Best RegardsLeslie   
> 
> Having the overcloud scheduler ask for new servers would be pretty
> interesting. It takes most large scale servers several minutes just to
> POST though, so I'm not sure it is going to work out well if you care
> about latency for booting VMs.
Leslie - The Nova API could add one option (latency-sensitive or not) to aid the 
scheduler's decision. If the client is sensitive to latency when booting a VM, it can 
pass a parameter asking for the VM to be booted immediately, and the scheduler can then 
start the VM on an already-running baremetal server. Otherwise, if the client doesn't 
care about latency, the scheduler can start new servers first and then start the VM on 
top.
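For example, something like a scheduler hint (the hint key below is hypothetical and 
does not exist today; only the generic --hint mechanism does):

  # hypothetical hint telling Nova the request is latency-sensitive
  nova boot --flavor m1.small --image my-image --hint latency_sensitive=true my-vm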
> 
> What might work is to use an auto-scaler in the undercloud though, perhaps
> having it informed by the overcloud in some way for more granular policy
> possibilities, but even just knowing how much RAM and CPU are allocated
> across the compute nodes would help to inform us when it is time for
> more compute nodes.
> 
> Also the scale-up is fun, but scaling down is tougher. One can only scale
> down off nodes that have no more compute workloads. If you have live
> migration then you can kick off live migration before scale down, but
> in a highly utilized cluster I think that will be a net loss over time
> as the extra load caused by a large scale live migration will outweigh
> the savings from turning off machines. The story might be different for
> a system built on network-based volumes like Ceph, but I'm not sure.
Leslie - agree.
> 
> Anyway, this is really interesting to think about, but it is not
> something we're quite ready for yet. We're just getting to the point
> of being able to deploy software updates using images, and then I hope
> to focus on improving usage of Heat with rolling updates and the new
> software config capabilities. After that it may be that we can look at
> how to scale down a compute cluster automatically. :)
Leslie - understood. Rome wasn't built in a day.
> 


[openstack-dev] [TripleO] UnderCloud & OverCloud

2013-12-24 Thread LeslieWang
Dear All,
Merry Christmas & Happy New Year!
I'm new to TripleO. After some investigation, I have one question about the UnderCloud 
& OverCloud. Per my understanding, the UnderCloud pre-installs and sets up all the 
baremetal servers used for the OverCloud, so it seems to assume that all baremetal 
servers are installed in advance. My question comes from a green and elasticity point 
of view: initially the OverCloud should have zero baremetal servers, and on user demand 
the OverCloud Nova scheduler should decide whether it needs more baremetal servers and 
then talk to the UnderCloud to allocate them, which would use Heat to orchestrate the 
baremetal server start-up. Does that make sense? Is it already planned in the roadmap?
If UnderCloud resources are created/deleted elastically, why shouldn't the OverCloud 
talk to Ironic directly to allocate resources? It seems that could achieve the same 
goal. What other features will the UnderCloud provide? Thanks in advance.
Best Regards,
Leslie


Re: [openstack-dev] [Ironic]Communication between Nova and Ironic

2013-12-24 Thread LeslieWang
Hi Oleg,

Thanks for your prompt reply and detailed explanation. Merry Christmas, and I wish you 
a happy new year!

At the same time, I think we can discuss more about Ironic being a backend driver for 
Nova. I'm new to Ironic. Per my understanding, the purpose of bare metal as a backend 
driver is to solve the problem that some appliance systems cannot be virtualized, but 
the operator still wants the same cloud management system to manage them. With the help 
of Ironic, the operator can achieve that goal and use one OpenStack to manage these 
systems like VMs: create, delete, deploy images, etc. This is one typical use case.

In addition, I'm actually thinking of another interesting use case. Currently OpenStack 
requires ops to pre-install all servers. TripleO tries to solve this problem and 
bootstrap OpenStack using OpenStack. However, what is missing is powering on 
VMs/switches/storage dynamically, only when needed. Imagine a lab that initially has 
only one all-in-one OpenStack controller. The whole workflow could be:
  1. A user requests one VM or baremetal server through the portal.
  2. Horizon sends the request to nova-scheduler.
  3. Nova-scheduler finds no available server, invokes the Ironic API to power one on 
through IPMI, and installs either a hypervisor or the appliance directly.
  4. If a VM needs to be created, nova-scheduler finds a compute node and sends a 
message for further processing.

Based on this use case, I'm wondering whether it makes sense to embed this Ironic 
invocation logic in nova-scheduler, or whether another approach is better: as the 
overall orchestration manager, the TripleO project would have a TripleO scheduler that 
first intercepts the message, invokes the Ironic API, and then the Heat API, which in 
turn calls the Nova, Neutron and storage APIs. In that case, TripleO would only power 
on baremetal servers running VMs, while Nova would be responsible for powering on 
baremetal servers running appliance systems. The latter sounds like a good solution, 
but the former also works. So can you please comment on it? Thanks!

B.R.
Leslie

Sent from my iPhone

> On Dec 24, 2013, at 5:02 PM, "Oleg Gelbukh"  wrote:
> 
> Hello, Leslie
> 
> There seems to be some confusing notation in this picture, in steps #2 and #4-7. In 
> #2, the 'node' means 'instance of nova-compute' which should handle the 
> request, and nova-scheduler selects that instance of nova-compute.
> 
> In #4 and #7, however, the 'node' means bare-metal server under management of 
> that compute node. That server is actually provisioned as an instance of 
> bare-metal cloud. It is assumed that all servers under management of single 
> instance of nova-compute are equal. Ironic does not perform any scheduling or 
> selection of bare-metal servers on its side, just picks the first one.
> 
> Ironic is considered to be a back-end to virt driver of Nova. That is why 
> nova-compute service talks to Ironic API, just as it talks to vCenter API, 
> for example, when VMWare is used as a virt back-end.
> 
> Hope it helps to clarify the picture.
> 
> --
> Best regards,
> Oleg Gelbukh
> 
> 
>> On Tue, Dec 24, 2013 at 12:28 PM, LeslieWang  wrote:
>> Hi All,
>> 
>> I have been investigating Ironic recently, and found the diagram below describing the 
>> workflow between Nova and Ironic. I have one question about steps #4 and #7: why are 
>> they invoked by Nova Compute, not the Nova Scheduler? Ironic is used to power on the 
>> baremetal server and deploy the image, so it seems nova-compute would only be 
>> installed after this call. So I guess this call should be initiated from the Nova 
>> Scheduler, through either a synchronous Ironic API call or an asynchronous message 
>> queue.
>> 
>> [workflow diagram not included in the archive]
>> 
>> Can anyone please answer this question? Your input is highly appreciated.
>> 
>> Best Regards
>> Leslie
>> 