Re: [openstack-dev] [nova] Distributed locking
On 13 June 2014 02:30, Matthew Booth mbo...@redhat.com wrote: We have a need for a distributed lock in the VMware driver, which I suspect isn't unique. Specifically it is possible for a VMware datastore to be accessed via multiple nova nodes if it is shared between clusters[1]. Unfortunately the vSphere API doesn't provide us with the primitives to implement robust locking using the storage layer itself, so we're looking elsewhere.

Perhaps I'm missing something, but I didn't see anything in your description about actually needing a *distributed* lock, just needing a lock that can be held by remote systems. As Devananda says, a centralised lock that can be held by agents has been implemented in Ironic - such a thing is very simple and quite easy to reason about... but it's not suitable for all problems. HA and consistency requirements for such a thing are delivered through e.g. galera in the DB layer. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
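Robert's suggestion - a centralised lock whose consistency is delegated to the database - can be sketched in a few lines. This is only an illustration of the idea (the table layout and function names are invented here, and Ironic's real implementation differs); the point is that atomicity comes from the database's unique-key constraint, so HA falls out of whatever the DB layer provides (e.g. galera):

```python
import sqlite3

def acquire(conn, resource, holder):
    """Try to take the lock on `resource`. The PRIMARY KEY constraint
    makes the INSERT atomic: exactly one holder can succeed."""
    try:
        with conn:
            conn.execute("INSERT INTO locks (resource, holder) VALUES (?, ?)",
                         (resource, holder))
        return True
    except sqlite3.IntegrityError:
        return False  # someone else already holds it

def release(conn, resource, holder):
    """Release only if we are the holder; returns False otherwise."""
    with conn:
        cur = conn.execute("DELETE FROM locks WHERE resource = ? AND holder = ?",
                           (resource, holder))
    return cur.rowcount == 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE locks (resource TEXT PRIMARY KEY, holder TEXT)")

print(acquire(conn, "datastore-1", "node-a"))  # True
print(acquire(conn, "datastore-1", "node-b"))  # False: already held
print(release(conn, "datastore-1", "node-a"))  # True
print(acquire(conn, "datastore-1", "node-b"))  # True
```

Note what this deliberately does not solve: a holder that dies without releasing, which is exactly the lease/timeout problem that makes truly distributed locking hard.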
Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core
On 06/13/2014 06:40 PM, Michael Still wrote: Greetings, I would like to nominate Ken'ichi Ohmichi for the nova-core team. Ken'ichi has been involved with nova for a long time now. His reviews on API changes are excellent, and he's been part of the team that has driven the new API work we've seen in recent cycles forward. Ken'ichi has also been reviewing other parts of the code base, and I think his reviews are detailed and helpful. Please respond with +1s or any concerns. +1 References: https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z http://www.stackalytics.com/?module=nova-groupuser_id=oomichi As a reminder, we use the voting process outlined at https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our core team. Thanks, Michael -- Sean Dague http://dague.net
Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate
On 06/13/2014 06:47 PM, Joe Gordon wrote: On Thu, Jun 12, 2014 at 7:18 PM, Dan Prince dpri...@redhat.com wrote: On Thu, 2014-06-12 at 09:24 -0700, Joe Gordon wrote: On Jun 12, 2014 8:37 AM, Sean Dague s...@dague.net wrote: On 06/12/2014 10:38 AM, Mike Bayer wrote: On 6/12/14, 8:26 AM, Julien Danjou wrote: On Thu, Jun 12 2014, Sean Dague wrote: That's not catchable in unit or functional tests? Not in an accurate manner, no. Keeping jobs alive based on the theory that they might one day be useful is something we just don't have the liberty to do any more. We've not seen an idle node in zuul in 2 days... and we're only at j-1. j-3 will be at least +50% of this load. Sure, I'm not saying we don't have a problem. I'm just saying it's not a good solution to fix that problem IMHO. Just my 2c without having a full understanding of all of OpenStack's CI environment: Postgresql is definitely different enough that MySQL strict mode could still allow issues to slip through quite easily. As for capacity issues, this might be longer term, but I'm hoping to get database-related tests to be lots faster if we can move to a model that spends much less time creating databases and schemas. This is what I mean by functional testing. If we were directly hitting a real database on a set of in-tree project tests, I think you could discover issues like this. Neutron was headed down that path. But if we're talking about a devstack / tempest run, it's not really applicable. If someone can point me to a case where we've actually found this kind of bug with tempest / devstack, that would be great. I've just *never* seen it. I was the one that did most of the fixing for pg support in Nova, and have helped other projects as well, so I'm relatively familiar with the kinds of fails we can discover. The ones that Julien pointed out really aren't likely to be exposed in our current system.
Which is why I think we're mostly just burning cycles on the existing approach for no gain. Given all the points made above, I think dropping PostgreSQL is the right choice; if only we had infinite cloud, that would be another story. What about converting one of our existing jobs (grenade partial ncpu, large ops, regular grenade, tempest with nova network, etc.) into a PostgreSQL-only job? We could get some level of PostgreSQL testing without any additional jobs, although this is a tradeoff, obviously. I'd be fine with this tradeoff if it allows us to keep PostgreSQL in the mix. Here is my proposed change to how we handle postgres in the gate: https://review.openstack.org/#/c/100033 Merge postgres and neutron jobs in integrated-gate template Instead of having a separate job for postgres and neutron, combine them. In the integrated-gate we will only test postgres+neutron and not neutron/mysql or nova-network/postgres. * neutron/mysql is still tested in integrated-gate-neutron * nova-network/postgres is tested in nova Because neutron only runs smoke jobs, this actually drops all the interesting testing of pg. The things I've actually seen catch differences are the nova negative tests, which basically aren't run in this job. So I think that's kind of the worst of all possible worlds, because it would make people think the thing is tested interestingly, when it's not. -Sean -- Sean Dague http://dague.net
Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)
On 06/13/2014 03:01 PM, Mathew R Odden wrote: I am surprised this became a concern so quickly, but I do understand the strangeness of installing a 'bash8' binary on the command line. I'm fine with renaming to 'bashate' or 'bash_tidy', but renames can take some time to work through all the references. Apparently Sean and I both thought of the 'bashate' name independently (from gpb = jeepyb) but I wasn't too keen on the idea since it isn't very descriptive. 'bash-tidy' makes more sense but we can't use dashes in python package names :( My vote would be for 'bashate' still, since I think that would be the easiest to transition to from the current name. -tidy programs typically rewrite your code (at least html-tidy and perl-tidy do), so I think that's definitely not a name we want, because we aren't doing that (or ever plan to do that). bashate ftw. Because if you can't have an inside joke buried within your naming of an open source project, what's the point. :) -Sean -- Sean Dague http://dague.net
[openstack-dev] Running dnsmasq in Neutron: unix rights
Hi, I've been thinking for a long time about how to fix the dnsmasq unix rights issue in Neutron. Namely (from syslog): /var/lib/neutron/dhcp/{id}/host : Permission denied One way to fix it is to do: chmod o+x /var/lib/neutron Though I don't feel it's the right way to do things. Wouldn't it be nicer to add: --user=neutron in spawn_process() in neutron/agent/linux/dhcp.py? I know some Debian users did that, and it worked. I was tempted to add such a patch, but I don't think it's the right thing to do without upstream approval. Yet another way would be to use adduser and add the nobody user to the neutron group, but I'm discarding that option as the least safe. I don't want to introduce a Debian-specific security hole in my Neutron package, and I am therefore seeking advice on this list. What's the safest way to fix that problem? Cheers, Thomas Goirand (zigo) P.S: The issue is also tracked at https://bugs.debian.org/751524, so please leave 751...@bugs.debian.org as Cc: when replying.
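For illustration, here is roughly what the suggested change amounts to: pass dnsmasq's --user (and optionally --group) option when assembling its command line, so the daemon drops privileges to the neutron user itself rather than the directory permissions being widened with chmod. The function below is a simplified stand-in, not Neutron's actual spawn_process()/dhcp.py code; only the dnsmasq options themselves (--user, --group, --dhcp-hostsfile, --conf-file, --no-hosts) are real:

```python
def build_dnsmasq_cmd(network_id, conf_dir, user=None, group=None):
    """Build a dnsmasq command line; if `user` is given, dnsmasq drops
    privileges to that account itself via --user, so conf_dir does not
    need to be world-executable."""
    cmd = [
        "dnsmasq",
        "--no-hosts",
        "--conf-file=",
        "--dhcp-hostsfile=%s/%s/host" % (conf_dir, network_id),
    ]
    if user:
        cmd.append("--user=%s" % user)
    if group:
        cmd.append("--group=%s" % group)
    return cmd

print(build_dnsmasq_cmd("1234", "/var/lib/neutron/dhcp", user="neutron"))
```

The appeal of this route over `chmod o+x` is that nothing outside the neutron account gains any access at all; the open question raised in the mail is whether upstream wants that behaviour by default.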
Re: [openstack-dev] [nova] Distributed locking
Are the details of that implementation described on a wiki or elsewhere? (Partially for my own curiosity.) I think I understand how it works, but write-ups usually clear that right up. Sent from my really tiny device... On Jun 14, 2014, at 12:15 AM, Robert Collins robe...@robertcollins.net wrote: On 13 June 2014 02:30, Matthew Booth mbo...@redhat.com wrote: We have a need for a distributed lock in the VMware driver, which I suspect isn't unique. Specifically it is possible for a VMware datastore to be accessed via multiple nova nodes if it is shared between clusters[1]. Unfortunately the vSphere API doesn't provide us with the primitives to implement robust locking using the storage layer itself, so we're looking elsewhere. Perhaps I'm missing something, but I didn't see anything in your description about actually needing a *distributed* lock, just needing a lock that can be held by remote systems. As Devananda says, a centralised lock that can be held by agents has been implemented in Ironic - such a thing is very simple and quite easy to reason about... but it's not suitable for all problems. HA and consistency requirements for such a thing are delivered through e.g. galera in the DB layer. -Rob -- Robert Collins rbtcoll...@hp.com Distinguished Technologist HP Converged Cloud
Re: [openstack-dev] [nova][ceilometer] FloatingIp pollster spamming n-api logs (bug 1328694)
On 6/12/2014 10:31 AM, John Garbutt wrote: On 11 June 2014 20:07, Joe Gordon joe.gord...@gmail.com wrote: On Wed, Jun 11, 2014 at 11:38 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 6/11/2014 10:01 AM, Eoghan Glynn wrote: Thanks for bringing this to the list Matt, comments inline ... tl;dr: some pervasive changes were made to nova to enable polling in ceilometer which broke some things and in my opinion shouldn't have been merged as a bug fix but rather should have been a blueprint. === The detailed version: I opened bug 1328694 [1] yesterday and found that came back to some changes made in ceilometer for bug 1262124 [2]. Upon further inspection, the original ceilometer bug 1262124 made some changes to the nova os-floating-ips API extension and the database API [3], and changes to python-novaclient [4] to enable ceilometer to use the new API changes (basically pass --all-tenants when listing floating IPs). The original nova change introduced bug 1328694 which spams the nova-api logs due to the ceilometer change [5] which does the polling, and right now in the gate ceilometer is polling every 15 seconds. IIUC that polling cadence in the gate is in the process of being reverted to the out-of-the-box default of 600s. I pushed a revert in ceilometer to fix the spam bug and a separate patch was pushed to nova to fix the problem in the network API. Thank you for that. The revert is just now approved on the ceilometer side, and is wending its merry way through the gate. The bigger problem I see here is that these changes were all made under the guise of a bug when I think this is actually a blueprint. We have changes to the nova API, changes to the nova database API, CLI changes, potential performance impacts (ceilometer can be hitting the nova database a lot when polling here), security impacts (ceilometer needs admin access to the nova API to list floating IPs for all tenants), documentation impacts (the API and CLI changes are not documented), etc. 
So right now we're left with, in my mind, two questions: 1. Do we just fix the spam bug 1328694 and move on, or 2. Do we revert the nova API/CLI changes and require this goes through the nova-spec blueprint review process, which should have happened in the first place. So just to repeat the points I made on the unlogged #os-nova IRC channel earlier, for posterity here ... Nova already exposed an all_tenants flag in multiple APIs (servers, volumes, security-groups etc.) and these would have: (a) generally pre-existed ceilometer's usage of the corresponding APIs and: (b) been tracked and proposed at the time via straight-forward LP bugs, as opposed to being considered blueprint material So the manner of the addition of the all_tenants flag to the floating_ips API looks like it just followed existing custom practice. Though that said, the blueprint process and in particular the nova-specs aspect, has been tightened up since then. My preference would be to fix the issue in the underlying API, but to use this as a teachable moment ... i.e. to require more oversight (in the form of a reviewed approved BP spec) when such API changes are proposed in the future. Cheers, Eoghan Are there other concerns here? If there are no major objections to the code that's already merged, then #2 might be excessive but we'd still need docs changes. I've already put this on the nova meeting agenda for tomorrow. 
[1] https://bugs.launchpad.net/ceilometer/+bug/1328694 [2] https://bugs.launchpad.net/nova/+bug/1262124 [3] https://review.openstack.org/#/c/81429/ [4] https://review.openstack.org/#/c/83660/ [5] https://review.openstack.org/#/c/83676/ -- Thanks, Matt Riedemann

While there is precedent for --all-tenants with some of the other APIs, I'm concerned about where this stops. When ceilometer wants polling on some other resources that the nova API exposes, will it need the same thing? Doing all of this polling for resources in all tenants in nova puts an undue burden on the nova API and the database. Can we do something with notifications here instead? That's where the nova-spec process would have probably caught this. ++ to notifications and not polling. Yeah, I think we need to revert this, and go through the specs process. It's been released in Juno-1 now, so this revert feels bad, but perhaps it's the best of a bad situation?
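To make the push-versus-pull distinction in this thread concrete, here is a toy sketch (the class, event names, and payloads are invented for illustration; real nova notifications go over oslo.messaging with a defined message format): instead of ceilometer listing floating IPs for all tenants every polling interval, the service emits one event per state change and interested consumers subscribe:

```python
class FloatingIpNotifier:
    """Toy push model: the service emits an event when state actually
    changes, instead of consumers polling the admin API for every
    tenant's floating IPs every N seconds."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def associate(self, ip, instance):
        # One state change produces one targeted event, regardless of
        # how many tenants or floating IPs exist in total.
        event = {"event": "floating_ip.associate", "ip": ip, "instance": instance}
        for cb in self._subscribers:
            cb(event)

events = []
notifier = FloatingIpNotifier()
notifier.subscribe(events.append)
notifier.associate("198.51.100.7", "vm-1")
print(events[0]["event"])  # floating_ip.associate
```

The cost comparison is the point: polling scales with (tenants x resources / interval) whether or not anything changed, while notifications scale with the number of actual state changes.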
Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)
On Sat, Jun 14, 2014 at 5:01 AM, Sean Dague s...@dague.net wrote: On 06/13/2014 03:01 PM, Mathew R Odden wrote: I am surprised this became a concern so quickly, but I do understand the strangeness of installing a 'bash8' binary on the command line. I'm fine with renaming to 'bashate' or 'bash_tidy', but renames can take some time to work through all the references. Apparently Sean and I both thought of the 'bashate' name independently (from gpb = jeepyb) but I wasn't too keen on the idea since it isn't very descriptive. 'bash-tidy' makes more sense but we can't use dashes in python package names :( My vote would be for 'bashate' still, since I think that would be the easiest to transition to from the current name. -tidy programs typically rewrite your code (at least html-tidy and perl-tidy do), so I think that's definitely not a name we want, because we aren't doing that (or ever plan to do that). bashate ftw. I completely did not care at all until you suggested this! +1 for bashate!!! Because if you can't have an inside joke buried within your naming of an open source project, what's the point. :) -Sean -- Sean Dague http://dague.net
Re: [openstack-dev] [nova][ceilometer] FloatingIp pollster spamming n-api logs (bug 1328694)
- Original Message - On 11 June 2014 20:07, Joe Gordon joe.gord...@gmail.com wrote: On Wed, Jun 11, 2014 at 11:38 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote: On 6/11/2014 10:01 AM, Eoghan Glynn wrote: Thanks for bringing this to the list Matt, comments inline ... tl;dr: some pervasive changes were made to nova to enable polling in ceilometer which broke some things and in my opinion shouldn't have been merged as a bug fix but rather should have been a blueprint. === The detailed version: I opened bug 1328694 [1] yesterday and found that came back to some changes made in ceilometer for bug 1262124 [2]. Upon further inspection, the original ceilometer bug 1262124 made some changes to the nova os-floating-ips API extension and the database API [3], and changes to python-novaclient [4] to enable ceilometer to use the new API changes (basically pass --all-tenants when listing floating IPs). The original nova change introduced bug 1328694 which spams the nova-api logs due to the ceilometer change [5] which does the polling, and right now in the gate ceilometer is polling every 15 seconds. IIUC that polling cadence in the gate is in the process of being reverted to the out-of-the-box default of 600s. I pushed a revert in ceilometer to fix the spam bug and a separate patch was pushed to nova to fix the problem in the network API. Thank you for that. The revert is just now approved on the ceilometer side, and is wending its merry way through the gate. The bigger problem I see here is that these changes were all made under the guise of a bug when I think this is actually a blueprint. We have changes to the nova API, changes to the nova database API, CLI changes, potential performance impacts (ceilometer can be hitting the nova database a lot when polling here), security impacts (ceilometer needs admin access to the nova API to list floating IPs for all tenants), documentation impacts (the API and CLI changes are not documented), etc. 
So right now we're left with, in my mind, two questions: 1. Do we just fix the spam bug 1328694 and move on, or 2. Do we revert the nova API/CLI changes and require this goes through the nova-spec blueprint review process, which should have happened in the first place. So just to repeat the points I made on the unlogged #os-nova IRC channel earlier, for posterity here ... Nova already exposed an all_tenants flag in multiple APIs (servers, volumes, security-groups etc.) and these would have: (a) generally pre-existed ceilometer's usage of the corresponding APIs and: (b) been tracked and proposed at the time via straight-forward LP bugs, as opposed to being considered blueprint material So the manner of the addition of the all_tenants flag to the floating_ips API looks like it just followed existing custom practice. Though that said, the blueprint process and in particular the nova-specs aspect, has been tightened up since then. My preference would be to fix the issue in the underlying API, but to use this as a teachable moment ... i.e. to require more oversight (in the form of a reviewed approved BP spec) when such API changes are proposed in the future. Cheers, Eoghan Are there other concerns here? If there are no major objections to the code that's already merged, then #2 might be excessive but we'd still need docs changes. I've already put this on the nova meeting agenda for tomorrow. 
[1] https://bugs.launchpad.net/ceilometer/+bug/1328694 [2] https://bugs.launchpad.net/nova/+bug/1262124 [3] https://review.openstack.org/#/c/81429/ [4] https://review.openstack.org/#/c/83660/ [5] https://review.openstack.org/#/c/83676/ -- Thanks, Matt Riedemann

While there is precedent for --all-tenants with some of the other APIs, I'm concerned about where this stops. When ceilometer wants polling on some other resources that the nova API exposes, will it need the same thing? Doing all of this polling for resources in all tenants in nova puts an undue burden on the nova API and the database. Can we do something with notifications here instead? That's where the nova-spec process would have probably caught this. ++ to notifications and not polling. Yeah, I think we need to revert this, and go through the specs process. It's been released in Juno-1 now, so this revert feels bad, but perhaps it's the best of a bad situation? Word of caution, we need to get notifications
Re: [openstack-dev] mysql/mysql-python license contamination into openstack?
On Thu Jun 12 14:13:05 2014, Chris Friesen wrote: Hi, I'm looking for the community viewpoint on whether there is any chance of license contamination between mysql and nova. I realize that lawyers would need to be involved for a proper ruling, but I'm curious about the view of the developers on the list. Suppose someone creates a modified openstack and wishes to sell it to others. They want to keep their changes private. They also want to use the mysql database. The concern is this: nova is Apache licensed; sqlalchemy is MIT licensed; mysql-python (aka mysqldb1) is GPLv2 licensed; mysql is GPLv2 licensed. The concern is that since nova/sqlalchemy/mysql-python are all essentially linked together, an argument could be made that the work as a whole is a derivative work of mysql-python, and thus all the source code must be made available to anyone using the binary. Does this argument have any merit? The GPL is excepted in the case of MySQL and other MySQL products released by Oracle (can you imagine such a sentence being written?), see http://www.mysql.com/about/legal/licensing/foss-exception/. If MySQL-Python itself were an issue, OpenStack could switch to another MySQL library, such as MySQL Connector/Python, which is now MySQL's official Python driver: http://dev.mysql.com/doc/connector-python/en/index.html Usual IANAL caveats ... But do these concerns about license contamination from the DB service, via the client libraries, have any relevance to the prior discussion about MongoDB licensing? Substituting: s/mysql is GPLv2/mongodb is AGPLv3/ s/sqlalchemy is MIT licensed/pymongo is Apache licensed/ and noting the MongoDB Inc promise about database and client library separateness. If there is some relevance, then some of the conclusions to the prior legal discussion may be useful to recall: http://lists.openstack.org/pipermail/legal-discuss/2014-March/000189.html Cheers, Eoghan Has anyone tested any of the mysql DBAPIs with more permissive licenses?
I just mentioned other MySQL drivers the other day; MySQL Connector/Python, OurSQL and pymysql are well tested within SQLAlchemy and these drivers generally pass all tests. There's some concern over compatibility with eventlet, however; I can't speak to that just yet. Chris
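At the deployment level, swapping DBAPI drivers is mostly a connection-string change, since SQLAlchemy selects the driver from the URL's dialect prefix. A sketch of what that would look like in nova's configuration (the credentials, host, and database name here are placeholders):

```ini
# nova.conf -- [database] section
[database]
# Plain mysql:// selects the default DBAPI, MySQL-Python (GPLv2):
#   connection = mysql://nova:secret@127.0.0.1/nova
# Explicitly selecting MySQL Connector/Python via SQLAlchemy's
# mysqlconnector dialect instead:
connection = mysql+mysqlconnector://nova:secret@127.0.0.1/nova
# Or the pure-Python pymysql driver:
#   connection = mysql+pymysql://nova:secret@127.0.0.1/nova
```

Whether the chosen driver cooperates with eventlet's monkey-patching is a separate question, as noted above, and would need its own testing.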
[openstack-dev] nova networking API and CLI are poorly documented and buggy
I am not even sure what the intent is, but some of the behavior looks like it is clearly unintended and not useful (a more precise formulation of "buggy" that is not defeated by the lack of documentation). IMHO, the API and CLI documentation should explain these calls/commands in enough detail that the reader can tell the difference. And the difference should be useful in at least some networking configurations. It seems to me that in some configurations an administrative user may want THREE varieties of the network listing call/command: one that shows networks assigned to his tenant, one that also shows networks available to be assigned, and one that shows all networks. And in no configuration should a non-administrative user be blind to all categories of networks. In the API, there are the calls on /v2/{tenant_id}/os-networks, and they are documented at http://docs.openstack.org/api/openstack-compute/2/content/ext-os-networks.html . There are also calls on /v2/{tenant_id}/os-tenant-networks --- but I can not find documentation for them. http://docs.openstack.org/api/openstack-compute/2/content/ext-os-networks.html does not describe the meaning of the calls in much detail. For example, about GET /v2/{tenant_id}/os-networks that doc says only "Lists networks that are available to the tenant." In some networking configurations, there are two levels of availability: a network might be assigned to a tenant, or a network might be available for assignment. In other networking configurations there are NOT two levels of availability. For example, in flat DHCP nova networking (which is the default in DevStack), a network CAN NOT be assigned to a tenant. You might think that the "to the tenant" qualification implies filtering by the invoker's tenant. But you would be wrong in the case of an administrative user; see the model_query method in nova/db/sqlalchemy/api.py. In the CLI, we have two sets of similar-seeming commands.
For example:

$ nova help net-list
usage: nova net-list

List networks

$ nova help network-list
usage: nova network-list

Print a list of available networks.

Those remarks are even briefer than the one description in the API doc, omitting the qualification "to the tenant". Experimentation shows that, in the case of flat DHCP nova networking, both of those commands show zero networks to a non-administrative user (and remember that networks can not be assigned to tenants in that configuration) and all the networks to an administrative user. At the API, the GET calls behave the same way. The fact that a non-administrative user sees zero networks looks unintended and not useful. See https://bugs.launchpad.net/openstack-manuals/+bug/1152862 and https://bugs.launchpad.net/nova/+bug/1327406 Can anyone tell me why there are both /os-networks and /os-tenant-networks calls and what their intended semantics are? Thanks, Mike
[openstack-dev] [Sahara][Swift] Swift integration with Apache Spark
Hi All, I would like to share with you my recent efforts on the integration between Swift and Apache Spark. Spark claims map reduce analytics up to 100x faster than conventional Apache Hadoop. (See http://spark.apache.org/ for more information about Spark.) Spark can read data from various sources: HDFS, S3, the local file system, and various streaming sources. Spark is then used to perform analytics on this data. I started to work on the integration between Spark and Swift, allowing Spark to perform data analytics on the objects stored in Swift. In my local tests this works very well. There are no modifications needed for Swift. I submitted patches to the Spark community with information on how to integrate it with Swift. This work is still in progress. https://github.com/apache/spark/pull/1010 All the best, Gil Vernik.
Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate
You know it's bad when you can't sleep because you're redesigning gate workflows in your head, so I apologise that this email is perhaps not as rational, nor as organised, as usual. :) Obviously this is very important to address, and if we can come up with something systemic I'm going to devote my time both directly, and via resource-hunting within HP, to addressing it. And accordingly I'm going to feel free to say 'zuul this' with no regard for existing features. We need to get ahead of the problem and figure out how to stay there, and I think below I show why the current strategy just won't do that. On 13 June 2014 06:08, Sean Dague s...@dague.net wrote: We're hitting a couple of inflection points. 1) We're basically at capacity for the unit work that we can do. Which means it's time to start making decisions if we believe everything we currently have running is more important than the things we aren't currently testing. Everyone wants multinode testing in the gate. It would be impossible to support that given current resources.

How much of our capacity problems are due to waste, such as:
- tempest runs of code the author knows is broken
- tempest runs of code that doesn't pass unit tests
- tempest runs while the baseline is unstable (to expand on this one: if master only passes one commit in 4, no check job can have a higher success rate overall)

Versus how much is an indication of the sheer volume of development being done? 2) We're far past the inflection point of people actually debugging jobs when they go wrong. The gate is backed up (currently to 24hrs) because there are bugs in OpenStack. Those are popping up at a rate much faster than the number of people who are willing to spend any time on them. And often they are popping up in configurations that we're not all that familiar with. So, I *totally* appreciate that people fixing the jobs is the visible expendable resource, but I'm not sure it's the bottleneck.
I think the bottleneck is our aggregate ability to a) detect the problem and b) resolve it. For instance - strawman - when the gate goes bad, after a check for external issues like new SQLAlchemy releases etc, what if we just rolled trunk of every project that is in the integrated gate back to before the success rate nosedived? I'm well aware of the DVCS issues that implies, but from a human debugging perspective that would massively increase the leverage we get from the folk that do dive in and help. It moves from 'figure out that there is a problem and it came in after X AND FIX IT' to 'figure out it came in after X'. Reverting is usually much faster and more robust than rolling forward, because rolling forward has more unknowns. I think we have a systematic problem, because this situation happens again and again. And the root cause is that our time to detect races/nondeterministic tests is a probability function, not a simple scalar. Sometimes we catch such tests within one patch in the gate, sometimes they slip through. If we want to land hundreds or thousands of patches a day, and we don't want this pain to happen, I don't see any way other than one of:

A - not doing this whole gating CI process at all
B - making detection a whole lot more reliable (e.g. we want near-certainty that a given commit does not contain a race)
C - making repair a whole lot faster (e.g. we want <= one test cycle in the gate to recover once we have determined that some commit is broken)

Taking them in turn: A - yeah, no. We have lots of experience with the axiom that that which is not tested is broken. And that's the big concern about removing things from our matrix - when they are not tested, we can be sure that they will break and we will have to spend neurons fixing them - either directly or as reviews from people fixing it. B - this is really hard.
Say we want to be quite sure that there are no new races that will occur with more than some probability in a given commit, and we assume that race codepaths might be run just once in the whole test matrix. A single test run can never tell us that - it just tells us it worked. What we need is some N trials where we don't observe a new race (but may observe old races), given a maximum acceptable risk of the introduction of a (say) 5% failure rate into the gate. [check my stats]

(1 - max_risk)^trials <= margin_of_error
0.95^N = 0.01
N = log(0.01, base=0.95)
N ~= 90

So if we want to stop 5% races landing, and we may exercise any given possible race code path a minimum of 1 time in the test matrix, we need to exercise the whole test matrix 90 times to be sure, with a 1% margin, that we saw it. Raise that to a 1% race:

log(0.01, base=0.99) ~= 458

That's a lot of test runs. I don't think we can do that for each commit with our current resources - and I'm not at all sure that asking for enough resources to do that makes sense. Maybe it does. Data point - our current risk, with a 1% margin:

(1 - max_risk)^1 = 0.01, so max_risk = 99% (that is, a single passing gate
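Robert's arithmetic above can be checked mechanically. A small sketch of the trials-needed calculation (the 5% and 1% failure rates and the 1% margin are the figures from the mail; the function name is mine):

```python
import math

def trials_needed(race_rate, margin):
    """Number of independent full test-matrix runs N needed so that a
    race which fails `race_rate` of the time escapes detection with
    probability at most `margin`: solve (1 - race_rate)**N <= margin."""
    return math.ceil(math.log(margin) / math.log(1.0 - race_rate))

print(trials_needed(0.05, 0.01))  # 90: runs to catch a 5% race with 1% margin
print(trials_needed(0.01, 0.01))  # 459: a 1% race (the mail rounds to 458)
```

The rounding direction matters slightly (ceil here versus the mail's 458), but the conclusion is the same: catching rarer races needs hundreds of full matrix runs per commit.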
Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core
On 6/14/2014 5:40 AM, Sean Dague wrote: On 06/13/2014 06:40 PM, Michael Still wrote: Greetings, I would like to nominate Ken'ichi Ohmichi for the nova-core team. Ken'ichi has been involved with nova for a long time now. His reviews on API changes are excellent, and he's been part of the team that has driven the new API work we've seen in recent cycles forward. Ken'ichi has also been reviewing other parts of the code base, and I think his reviews are detailed and helpful. Please respond with +1s or any concerns. +1 References: https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z http://www.stackalytics.com/?module=nova-groupuser_id=oomichi As a reminder, we use the voting process outlined at https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our core team. Thanks, Michael ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev +1 -- Thanks, Matt Riedemann ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova][neutron][NFV] Mid cycle sprints
A sprint in Lisbon sounds very good to me. I lived a while in Portugal and Portuguese is my second language. This is very short notice so it is probably not possible for me to make it during this cycle. Don't count on me, but if an event is scheduled in Lisbon, I'd certainly want to give it a try. An event during a future cycle would be much easier to plan for. Carl On Jun 13, 2014 3:00 PM, Carlos Gonçalves m...@cgoncalves.pt wrote: Let me add to what I've said in my previous email, that Instituto de Telecomunicacoes and Portugal Telecom are also available to host and organize a mid cycle sprint in Lisbon, Portugal. Please let me know who may be interested in participating. Thanks, Carlos Goncalves On 13 Jun 2014, at 10:45, Carlos Gonçalves m...@cgoncalves.pt wrote: Hi, I like the idea of arranging a mid cycle for Neutron in Europe somewhere in July. I was also considering inviting folks from the OpenStack NFV team to meet up for a F2F kick-off. I did not know about the sprint being hosted and organised by eNovance in Paris until just now. I think it is a great initiative from eNovance, especially as it's not focused on a specific OpenStack project. So, I'm interested in participating in this sprint for discussing Neutron and NFV. Two more people from Instituto de Telecomunicacoes and Portugal Telecom have shown interest too. Neutron and NFV team members, who's interested in meeting in Paris, or, if not available on the date set by eNovance, at another time and place? Thanks, Carlos Goncalves On 13 Jun 2014, at 08:42, Sylvain Bauza sba...@redhat.com wrote: On 12/06/2014 15:32, Gary Kotton wrote: Hi, There is the mid cycle sprint in July for Nova and Neutron. Anyone interested in maybe getting one together in Europe/Middle East around the same dates? If people are willing to come to this part of the world I am sure that we can organize a venue for a few days. Anyone interested? If we can get a quorum then I will be happy to try and arrange things. 
Thanks Gary Hi Gary, Wouldn't it be more interesting to have a mid-cycle sprint *before* the Nova one (which is targeted after juno-2), so that we could discuss some topics and report status to other folks, allowing a second run? There is already a proposal in Paris for hosting some OpenStack sprints, see https://wiki.openstack.org/wiki/Sprints/ParisJuno2014 -Sylvain ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova][neutron][NFV] Mid cycle sprints
On 13 June 2014 11:45, Carlos Gonçalves m...@cgoncalves.pt wrote: Neutron and NFV team members, who’s interested in meeting in Paris, or if not available on the date set by eNovance in other time and place? I'd be very interested in an NFV meet up in Paris in July. Cheers, -Luke ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing
I noticed this afternoon (Saturday PST 1:18pm) that most of the Third Party test systems started to fail because of the setuptools bug caused by a dependency in python-swiftclient. I further noticed that some of the CI's are voting +1, but, when I look through the logs, they seem to be hitting this issue as well. I have been on #openstack-infra most of the afternoon discussing various options suggested by folks. Infra folks have confirmed this issue and are looking for a solution. I tried the fixes suggested in [1] and [2] below, removed setuptools and reinstalled version 3.8. This did not help. I have opened the bug [3] to track this issue. I thought I'd send out this message in case other CI maintainers are investigating this issue. Please share ideas/thoughts so that we can get the CIs fixed as soon as possible. Thanks -Sukhdev [1] https://bugs.launchpad.net/python-swiftclient/+bug/1326972 [2] https://mail.python.org/pipermail/distutils-sig/2014-June/024478.html [3] https://bugs.launchpad.net/python-swiftclient/+bug/1330140 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [neutron] [third-party] Current status of Neutron 3rd Party CI and how to move forward
Hi Kyle, Arista CI has been voting +1 for success and commenting in case of failures. Are the CIs now allowed to post -1 for failures? I have to make a minor change to start voting -1. Please advise. -Sukhdev On Fri, Jun 13, 2014 at 10:07 AM, Kyle Mestery mest...@noironetworks.com wrote: I've spent some time doing some initial analysis of 3rd Party CI in Neutron. The tl;dr is that it's a mess, and it needs fixing. And I'm setting a deadline of Juno-2 for people to address their CI systems and get them in shape, or we will remove plugins and drivers in Juno-3 which do not meet the expectations set out below. My initial analysis of Neutron 3rd Party CI is here [1]. This was somewhat correlated with information from DriverLog [2], which was helpful to put this together. As you can see from the list, there are a lot of CI systems which are broken right now. Some have just recently started working again. Others are working great, and some are in the middle somewhere. The overall state isn't that great. I'm sending this email to openstack-dev and BCC'ing CI owners to raise awareness of this issue. If I have incorrectly labeled your CI, please update the etherpad, include links to the latest voting/comments your CI system has done upstream, and reply to this thread. I have documented the 3rd Party CI requirements for Neutron here [3]. I expect people to be following these guidelines for their CI systems. If there are questions on the guidelines or expectations, please reply to this thread or reach out to me in #openstack-neutron on Freenode. There is also a third-party meeting [4] which is a great place to ask questions and share your experience setting up a 3rd party CI system. The infra team has done a great job sponsoring and running this meeting (thanks Anita!), so please both take advantage of it and also contribute to it so we can all share knowledge and help each other. 
Owners of plugins/drivers should ensure their CI is matching the requirements set forth by both infra and Neutron when running tests and posting results. Like I indicated earlier, we will look at removing code for drivers which are not meeting these requirements as set forth in the wiki pages. The goal of this effort is to ensure consistency across testing platforms, making it easier for developers to diagnose issues when third party CI systems fail, and to ensure these drivers are tested since they are part of the integrated releases we perform. We used to require a core team member to sponsor a plugin/driver, but we moved to the 3rd party CI system in Icehouse instead. Ensuring these systems are running and properly working is the only way we can ensure code is working when it's part of the integrated release. Thanks, Kyle [1] https://etherpad.openstack.org/p/ZLp9Ow3tNq [2] http://www.stackalytics.com/driverlog/?project_id=openstack%2Fneutronvendor=release_id= [3] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting [4] https://wiki.openstack.org/wiki/Meetings/ThirdParty ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core
On 06/13/2014 06:40 PM, Michael Still wrote: Greetings, I would like to nominate Ken'ichi Ohmichi for the nova-core team. +1 -- Russell Bryant ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core
+1 On Jun 13, 2014, at 3:40 PM, Michael Still mi...@stillhq.com wrote: Greetings, I would like to nominate Ken'ichi Ohmichi for the nova-core team. Ken'ichi has been involved with nova for a long time now. His reviews on API changes are excellent, and he's been part of the team that has driven the new API work we've seen in recent cycles forward. Ken'ichi has also been reviewing other parts of the code base, and I think his reviews are detailed and helpful. Please respond with +1s or any concerns. References: https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z http://www.stackalytics.com/?module=nova-groupuser_id=oomichi As a reminder, we use the voting process outlined at https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our core team. Thanks, Michael -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron] Debugging Devstack Neutron with Pycharm
I think we need to pick either one or the other. We currently have two places where debugging is documented: the OpenStack wiki and the Neutron tree http://git.openstack.org/cgit/openstack/neutron/tree/TESTING.rst#n143 Sean M. Collins From: Gal Sagie [gsa...@vmware.com] Sent: Wednesday, June 11, 2014 8:36 AM To: ges...@cisco.com Cc: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Neutron] Debugging Devstack Neutron with Pycharm Thanks a lot, this works, I will update the wiki. Gal. From: Henry Gessau ges...@cisco.com To: openstack-dev@lists.openstack.org Sent: Wednesday, June 11, 2014 2:41:47 PM Subject: Re: [openstack-dev] [Neutron] Debugging Devstack Neutron with Pycharm Gal Sagie wrote: I am trying to debug devstack Neutron with PyCharm. I found here (https://wiki.openstack.org/wiki/NeutronDevelopment#How_to_debug_Neutron_.28and_other_OpenStack_projects_probably_.29) that I need to change the neutron server code from: eventlet.monkey_patch() to: eventlet.monkey_patch(os=False, thread=False) I don't need to do this. But I do need to go into the PyCharm settings under Python Debugger and enable Gevent compatible debugging. I have done so, and debug seems to run, but when I try to initiate commands from the CLI I get this: gal@ubuntu:~/devstack$ neutron net-list Connection to neutron failed: Maximum attempts reached Are you sure you have sourced openrc correctly for the credentials? (the server seems to run ok...) Any help is appreciated as I am trying to learn and understand main flows by debugging the code locally. First I start devstack in offline mode. OFFLINE=True ./stack.sh Once it is running I go to the neutron window in screen. There I stop neutron-server with ctrl-C, and press up-arrow to view the start command. 
To run neutron-server in the PyCharm debugger edit the Run/Debug configuration with the following settings: Script: /usr/local/bin/neutron-server Script params: --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini # I got this from the screen window where I stopped neutron Working directory: /opt/stack/neutron Now restart neutron from PyCharm instead of screen. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
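For reference, the Run/Debug configuration above corresponds to the following command line (paths taken directly from the settings listed; they assume a standard devstack install), which is essentially what screen was running before neutron-server was stopped. This is a launch fragment for the devstack host, not something runnable standalone:

```shell
# Equivalent of the PyCharm Run/Debug configuration above.
# Run on the devstack host after stopping neutron-server in screen.
/usr/local/bin/neutron-server \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
```

Running this by hand first is a quick way to confirm the script path and config files are correct before wiring the same values into the debugger.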
Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing
Fellow Stackers, I have an update on the issue. Kudos to the Infra folks - a huge thanks to Monty for coming up with a patch for this setuptools issue, and to Anita for being on top of this. Please follow the steps in http://paste.openstack.org/show/84076/ to pull this patch on your local systems to get past the issue - until the fix in the upstream is merged. Note that you have to install mercurial to pull this patch. Hope this helps. regards.. -Sukhdev On Sat, Jun 14, 2014 at 5:45 PM, Sukhdev Kapur sukhdevka...@gmail.com wrote: I noticed this afternoon (Saturday PST 1:18pm) that most of the Third Party test systems started to fail because of the setuptools bug caused by a dependency in python-swiftclient. I further noticed that some of the CI's are voting +1, but, when I look through the logs, they seem to be hitting this issue as well. I have been on #openstack-infra most of the afternoon discussing various options suggested by folks. Infra folks have confirmed this issue and are looking for a solution. I tried the fixes suggested in [1] and [2] below, removed setuptools and reinstalled version 3.8. This did not help. I have opened the bug [3] to track this issue. I thought I'd send out this message in case other CI maintainers are investigating this issue. Please share ideas/thoughts so that we can get the CIs fixed as soon as possible. Thanks -Sukhdev [1] https://bugs.launchpad.net/python-swiftclient/+bug/1326972 [2] https://mail.python.org/pipermail/distutils-sig/2014-June/024478.html [3] https://bugs.launchpad.net/python-swiftclient/+bug/1330140 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][Swift][third-party] Most Third Party CI's failing
Oppss... sorry, wrong link - please use this one: http://paste.openstack.org/show/84073/. If anybody needs help, please ping me or go to #openstack-infra. regards.. -Sukhdev On Sat, Jun 14, 2014 at 9:34 PM, Sukhdev Kapur sukhdevka...@gmail.com wrote: Fellow Stackers, I have an update on the issue. Kudos to the Infra folks - a huge thanks to Monty for coming up with a patch for this setuptools issue, and to Anita for being on top of this. Please follow the steps in http://paste.openstack.org/show/84076/ to pull this patch on your local systems to get past the issue - until the fix in the upstream is merged. Note that you have to install mercurial to pull this patch. Hope this helps. regards.. -Sukhdev On Sat, Jun 14, 2014 at 5:45 PM, Sukhdev Kapur sukhdevka...@gmail.com wrote: I noticed this afternoon (Saturday PST 1:18pm) that most of the Third Party test systems started to fail because of the setuptools bug caused by a dependency in python-swiftclient. I further noticed that some of the CI's are voting +1, but, when I look through the logs, they seem to be hitting this issue as well. I have been on #openstack-infra most of the afternoon discussing various options suggested by folks. Infra folks have confirmed this issue and are looking for a solution. I tried the fixes suggested in [1] and [2] below, removed setuptools and reinstalled version 3.8. This did not help. I have opened the bug [3] to track this issue. I thought I'd send out this message in case other CI maintainers are investigating this issue. Please share ideas/thoughts so that we can get the CIs fixed as soon as possible. Thanks -Sukhdev [1] https://bugs.launchpad.net/python-swiftclient/+bug/1326972 [2] https://mail.python.org/pipermail/distutils-sig/2014-June/024478.html [3] https://bugs.launchpad.net/python-swiftclient/+bug/1330140 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Neutron][ml2] Tracking the reviews for ML2 related specs
Hi Mohammad, Thank you for sharing the links. Can you please elaborate on the columns of the table in [1]? Is [R] supposed to be for spec review and [C] for code review? If this is correct, would it be possible to add [C] columns for already-merged specs that still have the code under review? Thanks a lot, Irena From: Mohammad Banikazemi [mailto:m...@us.ibm.com] Sent: Friday, June 13, 2014 8:02 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Neutron][ml2] Tracking the reviews for ML2 related specs In order to make the review process a bit easier (without duplicating too much data and without creating too much overhead), we have created a wiki to keep track of the ML2 related specs for the Juno cycle [1]. The idea is to organize the people who participate in the ML2 subgroup activities and get the related specs reviewed as much as possible in the subgroup before asking the broader community to review. (There is of course nothing that prevents others from reviewing these specs as soon as they are available for review.) If you have any ML2 related spec under review or being planned, you may want to update the wiki [1] accordingly. We will see if this will be useful or not. If you have any comments or suggestions please post here or bring them to the IRC weekly meetings [2]. Best, Mohammad [1] https://wiki.openstack.org/wiki/Tracking_ML2_Subgroup_Reviews [2] https://wiki.openstack.org/wiki/Meetings/ML2 ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova]{neutron] Mid cycle sprints
+1. Would love to join the gang :) -Original Message- From: Assaf Muller [mailto:amul...@redhat.com] Sent: Friday, June 13, 2014 4:21 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Nova]{neutron] Mid cycle sprints - Original Message - Hi, There is the mid cycle sprint in July for Nova and Neutron. Anyone interested in maybe getting one together in Europe/Middle East around the same dates? If people are willing to come to this part of the world I am sure that we can organize a venue for a few days. Anyone interested? If we can get a quorum then I will be happy to try and arrange things. +1 on an Israel sprint :) Thanks Gary ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core
+1 On 6/15/14, 6:15 AM, Chris Behrens cbehr...@codestud.com wrote: +1 On Jun 13, 2014, at 3:40 PM, Michael Still mi...@stillhq.com wrote: Greetings, I would like to nominate Ken'ichi Ohmichi for the nova-core team. Ken'ichi has been involved with nova for a long time now. His reviews on API changes are excellent, and he's been part of the team that has driven the new API work we've seen in recent cycles forward. Ken'ichi has also been reviewing other parts of the code base, and I think his reviews are detailed and helpful. Please respond with +1s or any concerns. References: https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z http://www.stackalytics.com/?module=nova-group&user_id=oomichi As a reminder, we use the voting process outlined at https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our core team. Thanks, Michael -- Rackspace Australia ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev