[openstack-dev] [neutron][dynamic-routing] Broken unit tests
Hi, Can cores please look at https://review.openstack.org/543208. We are currently blocked on master and stable/queens with this. Thanks Gary __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [cyborg][glance][nova] Cyborg FPGA management flow discussion.
Now I am working on an FPGA management POC with Dolpher. We have finished some code and have had discussions with Li Liu and some other Cyborg developers. Here is a summary:

Image management:
1. The user should upload the FPGA image to Glance and set its tags. There are two suggestions for uploading an FPGA image:
A. Use the raw Glance API, e.g.:
$ openstack image create --file mypath/FPGA.img fpga.img
$ openstack image set --tag FPGA --property vendor=intel --property type=crypto 58b813db-1fb7-43ec-b85c-3b771c685d22
The image must have the "FPGA" tag and an accelerator type (such as type=crypto).
B. Cyborg supports a new API to upload an image. This API wraps the Glance API, performs the above steps, and also records the image in its local DB.
2. The Cyborg agent/conductor gets the FPGA image info from Glance. There are also two suggestions for this:
A. Use the raw Glance API. Cyborg will periodically fetch images by the FPGA tag and timestamp and store them in its local cache. It will use the image tags and properties to form placement traits and resource_class names.
B. Store the information when Cyborg's new upload API is called.
3. Image download: call the Glance image download API to write to a local file, and create a corresponding md5 file for checksum verification.
GAP in image management: the related Glance image client is missing in Cyborg.

Resource report management for the scheduler:
1. The Cyborg agent/conductor needs to synthesize all useful information from the FPGA driver and the image information. The traits will be like CUSTOM_FPGA and CUSTOM_ACCELERATOR_CRYPTO; the resource_class names will be like CUSTOM_FPGA_INTEL_PF and CUSTOM_FPGA_INTEL_VF:
{"inventories": {
    "CUSTOM_FPGA_INTEL_PF": {
        "allocation_ratio": 1.0,
        "max_unit": 4,
        "min_unit": 1,
        "reserved": 0,
        "step_size": 1,
        "total": 4
    }
}}

Accelerator claim and release:
1. Cyborg will support the related APIs for accelerator claim and release. They accept the following parameters:
nodename: the host the accelerator is located on; required.
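The tag/property-to-traits mapping described above could look roughly like this (a hypothetical sketch to illustrate the naming scheme; the helper name and exact rules are assumptions, not actual Cyborg code):

```python
def image_to_placement(tags, properties):
    """Derive placement traits and a resource_class name from the Glance
    tags and properties of an FPGA bitstream image.

    Hypothetical sketch only: the real Cyborg agent/conductor logic may
    differ in naming and validation.
    """
    if "FPGA" not in tags:
        raise ValueError("image is not tagged as an FPGA bitstream")
    traits = ["CUSTOM_FPGA"]
    if "type" in properties:
        # e.g. type=crypto -> CUSTOM_ACCELERATOR_CRYPTO
        traits.append("CUSTOM_ACCELERATOR_%s" % properties["type"].upper())
    # e.g. vendor=intel -> CUSTOM_FPGA_INTEL_PF
    resource_class = "CUSTOM_FPGA_%s_PF" % properties["vendor"].upper()
    return traits, resource_class

traits, rc = image_to_placement(["FPGA"], {"vendor": "intel", "type": "crypto"})
# rc == "CUSTOM_FPGA_INTEL_PF"; traits include "CUSTOM_ACCELERATOR_CRYPTO"
```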
type: the accelerator type; Cyborg can look up the image uuid from it. Optional.
image uuid: the uuid of the FPGA bitstream image. Optional.
traits: the traits info that Cyborg reports to placement.
resource_class: the resource_class name that is reported to placement.
The claim call returns the address of the accelerator; at present this is the PCIe address.
2. When claiming an accelerator with both type and image set to None, Cyborg will not program the FPGA for the user.

FPGA accelerator program API:
We still need to support an independent program API for some specific scenarios. For example, as an FPGA developer I change my Verilog logic frequently and need to verify it on my guest: I upload my new bitstream image to Glance and call Cyborg to program my FPGA accelerator.

End user operation flow:
1. Upload a bitstream image to Glance if necessary and set its tags (at least FPGA is required) and properties, such as: --tag FPGA --property vendor=intel --property type=crypto
2. List the FPGA-related traits and resource_class names via the placement API, e.g. the "CUSTOM_FPGA_INTEL_PF" resource_class name and the "CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO" traits.
3. Create a new flavor with the expected traits and resource_class as extra specs, such as "resources<n>:CUSTOM_FPGA_INTEL_PF=2" (n is an integer or an empty string) and "required:CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO".
4. Create the VM with this flavor.

BR
Shaohe Feng
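The "resources<n>:" extra-spec syntax in step 3 can be illustrated with a small parser (a sketch of the grouping rule only, not Nova's implementation; the function name is made up):

```python
import re

def parse_resource_specs(extra_specs):
    """Group 'resources<n>:RESOURCE_CLASS=count' flavor extra specs by
    their numeric suffix; the empty suffix is the unnumbered group.

    Illustrative sketch of the syntax described above.
    """
    groups = {}
    pattern = re.compile(r"^resources(\d*):(.+)$")
    for key, value in extra_specs.items():
        match = pattern.match(key)
        if match:
            suffix, resource_class = match.groups()
            groups.setdefault(suffix, {})[resource_class] = int(value)
    return groups

specs = {"resources:CUSTOM_FPGA_INTEL_PF": "2", "resources1:VCPU": "4"}
parse_resource_specs(specs)
# -> {"": {"CUSTOM_FPGA_INTEL_PF": 2}, "1": {"VCPU": 4}}
```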
Re: [openstack-dev] [all][Kingbird][Heat][Glance] Multi-Region Orchestrator
Hi Zane, Sorry for the late reply, I was on leave for a couple of days. Firstly, thanks for the clear and detailed analysis and suggestions on quotas and resource management, it really means a lot to us :). Secondly, these are the use cases Kingbird is mainly developed for.

*OUR USE-CASES, QUOTA MANAGEMENT:*
1. Admin must have a global view of the quotas of all tenants across all regions.
2. Admin can periodically balance the quotas across regions (we have a formula with which we do this balancing).
3. Admin can update and delete quotas for tenants.
4. Admin can sync quotas for all tenants so that the quotas are updated in all regions.

*USE-CASES FOR RESOURCE MANAGEMENT:*
1. Resources which are required to boot up a VM in one region should be accessible in the other target regions. For this, Kingbird has support for the following:
a) Sync/replicate existing Nova keypairs
b) Sync/replicate existing Glance images
c) Sync/replicate existing Nova flavors (only admin can sync these).
2. A user who has a VM in one region should be able to easily create a replica of the same VM in the target region(s).
a) It can be a snapshot of the already-booted-up VM, or use the same qcow2 image.

*GENERIC USE-CASES*
1. Automation scripts for Kingbird in Ansible, Salt, Puppet.
2. Add SSL support to Kingbird.
3. Resource management in the Kingbird dashboard.
4. Kingbird in a Docker container.
5. Add Kingbird into Kolla.

On Fri, Feb 9, 2018 at 12:47 AM, Zane Bitter wrote:
> On 07/02/18 12:24, Goutham Pratapa wrote:
>
>> Yes, as you said, it can be interpreted as a tool that can
>> orchestrate multiple regions.
>
> Actually from your additional information I'm now getting the impression
> that you are, in fact, positioning this as a partial competitor to Heat.

To some extent yes. Till now we have focused on resource synchronization and quota balancing for various tenants across multiple regions. But in the coming cycle we want to enter the orchestration game.
>> Just to be sure, does OpenStack already have a project which can
>> replicate the resources and orchestrate?
>
> OpenStack has an orchestration service - Heat - and it allows you to do
> orchestration across multiple regions by creating a nested Stack in an
> arbitrary region as a resource in a Heat Stack.[1]
>
> Heat includes the ability to create Nova keypairs[2] and even, for those
> users with sufficient privileges, flavors[3] and quotas[4][5][6]. (It used
> to be able to create Glance images as well, but this was deprecated because
> it is not feasible using the Glance v2 API.)
>
> [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::Stack
> [2] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::KeyPair
> [3] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Flavor
> [4] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Quota
> [5] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Cinder::Quota
> [6] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Neutron::Quota
>
>> Why? Because in the coming
>> cycle our idea is that a user just gives a VM ID or VM name and we
>> sync all the resources with which the VM was actually created.
>> Of course we can't have the same network in the target region, so we may
>> need the network-id or port-id in the target region from the user, so
>> that Kingbird can boot up the requested VM in the target region(s).
>
> So it sounds like you are starting from the premise that users will create
> stuff in an ad-hoc way, then later discover that they need to replicate
> their ad-hoc deployments to multiple regions, and you're building a tool to
> do that.
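The cross-region nested-stack pattern referenced in [1] above looks roughly like this (a minimal sketch; the region name and child template filename are made up for illustration):

```yaml
heat_template_version: 2016-04-08
resources:
  remote_stack:
    type: OS::Heat::Stack
    properties:
      context:
        region_name: RegionTwo     # deploy the nested stack in another region
      template: {get_file: child.yaml}
```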
Heat, on the other hand, starts from the premise that users will > invest a little up-front effort to create a declarative definition of their > deployment, which they can then deploy repeatably in multiple (or the > same!) regions. Our experience is that people have shown themselves to be > quite willing to do this, because repeatable deployments have lots of > benefits. > Yes that is true. But, our idea is the same as what you have stated above > ` *So it sounds like you are starting from the premise that users will > create stuff in an ad-hoc way, then later discover that they need to > replicate their ad-hoc deployments to multiple regions *` to reduce the > repeatable deployments. > > Looking at the things you want to synchronise: > > * Quotas > > Synchronize after balancing quotas across regions. (our use-case is if an > admin user wants to know the global limit for a tenant across regions then > he can view, update and delete from one region using Kingbird.) > > Operators can already use Heat templates to manage these if they so desire. > > * Flavors > > Some clouds allow users to create flavors, and those
[openstack-dev] [tripleo][nova][neutron] changing the default qemu group in tripleo
Hello, With OvS 2.8, the USER and GROUP under which OvS runs have been changed to openvswitch:openvswitch (for regular OvS builds) and openvswitch:hugetlbfs (for DPDK-enabled OvS builds). Since the Fedora family always has DPDK-enabled builds, all TripleO deployments will have OvS running as openvswitch:hugetlbfs. For DPDK, qemu should also run with the same group "hugetlbfs" so that the vhost sockets can be shared between qemu and openvswitch. So we are making a change to set "group" in /etc/libvirt/qemu.conf to "hugetlbfs" for DPDK deployments, and it is all working fine. Now the question is: should we make qemu run with the same group on all nodes of the deployment, or only on the nodes which have DPDK enabled? It is possible for DPDK nodes to host non-DPDK VMs (like SR-IOV or regular tenant VMs), so all VMs would be running with the "qemu:hugetlbfs" user and group. To avoid conflicts from running different groups on different roles of a TripleO deployment, I prefer to update the qemu group to "hugetlbfs" on the nodes of all roles if DPDK is enabled in the deployment. Let us know if you see any issues with this approach. Regards, Saravanan KR
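Concretely, the change amounts to a libvirt setting like the following (a sketch of the relevant qemu.conf lines only; the user value is an assumption, not the actual TripleO patch):

```ini
# /etc/libvirt/qemu.conf
user = "qemu"
group = "hugetlbfs"   # match the OvS group so vhost-user sockets are shareable
```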
[openstack-dev] [osc][python-openstackclient] Consistency of option name
Hi all, I was working on the OSC plugin of my project and trying to choose a CLI option to represent the availability zone of the container. When I came across the existing commands, I saw some inconsistencies in the naming. Some commands use the syntax '--zone <zone>', while others use the syntax '--availability-zone <availability-zone>'. For example:
* openstack host list ... [--zone <zone>]
* openstack aggregate create ... [--zone <availability-zone>]
* openstack volume create ... [--availability-zone <availability-zone>]
* openstack consistency group create ... [--availability-zone <availability-zone>]
I wonder if it makes sense to address this inconsistency. Is it possible to have all commands use one syntax? Best regards, Hongbin
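One common way to converge on a single spelling without breaking existing scripts is to register both option strings against the same destination (a sketch using plain argparse, not actual OSC plugin code; the help text is made up):

```python
import argparse

parser = argparse.ArgumentParser()
# Canonical long spelling plus the short one as a legacy alias; both
# write to the same destination, so command code sees one attribute.
parser.add_argument(
    "--availability-zone", "--zone",
    dest="availability_zone",
    metavar="<availability-zone>",
    help="Availability zone (--zone is kept as a legacy alias)",
)

args = parser.parse_args(["--zone", "nova"])
print(args.availability_zone)  # -> nova
```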
[openstack-dev] [murano] Next 2 team meetings canceled
Hi Teams, Let's cancel the meetings on 13 Feb and 20 Feb because of the Chinese Spring Festival. Cheers, Rong Zhu
[openstack-dev] [FFE][requirements][release][oslo] osprofiler bug fix needed
Hello, The Oslo releases for Queens are now out. However, OSProfiler has an issue that makes some Nova CLI commands fail. Details of this issue: https://launchpad.net/bugs/1743586 Patch that fixes this bug: https://review.openstack.org/#/c/535219/ Backport for this: https://review.openstack.org/#/c/537735/ Release of a new OSProfiler version with this bug fix in Queens: https://review.openstack.org/#/c/541645/ Therefore, I am sending this email to request an FFE for it. Thank you! -- Best, Tovin Nguyễn Trọng Vĩnh (Tovin Seven) Email: tovi...@gmail.com
Re: [openstack-dev] [Openstack-operators] [publiccloud-wg][keystone][Horizon] Multi-Factor Auth in OpenStack
On 09/02/18 15:50, Lance Bragstad wrote: > On 02/08/2018 03:36 PM, Adrian Turjak wrote: >> My plan for the Rocky cycle is to work in Keystone and address the missing >> pieces I need to get MFA working properly throughout OpenStack in an >> actually useful way, and I'll provide updates for that once I have the specs >> ready to submit (I am waiting until the start of Rocky for that). The good thing >> is that this current solution for MFA works, and users can be migrated from it to >> the methods I intend to work on for Rocky. The same credential models will >> be used in Keystone, and I will write tools to take users with TOTP >> credentials and configure auth rules for them for more official MFA support >> in Keystone once it is useful. > Are you planning to revive the previous proposal [0]? We should have a > stable/queens branch by EOW, so Rocky development will be here soon. Are > you planning on attending the PTG? It might be valuable to discuss what > you have and how we can integrate it upstream. I thought I remembered the > issue being policy related (where admins were required to update user > secrets and it wasn't necessarily a self-service API). Now that we're in > a better place with system scope, we might be able to move the ball > forward a bit regarding your use case. > > [0] https://review.openstack.org/#/c/345705/ So the use case is not just self-management; that's a part of it, but one at least we've solved outside of Keystone. The bigger issue is that MFA as we currently have it in Keystone is... unfinished and very hard to consume. And no, I won't be coming to the PTG. :( The multi-auth-method approach is good, as are the per-user auth rules, but right now nothing is consuming it using more than one method. In fact, KeystoneAuth doesn't know how to deal with it. In part that is my fault, since I put my hand up to make KeystoneAuth work with multi-method auth, but... I gave up because it got ugly fast.
We could make auth methods in KeystoneAuth that are made up of multiple methods, but then you need an explicit auth module for each combination... We need to refactor that code to allow you to specify a combination and have the code underneath do the right thing. The other issue is that you always need to know ahead of time how to auth for a given user and their specific auth rules, and you can't programmatically figure that out. The missing piece is something that allows us to programmatically know what is missing when 1 out of 2+ auth rules succeeds. When a user with more than 1 auth rule attempts to auth to Keystone, if they auth with 1 rule but need 2 (password and totp), then the auth will fail and the error will be unhelpful. Even if the error was helpful, we can't rely on parsing error messages; that's unsafe. What should happen is Keystone acknowledges they were successful with one of their configured auth rules, at which point we know this user is 'probably' who they say they are. We then pass them a Partially Authed Token, which says they've already authed with 'password' but are missing 'totp' to complete their auth. The user can now return that token, plus the missing totp auth method, and get back a full token. So the first spec I intend to propose is the Partially Authed Token type, which solves the challenge-response problem we have and lets us actually know how to proceed when auth is unfinished. Once we have that, we can update KeystoneAuth, then the CLI to support challenge-response, and then Horizon as well. Then we can look at user self-management of MFA. Amusingly, the very original spec that brought multi-auth methods into Keystone talked about the need for a 'half-token': https://adam.younglogic.com/2012/10/multifactor-auth-and-keystone/ https://blueprints.launchpad.net/keystone/+spec/multi-factor-authn https://review.openstack.org/#/c/21487/ But the 'half-token' was never implemented. :( The MFA method in this original email was just...
replace the password auth method with one that expects an appended totp passcode. It's simple, it doesn't break anything or expect more than one auth method, and because of that it works with Horizon and the CLI, but it doesn't do real challenge-response. It's a stop-gap measure for us, since we're on an older version of Keystone and the current methods are too hard for our customers to actually consume. And most importantly, I can migrate users from it to user auth rules, since Keystone already stores the totp credential and all I then need to do is create auth rules for users with a totp cred. Hope that explains my plan, and why I'm going to be proposing it. It's going to be a lot of work. :P
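For reference, the "appended totp passcode" idea amounts to generating a standard RFC 6238 code and concatenating it to the password, with the auth method splitting it back apart. A minimal stdlib-only sketch (illustrative, not the actual plugin code; the secret below is the RFC test key):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Standard RFC 6238 TOTP: HOTP (SHA-1) over the current time window."""
    counter = timestamp // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The stop-gap auth method splits the submitted secret back apart:
submitted = "hunter2" + totp(b"12345678901234567890", 59)
password, passcode = submitted[:-6], submitted[-6:]
```

At timestamp 59 with the RFC 6238 test secret, the 6-digit code is "287082", matching the published test vector truncated to 6 digits.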
[openstack-dev] [nova] Regression bug for boot from volume with IsolatedHostsFilter
I triaged this bug a couple of weeks ago: https://bugs.launchpad.net/nova/+bug/1746483 It looks like it has been broken since Mitaka, when that filter started using the RequestSpec object rather than the legacy filter_properties dict. Looking a bit deeper though, it looks like this filter never worked for volume-backed instances. That's because this code, called from the compute API, never takes the image_id out of the volume's "volume_image_metadata": https://github.com/openstack/nova/blob/fa6c0f9cb14f1b4ce4d9b1dbacb1743173089986/nova/utils.py#L1032 So before the regression that breaks the filter, the filter just never got the image.id to validate, and it accepted whatever host for that instance, since it didn't know the image and couldn't tell if it was isolated or not. I've got a functional recreate test for the bug and I think it's a pretty easy fix, but a question comes up about backports: do we do two fixes for this bug, one to backport to stable which just handles the missing RequestSpec.image.id attribute in the filter so the filter doesn't explode? Then another fix which actually pulls the image_id off the volume_image_metadata and puts it properly into the RequestSpec so the filter actually _works_ with volume-backed instances? That would technically be a change in behavior for the filter, albeit likely the correct thing to do all along; we just never did it, and apparently no one ever noticed or cared (it's not a default-enabled filter after all). -- Thanks, Matt
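The second (behavior-changing) fix would essentially be a fallback like this (a hypothetical sketch of the logic only, not Nova's actual code; the function name and dict shapes are made up):

```python
def resolve_image_id(image_href, volume):
    """Return the image id to use for scheduling: the image href for
    image-backed boots, otherwise the id Cinder recorded in the boot
    volume's volume_image_metadata (None if the volume wasn't created
    from an image)."""
    if image_href:
        return image_href
    metadata = volume.get("volume_image_metadata") or {}
    return metadata.get("image_id")

# Volume-backed boot: no image href, but the volume remembers its image,
# so IsolatedHostsFilter would finally have an image id to validate.
vol = {"volume_image_metadata": {"image_id": "155d900f-4e14-4e4c-a73d-069cbf4541e6"}}
print(resolve_image_id(None, vol))  # -> 155d900f-4e14-4e4c-a73d-069cbf4541e6
```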
Re: [openstack-dev] [infra] Please add me to Tatu's Gerrit groups
Thanks! On Fri, Feb 9, 2018 at 1:21 PM, Jeremy Stanley wrote: > On 2018-02-09 10:00:25 -0600 (-0600), Pino de Candia wrote: > > I'd like to be added to the recently created tatu-core and > > tatu-release Gerrit groups. > > Since your Gerrit account is the one which proposed the change to > add the project whose ACLs use those groups, I have added you as the > initial member in both of them. > -- > Jeremy Stanley
Re: [openstack-dev] [security] Security PTG Planning, x-project request for topics.
I uploaded the demo video (https://youtu.be/y6ICCPO08d8) and linked it from the slides. On Fri, Feb 9, 2018 at 5:51 PM, Pino de Candia wrote: > Hi Folks, > > here are the slides for the Tatu presentation: https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM > > I meant to record the demo video as well but I haven't gotten around to > editing all the bits. Please stay tuned. > > thanks, > Pino > > > On Tue, Feb 6, 2018 at 10:52 AM, Giuseppe de Candia < > giuseppe.decan...@gmail.com> wrote: > >> Hi Luke, >> >> Fantastic! An hour would be great if the schedule allows - there are lots >> of different aspects we can dive into and potential future directions the >> project can take. >> >> thanks! >> Pino >> >> >> >> On Tue, Feb 6, 2018 at 10:36 AM, Luke Hinds wrote: >> >>> >>> >>> On Tue, Feb 6, 2018 at 4:21 PM, Giuseppe de Candia < >>> giuseppe.decan...@gmail.com> wrote: Hi Folks, I know the request is very late, but I wasn't aware of this SIG until recently. Would it be possible to present a new project to the Security SIG at the PTG? I need about 30 minutes. I'm hoping to drum up interest in the project, sign on users and contributors, and get feedback. For the past few months I have been working on a new project - Tatu [1] - to automate the management of SSH certificates (for both users and hosts) in OpenStack. Tatu allows users to generate SSH certificates with principals based on their Project role assignments, and VMs automatically set up their SSH host certificate (and related config) via Nova vendor data. The project also manages bastions and DNS entries so that users don't have to assign Floating IPs for SSH or remember IP addresses. I have a working demo (including Horizon panels [2] and OpenStack CLI [3]), but am still working on the devstack script and patches [4] to get Tatu's repositories into OpenStack's GitHub and Gerrit. I'll try to post a demo video in the next few days. best regards, Pino References: 1.
https://github.com/pinodeca/tatu (Please note this is still very much a work in progress: lots of TODOs in the code, very little testing, and the documentation doesn't reflect the latest design). 2. https://github.com/pinodeca/tatu-dashboard 3. https://github.com/pinodeca/python-tatuclient 4. https://review.openstack.org/#/q/tatu >>> Hi Giuseppe, of course you can! I will add you to the agenda. We could >>> get you an hour, if it allows more time for presenting and post-discussion? >>> >>> We will be meeting in an allocated room on Monday (details to follow). >>> >>> https://etherpad.openstack.org/p/security-ptg-rocky >>> >>> Luke >>> >>> >>> >>> On Wed, Jan 31, 2018 at 12:03 PM, Luke Hinds wrote: > > On Mon, Jan 29, 2018 at 2:29 PM, Adam Young wrote: > >> Bug 968696 and System Roles. Needs to be addressed across the >> Service catalog. >> > > Thanks Adam, will add it to the list. I see it's been open since 2012! > > >> >> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds >> wrote: >> >>> Just a reminder, as we have not had many uptakes yet... >>> >>> Are there any projects (new and old) that would like to make use of >>> the security SIG, either for gaining another perspective on security >>> challenges / blueprints etc or for help gaining some cross-project >>> collaboration? >>> >>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds >>> wrote: Hello All, I am seeking topics for the PTG from all projects, as this will be where we try out our new form of being a SIG. For this PTG, we hope to facilitate more cross-project collaboration topics now that we are a SIG, so if your project has a security need / problem / proposal then please do use the security SIG room, where a larger audience may be present to help solve problems and gain x-project consensus. Please see our PTG planning pad [0] where I encourage you to add to the topics.
[0] https://etherpad.openstack.org/p/security-ptg-rocky -- Luke Hinds Security Project PTL
Re: [openstack-dev] [OpenStack][Vitrage] .success error on vitrage-dashboard
Hi, The bug was fixed both in queens and in master: https://review.openstack.org/#/c/543224/ https://review.openstack.org/#/c/543223/ Thanks Eyal On 11 February 2018 at 13:55, Eyal B wrote: > Hi, > > Yes, this is a bug due to the upgrade of Angular: the function that was > deprecated is now removed. > We will push a fix. > > Thanks > Eyal > > On 9 February 2018 at 14:04, MinWookKim wrote: > >> Hello Vitrage. >> >> I installed the vitrage and vitrage-dashboard master versions and tested >> them. >> >> However, an unrecognized error ('.success() is not a function') occurs and >> none of the vitrage-dashboard panels appear normally. >> >> I could not figure out the cause, but I changed the .success and .error of >> each function to .then and .catch in >> dashboard/static/dashboard/project/services/vitrage_topology.service.js. >> >> As a result of this, I have confirmed normal operation of the >> vitrage-dashboard panels. >> >> What is the cause? >> >> Thanks :) >> >> Best Regards, >> >> Minwook.
Re: [openstack-dev] [OpenStack][Vitrage] .success error on vitrage-dashboard
Hi, Yes, this is a bug due to the upgrade of Angular: the function that was deprecated is now removed. We will push a fix. Thanks Eyal On 9 February 2018 at 14:04, MinWookKim wrote: > Hello Vitrage. > > I installed the vitrage and vitrage-dashboard master versions and tested > them. > > However, an unrecognized error ('.success() is not a function') occurs and > none of the vitrage-dashboard panels appear normally. > > I could not figure out the cause, but I changed the .success and .error of > each function to .then and .catch in > dashboard/static/dashboard/project/services/vitrage_topology.service.js. > > As a result of this, I have confirmed normal operation of the > vitrage-dashboard panels. > > What is the cause? > > Thanks :) > > Best Regards, > > Minwook.
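For anyone hitting the same error: AngularJS removed $http's legacy .success()/.error() helpers in 1.6, so callers must move to the standard promise API. A rough sketch of the migration shape (plain promises for illustration, not the actual Vitrage service code):

```javascript
// Before (removed in Angular 1.6):
//   $http.get(url).success(function (data) { /* ... */ }).error(onErr);
// After:
//   $http.get(url).then(function (resp) { /* use resp.data */ }, onErr);

// The same shape with any Promise-returning fetcher: .then() receives
// the full response object, so the data must be unwrapped explicitly.
function getTopology(fetcher) {
  return fetcher().then(function (resp) {
    return resp.data;
  });
}
```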