Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-21 Thread Boxiang Zhu


Jay and Melanie, sorry for the confusion; I should have described my problem
more clearly. My problem is not migrating volumes between two ceph clusters.


I have two clusters: one is an OpenStack cluster (all-in-one env, hostname is
dev) and the other is a ceph cluster. I omit the standard OpenStack/ceph
integration configuration here.[1] The relevant part of cinder.conf is as
follows:


[DEFAULT]
enabled_backends = rbd-1,rbd-2
..
[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes001
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81
[rbd-2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes002
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81


This yields two hosts, named dev@rbd-1#ceph and dev@rbd-2#ceph.
Then I create a volume type named 'ceph' with the command 'cinder type-create
ceph' and add the extra spec 'volume_backend_name=ceph' to it with the command
'cinder type-key  set volume_backend_name=ceph'.


I created a new vm and a new volume with type 'ceph' (so the volume will be
created on one of the two hosts; assume it was created on host dev@rbd-1#ceph
this time). The next step is to attach the volume to the vm. Finally, I tried
to migrate the volume from host dev@rbd-1#ceph to host dev@rbd-2#ceph,
but it failed with the exception 'NotImplementedError(_("Swap only supports
host devices")'.


So my real question is: is there any work to migrate an in-use (ceph rbd)
volume from one host (pool) to another host (pool) within the same ceph
cluster?
The only difference between the spec[2] and my case is that the spec covers
available volumes while my case covers in-use volumes.




[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Cheers,
Boxiang
On 10/21/2018 23:19, Jay S. Bryant wrote:

Boxiang,

I have not heard any discussion of extending this functionality for Ceph to
work between different Ceph clusters.  I wasn't aware, however, that the
existing spec was limited to one Ceph cluster.  So, that is good to know.

I would recommend reaching out to Jon Bernard or Eric Harney for guidance on 
how to proceed.  They work closely with the Ceph driver and could provide 
insight.

Jay




On 10/19/2018 10:21 AM, Boxiang Zhu wrote:



Hi melanie, thanks for your reply.


The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
covers only available-volume migration between two pools in the same ceph
cluster.
If the volume is in in-use status[2], the generic migration function is called
instead. So, as you describe, on the nova side it raises
NotImplementedError(_("Swap only supports host devices").
The get_config of the net volume driver[3] has no source_path.
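For reference, this is roughly how the two disk types differ in the libvirt domain XML (the device, volume, and monitor names below are made up): a host-device disk carries a local source path that can be rebased, while an rbd network disk does not.

```xml
<!-- LVM/iSCSI-backed volume: a host block device with a source path -->
<disk type='block' device='disk'>
  <source dev='/dev/dm-3'/>
  <target dev='vdb' bus='virtio'/>
</disk>

<!-- rbd-backed volume: a network disk, no local source path -->
<disk type='network' device='disk'>
  <source protocol='rbd' name='volumes001/volume-0000'>
    <host name='ceph-mon' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
```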


So, has anyone succeeded in migrating an in-use volume with the ceph backend,
or is anyone working on it?


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101




Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:
On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create the volume, then attach it to a vm,
I can migrate the volume (in-use) from one host to another. The nova
libvirt driver will call 'rebase' to finish it. But when using the ceph
backend, it raises the exception 'Swap only supports host devices'. So for
now it does not support migrating in-use volumes. Does anyone do this work
now? Or is there any way to let me migrate an in-use volume with the ceph
backend?

What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices"). So in your
environment, the volume configs must be missing a source_path.

If you are using at least Queens version, then there must be something
additional missing that we would need to do to make the migration work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [zun][zun-ui] How to get "host" attribute for image

2018-10-21 Thread Shu M.
Hongbin,

Thank you for proposing the patch! I'll check it and proceed with the
zun-ui implementation from here.

Best regards,
Shu

Fri, Oct 19, 2018, 12:28 Hongbin Lu :

> Shu,
>
> It looks like the 'host' field was added to the DB table but not exposed via
> the REST API by mistake. See if this patch fixes the issue:
> https://review.openstack.org/#/c/611753/ .
>
> Best regards,
> Hongbin
>
> On Thu, Oct 18, 2018 at 10:50 PM Shu M.  wrote:
>
>> Hi folks,
>>
>> I found the following commit, which adds a "host" attribute for images.
>>
>>
>> https://github.com/openstack/zun/commit/72eac7c8f281de64054dfa07e3f31369c5a251f0
>>
>> But I could not get the "host" attribute for an image with zun-show.
>>
>> I think image-list and image-show need to show "host" for admins, so I'd
>> like to add "host" for images to zun-ui.
>> Please let me know how to show the "host" attribute.
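The failure mode described in this thread (a field present in the DB but absent from the REST response) can be sketched as follows. All names here (ImageModel, API_FIELDS, image_view) are hypothetical, not zun's actual code: the point is only that a model field stays invisible to clients until the API view lists it.

```python
# Illustrative sketch: a field added to the DB model is invisible to
# clients until the API view/serializer exposes it too.

class ImageModel:
    def __init__(self, uuid, repo, host):
        self.uuid, self.repo, self.host = uuid, repo, host

API_FIELDS = ('uuid', 'repo')  # 'host' missing -> not in REST response

def image_view(image, fields=API_FIELDS):
    """Build the API-visible dict from the whitelisted fields."""
    return {f: getattr(image, f) for f in fields}

img = ImageModel('u-1', 'cirros', 'compute-1')
print(image_view(img))                          # no 'host' key
print(image_view(img, API_FIELDS + ('host',)))  # 'host' now exposed
```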


Re: [openstack-dev] [mistral][oslo][messaging] Removing “blocking” executor from oslo.messaging

2018-10-21 Thread Renat Akhmerov
Hi Ken,

Awesome! IMO it works for us.

Thanks

Renat Akhmerov
@Nokia
On 20 Oct 2018, 01:19 +0700, Ken Giusti , wrote:
> Hi Renat,
> After discussing this a bit with Ben on IRC we're going to push the removal 
> off to T milestone 1.
>
> I really like Ben's idea re: adding a blocking entry to your project's 
> setup.cfg file.  We can remove the explicit check for blocking in 
> oslo.messaging so you won't get an annoying warning if you want to load 
> blocking on your own.
>
> Let me know what you think, thanks.
>
> > On Fri, Oct 19, 2018 at 12:02 AM Renat Akhmerov  
> > wrote:
> > > Hi,
> > >
> > >
> > > @Ken, I understand your considerations. I get that. I’m only asking not
> > > to remove it *for now*. And yes, if you think its use should be
> > > discouraged, that’s totally fine. But practically, it’s been the only
> > > reliable option for Mistral so far; that may be our fault, I have to
> > > admit, because we weren’t able to make it work well with other executor
> > > types, but we’ll try to fix that.
> > >
> > > By the way, I was playing with different options yesterday and it seems
> > > that setting the executor to “threading” and the
> > > “executor_thread_pool_size” property to 1 behaves the same way as
> > > “blocking”. So maybe that’s an option for us too, even if “blocking” is
> > > completely removed. But I would still be in favour of having some extra
> > > time to prove that with thorough testing.
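Renat's observation above can be illustrated with the standard library alone (this is not oslo.messaging's actual "threading" executor, just the serialization property it shares): a thread pool capped at one worker runs tasks one at a time, in submission order, the way a blocking executor does.

```python
# Stdlib-only illustration: a pool of size 1 serializes work just like
# a blocking executor, since only one task can ever run at a time.
from concurrent.futures import ThreadPoolExecutor

order = []

def task(n):
    order.append(n)  # record execution order
    return n * 2

with ThreadPoolExecutor(max_workers=1) as pool:
    futures = [pool.submit(task, i) for i in range(4)]
    results = [f.result() for f in futures]

print(order)    # [0, 1, 2, 3] -- tasks ran one at a time, in order
print(results)  # [0, 2, 4, 6]
```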
> > >
> > > @Ben, including the executor via setup.cfg also looks OK to me. I see no 
> > > issues with this approach.
> > >
> > >
> > > Thanks
> > >
> > > Renat Akhmerov
> > > @Nokia
> > > On 18 Oct 2018, 23:35 +0700, Ben Nemec , wrote:
> > > >
> > > >
> > > > On 10/18/18 9:59 AM, Ken Giusti wrote:
> > > > > Hi Renat,
> > > > >
> > > > > The biggest issue with the blocking executor (IMHO) is that it blocks
> > > > > the protocol I/O while  RPC processing is in progress.  This increases
> > > > > the likelihood that protocol processing will not get done in a timely
> > > > > manner and things start to fail in weird ways.  These failures are
> > > > > timing related and are typically hard to reproduce or root-cause.   
> > > > > This
> > > > > isn't something we can fix as blocking is the nature of the executor.
> > > > >
> > > > > If we are to leave it in we'd really want to discourage its use.
> > > >
> > > > Since it appears the actual executor code lives in futurist, would it be
> > > > possible to remove the entrypoint for blocking from oslo.messaging and
> > > > have mistral just pull it in with their setup.cfg? Seems like they
> > > > should be able to add something like:
> > > >
> > > > oslo.messaging.executors =
> > > > blocking = futurist:SynchronousExecutor
> > > >
> > > > to their setup.cfg to keep it available to them even if we drop it from
> > > > oslo.messaging itself. That seems like a good way to strongly discourage
> > > > use of it while still making it available to projects that are really
> > > > sure they want it.
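For readers unfamiliar with the executor under discussion: a synchronous ("blocking") executor runs the callable immediately in the caller's thread and hands back an already-completed future. A rough stdlib-only sketch of that behaviour follows (futurist's actual SynchronousExecutor implementation differs; this only mirrors the semantics):

```python
# Minimal sketch of synchronous-executor semantics: no threads, the
# callable runs inline and the returned Future is already resolved.
from concurrent.futures import Future

class SynchronousExecutor:
    def submit(self, fn, *args, **kwargs):
        fut = Future()
        try:
            fut.set_result(fn(*args, **kwargs))
        except BaseException as exc:
            fut.set_exception(exc)
        return fut

ex = SynchronousExecutor()
print(ex.submit(lambda a, b: a + b, 2, 3).result())  # 5
```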
> > > >
> > > > >
> > > > > However I'm ok with leaving it available if the policy for using
> > > > > blocking is 'use at your own risk', meaning that bug reports may have 
> > > > > to
> > > > > be marked 'won't fix' if we have reason to believe that blocking is at
> > > > > fault.  That implies removing 'blocking' as the default executor value
> > > > > in the API and having applications explicitly choose it.  And we keep
> > > > > the deprecation warning.
> > > > >
> > > > > We could perhaps implement time duration checks around the executor
> > > > > callout and log a warning if the executor blocked for an extended 
> > > > > amount
> > > > > of time (extended=TBD).
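The duration check proposed above could be sketched like this (stdlib only; the helper name and the one-second threshold are placeholders, since the thread leaves "extended" TBD):

```python
# Sketch: time the executor callout and warn when it blocks too long.
import logging
import time

LOG = logging.getLogger(__name__)
BLOCKED_WARN_SEC = 1.0  # placeholder threshold; "extended" is TBD

def timed_callout(fn, *args, **kwargs):
    """Run fn, logging a warning if it held the caller for too long."""
    start = time.monotonic()
    try:
        return fn(*args, **kwargs)
    finally:
        elapsed = time.monotonic() - start
        if elapsed > BLOCKED_WARN_SEC:
            LOG.warning("executor callout blocked for %.2fs", elapsed)

print(timed_callout(lambda: 42))  # 42
```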
> > > > >
> > > > > Other opinions so we can come to a consensus?
> > > > >
> > > > >
> > > > > On Thu, Oct 18, 2018 at 3:24 AM Renat Akhmerov 
> > > > >  > > > > > wrote:
> > > > >
> > > > > Hi Oslo Team,
> > > > >
> > > > > Can we retain “blocking” executor for now in Oslo Messaging?
> > > > >
> > > > >
> > > > > Some background..
> > > > >
> > > > > For a while we had to use Oslo Messaging with “blocking” executor in
> > > > > Mistral because of incompatibility of MySQL driver with green
> > > > > threads when choosing “eventlet” executor. Under certain conditions
> > > > > we would get deadlocks between green threads. Some time ago we
> > > > > switched to using PyMysql driver which is eventlet friendly and did
> > > > > a number of tests that showed that we could safely switch to
> > > > > “eventlet” executor (with that driver) so we introduced a new option
> > > > > in Mistral where we could choose an executor in Oslo Messaging. The
> > > > > corresponding bug is [1].
> > > > >
> > > > > The issue is that we recently found that not everything actually
> > > > > works as expected when using combination PyMysql + “eventlet”
> > > > > executor. We also tried “threading” executor and the system *seems*
> > > > > to work with it but surprisingly p

Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-21 Thread Adam Harwell
Octavia relies heavily on Taskflow and Futurist as well. Personally I agree
with basically everything Monty said earlier. The problem here really isn't
anything besides relaxing the social review policy, which is as simple as
just deciding it as a team and saying "well, ok then". :)

I also use a number of openstack libs outside of openstack to great effect
and have had no problems to speak of, so I don't really think this should
be a concern. I know it can be daunting to first enter the dev/review
process because it is so different from the workflow most people are used
to, but this is a problem that can be solved by having good docs (I think
the existing developer quickstart docs are very effective) and maintaining
an open and welcoming community.

--Adam

On Thu, Oct 18, 2018, 16:32 Dmitry Tantsur  wrote:

> On 10/17/18 5:59 PM, Joshua Harlow wrote:
> > Dmitry Tantsur wrote:
> >> On 10/10/18 7:41 PM, Greg Hill wrote:
> >>> I've been out of the openstack loop for a few years, so I hope this
> >>> reaches the right folks.
> >>>
> >>> Josh Harlow (original author of taskflow and related libraries) and I
> >>> have been discussing the option of moving taskflow out of the
> >>> openstack umbrella recently. This move would likely also include the
> >>> futurist and automaton libraries that are primarily used by taskflow.
> >>
> >> Just for completeness: futurist and automaton are also heavily relied on
> >> by ironic without using taskflow.
> >
> > When did futurist get used??? nice :)
> >
> > (I knew automaton was, but maybe I knew futurist was too and I forgot,
> > lol).
>
> I'm pretty sure you did, it happened back in Mitaka :)
>
> >
> >>
> >>> The idea would be to just host them on github and use the regular
> >>> Github features for Issues, PRs, wiki, etc, in the hopes that this
> >>> would spur more development. Taskflow hasn't had any substantial
> >>> contributions in several years and it doesn't really seem that the
> >>> current openstack devs have a vested interest in moving it forward. I
> >>> would like to move it forward, but I don't have an interest in being
> >>> bound by the openstack workflow (this is why the project stagnated as
> >>> core reviewers were pulled on to other projects and couldn't keep up
> >>> with the review backlog, so contributions ground to a halt).
> >>>
> >>> I guess I'm putting it forward to the larger community. Does anyone
> >>> have any objections to us doing this? Are there any non-obvious
> >>> technicalities that might make such a transition difficult? Who would
> >>> need to be made aware so they could adjust their own workflows?
> >>>
> >>> Or would it be preferable to just fork and rename the project so
> >>> openstack can continue to use the current taskflow version without
> >>> worry of us breaking features?
> >>>
> >>> Greg
> >>>
> >>>
> >>>


Re: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer

2018-10-21 Thread Harald Jensås
+1

Bob's reviews have been, and continue to be, insightful and on point.
He is very thorough and notices the details.



On Fri, 2018-10-19 at 09:53 -0400, Alan Bishop wrote:
> +1
> 
> On Fri, Oct 19, 2018 at 9:47 AM John Fulton 
> wrote:
> > +1
> > On Fri, Oct 19, 2018 at 9:46 AM Alex Schultz 
> > wrote:
> > >
> > > +1
> > > On Fri, Oct 19, 2018 at 6:29 AM Emilien Macchi <
> > emil...@redhat.com> wrote:
> > > >
> > > > On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles <
> > jaosor...@redhat.com> wrote:
> > > >>
> > > >> I would like to propose Bob Fournier (bfournie) as a core reviewer in
> > > >> TripleO. His patches and reviews have spanned quite a wide range in our
> > > >> project, his reviews show great insight and quality, and I think he
> > > >> would be a great addition to the core team.
> > > >>
> > > >> What do you folks think?
> > > >
> > > >
> > > > Big +1, Bob is a solid contributor/reviewer. His area of knowledge has
> > > > been critical in all aspects of Hardware Provisioning integration but
> > > > also in other TripleO bits.
> > > > --
> > > > Emilien Macchi
> > > >


Re: [openstack-dev] [cinder]ceph rbd replication group support

2018-10-21 Thread Jay S. Bryant
I would reach out to Lisa Li (lixiaoy1) on the Cinder team to see if this is
something she may pick back up.  She has been more active in the
community lately and may be able to look at this again, or at least have
good guidance for you.


Thanks!

Jay



On 10/19/2018 1:14 AM, 王俊 wrote:


Hi,

I have a question about rbd replication groups: what is the plan or roadmap
for this feature? Is anybody working on it?


Blueprint: 
https://blueprints.launchpad.net/cinder/+spec/ceph-rbd-replication-group-support


Thanks



Confidentiality notice: this message is intended solely for the named
recipient. If you are not the intended recipient, please delete it
immediately, do not use or disseminate it in any way, and notify the sender
of the misdelivery. Thank you!







Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-21 Thread Jay S. Bryant

Boxiang,

I have not heard any discussion of extending this functionality for Ceph
to work between different Ceph clusters.  I wasn't aware, however, that
the existing spec was limited to one Ceph cluster. So, that is good to know.


I would recommend reaching out to Jon Bernard or Eric Harney for 
guidance on how to proceed.  They work closely with the Ceph driver and 
could provide insight.


Jay


On 10/19/2018 10:21 AM, Boxiang Zhu wrote:


Hi melanie, thanks for your reply.

The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
covers only available-volume migration between two pools in the same ceph
cluster.
If the volume is in in-use status[2], the generic migration function is called
instead. So, as you describe, on the nova side it raises
NotImplementedError(_("Swap only supports host devices").

The get_config of the net volume driver[3] has no source_path.

So, has anyone succeeded in migrating an in-use volume with the ceph
backend, or is anyone working on it?


[1] https://review.openstack.org/#/c/296150
[2] 
https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] 
https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101



Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt wrote:


On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:

When I use the LVM backend to create the volume, then attach
it to a vm.
I can migrate the volume(in-use) from one host to another. The
nova
libvirt will call the 'rebase' to finish it. But if using ceph
backend,
it raises exception 'Swap only supports host devices'. So now
it does
not support to migrate volume(in-use). Does anyone do this
work now? Or
Is there any way to let me migrate volume(in-use) with ceph
backend?


What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:


https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if there is not one present, it raises
NotImplementedError(_("Swap only supports host devices"). So in your
environment, the volume configs must be missing a source_path.

If you are using at least Queens version, then there must be
something
additional missing that we would need to do to make the migration
work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie







Re: [openstack-dev] [requirements][vitrage][infra] SQLAlchemy-Utils version 0.33.6 breaks Vitrage gate

2018-10-21 Thread Ifat Afek
Thanks for your help,
Ifat


On Thu, Oct 18, 2018 at 4:17 PM Jeremy Stanley  wrote:

> On 2018-10-18 14:57:23 +0200 (+0200), Andreas Jaeger wrote:
> > On 18/10/2018 14.15, Ifat Afek wrote:
> > > Hi,
> > >
> > > For the last three days the Vitrage gate has been broken due to the new
> > > requirement of SQLAlchemy-Utils==0.33.6.
> > > We get the following error [1]:
> > >
> > > [...]
> > > Can we move back to version 0.33.5? Or is there another solution?
> >
> > We discussed that on #openstack-infra, and fixed it each day - and then it
> > appeared again.
> >
> > https://review.openstack.org/611444 is the proposed fix for that - the
> > issue comes from the fact that we build wheels if there are none
> > available, and that process had a race in it.
> >
> > I hope an admin can delete the broken file again so that it works again
> > tomorrow - if not, best to speak up quickly on #openstack-infra,
>
> It's been deleted (again) and the suspected fix approved so
> hopefully it won't recur.
> --
> Jeremy Stanley