Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Doug Hellmann
Excerpts from Mike Bayer's message of 2017-03-15 12:39:48 -0400:
> 
> On 03/15/2017 11:42 AM, Sean Dague wrote:
> > Perhaps, but in doing so oslo.db is going to get the pin and uc from
> > stable/ocata, which is going to force it back to SQLA < 1.1, which will
> > prevent oslo.db changes that require >= 1.1 to work.
> 
> so do we want to make that job non-voting or something like that?

Is that job a holdover from before we had good constraints pinning in
all of our stable branches? Do we still need it? We need to test a
change to oslo.db's stable branch with the other code on that stable
branch, but do we need to test oslo.db's master branch that way?

Someone with more current Oslo memory may remember why we added that job
in the first place, so let's not just remove it until we understand why
it's there.

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Mike Bayer


On 03/15/2017 11:42 AM, Sean Dague wrote:

> Perhaps, but in doing so oslo.db is going to get the pin and uc from
> stable/ocata, which is going to force it back to SQLA < 1.1, which will
> prevent oslo.db changes that require >= 1.1 to work.


so do we want to make that job non-voting or something like that?


Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Sean Dague
Perhaps, but in doing so oslo.db is going to get the pin and upper
constraints (uc) from stable/ocata, which will force it back to SQLA < 1.1
and prevent oslo.db changes that require >= 1.1 from working.

-Sean



Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Roman Podoliaka
Isn't the purpose of that specific job
(gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata) to test a
change to the library's master branch against stable releases (i.e. Ocata)
of all the other components?



Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Sean Dague
On 03/15/2017 10:38 AM, Mike Bayer wrote:
> [...]
> if someone w/ tempest expertise could help with this that would be great.

It looks like oslo.db master is being used with ocata services?
http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/logs/devstacklog.txt.gz#_2017-03-15_13_10_52_434


I suspect that's the root issue. That should be stable/ocata branch, right?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Mike Bayer



On 03/15/2017 07:30 AM, Sean Dague wrote:
> [...] So instead of upping the cap, I just removed it entirely. [...]



So the failure I'm seeing now is *probably* the one I saw earlier when we 
tried this: the tempest run fails while making a keystone request, but 
this time I can't find the same error in the logs.


In an earlier build of https://review.openstack.org/#/c/423192/, we saw 
this:


ContextualVersionConflict: (SQLAlchemy 1.1.5 
(/usr/local/lib/python2.7/dist-packages), 
Requirement.parse('SQLAlchemy<1.1.0,>=1.0.10'), set(['oslo.db', 
'keystone']))


stack trace was in the apache log:  http://paste.openstack.org/show/601583/
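That ContextualVersionConflict is pkg_resources rejecting the installed
SQLAlchemy against the specifier keystone/oslo.db declared on stable/ocata.
A minimal sketch of the same check (illustrative, using the versions from
the traceback above):

```python
from pkg_resources import Requirement

# The specifier the ocata requirements declared for SQLAlchemy:
req = Requirement.parse("SQLAlchemy<1.1.0,>=1.0.10")

# pkg_resources raises ContextualVersionConflict at import time when the
# installed distribution's version falls outside this range.
print("1.0.17" in req)  # a 1.0.x release satisfies the cap
print("1.1.5" in req)   # the installed 1.1.5 does not, hence the conflict
```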


but now on our own oslo.db build, the same jobs are failing and are 
halting at keystone, but I can't find any error:


the failure is:


http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/ 



and is on:  https://review.openstack.org/#/c/445930/


if someone w/ tempest expertise could help with this that would be great.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Sean Dague
On 03/15/2017 05:32 AM, Thierry Carrez wrote:
> It sounds like a transition that the requirements team should directly
> help with (since it's tricky). Would be a shame to pass on the
> performance improvements just because the process to work around the cap
> is dark magic. I would help but it's the first time I hear of that
> xfails file, so I'm probably not current enough :)

I think there was just a communication gap here, I'm hoping the updated
review (which I just approved) gets us past it.

The question was: How can we get newer SQLA into the system?

The problem was that the original patch kept a cap on SQLA, just moved it
up to the next pre-release, not realizing that caps in general are what
the requirements team objects to. So instead of raising the cap, I removed
it entirely. (It also didn't help clarity that a completely unrelated test
failure made it look like the system was blocking this.)

This should let new SQLA releases filter out naturally to all our services
and libraries.
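Illustratively (this is not the actual review diff), the shape of the
change is:

```
# global-requirements.txt -- before: the cap excludes every 1.1.x release
SQLAlchemy<1.1.0,>=1.0.10

# after: only the floor remains, so new releases can flow through
SQLAlchemy>=1.0.10

# upper-constraints.txt continues to pin the exact version used in gate
# jobs, and is bumped separately once a new release passes testing, e.g.:
SQLAlchemy===1.1.5
```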

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Thierry Carrez
Mike Bayer wrote:
> As mentioned previously, SQLAlchemy 1.1 has now been released for about
> six months. My work now is on SQLAlchemy 1.2 which should hopefully
> see initial releases in late spring. SQLAlchemy 1.1 includes tons of
> features, bugfixes, and improvements, and in particular the most recent
> versions contain some critical performance improvements focused around
> the "joined eager loading" feature, most typically encountered when an
> application makes many, many queries for small, single-row result sets
> with lots of joined eager loading.   In other words, exactly the kinds
> of queries that Openstack applications do a lot; the fixes here were
> identified as a direct result of Neutron query profiling by myself and a
> few other contributors.
> 
> For many weeks now, various patches to attempt to bump requirements for
> SQLAlchemy 1.1 have been languishing with little interest, and I do not
> have enough knowledge of the requirements system to get exactly the
> correct patch that will accomplish the goal (nor do others).  The
> current gerrit is at https://review.openstack.org/#/c/423192/, where you
> can see that not just me, but a bunch of folks, have no idea what
> incantations we need to put here that will make this happen.  Tony
> Breeds has chimed in thusly:
> 
>> To get this in we'll need to remove the cap in global-requirements
> *and* at the same time add a heap of entries to
> upper-constraints-xfails.txt. This will allow us to merge the cap
> removal and keep the constraint in the 1.0 family while we wait for the
> requirements sync to propagate out.
> 
> I'm not readily familiar with what goes into upper-constraints-xfails
> and this file does not appear to be documented in common places like
> https://wiki.openstack.org/wiki/Requirements or
> https://git.openstack.org/cgit/openstack/requirements/tree/README.rst .
> 
> I'm asking on the list here for some assistance in moving this forward.
> SQLAlchemy development these days is closely attuned to the needs of
> Openstack now, a series of Openstack test suites are part of
> SQLAlchemy's own CI servers to ensure backwards compatibility with all
> changes, and 1.2 will have even more features that are directly
> consumable by oslo.db (features everyone will want, I promise you).
> Being able to bump requirements across Openstack so that new versions
> can be tested and integrated in a timely manner would be very helpful.

It sounds like a transition the requirements team should help with
directly (since it's tricky). It would be a shame to pass on the
performance improvements just because the process for working around the
cap is dark magic. I would help, but this is the first time I've heard of
that xfails file, so I'm probably not current enough :)

-- 
Thierry Carrez (ttx)
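The "joined eager loading" pattern the thread refers to can be sketched as
follows (a minimal illustration with hypothetical models, not OpenStack
code; assumes SQLAlchemy is installed):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, joinedload, relationship

try:  # SQLAlchemy >= 1.4
    from sqlalchemy.orm import declarative_base
except ImportError:  # older releases keep it in the ext package
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address")

class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("user.id"))
    email = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(bind=engine)
session.add(User(name="mike", addresses=[Address(email="mike@example.com")]))
session.commit()

# joinedload folds the related rows into the parent SELECT via a JOIN, so
# this single-row lookup issues one query instead of a second lazy load
# firing when .addresses is first touched -- the many-small-queries shape
# the 1.1 performance fixes target.
user = (
    session.query(User)
    .options(joinedload(User.addresses))
    .filter_by(name="mike")
    .one()
)
name, emails = user.name, [a.email for a in user.addresses]
print(name, emails)
```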

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev