Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-19 Thread Crag Wolfe
On 05/18/2016 01:39 PM, Zane Bitter wrote:
> On 17/05/16 20:27, Crag Wolfe wrote:
>> Now getting very Heat-specific. W.r.t.
>> https://review.openstack.org/#/c/303692/ , the goal is to de-duplicate
>> raw_template.files (this is a dict of template filename to contents),
>> both in the DB and in RAM. The approach this patch is taking is that,
>> when one template is created by reference to another, we just re-use the
>> original template's files (ultimately in a new table,
>> raw_template_files). In the case of nested stacks, this saves on quite a
>> bit of duplication.
>>
>> If we follow the 3-step pattern discussed earlier in this thread, we
>> would be looking at P release as to when we start seeing DB storage
>> improvements. As far as RAM is concerned, we would see improvement in
>> the O release since that is when we would start reading from the new
>> column location (and could cache the template files object by its ID).
>> It also means that for the N release, we wouldn't see any RAM or DB
>> improvements, we'll just start writing template files to the new
>> location (in addition to the old location). Is this acceptable, or do we
>> impose some sort of downtime restrictions on the next Heat upgrade?
>>
>> A compromise could be to introduce a little bit of downtime:
>>
>> For the N release:
>
> There's also a step 0, which is to run the DB migrations for Newton.
>
>>   1. Add the new column (no need to shut down heat-engine).
>>   2. Shut down all heat-engines.
>>   3. Upgrade code base to N throughout cluster.
>>   4. Start all heat-engines. Read from new and old template files
>> locations, but only write to the new one.
>>
>> For the O release, we could perform a rolling upgrade with no downtime
>> where we are only reading and writing to the new location, and then drop
>> the old column as a post-upgrade migration (i.e, the typical N+2 pattern
>> [1] that Michal referenced earlier and I'm re-referencing :-).
>>
>> The advantage to the compromise is we would immediately start seeing RAM
>> and DB improvements with the N-release.
>
> +1, and in fact this has been the traditional way of doing it. To be
> able to stop recommending that to operators, we need a solution both
> to the DB problem we're discussing here and to the problem of changes
> to the RPC API parameters. (Before anyone asks, and I know someone
> will... NO, versioned objects do *not* solve either of those problems.)
>
> I've already personally made one backwards-incompatible change to the
> RPC in this version:
>
> https://review.openstack.org/#/c/315275/
>
> So we won't be able to recommend rolling updates from Mitaka->Newton
> anyway.
>
> I suggest that as far as this patch is concerned, we should implement
> the versioning that allows the VO to decide whether to write old or
> new data and leave it at that. That way, if someone manages to
> implement rolling upgrade support in Newton we'll have it, and if we
> don't we'll just fall back to the way we've done it in the past.
>

OK, but in the proper rolling-upgrade path there are 3 steps (to get to
the point where we are actually de-duplicating data), not 1. So if you
are saying that we should optionally support rolling upgrades (in any
way that resembles current usage of o.vo, though it would be
trailblazing new ground), one way to do that is to have a config value
that states whether the operator wants to run at the X+1, X+2, or X+3
level. I feel like that is complexity for the operator (not to mention
in the code) that we want to avoid. Again, what happens if the operator
accidentally moves one of the heat-engines backwards? It's not good.
And I don't see another way to do it outside of the config approach.

Also, a note about the N release in "the compromise" above: it actually
doesn't correspond directly to any of the N / O / P states in the proper
rolling-upgrade approach (that's precisely why it is attractive for this
particular Heat issue: we can start de-duplicating data right away at
N). So we can't really punt and just say we want rolling upgrades as a
yes/no switch in a config file. If we want rolling-upgrade support, the
operator would need to specify N in the config (and do the upgrade they
have to do anyway), then O (and perform a rolling upgrade), then P (and
perform another rolling upgrade).

N-compromise: read from old or new, write only to new
O-compromise: read/write only from new, drop old column

N-proper-rolling-upgrade: read from old, write to both new and old
O-proper-rolling-upgrade: read from new, write to both new and old
P-proper-rolling-upgrade: read/write only from new, drop old column
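
To make the two paths concrete, here is a minimal, self-contained sketch
of the N-compromise read/write semantics, using plain dicts as stand-ins
for the two tables (all names are illustrative only, not taken from the
patch under review):

    import itertools

    raw_template = {}         # id -> {'files': ..., 'files_id': ...}
    raw_template_files = {}   # id -> {filename: contents}
    _ids = itertools.count(1)

    def create_template(files=None, reuse_files_id=None):
        """N-compromise: write files only to the new table, by reference."""
        if reuse_files_id is None:
            reuse_files_id = next(_ids)
            raw_template_files[reuse_files_id] = files or {}
        tpl_id = next(_ids)
        # The old column is left empty for newly written rows.
        raw_template[tpl_id] = {'files': None, 'files_id': reuse_files_id}
        return tpl_id

    def get_files(tpl_id):
        """N-compromise: read from the new location, fall back to the old."""
        row = raw_template[tpl_id]
        if row['files_id'] is not None:
            return raw_template_files[row['files_id']]
        return row['files'] or {}

    # A nested stack can share its parent's raw_template_files row instead
    # of duplicating it -- in the DB and in RAM.
    parent = create_template({'nested.yaml': '...'})
    child = create_template(reuse_files_id=raw_template[parent]['files_id'])
    assert get_files(parent) is get_files(child)

The proper-rolling-upgrade N step would instead keep writing the old
'files' column as well, and keep reading from it until O.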

Thanks,
--Crag

> cheers,
> Zane.
>
>> [1]
>> http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations
>>
>>

Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-19 Thread Michał Dulko
On 05/18/2016 10:39 PM, Zane Bitter wrote:
> On 17/05/16 20:27, Crag Wolfe wrote:
>> Now getting very Heat-specific. W.r.t.
>> https://review.openstack.org/#/c/303692/ , the goal is to de-duplicate
>> raw_template.files (this is a dict of template filename to contents),
>> both in the DB and in RAM. The approach this patch is taking is that,
>> when one template is created by reference to another, we just re-use the
>> original template's files (ultimately in a new table,
>> raw_template_files). In the case of nested stacks, this saves on quite a
>> bit of duplication.
>>
>> If we follow the 3-step pattern discussed earlier in this thread, we
>> would be looking at P release as to when we start seeing DB storage
>> improvements. As far as RAM is concerned, we would see improvement in
>> the O release since that is when we would start reading from the new
>> column location (and could cache the template files object by its ID).
>> It also means that for the N release, we wouldn't see any RAM or DB
>> improvements, we'll just start writing template files to the new
>> location (in addition to the old location). Is this acceptable, or do we
>> impose some sort of downtime restrictions on the next Heat upgrade?
>>
>> A compromise could be to introduce a little bit of downtime:
>>
>> For the N release:
>
> There's also a step 0, which is to run the DB migrations for Newton.
>
>>   1. Add the new column (no need to shut down heat-engine).
>>   2. Shut down all heat-engines.
>>   3. Upgrade code base to N throughout cluster.
>>   4. Start all heat-engines. Read from new and old template files
>> locations, but only write to the new one.
>>
>> For the O release, we could perform a rolling upgrade with no downtime
>> where we are only reading and writing to the new location, and then drop
>> the old column as a post-upgrade migration (i.e, the typical N+2 pattern
>> [1] that Michal referenced earlier and I'm re-referencing :-).
>>
>> The advantage to the compromise is we would immediately start seeing RAM
>> and DB improvements with the N-release.
>
> +1, and in fact this has been the traditional way of doing it. To be
> able to stop recommending that to operators, we need a solution both
> to the DB problem we're discussing here and to the problem of changes
> to the RPC API parameters. (Before anyone asks, and I know someone
> will... NO, versioned objects do *not* solve either of those problems.)
>
> I've already personally made one backwards-incompatible change to the
> RPC in this version:
>
> https://review.openstack.org/#/c/315275/

If you want to support rolling upgrades, you need a way to prevent the
introduction of such incompatibilities. This particular one seems pretty
easy once you get an RPC version pinning framework (either automatic or
config-based) in place. Nova and Cinder already have such features.

It would work by simply not sending template_id when there are older
services in the deployment, and making your RPC server able to
understand requests without template_id as well.
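
As a rough illustration of the pinned client side (the method name,
version numbers and template_id parameter below are examples only, not
Heat's actual RPC API):

    import oslo_messaging as messaging

    class EngineClient(object):
        def __init__(self, transport, target, version_cap):
            # version_cap comes from config, or is auto-detected from the
            # RPC API versions reported by the running heat-engine services.
            self._client = messaging.RPCClient(transport, target,
                                               version_cap=version_cap)

        def create_stack(self, ctxt, stack_name, template, template_id=None):
            if self._client.can_send_version('1.25'):
                cctxt = self._client.prepare(version='1.25')
                return cctxt.call(ctxt, 'create_stack',
                                  stack_name=stack_name, template=template,
                                  template_id=template_id)
            # Older engines don't know template_id yet: drop it and let the
            # server resolve the template the old way.
            cctxt = self._client.prepare(version='1.24')
            return cctxt.call(ctxt, 'create_stack',
                              stack_name=stack_name, template=template)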

> So we won't be able to recommend rolling updates from Mitaka->Newton
> anyway.
>
> I suggest that as far as this patch is concerned, we should implement
> the versioning that allows the VO to decide whether to write old or
> new data and leave it at that. That way, if someone manages to
> implement rolling upgrade support in Newton we'll have it, and if we
> don't we'll just fall back to the way we've done it in the past.
>
> cheers,
> Zane.
>
>> [1]
>> http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations
>>
>>




Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-18 Thread Zane Bitter

On 17/05/16 20:27, Crag Wolfe wrote:

Now getting very Heat-specific. W.r.t.
https://review.openstack.org/#/c/303692/ , the goal is to de-duplicate
raw_template.files (this is a dict of template filename to contents),
both in the DB and in RAM. The approach this patch is taking is that,
when one template is created by reference to another, we just re-use the
original template's files (ultimately in a new table,
raw_template_files). In the case of nested stacks, this saves on quite a
bit of duplication.

If we follow the 3-step pattern discussed earlier in this thread, we
would be looking at P release as to when we start seeing DB storage
improvements. As far as RAM is concerned, we would see improvement in
the O release since that is when we would start reading from the new
column location (and could cache the template files object by its ID).
It also means that for the N release, we wouldn't see any RAM or DB
improvements, we'll just start writing template files to the new
location (in addition to the old location). Is this acceptable, or do we
impose some sort of downtime restrictions on the next Heat upgrade?

A compromise could be to introduce a little bit of downtime:

For the N release:


There's also a step 0, which is to run the DB migrations for Newton.


  1. Add the new column (no need to shut down heat-engine).
  2. Shut down all heat-engines.
  3. Upgrade code base to N throughout cluster.
  4. Start all heat-engines. Read from new and old template files
locations, but only write to the new one.

For the O release, we could perform a rolling upgrade with no downtime
where we are only reading and writing to the new location, and then drop
the old column as a post-upgrade migration (i.e, the typical N+2 pattern
[1] that Michal referenced earlier and I'm re-referencing :-).

The advantage to the compromise is we would immediately start seeing RAM
and DB improvements with the N-release.


+1, and in fact this has been the traditional way of doing it. To be 
able to stop recommending that to operators, we need a solution both to 
the DB problem we're discussing here and to the problem of changes to 
the RPC API parameters. (Before anyone asks, and I know someone will... 
NO, versioned objects do *not* solve either of those problems.)


I've already personally made one backwards-incompatible change to the 
RPC in this version:


https://review.openstack.org/#/c/315275/

So we won't be able to recommend rolling updates from Mitaka->Newton anyway.

I suggest that as far as this patch is concerned, we should implement 
the versioning that allows the VO to decide whether to write old or new 
data and leave it at that. That way, if someone manages to implement 
rolling upgrade support in Newton we'll have it, and if we don't we'll 
just fall back to the way we've done it in the past.
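
A minimal sketch of what that could look like with oslo.versionedobjects
(the field names, version numbers and db_api helpers are illustrative
only, not the actual Heat definitions):

    from oslo_utils import versionutils
    from oslo_versionedobjects import base, fields

    @base.VersionedObjectRegistry.register
    class RawTemplate(base.VersionedObject):
        # 1.0: files stored inline; 1.1: added files_id -> raw_template_files
        VERSION = '1.1'

        fields = {
            'id': fields.IntegerField(),
            'files': fields.DictOfStringsField(nullable=True),
            'files_id': fields.IntegerField(nullable=True),
        }

        def obj_make_compatible(self, primitive, target_version):
            super(RawTemplate, self).obj_make_compatible(primitive,
                                                         target_version)
            if versionutils.convert_version_to_tuple(target_version) < (1, 1):
                # Peers that only know 1.0 expect the inline files dict.
                primitive.pop('files_id', None)

        @classmethod
        def create(cls, context, values, write_old=True):
            # The object decides where the data lands, based on what the
            # rest of the (possibly mixed-version) deployment can read.
            # db_api stands in for Heat's DB layer here.
            files = values.pop('files', {})
            values['files_id'] = db_api.raw_template_files_create(context,
                                                                  files)
            values['files'] = files if write_old else None
            return db_api.raw_template_create(context, values)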


cheers,
Zane.


[1]
http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations







Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-18 Thread Michał Dulko
On 05/17/2016 09:40 PM, Crag Wolfe wrote:
 


> That helps a lot, thanks! You are right, it would have to be a 3-step
> upgrade to avoid the issue you mentioned in step 6.
>
> Another thing I am wondering about: if my particular object is not
> exposed by RPC, is it worth making it a full-blown o.vo or not? I.e., I
> can do the 3 steps over 3 releases just in the object's .py file -- what
> additional value do I get from o.vo?

Unfortunately Zane's right - none. In the case of DB schema
compatibility, you can benefit from o.vo if you have something like
nova-conductor - a service that is upgraded atomically and can backport
an object to a previous version while reading data from the newer DB
schema. That also assumes there are no direct DB accesses in your
n-cpu-like services.

In Cinder, o.vo are mostly useful for modeling dictionaries sent over
RPC (like request_spec), which we backport if there are older versions
of services in the deployment. Versioning and well-defining these dict
blobs is essential for controlling compatibility. Sending a whole o.vo
instead of a plain id in RPC methods can also give you more flexibility
with complicated compatibility issues, but it turns out that in Cinder
we haven't yet hit a case where that would be useful.
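
For example (with a made-up object, not Cinder's real request_spec),
backporting a blob before putting it on the wire looks roughly like this:

    from oslo_utils import versionutils
    from oslo_versionedobjects import base, fields

    @base.VersionedObjectRegistry.register
    class RequestSpec(base.VersionedObject):
        # 1.0: initial version; 1.1: added 'retry_count'
        VERSION = '1.1'
        fields = {
            'volume_id': fields.UUIDField(),
            'retry_count': fields.IntegerField(nullable=True),
        }

        def obj_make_compatible(self, primitive, target_version):
            super(RequestSpec, self).obj_make_compatible(primitive,
                                                         target_version)
            if versionutils.convert_version_to_tuple(target_version) < (1, 1):
                primitive.pop('retry_count', None)

    spec = RequestSpec(volume_id='8e3e9d0f-2f4b-4a5e-9c0c-96b7e3c8a111',
                       retry_count=1)
    # If an older service is present, send a 1.0-compatible primitive so
    # the receiver can still deserialize it.
    wire_blob = spec.obj_to_primitive(target_version='1.0')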

> I'm also shying away from the idea of allowing for config-driven
> upgrades. The reason is, suppose an operator updates a config, then does
> a rolling restart to go from X to X+1. Then again (and probably again)
> as needed. Everything works great, run a victory lap. A few weeks later,
> some ansible or puppet automation accidentally blows away the config
> value saying that heat-engine should be running at the X+3 version for
> my_object. Ouch. Probably unlikely, but more likely than, say,
> accidentally deploying a .py file from three releases ago.



Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Crag Wolfe
Now getting very Heat-specific. W.r.t.
https://review.openstack.org/#/c/303692/ , the goal is to de-duplicate
raw_template.files (this is a dict of template filename to contents),
both in the DB and in RAM. The approach this patch is taking is that,
when one template is created by reference to another, we just re-use the
original template's files (ultimately in a new table,
raw_template_files). In the case of nested stacks, this saves on quite a
bit of duplication.
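
Roughly, the schema direction looks like this (a simplified sketch; the
exact column names and types are whatever the patch ends up with):

    import sqlalchemy as sa

    metadata = sa.MetaData()

    raw_template_files = sa.Table(
        'raw_template_files', metadata,
        sa.Column('id', sa.Integer, primary_key=True),
        # One row holds a whole {filename: contents} dict, stored once and
        # shared by every template that references it.
        sa.Column('files', sa.Text),
    )

    raw_template = sa.Table(
        'raw_template', metadata,
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('template', sa.Text),
        # Old, duplicated storage; to be dropped once nothing reads it.
        sa.Column('files', sa.Text),
        sa.Column('files_id', sa.Integer,
                  sa.ForeignKey('raw_template_files.id'), nullable=True),
    )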

If we follow the 3-step pattern discussed earlier in this thread, we
would be looking at P release as to when we start seeing DB storage
improvements. As far as RAM is concerned, we would see improvement in
the O release since that is when we would start reading from the new
column location (and could cache the template files object by its ID).
It also means that for the N release, we wouldn't see any RAM or DB
improvements, we'll just start writing template files to the new
location (in addition to the old location). Is this acceptable, or do we
impose some sort of downtime restrictions on the next Heat upgrade?

A compromise could be to introduce a little bit of downtime:

For the N release:
 1. Add the new column (no need to shut down heat-engine).
 2. Shut down all heat-engines.
 3. Upgrade code base to N throughout cluster.
 4. Start all heat-engines. Read from new and old template files
locations, but only write to the new one.

For the O release, we could perform a rolling upgrade with no downtime
where we are only reading and writing to the new location, and then drop
the old column as a post-upgrade migration (i.e, the typical N+2 pattern
[1] that Michal referenced earlier and I'm re-referencing :-).

The advantage to the compromise is we would immediately start seeing RAM
and DB improvements with the N-release.

[1]
http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations



Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Zane Bitter

On 17/05/16 15:40, Crag Wolfe wrote:

Another thing I am wondering about: if my particular object is not
exposed by RPC, is it worth making it a full-blown o.vo or not? I.e., I
can do the 3 steps over 3 releases just in the object's .py file -- what
additional value do I get from o.vo?


It's more of a cargo-cult thing ;)

(Given that *none* of the versioned objects in Heat are exposed over 
RPC, the additional value you get is consistency and a nice place to 
abstract whatever weirdness is going on in the database at any given 
time so that it doesn't leak into the rest of the code.)


cheers,
Zane.



Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Crag Wolfe
On 05/17/2016 10:34 AM, Michał Dulko wrote:
> On 05/17/2016 06:30 PM, Crag Wolfe wrote:
>> Hi all,
>>
>> I've read that versioned objects are favored for supporting different
>> versions between RPC services and to support rolling upgrades. I'm
>> looking to follow the pattern for Heat. Basically, it is the classic
>> problem where we want to migrate from writing to a column in one table
>> to having that column live in a different table. Looking at nova code,
>> the version for a given versioned object is a constant in the given
>> object's .py file. To properly support rolling upgrades
>> where we have older and newer heat-engine processes running
>> simultaneously (thus avoiding downtime), we have to write to both the
>> old column and the new column. Once all processes have been upgraded,
>> we can upgrade again to only write to the new location (but still able
>> to read from the old location, of course). Following the existing
>> pattern, this means the operator has to upgrade twice (though it may
>> be possible to increment VERSION only once, the first time).
>>
>> The drawback of the above is that it means cutting two releases (since
>> there are two different .py files). However, I wanted to check if
>> anyone has gone with a different approach so that only one release is
>> required. One way to do that would be to specify a version (or some
>> other flag) in heat.conf. Then only one .py release would be required
>> -- the logic of whether to write to both the old and new locations (the
>> intermediate step) versus just the new location (the final step) would
>> be in the .py file, dictated by the config value. The advantage of this
>> approach is that there is only one .py file released, though the
>> operator would still have to make a config change and restart heat
>> processes a second time to move from the intermediate step to the
>> final step.
> 
> Nova has the pattern of being able to do all that in one release by
> exercising o.vo, but there are assumptions they are relying on (details
> [1]):
> 
>   * nova-compute accesses the DB through nova-conductor.
>   * nova-conductor gets upgraded atomically.
>   * nova-conductor is able to backport an object if nova-compute is
> older and doesn't understand it.
> 
> Now if you want to have heat-engines running in different versions and
> all of them are freely accessing the DB, then that approach won't work
> as there's no one who can do a backport.
> 
> We've faced the same issue in Cinder and developed a way to do such
> modifications in three releases for columns that are writable and two
> releases for columns that are read-only. This is explained in spec [2]
> and devref [3]. And yes, it's a little painful.
> 
> If I got everything correctly, your idea of a two-step upgrade will work
> only for read-only columns. Consider this situation:
> 
>  1. We have deployment running h-eng (A and B) in version X.
>  2. We apply X+1 migration moving column `foo` to `bar`.
>  3. We upgrade h-eng A to X+1. Now it writes to both `foo` and `bar`.
>  4. A updates `foo` and `bar`.
>  5. B updates `foo`. Now the correct value is in `foo` only.
>  6. A wants to read the value. But is the latest one in `foo` or `bar`?
> There is no way to tell.
> 
> 
> I know the Keystone team is trying to solve that with some SQLAlchemy magic,
> but I don't think the design is agreed on yet. There was a presentation
> at the summit [4] that mentions it (and attempts to clarify the
> approaches taken by different projects).
> 
> Hopefully this helps a little.
> 
> Thanks,
> Michal (dulek on freenode)
> 
> [1] 
> http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/
> 
> [2] 
> http://specs.openstack.org/openstack/cinder-specs/specs/mitaka/online-schema-upgrades.html
> 
> [3] 
> http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations
> 
> [4] https://www.youtube.com/watch?v=ivcNI7EHyAY
> 

That helps a lot, thanks! You are right, it would have to be a 3-step
upgrade to avoid the issue you mentioned in step 6.

Another thing I am wondering about: if my particular object is not
exposed by RPC, is it worth making it a full-blown o.vo or not? I.e., I
can do the 3 steps over 3 releases just in the object's .py file -- what
additional value do I get from o.vo?

I'm also shying away from the idea of allowing for config-driven
upgrades. The reason is, suppose an operator updates a config, then does
a rolling restart to go from X to X+1. Then again (and probably again)
as needed. Everything works great, run a victory lap. A few weeks later,
some ansible or puppet automation accidentally blows away the config
value saying that heat-engine should be running at the X+3 version for
my_object. Ouch. Probably unlikely, but more likely than, say,
accidentally deploying a .py file from three releases ago.


Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Michał Dulko
On 05/17/2016 06:30 PM, Crag Wolfe wrote:
> Hi all,
>
> I've read that versioned objects are favored for supporting different
> versions between RPC services and to support rolling upgrades. I'm
> looking to follow the pattern for Heat. Basically, it is the classic
> problem where we want to migrate from writing to a column in one table
> to having that column live in a different table. Looking at nova code,
> the version for a given versioned object is a constant in the given
> object's .py file. To properly support rolling upgrades
> where we have older and newer heat-engine processes running
> simultaneously (thus avoiding downtime), we have to write to both the
> old column and the new column. Once all processes have been upgraded,
> we can upgrade again to only write to the new location (but still able
> to read from the old location, of course). Following the existing
> pattern, this means the operator has to upgrade twice (though it may
> be possible to increment VERSION only once, the first time).
>
> The drawback of the above is that it means cutting two releases (since
> there are two different .py files). However, I wanted to check if
> anyone has gone with a different approach so that only one release is
> required. One way to do that would be to specify a version (or some
> other flag) in heat.conf. Then only one .py release would be required
> -- the logic of whether to write to both the old and new locations (the
> intermediate step) versus just the new location (the final step) would
> be in the .py file, dictated by the config value. The advantage of this
> approach is that there is only one .py file released, though the
> operator would still have to make a config change and restart heat
> processes a second time to move from the intermediate step to the
> final step.

Nova has the pattern of being able to do all that in one release by
exercising o.vo, but there are assumptions they are relying on (details
[1]):

  * nova-compute accesses the DB through nova-conductor.
  * nova-conductor gets upgraded atomically.
  * nova-conductor is able to backport an object if nova-compute is
older and doesn't understand it.

Now if you want to have heat-engines running in different versions and
all of them are freely accessing the DB, then that approach won't work
as there's no one who can do a backport.

We've faced the same issue in Cinder and developed a way to do such
modifications in three releases for columns that are writable and two
releases for columns that are read-only. This is explained in spec [2]
and devref [3]. And yes, it's a little painful.

If I got everything correctly, your idea of a two-step upgrade will work
only for read-only columns. Consider this situation:

 1. We have deployment running h-eng (A and B) in version X.
 2. We apply X+1 migration moving column `foo` to `bar`.
 3. We upgrade h-eng A to X+1. Now it writes to both `foo` and `bar`.
 4. A updates `foo` and `bar`.
 5. B updates `foo`. Now the correct value is in `foo` only.
 6. A wants to read the value. But is the latest one in `foo` or `bar`?
There is no way to tell.
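
In other words, with only two phases a reader can never trust either
column. The three-release variant avoids that by keeping the read side
one release behind the write side; a compressed, purely illustrative
sketch of the per-phase logic for moving a writable column `foo` to
`bar` over X+1..X+3:

    def save(row, value, phase):
        if phase in ('X+1', 'X+2'):
            # Some readers still trust `foo`, so keep writing it.
            row['foo'] = value
        row['bar'] = value

    def load(row, phase):
        if phase == 'X+1':
            # Peers still running X write only `foo`.
            return row['foo']
        # From X+2 on every writer also fills `bar`, so it is safe to read.
        return row['bar']

Only at X+3, when nothing reads `foo` any more, can the old column be
dropped (as a post-upgrade migration).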


I know the Keystone team is trying to solve that with some SQLAlchemy magic,
but I don't think the design is agreed on yet. There was a presentation
at the summit [4] that mentions it (and attempts to clarify the
approaches taken by different projects).

Hopefully this helps a little.

Thanks,
Michal (dulek on freenode)

[1] 
http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/

[2] 
http://specs.openstack.org/openstack/cinder-specs/specs/mitaka/online-schema-upgrades.html

[3] 
http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations

[4] https://www.youtube.com/watch?v=ivcNI7EHyAY




[openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Crag Wolfe
Hi all,

I've read that versioned objects are favored for supporting different
versions between RPC services and to support rolling upgrades. I'm
looking to follow the pattern for Heat. Basically, it is the classic
problem where we want to migrate from writing to a column in one table
to having that column live in a different table. Looking at nova code,
the version for a given versioned object is a constant in the given
object's .py file. To properly support rolling upgrades
where we have older and newer heat-engine processes running
simultaneously (thus avoiding downtime), we have to write to both the
old column and the new column. Once all processes have been upgraded,
we can upgrade again to only write to the new location (but still able
to read from the old location, of course). Following the existing
pattern, this means the operator has to upgrade twice (though it may
be possible to increment VERSION only once, the first time).

The drawback of the above is that it means cutting two releases (since
there are two different .py files). However, I wanted to check if anyone
has gone with a different approach so that only one release is required.
One way to do that would be to specify a version (or some other flag) in
heat.conf. Then only one .py release would be required -- the logic of
whether to write to both the old and new locations (the intermediate
step) versus just the new location (the final step) would be in the .py
file, dictated by the config value. The advantage of this approach is
that there is only one .py file released, though the operator would
still have to make a config change and restart heat processes a second
time to move from the intermediate step to the final step.
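
For what it's worth, the config-driven variant could be as small as
something like this (the option name, values and helper are made up
purely for illustration):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('template_files_write_phase',
                   default='both',
                   choices=['both', 'new-only'],
                   help='"both" is the intermediate step (write template '
                        'files to the old and new locations); "new-only" '
                        'is the final step.'),
    ])

    def store_template_files(context, values, files):
        # _template_files_create is a hypothetical helper that writes to
        # the new raw_template_files table and returns the row id.
        values['files_id'] = _template_files_create(context, files)
        if CONF.template_files_write_phase == 'both':
            values['files'] = files   # also keep the old column populated
        else:
            values['files'] = None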

Thanks,
--Crag
