Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Zane Bitter

On 25/02/15 19:15, Dolph Mathews wrote:


On Wed, Feb 25, 2015 at 5:42 PM, Zane Bitter <zbit...@redhat.com> wrote:

For completeness, if nothing else, it should be noted that another
option is for Keystone to refuse to delete the project until all
resources within it have been removed by a user.


Keystone has no knowledge of the tenant-owned resources in OpenStack
(nor is it a client of the other services), so that's not really feasible.


As pointed out above, Keystone doesn't have any knowledge of how to 
orchestrate the deletion of the tenant-owned resources either (and in 
large part neither do the other services - except Heat, and then only 
for the ones it created), so by that logic neither option is feasible.


Choose your poison ;)



It's hard to know at this point which would be more painful. Both
sound horrific in their own way :D

cheers,
Zane.





Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Dolph Mathews
On Wed, Feb 25, 2015 at 5:42 PM, Zane Bitter  wrote:

> For completeness, if nothing else, it should be noted that another option
> is for Keystone to refuse to delete the project until all resources within
> it have been removed by a user.
>

Keystone has no knowledge of the tenant-owned resources in OpenStack (nor
is it a client of the other services), so that's not really feasible.



Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Zane Bitter

On 25/02/15 15:37, Joe Gordon wrote:



You raise two good points.

* How to clean something up may be different for different clouds
* Some cleanup operations have to happen in a specific order

Not sure what the best way to address those two points is.  Perhaps the
best way forward is an openstack-specs spec to hash out these details.


For completeness, if nothing else, it should be noted that another 
option is for Keystone to refuse to delete the project until all 
resources within it have been removed by a user.
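
A sketch of what that refusal could look like, assuming a hypothetical
registry of per-service clients that can count a project's resources
(keystone has nothing of the sort today, which is exactly the objection
raised elsewhere in this thread):

def can_delete_project(project_id, service_clients):
    """Refuse the delete while any service still owns resources."""
    for name, client in service_clients.items():
        # count_resources() is a hypothetical per-service call
        if client.count_resources(project_id) > 0:
            raise ValueError('project %s still owns %s resources'
                             % (project_id, name))
    return True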


It's hard to know at this point which would be more painful. Both sound 
horrific in their own way :D


cheers,
Zane.




Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Dolph Mathews
On Wed, Feb 25, 2015 at 3:02 PM, Matt Joyce  wrote:

> Wondering if heat should be performing this orchestration.
>

I wouldn't expect heat to have access to everything that needs to be
cleaned up.


>
> Would provide for a more pluggable front end to the action set.
>
> -matt
>

Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Matt Joyce
Wondering if heat should be performing this orchestration.

Would provide for a more pluggable front end to the action set.

-matt

On Feb 25, 2015 2:37 PM, Joe Gordon  wrote:
> You raise two good points. 
>
> * How to clean something up may be different for different clouds
> * Some cleanup operations have to happen in a specific order
>
> Not sure what the best way to address those two points is.  Perhaps the best
> way forward is an openstack-specs spec to hash out these details.

Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Joe Gordon
On Sat, Feb 21, 2015 at 5:03 AM, Tim Bell  wrote:

>
> A few inline comments and a general point
>
> How do we handle scenarios like volumes when we have a per-component
> janitor rather than a single co-ordinator?
>
> To be clean:
>
> 1. nova should shut down the instance
> 2. nova should then ask the volume to be detached
> 3. cinder could then perform the 'project deletion' action as configured
> by the operator (such as shelve or backup)
> 4. nova could then perform the 'project deletion' action as configured by
> the operator (such as VM delete or shelve)
>
> If we have both cinder and nova responding to a single message, cinder
> would do 3 immediately and nova would be doing the shutdown, which is
> likely to lead to a volume which could not be shelved cleanly.
>
> The problem I see with messages is that co-ordination of the actions may
> require ordering between the components.  The disable/enable cases would
> show this in a worse scenario.
>

You raise two good points.

* How to clean something up may be different for different clouds
* Some cleanup operations have to happen in a specific order

Not sure what the best way to address those two points is.  Perhaps the
best way forward is an openstack-specs spec to hash out these details.



> Tim

Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-21 Thread Tim Bell

A few inline comments and a general point

How do we handle scenarios like volumes when we have a per-component janitor
rather than a single co-ordinator?

To be clean:

1. nova should shut down the instance
2. nova should then ask the volume to be detached
3. cinder could then perform the 'project deletion' action as configured by the 
operator (such as shelve or backup)
4. nova could then perform the 'project deletion' action as configured by the 
operator (such as VM delete or shelve)

If we have both cinder and nova responding to a single message, cinder would do
3 immediately and nova would be doing the shutdown, which is likely to lead to
a volume which could not be shelved cleanly.

The problem I see with messages is that co-ordination of the actions may 
require ordering between the components.  The disable/enable cases would show 
this in a worse scenario.

Tim
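
To make the ordering concern above concrete, here is a minimal sketch of the
sequence a single co-ordinator would have to enforce. All of the nova/cinder
calls are hypothetical stand-ins, not real novaclient/cinderclient APIs:

def cleanup_project(project_id, nova, cinder, action='shelve'):
    # 1. and 2.: shut down each instance, then detach its volumes
    for server in nova.list_servers(project_id):
        nova.stop(server)
        for volume in nova.attached_volumes(server):
            nova.detach_volume(server, volume)
    # 3.: cinder performs the operator-configured 'project deletion' action
    for volume in cinder.list_volumes(project_id):
        cinder.project_deletion_action(volume, action)
    # 4.: nova performs its own operator-configured action last
    for server in nova.list_servers(project_id):
        nova.project_deletion_action(server, action)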

> -Original Message-
> From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
> Sent: 19 February 2015 17:49
> To: OpenStack Development Mailing List (not for usage questions); Joe Gordon
> Cc: openstack-operat...@lists.openstack.org
> Subject: Re: [Openstack-operators] [openstack-dev] Resources owned by a
> project/tenant are not cleaned up after that project is deleted from keystone
> 
> 
> 
> On 2/2/15, 15:41, "Morgan Fainberg"  wrote:
> 
> >
> >On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gord...@gmail.com)
> >wrote:
> >
> >I disagree slightly: I don't think projects should directly listen to
> >the Keystone notifications; I would rather have the API be something
> >from a keystone-owned library, say keystonemiddleware. So something
> >like this:
> >
> >
> >from keystonemiddleware import janitor
> >
> >
> >keystone_janitor = janitor.Janitor()
> >keystone_janitor.register_callback(nova.tenant_cleanup)
> >
> >
> >keystone_janitor.spawn_greenthread()
> >
> >
> >That way each project doesn't have to include a lot of boilerplate
> >code, and keystone can easily modify/improve/upgrade the notification
> >mechanism.
> >


I assume janitor functions can be used for

- enable/disable project
- enable/disable user
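
If so, a per-event registration API would be the natural shape. This is a
hypothetical extension of the janitor sketch above (which registers only a
single callback), and the event names here are guesses:

keystone_janitor.register_callback('identity.project.deleted',
                                   nova.tenant_cleanup)
keystone_janitor.register_callback('identity.project.disabled',
                                   nova.tenant_suspend)
keystone_janitor.register_callback('identity.user.disabled',
                                   nova.revoke_user_access)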

> >
> >Sure. I’d place this into an implementation detail of where that
> >actually lives. I’d be fine with that being a part of Keystone
> >Middleware Package (probably something separate from auth_token).
> >
> >
> >—Morgan
> >
> 
> I think my only concern is what should other projects do and how much do we
> want to allow operators to configure this? I can imagine it being preferable
> to have safe (without losing much data) policies for this as a default and
> to allow operators to configure more destructive policies as part of
> deploying certain services.
> 

Depending on the cloud, an operator could want different semantics for a
project deletion's impact: delete, 'shelve' style, or maybe just disable.


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-19 Thread Ian Cordasco


On 2/2/15, 15:41, "Morgan Fainberg"  wrote:

>
>On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gord...@gmail.com)
>wrote:
>
>
>
>On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg
> wrote:
>
>I think the simple answer is "yes". We (keystone) should emit
>notifications. And yes other projects should listen.
>
>The only thing really in discussion should be:
>
>1: soft delete or hard delete? Does the service mark it as orphaned, or
>just delete (leave this to nova, cinder, etc. to discuss)
>
>2: how to cleanup when an event is missed (e.g. rabbit bus goes out to
>lunch).
>
>I disagree slightly: I don't think projects should directly listen to the
>Keystone notifications; I would rather have the API be something from a
>keystone-owned library, say keystonemiddleware. So something like this:
>
>
>from keystonemiddleware import janitor
>
>
>keystone_janitor = janitor.Janitor()
>keystone_janitor.register_callback(nova.tenant_cleanup)
>
>
>keystone_janitor.spawn_greenthread()
>
>
>That way each project doesn't have to include a lot of boilerplate code,
>and keystone can easily modify/improve/upgrade the notification mechanism.
>
>
>Sure. I’d place this into an implementation detail of where that actually
>lives. I’d be fine with that being a part of Keystone Middleware Package
>(probably something separate from auth_token).
>
>
>—Morgan
>

I think my only concern is what should other projects do and how much do
we want to allow operators to configure this? I can imagine it being
preferable to have safe (without losing much data) policies for this as a
default and to allow operators to configure more destructive policies as
part of deploying certain services.
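
One way a service could express that "safe by default, destructive by choice"
policy is an ordinary oslo.config option. The option and group names here are
hypothetical, not an existing option in any project:

from oslo_config import cfg

cleanup_opts = [
    cfg.StrOpt('project_deleted_action',
               default='log',  # safest default: only log the orphans
               choices=['log', 'disable', 'shelve', 'delete'],
               help='Action to take on resources whose keystone '
                    'project has been deleted.'),
]
cfg.CONF.register_opts(cleanup_opts, group='project_cleanup')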



Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Morgan Fainberg
On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gord...@gmail.com) wrote:


On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg wrote:
I think the simple answer is "yes". We (keystone) should emit notifications. 
And yes other projects should listen.

The only thing really in discussion should be:

1: soft delete or hard delete? Does the service mark it as orphaned, or just 
delete (leave this to nova, cinder, etc. to discuss)

2: how to cleanup when an event is missed (e.g. rabbit bus goes out to lunch).


I disagree slightly: I don't think projects should directly listen to the
Keystone notifications; I would rather have the API be something from a
keystone-owned library, say keystonemiddleware. So something like this:

from keystonemiddleware import janitor

keystone_janitor = janitor.Janitor()
keystone_janitor.register_callback(nova.tenant_cleanup)

keystone_janitor.spawn_greenthread()

That way each project doesn't have to include a lot of boilerplate code, and 
keystone can easily modify/improve/upgrade the notification mechanism.


Sure. I’d place this into an implementation detail of where that actually 
lives. I’d be fine with that being a part of Keystone Middleware Package 
(probably something separate from auth_token).

—Morgan

 



Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Joe Gordon
On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg wrote:

> I think the simple answer is "yes". We (keystone) should emit
> notifications. And yes other projects should listen.
>
> The only thing really in discussion should be:
>
> 1: soft delete or hard delete? Does the service mark it as orphaned, or
> just delete (leave this to nova, cinder, etc. to discuss)
>
> 2: how to cleanup when an event is missed (e.g. rabbit bus goes out to
> lunch).
>


I disagree slightly: I don't think projects should directly listen to the
Keystone notifications; I would rather have the API be something from a
keystone-owned library, say keystonemiddleware. So something like this:

from keystonemiddleware import janitor

keystone_janitor = janitor.Janitor()
keystone_janitor.register_callback(nova.tenant_cleanup)

keystone_janitor.spawn_greenthread()

That way each project doesn't have to include a lot of boilerplate code,
and keystone can easily modify/improve/upgrade the notification mechanism.
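
A rough sense of what could sit behind that API, sketched on top of an
oslo.messaging notification listener. None of this is an existing
keystonemiddleware module, and the event name and payload shape are
assumptions based on keystone's basic notification format:

import eventlet
import oslo_messaging
from oslo_config import cfg


class Janitor(object):
    def __init__(self, conf=cfg.CONF):
        self._callbacks = []
        transport = oslo_messaging.get_notification_transport(conf)
        targets = [oslo_messaging.Target(topic='notifications')]
        self._listener = oslo_messaging.get_notification_listener(
            transport, targets, [self], executor='eventlet')

    def register_callback(self, callback):
        # e.g. nova.tenant_cleanup, called with the deleted project's id
        self._callbacks.append(callback)

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # keystone's basic notifications are assumed to carry the
        # project id in payload['resource_info']
        if event_type == 'identity.project.deleted':
            for callback in self._callbacks:
                callback(payload['resource_info'])

    def spawn_greenthread(self):
        return eventlet.spawn(self._listener.start)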





Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Morgan Fainberg
I think the simple answer is "yes". We (keystone) should emit notifications. 
And yes other projects should listen. 

The only thing really in discussion should be:

1: soft delete or hard delete? Does the service mark it as orphaned, or just 
delete (leave this to nova, cinder, etc. to discuss)

2: how to cleanup when an event is missed (e.g. rabbit bus goes out to lunch).
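
Question 1 in callback form, as a hypothetical nova-side handler (neither
helper below is a real nova API):

def tenant_cleanup(project_id, soft=True):
    for instance in list_instances_for(project_id):  # hypothetical lookup
        if soft:
            # soft delete: mark the instance orphaned, keep the data
            instance.metadata['orphaned_project'] = project_id
            instance.save()
        else:
            # hard delete: the destructive option
            instance.delete()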

--Morgan 

Sent via mobile

> On Feb 2, 2015, at 10:16, Matthew Treinish  wrote:
> 
>> On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
>> This came up in the operators mailing list back in June [1] but given the
>> subject probably didn't get much attention.
>> 
>> Basically there is a really old bug [2] from Grizzly that is still a problem
>> and affects multiple projects.  A tenant can be deleted in Keystone even
>> though other resources in other projects are under that project, and those
>> resources aren't cleaned up.
> 
> I agree this probably can be a major pain point for users. We've had to work
> around it in tempest by creating things like:
> 
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
> and
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
> 
> to ensure we aren't dangling resources after a run. But, this doesn't work in
> all cases either. (like with tenant isolation enabled)
> 
> I also know there is a stackforge project that is attempting something similar
> here:
> 
> http://git.openstack.org/cgit/stackforge/ospurge/
> 
> It would be much nicer if the burden for doing this was taken off users and
> this was just handled cleanly under the covers.
> 
>> 
>> Keystone implemented event notifications back in Havana [3] but the other
>> projects aren't listening on them to know when a project has been deleted
>> and act accordingly.
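
For reference, on the bus those notifications look roughly like the
following; the exact fields are recalled from keystone's basic (non-CADF)
notification format and should be treated as an assumption:

notification = {
    'event_type': 'identity.project.deleted',
    'publisher_id': 'identity.keystone01.example.com',
    'payload': {'resource_info': 'the-deleted-project-id'},
}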
>> 
>> The bug has several people saying "we should talk about this at the summit"
>> for several summits, but I can't find any discussion or summit sessions
>> related back to the bug.
>> 
>> Given this is an operations and cross-project issue, I'd like to bring it up
>> again for the Vancouver summit if there is still interest (which I'm
>> assuming there is from operators).
> 
> I'd definitely support having a cross-project session on this.
> 
>> 
>> There is a blueprint specifically for the tenant deletion case but it's
>> targeted at only Horizon [4].
>> 
>> Is anyone still working on this? Is there sufficient interest in a
>> cross-project session at the L summit?
>> 
>> Thinking out loud, even if nova doesn't listen to events from keystone, we
>> could at least have a periodic task that looks for instances where the
>> tenant no longer exists in keystone and then take some action (log a
>> warning, shutdown/archive/reap, etc).
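
A sketch of that periodic-task fallback, with the keystone client and the
instance listing as hypothetical stand-ins:

import logging

LOG = logging.getLogger(__name__)


def reap_orphaned_instances(keystone, instances):
    # runs periodically, so it also covers missed notifications
    live_projects = {p.id for p in keystone.projects.list()}
    for instance in instances:
        if instance.project_id not in live_projects:
            LOG.warning('instance %s belongs to deleted project %s',
                        instance.id, instance.project_id)
            # then take the operator-configured action: shelve, reap, ...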
>> 
>> There is also a spec for L to transfer instance ownership [5] which could
>> maybe come into play, but I wouldn't depend on it.
>> 
>> [1] 
>> http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
>> [2] https://bugs.launchpad.net/nova/+bug/967832
>> [3] https://blueprints.launchpad.net/keystone/+spec/notifications
>> [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
>> [5] https://review.openstack.org/#/c/105367/
> 
> -Matt Treinish

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev