Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-21 Thread Michael Bayer



 On Feb 21, 2015, at 9:49 PM, Joshua Harlow harlo...@outlook.com wrote:
 
 Some comments/questions inline...
 
 Mike Bayer wrote:
 
 Yuriy Taraday yorik@gmail.com  wrote:
 
 On Fri Feb 20 2015 at 9:14:30 PM Joshua Harlow harlo...@outlook.com
 wrote:
 This feels like something we could do in the service manager base class,
 maybe by adding a post fork hook or something.
 +1 to that.
 
 I think it'd be nice to have the service __init__() maybe be something like:
 
    def __init__(self, threads=1000, prefork_callbacks=None,
                 postfork_callbacks=None):
        self.postfork_callbacks = postfork_callbacks or []
        self.prefork_callbacks = prefork_callbacks or []
        # always ensure we are closing any left-open fds last...
        self.prefork_callbacks.append(self._close_descriptors)
        ...
 
 (you must've meant postfork_callbacks.append)
 
  Note that the multiprocessing module already has a 
  `multiprocessing.util.register_after_fork` method that allows registering a 
  callback that will be called every time a Process object is run. If we 
  remove explicit use of `os.fork` in oslo.service (replacing it with the 
  Process class) we'll be able to specify any after-fork callbacks that 
  libraries need.
  For example, EngineFacade could register a `pool.dispose()` callback there 
  (it should have some proper finalization logic though).
 
 +1 to use Process and the callback system for required initialization steps
 and so forth, however I don’t know that an oslo lib should silently register
 global events on the assumption of how its constructs are to be used.
 
 I think whatever Oslo library is responsible for initiating the Process/fork
 should be where it ensures that resources from other Oslo libraries are set
 up correctly. So oslo.service might register its own event handler with
 
  Sounds like some kind of new entrypoint + discovery service that oslo.service 
  (eck, can we name it something else, something that makes it useable for 
  others on pypi...) would need to plug in to. It would seem like this is a 
  general python problem (who is to say that only oslo libraries use resources 
  that need to be fixed/closed after forking); are there any recommendations 
  that the python community has in general for this (aka, a common entrypoint 
  *all* libraries export that allows them to do things when a fork is about to 
  occur)?
 
 oslo.db such that it gets notified of new database engines so that it can
 associate a disposal with it; it would do something similar for
 oslo.messaging and other systems that use file handles.   The end
 result might be that it uses register_after_fork(), but the point is that
 oslo.db.sqlalchemy.create_engine doesn’t do this; it lets oslo.service
 apply a hook so that oslo.service can do it on behalf of oslo.db.
 
  Sounds sort of like global state/an 'open resource' pool that each library 
  needs to maintain internally that tracks how applications/other 
  libraries are using it; that feels sorta odd IMHO.
 
  Wouldn't that mean libraries that provide back resource objects, or 
  resource-containing objects..., for others to use would now need to capture 
  who is using what (weakref pools?) to track which resources are being used 
  and by whom (so that they can fix/close them on fork)? Not every library has 
  a pool (like sqlalchemy afaik does) to track these kind(s) of things (for 
  better or worse...). And what if those libraries use other libraries that use 
  resources (who owns what?); it seems like this just gets very messy/impractical 
  pretty quickly once you start using any kind of 3rd party library that 
  doesn't follow the same pattern... (which brings me back to the question of 
  isn't there a common python way/entrypoint that deals with forks that works 
  better than ^).
 
 
 So, instead of oslo.service cutting through and closing out the file
 descriptors from underneath other oslo libraries that opened them, we set up
 communication channels between oslo libs that maintain a consistent layer of
 abstraction, and instead of making all libraries responsible for the side
 effects that might be introduced from other oslo libraries, we make the
 side-effect-causing library the point at which those effects are
 ameliorated as a service to other oslo libraries.   This allows us to keep
 the knowledge of what it means to use “multiprocessing” in one
 place, rather than spreading out its effects.
 
  If only we didn't have all those other libraries[1] that people use too (that 
  afaik highly likely also have resources they open); so even with getting 
  oslo.db and oslo.messaging into this kind of pattern, we are still left with 
  the other 200+ that aren't/haven't been following this pattern ;-)

I'm only trying to solve well-known points like this one between two Oslo 
libraries.   Obviously trying to multiply out this pattern across all libraries, 
including non-Oslo ones, is infeasible.

The issue here is simple.   Does oslo.service have to worry that 

Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-21 Thread Mike Bayer


Yuriy Taraday yorik@gmail.com wrote:

 On Fri Feb 20 2015 at 9:14:30 PM Joshua Harlow harlo...@outlook.com wrote:
  This feels like something we could do in the service manager base class,
  maybe by adding a post fork hook or something.
 
 +1 to that.
 
 I think it'd be nice to have the service __init__() maybe be something like:
 
    def __init__(self, threads=1000, prefork_callbacks=None,
                 postfork_callbacks=None):
        self.postfork_callbacks = postfork_callbacks or []
        self.prefork_callbacks = prefork_callbacks or []
        # always ensure we are closing any left-open fds last...
        self.prefork_callbacks.append(self._close_descriptors)
        ...
 
 (you must've meant postfork_callbacks.append)
 
 Note that the multiprocessing module already has a 
 `multiprocessing.util.register_after_fork` method that allows registering a 
 callback that will be called every time a Process object is run. If we remove 
 explicit use of `os.fork` in oslo.service (replacing it with the Process class) 
 we'll be able to specify any after-fork callbacks that libraries need. 
 For example, EngineFacade could register a `pool.dispose()` callback there (it 
 should have some proper finalization logic though).

+1 to use Process and the callback system for required initialization steps
and so forth, however I don’t know that an oslo lib should silently register
global events on the assumption of how its constructs are to be used. 

I think whatever Oslo library is responsible for initiating the Process/fork
should be where it ensures that resources from other Oslo libraries are set
up correctly. So oslo.service might register its own event handler with
oslo.db such that it gets notified of new database engines so that it can
associate a disposal with it; it would do something similar for
oslo.messaging and other systems that use file handles.   The end 
result might be that it uses register_after_fork(), but the point is that 
oslo.db.sqlalchemy.create_engine doesn’t do this; it lets oslo.service
apply a hook so that oslo.service can do it on behalf of oslo.db.

So, instead of oslo.service cutting through and closing out the file
descriptors from underneath other oslo libraries that opened them, we set up
communication channels between oslo libs that maintain a consistent layer of
abstraction, and instead of making all libraries responsible for the side
effects that might be introduced from other oslo libraries, we make the
side-effect-causing library the point at which those effects are
ameliorated as a service to other oslo libraries.   This allows us to keep
the knowledge of what it means to use “multiprocessing” in one
place, rather than spreading out its effects.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-21 Thread Joshua Harlow

Some comments/questions inline...

Mike Bayer wrote:


Yuriy Taraday yorik@gmail.com  wrote:


On Fri Feb 20 2015 at 9:14:30 PM Joshua Harlow harlo...@outlook.com  wrote:

This feels like something we could do in the service manager base class,
maybe by adding a post fork hook or something.

+1 to that.

I think it'd be nice to have the service __init__() maybe be something like:

    def __init__(self, threads=1000, prefork_callbacks=None,
                 postfork_callbacks=None):
        self.postfork_callbacks = postfork_callbacks or []
        self.prefork_callbacks = prefork_callbacks or []
        # always ensure we are closing any left-open fds last...
        self.prefork_callbacks.append(self._close_descriptors)
        ...

(you must've meant postfork_callbacks.append)

Note that the multiprocessing module already has a 
`multiprocessing.util.register_after_fork` method that allows registering a 
callback that will be called every time a Process object is run. If we remove 
explicit use of `os.fork` in oslo.service (replacing it with the Process class) 
we'll be able to specify any after-fork callbacks that libraries need.
For example, EngineFacade could register a `pool.dispose()` callback there (it 
should have some proper finalization logic though).


+1 to use Process and the callback system for required initialization steps
and so forth, however I don’t know that an oslo lib should silently register
global events on the assumption of how its constructs are to be used.

I think whatever Oslo library is responsible for initiating the Process/fork
should be where it ensures that resources from other Oslo libraries are set
up correctly. So oslo.service might register its own event handler with


Sounds like some kind of new entrypoint + discovery service that 
oslo.service (eck, can we name it something else, something that makes it 
useable for others on pypi...) would need to plug in to. It would seem 
like this is a general python problem (who is to say that only oslo 
libraries use resources that need to be fixed/closed after forking); are 
there any recommendations that the python community has in general for 
this (aka, a common entrypoint *all* libraries export that allows them 
to do things when a fork is about to occur)?



oslo.db such that it gets notified of new database engines so that it can
associate a disposal with it; it would do something similar for
oslo.messaging and other systems that use file handles.   The end
result might be that it uses register_after_fork(), but the point is that
oslo.db.sqlalchemy.create_engine doesn’t do this; it lets oslo.service
apply a hook so that oslo.service can do it on behalf of oslo.db.


Sounds sort of like global state/an 'open resource' pool that each 
library needs to maintain internally that tracks how 
applications/other libraries are using it; that feels sorta odd IMHO.


Wouldn't that mean libraries that provide back resource objects, or 
resource-containing objects..., for others to use would now need to 
capture who is using what (weakref pools?) to track which resources are 
being used and by whom (so that they can fix/close them on fork)? Not 
every library has a pool (like sqlalchemy afaik does) to track these 
kind(s) of things (for better or worse...). And what if those libraries 
use other libraries that use resources (who owns what?); it seems like 
this just gets very messy/impractical pretty quickly once you start 
using any kind of 3rd party library that doesn't follow the same 
pattern... (which brings me back to the question of isn't there a common 
python way/entrypoint that deals with forks that works better than ^).




So, instead of oslo.service cutting through and closing out the file
descriptors from underneath other oslo libraries that opened them, we set up
communication channels between oslo libs that maintain a consistent layer of
abstraction, and instead of making all libraries responsible for the side
effects that might be introduced from other oslo libraries, we make the
side-effect-causing library the point at which those effects are
ameliorated as a service to other oslo libraries.   This allows us to keep
the knowledge of what it means to use “multiprocessing” in one
place, rather than spreading out its effects.


If only we didn't have all those other libraries[1] that people use too 
(that afaik highly likely also have resources they open); so even with 
getting oslo.db and oslo.messaging into this kind of pattern, we are 
still left with the other 200+ that aren't/haven't been following this 
pattern ;-)


[1] 
https://github.com/openstack/requirements/blob/master/global-requirements.txt 
(+ ~40 more that are transitive dependencies).






[openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-21 Thread Hongbin Lu
Hi all,

I tried to go through the new redis example in the quickstart guide [1],
but was not able to complete it. I was blocked when connecting to the redis
slave container:

$ docker exec -i -t $REDIS_ID redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused

Here is the container log:

$ docker logs $REDIS_ID
Error: Server closed the connection
Failed to find master.

It looks like the redis master disappeared at some point. I tried to check
the status about every minute. Below is the output.

$ kubectl get pod
NAME                                   IMAGE(S)              HOST       LABELS                                                  STATUS
51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/  name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending
redis-master                           kubernetes/redis:v1   10.0.0.4/  name=redis,redis-sentinel=true,role=master              Pending
                                       kubernetes/redis:v1
512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/  name=redis                                              Pending

$ kubectl get pod
NAME                                   IMAGE(S)              HOST       LABELS                                                  STATUS
512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/  name=redis                                              Running
51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/  name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
redis-master                           kubernetes/redis:v1   10.0.0.4/  name=redis,redis-sentinel=true,role=master              Running
                                       kubernetes/redis:v1

$ kubectl get pod
NAME                                   IMAGE(S)              HOST       LABELS                                                  STATUS
redis-master                           kubernetes/redis:v1   10.0.0.4/  name=redis,redis-sentinel=true,role=master              Running
                                       kubernetes/redis:v1
512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/  name=redis                                              Failed
51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/  name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/  name=redis                                              Running

$ kubectl get pod
NAME                                   IMAGE(S)              HOST       LABELS                                                  STATUS
512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/  name=redis                                              Running
51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/  name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/  name=redis                                              Running
3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/  name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending

Is anyone able to reproduce the problem above? If yes, I am going to file a
bug.

Thanks,
Hongbin

[1]
https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack


Re: [openstack-dev] [Neutron] FWaaS - question about drivers

2015-02-21 Thread Sławek Kapłoński

Hello,

Thanks a lot for the explanation. Now it is more clear for me :)

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

On 2015-02-21 at 01:20, Sumit Naiksatam wrote:

Inline...

On Fri, Feb 20, 2015 at 3:38 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:

Hello,

Thx guys. Now it is clear for me :)
One more question. I saw that in this service plugin there is a hardcoded quota
of 1 firewall per tenant. Do you know why it is so limited? Is there any
important reason for that?


This is a current limitation of the reference implementation, since we
associate the FWaaS firewall resource with all the neutron routers.
Note that this is not a limitation of the FWaaS model, hence, if your
backend can support it, you can override this limitation.


And a second thing. As there is only one firewall per tenant, all rules from
it will be applied on all routers (L3 agents) from this tenant and for all
tenant networks, am I right? If yes, how is setting firewall rules handled


In general, this limitation is going away in the Kilo release. See the
following patch under review which removes the limitation of one
router per tenant:
https://review.openstack.org/#/c/152697/


when, for example, a new router is created? Is the L3 agent asking about the
rules via RPC, or is FWaaS sending such a notification to the L3 agent?


In the current implementation this is automatically reconciled.
Whenever a new router comes up, the FWaaS agent pulls the rules and
applies them on the interfaces of the new router.
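For someone writing their own driver against a custom router backend (the original question in the thread), the shape might look roughly like this. The class and method names below are invented to mirror the general pattern; the authoritative abstract interface is in neutron_fwaas/services/firewall/drivers/fwaas_base.py:

```python
# Illustrative skeleton only -- FakeRouter and MyFirewallDriver are
# invented stand-ins, not real neutron-fwaas classes.

class FakeRouter:
    """Stand-in for a custom router backend (no L3 agent involved)."""
    def __init__(self, name):
        self.name = name
        self.rules = []

class MyFirewallDriver:
    def create_firewall(self, apply_list, firewall):
        # One tenant firewall spans every router in apply_list.
        for router in apply_list:
            router.rules = list(firewall["firewall_rule_list"])
        return True

    def delete_firewall(self, apply_list, firewall):
        for router in apply_list:
            router.rules = []
        return True

routers = [FakeRouter("r1"), FakeRouter("r2")]
driver = MyFirewallDriver()
driver.create_firewall(routers, {"firewall_rule_list": ["allow tcp/22"]})
print([r.rules for r in routers])  # both routers get the tenant's rules
```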


Sorry if my questions are silly but I didn't do anything with these service
plugins yet :)

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

On Friday, 20 February 2015 at 16:27:33, Doug Wiegley wrote:

Same project, shiny new repo.

doug


On Feb 20, 2015, at 4:05 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:

Hello,

Thx for the tips. I have one more question. You pointed me to the neutron-fwaas
project, which to me looks like a different project than neutron. I saw the
fwaas service plugin directly in neutron in Juno. So which version
should I use: this neutron-fwaas or the service plugin from neutron? Or maybe
they are the same and I misunderstand something?

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

On Friday, 20 February 2015 at 14:44:21, Sumit Naiksatam wrote:

Inline...

On Wed, Feb 18, 2015 at 7:48 PM, Vikram Choudhary

vikram.choudh...@huawei.com wrote:

Hi,

You can write your own driver. You can refer to the links below to get
some idea of the architecture.

https://wiki.openstack.org/wiki/Neutron/ServiceTypeFramework


This is a legacy construct and should not be used.


https://wiki.openstack.org/wiki/Neutron/LBaaS/Agent


The above pointer is to a LBaaS Agent which is very different from a
FWaaS driver (which was the original question in the email).

FWaaS does use pluggable drivers and the default is configured here:
https://github.com/openstack/neutron-fwaas/blob/master/etc/fwaas_driver.ini

For examples of FWaaS driver implementations, you can check here:
https://github.com/openstack/neutron-fwaas/tree/master/neutron_fwaas/services/firewall/drivers


Thanks
Vikram

-Original Message-
From: Sławek Kapłoński [mailto: ]
Sent: 19 February 2015 02:33
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] FWaaS - question about drivers

Hello,

I'm looking to use the FWaaS service plugin with my own router solution (I'm
not using the L3 agent at all). If I want to use the FWaaS plugin, should I
write my own driver for it, or should I write my own service plugin? I will be
grateful for any links to some description of FWaaS and its
architecture :) Thx a lot for any help


--
Best regards
Sławek Kapłoński
sla...@kaplonski.pl



Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-21 Thread Yuriy Taraday
On Fri Feb 20 2015 at 9:14:30 PM Joshua Harlow harlo...@outlook.com wrote:

  This feels like something we could do in the service manager base class,
  maybe by adding a post fork hook or something.

 +1 to that.

 I think it'd be nice to have the service __init__() maybe be something
 like:

    def __init__(self, threads=1000, prefork_callbacks=None,
                 postfork_callbacks=None):
        self.postfork_callbacks = postfork_callbacks or []
        self.prefork_callbacks = prefork_callbacks or []
        # always ensure we are closing any left-open fds last...
        self.prefork_callbacks.append(self._close_descriptors)
        ...


(you must've meant postfork_callbacks.append)

Note that the multiprocessing module already has a
`multiprocessing.util.register_after_fork` method that allows registering a
callback that will be called every time a Process object is run. If we
remove explicit use of `os.fork` in oslo.service (replacing it with the
Process class) we'll be able to specify any after-fork callbacks that
libraries need. For example, EngineFacade could register a `pool.dispose()`
callback there (it should have some proper finalization logic though).

I'd also suggest avoiding closing any fds in a library that doesn't own them.
That would definitely cause woes for developers who would expect shared
descriptors to work.


Re: [openstack-dev] [neutron] [lbaas] Querries Regarding Health Monitor parameter in loadbalancer

2015-02-21 Thread Brandon Logan
Hi Rattenpal,
Could you elaborate on this more?  I haven't seen any issues with the API being
able to parse type; it's using the json standard library.  Is this in regard
to another JSON parser?

thanks,
brandon

On Feb 19, 2015 11:24 PM, Rattenpal Amandeep rattenpal.amand...@tcs.com wrote:
Hi Team

As per the V2 API specification, the load balancer healthmonitor has a parameter
named type which cannot be parsed by the JSON parser.
So, it must be replaced by healthmonitor_type as per the OpenDaylight bug
( https://bugs.opendaylight.org/show_bug.cgi?id=1674 )

I reported a bug related to this
(https://bugs.launchpad.net/neutron/+bug/1415336 )
Should I make changes to the health monitor parameter?

Thanks
Amandeep
Mail to: rattenpal.amand...@tcs.com

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-21 Thread Tim Bell

A few inline comments and a general point

How do we handle scenarios like volumes when we have a per-component janitor 
rather than a single co-ordinator?

To be clean:

1. nova should shutdown the instance
2. nova should then ask the volume to be detached
3. cinder could then perform the 'project deletion' action as configured by the 
operator (such as shelve or backup)
4. nova could then perform the 'project deletion' action as configured by the 
operator (such as VM delete or shelve)

If we have both cinder and nova responding to a single message, cinder would do 
3 immediately while nova was still doing the shutdown, which is likely to lead to 
a volume which could not be shelved cleanly.

The problem I see with messages is that co-ordination of the actions may 
require ordering between the components.  The disable/enable cases would show 
this in a worse scenario.
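The ordered sequence above can be sketched under a single co-ordinator; all names below are invented for illustration:

```python
# Hypothetical coordinator enforcing the ordering described above:
# shutdown -> detach -> cinder's configured action -> nova's configured
# action.  With a single broadcast message instead, cinder could act on
# the volume while nova is still shutting the instance down.

log = []

def nova_shutdown(instance):
    log.append("shutdown %s" % instance)

def nova_detach(instance, volume):
    log.append("detach %s from %s" % (volume, instance))

def cinder_project_deletion_action(volume):
    # e.g. shelve or backup, as configured by the operator
    log.append("backup %s" % volume)

def nova_project_deletion_action(instance):
    # e.g. delete or shelve, as configured by the operator
    log.append("delete %s" % instance)

def delete_project_resources(instance, volume):
    # Ordering matters: cinder must not act on the volume until nova
    # has shut the instance down and detached it.
    nova_shutdown(instance)
    nova_detach(instance, volume)
    cinder_project_deletion_action(volume)
    nova_project_deletion_action(instance)

delete_project_resources("vm-1", "vol-1")
print(log)
```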

Tim

 -Original Message-
 From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
 Sent: 19 February 2015 17:49
 To: OpenStack Development Mailing List (not for usage questions); Joe Gordon
 Cc: openstack-operat...@lists.openstack.org
 Subject: Re: [Openstack-operators] [openstack-dev] Resources owned by a
 project/tenant are not cleaned up after that project is deleted from keystone
 
 
 
 On 2/2/15, 15:41, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 
 
 On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gord...@gmail.com)
 wrote:
 
 
 
 On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg
 morgan.fainb...@gmail.com wrote:
 
 I think the simple answer is yes. We (keystone) should emit
 notifications. And yes other projects should listen.
 
 The only thing really in discussion should be:
 
 1: soft delete or hard delete? Does the service mark it as orphaned, or
 just delete (leave this to nova, cinder, etc to discuss)
 
 2: how to cleanup when an event is missed (e.g rabbit bus goes out to
 lunch).
 
 
 
 
 
 
  I disagree slightly, I don't think projects should directly listen to
  the Keystone notifications; I would rather have the API be something
  from a keystone-owned library, say keystonemiddleware. So something like
  this:
 
 
 from keystonemiddleware import janitor
 
 
 keystone_janitor = janitor.Janitor()
 keystone_janitor.register_callback(nova.tenant_cleanup)
 
 
 keystone_janitor.spawn_greenthread()
 
 
 That way each project doesn't have to include a lot of boilerplate
 code, and keystone can easily modify/improve/upgrade the notification
 mechanism.
 
 


I assume janitor functions can be used for

- enable/disable project
- enable/disable user

 
 
 
 
 
 
 
 
 
 Sure. I’d place this into an implementation detail of where that
 actually lives. I’d be fine with that being a part of Keystone
 Middleware Package (probably something separate from auth_token).
 
 
 —Morgan
 
 
 I think my only concern is what should other projects do and how much do we
 want to allow operators to configure this? I can imagine it being preferable 
 to
 have safe (without losing much data) policies for this as a default and to 
 allow
 operators to configure more destructive policies as part of deploying certain
 services.
 

Depending on the cloud, an operator could want different semantics for a deleted 
project's impact: delete, 'shelve'-style, or maybe disable.
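The `keystonemiddleware.janitor` registration idea quoted earlier in the thread can be sketched concretely; everything below is hypothetical, not a real keystonemiddleware API:

```python
# Hypothetical janitor: services register cleanup callbacks, and the
# janitor invokes them when keystone reports a deleted project.  In the
# real proposal this would be driven by keystone's event notifications
# (and a greenthread), not by a direct method call.

class Janitor:
    def __init__(self):
        self._callbacks = []

    def register_callback(self, callback):
        self._callbacks.append(callback)

    def project_deleted(self, project_id):
        for callback in self._callbacks:
            callback(project_id)

cleaned = []
janitor = Janitor()
# Each consuming service registers its own cleanup hook:
janitor.register_callback(lambda pid: cleaned.append(("nova", pid)))
janitor.register_callback(lambda pid: cleaned.append(("cinder", pid)))
janitor.project_deleted("tenant-123")
print(cleaned)
```

Whether each callback hard-deletes, shelves, or merely disables resources would then be the operator-configurable policy discussed above.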

 
 
 
 
 
 
 --Morgan
 
 Sent via mobile
 
  On Feb 2, 2015, at 10:16, Matthew Treinish mtrein...@kortar.org wrote:
 
  On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
  This came up in the operators mailing list back in June [1] but
 given the  subject probably didn't get much attention.
 
  Basically there is a really old bug [2] from Grizzly that is still a
 problem  and affects multiple projects.  A tenant can be deleted in
 Keystone even  though other resources in other projects are under
 that project, and those  resources aren't cleaned up.
 
  I agree this probably can be a major pain point for users. We've had
 to work around it  in tempest by creating things like:
 
 
 http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
  and
 http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
 
  to ensure we aren't dangling resources after a run. But, this doesn't work in 
 all cases either (like with tenant isolation enabled).
 
  I also know there is a stackforge project that is attempting
 something similar
  here:
 
  http://git.openstack.org/cgit/stackforge/ospurge/
 
  It would be much nicer if the burden for doing this was taken off
 users and this  was just handled cleanly under the covers.
 
 
  Keystone implemented event notifications back in Havana [3] but the
 other  projects aren't listening on them to know when a project has
 been deleted  and act accordingly.
 
  The bug has several people saying we should talk about 

Re: [openstack-dev] [Openstack-operators] RFC: Increasing min libvirt to 1.0.6 for LXC driver ?

2015-02-21 Thread Dmitry Guryanov
Let's put off this cleanup to the L release. There is a problem with mounting a loop 
device with user namespaces enabled, so we can't commit the change without breaking 
containers with user namespaces.

I'm going on vacation until 6th March; when I return I'm going to study the LXC 
code and figure out what should be done so that containers with user 
namespaces will start from images on loop devices.



From: Dmitry Guryanov dgurya...@parallels.com
Sent: 16 February 2015 16:46
To: Daniel P. Berrange
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] RFC: Increasing min libvirt to 
1.0.6 for LXC driver ?

On 02/16/2015 04:36 PM, Daniel P. Berrange wrote:
 On Mon, Feb 16, 2015 at 04:31:21PM +0300, Dmitry Guryanov wrote:
 On 02/13/2015 05:50 PM, Jay Pipes wrote:
 On 02/13/2015 09:20 AM, Daniel P. Berrange wrote:
 On Fri, Feb 13, 2015 at 08:49:26AM -0500, Jay Pipes wrote:
 On 02/13/2015 07:04 AM, Daniel P. Berrange wrote:
  Historically Nova has had a bunch of code which mounted images on the
  host OS using qemu-nbd before passing them to libvirt to set up the
  LXC container. Since 1.0.6, libvirt is able to do this itself and it
  would simplify the codepaths in Nova if we can rely on that

 In general, without use of user namespaces, LXC can't really be
 considered secure in OpenStack, and this already requires libvirt
 version 1.1.1 and Nova Juno release.

  As such I'd be surprised if anyone is running OpenStack with libvirt
  LXC in production on libvirt < 1.1.1 as it would be pretty insecure,
 but stranger things have happened.

 The general libvirt min requirement for LXC, QEMU and KVM currently
 is 0.9.11. We're *not* proposing to change the QEMU/KVM min libvirt,
 but feel it is worth increasing the LXC min libvirt to 1.0.6

 So would anyone object if we increased min libvirt to 1.0.6 when
 running the LXC driver ?
 Thanks for raising the question, Daniel!

 Since there are no objections, I'd like to make 1.1.1 the minimal required
 version. Let's also make parameters uid_maps and gid_maps mandatory and
 always add them to libvirt XML.
 I think it is probably not enough prior warning to actually turn on user
 namespace by default in Kilo. So I think what we should do for Kilo is to
 issue a warning message on nova startup if userns is not enabled in the
 config, telling users that this will become mandatory in Liberty. Then
 when Liberty dev opens, we make it mandatory.

 Regards,
 Daniel

OK, seems reasonable.

--
Dmitry Guryanov

