Re: [openstack-dev] [Ironic] [Oslo] Question about Futurist executors

2015-07-23 Thread Joshua Harlow
An example/PoC that adds a basic rejection mechanism to the various
executors (the process pool executor's internals are complicated, so it
is not implemented for that one):


https://review.openstack.org/205262

With that the following (or similar) could be done:

http://paste.openstack.org/show/404591/
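
For illustration (the paste itself is not reproduced here), a minimal sketch of
the general idea, assuming a hypothetical check-and-reject style hook on the
executor; the check_and_reject argument, the reject_when_reached helper and the
RejectedSubmission exception below are illustrative assumptions based on the
PoC's description, not a confirmed API:

import futurist
from futurist import rejection

# Assumption: the executor accepts a callback that can veto a submission
# once its internal backlog reaches a configured limit.
executor = futurist.GreenThreadPoolExecutor(
    max_workers=4,
    check_and_reject=rejection.reject_when_reached(10))

def some_work():
    return 42

try:
    fut = executor.submit(some_work)
except futurist.RejectedSubmission:
    # Translate the rejection into whatever back-pressure signal fits,
    # e.g. Ironic's NoFreeConductorWorker or an HTTP 503 to the client.
    pass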

Comments welcome. Ideally https://bugs.python.org/issue22737 would be a
much better solution than that, but the general idea is the same...


Joshua Harlow wrote:

Dmitry Tantsur wrote:

Jim,

I'm redirecting your question to the oslo folks, as I'm afraid my answer
may be wrong.

On 07/23/2015 01:55 PM, Jim Rollenhagen wrote:

On Wed, Jul 22, 2015 at 02:40:47PM +0200, Dmitry Tantsur wrote:

Hi all!

Currently _spawn_worker in the conductor manager raises
NoFreeConductorWorker if the pool is already full. That's not very
user-friendly (a potential source of retries in the client) and does not
map well onto common async worker patterns.

My understanding is that it was done to prevent the conductor thread
from waiting on the pool to become free. If this is true, we no longer
need it after the switch to Futurist, as Futurist maintains an internal
queue for its green executor, just like the thread and process executors
in the stdlib do. Instead of blocking the conductor, the request will be
queued, and a user won't have to retry a vague (and rare!) HTTP 503
error.
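
For context, the stdlib queueing behaviour described above looks like this
(a small self-contained sketch, not Ironic code): submissions beyond the
worker count are accepted and silently queued rather than rejected.

from concurrent.futures import ThreadPoolExecutor
import time

def task(i):
    time.sleep(0.1)
    return i

# Only 2 workers, but all 20 submissions are accepted immediately; the
# extra ones just wait in the executor's internal (unbounded) queue.
with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(task, i) for i in range(20)]
    print([f.result() for f in futures])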

WDYT about me dropping this exception with the move to Futurist?



I kind of like this, but with my operator hat on, this is a bit scary.
Does Futurist just queue all requests indefinitely? Is it configurable?
Am I able to get any insight into the current state of that queue?


I believe the answer is no, and the reason, IIUC, is that Futurist
executors are modeled after the stdlib executors, but I may be wrong.


So this is correct: currently the executors will queue things up, and that
queue may get very large. In Futurist we can work on making this better,
although to do it correctly we really need
https://bugs.python.org/issue22737, and that needs upstream Python
adjustments to make it possible.

Until https://bugs.python.org/issue22737 is implemented, it's not too
hard to limit the work queue yourself, but it will have to be something
extra that you track on your own.

For example:

import futurist

# Futures that have been submitted but not yet completed.
dispatched = set()

def on_done(fut):
    dispatched.discard(fut)

executor = futurist.GreenThreadPoolExecutor()

...

# Reject new work once too many futures are still outstanding.
if len(dispatched) >= MAX_DISPATCH:
    raise IAmTooOverworkedException(...)

fut = executor.submit(some_work)
dispatched.add(fut)
fut.add_done_callback(on_done)

The above will limit how much work is in flight at the same time
(https://bugs.python.org/issue22737 would make it work more like Java,
which has executor rejection policies:
http://download.java.net/jdk7/archive/b123/docs/api/java/util/concurrent/RejectedExecutionHandler.html),
but you can limit this yourself pretty easily... Making issue22737
happen would be great; I just haven't had enough time to pull that off...





Just indefinitely queueing up everything seems like it could end with a
system that's backlogged to death, with no way to determine if that's
actually the problem or not.


As for metrics, one of the additions in the Futurist executor subclasses
was to bolt on gathering of the statistics defined at
https://github.com/openstack/futurist/blob/master/futurist/_futures.py#L387
(exposed via the executor property '.statistics'), so hopefully that can
help you learn about what your executors are doing as well.
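
For example, a minimal sketch of reading those numbers, just printing the
object that the '.statistics' property returns:

import futurist

def task(i):
    return i * 2

# GreenThreadPoolExecutor requires eventlet to be installed.
executor = futurist.GreenThreadPoolExecutor(max_workers=4)
futures = [executor.submit(task, i) for i in range(10)]
for f in futures:
    f.result()

# The Futurist executor subclasses collect runtime statistics as work
# completes; an operator could log this periodically to watch the backlog
# and failure counts.
print(executor.statistics)

executor.shutdown()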



// jim



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

