Josh,
You pointed it out correctly! magnum-conductor has monkey-patched code, so the underlying thread module is actually using greenthreads.
- I would use eventlet.greenthread explicitly, as that would improve readability.
- A greenthread has the potential of not yielding by itself if it makes no I/O or blocking call. In the present scenario that is not much of a concern, as the container-operation execution is light on the client side and mostly blocks for the response from the server after issuing the request.
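To make the yielding behaviour concrete, here is a minimal standalone sketch (not Magnum code; the function names are purely illustrative): with everything monkey patched, a pure CPU loop in a threading.Thread will not yield to other greenthreads unless it hits a blocking call or an explicit eventlet.sleep(0), while an operation that blocks on I/O (like waiting for a server response) yields naturally.

import eventlet
eventlet.monkey_patch()

import threading
import time


def cpu_bound():
    # No I/O or blocking call here, so this greenthread will not yield on
    # its own; an explicit eventlet.sleep(0) would be the cooperative point.
    total = 0
    for i in range(10 ** 7):
        total += i
        # eventlet.sleep(0)  # uncomment to hand control back to the hub
    return total


def io_bound():
    # time.sleep is monkey patched, so it blocks only this greenthread and
    # lets others run - analogous to waiting for the server's response.
    time.sleep(1)


t1 = threading.Thread(target=cpu_bound)
t2 = threading.Thread(target=io_bound)
t1.start()
t2.start()
t1.join()
t2.join()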

I will update the proposal with this change.

Regards,
SURO
irc//freenode: suro-patz

On 12/16/15 11:57 PM, Joshua Harlow wrote:
SURO wrote:
Josh,
Please find my reply inline.

Regards,
SURO
irc//freenode: suro-patz

On 12/16/15 6:37 PM, Joshua Harlow wrote:


SURO wrote:
Hi all,
Please review and provide feedback on the following design proposal for
implementing the blueprint[1] on async-container-operations -

1. Magnum-conductor would have a pool of threads for executing the
container operations, viz. executor_threadpool. The size of the
executor_threadpool will be configurable. [Phase0]
2. Every time Magnum-conductor (Mcon) receives a container-operation
request from Magnum-API (Mapi), it will do the initial validation and
housekeeping and then pick a thread from the executor_threadpool to
execute the rest of the operation. Thus Mcon will return from the RPC
request context much faster, without blocking Mapi. If the
executor_threadpool has no free threads, Mcon will execute the operation
as it does today, i.e. synchronously - this will be the rate-limiting
mechanism, relaying the feedback of exhaustion. [Phase0]
How often we hit this scenario may indicate to the operator that more
Mcon workers are needed. (A sketch of this hand-off, combined with the
per-container serialization of item 4, follows this list.)
3. Blocking class of operations - there will be a class of operations
which cannot be made async, as they are supposed to return
results/content inline, e.g. 'container-logs'. [Phase0]
4. Out-of-order considerations for the NonBlocking class of operations -
there is a possible race condition for a create followed by a
start/delete of a container, as these would happen in parallel. To
solve this, we will maintain a map from each container to the thread
currently executing an operation on it. If we find a request for an
operation on a container-in-execution, we will block until that thread
completes its execution. [Phase0]
This mechanism can be further refined to achieve more asynchronous
behavior. [Phase2]
The approach above imposes the prerequisite that operations for a given
container on a given Bay go to the same Magnum-conductor instance.
[Phase0]
5. The hand-off between Mcon and a thread from the executor_threadpool
can be reflected through new states on the 'container' object. These
states can help with recovery/audit in case of an Mcon restart. [Phase1]
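Here is a minimal sketch of the hand-off from item 2 and the per-container serialization from item 4. It is not Magnum code; POOL_SIZE, submit_container_op and _in_flight are hypothetical names, and the finer-grained race handling is deliberately left out (the Phase2 refinement mentioned above).

import threading

POOL_SIZE = 8                       # would come from configuration in Phase0
_free_workers = threading.Semaphore(POOL_SIZE)
_in_flight = {}                     # container uuid -> thread currently executing an op
_in_flight_lock = threading.Lock()


def submit_container_op(container_uuid, operation, *args):
    # Item 4: if an operation is already executing for this container,
    # block until it completes before starting the next one.
    with _in_flight_lock:
        previous = _in_flight.get(container_uuid)
    if previous is not None:
        previous.join()

    # Item 2: if no worker thread is free, fall back to synchronous
    # execution in the RPC context - the rate-limiting / exhaustion path.
    if not _free_workers.acquire(blocking=False):
        operation(*args)
        return

    def _run():
        try:
            operation(*args)
        finally:
            with _in_flight_lock:
                _in_flight.pop(container_uuid, None)
            _free_workers.release()

    worker = threading.Thread(target=_run)
    with _in_flight_lock:
        _in_flight[container_uuid] = worker
    worker.start()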

Other considerations -
1. Using eventlet.greenthread instead of real threads => This approach
would require further refactoring of the execution code to embed yield
logic; otherwise a single greenthread could block the others from
progressing. Given that we will extend the mechanism to multiple COEs,
and to keep the approach straightforward to begin with, we will use
'threading.Thread' instead of 'eventlet.greenthread'.


I'm also unsure about the above; I don't quite see how greenthread
usage requires more yield logic (I'm assuming you mean the yield
statement here). Btw, if magnum is running with all things monkey
patched (which it seems like
https://github.com/openstack/magnum/blob/master/magnum/common/rpc_service.py#L33
does), then magnum's usage of 'threading.Thread' is an
'eventlet.greenthread' underneath the covers, just fyi.

SURO> Let's consider this -

def A():
    B()  # validation
    C()  # blocking op

Now, if we make C a greenthread as it is, would it not block the entire
OS thread that runs through all the greenthreads? I assumed it would,
and that's why we would have to incorporate finer-grained yields into C
to leverage greenthreads. If the answer is no, then we can use
greenthreads.
I will validate which version of threading.Thread is getting used.

Unsure how to answer this one.

If all things are monkey patched, then any time a blocking operation (I/O, lock acquisition, ...) is triggered, the internals of eventlet go through a bunch of jumping around to switch to another greenthread (http://eventlet.net/doc/hubs.html). Once you start partially using greenthreads and mixing in real threads, you have to start reasoning about yielding in certain places (and at that point you might as well go to py3.4+, since it has syntax made just for this kind of thinking).
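A tiny standalone sketch (not Magnum code) of that hub switching: two greenthreads interleave on a single OS thread because the monkey-patched time.sleep is a switch point.

import eventlet
eventlet.monkey_patch()

import time


def worker(name):
    for i in range(3):
        print('%s %d' % (name, i))
        time.sleep(0.1)   # patched: a blocking call, so the hub switches to the other worker


a = eventlet.spawn(worker, 'a')
b = eventlet.spawn(worker, 'b')
a.wait()
b.wait()
# The output interleaves 'a' and 'b' lines even though there is only one
# OS thread - each blocking call hands control back to the hub.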

Pointer for the thread monkey patching btw:

https://github.com/eventlet/eventlet/blob/master/eventlet/patcher.py#L346

https://github.com/eventlet/eventlet/blob/master/eventlet/patcher.py#L212

Easy way to see this:

>>> import eventlet
>>> eventlet.monkey_patch()
>>> import thread
>>> thread.start_new_thread.__module__
'eventlet.green.thread'
>>> thread.allocate_lock.__module__
'eventlet.green.thread'


In that case, keeping the code on threading.Thread is portable, as it
would work as desired even if we remove monkey patching, right?

Yes, use `threading.Thread` (if you can) so that maybe magnum could switch off monkey patching someday. Although typically, unless you are already testing with it turned off in unit/functional tests, it won't be an easy flip that will 'just work' (especially since afaik magnum is using some oslo libraries which only work under greenthreads/eventlet).


Refs:-
[1] -
https://blueprints.launchpad.net/magnum/+spec/async-container-operations

