Hongbin,
Very useful pointers! Thanks for bringing up the relevant context!

The proposal to block here, for consecutive operations on the same container, is the approach to start with. We can follow up with a wait-queue implementation - that way the cost of the approach is amortized over time. If you feel strongly, I am okay with implementing the wait queue on the first go itself.
[ I felt a step-by-step approach keeps each change a reviewable size ]
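For illustration, a minimal sketch of the wait-queue variant; every
name here is hypothetical, not Magnum code. Operations on the same
container are drained in FIFO order by a per-container worker, so
callers enqueue instead of blocking:

# Illustrative sketch of the wait-queue follow-up - all names are
# hypothetical.
import queue
import threading

class ContainerWaitQueues:
    def __init__(self):
        self._queues = {}             # container uuid -> queue.Queue
        self._lock = threading.Lock()

    def submit(self, container_uuid, operation):
        """Enqueue instead of blocking the caller (FIFO per container)."""
        with self._lock:
            q = self._queues.get(container_uuid)
            if q is None:
                q = queue.Queue()
                self._queues[container_uuid] = q
                worker = threading.Thread(target=self._drain, args=(q,))
                worker.daemon = True
                worker.start()
            q.put(operation)

    def _drain(self, q):
        # Runs create/start/delete for one container strictly in order.
        # A real implementation would retire idle workers and bound the
        # queue depth.
        while True:
            op = q.get()
            try:
                op()
            finally:
                q.task_done()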

By the way, I think the scope of the bay lock and the scope of a per-bay-per-container operation lock are different too, in terms of what they block.

I do have a question about the non-blocking bay operations for horizontal scale [1] - "Heat will be having concurrency support, so we can rely on heat for the concurrency issue for now and drop the baylock implementation." If a user issues two consecutive updates on a bay, and the updates go through different magnum-conductors, they can arrive at Heat in a different order, resulting in a different final state of the bay. I am not clear on how Heat's concurrency support would prevent that. [ Take the example of 'magnum bay-update k8sbay replace node_count=100' followed by 'magnum bay-update k8sbay replace node_count=10' ]


[1] - https://etherpad.openstack.org/p/liberty-work-magnum-horizontal-scale (Line 33)

Regards,
SURO
irc//freenode: suro-patz

On 12/17/15 8:10 AM, Hongbin Lu wrote:
Suro,

FYI. Previously, we tried a distributed lock implementation for bay operations 
(here are the patches [1,2,3,4,5]). However, after several discussions online 
and offline, we decided to drop the blocking implementation for bay operations 
in favor of a non-blocking implementation (which is not implemented yet). You 
can find more discussion here [6,7].

For the async container operations, I would suggest considering a non-blocking 
approach first. If that is impossible and we need a blocking implementation, I 
suggest using the bay-operations patches below as a reference.

[1] https://review.openstack.org/#/c/171921/
[2] https://review.openstack.org/#/c/172603/
[3] https://review.openstack.org/#/c/172772/
[4] https://review.openstack.org/#/c/172773/
[5] https://review.openstack.org/#/c/172774/
[6] https://blueprints.launchpad.net/magnum/+spec/horizontal-scale
[7] https://etherpad.openstack.org/p/liberty-work-magnum-horizontal-scale

Best regards,
Hongbin

-----Original Message-----
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: December-16-15 10:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: s...@yahoo-inc.com
Subject: Re: [openstack-dev] [magnum] Magnum conductor async container operations


On Dec 16, 2015, at 6:24 PM, Joshua Harlow <harlo...@fastmail.com> wrote:

SURO wrote:
Hi all,
Please review and provide feedback on the following design proposal
for implementing the blueprint[1] on async-container-operations -

1. Magnum-conductor would have a pool of threads for executing the
container operations, viz. executor_threadpool. The size of the
executor_threadpool will be configurable. [Phase0]
2. Every time Magnum-conductor (Mcon) receives a container-operation
request from Magnum-API (Mapi), it will do the initial validation and
housekeeping, and then pick a thread from the executor_threadpool to
execute the rest of the operation. Thus Mcon will return from the RPC
request context much faster, without blocking Mapi. If the
executor_threadpool is exhausted, Mcon will execute the operation as
it does today, i.e. synchronously - this will be the rate-limiting
mechanism, relaying the feedback of exhaustion (see the sketch after
this list). [Phase0]
How often we hit this scenario may indicate to the operator that more
Mcon workers should be created.
3. Blocking class of operations - there will be a class of operations
which cannot be made async, as they are supposed to return
result/content inline, e.g. 'container-logs'. [Phase0]
4. Out-of-order considerations for the non-blocking class of
operations - there is a possible race condition for a create followed
by a start/delete of a container, as things would happen in parallel.
To solve this, we will maintain a map from each container to the
thread currently executing an operation on it. If we find a request
for an operation on a container-in-execution, we will block until that
thread completes the execution. [Phase0]
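For illustration only, a minimal Python sketch of items 2 and 4
together; the class and method names are hypothetical, not Magnum code:

# Illustrative sketch of items 2 and 4 - names are hypothetical.
import threading
from collections import defaultdict
from concurrent import futures

class Conductor:
    def __init__(self, pool_size):
        # Pool size would come from the config option in item 1.
        self._pool = futures.ThreadPoolExecutor(max_workers=pool_size)
        # Non-blocking acquire tells us whether a pool thread is free,
        # instead of silently queueing behind the pool (item 2).
        self._slots = threading.Semaphore(pool_size)
        # container uuid -> lock held by the executing thread (item 4).
        # This only works within one process; the same-conductor
        # prerequisite below covers the multi-conductor case.
        self._busy = defaultdict(threading.Lock)

    def handle_request(self, container_uuid, operation):
        """Called from the RPC context after validation/housekeeping."""
        if self._slots.acquire(blocking=False):
            self._pool.submit(self._run, container_uuid, operation)
            return  # RPC caller gets control back immediately
        # Pool exhausted: fall back to today's synchronous path,
        # rate-limiting the caller and signalling exhaustion.
        self._serialized(container_uuid, operation)

    def _run(self, container_uuid, operation):
        try:
            self._serialized(container_uuid, operation)
        finally:
            self._slots.release()

    def _serialized(self, container_uuid, operation):
        # Blocks if another thread is mid-operation on this container,
        # e.g. a 'start' arriving while 'create' is still running.
        with self._busy[container_uuid]:
            operation()

How entries would be evicted from the map over time is a detail the
sketch glosses over.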
Does whatever performs these operations (Mcon?) run in more than one process?
Yes, there may be multiple copies of magnum-conductor running on separate hosts.

Can it be requested to create in one process and then delete in another?
If so, is that map some distributed/cross-machine/cross-process map
that will be inspected to see what else is manipulating a given
container (so that the thread can block until that is no longer the
case... basically the map is acting like an operation-lock?)
That’s how I interpreted it as well. This is a race-prevention technique so 
that we don’t attempt to act on a resource until it is ready. Another way to 
deal with this is to check the state of the resource, and return a “not ready” 
error if it’s not ready yet. If this happens in a part of the system that is 
unattended by a user, we can re-queue the call to retry after a minimum delay, 
so that it proceeds only once the resource reaches the ready state; the retry 
is terminated after a maximum number of attempts, or if the resource enters an 
error state. This would allow other work to proceed while the retry waits in 
the queue.
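For illustration, a minimal sketch of that check-and-requeue approach;
refresh_state(), the state strings, and the constants are hypothetical
stand-ins, not an existing API:

# Illustrative sketch of check-and-requeue - names are hypothetical.
import time

MIN_DELAY = 2        # minimum delay between attempts, in seconds
MAX_ATTEMPTS = 30    # give up after this many tries

def run_when_ready(resource, operation):
    for _ in range(MAX_ATTEMPTS):
        state = resource.refresh_state()
        if state == 'ERROR':
            raise RuntimeError('resource entered an error state')
        if state == 'READY':
            return operation(resource)
        # A real implementation would put the call back on the task
        # queue with this delay, freeing the thread for other work;
        # sleep() stands in for that here.
        time.sleep(MIN_DELAY)
    raise RuntimeError('gave up after %d attempts' % MAX_ATTEMPTS)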

If it's just local in one process, then I have a library for you that
can solve the problem of correctly ordering parallel operations ;)
What we are aiming for is a bit more distributed.

Adrian

This mechanism can be further refined to achieve more asynchronous
behavior. [Phase2] The approach above carries the prerequisite that
all operations for a given container on a given Bay go to the same
Magnum-conductor instance. [Phase0]
5. The hand-off between Mcon and a thread from the executor_threadpool
can be reflected through new states on the 'container' object. These
states can help with recovery/audit in case of an Mcon restart.
[Phase1]
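For illustration, a sketch of what such states and the restart
recovery could look like; the state names and the helpers
(redispatch, mark_for_audit, db.list_containers) are hypothetical:

# Illustrative sketch - state names and helpers are hypothetical.
OP_QUEUED = 'OPERATION_QUEUED'            # accepted, not yet picked up
OP_IN_PROGRESS = 'OPERATION_IN_PROGRESS'  # a pool thread is executing

def recover_after_restart(db):
    # On Mcon startup, re-dispatch work that never started and flag
    # work whose outcome is unknown for audit.
    for container in db.list_containers(state=OP_QUEUED):
        redispatch(container)
    for container in db.list_containers(state=OP_IN_PROGRESS):
        mark_for_audit(container)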

Other considerations -
1. Using eventlet.greenthread instead of real threads => this approach
would require further refactoring of the execution code to embed yield
logic; otherwise a single greenthread would block others from
progressing. Given that we will extend the mechanism to multiple COEs,
and to keep the approach straightforward to begin with, we will use
'threading.Thread' instead of 'eventlet.greenthread'.
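For illustration, the kind of explicit yield point the greenthread
approach would force into the execution code;
do_container_work_chunk() is a hypothetical stand-in for a unit of
real work:

# Illustrative only: long-running execution code would need explicit
# yields like this, or one greenthread starves the rest.
import eventlet

def execute_operation(chunks):
    for chunk in chunks:
        do_container_work_chunk(chunk)  # hypothetical unit of work
        eventlet.sleep(0)               # cooperative yield to peers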


Refs:-
[1] - https://blueprints.launchpad.net/magnum/+spec/async-container-operations
