Excerpts from Victor Stinner's message of 2015-02-25 02:12:05 -0800:
> Hi,
> > I also just put up another proposal to consider:
> > https://review.openstack.org/#/c/156711/
> > """Sew over eventlet + patching with threads"""
> My asyncio spec is unclear about WSGI, I just wrote
> "The spec doesn't change OpenStack components running WSGI servers
> like nova-api. The specific problem of using asyncio with WSGI will
> need a separate spec."
> Joshua's threads spec proposes:
> "I would prefer to let applications such as apache or others handle
> the request as they see fit and just make sure that our applications
> provide wsgi entrypoints that are stateless and can be horizontally
> scaled as needed (aka remove all eventlet and thread ... semantics
> and usage from these entrypoints entirely)."
> Keystone wants to do the same:
> https://review.openstack.org/#/c/157495/
> "Deprecate Eventlet Deployment in favor of wsgi containers
> This deprecates Eventlet support in documentation and on invocation
> of keystone-all."
> I agree: we don't need concurrency in the code handling a single HTTP 
> request: use blocking function calls. You should rely on highly efficient 
> HTTP servers like Apache, nginx, werkzeug, etc. There is a lot of choice, 
> just pick your favorite server ;-) Each HTTP request is handled in a thread. 
> You can run N processes, each running M threads. It's a common and 
> efficient architecture design.
> For database accesses, just use regular blocking calls (no need to modify 
> SQLAlchemy). According to Mike Bayer's benchmark (*), it's even the fastest 
> method if your code is database intensive. You may share a pool of database 
> connections between the threads, but each connection should only be used by 
> a single thread at a time.
> (*) http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/
> I don't think that we need a spec if everybody already agrees on the design :-)
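For reference, the thread-per-request model Victor describes boils down to something like the sketch below. This is plain stdlib Python, not OpenStack or mod_wsgi code: wsgiref's server is mixed with ThreadingMixIn so each HTTP request runs in its own thread, and the WSGI app itself is ordinary blocking code.

```python
import threading
import urllib.request
from socketserver import ThreadingMixIn
from wsgiref.simple_server import WSGIServer, make_server


class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    """Handle each HTTP request in its own thread."""
    daemon_threads = True


def app(environ, start_response):
    # Plain blocking code: no eventlet, no monkey patching, no callbacks.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    name = threading.current_thread().name
    return [('hello from thread %s' % name).encode()]


if __name__ == '__main__':
    # Port 0 asks the OS for a free port; server_port reports it.
    server = make_server('127.0.0.1', 0, app,
                         server_class=ThreadingWSGIServer)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = 'http://127.0.0.1:%d/' % server.server_port
    print(urllib.request.urlopen(url).read().decode())
    server.shutdown()
```

In production you'd put this behind Apache/mod_wsgi or nginx+uwsgi and scale out to N such processes, but the shape of the application code is the same.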
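And the connection-pooling rule (share the pool, never the connection) can be sketched like this. This is a toy queue-based pool over sqlite3, not SQLAlchemy's actual pool implementation; the point is only that each connection is checked out by exactly one thread at a time.

```python
import queue
import sqlite3
import threading
from contextlib import contextmanager

POOL_SIZE = 4
_pool = queue.Queue()
for _ in range(POOL_SIZE):
    # check_same_thread=False because the pool, not sqlite3, enforces
    # that a connection is never used by two threads concurrently.
    _pool.put(sqlite3.connect(':memory:', check_same_thread=False))


@contextmanager
def get_connection():
    conn = _pool.get()       # blocks until a connection is free
    try:
        yield conn
    finally:
        _pool.put(conn)      # hand it back for the next thread


def worker(results):
    with get_connection() as conn:
        # Plain blocking query; no async machinery needed.
        results.append(conn.execute('SELECT 1 + 1').fetchone()[0])


results = []
threads = [threading.Thread(target=worker, args=(results,))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # eight rows of 2
```

With more workers than connections, the extra threads simply block on `_pool.get()` until a connection is returned, which is exactly the back-pressure you want.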


This leaves a few pieces of Python which don't operate via HTTP
requests. There are likely more, but these come to mind:

* Nova conductor
* Nova scheduler/Gantt
* Nova compute
* Neutron agents
* Heat engine

I don't have a good answer for them, but my gut says none of these
gets as crazy with concurrency as the API services, which have to talk
to all the clients with their terrible TCP stacks and awful network
connectivity. The services above only ever talk on local buses, and
thus can likely just stay on eventlet, or use a multiprocessing model to
take advantage of local CPUs too. I know that for Heat's engine, we saw
quite an improvement in performance just by running multiple engines.

OpenStack Development Mailing List (not for usage questions)