I wrote a second version of my cross-project specification "Replace eventlet with
asyncio". It's now open for review:
I copied it below if you prefer to read it and/or comment on it by email.
Sorry, I'm not sure that the spec will be correctly formatted in this email.
Use the URL if that's not the case.
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
Replace eventlet with asyncio
This specification proposes to replace eventlet (implicit async programming)
with asyncio (explicit async programming). It should fix eventlet issues,
prepare OpenStack for the future (asyncio is now part of the Python language)
and may improve overall OpenStack performance. It also makes the usage of
native threads simpler and more natural.
Even if the title contains "asyncio", the spec proposes to use trollius. The
name asyncio is used in the spec because it is better known than trollius,
and because trollius is almost the same thing as asyncio.
The spec doesn't change OpenStack components running WSGI servers like
nova-api. Compatibility issues between WSGI and asyncio should be solved
first.
The spec is focused on Oslo Messaging and Ceilometer projects. More OpenStack
components may be modified later if the Ceilometer port to asyncio is
successful. Ceilometer will be used to find and solve technical issues with
asyncio, so the same solutions can be used on other OpenStack components.
Note: Since Trollius will be used, this spec is unrelated to Python 3. See the
`OpenStack Python 3 wiki page<https://wiki.openstack.org/wiki/Python3>`_ to
get the status of the port.
OpenStack components are designed to "scale". There are different options
to support a lot of concurrent requests: implicit asynchronous programming,
explicit asynchronous programming, threads, processes, and combinations of
these options.
In the past, the Nova project used Tornado, then Twisted, and it is now using
eventlet, which also became the de facto standard in OpenStack. The rationale
to switch from Twisted to eventlet in Nova can be found in the old `eventlet vs
Eventlet issues
This section only gives some examples of eventlet issues. There are more
eventlet issues, but tricky issues are not widely discussed and so not well
known. The most interesting issues are caused by the design of eventlet,
especially the monkey-patching of the Python standard library.
Eventlet itself is not really evil. Most issues come from the monkey-patching.
The problem is that eventlet is almost always used with monkey-patching in
practice.
The implementation of the monkey-patching is fragile. It's easy to forget to
patch a function or have issues when the standard library is modified. The
eventlet port to Python 3 showed how the patcher highly depends on the standard
library. A recent eventlet change (v0.16) "turns off __builtin__ monkey
patching by default" to fix a tricky race condition: see `eventlet recursion
error after RPC timeout
<https://bugs.launchpad.net/oslo.messaging/+bug/1369999>`_ and `Second
simultaneous read on fileno can be raised on a closed socket #94
<https://github.com/eventlet/eventlet/issues/94>`_ issues. Modules implemented
in C cannot be fully monkey-patched. Recent example: the `Fix
threading.Condition with monkey-patching on Python 3.3 and newer #187
<https://github.com/eventlet/eventlet/pull/187>`_ change forces the use of the
Python implementation of ``threading.RLock``, because the C implementation
doesn't use the monkey-patched ``threading.get_ident()`` function to get the
thread identifier, but directly a C function.
Depending on the import order, modules may or may not be monkey-patched. It's
a common trap with eventlet. Monkey-patching also makes writing unit tests
harder.
Some libraries must be modified to support eventlet monkey-patching, for
example because they have to use the original modules rather than the patched
ones. Some patched functions behave differently from the original ones, which
causes issues in applications using them. Example of an OpenStack issue
reported to the qpid mailing list, `QPID and eventlet.monkey_patch()
"The lock-up occurs because select() returns that the pipe is ready to be read
from before anything has been written to the pipe".
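The import-order trap can be made concrete with a small sketch. It uses a toy
patch of ``time.sleep`` instead of eventlet itself (so it runs without
eventlet installed), but the mechanism is the same one
``eventlet.monkey_patch()`` relies on: code that took a direct reference to a
function before the patch keeps the original, unpatched version.

```python
import time

calls = []

def patched_sleep(seconds):
    # Toy replacement: record the call instead of blocking, the way
    # eventlet's patched sleep yields to the hub instead of blocking.
    calls.append(seconds)

# Simulate "from time import sleep" executed BEFORE monkey-patching:
early_ref = time.sleep

# Monkey-patch, as eventlet.monkey_patch() does for the whole stdlib:
original_sleep, time.sleep = time.sleep, patched_sleep

# Code looking up the module attribute sees the patched function...
time.sleep(0.01)
assert calls == [0.01]

# ...but the early direct reference still points at the original:
assert early_ref is original_sleep
assert early_ref is not patched_sleep

# Restore the original function.
time.sleep = original_sleep
```

This is why the order of ``monkey_patch()`` relative to other imports matters:
a module that bound a name before patching silently keeps the blocking
version.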
Since eventlet uses threads, "green" threads, concurrent code must be
carefully written to avoid race conditions. The section `Explicit async versus
implicit async programming`_ below explains this problem.
See also drawbacks in the `Eventlet`_ section.
Explicit async versus implicit async programming
Implicit asynchronous code causes a new kind of race condition that is
difficult to understand and to fix. It is hard to guess where the scheduler
may switch tasks. The source code of a function must be read carefully to
check whether it may yield control to another coroutine. A function of an
external module may also be modified later to use a blocking function.
When asyncio coroutines access data shared with other coroutines, it is
possible to avoid locks in most cases.
Read the "Ca(sh|che Coherent) Money" section of the `Unyielding
<https://glyph.twistedmatrix.com/2014/02/unyielding.html>`_ article (Glyph,
February 2014): it explains how a simple log (call to a ``log()``
function) can introduce subtle race conditions, and how explicit
asynchronous programming reduces the risk of introducing bugs. With eventlet,
if log() becomes asynchronous, nothing reminds you to take care of race
conditions during the review of the change. With asyncio, you must add
"yield from" before log(): a nice reminder saying "hey, be careful: your
code becomes asynchronous here."
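A minimal sketch of this reminder effect, using the modern ``async``/``await``
syntax for brevity (the spec's trollius code would use ``yield From(...)``
instead): if ``log()`` becomes asynchronous, every caller must gain an
explicit suspension marker, which shows up in review.

```python
import asyncio

async def log(msg):
    # Suppose log() became asynchronous (e.g. it now sends the record over
    # the network); awaiting sleep(0) stands in for that suspension point.
    await asyncio.sleep(0)
    return msg

async def handler():
    # The explicit "await" is the reminder: the task may be suspended here,
    # so any shared state must be consistent at this point.
    return await log("request handled")

print(asyncio.run(handler()))  # request handled
```

With eventlet, ``handler()`` would look unchanged after ``log()`` became
asynchronous, and nothing in the diff would flag the new suspension point.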
* Michael Bayer disagrees with Glyph's post. Threads are used virtually
everywhere in software, for decades. They aren't perfect but they are
certainly not as awful as Glyph describes.
See also general articles about asynchronous programming:
* `Async I/O and Python
<http://blogs.gnome.org/markmc/2013/06/04/async-io-and-python/>`_ by Mark
McLoughlin (June 2013)
* `Some Thoughts on Asynchronous Programming
by Nick Coghlan: A good rundown of the general problem
Asyncio and Trollius
asyncio is a new module introduced in the standard library of Python 3.4
(March 2014). It was designed (`PEP 3156
<https://www.python.org/dev/peps/pep-3156/>`_) to be compatible with existing
frameworks like Twisted or Tornado. The main difference with Twisted is that
coroutines are first-class citizens.
* Explicit asynchronous programming reduces the risk of race conditions: the
  developer can easily identify where a function (coroutine) can be
  interrupted.
* asyncio is maintained by the Python project and relies on existing modules
  of the standard library like select, selectors or concurrent.futures. No
  need for assembler code (as in greenlet).
* asyncio has a good design
* asyncio requires Python 3.3 or newer, while OpenStack must support Python
2.7. This issue was solved with the Trollius project, see below.
* asyncio is young and may have more bugs than other projects. However the
project is actively developed and its community is active.
Trollius has a large test suite of 883 tests. It is tested on Linux, Windows,
Mac OS X, FreeBSD, etc. asyncio is tested in a continuous integration
infrastructure by buildbots on even more operating systems and various
architectures. At the beginning of February 2015, there were 47 open "issues"
(bugs, enhancements, feature requests, etc.) and 262 closed issues in the
Python and Tulip issue trackers. Only one open issue is a bug: `Cancelling
wait() after notification leaves Condition in an inconsistent state
<http://bugs.python.org/issue22970>`_, but ``asyncio.Condition`` is not widely
used.
Trollius is released more often than Python: each time a major bug is fixed
or a new cool feature is added. See the `Trollius changelog
See also `Oslo/blueprints/asyncio
First part (done): add support for trollius coroutines
Prepare OpenStack (Oslo Messaging) to support trollius coroutines using
``yield``: explicit asynchronous programming. Eventlet is still supported and
used by default, and applications and libraries don't need to be modified at
all.
* Write the trollius project: port asyncio to Python 2
* Stabilize trollius API
* Add trollius dependency to OpenStack
* Write the aioeventlet project to provide the asyncio API on top of eventlet
* Stabilize aioeventlet API
* Add aioeventlet dependency to OpenStack
* Write an aioeventlet executor for Oslo Messaging: code written, change
approved, but not merged yet (need aioeventlet dependency)
Second part (to do): rewrite code as trollius coroutines
Switch from implicit asynchronous programming (eventlet using greenthreads) to
explicit asynchronous programming (trollius coroutines using ``yield``). Need
to modify OpenStack Common Libraries and applications. Modifications can be
done step by step, the switch will take more than 6 months.
The first application candidate is Ceilometer. The Ceilometer project is
young, its developers are aware of eventlet issues and like Python 3, and
Ceilometer doesn't rely heavily on asynchronous programming: most of the time
is spent waiting for I/O.
The goal is to port Ceilometer to explicit asynchronous programming during
the OpenStack L cycle.
Some applications may continue to use implicit asynchronous programming. For
example, nova is probably the most complex part because it is an old project
with a lot of legacy code, it has many drivers and its code base is large.
* Ceilometer: add the trollius dependency and set the trollius event loop
  policy to the aioeventlet policy
* Ceilometer: change Oslo Messaging executor from "eventlet" to "aioeventlet"
* Redesign the service class of Oslo Incubator to support aioeventlet and/or
  trollius. Currently, the class is designed for eventlet. The service class
  is instantiated before forking, which requires hacks on eventlet to update
  its internal state after the fork.
* In Ceilometer and its OpenStack dependencies: add new functions which
  are written with explicit asynchronous programming in mind (ex: trollius
  coroutines written with ``yield``). It doesn't make sense to port all Python
  libraries to asyncio. Only libraries which are part of a performance
  bottleneck and perform I/O operations may be ported to asyncio. There is
  always the option of running blocking operations with
  ``loop.run_in_executor()`` to use a pool of threads.
* Rewrite Ceilometer endpoints (RPC methods) as trollius coroutines.
* Add a new storage implementation compatible with asyncio, maybe using one
  of the asyncio database drivers listed below.
* The quantity of code which needs to be ported to asynchronous programming
  is unknown right now.
* We should be prepared to see deadlocks. OpenStack was designed for
  eventlet, which implicitly switches on blocking operations. Critical
  sections may not be protected with locks, or not with the right kind of
  lock.
* For performance, blocking operations can be executed in threads. OpenStack
  code is probably not thread-safe, which means new kinds of race conditions.
  But the code executed in threads will be explicitly scheduled to run in a
  thread (with ``loop.run_in_executor()``), so regressions can be easily
  identified.
* This part will take a lot of time. We may need to split it into subparts
to have milestones, which is more attractive for developers.
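The ``loop.run_in_executor()`` escape hatch mentioned above can be sketched
as follows; ``blocking_query()`` is a hypothetical stand-in for any
synchronous call (a database query, a blocking socket operation):

```python
import asyncio
import time

def blocking_query():
    # Hypothetical blocking operation (e.g. a synchronous database call).
    time.sleep(0.01)
    return 42

async def main():
    loop = asyncio.get_running_loop()
    # Run the blocking call in the default thread pool so the event loop
    # keeps serving other tasks in the meantime.
    return await loop.run_in_executor(None, blocking_query)

print(asyncio.run(main()))  # 42
```

Because the thread hand-off is explicit, any new thread-safety issue can be
traced back to the functions passed to the executor.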
Last part (to do): drop eventlet
Replace the aioeventlet event loop with the trollius event loop, then drop
aioeventlet and finally drop eventlet.
This change will be done on applications one by one. There is no need to port
all applications at once. The work will start on Ceilometer, as a follow-up
of the second part.
* Port remaining code to trollius
* Write a "trollius" executor for Oslo Messaging
* Ceilometer: Add a blocking call to ``loop.run_forever()`` in the ``main()``
  function
* Ceilometer: Replace "aioeventlet" executor with "trollius" executor
* Ceilometer: Use the standard trollius event loop policy
* Service class: launcher.wait() must now call ``loop.run_forever()``
* Ceilometer: drop the eventlet dependency
Optimizations that can be done later:
* Oslo Messaging: watch directly the underlying file descriptor of sockets,
instead of using a busy loop polling the notifier
* Ceilometer: use libraries supporting directly trollius to be able to run
parallel tasks (ex: send multiple requests to a database)
Later: replace trollius with asyncio
When a project is fully Python 3 compatible and OpenStack is ready to drop
Python 2 support, it will be possible to replace trollius with asyncio.
Trollius has been designed to make it easy to convert a code base from
trollius to asyncio. For example, ``From(obj)`` just returns ``obj``: it's a
no-op, there only so the ``yield From(...)`` pattern can be replaced with
``yield from ...``.
Before that, it may be possible to import asyncio instead of trollius, since
the API is the same, except for the syntax of coroutines.
Use directly asyncio, not trollius
Trollius is just a "temporary" solution until the OpenStack port to Python 3
completes and OpenStack decides to drop Python 2 support.
Libraries are only starting to support asyncio, and supporting trollius may
require extra effort (the effort is not well quantified right now; it may be
easier than expected).
An alternative is to directly replace eventlet with asyncio, without a
temporary step using trollius.
The first requirement is to have an application fully Python 3 compatible.
Unfortunately, eventlet is not yet fully Python 3 compatible. Before eventlet
0.15, released in July 2014, it was not possible to install eventlet on
Python 3, so eventlet was a blocker for porting OpenStack components to
Python 3. The eventlet port to Python 3 is now almost done, see the
`Eventlet`_ section below.
The second requirement is to drop Python 2 support in the application. It may
not be acceptable right now for the whole of OpenStack, but it might be
acceptable for some specific applications. RHEL 7 requires SCL (Software
Collections) to get Python 3.3. Debian Wheezy (the latest Debian stable) only
provides Python 3.2, which lacks support for the ``yield from`` syntax
required by asyncio.
See the `OpenStack Python 3 wiki page
<https://wiki.openstack.org/wiki/Python3>`_ to get the status of the port.
Eventlet
Keep eventlet, which is already used.
eventlet is based on greenlet, which allows interrupting any function and
restarting it later, like coroutines but implicitly.
* Code just looks sequential, no need for extra effort to write async code.
* Almost all OpenStack components are already using eventlet.
* See the `Eventlet issues`_ section
* The developer cannot know where a function will be interrupted. Basically,
  it can be interrupted anywhere. Writing code without race conditions
  requires deep knowledge of how Python and eventlet are implemented.
* Race conditions are unlikely and so are usually only seen in production.
  They are hard to reproduce. More generally, eventlet is not reliable.
* Not compatible with Python 3. This issue is almost fixed: eventlet 0.16
  mostly works on Python 3 with monkey-patching. Example of a remaining
  issue: `Fix threading.Condition with monkey-patching on Python 3.3 and newer
* In January 2015, Eventlet still doesn't support IPv6:
  `IPv6 support #8 <https://github.com/eventlet/eventlet/issues/8>`_
(issue open since January 2013)
See also the `Eventlet Best Practices
Threads
* Native threads: implemented in the kernel
* No requirement for non-blocking sockets or asynchronous functions: just use
  any kind of blocking function.
* The code is sequential, no callback hell
* Programming with threads (native threads or green threads) is hard because
  the code can be interrupted anywhere. It is harder to write "thread-safe"
  code (ex: protecting shared data with locks) than to write asyncio code.
* CPython (the reference Python implementation) has a Global Interpreter Lock
  ("GIL") which reduces the performance of native threads. Only one Python
  instruction can be executed at a time. The GIL is released for I/O
  operations. PyPy has an STM project to drop it, but this project is
  experimental and not yet faster than CPython with the GIL.
The `C10K problem <http://www.kegel.com/c10k.html>`_ showed that asynchronous
event-driven programming is more efficient than threads at handling
concurrent requests, at least for web servers.
The `Unyielding`_ article quoted above explains how bad threads are and why
they should be avoided: not only native threads, but also implicit coroutines
like the green threads of eventlet.
* Michael Bayer added: "Improving upon GIL has nothing to do with async
programming. Both the GIL, and explicit async, squeeze all operations through
a single CPU serially. The GIL does not block on IO. There are no performance
gains to be had in this regard by async."
asyncio itself uses native threads. For example, by default, resolving a
hostname calls the ``getaddrinfo()`` function, which is blocking: the
function is executed in a thread pool using ``loop.run_in_executor()``.
Asynchronous DNS resolvers are available to avoid threads when resolving
hostnames. More generally, any blocking function can be executed with
``loop.run_in_executor()`` in asyncio so as not to block the event loop.
David Beazley identified performance issues related to the GIL: see
`Understanding the Python GIL <http://www.dabeaz.com/GIL/>`_. Splitting a
CPU-bound task into multiple threads may slow the task down instead of making
it faster, just because of the GIL. CPU-bound code is the worst case for the
GIL. The GIL was rewritten in Python 3.2 to enhance performance; the
optimization will not be backported to Python 2.7.
Data model impact
REST API impact
Other end user impact
The performance impact of rewriting code as trollius coroutines is unknown
yet. If there is an overhead to using coroutines, it is expected to be low.
We can expect better performance with fully asynchronous clients. See for
example the `API Hour benchmark
which compares synchronous and asynchronous code for DB requests (using
aiopg) and JSON serialization in a web server: the asynchronous code can
handle many more client requests in 30 seconds.
* Michael Bayer is concerned about the performance overhead of coroutines. In
  a microbenchmark, consuming a generator takes 951 ns whereas calling a
  function takes 228 ns: consuming a generator is 4.2x slower than calling a
  function. Basically, the microbenchmark measures the time to raise an
  exception (``StopIteration``) and then catch it: 723 nanoseconds. A
  database query typically takes 50 ms, so this overhead is negligible in
  comparison.
Other deployer impact
To write efficient code, developers have to learn how to write asyncio code,
but only for the functions which must be asynchronous.
Only projects which chose to use asyncio will have to be modified. Other
projects are free to continue to use eventlet.
Recently merged changes:
* `Add aioeventlet dependency <https://review.openstack.org/#/c/138750/>`_
* `Add a new aioeventlet executor <https://review.openstack.org/#/c/136653/>`_:
The implementation requires a new dependency: the ``aioeventlet`` module. It is
already added to global requirements.
The ``trollius`` module was already added to global requirements.
Comparison of eventlet and asyncio code
Call a function::

    # eventlet
    eventlet.spawn(func, arg)

    # asyncio
    loop.call_soon(func, arg)

Schedule a function in 10 seconds::

    # eventlet
    eventlet.spawn_after(10, func, arg)

    # asyncio
    loop.call_later(10, func, arg)

Asynchronous task::

    # eventlet
    def async_multiply(arg):
        # interrupt the execution of the current greenthread
        eventlet.sleep(1.0)
        return arg * 2

    gt = eventlet.spawn(async_multiply, 5)
    # block the current greenthread
    result = gt.wait()
    print("5 * 2 = %s" % result)

    # asyncio
    @asyncio.coroutine
    def async_multiply(arg):
        # interrupt the execution of the current task
        yield from asyncio.sleep(1.0)
        return arg * 2

    # block the current coroutine
    result = yield from async_multiply(5)
    print("5 * 2 = %s" % result)
Trollius: asyncio port to Python 2
asyncio requires Python 3.3 or newer. asyncio was ported to Python 2 in a new
project called `trollius <http://trollius.readthedocs.org/>`_. Changes made
in asyncio are merged into trollius: trollius is a branch of the Mercurial
repository of Tulip (asyncio upstream).
The major difference between Trollius and Tulip is the syntax of coroutines:
==================  ======================
Tulip               Trollius
==================  ======================
``yield from ...``  ``yield From(...)``
``yield from``      ``yield From(None)``
``return``          ``raise Return()``
``return x``        ``raise Return(x)``
``return x, y``     ``raise Return(x, y)``
==================  ======================
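The ``raise Return(x)`` form exists because Python 2 generators cannot use
``return`` with a value. A toy sketch (not trollius' actual implementation)
of how a scheduler can recover the result from such a coroutine:

```python
class Return(Exception):
    """Toy stand-in for trollius' Return: carries a coroutine's result."""
    def __init__(self, value=None):
        Exception.__init__(self)
        self.value = value

def From(obj):
    # In real trollius, From() marks a sub-coroutine; with asyncio it would
    # be a no-op, which is what makes the later migration easy.
    return obj

def multiply(arg):
    yield From(None)       # cooperative yield point
    raise Return(arg * 2)  # plays the role of "return arg * 2" in Tulip

def run(coro):
    # Minimal trampoline: drive the generator, catch Return for the result.
    try:
        for _ in coro:
            pass
    except Return as exc:
        return exc.value

print(run(multiply(5)))  # 10
```

The real trollius scheduler does the equivalent catch internally, so a
trollius coroutine behaves like a Tulip coroutine returning a value.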
It is possible to write code working with trollius and asyncio in the same
code base if coroutines are not used, but only callbacks and futures. Some
libraries already support both asyncio and trollius, like AutobahnPython
(WebSockets and WAMP), Pulsar and Tornado.
Another option is to provide functions returning ``Future`` objects, so the
caller can decide to use a callback with ``fut.add_done_callback(callback)``
or to use coroutines (``yield From(fut)`` for Trollius, or ``yield from fut``
for Tulip). This option is used by the `aiodns
<https://github.com/saghul/aiodns>`_ project, for example.
On Python 3.3 and newer, Trollius also supports asyncio coroutines: the
trollius module is compatible with asyncio, but the reverse is not true.
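A runnable sketch of the future-returning pattern, using modern asyncio
syntax; ``lookup()`` is a hypothetical helper in the style of aiodns, not its
real API:

```python
import asyncio

def lookup(name):
    # Hypothetical future-returning API: not a coroutine, so the caller can
    # choose between attaching a callback and awaiting the future.
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Simulate the result becoming available on a later loop iteration.
    loop.call_soon(fut.set_result, "127.0.0.1")
    return fut

async def main():
    fut = lookup("localhost")
    results = []
    # Callback style:
    fut.add_done_callback(lambda f: results.append(("callback", f.result())))
    # Coroutine style, on the same future:
    results.append(("await", await fut))
    return results

print(asyncio.run(main()))
```

Both consumers observe the same result, which is why a future-based API can
serve callback-style and coroutine-style callers from one code base.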
Trollius works on Python 2.6-3.5.
aioeventlet: asyncio API on top of eventlet
In OpenStack, eventlet cannot be replaced with asyncio in all projects in a
single commit. OpenStack development is done with small and limited changes.
To have a smooth transition, the `aioeventlet project
<http://aioeventlet.readthedocs.org/>`_ was written to support the asyncio
API on top of eventlet. It makes it possible to add support for asyncio
coroutines to an existing OpenStack component without having to immediately
replace its ``main()`` function with the blocking call
``loop.run_forever()``.
aioeventlet supports waiting for a task from a greenthread (``yield_future()``)
and waiting for a greenthread from a task (``wrap_greenthread()``).
asyncio database drivers
List of database drivers compatible with asyncio:
* MongoDB: `asyncio-mongo <https://bitbucket.org/mrdon/asyncio-mongo>`_
* MySQL: `aiomysql <https://github.com/aio-libs/aiomysql>`_ (based on PyMySQL)
* PostgreSQL: `aiopg <http://aiopg.readthedocs.org/>`_
* PostgreSQL: `psycotulip <https://github.com/fafhrd91/psycotulip>`_ (based on
* memcached: `aiomemcache <https://github.com/fafhrd91/aiomemcache>`_
* redis: `aioredis <http://aioredis.readthedocs.org/>`_
* redis: `asyncio-redis <http://asyncio-redis.readthedocs.org/>`_
The aiopg project includes a subset of SQLAlchemy which works with asyncio.
While most SQLAlchemy functions could be modified to support asyncio, the most
important problem is the lazy loading in the ORM. For example, ``user =
session.query(User).get(1)`` doesn't run any database query, but the following
``user.addresses`` instruction runs a query.
Until SQLAlchemy fully supports asyncio, explicit or implicit database
queries can be executed with ``loop.run_in_executor()`` in a thread pool, so
as not to block the asyncio event loop.
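This workaround can be sketched with toy stand-ins (``FakeUser`` and
``load_user_addresses()`` are hypothetical, not SQLAlchemy's API). The point
is to run the whole query-plus-lazy-load sequence inside the executor, so no
hidden query runs on the event loop thread:

```python
import asyncio

class FakeUser:
    """Toy ORM object whose attribute access triggers a lazy-load query."""
    @property
    def addresses(self):
        # In a real ORM this lazy load would run a blocking SQL query.
        return ["user@example.com"]

def load_user_addresses():
    user = FakeUser()      # stand-in for session.query(User).get(1)
    return user.addresses  # the lazy load also happens inside the thread

async def main():
    loop = asyncio.get_running_loop()
    # Execute the query AND its lazy loads in one executor call, keeping
    # all blocking work off the event loop.
    return await loop.run_in_executor(None, load_user_addresses)

print(asyncio.run(main()))  # ['user@example.com']
```

Wrapping only the initial query and touching lazy attributes afterwards
would still block the loop, which is why the whole access is bundled here.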
See also the `Performance Impact`_ section: Michael Bayer is concerned by the
performance overhead of coroutines. He has spent a lot of time optimizing
SQLAlchemy.
WSGI and HTTP servers
The WSGI protocol is synchronous and therefore incompatible with asyncio.
There are "hacks" to support asyncio coroutines with WSGI, like
monkey-patching.
With asyncio, there are other efficient ways to write an HTTP server without
WSGI: see the HTTP server included in aiohttp, for example. The problem is
that many OpenStack components rely on the WSGI protocol to support
middlewares. Replacing the WSGI protocol is not an option right now.
For these reasons, this spec doesn't concern OpenStack components running
WSGI servers. First, the WSGI protocol should be enhanced or replaced with
something that has native support for asynchronous programming.
* gunicorn: see ``gaiohttp`` worker
* `Pulsar: Asynchronous WSGI
* `uwsgi: uWSGI asynchronous/non-blocking modes
* `bottle: Greenlets to the rescue
aioeventlet project (3rd try, patch now merged!):
* December 3, 2014: two patches posted to requirements:
  `Add aioeventlet dependency <https://review.openstack.org/#/c/138750/>`_
  and `Drop greenio dependency <https://review.openstack.org/#/c/138748/>`_.
* November 23, 2014: two patches posted to Oslo Messaging:
  `Add a new aioeventlet executor <https://review.openstack.org/#/c/136653/>`_
and `Add an optional executor callback to dispatcher
* November 19, 2014: First release of the aioeventlet project
OpenStack Kilo Summit, November 3-7, 2014, at Paris:
* `Python 3 in Oslo <https://etherpad.openstack.org/p/kilo-oslo-python-3>`_:
* add a new greenio executor to Oslo Messaging
* port eventlet to Python 3 (with monkey-patching)
* `What should we do about oslo.messaging?
<https://etherpad.openstack.org/p/kilo-oslo-oslo.messaging>`_: add the new
* `Python 3.4 transition <https://etherpad.openstack.org/p/py34-transition>`_
greenio executor for Oslo Messaging (second try):
* July 29, 2014: Doug Hellmann proposed the blueprint
`A 'greenio' executor for oslo.messaging
approved by Mark McLoughlin.
* July 24, 2014: `Add greenio dependency <https://review.openstack.org/108637>`_
merged into openstack/requirements
* July 22, 2014: Patch `Add a new greenio executor
<https://review.openstack.org/#/c/108652/>`_ proposed to Oslo Messaging
* July 21, 2014: Release of greenio 0.6 which is now compatible with Trollius
* July 21, 2014: Release of Trollius 1.0
* July 14, 2014: Patch `Add a 'greenio' oslo.messaging executor (spec)
<https://review.openstack.org/#/c/104792/>`_ merged into openstack/oslo-specs.
* July 7, 2014: Patch `Fix AMQPListener for polling with timeout
<https://review.openstack.org/#/c/104964/>`_ merged into Oslo Messaging
* July 2014: greenio executor, `[openstack-dev] [oslo] Asyncio and
trollius executor for Oslo Messaging (first try):
* June 20, 2014: Patch `Add an optional timeout parameter to Listener.poll
<https://review.openstack.org/#/c/71003/>`_ merged into Oslo Messaging
* May 28, 2014: Meeting at OpenStack in Action with Doug Hellmann, Julien
  Danjou, Mehdi Abaakouk, Victor Stinner and Christophe to discuss the plan
  to port OpenStack to Python 3 and switch from eventlet to asyncio.
* April 23, 2014: Patch `Allow trollius 0.2
<https://review.openstack.org/#/c/79901/>`_ merged into
* March 21, 2014: Patch `Replace ad-hoc coroutines with Trollius coroutines
  <https://review.openstack.org/#/c/77925/>`_ proposed to Heat. Heat
  coroutines are close to Trollius coroutines. The patch was abandoned; it
  needs to be rewritten, maybe with aioeventlet.
* February 27, 2014: Article `Use the new asyncio module and Trollius in
* February 20, 2014: The full specification of the blueprint was written:
* February 8, 2014: Patch `Add a new dependency: trollius
  <https://review.openstack.org/#/c/70983/>`_ merged into
* February 4, 2014: Patch `Add a new asynchronous executor based on Trollius
  <https://review.openstack.org/#/c/70948/>`_ proposed to Oslo Messaging,
  but it was abandoned. Running a classic Trollius event loop in a dedicated
  thread doesn't fit well with the eventlet event loop.
First discussion around asyncio and OpenStack:
* December 19, 2013: Article `Why should OpenStack move to Python 3 right now?
* December 4, 2013: Blueprint `Add a asyncio executor to oslo.messaging
proposed by Flavio Percoco and accepted for OpenStack Icehouse by Mark
Threads on the openstack-dev mailing list:
* `[oslo] Progress of the port to Python 3
(Victor Stinner, Jan 6 2015)
* `[oslo] Add a new aiogreen executor for Oslo Messaging
(Victor Stinner, Nov 23 2014)
* `[oslo] Asyncio and oslo.messaging
(Mark McLoughlin, Jul 3 2014)
* `SQLAlchemy and asynchronous programming
by Mike Bayer (author and maintainer of SQLAlchemy)
* `[Solum][Oslo] Next Release of oslo.messaging?
(Victor Stinner, Mar 18 2014)
* `[solum] async / threading for python 2 and 3
(Victor Stinner, Feb 20 2014)
* `Asynchronous programming: replace eventlet with asyncio
(Victor Stinner, Feb 4 2014)