For asyncio_redis, we noticed that there's quite a lot of overhead involved 
in having a whole asynchronous flow.

Our bottleneck is the queue between the protocol parser and a 
"MultiBulkReply" object that the end-user receives after doing a query.

A redis query can return a *lot* of small objects, but also larger ones. In 
any case, the parser processes the data stream and puts all the answers on 
the queue. The consumer of the queue can then process the query result 
asynchronously. When queries produce many small answers (some, like pubsub, 
even return an "infinite" stream of them), a lot of `Future` instances are 
created to be passed over the queue, and depending on how the library is 
used, the data that one such `Future` carries can be either small or large.
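The flow above can be sketched roughly as follows. This is not the actual asyncio_redis implementation, just a minimal model of the pattern: a parser coroutine feeding an `asyncio.Queue`, and a consumer that awaits each reply. Every `get()` on an empty queue suspends on an internal Future, which is where the per-answer overhead comes from.

```python
import asyncio


async def parser(queue, replies):
    # Producer: stands in for the protocol parser, which pushes each
    # parsed reply onto the queue as it comes off the wire.
    for reply in replies:
        await queue.put(reply)
    await queue.put(None)  # sentinel: end of the reply stream


async def consumer(queue):
    # Consumer: each get() on an empty queue suspends on an internal
    # Future, so a stream of many small replies means many small
    # scheduling round-trips through the event loop.
    results = []
    while True:
        item = await queue.get()
        if item is None:
            break
        results.append(item)
    return results


async def main():
    queue = asyncio.Queue()
    replies = [f"value-{i}" for i in range(5)]
    results, _ = await asyncio.gather(consumer(queue), parser(queue, replies))
    return results


print(asyncio.run(main()))
```

With five replies this is negligible; with a pubsub-style stream of millions of small messages, the per-item suspend/resume cost dominates.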

This results in a situation where some users will notice a lot of overhead 
in the asynchronous code, while others won't notice it at all.


Do we have any other libraries facing this problem?

What about an SQL driver, in the case of a "SELECT * FROM table;" query?
Should it return a proxy that generates futures? Or a query set that has a 
```fetch_next_rows(amount)``` method, so that the caller can decide how 
many rows to process synchronously before yielding control again?
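A sketch of that second option, with a hypothetical `QueryResult` class and `fetch_next_rows` method (these names are assumptions, not any driver's real API): rows are buffered on a queue, and the caller only crosses an await point once per batch rather than once per row.

```python
import asyncio


class QueryResult:
    """Hypothetical cursor over a stream of parsed rows.

    Rows arrive on an internal queue; the caller pulls them out in
    batches of a size it chooses, so it controls how much work is
    done synchronously between await points.
    """

    def __init__(self, queue):
        self._queue = queue
        self._done = False

    async def fetch_next_rows(self, amount):
        # queue.get() returns immediately while rows are buffered,
        # so the caller typically suspends only between batches,
        # not once per row.
        rows = []
        while len(rows) < amount and not self._done:
            row = await self._queue.get()
            if row is None:
                self._done = True  # sentinel: no more rows
            else:
                rows.append(row)
        return rows


async def main():
    queue = asyncio.Queue()
    for i in range(7):
        queue.put_nowait(("row", i))
    queue.put_nowait(None)

    result = QueryResult(queue)
    batch_sizes = []
    while True:
        batch = await result.fetch_next_rows(3)
        if not batch:
            break
        batch_sizes.append(len(batch))
    return batch_sizes


print(asyncio.run(main()))  # → [3, 3, 1]
```

The trade-off is latency versus throughput: a large `amount` amortizes the scheduling cost over many rows, a small one keeps the event loop responsive.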

I'm very interested if anyone has faced the same problem.