Re: [Async-sig] Tips and tricks to track memory leaks in realtime inside an AsyncIO daemon ?

2018-06-18 Thread Ludovic Gasc
Hi Alex,

Thanks for the tip, I will dig into it.

Have a nice weekend.
--
Ludovic Gasc (GMLudo)


On Mon, 18 Jun 2018 at 09:52,  wrote:

> I've done something similar but using aioconsole instead of epdb.
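> Roughly like this (an untested sketch; start_interactive_server() is
> aioconsole's documented entry point, the host/port are arbitrary):
>
> import asyncio
> import aioconsole
>
> async def main():
>     # Expose a Python REPL inside the live process, bound to localhost
>     # only; connect with e.g. `nc 127.0.0.1 8765` to inspect objects.
>     await aioconsole.start_interactive_server(host='127.0.0.1', port=8765)
>     ...  # the rest of the daemon's startup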
>
> On Sun, 2018-06-17 at 17:29 +0200, Ludovic Gasc wrote:
>
> Hi,
>
> We now have an AsyncIO daemon with memory leaks.
>
> To track them, I'm thinking of using muppy:
> https://pythonhosted.org/Pympler/muppy.html
> and objgraph.
>
> But because it's a live daemon and not a script, instead of implementing an
> HTTP endpoint to take the memory snapshot locally, I'm thinking of using
> epdb:
> https://github.com/sassoftware/epdb
> to get a Python console directly inside the process and explore
> interactively, for a more flexible way to debug.
>
> I did a quick'n'dirty lab: running epdb inside a thread executor seems to
> work more or less, but before continuing with this idea, I'm interested in
> how you track memory leaks inside your AsyncIO daemons.
>
> Thanks for your feedback.
> --
> Ludovic Gasc (GMLudo)
>
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


[Async-sig] Tips and tricks to track memory leaks in realtime inside an AsyncIO daemon ?

2018-06-17 Thread Ludovic Gasc
Hi,

We now have an AsyncIO daemon with memory leaks.

To track them, I'm thinking of using muppy:
https://pythonhosted.org/Pympler/muppy.html
and objgraph.

But because it's a live daemon and not a script, instead of implementing an
HTTP endpoint to take the memory snapshot locally, I'm thinking of using
epdb:
https://github.com/sassoftware/epdb
to get a Python console directly inside the process and explore
interactively, for a more flexible way to debug.

I did a quick'n'dirty lab: running epdb inside a thread executor seems to
work more or less, but before continuing with this idea, I'm interested in
how you track memory leaks inside your AsyncIO daemons.
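For reference, the quick'n'dirty lab boils down to something like this
(a sketch; epdb.serve()/epdb.connect() are epdb's remote entry points, the
rest is illustrative glue):

import asyncio
import epdb

async def serve_debugger():
    # epdb.serve() blocks waiting for a remote connection, so run it in the
    # default thread executor to keep the event loop responsive.
    loop = asyncio.get_event_loop()
    await loop.run_in_executor(None, epdb.serve)

# at daemon startup:
#     asyncio.ensure_future(serve_debugger())
# from a shell on the same machine:
#     python -c "import epdb; epdb.connect()"
# then, inside the remote console, look for growing types:
#     import objgraph; objgraph.show_growth(limit=10)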

Thanks for your feedback.
--
Ludovic Gasc (GMLudo)
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Ludovic Gasc
Indeed, thanks for the suggestion :-)

On Wed, 18 Apr 2018 at 01:21, Nathaniel Smith <n...@pobox.com> wrote:

> Pretty sure you want to add a try/finally around that yield, so you
> release the lock on errors.
>
> On Tue, Apr 17, 2018, 14:39 Ludovic Gasc <gml...@gmail.com> wrote:
>
>> 2018-04-17 15:16 GMT+02:00 Antoine Pitrou <solip...@pitrou.net>:
>>
>>>
>>>
>>> You could simply use something like the first 64 bits of
>>> sha1("myapp:")
>>>
>>
>> I followed your idea, except that I used hashtext directly; it's an
>> internal PostgreSQL function that generates an integer directly.
>>
>> For now, it seems to work pretty well, but I haven't finished all the
>> tests yet. The final result is literally 3 lines of Python inside an
>> async context manager, I like this solution ;-) :
>>
>> @asynccontextmanager
>> async def lock(env, category='global', name='global'):
>>     # Alternative lock id with 'mytable'::regclass::integer OID
>>     await env['aiopg']['cursor'].execute(
>>         "SELECT pg_advisory_lock( hashtext(%(lock_name)s) );",
>>         {'lock_name': '%s.%s' % (category, name)})
>>
>>     yield None
>>
>>     await env['aiopg']['cursor'].execute(
>>         "SELECT pg_advisory_unlock( hashtext(%(lock_name)s) );",
>>         {'lock_name': '%s.%s' % (category, name)})
>>
>>
>>
>>>
>>> Regards
>>>
>>> Antoine.
>>>
>>>
>>> On Tue, 17 Apr 2018 15:04:37 +0200
>>> Ludovic Gasc <gml...@gmail.com> wrote:
>>> > Hi Antoine & Chris,
>>> >
>>> > Thanks a lot for the advisory lock, I didn't know this feature in
>>> > PostgreSQL.
>>> > Indeed, it seems to fit my problem.
>>> >
>>> > The last small problem I have is that we have string names for locks,
>>> > but advisory locks accept only integers.
>>> > Nevertheless, it isn't a problem: I will map names to integers.
>>> >
>>> > Yours.
>>> >
>>> > --
>>> > Ludovic Gasc (GMLudo)
>>> >
>>> > 2018-04-17 13:41 GMT+02:00 Antoine Pitrou <solip...@pitrou.net>:
>>> >
>>> > > On Tue, 17 Apr 2018 13:34:47 +0200
>>> > > Ludovic Gasc <gml...@gmail.com> wrote:
>>> > > > Hi Nickolai,
>>> > > >
>>> > > > Thanks for your suggestions, especially for the file system lock:
>>> > > > we don't take locks often, but we must be sure they really lock.
>>> > > >
>>> > > > For suggestions 1) and 4), in fact we have several systems to sync
>>> > > > and also a PostgreSQL transaction; the request must be handled by
>>> > > > the same worker from beginning to end, and the other systems aren't
>>> > > > idempotent at all, they're "old-school" proprietary systems, good
>>> > > > luck changing that ;-)
>>> > >
>>> > > If you already have a PostgreSQL connection, can't you use a
>>> PostgreSQL
>>> > > lock?  e.g. an "advisory lock" as described in
>>> > > https://www.postgresql.org/docs/9.1/static/explicit-locking.html
>>> > >
>>> > > Regards
>>> > >
>>> > > Antoine.
>>> > >
>>> > >
>>> > >
>>> >
>>>
>>>
>>>
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Ludovic Gasc
2018-04-17 15:16 GMT+02:00 Antoine Pitrou <solip...@pitrou.net>:

>
>
> You could simply use something like the first 64 bits of
> sha1("myapp:")
>

I followed your idea, except that I used hashtext directly; it's an
internal PostgreSQL function that generates an integer directly.
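(For reference, Antoine's sha1 variant could be computed client-side along
these lines; a sketch, noting that pg_advisory_lock() takes a signed 64-bit
integer:)

import hashlib

def lock_id(name):
    # First 64 bits of sha1('myapp:<name>'), as a signed 64-bit integer
    # suitable for pg_advisory_lock(bigint).
    digest = hashlib.sha1(('myapp:%s' % name).encode('utf-8')).digest()
    return int.from_bytes(digest[:8], 'big', signed=True)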

For now, it seems to work pretty well, but I haven't finished all the tests
yet. The final result is literally 3 lines of Python inside an async
context manager, I like this solution ;-) :

@asynccontextmanager
async def lock(env, category='global', name='global'):
    # Alternative lock id with 'mytable'::regclass::integer OID
    await env['aiopg']['cursor'].execute(
        "SELECT pg_advisory_lock( hashtext(%(lock_name)s) );",
        {'lock_name': '%s.%s' % (category, name)})

    yield None

    await env['aiopg']['cursor'].execute(
        "SELECT pg_advisory_unlock( hashtext(%(lock_name)s) );",
        {'lock_name': '%s.%s' % (category, name)})
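(Side note: as Nathaniel Smith points out in a follow-up, a try/finally
around the yield guarantees the unlock runs on errors too; a sketch of that
variant, with the same hypothetical env structure:)

@asynccontextmanager
async def lock(env, category='global', name='global'):
    cursor = env['aiopg']['cursor']
    params = {'lock_name': '%s.%s' % (category, name)}
    await cursor.execute(
        "SELECT pg_advisory_lock( hashtext(%(lock_name)s) );", params)
    try:
        yield None
    finally:
        # Always release, even if the protected block raises.
        await cursor.execute(
            "SELECT pg_advisory_unlock( hashtext(%(lock_name)s) );", params)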



>
> Regards
>
> Antoine.
>
>
> On Tue, 17 Apr 2018 15:04:37 +0200
> Ludovic Gasc <gml...@gmail.com> wrote:
> > Hi Antoine & Chris,
> >
> > Thanks a lot for the advisory lock, I didn't know this feature in
> > PostgreSQL.
> > Indeed, it seems to fit my problem.
> >
> > The last small problem I have is that we have string names for locks,
> > but advisory locks accept only integers.
> > Nevertheless, it isn't a problem: I will map names to integers.
> >
> > Yours.
> >
> > --
> > Ludovic Gasc (GMLudo)
> >
> > 2018-04-17 13:41 GMT+02:00 Antoine Pitrou <solip...@pitrou.net>:
> >
> > > On Tue, 17 Apr 2018 13:34:47 +0200
> > > Ludovic Gasc <gml...@gmail.com> wrote:
> > > > Hi Nickolai,
> > > >
> > > > Thanks for your suggestions, especially for the file system lock:
> > > > we don't take locks often, but we must be sure they really lock.
> > > >
> > > > For suggestions 1) and 4), in fact we have several systems to sync
> > > > and also a PostgreSQL transaction; the request must be handled by the
> > > > same worker from beginning to end, and the other systems aren't
> > > > idempotent at all, they're "old-school" proprietary systems, good
> > > > luck changing that ;-)
> > >
> > > If you already have a PostgreSQL connection, can't you use a PostgreSQL
> > > lock?  e.g. an "advisory lock" as described in
> > > https://www.postgresql.org/docs/9.1/static/explicit-locking.html
> > >
> > > Regards
> > >
> > > Antoine.
> > >
> > >
> > >
> >
>
>
>
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Ludovic Gasc
Hi Dima,

Thanks for your time and explanations :-)
However, I have the intuition that it will take me more time to implement
your idea compared to the built-in feature of PostgreSQL.

Nevertheless, I'll keep your idea in mind in case I have problems with
PostgreSQL.

Have a nice day.

--
Ludovic Gasc (GMLudo)

2018-04-17 14:17 GMT+02:00 Dima Tisnek <dim...@gmail.com>:

> Hi Ludovic,
>
> I believe it's relatively straightforward to implement the core
> functionality, if you can at first reduce it to:
> * allow only one coro to wait on lock at a given time (i.e. one user
> per process / event loop)
> * decide explicitly if you want other coros to continue (I assume so,
> as blocking the entire process would be trivial)
> * don't care about performance too much :)
>
> Once that's done, you can allow multiple users per event loop by
> wrapping your inter-process lock in a regular async lock.
>
> Wrt. performance, you can start with a simple client-server
> implementation, for example where:
> * single-threaded server listens on some port, accepts 1 connection at
> a time, writes something on the connection and waits for connection to
> be closed
> * each client connects (not informative due to listen backlog) and
> waits for data; when the client gets the data, it has the lock
> * when client wants to release the lock, it closes the connection,
> which unblocks the server
> * socket communication is relatively easy to marry to the event loop :)
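> Something like this bare-bones, untested sketch (localhost and the port
> number are arbitrary):
>
> import asyncio
> from functools import partial
>
> async def handle(grant, reader, writer):
>     async with grant:               # serialize: one holder at a time
>         writer.write(b'GRANTED\n')  # client owns the lock once it reads this
>         await writer.drain()
>         await reader.read()         # returns b'' when the client closes => release
>     writer.close()
>
> async def acquire(host='127.0.0.1', port=8888):
>     reader, writer = await asyncio.open_connection(host, port)
>     await reader.readline()         # blocks until the server grants the lock
>     return writer                   # writer.close() releases the lock
>
> async def run_server():
>     grant = asyncio.Lock()
>     server = await asyncio.start_server(partial(handle, grant),
>                                         '127.0.0.1', 8888)
>     await server.wait_closed()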
>
> If you want high performance (i.e. low latency), you'd probably want
> to go with futex, but that may prove hard to marry to asyncio
> internals.
> I guess locking can always be proxied through a thread, at some cost
> to performance.
>
>
> If performance is important, I'd suggest starting with a thread proxy
> from the start. It could go like this:
> Each named lock gets its own thread (in each process / event loop), a
> sync lock and a condition variable.
> When a coro wants to take the lock, it creates an empty Future, briefly
> takes the sync lock, adds this future to the waiters, signals on the
> condition variable and awaits this Future.
> The thread wakes up, validates under the sync lock that there's someone
> in the queue, tries to take the classical inter-process lock (sysv or
> file or whatever), and when that succeeds, resolves the future using
> loop.call_soon_threadsafe().
> I'm omitting implementation details, like what happens if the Future is
> leaked (discarded before it's resolved), how release is orchestrated, etc.
> The key point is that offloading locking to a dedicated thread reduces
> the original problem to a synchronous interprocess locking problem.
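> Omitting those details, a rough, untested sketch (using fcntl.flock as
> the classical interprocess lock; a regular async lock per process is
> still needed on top, as noted above, because flock is per open file,
> not per coroutine):
>
> import asyncio
> import collections
> import fcntl
> import threading
>
> class InterprocessLock:
>     def __init__(self, path):
>         self._file = open(path, 'w')
>         self._cond = threading.Condition()
>         self._waiters = collections.deque()
>         threading.Thread(target=self._worker, daemon=True).start()
>
>     def _worker(self):
>         while True:
>             with self._cond:
>                 while not self._waiters:
>                     self._cond.wait()
>                 loop, fut = self._waiters.popleft()
>             fcntl.flock(self._file, fcntl.LOCK_EX)   # blocks across processes
>             loop.call_soon_threadsafe(fut.set_result, None)
>
>     async def acquire(self):
>         loop = asyncio.get_event_loop()
>         fut = loop.create_future()
>         with self._cond:
>             self._waiters.append((loop, fut))
>             self._cond.notify()
>         await fut
>
>     def release(self):
>         fcntl.flock(self._file, fcntl.LOCK_UN)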
>
>
> Cheers!
>
>
> On 17 April 2018 at 06:05, Ludovic Gasc <gml...@gmail.com> wrote:
> > Hi,
> >
> > I'm looking for an equivalent of asyncio.Lock
> > (https://docs.python.org/3/library/asyncio-sync.html#asyncio.Lock) but
> > shared between several processes on the same server, because I'm
> > migrating a daemon from a mono-worker to a multi-worker pattern.
> >
> > For now, the closest solution in terms of API seems to be aioredlock:
> > https://github.com/joanvila/aioredlock#aioredlock
> > But I'm not a big fan of polling nor of timeouts, because the lock I
> > need is critical: I'd rather block the code than unlock on a timeout.
> >
> > Am I missing a new awesome library, or do you have an easier approach?
> >
> > Thanks for your responses.
> > --
> > Ludovic Gasc (GMLudo)
> >
>
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] [python-tulip] Re: asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Ludovic Gasc
Hi Antoine & Chris,

Thanks a lot for the advisory lock; I didn't know about this feature in
PostgreSQL.
Indeed, it seems to fit my problem.

The last small problem I have is that we have string names for locks,
but advisory locks accept only integers.
Nevertheless, it isn't a problem: I will map names to integers.

Yours.

--
Ludovic Gasc (GMLudo)

2018-04-17 13:41 GMT+02:00 Antoine Pitrou <solip...@pitrou.net>:

> On Tue, 17 Apr 2018 13:34:47 +0200
> Ludovic Gasc <gml...@gmail.com> wrote:
> > Hi Nickolai,
> >
> > Thanks for your suggestions, especially for the file system lock: we
> > don't take locks often, but we must be sure they really lock.
> >
> > For suggestions 1) and 4), in fact we have several systems to sync and
> > also a PostgreSQL transaction; the request must be handled by the same
> > worker from beginning to end, and the other systems aren't idempotent at
> > all, they're "old-school" proprietary systems, good luck changing that ;-)
>
> If you already have a PostgreSQL connection, can't you use a PostgreSQL
> lock?  e.g. an "advisory lock" as described in
> https://www.postgresql.org/docs/9.1/static/explicit-locking.html
>
> Regards
>
> Antoine.
>
>
>
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] [python-tulip] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Ludovic Gasc
Hi Nickolai,

Thanks for your suggestions, especially for the file system lock: we don't
take locks often, but we must be sure they really lock.

For suggestions 1) and 4), in fact we have several systems to sync and also
a PostgreSQL transaction; the request must be handled by the same worker
from beginning to end, and the other systems aren't idempotent at all,
they're "old-school" proprietary systems, good luck changing that ;-)

Regards.
--
Ludovic Gasc (GMLudo)

2018-04-17 12:46 GMT+02:00 Nickolai Novik <nickolaino...@gmail.com>:

> Hi, the redis lock has its own limitations, and depending on your use case
> it may or may not be suitable [1]. If possible I would redefine the
> problem and also consider:
> 1) create a worker per specific resource type to avoid locking
> 2) optimistic locking
> 3) a file system lock like in twisted, but not sure about performance and
> edge cases there
> 4) make operations on the resource idempotent
>
> [1] http://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html
> [2] https://github.com/twisted/twisted/blob/e38cc25a67747899c6984d6ebaa8d3d134799415/src/twisted/python/lockfile.py
>
> On Tue, 17 Apr 2018 at 13:01 Ludovic Gasc <gml...@gmail.com> wrote:
>
>> Hi Roberto,
>>
>> Thanks for the pointer, it's exactly the type of feedback I'm looking
>> for: ideas outside my comfort zone.
>> However, in our use case we are using gunicorn, which to my knowledge
>> forks instead of using multiprocessing; I can't use multiprocessing
>> without removing gunicorn.
>>
>> If somebody is using aioredlock in a project, I'm interested in
>> feedback.
>>
>> Have a nice week.
>>
>>
>> --
>> Ludovic Gasc (GMLudo)
>>
>> 2018-04-17 7:19 GMT+02:00 Roberto Martínez <robertomartin...@gmail.com>:
>>
>>>
>>> Hi,
>>>
>>> I don't know if there is a third party solution for this.
>>>
>>> I think the closest you can get today using the standard library is
>>> using a multiprocessing.Manager().Lock() (which can be shared among
>>> processes) and calling the lock.acquire() function with
>>> loop.run_in_executor(), using a ThreadPoolExecutor to avoid blocking
>>> the asyncio event loop.
>>>
>>> Best regards,
>>> Roberto
>>>
>>>
>>> On Tue, 17 Apr 2018 at 0:05, Ludovic Gasc (<gml...@gmail.com>)
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm looking for an equivalent of asyncio.Lock (
>>>> https://docs.python.org/3/library/asyncio-sync.html#asyncio.Lock) but
>>>> shared between several processes on the same server, because I'm
>>>> migrating a daemon from a mono-worker to a multi-worker pattern.
>>>>
>>>> For now, the closest solution in terms of API seems to be aioredlock:
>>>> https://github.com/joanvila/aioredlock#aioredlock
>>>> But I'm not a big fan of polling nor of timeouts, because the lock I
>>>> need is critical: I'd rather block the code than unlock on a timeout.
>>>>
>>>> Am I missing a new awesome library, or do you have an easier approach?
>>>>
>>>> Thanks for your responses.
>>>> --
>>>> Ludovic Gasc (GMLudo)
>>>>
>>>
>>
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] [python-tulip] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Ludovic Gasc
Hi Roberto,

Thanks for the pointer, it's exactly the type of feedback I'm looking for:
ideas outside my comfort zone.
However, in our use case we are using gunicorn, which to my knowledge forks
instead of using multiprocessing; I can't use multiprocessing without
removing gunicorn.

If somebody is using aioredlock in a project, I'm interested in feedback.

Have a nice week.


--
Ludovic Gasc (GMLudo)

2018-04-17 7:19 GMT+02:00 Roberto Martínez <robertomartin...@gmail.com>:

>
> Hi,
>
> I don't know if there is a third party solution for this.
>
> I think the closest you can get today using the standard library is using
> a multiprocessing.Manager().Lock() (which can be shared among processes)
> and calling the lock.acquire() function with loop.run_in_executor(), using
> a ThreadPoolExecutor to avoid blocking the asyncio event loop.
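> A minimal sketch of that combination (untested; the Manager must be
> created before the workers fork, so that they all share one proxy):
>
> import asyncio
> import multiprocessing
> from concurrent.futures import ThreadPoolExecutor
>
> manager = multiprocessing.Manager()
> shared_lock = manager.Lock()      # proxy object, shareable across processes
> executor = ThreadPoolExecutor(max_workers=4)
>
> async def critical_section(loop):
>     # lock.acquire() blocks, so push it into a thread to keep the loop free.
>     await loop.run_in_executor(executor, shared_lock.acquire)
>     try:
>         ...  # the protected work
>     finally:
>         shared_lock.release()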
>
> Best regards,
> Roberto
>
>
> On Tue, 17 Apr 2018 at 0:05, Ludovic Gasc (<gml...@gmail.com>)
> wrote:
>
>> Hi,
>>
>> I'm looking for an equivalent of asyncio.Lock
>> (https://docs.python.org/3/library/asyncio-sync.html#asyncio.Lock) but
>> shared between several processes on the same server, because I'm
>> migrating a daemon from a mono-worker to a multi-worker pattern.
>>
>> For now, the closest solution in terms of API seems to be aioredlock:
>> https://github.com/joanvila/aioredlock#aioredlock
>> But I'm not a big fan of polling nor of timeouts, because the lock I
>> need is critical: I'd rather block the code than unlock on a timeout.
>>
>> Am I missing a new awesome library, or do you have an easier approach?
>>
>> Thanks for your responses.
>> --
>> Ludovic Gasc (GMLudo)
>>
>
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


[Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-16 Thread Ludovic Gasc
Hi,

I'm looking for an equivalent of asyncio.Lock (
https://docs.python.org/3/library/asyncio-sync.html#asyncio.Lock) but
shared between several processes on the same server, because I'm migrating
a daemon from a mono-worker to a multi-worker pattern.

For now, the closest solution in terms of API seems to be aioredlock:
https://github.com/joanvila/aioredlock#aioredlock
But I'm not a big fan of polling nor of timeouts, because the lock I need
is critical: I'd rather block the code than unlock on a timeout.

Am I missing a new awesome library, or do you have an easier approach?

Thanks for your responses.
--
Ludovic Gasc (GMLudo)
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] PEP: asynchronous generators

2016-08-08 Thread Ludovic Gasc
2016-08-08 13:24 GMT+02:00 Cory Benfield <c...@lukasa.co.uk>:

>
> On 8 Aug 2016, at 11:16, Ludovic Gasc <gml...@gmail.com> wrote:
>
> Certainly this split should be easier for some protocols/transports than
> for others: it would be interesting to know whether somebody has already
> tried to use QUIC and HTTP/2 at the same time with Python.
>
>
> AFAIK they haven’t. This is partly because there’s no good QUIC
> implementation to bind from Python at this time. Chromium’s QUIC library
> requires a giant pool of custom C++ to bind it appropriately, and Go’s
> implementation includes a gigantic runtime and is quite large.
>

I came to the same conclusion.
For now, I don't know which is more complex: writing a Python binding or
reimplementing QUIC in Python ;-)


> As and when a good OSS QUIC library starts to surface, I’ll be able to
> answer this question more effectively. But I’m not expecting a huge issue.
> =)
>

We'll see when it happens ;-)
Implemented in 2012, pushed to production by Google in 2013, and three
years later only one Web browser and one programming language support it,
to my knowledge.
Does nobody use it except Google, or has everybody already migrated to
Go? ;-) Or is it simply too complicated to use/debug/...?
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/

Re: [Async-sig] PEP: asynchronous generators

2016-08-07 Thread Ludovic Gasc
+1 for the PEP, nothing more to add from a technical point of view.
An extra step in the right direction, at least to me.
Thank you Yury for that :-)

About the side conversation on the sync/async split world: short of forcing
coroutine usage as a pattern like in Go, I don't see how we can become more
implicit.
Even if the Zen of Python recommends preferring an explicit approach, I see
explicit/implicit less as a binary choice than as a balance you must adjust
between simplicity and flexibility.

To me, the success of Python as a language also comes from this good
balance between the two approaches, and the recent move from "yield from"
to "await" illustrates that: hide the internal mechanisms of the
implementation, but keep the explicit way to declare it.

Like Andrew Svetlov, I don't believe much in the implicit approach of
Gevent, because very quickly you need to understand the extra tools, like
synchronization primitives. Knowing whether or not you need to prefix a
function call with "await" is the tree that hides the forest.

With the async pattern, it's impossible to hide everything and have it all
work automagically: you must understand a little of what's happening, or it
will be very complicated to debug.

To me, you can hide everything only if you are really sure it will work
100% of the time without human intervention, like autonomous Google cars.

However, it might be interesting to have an async "linter" that lists all
blocking I/O calls inside async coroutines, to help newcomers find this
type of bug; a naive sketch follows below.
But with the dynamic nature of Python, I don't know if it's realistic to
implement that fully.
To me, it would be a better answer than trying to remove all sync/async
code differences.
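The sketch (it only knows a hard-coded list of blocking calls, ignores
aliased imports, and misses anything dynamic):

import ast

BLOCKING = {('time', 'sleep'), ('requests', 'get'), ('requests', 'post')}

class BlockingCallFinder(ast.NodeVisitor):
    def visit_AsyncFunctionDef(self, node):
        # Walk the body of each "async def" and flag known blocking calls.
        for sub in ast.walk(node):
            if (isinstance(sub, ast.Call)
                    and isinstance(sub.func, ast.Attribute)
                    and isinstance(sub.func.value, ast.Name)
                    and (sub.func.value.id, sub.func.attr) in BLOCKING):
                print('line %d: blocking call %s.%s() inside async def %s'
                      % (sub.lineno, sub.func.value.id,
                         sub.func.attr, node.name))
        self.generic_visit(node)

with open('daemon.py') as source:
    BlockingCallFinder().visit(ast.parse(source.read()))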

Moreover, I see the need for async libs as an extra opportunity to
challenge and simplify the Python toolbox.

For now, with aiohttp you have a unified API for HTTP in general, contrary
to the sync world with requests and flask, for example.
At least to me, a client and a server are only two sides of the same piece.
Even more so with p2p protocols.

As discussed several times, the next level might be more code reuse, as
suggested by David Beazley with Sans-IO: split the protocol from the I/O
handling:
https://twitter.com/dabeaz/status/761599925444550656?lang=fr
https://github.com/brettcannon/sans-io

I don't know yet whether the benefit of sharing more code between
implementations will outweigh the potential increase in code complexity.

The only point I'm sure of for now: I'm preparing the popcorn to watch the
next episodes, curious to see which ideas/implementations will emerge ;-)
At least to me, it's more interesting than following a TV series, thank you
for that :-)

Have a nice week.

Ludovic Gasc (GMLudo)
http://www.gmludo.eu/

On 29 Jul 2016 20:50, "Yarko Tymciurak" <yark...@gmail.com> wrote:

>
>
> On Friday, July 29, 2016, Yury Selivanov <yseliva...@gmail.com> wrote:
>
>> Comments inlined:
>>
>>
>> > On Jul 29, 2016, at 2:20 PM, Yarko Tymciurak <yark...@gmail.com> wrote:
>> >
>> > Hmm...  I think we need to think about a future where, programmatically,
>> > there's little to no distinction between async and synchronous functions.
>> > Pushing this deeper down into the system is the way to go. For one, it
>> > will serve simple multi-core use once gilectomy is completed (it, or
>> > something effectively equivalent, will complete).  For another, this is
>> > the path to reducing the functionally "useless" rewrite efforts of
>> > libraries (e.g. github.com/aio-libs), which loosely resemble all the
>> > efforts of migrating libraries from 2 to 3.  The resistance and
>> > unexpected time that the 2-to-3 migration experienced won't readily be
>> > mimicked in async tasks - too much effort for compute- and I/O-bound
>> > benefits?  Maintaining two versions of needed libraries, or jumping
>> > languages, is what will increasingly happen in the distributed (and even
>> > more so IoT) world.
>>
>> When and *if* gilectomy is completed (or another project to remove the
>> GIL), we will be able to do this:
>>
>> 1) Run existing async/await applications as is, but instead of running a
>> process per core, we will be able to run a single process with many
>> threads.  Likely one asyncio (or other) event loop per thread.  This is
>> very speculative, but possible in theory.
>>
>> 2) Run existing blocking IO applications in several threads in one
>> process.  This is something that only sounds easy to do; I suspect that a
>> lot of code will break (or deadlock) when the GIL is removed.  Even if
>> everything works perfectly well, threads aren't the answer to all
>> problems; try managing 1000s of them.
>>
>> Long story short, even if we had no G