Re: [Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Ludovic Gasc
Indeed, thanks for the suggestion :-)

On Wed, 18 Apr 2018 at 01:21, Nathaniel Smith wrote:

> Pretty sure you want to add a try/finally around that yield, so you
> release the lock on errors.
>
> On Tue, Apr 17, 2018, 14:39 Ludovic Gasc  wrote:
>
>> 2018-04-17 15:16 GMT+02:00 Antoine Pitrou :
>>
>>>
>>>
>>> You could simply use something like the first 64 bits of
>>> sha1("myapp:")
>>>
>>
>> I have followed your idea, except that I used hashtext directly; it's an
>> internal PostgreSQL function that generates an integer directly.
>>
>> For now, it seems to work pretty well, but I haven't finished all the tests
>> yet. The final result is literally 3 lines of Python inside an async
>> context manager; I like this solution ;-) :
>>
>> @asynccontextmanager
>> async def lock(env, category='global', name='global'):
>>     # Alternative lock id with 'mytable'::regclass::integer OID
>>     await env['aiopg']['cursor'].execute(
>>         "SELECT pg_advisory_lock( hashtext(%(lock_name)s) );",
>>         {'lock_name': '%s.%s' % (category, name)})
>>
>>     yield None
>>
>>     await env['aiopg']['cursor'].execute(
>>         "SELECT pg_advisory_unlock( hashtext(%(lock_name)s) );",
>>         {'lock_name': '%s.%s' % (category, name)})
>>
>>
>>
>>>
>>> Regards
>>>
>>> Antoine.
>>>
>>>
>>> On Tue, 17 Apr 2018 15:04:37 +0200
>>> Ludovic Gasc  wrote:
>>> > Hi Antoine & Chris,
>>> >
>>> > Thanks a lot for the advisory lock, I didn't know about this feature in
>>> > PostgreSQL.
>>> > Indeed, it seems to fit my problem.
>>> >
>>> > The last small problem I have is that we have string names for locks,
>>> > but advisory locks accept only integers.
>>> > Nevertheless, it isn't a problem; I will map names to integers.
>>> >
>>> > Yours.
>>> >
>>> > --
>>> > Ludovic Gasc (GMLudo)
>>> >
>>> > 2018-04-17 13:41 GMT+02:00 Antoine Pitrou :
>>> >
>>> > > On Tue, 17 Apr 2018 13:34:47 +0200
>>> > > Ludovic Gasc  wrote:
>>> > > > Hi Nickolai,
>>> > > >
>>> > > > Thanks for your suggestions, especially for the file system lock: we
>>> > > > don't lock often, but we must be sure it's locked.
>>> > > >
>>> > > > Regarding suggestions 1) and 4): in fact we have several systems to
>>> > > > sync as well as a PostgreSQL transaction, the request must be handled
>>> > > > by the same worker from beginning to end, and the other systems aren't
>>> > > > idempotent at all; they're "old-school" proprietary systems, good luck
>>> > > > changing that ;-)
>>> > >
>>> > > If you already have a PostgreSQL connection, can't you use a PostgreSQL
>>> > > lock?  e.g. an "advisory lock" as described in
>>> > > https://www.postgresql.org/docs/9.1/static/explicit-locking.html
>>> > >
>>> > > Regards
>>> > >
>>> > > Antoine.
>>> > >
>>> > >
>>> > >
>>> >
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Nathaniel Smith
Pretty sure you want to add a try/finally around that yield, so you release
the lock on errors.
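
For illustration, a minimal sketch of that change, reusing the env / aiopg
cursor structure from Ludovic's snippet quoted below (the asynccontextmanager
import and the exact cursor plumbing are assumptions, not a tested
implementation):

from contextlib import asynccontextmanager

@asynccontextmanager
async def lock(env, category='global', name='global'):
    lock_name = '%s.%s' % (category, name)
    await env['aiopg']['cursor'].execute(
        "SELECT pg_advisory_lock( hashtext(%(lock_name)s) );",
        {'lock_name': lock_name})
    try:
        yield None
    finally:
        # Runs even if the protected block raises, so the advisory lock
        # is always released.
        await env['aiopg']['cursor'].execute(
            "SELECT pg_advisory_unlock( hashtext(%(lock_name)s) );",
            {'lock_name': lock_name})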

On Tue, Apr 17, 2018, 14:39 Ludovic Gasc  wrote:

> 2018-04-17 15:16 GMT+02:00 Antoine Pitrou :
>
>>
>>
>> You could simply use something like the first 64 bits of
>> sha1("myapp:")
>>
>
> I have followed your idea, except that I used hashtext directly; it's an
> internal PostgreSQL function that generates an integer directly.
>
> For now, it seems to work pretty well, but I haven't finished all the tests
> yet. The final result is literally 3 lines of Python inside an async
> context manager; I like this solution ;-) :
>
> @asynccontextmanager
> async def lock(env, category='global', name='global'):
>     # Alternative lock id with 'mytable'::regclass::integer OID
>     await env['aiopg']['cursor'].execute(
>         "SELECT pg_advisory_lock( hashtext(%(lock_name)s) );",
>         {'lock_name': '%s.%s' % (category, name)})
>
>     yield None
>
>     await env['aiopg']['cursor'].execute(
>         "SELECT pg_advisory_unlock( hashtext(%(lock_name)s) );",
>         {'lock_name': '%s.%s' % (category, name)})
>
>
>
>>
>> Regards
>>
>> Antoine.
>>
>>
>> On Tue, 17 Apr 2018 15:04:37 +0200
>> Ludovic Gasc  wrote:
>> > Hi Antoine & Chris,
>> >
>> > Thanks a lot for the advisory lock, I didn't know about this feature in
>> > PostgreSQL.
>> > Indeed, it seems to fit my problem.
>> >
>> > The last small problem I have is that we have string names for locks,
>> > but advisory locks accept only integers.
>> > Nevertheless, it isn't a problem; I will map names to integers.
>> >
>> > Yours.
>> >
>> > --
>> > Ludovic Gasc (GMLudo)
>> >
>> > 2018-04-17 13:41 GMT+02:00 Antoine Pitrou :
>> >
>> > > On Tue, 17 Apr 2018 13:34:47 +0200
>> > > Ludovic Gasc  wrote:
>> > > > Hi Nickolai,
>> > > >
>> > > > Thanks for your suggestions, especially for the file system lock: we
>> > > > don't lock often, but we must be sure it's locked.
>> > > >
>> > > > Regarding suggestions 1) and 4): in fact we have several systems to
>> > > > sync as well as a PostgreSQL transaction, the request must be handled
>> > > > by the same worker from beginning to end, and the other systems aren't
>> > > > idempotent at all; they're "old-school" proprietary systems, good luck
>> > > > changing that ;-)
>> > >
>> > > If you already have a PostgreSQL connection, can't you use a PostgreSQL
>> > > lock?  e.g. an "advisory lock" as described in
>> > > https://www.postgresql.org/docs/9.1/static/explicit-locking.html
>> > >
>> > > Regards
>> > >
>> > > Antoine.
>> > >
>> > >
>> > >
>> >
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Ludovic Gasc
2018-04-17 15:16 GMT+02:00 Antoine Pitrou :

>
>
> You could simply use something like the first 64 bits of
> sha1("myapp:")
>

I have followed your idea, except that I used hashtext directly; it's an
internal PostgreSQL function that generates an integer directly.

For now, it seems to work pretty well, but I haven't finished all the tests
yet. The final result is literally 3 lines of Python inside an async
context manager; I like this solution ;-) :

@asynccontextmanager
async def lock(env, category='global', name='global'):
    # Alternative lock id with 'mytable'::regclass::integer OID
    await env['aiopg']['cursor'].execute(
        "SELECT pg_advisory_lock( hashtext(%(lock_name)s) );",
        {'lock_name': '%s.%s' % (category, name)})

    yield None

    await env['aiopg']['cursor'].execute(
        "SELECT pg_advisory_unlock( hashtext(%(lock_name)s) );",
        {'lock_name': '%s.%s' % (category, name)})
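
For illustration, a hypothetical call site could look like this (the
category / name values and the process_invoice coroutine are made up; only
the lock() context manager above comes from the snippet):

async def handle_payment(env, invoice_id):
    # Only one worker at a time may process a given invoice.
    async with lock(env, category='payment', name=str(invoice_id)):
        await process_invoice(env, invoice_id)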



>
> Regards
>
> Antoine.
>
>
> On Tue, 17 Apr 2018 15:04:37 +0200
> Ludovic Gasc  wrote:
> > Hi Antoine & Chris,
> >
> > Thanks a lot for the advisory lock, I didn't know about this feature in
> > PostgreSQL.
> > Indeed, it seems to fit my problem.
> >
> > The last small problem I have is that we have string names for locks,
> > but advisory locks accept only integers.
> > Nevertheless, it isn't a problem; I will map names to integers.
> >
> > Yours.
> >
> > --
> > Ludovic Gasc (GMLudo)
> >
> > 2018-04-17 13:41 GMT+02:00 Antoine Pitrou :
> >
> > > On Tue, 17 Apr 2018 13:34:47 +0200
> > > Ludovic Gasc  wrote:
> > > > Hi Nickolai,
> > > >
> > > > Thanks for your suggestions, especially for the file system lock: we
> > > > don't lock often, but we must be sure it's locked.
> > > >
> > > > Regarding suggestions 1) and 4): in fact we have several systems to
> > > > sync as well as a PostgreSQL transaction, the request must be handled
> > > > by the same worker from beginning to end, and the other systems aren't
> > > > idempotent at all; they're "old-school" proprietary systems, good luck
> > > > changing that ;-)
> > >
> > > If you already have a PostgreSQL connection, can't you use a PostgreSQL
> > > lock?  e.g. an "advisory lock" as described in
> > > https://www.postgresql.org/docs/9.1/static/explicit-locking.html
> > >
> > > Regards
> > >
> > > Antoine.
> > >
> > >
> > >
> >
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Antoine Pitrou


You could simply use something like the first 64 bits of
sha1("myapp:")

Regards

Antoine.


On Tue, 17 Apr 2018 15:04:37 +0200
Ludovic Gasc  wrote:
> Hi Antoine & Chris,
> 
> Thanks a lot for the advisory lock, I didn't know about this feature in
> PostgreSQL.
> Indeed, it seems to fit my problem.
> 
> The last small problem I have is that we have string names for locks,
> but advisory locks accept only integers.
> Nevertheless, it isn't a problem; I will map names to integers.
> 
> Yours.
> 
> --
> Ludovic Gasc (GMLudo)
> 
> 2018-04-17 13:41 GMT+02:00 Antoine Pitrou :
> 
> > On Tue, 17 Apr 2018 13:34:47 +0200
> > Ludovic Gasc  wrote:  
> > > Hi Nickolai,
> > >
> > > Thanks for your suggestions, especially for the file system lock: we
> > > don't lock often, but we must be sure it's locked.
> > >
> > > Regarding suggestions 1) and 4): in fact we have several systems to
> > > sync as well as a PostgreSQL transaction, the request must be handled
> > > by the same worker from beginning to end, and the other systems aren't
> > > idempotent at all; they're "old-school" proprietary systems, good luck
> > > changing that ;-)
> >
> > If you already have a PostgreSQL connection, can't you use a PostgreSQL
> > lock?  e.g. an "advisory lock" as described in
> > https://www.postgresql.org/docs/9.1/static/explicit-locking.html
> >
> > Regards
> >
> > Antoine.
> >
> >
> >  
> 



___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Ludovic Gasc
Hi Dima,

Thanks for your time and explanations :-)
However, my intuition is that it will take me more time to implement
your idea compared to using the built-in PostgreSQL feature.

Nevertheless, I'll keep your idea in mind in case I have problems with
PostgreSQL.

Have a nice day.

--
Ludovic Gasc (GMLudo)

2018-04-17 14:17 GMT+02:00 Dima Tisnek :

> Hi Ludovic,
>
> I believe it's relatively straightforward to implement the core
> functionality, if you can at first reduce it to:
> * allow only one coro to wait on the lock at a given time (i.e. one user
> per process / event loop)
> * decide explicitly if you want other coros to continue (I assume so,
> as blocking the entire process would be trivial)
> * don't care about performance too much :)
>
> Once that's done, you can allow multiple users per event loop by
> wrapping your inter-process lock in a regular async lock.
>
> Wrt. performance, you can start with a simple client-server
> implementation, for example where:
> * single-threaded server listens on some port, accepts 1 connection at
> a time, writes something on the connection and waits for connection to
> be closed
> * each client connects (not informative due to listen backlog) and
> waits for data, when client gets the data, it has the lock
> * when client wants to release the lock, it closes the connection,
> which unblocks the server
> * socket communication is relatively easy to marry to the event loop :)
>
> If you want high performance (i.e. low latency), you'd probably want
> to go with futex, but that may prove hard to marry to asyncio
> internals.
> I guess locking can always be proxied through a thread, at some cost
> to performance.
>
>
> If performance is important, I'd suggest starting with a thread proxy
> from the start. It could go like this:
> Each named lock gets its own thread (in each process / event loop), a sync
> lock and a condition variable.
> When a coro wants to take the lock, it creates an empty Future,
> briefly takes the sync lock, adds this future to the waiters, signals
> the condition variable, and awaits this Future.
> The thread wakes up, checks under the sync lock that there's someone in
> the queue, tries to take a classical inter-process lock (SysV or file or
> whatever), and when that succeeds, resolves the future using
> loop.call_soon_threadsafe().
> I'm omitting implementation details, like what happens if the Future is
> leaked (discarded before it's resolved), how release is orchestrated, etc.
> The key point is that offloading the locking to a dedicated thread reduces
> the original problem to a synchronous inter-process locking problem.
>
>
> Cheers!
>
>
> On 17 April 2018 at 06:05, Ludovic Gasc  wrote:
> > Hi,
> >
> > I'm looking for an equivalent of asyncio.Lock
> > (https://docs.python.org/3/library/asyncio-sync.html#asyncio.Lock) but
> > shared between several processes on the same server, because I'm
> > migrating a daemon from a mono-worker to a multi-worker pattern.
> >
> > For now, the closest solution in terms of API seems to be aioredlock:
> > https://github.com/joanvila/aioredlock#aioredlock
> > But I'm not a big fan of polling or timeouts, because the lock I need is
> > very critical; I'd rather block the code than unlock on a timeout.
> >
> > Am I missing a new awesome library, or do you have an easier approach?
> >
> > Thanks for your responses.
> > --
> > Ludovic Gasc (GMLudo)
> >
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


Re: [Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-17 Thread Dima Tisnek
Hi Ludovic,

I believe it's relatively straightforward to implement the core
functionality, if you can at first reduce it to:
* allow only one coro to wait on the lock at a given time (i.e. one user
per process / event loop)
* decide explicitly if you want other coros to continue (I assume so,
as blocking the entire process would be trivial)
* don't care about performance too much :)

Once that's done, you can allow multiple users per event loop by
wrapping your inter-process lock in a regular async lock.

Wrt. performance, you can start with a simple client-server
implementation, for example where:
* single-threaded server listens on some port, accepts 1 connection at
a time, writes something on the connection and waits for connection to
be closed
* each client connects (connecting alone is not informative, due to the
listen backlog) and waits for data; when the client gets the data, it has
the lock
* when a client wants to release the lock, it closes the connection,
which unblocks the server
* socket communication is relatively easy to marry to the event loop :)
(a sketch of this scheme follows below)
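
For illustration, a rough asyncio sketch of that scheme (the port, the
one-byte grant protocol, and the choice to serialize grants with an
in-process asyncio.Lock instead of limiting accepted connections are all
assumptions):

import asyncio

LOCK_PORT = 8765  # hypothetical local port dedicated to one named lock

async def lock_server():
    grant = asyncio.Lock()  # serializes grants inside the server

    async def handle(reader, writer):
        async with grant:
            writer.write(b'L')           # "you now hold the lock"
            await writer.drain()
            await reader.read()          # returns b'' when the client closes
        writer.close()

    server = await asyncio.start_server(handle, '127.0.0.1', LOCK_PORT)
    async with server:
        await server.serve_forever()

async def run_locked(coro):
    # Client side: connect, wait for the grant byte, run, close to release.
    reader, writer = await asyncio.open_connection('127.0.0.1', LOCK_PORT)
    await reader.readexactly(1)          # blocks until the server grants the lock
    try:
        return await coro
    finally:
        writer.close()
        await writer.wait_closed()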

If you want high performance (i.e. low latency), you'd probably want
to go with futex, but that may prove hard to marry to asyncio
internals.
I guess locking can always be proxied through a thread, at some cost
to performance.


If performance is important, I'd suggest starting with a thread proxy
from the start. It could go like this:
Each named lock gets its own thread (in each process / event loop), a sync
lock and a condition variable.
When a coro wants to take the lock, it creates an empty Future,
briefly takes the sync lock, adds this future to the waiters, signals
the condition variable, and awaits this Future.
The thread wakes up, checks under the sync lock that there's someone in
the queue, tries to take a classical inter-process lock (SysV or file or
whatever), and when that succeeds, resolves the future using
loop.call_soon_threadsafe().
I'm omitting implementation details, like what happens if the Future is
leaked (discarded before it's resolved), how release is orchestrated, etc.
The key point is that offloading the locking to a dedicated thread reduces
the original problem to a synchronous inter-process locking problem.
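
For illustration, a compressed sketch of that thread-proxy idea, using
fcntl.flock on a lock file as the classical inter-process lock (the class
name and path are made up; release and cancellation handling are omitted
here too, as noted above):

import asyncio
import fcntl
import threading

class InterProcessLock:
    # One dedicated worker thread per named lock; coroutines await a Future
    # that the thread resolves once it holds the OS-level lock.
    def __init__(self, path):
        self._path = path
        self._cond = threading.Condition()
        self._waiters = []                      # (loop, future) pairs
        threading.Thread(target=self._worker, daemon=True).start()

    async def acquire(self):
        loop = asyncio.get_event_loop()
        fut = loop.create_future()
        with self._cond:
            self._waiters.append((loop, fut))
            self._cond.notify()
        # Resolves to the open lock file; closing it releases the lock.
        return await fut

    def _worker(self):
        while True:
            with self._cond:
                while not self._waiters:
                    self._cond.wait()
                loop, fut = self._waiters.pop(0)
            lock_file = open(self._path, 'w')
            fcntl.flock(lock_file, fcntl.LOCK_EX)   # blocks only this thread
            loop.call_soon_threadsafe(fut.set_result, lock_file)

Each waiter gets its own file object, so the next flock() call in the worker
naturally blocks until the previous holder closes its file, whether that
holder lives in the same process or in another one.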


Cheers!


On 17 April 2018 at 06:05, Ludovic Gasc  wrote:
> Hi,
>
> I'm looking for an equivalent of asyncio.Lock
> (https://docs.python.org/3/library/asyncio-sync.html#asyncio.Lock) but
> shared between several processes on the same server, because I'm migrating
> a daemon from a mono-worker to a multi-worker pattern.
>
> For now, the closest solution in terms of API seems to be aioredlock:
> https://github.com/joanvila/aioredlock#aioredlock
> But I'm not a big fan of polling or timeouts, because the lock I need is
> very critical; I'd rather block the code than unlock on a timeout.
>
> Am I missing a new awesome library, or do you have an easier approach?
>
> Thanks for your responses.
> --
> Ludovic Gasc (GMLudo)
>
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/


[Async-sig] asyncio.Lock equivalent for multiple processes

2018-04-16 Thread Ludovic Gasc
Hi,

I'm looking for an equivalent of asyncio.Lock
(https://docs.python.org/3/library/asyncio-sync.html#asyncio.Lock) but
shared between several processes on the same server, because I'm migrating
a daemon from a mono-worker to a multi-worker pattern.

For now, the closest solution in terms of API seems to be aioredlock:
https://github.com/joanvila/aioredlock#aioredlock
But I'm not a big fan of polling or timeouts, because the lock I need is
very critical; I'd rather block the code than unlock on a timeout.

Am I missing a new awesome library, or do you have an easier approach?

Thanks for your responses.
--
Ludovic Gasc (GMLudo)
___
Async-sig mailing list
Async-sig@python.org
https://mail.python.org/mailman/listinfo/async-sig
Code of Conduct: https://www.python.org/psf/codeofconduct/