Hi Nickolai,
Thanks for your suggestions, especially for the file system lock: we don't
take locks often, but when we do, we must be sure the resource is really locked.
Regarding suggestions 1) and 4): in fact we have several systems to sync, as
well as a PostgreSQL transaction, so the request must be handled by the same
worker from beginning to end.
Hi, a Redis lock has its own limitations and, depending on your use case, it
may or may not be suitable [1]. If possible I would redefine the problem and
also consider:
1) create worker per specific resource type to avoid locking
2) optimistic locking
3) File system lock like in Twisted, but not sure about p
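Suggestion 3 could be sketched roughly as below, using an advisory `fcntl.flock` lock (POSIX-only); the `FileLock` class name and the lock file path are illustrative, not something from Twisted itself:

```python
# Minimal sketch of a cross-process advisory file lock (POSIX only).
# Every process that opens the same lock file contends on the same lock.
import fcntl
import os

class FileLock:
    """Cross-process advisory lock backed by a lock file."""

    def __init__(self, path):
        self.path = path
        self._fd = None

    def acquire(self):
        # Each process opens its own descriptor on the shared path.
        self._fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        fcntl.flock(self._fd, fcntl.LOCK_EX)  # blocks until exclusive

    def release(self):
        fcntl.flock(self._fd, fcntl.LOCK_UN)
        os.close(self._fd)
        self._fd = None

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()

with FileLock("/tmp/my-daemon.lock"):
    pass  # critical section: only one process at a time gets here
```

Note that the blocking `acquire()` would still need to run off the event loop (e.g. in an executor) to be usable from asyncio code.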
Hi Roberto,
Thanks for the pointer, it's exactly the type of feedback I'm looking for:
ideas outside my comfort zone.
However, in our use case we are using Gunicorn, which to my knowledge uses
fork() rather than the multiprocessing module, so I can't use multiprocessing
without dropping Gunicorn.
Hi,
I don't know if there is a third party solution for this.
I think the closest you can get today with the standard library is a
multiprocessing.Manager().Lock() (which can be shared among processes) and to
call its blocking acquire() via loop.run_in_executor(), using a
ThreadPoolExecutor.
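A minimal sketch of that combination; the helper names (`guarded`, `demo`) and the executor size are illustrative, not part of any library API:

```python
# Sketch: a Manager() lock acquired in a thread pool so the
# asyncio event loop is never blocked by the acquire() call.
import asyncio
import multiprocessing
from concurrent.futures import ThreadPoolExecutor

async def guarded(lock, executor, value):
    loop = asyncio.get_running_loop()
    # The blocking acquire() runs in a worker thread, keeping the
    # event loop free for other tasks while we wait for the lock.
    await loop.run_in_executor(executor, lock.acquire)
    try:
        return value * 2  # stand-in for the real critical section
    finally:
        lock.release()

def demo():
    manager = multiprocessing.Manager()
    lock = manager.Lock()  # proxy object, shareable with child processes
    with ThreadPoolExecutor(max_workers=2) as executor:
        return asyncio.run(guarded(lock, executor, 21))
```

The Manager proxy has to be created in (or passed down from) a common parent so that all workers talk to the same manager process, which is exactly what is awkward when the workers are forked by an external master like Gunicorn.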
Hi,
I'm looking for an equivalent of asyncio.Lock (
https://docs.python.org/3/library/asyncio-sync.html#asyncio.Lock) that is
shared between several processes on the same server, because I'm migrating a
daemon from a mono-worker to a multi-worker pattern.
For now, the closest solution in terms of API seems