--
me at github: https://github.com/radiospiel
me at linked.in: https://www.linkedin.com/in/radiospiel

You probably considered this, but the queuing mechanism I use doesn't hold locks on records during processing. Workers claim tasks by locking them, setting a claimed flag of some sort (including the worker identity if desired), then releasing the lock - and repeating the general procedure once the work is completed.
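
In rough SQL, a minimal sketch of that claim-and-flag pattern might look like the following; the table and column names (jobs, status, worker_id) are illustrative, not taken from the mail, and it assumes FOR UPDATE SKIP LOCKED so concurrent workers step over rows that are being claimed at that moment:

    BEGIN;
    UPDATE jobs
       SET status = 'claimed', worker_id = 'worker-42'
     WHERE id = (
             SELECT id
               FROM jobs
              WHERE status = 'ready'
              ORDER BY id
              LIMIT 1
                FOR UPDATE SKIP LOCKED
           )
    RETURNING id;
    COMMIT;

    -- the job is then processed outside of any open transaction;
    -- once done, the worker records completion ($1 = the id returned above):
    UPDATE jobs SET status = 'done' WHERE id = $1;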

My volume is such that the bloat the extra update causes is not meaningful and is easily handled by (auto-)vacuum.

David J.

Hi David,

well, I thought about it and initially put it to rest, since I liked the idea that with a running job kept “inside” the transaction I would never have zombie entries: if the network connection to a client machine is somehow lost, the database simply rolls back the transaction, the job reverts to its “ready-to-run” state, and the next worker picks it up.
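
A sketch of that transactional variant, for comparison - again with made-up table and column names, and assuming SKIP LOCKED is what lets other workers step over a row that is currently held:

    BEGIN;
    SELECT id, payload
      FROM jobs
     WHERE status = 'ready'
     ORDER BY id
     LIMIT 1
       FOR UPDATE SKIP LOCKED;

    -- the worker processes the job here, while the transaction (and the
    -- row lock) stays open; if the connection drops, the server rolls
    -- back and the row becomes claimable by the next worker again
    DELETE FROM jobs WHERE id = $1;   -- or mark it done instead of deleting
    COMMIT;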

However, I will probably reconsider this, because it has quite a few advantages; setting a “processing” state, let alone keeping a worker identity next to the job, seems much more straightforward.

So how do you solve the “zombie” situation?

Best,
Enrico

