Just curious how others have dealt with the problem of duplicate jobs being 
added to beanstalkd. I'm designing a system that has to process jobs using 
beanstalkd, but I've come across the problem of how to handle duplicate jobs 
being put in. I've more or less settled on using memcachedb as a key-value 
store: I hash the job contents and check whether that hash already exists 
before putting the job into beanstalkd. I've gone as far as investigating 
whether it would be simple to write a beanstalkd proxy which does all this 
on the put command before passing the job through to the real beanstalkd 
backend.
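
For what it's worth, here's a rough sketch of the hash-and-check idea in 
Python. The names (put_unique, FakeQueue) are mine, a plain dict stands in 
for memcachedb, and the queue is a stub rather than a real beanstalkd 
connection; with a real memcachedb you'd want an atomic add() so two 
producers can't race past the check:

```python
import hashlib

# In-memory stand-in for memcachedb (hypothetical; swap in a real client).
seen_hashes = {}

def put_unique(queue, body):
    """Put `body` on the queue only if an identical job hasn't been seen.

    `queue` just needs a .put(body) method returning a numeric job id.
    Returns the job id for new jobs, or None for duplicates.
    """
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    # With memcachedb this check-and-set should be a single atomic add();
    # a plain dict membership test stands in for that here.
    if digest in seen_hashes:
        return None
    seen_hashes[digest] = True
    return queue.put(body)

class FakeQueue:
    """Stub standing in for a beanstalkd connection in this sketch."""
    def __init__(self):
        self.jobs = []

    def put(self, body):
        self.jobs.append(body)
        return len(self.jobs)  # beanstalkd-style numeric job id

q = FakeQueue()
put_unique(q, '{"task": "resize", "img": 42}')  # enqueued
put_unique(q, '{"task": "resize", "img": 42}')  # duplicate, skipped
```

The proxy idea would just be this same logic sitting in front of the put 
command on the wire.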

I was also curious how beanstalkd handles jobs internally: does beanstalkd 
hash the job and use that as a unique identifier, or does it simply use 
numerical ordering internally? (I assume the latter, as all commands require 
you to specify a job by numerical id.)

How has anyone else dealt with this particular problem? I'd appreciate any 
advice.

Cheers!
Ben

-- 
You received this message because you are subscribed to the Google Groups 
"beanstalk-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/beanstalk-talk.
For more options, visit https://groups.google.com/groups/opt_out.
