Hi Chad

On Tuesday, October 30, 2012 5:13:57 PM UTC+1, chadkouse wrote:
>
>  An easy workaround is to add a flag to the cloned job to tell your 
> consumer to bury it as soon as it picks it up. 
>

I have thought about that too, but it all just becomes too messy. You 
cannot modify the data of a buried job when you kick it, so if you set a 
bury flag, you have to remove it again too. There are thus two downsides:

   1. The job gets queued again until it can be buried. If you have a long 
   queue because of a large traffic spike, a problem job that must be buried 
   can take quite some time before it reaches the bury list for inspection;
   2. If the job gets a flag, you must remove the flag when you kick it 
   again. That means the job is cloned once more: removed from the queue and 
   inserted again with a put command.
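For illustration, the flag workaround would look something like this on the 
consumer side (a minimal sketch; the `bury_on_pickup` flag name and the JSON 
body layout are my own assumptions, not part of any protocol):

```python
import json

def handle(job_body):
    """Decide what the consumer does with a reserved job.

    Returns the action name instead of talking to beanstalkd,
    to keep the sketch self-contained.
    """
    payload = json.loads(job_body)
    if payload.get("bury_on_pickup"):
        # Step 1 of the workaround: bury immediately on pickup.
        return "bury"
    return "process"

def kick_with_clean_flag(job_body):
    """Step 2: to kick the job, the flag must be stripped first, which
    means deleting the buried job and re-inserting a clone with put --
    and the clone gets a brand new job id."""
    payload = json.loads(job_body)
    payload.pop("bury_on_pickup", None)
    return json.dumps(payload)  # body for the new put command
```

Every round trip here is a chance for the clone to diverge from the original 
job, which is exactly the messiness described above.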

There are too many steps that can all go wrong, just in order to append 
some data to the job. Also, with clones of the original job floating 
around, you cannot rely on the job id anymore. I am building a bridge that 
other developers can use in their code. If anyone uses the job id to track 
the job over time (just to name an example), it is not transparent that the 
buried job is completely unrelated to the inserted job. Or the kicked one. 
Or the released one.

If you inspect the protocol, the put command is this:

put <pri> <delay> <ttr> <bytes>\r\n
<data>\r\n

The bury command is this:

bury <id> <pri>\r\n
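For illustration, the two frames serialize like this on the wire (a minimal 
sketch of the request side only; beanstalkd answers put with INSERTED 
<id>\r\n and bury with BURIED\r\n or NOT_FOUND\r\n):

```python
def put_frame(pri, delay, ttr, data):
    # put <pri> <delay> <ttr> <bytes>\r\n<data>\r\n
    header = f"put {pri} {delay} {ttr} {len(data)}\r\n"
    return header.encode() + data + b"\r\n"

def bury_frame(job_id, pri):
    # bury <id> <pri>\r\n -- note: no data section at all today
    return f"bury {job_id} {pri}\r\n".encode()
```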

You could turn that into this, taking care of backwards compatibility 
too:

bury <id> <pri>\r\n
<data>\r\n

The data would overwrite any previously set data for this job. If no data 
is sent, the original data remains.
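Serializing the proposed command could then look like this (purely 
hypothetical -- this variant does not exist in beanstalkd, and a real 
implementation would probably also need a <bytes> count in the header, just 
like put has):

```python
def bury_frame(job_id, pri, data=None):
    # bury <id> <pri>\r\n              -> existing form, data unchanged
    # bury <id> <pri>\r\n<data>\r\n    -> proposed form, data overwritten
    frame = f"bury {job_id} {pri}\r\n".encode()
    if data is not None:
        frame += data + b"\r\n"
    return frame
```

Old clients simply never pass the data argument, so the existing frame is 
produced unchanged.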

As said, I am willing to hire a C expert to take care of this and another 
feature I would really like to see (a peek-buried-all or so, returning a 
list of ids of all buried jobs, which I can then fetch one by one with 
peek <id>). But before I can take that step, I think I need some more 
insight into how this all works, whether the repository owner is willing to 
accept such pull requests, and whether I am not overlooking something in 
this process. Or perhaps there is another job queue manager that handles 
this aspect better than beanstalkd?

What we normally do with buried jobs is to just run the same payload 
> against a consumer in our dev environment and observe what happens. We also 
> log all exceptions (by just echoing out from our consumer) so we normally 
> can easily tell exactly why a buried job failed. 
>

Well, the reason a job is buried is highly influenced by the environment. 
I am talking about a web app here, where jobs are things like "create 
pdf", "send email" and "resize image". If I connect to a third-party 
service and it fails, I retry (a release with a delay). I keep a counter 
and after x retries I bury the job.
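The release-with-delay-then-bury pattern above can be sketched as follows 
(the counter lives outside the job body, since neither release nor bury can 
change a job's data; the limit of 3 and the delay of 60 seconds are my own 
placeholder values for the "x retries"):

```python
import collections

MAX_ATTEMPTS = 3   # the "x retries" from the text; arbitrary here
RETRY_DELAY = 60   # seconds before a released job becomes ready again

attempts = collections.Counter()  # job id -> number of failures so far

def on_failure(job_id):
    """Return the command to send after a 3rd-party call failed."""
    attempts[job_id] += 1
    if attempts[job_id] >= MAX_ATTEMPTS:
        return ("bury", 0)           # give up; park it for inspection
    return ("release", RETRY_DELAY)  # retry later with a delay
```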

I do not want the app to loop infinitely trying to connect to a 
third-party service, and a user might want to check whether something has 
gone wrong there. I expect many cases where I bury jobs, and I do not want 
to sift through logs to see what went wrong.
---
Jurian Sluiman

-- 
You received this message because you are subscribed to the Google Groups 
"beanstalk-talk" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/beanstalk-talk/-/Vs1CMP2jdHsJ.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/beanstalk-talk?hl=en.
