I have a Django-based app running on Heroku.  One web request can generate 
hundreds of sub-jobs, which are queued up through django-rq and RedisToGo. 
 Over time, I've noticed that the rate of processing from the queue drops, 
and if I look at the admin page in django-rq, it says that there are fewer 
and fewer workers drawing from the queue. 

This strikes me as strange for two reasons:

1) The number of dynos that are supposedly working the queue remains 
constant in the heroku web interface
2) The queue-handling jobs themselves are written to handle any 
unexpected error condition.  They should never stop; they should log an 
error and move on to the next job.
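
For context, the jobs follow roughly this pattern (a minimal sketch; 
`do_work` and `process_subjob` are hypothetical names standing in for my 
actual job code):

```python
import logging

logger = logging.getLogger(__name__)

def do_work(item):
    # Placeholder for the real per-item work; raises on bad input.
    if item is None:
        raise ValueError("bad item")
    return item

def process_subjob(item):
    # Catch-all wrapper: one failing item should never stop the worker.
    # Log the error and report failure so the worker moves on to the
    # next queued job.
    try:
        do_work(item)
        return True
    except Exception:
        logger.exception("sub-job failed for %r; moving on", item)
        return False
```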

Has anyone seen anything like this before?  Any idea how I can keep my 
jobs going without having to restart the dynos?


-- 
-- 
You received this message because you are subscribed to the Google
Groups "Heroku" group.

To unsubscribe from this group, send email to
heroku+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/heroku?hl=en_US?hl=en
