Hi,

I have little experience with Beanstalk, so correct me if I'm wrong, but 
I've noticed that compaction tries to move still-usable jobs from the oldest 
binlog into the current one. When a binlog has no usable jobs left, beanstalkd 
removes it from the filesystem; otherwise the binlog stays in place and new 
jobs are written to a (possibly new) binlog.
This works very well for jobs that are produced and consumed quickly.

But with long-delayed jobs, a small binlog size, and high load, compaction 
can end up scattering usable jobs across several files (which seems to be 
your case), because the current binlog is too small and compaction fails to 
consolidate jobs into fewer files.

In my opinion, if you really need long-delayed jobs, you can try to:

- increase the binlog size
- or split the long delay by deleting and resubmitting the same job more 
frequently, in order to force it out of the oldest binlog.
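To make the second suggestion concrete, here is a minimal sketch of the idea. It deliberately assumes nothing about your client library: the helper only splits one long delay into shorter re-put intervals, so that each re-put writes the job into the current binlog and the old binlog file can eventually be compacted away. How you actually reserve, delete, and re-put the job depends on the client you use.

```python
def remaining_delays(total_delay, max_chunk):
    """Split a long delay into chunks no longer than max_chunk seconds.

    Instead of a single put with delay=total_delay, the job is deleted
    and re-put once per chunk; each re-put lands in the current binlog,
    so the oldest binlog file does not keep a live reference to it.
    """
    delays = []
    left = total_delay
    while left > 0:
        step = min(left, max_chunk)
        delays.append(step)
        left -= step
    return delays

# A 24-hour delay re-put once per hour becomes 24 chunks of 3600 s.
print(remaining_delays(24 * 3600, 3600))
```

For the first suggestion, the binlog file size is controlled by beanstalkd's -s option (the default is around 10 MB), so starting the daemon with a larger -s value gives compaction more room to move jobs into the current file.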

Cheers
Michele


-- 
You received this message because you are subscribed to the Google Groups 
"beanstalk-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/beanstalk-talk.
For more options, visit https://groups.google.com/d/optout.