I searched around in the group and Google and was surprised to see
that no one else had run into this issue. Then I realized that I
soft-delete everything in my app, which meant that delayed_job was
deleting the job, but the record still existed.

I updated my app and monkey patched DelayedJob to do hard deletes.
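
For anyone who hits the same thing, here's a minimal sketch of the idea. The classes below are stand-ins: in the real app the soft-delete behaviour comes from a plugin (acts_as_paranoid in my case) and the job class is Delayed::Job, but the pattern is the same - reopen the job class so its #destroy bypasses the soft-delete override.

```ruby
require "time"

# Hypothetical stand-in for the soft-delete plugin's behaviour:
# destroy only flags the row instead of removing it.
module SoftDelete
  attr_reader :deleted_at

  def destroy
    @deleted_at = Time.now # soft delete: mark the row, keep it around
  end

  def deleted?
    !@deleted_at.nil?
  end
end

class Job
  include SoftDelete
end

# The monkey patch: a method defined directly on the class takes
# precedence over the included module's, so workers remove finished
# jobs outright instead of leaving soft-deleted rows behind.
class Job
  def destroy
    @hard_deleted = true # stand-in for an unscoped SQL DELETE
  end

  def hard_deleted?
    !!@hard_deleted
  end
end

job = Job.new
job.destroy
puts job.deleted?      # => false (the soft-delete path was bypassed)
puts job.hard_deleted? # => true
```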

Interestingly enough, MySQL still complains about the query being
slow, but I believe that's because it will always have to do a table
scan due to the way the query is built.

Either way, things are looking better on my end.

Thanks.

On Aug 11, 12:28 am, Pat Allan <[email protected]> wrote:
> Surely others using delayed job heavily will have hit this problem before? 
> Maybe worth hunting around their google group? 
> http://groups.google.com/group/delayed_job
>
> --
> Pat
>
> On 11/08/2010, at 5:21 PM, gmoniey wrote:
>
>
>
> > Thanks. I just wanted to make sure that it wasn't anything related to
> > ts.
>
> > I will look into the delayed_delta source. Another thought I had was
> > to delete the jobs to keep the table size small, but I'm not convinced
> > that's the best idea, as 175k is a small number of rows, and I am
> > already running into perf issues.
>
> > On Aug 10, 5:11 pm, Pat Allan <[email protected]> wrote:
> >> On 11/08/2010, at 5:17 AM, gmoniey wrote:
>
> >>> 1. The delayed_job query is running every second. Is there  a way to
> >>> reduce the frequency?
>
> >> Delayed::Worker.sleep_delay = 60
>
> >> That's from the docs of the collectiveidea fork of delayed_job, but I 
> >> think it's what's used for the gems these days. It might exist in the 
> >> original, too.
>
> >>> 2. It seems that the run_at <= now() prevents any indexes to have an
> >>> effect, as it results in a full table scan (the size of my table is
> >>> 174307)
>
> >>> I'm wondering if it's possible to clean up this query a little bit to
> >>> help take advantage of indexes, but I don't know where to look.
>
> >> Again, this is managed by Delayed Job, not TS/ts-delayed-delta. Off the 
> >> top of my head, though, I can't think of any quick fixes to the query - 
> >> you could add a boolean column 'run', set that to true on success, but 
> >> that involves forking delayed job. Up to you, really :)
>
> >> --
> >> Pat
>
> > --
> > You received this message because you are subscribed to the Google Groups 
> > "Thinking Sphinx" group.
> > To post to this group, send email to [email protected].
> > To unsubscribe from this group, send email to 
> > [email protected].
> > For more options, visit this group at 
> > http://groups.google.com/group/thinking-sphinx?hl=en.
