Hi Pat,

That's a good idea. A lot of updates often come in at once, so waiting until those updates have finished and then updating the index would be even better than 'my' approach, because with mine you might start indexing while the import process is still running (the processing duration varies with the number of updates).

The only note/question I could think of: with my approach you have thousands of single deletion jobs during the day (which doesn't sound too efficient ;)), while with your suggested suspended_delta approach, by the end of the day you'll have tens of thousands of records with the delta flag set (and therefore part of the delta index). I have no idea whether that would make the deletion job very 'expensive'? I suspect that deleting them all at once after reindexing will be much faster and more immediate, and will also shorten the window in which duplicate records exist in both the delta and core indexes.
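To make the trade-off concrete, here's a minimal plain-Ruby sketch of the two approaches. It only counts jobs; the job names and structures are invented stand-ins for illustration, not the actual Thinking Sphinx or delayed_job APIs:

```ruby
# Stand-in for the ids touched by one big import run.
update_ids = (1..1000).to_a

# My current approach: every update enqueues its own core-deletion job.
per_update_jobs = update_ids.map { |id| [:delete_from_core, id] }

# suspended_delta-style approach: accumulate delta flags during the
# import, then enqueue just two jobs once it finishes.
batched_jobs = [
  [:index_delta, update_ids],
  [:delete_flagged_from_core, update_ids]
]

puts "per-update: #{per_update_jobs.size} jobs"
puts "batched:    #{batched_jobs.size} jobs"
```

So for a 1000-record import the first approach enqueues a thousand jobs, the second always exactly two, regardless of import size.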
Is this something we could easily try, or would it involve quite a few changes to TS? As the resulting jobs would still take a while, it's probably a good idea to keep delayed_job processing them in the background too, right? Would that also require changes to ts-delayed-delta?

Thanks!!!
Gyuri

On Mar 23, 6:34 pm, Pat Allan <[email protected]> wrote:
> Hi Gyuri
>
> Are these updates all happening at once? Is there some modification to
> suspended_delta that will do the job? Can you update the deleted flags for
> the core index with every record that has delta set to true? This way, we
> only get two jobs (one index, one deletion), for as many updates as you like
> within the suspended_delta block.
>
> --
> Pat

--
You received this message because you are subscribed to the Google Groups "Thinking Sphinx" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/thinking-sphinx?hl=en.
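For readers following the thread: Pat's deleted-flags idea could look roughly like the sketch below. This is a hypothetical plain-Ruby simulation; the record hashes and the deleted-flag store are stand-ins for the Sphinx core index's attribute updates, not the real Thinking Sphinx internals:

```ruby
# Stand-in records: each has an id and a delta flag, set when the record
# was updated inside the suspended_delta block.
records = (1..10).map { |id| { id: id, delta: id.even? } }

# Stand-in for the core index's per-document "deleted" attribute.
core_deleted = Hash.new(false)

# One batched pass (instead of one job per record): flag every record
# that has delta set as deleted in the core index, so the delta index's
# fresh copy wins until the next full reindex.
records.select { |r| r[:delta] }.each { |r| core_deleted[r[:id]] = true }

flagged = core_deleted.select { |_id, deleted| deleted }.keys.sort
puts flagged.inspect
```

With ids 2, 4, 6, 8 and 10 flagged in one pass, this is the "one index job, one deletion job" shape Pat describes, however many updates happened inside the block.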
