I've looked at everything suggested (documentation, code, and scripts), and I follow almost everything that's happening.

If I understand correctly, when the updated snapshot is installed on the slave:

1) The existing index(es) being used by the Solr slave instance are physically deleted.
2) The new index snapshot is renamed/moved from its temporary installation location to the default index location.
3) The slave is sent a 'commit' to force a new IndexReader to start reading the new index.
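The steps above can be sketched roughly as follows. This is a hypothetical illustration, not the actual snapinstaller script (which ships in the Solr distribution's src/scripts directory); the paths and the commit URL are made up for the example.

```shell
# Illustrative sketch of the three install steps; paths are hypothetical.
DATA_DIR=/tmp/solr-demo/data
INDEX_DIR="$DATA_DIR/index"
SNAP_DIR="$DATA_DIR/snapshot.20071211090000"

# Set up a stale index and a fake snapshot so the sketch is runnable.
mkdir -p "$INDEX_DIR" "$SNAP_DIR"
touch "$INDEX_DIR/old-segments" "$SNAP_DIR/segments"

rm -rf "$INDEX_DIR"           # 1) physically delete the old index
mv "$SNAP_DIR" "$INDEX_DIR"   # 2) move the snapshot into the index location
# 3) send a commit so the slave opens a new IndexReader (URL illustrative)
echo "POST http://localhost:8983/solr/update  <commit/>"
```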

What happens to search requests against the existing/old index during step 1, and between steps 1 and 2? Where do they get information if they need to go to disk for results that are not cached? Do they (a) hang, (b) produce no results, or (c) error in some other way?


Regards,

Tracy

On Dec 11, 2007, at 9:11 AM, Tracy Flynn wrote:

That's what I was after.

As always, thanks for the quick response.

Tracy

On Dec 11, 2007, at 12:18 AM, Yonik Seeley wrote:

On Dec 10, 2007 11:22 PM, climbingrose <[EMAIL PROTECTED]> wrote:
I think there is an event listener interface for hooking into Solr events such as post commit, post optimise, and open new searcher. I can't remember off the top of my head, but if you do a search for *EventListener in Eclipse, you'll find it.
The Wiki shows how to trigger snapshooter after each commit and optimise. You should be able to follow this example to create your own listener.

Right... you shouldn't need to implement your own listeners though.
Search for postCommit in the example solrconfig.xml
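For reference, the postCommit listener in the example solrconfig.xml looks roughly like the fragment below (the exact arguments vary by release; the args/env values here are placeholders from the shipped example):

```xml
<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">snapshooter</str>
  <str name="dir">solr/bin</str>
  <bool name="wait">true</bool>
</listener>
```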

-Yonik

