Lance,
A belated thanks for explaining why the delete works when
replacing the slave index copies.
Tracy
On Dec 17, 2007, at 6:05 PM, Norskog, Lance wrote:
It works via two Unix file system tricks.
1) Files are not directly bound to filenames; instead, there is a
layer of indirection called an 'inode'. So multiple file and directory
names can point to the same physical file. The "." and ".." directory
entries are implemented this way.
2) Physical files remain bound to their open file descriptors even
after there are no file names left for them. So the file's data exists
until all file names are gone AND all open file descriptors are closed.
Lance
-----Original Message-----
From: Tracy Flynn [mailto:[EMAIL PROTECTED]
Sent: Saturday, December 15, 2007 7:36 AM
To: solr-user@lucene.apache.org
Subject: Re: Replication hooks - changing the index while the slave is
running ...
That helps
Thanks for the prompt reply
On Dec 15, 2007, at 10:15 AM, Yonik Seeley wrote:
On Dec 14, 2007 7:36 PM, Tracy Flynn
<[EMAIL PROTECTED]> wrote:
1) The existing index(es) being used by the Solr slave instance are
physically deleted
2) The new index snapshots are renamed/moved from their temporary
installation location to the default index location
3) The slave is sent a 'commit' to force a new IndexReader to start
to read the new index.
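The three steps above can be sketched as follows. This is an
illustration, not Solr's actual installer: the directory names
(`index/`, `snapshot/`) and the `send_commit` callback are assumptions
for the sketch.

```python
import os
import shutil
import tempfile

def install_snapshot(index_dir, snapshot_dir, send_commit):
    """Replace the live index with a new snapshot (steps 1-3)."""
    # 1) Physically delete the existing index files. Any searcher that
    #    still holds them open keeps working, because Unix keeps open
    #    files alive until the last descriptor is closed.
    shutil.rmtree(index_dir)
    # 2) Move the new snapshot from its temporary location into the
    #    default index location.
    os.rename(snapshot_dir, index_dir)
    # 3) Tell the slave to commit so a new IndexReader opens the new index.
    send_commit()

# Demo with temporary directories standing in for the real locations.
root = tempfile.mkdtemp()
index_dir = os.path.join(root, "index")
snapshot_dir = os.path.join(root, "snapshot")
os.mkdir(index_dir)
os.mkdir(snapshot_dir)
with open(os.path.join(index_dir, "old.seg"), "w") as f:
    f.write("old")
with open(os.path.join(snapshot_dir, "new.seg"), "w") as f:
    f.write("new")

install_snapshot(index_dir, snapshot_dir, send_commit=lambda: None)
print(sorted(os.listdir(index_dir)))
```

After the swap, only the snapshot's files remain under the index
location, and the temporary snapshot directory is gone.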
What happens to search requests against the existing/old index
during
step 1) and between steps 1 and 2?
Search requests will still work on the old searcher/index.
Where do they get information if
they need to go to disk for results that are not cached? Do they a)
hang b) produce no results c) error in some other way?
A Lucene IndexReader keeps open all the files that aren't loaded into
memory... and external deletion has no effect on its ability to keep
reading those open files (they aren't really deleted yet).
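That behavior can be demonstrated in isolation: delete a file while a
reader still has it open, and the reader is unaffected. A minimal Python
sketch (assumes a Unix filesystem; the plain file handle stands in for
an IndexReader's open segment files):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("segment data")

reader = open(path)  # stand-in for an open IndexReader file
os.remove(path)      # external deletion: the name is gone...

assert not os.path.exists(path)         # no directory entry any more
assert reader.read() == "segment data"  # ...but reading still works
reader.close()  # only now can the kernel reclaim the data
```

The kernel frees the file's blocks only once the last open descriptor
is closed, which is why a searcher on the old index never sees an error
mid-query.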
-Yonik