robert engels <[EMAIL PROTECTED]> wrote:

> Then how can the commit during reopen be an issue?

This is what happens:

  * Reader opens latest segments_N & reads all SegmentInfos
    successfully.

  * Writer writes new segments_N+1, and then deletes now un-referenced
    files.

  * Reader tries to open the files referenced by segments_N and hits a
    FileNotFoundException (FNFE) on a file the writer just removed.

Lucene handles this fine (it just retries on the new segments_N+1),
but the patch in LUCENE-743 is now failing to decRef the Norm
instances when this retry happens.
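
Roughly, the retry pattern looks something like this (just a sketch to
illustrate what I mean, not the actual SegmentInfos code; the helper
names here are made up):

  import java.io.FileNotFoundException;
  import java.io.IOException;

  public class ReopenRetrySketch {

    // Hypothetical helper: lists the directory and returns the newest
    // segments_N file name.
    static String findLatestSegmentsFile() throws IOException {
      return "segments_1";  // placeholder
    }

    // Hypothetical helper: opens every file referenced by that commit;
    // throws FileNotFoundException if the writer already deleted one.
    static Object openCommit(String segmentsFile) throws IOException {
      return new Object();  // placeholder
    }

    static Object openWithRetry() throws IOException {
      while (true) {
        String segmentsFile = findLatestSegmentsFile();
        try {
          // Can hit FNFE if a writer committed segments_N+1 and then
          // removed the files referenced by segments_N.
          return openCommit(segmentsFile);
        } catch (FileNotFoundException fnfe) {
          // A newer commit must exist: re-list the directory and retry.
          // This is the point where LUCENE-743's reopen also has to
          // decRef any Norm instances it already incRef'd, or they leak.
        }
      }
    }
  }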

> I am not very familiar with this new code, but it seems that you need
> to write segments.XXX.new and then rename to segments.XXX.

We don't rename anymore (it's not reliable on Windows).  We write
straight to segments_N.
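
(For reference, the lockless-commits naming is roughly like this; just
an illustration, not Lucene's actual code.  As I recall the generation
is written in base 36:)

  public class SegmentsFileName {
    // e.g. generation 9 -> "segments_9", generation 10 -> "segments_a"
    static String fileNameForGeneration(long gen) {
      return "segments_" + Long.toString(gen, Character.MAX_RADIX);
    }

    public static void main(String[] args) {
      System.out.println(fileNameForGeneration(10));  // prints segments_a
    }
  }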

> As long as the files are sync'd, even on nfs the reopen should not
> see segments.XXX until it is ready.
>
> Although lockless commits are beneficial in their own right, I still
> think that people's understanding of NFS limitations is
> flawed. Read the section below on "close to open" consistency. There
> should be no problem using Lucene across NFS - even the old version.
>
> The write-once nature of Lucene makes this trivial.  The only
> problem was the segments file, which, if Lucene used the read/write
> lock and close() correctly, never would have been a problem.

Yes, in an ideal world, NFS servers and clients are supposed to
implement close-to-open semantics, but in my experience they do not
always succeed.  Previous versions of Lucene do in fact have problems
over NFS.  NFS also does not give you "delete on last close", which
Lucene normally relies on (unless you create a custom deletion policy).
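
If you do need readers on other NFS clients to keep working against an
older commit, something along these lines should do it (a rough sketch
against the deletion-policy API on trunk; I haven't tested this, and
the exact class names may differ in your version):

  import java.util.List;

  import org.apache.lucene.index.IndexCommitPoint;
  import org.apache.lucene.index.IndexDeletionPolicy;

  // Keep the last N commits around instead of deleting old ones
  // immediately, so a reader on another NFS client that still has an
  // older segments_N open does not hit FNFE.
  public class KeepLastNCommitsDeletionPolicy implements IndexDeletionPolicy {

    private final int numToKeep;

    public KeepLastNCommitsDeletionPolicy(int numToKeep) {
      this.numToKeep = numToKeep;
    }

    public void onInit(List commits) {
      deleteOldCommits(commits);
    }

    public void onCommit(List commits) {
      deleteOldCommits(commits);
    }

    // Commits are passed oldest first; delete all but the newest
    // numToKeep of them.
    private void deleteOldCommits(List commits) {
      int numToDelete = commits.size() - numToKeep;
      for (int i = 0; i < numToDelete; i++) {
        ((IndexCommitPoint) commits.get(i)).delete();
      }
    }
  }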

Mike
