I am betting that if your remote locking has issues, you will have similar problems (since your new code requires an accurate reading of the directory to determine the "latest" files). I also believe that directory reads like this are very inefficient in most cases.

OK, I will test the cost with benchmarks...

I think these proposed changes are invalid. I suggest using a pluggable lock provider that uses the OS-level lock methods available with FileChannel to ensure lock consistency. If your OS is not honoring these, you probably need the changes to be performed there (and not in Lucene).
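For reference, an OS-level lock via FileChannel looks roughly like the sketch below: tryLock() asks the OS for an exclusive lock and returns null if another process already holds it. This is a minimal standalone example, not Lucene's actual lock provider; the class and file names are made up for illustration.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class NativeLockCheck {

    // Try to acquire an exclusive OS-level lock on the given file.
    // tryLock() is non-blocking and returns null when another process
    // already holds the lock; here we release immediately after acquiring.
    public static boolean tryAcquire(File lockFile) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(lockFile, "rw");
             FileChannel channel = raf.getChannel()) {
            FileLock lock = channel.tryLock();
            if (lock == null) {
                return false; // held by another process
            }
            lock.release();
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        File f = new File("demo.lock"); // hypothetical lock file name
        System.out.println(tryAcquire(f) ? "acquired" : "held elsewhere");
        f.delete();
    }
}
```

Whether such a lock is actually honored across machines depends entirely on the filesystem and server configuration, which is exactly the concern with NFS below.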

Yes I agree, and this is in process:

  http://issues.apache.org/jira/browse/LUCENE-635

I think even if we can do lock-less commits, we would still want to use native locks for the write locks.

I'm also working on an OS level locking implementation that subclasses LockFactory. However on an initial test I found that my test NFS server (just a default Ubuntu 6.06 install) does not have locking enabled (though it is an option if I reconfigure it, run it in kernel mode, etc.). Then there was this spooky attempt in the past to use OS level locking over NFS:

  http://marc2.theaimsgroup.com/?l=lucene-dev&m=108322303929090&w=2

Hopefully that particular failure was from bugs in the JVM.

Anyway, the conclusion I generally come to when working with file locks is that there are always many system-level nuances / issues / challenges in getting them to work properly. So if we can use a lock-less protocol for our commits, we can avoid all of the corresponding problems.
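The lock-less idea can be sketched as follows: each commit writes a new immutable file whose name carries a generation number, and a reader just lists the directory and opens the highest generation, with no read lock at all. The file-name pattern and decimal suffix here are illustrative assumptions, not Lucene's actual on-disk format.

```java
public class LatestCommitSketch {

    // Sketch of a lock-less commit protocol: commits write immutable files
    // named "segments_1", "segments_2", ... and a reader picks the highest
    // generation. No coordination with writers is needed because committed
    // files are never modified in place. (Names and the decimal generation
    // suffix are assumptions made for this example.)
    public static String latestCommit(String[] fileNames) {
        String best = null;
        long bestGen = -1;
        for (String name : fileNames) {
            if (!name.startsWith("segments_")) continue;
            long gen;
            try {
                gen = Long.parseLong(name.substring("segments_".length()));
            } catch (NumberFormatException e) {
                continue; // not a commit file
            }
            if (gen > bestGen) {
                bestGen = gen;
                best = name;
            }
        }
        return best;
    }
}
```

Of course, this is exactly why an accurate directory listing matters for the lock-less approach, as noted at the top of the thread.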

Mike
