Lock obtain timed out from an MDB
If this is a stupid question, I deeply apologize. I'm stumped. I have a message-driven EJB using Lucene. In *every* case where the MDB tries to create an index, I get "Lock obtain timed out". It's thrown in org.apache.lucene.store.Lock.obtain(Lock.java:58), which the user list has discussed before - but I don't see how the suggestions there apply to what I'm trying to do. (It's creating a lock file in /var/tmp/ properly, from what I can see, so I imagine it's not write permissions.) I set the infoStream on my IndexWriter to System.out, but I don't see any extra information. I'm using a SQL-based Directory object, but I get the same problem if I refer to a file directly.

Is there a way to override the Lock portably so that I can have the lock itself managed in an RDBMS? (It's a J2EE project, so relying on file access is problematic; if the beans using Lucene to write to the index are on multiple servers, multiple locks could exist anyway.)

---
Joseph B. Ottinger             http://enigmastation.com
IT Consultant
Re: Lock obtain timed out from an MDB
Sorry to reply to my own post, but I now have a greater understanding of PART of my problem - my SQLDirectory is not *quite* right, I think. So I'm rolling back to FSDirectory. To simplify things, I now have a servlet that writes to the filesystem (I'm not confident enough to debug the RDBMS-based directory yet; that's a task for later). The servlet reports that it successfully creates the index, roughly like so:

    try {
        // open the index with create=false
    } catch (file not found) {
        // open the index with create=true
    }
    index.optimize();
    index.close();

Now, when I fire off any messages to the MDB, it yields the following:

    java.io.IOException: Lock obtain timed out:
        Lock@/var/tmp/lucene-d6b0a3281487d1bc4d169d00426f475d-write.lock
        at org.apache.lucene.store.Lock.obtain(Lock.java:58)

This is with only two messages to the MDB, not a flood of messages. There are two handlers, so I expect a lock conflict in one case, but not on the first MDB call - that call should be the one creating the lock the second one sees, if a lock exists at all. I've verified that when the servlet that initializes the index runs, a lock file is NOT present - but again, it looks like every message fired through looks for a lock and finds one, when I would think it wouldn't be there. What am I not understanding?

On Thu, 6 Jan 2005, Joseph Ottinger wrote:
[...]

---
Joseph B. Ottinger             http://enigmastation.com
IT Consultant
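In concrete terms, the servlet's open-or-create logic maps onto the Lucene 1.x API roughly as below. This is a sketch only: the index path and method name are illustrative, and it assumes the era's IndexWriter(String, Analyzer, boolean) constructor and the FileNotFoundException raised when opening a nonexistent index.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class IndexBootstrap {
    // "/var/data/index" is an illustrative path, not from this thread.
    static void initIndex() throws IOException {
        IndexWriter index;
        try {
            // create=false: open an existing index; fails if none is there
            index = new IndexWriter("/var/data/index", new StandardAnalyzer(), false);
        } catch (FileNotFoundException e) {
            // create=true: build a fresh, empty index
            index = new IndexWriter("/var/data/index", new StandardAnalyzer(), true);
        }
        index.optimize();
        index.close(); // releases the write lock; without this, later opens time out
    }
}
```

The close() at the end is the step that matters for the rest of this thread: it is what removes the write lock file.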
Re: Lock obtain timed out from an MDB
Do you have two threads simultaneously writing to or deleting from the index?

    Erik

On Jan 6, 2005, at 9:27 AM, Joseph Ottinger wrote:
[...]
Re: Lock obtain timed out from an MDB
Well, I think I isolated the problem: a stupid error on my part. I was adding an indexed field that had, um, a value of null. Correcting that made the process go much more properly - although note that I haven't scaled up to having multiple elements to index. Good milestone, though. Shouldn't Lucene warn the user if they do something like this?

On Thu, 6 Jan 2005, Erik Hatcher wrote:
[...]
Re: Lock obtain timed out from an MDB
On Jan 6, 2005, at 10:41 AM, Joseph Ottinger wrote:
> Shouldn't Lucene warn the user if they do something like this?

When a user indexes a null? Or attempts to write to the index from two different IndexWriter instances? I believe you should get an NPE if you try to index a null field value. No?

    Erik
Re: Lock obtain timed out from an MDB
On Thu, 6 Jan 2005, Erik Hatcher wrote:
> I believe you should get an NPE if you try to index a null field value. No?

Well, I'd agree - the lack of an exception was rather disturbing, considering how badly it broke Lucene for the application (requiring not only a restart but cleanup as well). I don't know Lucene well enough to say what the code actually does... but NOT adding the null corrected the problem entirely.
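The defensive fix is to reject null field values before they ever reach the index. The stand-in builder below is illustrative only - it is not the Lucene API, just a runnable sketch of the "fail fast on null" check Joseph ended up needing:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stand-in for defensive field handling: refuse nulls up front with a clear
// error instead of letting them corrupt indexing later. (Not the Lucene API.)
class DocumentBuilder {
    private final Map<String, String> fields = new LinkedHashMap<>();

    /** Add a field, rejecting null names/values immediately. */
    DocumentBuilder add(String name, String value) {
        if (name == null || value == null) {
            throw new IllegalArgumentException(
                "null field " + (name == null ? "name" : "value for: " + name));
        }
        fields.put(name, value);
        return this;
    }

    Map<String, String> fields() {
        return fields;
    }
}
```

An IllegalArgumentException at add() time points straight at the bad document, instead of surfacing later as an index needing restart and cleanup.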
indexreader throwing IOException with lock obtain timed out
hi

I am updating the index and therefore need to delete documents before adding the updated version. This is how I delete the document, which is working fine:

    int deleteDoc = 0;
    deleteDoc = IndexReader.open(dstDir).delete(new Term("url", url));
    IndexReader.open(dstDir).close();

The writer after that throws an IOException: Lock obtain timed out.

    Analyzer analyzer = new StandardAnalyzer();
    IndexWriter writer = new IndexWriter(dstDir, analyzer, overwrite);

Am I missing anything? I have already closed the IndexReader before calling the writer.

Thanks

Sebastian
Re: indexreader throwing IOException with lock obtain timed out
Sorry guys, I have solved it. I should do this:

    int deleteDoc = 0;
    IndexReader reader = IndexReader.open(dstDir);
    deleteDoc = reader.delete(new Term("url", url));
    reader.close();

I just needed to use the same instance of the reader. (My original code opened a *second* reader and closed that one, leaving the first reader - and its lock - open.) Anyway, Lucene should just overwrite the old document during updating instead...

sebastian

On Mon, 2004-05-31 at 18:02, Sebastian Ho wrote:
[...]
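Put together, the whole delete-then-add update cycle under the Lucene 1.x API used in this thread looks roughly like the sketch below. The "url" field name follows Sebastian's code; the method name is illustrative, and the finally blocks are added so neither the reader's nor the writer's lock can leak:

```java
import java.io.File;
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class IndexUpdater {
    // Sketch: delete the document whose "url" field matches, then reindex it.
    static void updateDocument(File dstDir, String url, Document updated)
            throws IOException {
        // 1. Delete via a single reader instance, and close that same
        //    instance so its lock is actually released.
        IndexReader reader = IndexReader.open(dstDir);
        try {
            reader.delete(new Term("url", url));
        } finally {
            reader.close();
        }
        // 2. Only then open the writer (create=false keeps the existing index).
        IndexWriter writer = new IndexWriter(dstDir, new StandardAnalyzer(), false);
        try {
            writer.addDocument(updated);
        } finally {
            writer.close(); // releases the write lock on every path
        }
    }
}
```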
'Lock obtain timed out' even though NO locks exist...
I've noticed this really strange problem on one of our boxes. It's happened twice already. We have indexes where, when Lucene starts, it says 'Lock obtain timed out'... however NO locks exist for the directory. There are no other processes present and no locks in the index dir or /tmp.

Is there any way to figure out what's going on here? Looking at the index, it seems just fine... but this is only a brief glance. I was hoping that if it was corrupt (which I don't think it is) Lucene would give me a better error than "Lock obtain timed out".

Kevin

--
Please reply using PGP.          http://peerfear.org/pubkey.asc
NewsMonster - http://www.newsmonster.org/
Kevin A. Burton, Location - San Francisco, CA, Cell - 415.595.9965
AIM/YIM - sfburtonator, Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412
IRC - freenode.net #infoanarchy | #p2p-hackers | #newsmonster
RE: 'Lock obtain timed out' even though NO locks exist...
It is possible that a previous operation on the index left the lock open. Leaving the IndexWriter or Reader open without closing them (in a finally block) could cause this.

Anand

-----Original Message-----
From: Kevin A. Burton
Subject: 'Lock obtain timed out' even though NO locks exist...
[...]
Re: 'Lock obtain timed out' even though NO locks exist...
Which version of Lucene are you using? In 1.2, I believe the lock file was located in the index directory itself. In 1.3, it's in your system's tmp folder. Perhaps it's a permission problem on one of those folders - maybe your process doesn't have write access to the correct folder and is thus unable to create the lock file?

You can also pass Lucene a system property to increase the lock timeout interval, like so:

    -Dorg.apache.lucene.commitLockTimeout=60000
or
    -Dorg.apache.lucene.writeLockTimeout=60000

The above sets the timeout to one minute.

Hope this helps,

Jim

--- Kevin A. Burton [EMAIL PROTECTED] wrote:
[...]
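For context on what that timeout governs: the lock Lucene is timing out on is just a file created atomically on disk, and obtain() is a retry loop that polls until the file can be created or the timeout expires. A minimal stdlib sketch of the mechanism (class name, poll interval, and message wording are illustrative, not Lucene's actual code):

```java
import java.io.File;
import java.io.IOException;

// Simplified model of a file-based write lock: obtain() retries
// File.createNewFile() until it succeeds or the timeout expires.
class FileLockSketch {
    private static final long POLL_INTERVAL_MS = 50; // illustrative value

    private final File lockFile;

    FileLockSketch(File lockFile) {
        this.lockFile = lockFile;
    }

    /** Try to acquire the lock, polling until timeoutMs elapses. */
    void obtain(long timeoutMs) throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!lockFile.createNewFile()) {        // atomic create-if-absent
            if (System.currentTimeMillis() >= deadline) {
                throw new IOException("Lock obtain timed out: " + lockFile);
            }
            Thread.sleep(POLL_INTERVAL_MS);
        }
    }

    /** Release the lock by deleting the file. */
    void release() {
        lockFile.delete();
    }
}
```

This model also explains the "no locks exist" symptom in this thread: if the JVM dies between obtain() and release(), the file stays behind - possibly in a tmp directory other than the one you are looking in - and the next obtain() times out with no live process holding anything.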
Re: 'Lock obtain timed out' even though NO locks exist...
[EMAIL PROTECTED] wrote:
> It is possible that a previous operation on the index left the lock open. Leaving the IndexWriter or Reader open without closing them (in a finally block) could cause this.

Actually this is exactly the problem... I ran some single-index tests and a single process reads from it fine. The problem is that we were running under Tomcat with different webapps for testing, and we didn't run into this problem before. We had an 11G index that just took a while to open, and during this open Lucene was creating a lock. I wasn't sure Tomcat was multithreading this - maybe it is, and it's just taking longer to obtain the lock in some situations.

Kevin
Re: 'Lock obtain timed out' even though NO locks exist...
Kevin A. Burton wrote:
> Actually this is exactly the problem... [...]

This is strange... after removing all the webapps (besides one), Tomcat still refuses to allow Lucene to open this index, with "Lock obtain timed out". If I open it up from the console it works just fine. I'm only doing it with one index, and with a raised ulimit -n, so it's not a file-descriptor issue. Memory is 1G for Tomcat. If I figure this out I will be sure to send a message to the list. This is a strange one.

Kevin
Re: 'Lock obtain timed out' even though NO locks exist...
James Dunn wrote:
> Which version of Lucene are you using? In 1.2, I believe the lock file was located in the index directory itself. In 1.3, it's in your system's tmp folder.

Yes... 1.3, and I have a script that removes the locks from both dirs... This is only one process, so it's just fine to remove them.

> Perhaps it's a permission problem on either one of those folders. Maybe your process doesn't have write access to the correct folder and is thus unable to create the lock file?

I thought about that too... I have plenty of disk space, so that's not an issue. Also did a chmod -R, so that should work too.

> You can also pass Lucene a system property to increase the lock timeout interval [...]

I'll give that a try... good idea.

Kevin
RE: 'Lock obtain timed out' even though NO locks exist...
Not sure if our installation is the same or not, but we are also using Tomcat. I had a similar problem last week; it occurred after Tomcat went through a hard restart and some software errors had the website hammered. I found the lock file in /usr/local/tomcat/temp/ using locate. According to the README.txt, this is a directory created for the JVM within Tomcat - so it is a system temp directory, just inside Tomcat.

Hope that helps,
-Gus

-----Original Message-----
From: Kevin A. Burton
Subject: Re: 'Lock obtain timed out' even though NO locks exist...
[...]
Re: 'Lock obtain timed out' even though NO locks exist...
Gus Kormeier wrote:
> I found the lock file in /usr/local/tomcat/temp/ using locate. According to the README.txt, this is a directory created for the JVM within Tomcat.

Man... you ROCK! I didn't even THINK of that... Hm... I wonder if we should include the name of the lock file in the exception. That would probably have saved me a lot of time :) Either that or we can put this in the wiki.

Kevin
java.io.IOException: Lock obtain timed out
I am using Lucene 1.3 final and am having an error that I can't seem to shake. Basically, I am updating a Document in the index incrementally by calling an IndexReader to remove the document. This works. Then, I close the IndexReader with the following code:

    reader.unlock(reader.directory());
    reader.close();

I put the first of the two lines in to try to force the lock to release. According to the logging, this code is being called and the IndexReader is being closed. However, when I then open a writer to add the document, I get the following:

    java.io.IOException: Lock obtain timed out
        at org.apache.lucene.store.Lock.obtain(Lock.java:97)
        at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:173)
        at ...

I open the writer by calling:

    return new IndexWriter(INDEX_DIR, analyzer, false);

where analyzer = new StandardAnalyzer(). I get the reader by calling:

    IndexReader reader = IndexReader.open(INDEX_DIR);

Thanks for any help,
Gabe
Re: java.io.IOException: Lock obtain timed out
There is no need for that .unlock() call; just .close().

Otis

--- Gabe [EMAIL PROTECTED] wrote:
[...]
Re: java.io.IOException: Lock obtain timed out
Otis,

I only put the unlock call in because I had the error in the first place. Removing it, the IOException still occurs when trying to instantiate the IndexWriter.

Thanks,
Gabe

--- Otis Gospodnetic [EMAIL PROTECTED] wrote:
[...]
RE: java.io.IOException: Lock obtain timed out
Did you close your writer if an Exception occurred? I had a similar problem, but it was fixed when I closed the writer in a finally block. Below is my original code (which generated java.io.IOException: Lock obtain timed out when an Exception was thrown):

    public static void index(File indexDir, List cList, boolean ow) throws Exception {
        IndexWriter writer = null;
        try {
            writer = new IndexWriter(indexDir, new MyAnalyzer(), ow);
            // index documents
        } catch (Exception e) {
            writer = new IndexWriter(indexDir, new MyAnalyzer(), true);
            try {
                // index documents
            } catch (Exception ee) {
                throw ee;
            }
        }
        writer.close(); // never reached if the catch block throws
    }

And the revised code, to force a close on the IndexWriter:

    public static void index(File indexDir, List cList, boolean ow) throws Exception {
        IndexWriter writer = null;
        try {
            writer = new IndexWriter(indexDir, new MyAnalyzer(), ow);
            // index documents
            writer.close();
        } catch (Exception e) {
            writer = new IndexWriter(indexDir, new MyAnalyzer(), true);
            try {
                // index documents
            } catch (Exception ee) {
                throw ee;
            } finally {
                writer.close();
            }
        }
    }

-----Original Message-----
From: Gabe
Subject: Re: java.io.IOException: Lock obtain timed out
[...]
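The discipline above - release the resource on every exit path - can be demonstrated without Lucene at all. Here is a runnable stand-in sketch (all names illustrative) showing why the lock leaks when close() is not in a finally block:

```java
// Demonstrates why close() must live in a finally block: the stand-in
// "lock" below is released even when the work in between throws.
class CloseInFinally {
    static boolean lockHeld = false;

    static void acquire() { lockHeld = true; }

    static void release() { lockHeld = false; }

    /** Runs 'work'; with closeInFinally=true the lock is always released. */
    static void index(Runnable work, boolean closeInFinally) {
        acquire();
        if (closeInFinally) {
            try {
                work.run();
            } finally {
                release(); // runs on both the normal and the exception path
            }
        } else {
            work.run();
            release(); // skipped entirely if work.run() throws
        }
    }
}
```

After index(failingWork, false) throws, lockHeld stays true - the analogue of the stale write.lock that makes the next IndexWriter constructor time out.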
RE: java.io.IOException: Lock obtain timed out
I notice in your catch clause you always set the writer to be true... (i.e. new IndexWriter(INDEX_DIR, analyzer, true). If I am not mistaken reading the docs, this overwrites the entire index, no? That is why I was setting that variable to false when doing an incremental update. When I reindex all documents, I have had no problem. Gabe --- Nguyen, Tri (NIH/NLM/LHC) [EMAIL PROTECTED] wrote: Did you close your writer if an Exception occured? I had a similiar problem, but it was fixed when i close the writer in the finally block. Below is my original code (which generate Mjava.io.Exception: Lock obtain timed out when an Exception is thrown) public static void index(File indexDir, List cList, boolean ow) throws Exception{ IndexWriter writer = null; try{ writer = new IndexWriter(indexDir, new MyAnalyzer(), overwrite); // index documents } catch(Exception e){ writer = new IndexWriter(indexDir, new MyAnalyzer(), true); try{ // index documents } catch(Exception ee){ throw ee; } } writer.close(); // never reaches this statement if the catch block is called. } // revised code to force a close on the IndexWriter public static void index(File indexDir, List cList, boolean ow) throws Exception{ IndexWriter writer = null; try{ writer = new IndexWriter(indexDir, new MyAnalyzer(), overwrite); // index documents writer.close(); } catch(Exception e){ writer = new IndexWriter(indexDir, new MyAnalyzer(), true); try{ // index documents } catch(Exception ee){ throw ee; } finally{ writer.close(); } } } -Original Message- From: Gabe [mailto:[EMAIL PROTECTED] Sent: Monday, March 15, 2004 1:53 PM To: Lucene Users List Subject: Re: java.io.IOException: Lock obtain timed out Otis, I only put the unlock call in because I had the error in the first place. Removing it, the IOException still occurs, when trying to instantiate the IndexWriter. 
Thanks, Gabe

--- Otis Gospodnetic [EMAIL PROTECTED] wrote:

There is no need for that .unlock() call, just .close().

Otis

--- Gabe [EMAIL PROTECTED] wrote:

I am using Lucene 1.3 final and am having an error that I can't seem to shake. Basically, I am updating a document in the index incrementally, using an IndexReader to remove the document. This works. Then, I close the IndexReader with the following code:

    reader.unlock(reader.directory());
    reader.close();

I put the first of the two lines in to try to force the lock to be released. According to the logging, this code is being called and the IndexReader is being closed. However, when I then open a writer to add the document, I get the following:

java.io.IOException: Lock obtain timed out
    at org.apache.lucene.store.Lock.obtain(Lock.java:97)
    at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:173)
    at ...

I open the writer by calling:

    return new IndexWriter(INDEX_DIR, analyzer, false);

where analyzer = new StandardAnalyzer(). I get the reader by calling:

    IndexReader reader = IndexReader.open(INDEX_DIR);

Thanks for any help, Gabe
RE: java.io.IOException: Lock obtain timed out
I figured it out: an errant open IndexWriter.

--- Nguyen, Tri (NIH/NLM/LHC) [EMAIL PROTECTED] wrote:

Did you close your writer if an exception occurred? I had a similar problem, but it was fixed when I closed the writer in a finally block. [...]
RE: Lock obtain timed out
-Original Message-
From: Tatu Saloranta [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 16, 2003 7:23 AM
To: Lucene Users List
Subject: Re: Lock obtain timed out

On Tuesday 16 December 2003 03:37, Hohwiller, Joerg wrote:

Hi there, I have not yet got any response about my problem. While debugging into the depths of Lucene (really hard to read deep inside) I discovered that it is possible to disable the locks using a system property. ... Am I safe disabling the locking? Can anybody tell me where to get documentation about the locking strategy (I still would like to know why I have that problem)? Or does anybody know where to get an official example of how to handle concurrent index modification and searches?

One problem I have seen, and am still trying to solve, is that if my web app is terminated (running from the console during development, ctrl+c on Unix), sometimes a commit.lock file is left behind.

[Anand Stephen] You could register a JVM shutdown hook and release all resources when the app shuts down (including on ctrl+c). This works for me:

Thread t = new Thread(this.getClass().getName()) {
    public void run() {
        logger.info("Closing Lucene indexer; releasing resources.");
        try {
            if (writer != null) {
                logger.info("Writer is open, closing it!");
                writer.close();
            }
            if (reader != null) {
                logger.info("Reader is still open, closing it!");
                reader.close();
            }
        } catch (Exception e) {
            logger.info("Error occurred shutting down Lucene indexer: " + e.getMessage(), e);
        }
    }
};
Runtime.getRuntime().addShutdownHook(t);
Error occurrence Lock obtain timed out
I am using Lucene final. When I want to delete documents from the index with IndexReader.delete(i), where i is the document number in the index, a problem occurs while obtaining the lock: the lock file cannot be created in the obtain() method of FSDirectory.java, and the following error occurs: "Lock obtain timed out" in Lock.java's obtain(long lockWaitTimeout) method (the locked variable comes back false, i.e. no file was created during the obtain() call). Can somebody suggest what should be done to delete a document from the index and then reindex the same document? The document is an HTML file. With thanks.
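The delete-and-reindex sequence that eventually works for posters in this thread is: open an IndexReader, delete the document, close the reader so it releases the write lock, and only then open an IndexWriter to re-add the document. The sketch below models just that ordering; FakeIndex and its inner Deleter/Writer classes are hypothetical stand-ins (simulating Lucene's single write lock), not real Lucene API, and the try-with-resources syntax is modern Java written here for brevity.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a Lucene index: one write lock shared by
// deleting readers and writers, as in Lucene 1.3.
class FakeIndex {
    private boolean locked = false;             // stands in for write.lock
    final List<String> docs = new ArrayList<>();

    // A "reader" used for deletion must take the write lock,
    // just like a deleting IndexReader does.
    class Deleter implements AutoCloseable {
        Deleter() {
            if (locked) throw new IllegalStateException("Lock obtain timed out");
            locked = true;
        }
        void delete(String doc) { docs.remove(doc); }
        @Override public void close() { locked = false; }   // releases the lock
    }

    class Writer implements AutoCloseable {
        Writer() {
            if (locked) throw new IllegalStateException("Lock obtain timed out");
            locked = true;
        }
        void add(String doc) { docs.add(doc); }
        @Override public void close() { locked = false; }
    }

    // Incremental update: delete, close, THEN open the writer.
    // Holding both at once is exactly what times out in the mails above.
    void update(String oldDoc, String newDoc) {
        try (Deleter d = new Deleter()) { d.delete(oldDoc); }  // lock released here
        try (Writer w = new Writer()) { w.add(newDoc); }
    }
}
```

If the Deleter were still open when the Writer is constructed, the constructor would throw, which is the toy analogue of "Lock obtain timed out".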
Lock obtain timed out
Hi there, I have not yet got any response about my problem. While debugging into the depths of Lucene (really hard to read deep inside) I discovered that it is possible to disable the locks using a system property: when I start my application with -DdisableLuceneLocks=true, I do not get the error anymore. I just wonder if this is legal and won't cause other trouble? As far as I could understand from the source, proper thread synchronization is done using locks on Java objects, and the index-store locks seem to be required only if multiple Lucene instances (in different VMs) work on the same index. In my situation there is only one Java VM running and only one Lucene instance working on one index. Am I safe disabling the locking? Can anybody tell me where to get documentation about the locking strategy (I still would like to know why I have that problem)? Or does anybody know where to get an official example of how to handle concurrent index modification and searches?

Thank you so much, Jörg
RE: Lock obtain timed out
Hi. I got this exception when I had more than one thread trying to create an IndexWriter. I solved it by placing the code using the IndexWriter in a synchronized method.

Hope it helps, Gilles.

-Original Message-
From: Hohwiller, Joerg [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, 16 December 2003 11:37
To: [EMAIL PROTECTED]
Subject: Lock obtain timed out

Hi there, I have not yet got any response about my problem. [...]
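Gilles's fix, sketched as compilable code: route every indexing call through one synchronized method so two threads never compete for the index lock. The Lucene calls are reduced to a comment, IndexHolder is a hypothetical name, and the counters are only there to demonstrate that mutual exclusion holds.

```java
// Hypothetical holder that serializes all index writes within one JVM.
class IndexHolder {
    private int activeWriters = 0;   // writers currently inside index()
    int maxConcurrentWriters = 0;    // with "synchronized", never exceeds 1

    // "synchronized" admits one thread at a time, so the IndexWriter
    // constructor inside never finds the write lock already taken.
    synchronized void index(String doc) {
        activeWriters++;
        if (activeWriters > maxConcurrentWriters) maxConcurrentWriters = activeWriters;
        // ... open IndexWriter, addDocument(...), close() would go here ...
        activeWriters--;
    }
}
```

Note that this only helps when all writers live in the same JVM; Lucene's file lock exists precisely for the multi-process case.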
Re: Lock obtain timed out
Hohwiller, Joerg writes:

Am I safe disabling the locking?

No.

Can anybody tell me where to get documentation about the locking strategy (I still would like to know why I have that problem)?

I can only guess -- given your input I really have to guess, since the source you wanted to attach didn't make it to the list -- but your problem is that you cannot have a writing (deleting) IndexReader and an IndexWriter open at the same time. Only one instance can write to an index at a time. Disabling locking disables the checks, but then you have to take care of this yourself. So in practice, disabling locking is useful only for read-only access to static indices.

Morus
RE: Lock obtain timed out
Does this mean that if you can ensure that an IndexWriter and an IndexReader (doing deletion) are never open at the same time (e.g. by using a database lock instead of Lucene's locking), there will be no problem with removing locking? And if you do not use an IndexReader to do deletion, can you open and close it at any time?

David

-Original Message-
From: Morus Walter [mailto:[EMAIL PROTECTED]]
Sent: 16 December 2003 11:08
To: Lucene Users List
Subject: Re: Lock obtain timed out

Hohwiller, Joerg writes: Am I safe disabling the locking? No. [...]
RE: Lock obtain timed out
Hi there, thanks for your response, guys!

From the answers I gather that I must not have an IndexWriter and an IndexReader open at the same time if both want to modify the index - not even sequentially. What I have is the following:

1 thread works off events such as "resource (file or folder) was added/removed/deleted/etc.". All index modifications are synchronized against a write-lock object.

1 thread does index switching, which means it synchronizes on the write lock and then closes the modifying index-reader and index-writer. Next it copies that index completely and reopens the index-reader and -writer on the copied index. Then it syncs on the read lock, closes the index searcher, and reopens it on the index that was previously copied.

N threads perform search requests but sync against the read lock.

Since I can guarantee that there is only one thread working off the change events sequentially, the index-writer and index-reader will never do any concurrent modifications. This time I will attach my source as text in this mail to make sure it arrives. For those who do not know Avalon/Excalibur: it is a framework that will be the only one calling the configure/start/stop methods. No one can access the instance until it is properly created, configured and started, so synchronization is not necessary in the start method.

Thanks again, Jörg

/**
 * This is the implementation of the ISearchManager using Lucene as the underlying
 * search engine.<br/>
 * Everything would be so simple if Lucene were thread-safe for concurrently
 * modifying and searching on the same index, but it is not.<br/>
 * My first idea was to have a single index that is continuously modified and a
 * background thread that continuously closes and reopens the index searcher.
 * This should bring the most recent search results, but it did not work properly
 * with Lucene.<br/>
 * My strategy now is to have multiple indexes and to cycle over all of them
 * in a background thread, copying the most recent one to the next (least recent)
 * one. Index modifications are always performed on the most recent index,
 * while searching is always performed on the second most recent (copy of the) index.
 * This strategy results in less actuality (but still very acceptable actuality)
 * of search results. Further, it produces a lot more disk space overhead, but
 * with the advantage of having backups of the index.<br/>
 * Because the search must filter out the search results the user does not have
 * read access on, it can also filter out the results that do not exist anymore
 * without further cost.
 *
 * @author Joerg Hohwiller (jhohwill)
 */
public class SearchManager extends AbstractManager implements ISearchManager,
        IDataEventListener, Startable, Serviceable, Disposable, Configurable,
        Runnable, ThreadSafe {

    /**
     * A background thread switches/updates the index used for indexing
     * and/or searching. The thread sleeps this many milliseconds
     * until the next switch is done.<br/>
     * The shorter the delay, the more current the search results, but also the
     * more performance overhead is produced.<br/>
     * Be aware that the delay does not determine the index switching frequency,
     * because after sleeping for the delay, the index is copied and then switched.
     * The time required for this operation depends on the size of the index.
     * This also means that the bigger the index, the less current the
     * search results.<br/>
     * A value of 60 seconds (60 * 1000L) should be OK.
     */
    private static final long INDEX_SWITCH_DELAY = 30 * 1000L;

    /** the URI field name */
    public static final String FIELD_URI = "uri";

    /** the title field name */
    public static final String FIELD_TITLE = "dc_title";

    /** the text field name */
    public static final String FIELD_TEXT = "text";

    /** the read action */
    private static final String READ_ACTION_URI = "/actions/read";

    /** the name of the configuration tag for the index settings */
    private static final String CONFIGURATION_TAG_INDEXER = "indexer";

    /** the name of the configuration attribute for the index path */
    private static final String CONFIGURATION_ATTRIBUTE_INDEX_PATH = "index-path";

    /** the user used to access resources for indexing (global read access) */
    private static final String SEARCH_INDEX_USER = "indexer";

    /** the maximum number of search hits */
    private static final int MAX_SEARCH_HITS = 100;

    /** the default analyzer used for the search index */
    private static final Analyzer ANALYZER = new StandardAnalyzer();

    /**
     * the number of indexes used, must be at least 3:
     * <ul>
     * <li>one for writing/updating</li>
     * <li>one for read/search</li>
     * <li>one temporary where the index is copied to</li>
     * </ul>
     * All further indexes will act as extra backups of the index but will
     * also
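Jörg's read-lock/write-lock switching scheme can be expressed much more compactly with java.util.concurrent's ReentrantReadWriteLock (which did not exist in the 2003-era JDK he was using, so this is a modern sketch, with the Lucene indexes replaced by stand-in lists and SwitchingIndex a hypothetical name): writes go to a live index, searches hit a snapshot, and a background switch republishes the snapshot under the write lock.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Stand-in for the two-index rotation: "live" is the modified index,
// "snapshot" is the one searchers see.
class SwitchingIndex {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final List<String> live = new ArrayList<>();
    private List<String> snapshot = new ArrayList<>();

    // All modifications happen on the live index, under the write lock.
    void add(String doc) {
        lock.writeLock().lock();
        try { live.add(doc); } finally { lock.writeLock().unlock(); }
    }

    // The background "index switch": copy live and publish it for search.
    void switchIndex() {
        lock.writeLock().lock();
        try { snapshot = new ArrayList<>(live); }
        finally { lock.writeLock().unlock(); }
    }

    // N searcher threads share the read lock; they never see a half-copied index.
    boolean search(String doc) {
        lock.readLock().lock();
        try { return snapshot.contains(doc); }
        finally { lock.readLock().unlock(); }
    }
}
```

As in Jörg's design, a newly added document only becomes searchable after the next switch, which is the price paid for keeping modification and search on separate indexes.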
Re: Lock obtain timed out
On Tuesday 16 December 2003 03:37, Hohwiller, Joerg wrote:

Hi there, I have not yet got any response about my problem. While debugging into the depths of Lucene (really hard to read deep inside) I discovered that it is possible to disable the locks using a system property. ... Am I safe disabling the locking? Can anybody tell me where to get documentation about the locking strategy (I still would like to know why I have that problem)? Or does anybody know where to get an official example of how to handle concurrent index modification and searches?

One problem I have seen, and am still trying to solve, is that if my web app is terminated (running from the console during development, ctrl+c on Unix), sometimes a commit.lock file is left behind. The problem is that the method that appears to check whether there is a lock (and through which I subsequently ask for it to be removed via the API) apparently doesn't consider that file to be the lock (sorry for not having details - writing this from home without the source). So I'll probably see if disabling locks gets rid of this lock file (as I never have multiple writers, or even a writer and a reader, working on the same index... I always make a full file copy of the index before doing incremental updates), or physically delete commit.lock if necessary when starting the app. The problem I describe happens fairly infrequently, but that actually makes it worse... our QA people (on a different continent) have been bitten by it a couple of times. :-/

-+ Tatu +-
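The second option Tatu mentions - physically deleting a leftover commit.lock at startup - can be done in a few lines. This is safe only if the app is guaranteed to be the index's sole writer; note also that Lucene of this era may place lock files in java.io.tmpdir rather than the index directory (as the /var/tmp paths earlier in the thread show), so which directory to sweep is an assumption of this sketch, and StaleLockCleaner is a hypothetical helper name.

```java
import java.io.File;

// Hypothetical startup helper: remove stale Lucene-style *.lock files
// left behind by a killed process. Only safe when no other JVM can be
// holding a live lock on the same index.
class StaleLockCleaner {
    // Deletes *.lock files in the given directory; returns how many were removed.
    static int clearStaleLocks(File lockDir) {
        int removed = 0;
        File[] files = lockDir.listFiles();
        if (files == null) return 0;               // not a directory
        for (File f : files) {
            if (f.getName().endsWith(".lock") && f.delete()) removed++;
        }
        return removed;
    }
}
```

A shutdown hook (as shown earlier in the thread) prevents most stale locks; this sweep is the belt-and-braces recovery for the ctrl+c case the hook misses.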
Problems deleting documents from the index (Lock obtain timed out)
Hi there, I just subscribed to this list and have a little problem: I am using Lucene for incremental indexing (yes, I read the FAQ! Don't try to convince me to rebuild the index periodically from scratch :) ). Now, the problem seems to be that Lucene is not able to perform index modifications and parallel search requests. After my simple approaches failed, I finally implemented the recommended way: have an index that is modified, and create a copy of that index for searches. I do all this with proper thread synchronization (at least I hope so). Before I copy the index, I close the index-writer and index-reader working on that index, then copy and reopen the index-writer and -reader on the new copy. Next I close the index-searcher and reopen it on the index that was copied before. Now my problem is that when I receive a delete event and want to remove a document from the index by a special field (in my case the URI), I get an IOException with the message "Lock obtain timed out". I tried Lucene 1.3-rc1, 1.3-rc2 and 1.3-rc3, all with the same result. Any suggestions would be very welcome :)

Thank you so far, Jörg Hohwiller

BTW: I attached the relevant source code (but removed imports, etc., so that it does not contain any confidential information). Maybe this answers the first of your questions.