RE: index problem with write lock
I think I had the same problem (the same error, at least) and submitted a patch. The patch adds a new config option to use the NIO locking facilities instead of the default Lucene locking. In the week or so since applying the patch I haven't seen the issue (YMMV): https://issues.apache.org/jira/browse/SOLR-240

- will

-----Original Message-----
From: Chris Hostetter [mailto:[EMAIL PROTECTED]
Sent: Friday, May 25, 2007 1:50 AM
To: solr-user@lucene.apache.org
Subject: Re: index problem with write lock

: i know how to fix it.
:
: but i just don't know why it happens.
:
: this is the solr error information:
:
: Exception during commit/optimize: java.io.IOException: Lock obtain timed
: out: SimpleFSLock@/usr/solrapp/solr21/data/index/write.lock

That's the problem you see ... but in normal Solr operation there's no reason why there should be any problem getting the write lock -- Solr only ever makes one IndexWriter at a time. Which is why I asked about any other errors earlier in your log (possibly much earlier) to indicate *abnormal* Solr operation.

-Hoss
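[Editor's note: the NIO-based locking that SOLR-240 proposes eventually became a configurable lock type in solrconfig.xml. A sketch of what that setting looks like in later Solr releases is below; the exact element name and placement for the patched build discussed in this thread may differ, so check the patch attached to the JIRA issue:]

```xml
<!-- solrconfig.xml: choose the Lucene lock factory implementation.
     "native" uses java.nio file locks (NativeFSLockFactory);
     "simple" is the SimpleFSLock seen in the error above.
     Element placement here follows later Solr releases and may not
     match the SOLR-240 patch exactly. -->
<mainIndex>
  <lockType>native</lockType>
</mainIndex>
```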
Re: index problem with write lock
I find it always happens after indexing has been running for a while -- for example, 1 to 2 hours after the index starts.

2007/5/24, James liu [EMAIL PROTECTED]:

I find one interesting thing.

When I index data with 45 Solr boxes (the data is 1700w [17 million] records; freebsd6, java: diablo-1.5.0_07-b01, tomcat6), the write lock error happens during the procedure. But reindexing with just the Solr box that had the write lock problem works fine.

It has happened several times, so I want to know why -- in theory it should be ok. Not every Solr box has this problem, and all 45 Solr boxes have the same config and an empty index (I created them by copying, just renaming the directory).

Anyone have the same problem and know why?

--
regards
jl

--
regards
jl
Re: index problem with write lock
2007/5/25, Chris Hostetter [EMAIL PROTECTED]:

: when i index data with 45 solr boxs.(data have 1700w, freebsd6, java:
: diablo-1.5.0_07-b01, tomcat6), write lock will happen in the procedure.

1) bug reports about errors are nearly useless without a real error message including a stack trace.

2) what do you mean you "index data with 45 solr boxes" ... are you running 45 separate instances of Solr and indexing on all of them independently? if so, why does the number matter? ... it sounds like you are describing a problem you would have after a while even if there was only 1 Solr server, right?

"45 solr boxes" means 45 separate instances of Solr. I mention the number because my code uses a for loop to index, like this:

    for ($i = 0; $i < 45; $i++) {
        doIndex($i);
    }

The 1700w (17 million) records are divided into 45 parts and sent to the 45 Solr instances.

typically when i see problems with write locks it's because Solr crashed (usually from an OOM) and then the container restarted it, but the stale write lock was still on disk ... have you checked your logs for other previous exceptions?

I know how to fix it, but I just don't know why it happens. This is the solr error information:

Exception during commit/optimize: java.io.IOException: Lock obtain timed out: SimpleFSLock@/usr/solrapp/solr21/data/index/write.lock

-Hoss

Thanks hoss.

--
regards
jl
Re: index problem with write lock
: i know how to fix it.
:
: but i just don't know why it happens.
:
: this is the solr error information:
:
: Exception during commit/optimize: java.io.IOException: Lock obtain timed
: out: SimpleFSLock@/usr/solrapp/solr21/data/index/write.lock

That's the problem you see ... but in normal Solr operation there's no reason why there should be any problem getting the write lock -- Solr only ever makes one IndexWriter at a time. Which is why I asked about any other errors earlier in your log (possibly much earlier) to indicate *abnormal* Solr operation.

-Hoss
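[Editor's note: the failure mode Hoss describes -- a stale write.lock left on disk after a crash, so every later writer times out -- can be illustrated with a minimal, stdlib-only Python sketch. The class name, timeouts, and paths below are illustrative, not Lucene's actual implementation; the point is only that in a SimpleFSLock-style design the lock file's existence *is* the lock:]

```python
import os
import tempfile
import time


class SimpleFileLock:
    """Toy model of a SimpleFSLock-style lock: the lock is held iff the
    lock file exists, so a crash that skips release() leaves a stale
    lock file that blocks every subsequent writer until it is deleted."""

    def __init__(self, path, timeout=1.0, poll=0.05):
        self.path = path
        self.timeout = timeout
        self.poll = poll

    def obtain(self):
        deadline = time.monotonic() + self.timeout
        while True:
            try:
                # O_CREAT | O_EXCL atomically creates the lock file,
                # failing if it already exists (held -- or stale).
                fd = os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.close(fd)
                return
            except FileExistsError:
                if time.monotonic() >= deadline:
                    raise IOError("Lock obtain timed out: " + self.path)
                time.sleep(self.poll)

    def release(self):
        os.remove(self.path)


if __name__ == "__main__":
    lock_path = os.path.join(tempfile.mkdtemp(), "write.lock")

    # Simulate a crashed writer: the lock file exists, but nothing
    # will ever release it.
    open(lock_path, "w").close()

    lock = SimpleFileLock(lock_path, timeout=0.3)
    try:
        lock.obtain()
    except IOError as e:
        print(e)  # Lock obtain timed out: .../write.lock
```

This also shows why deleting the stale write.lock "fixes" the symptom without explaining the crash that left it behind, which is exactly the distinction Hoss is drawing.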