2007/5/25, Chris Hostetter <[EMAIL PROTECTED]>:
: when i index data with 45 solr boxs. (data have 1700w, freebsd6, java:
: diablo-1.5.0_07-b01, tomcat6), write lock will happen in the procedure.

1) Bug reports about errors are nearly useless without a real error message including a stack trace.

2) What do you mean you "index data with 45 solr boxs"? Are you running 45 separate instances of Solr and indexing on all of them independently? If so, why does the number matter? It sounds like you are describing a problem you would have after a while even if there was only 1 Solr server, right?
"45 solr boxs" means 45 separate instances of Solr. I mention the number because my code uses a for statement to index, like this:
for ($i = 0; $i < 45; $i++) { doIndex($i); }
The 1700w (about 17 million) documents are divided into 45 parts and sent to the 45 Solr instances.

Typically when I see problems with write locks it's because Solr crashed
(usually from an OOM) and then the container restarted it, but the stale write lock was still on disk ... have you checked your logs for other previous exceptions?
I know how to fix it, but I just don't know why it happens. This is the Solr error information:
Exception during commit/optimize:java.io.IOException: Lock obtain timed out: SimpleFSLock@/usr/solrapp/solr21/data/index/write.lock
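The recovery described above (remove the stale SimpleFSLock left behind by a crash) can be sketched in shell. This is a minimal sketch, not the thread's exact procedure: it uses a demo directory with a simulated lock file rather than the real index path from the error, and the catalina.out log path is an assumption. Adapt both to your install, and only delete the lock when Solr is fully stopped.

```shell
# Sketch: cleaning up a stale SimpleFSLock after a Solr crash.
# INDEX_DIR is a demo path -- in the thread it would be
# /usr/solrapp/solr21/data/index. Only safe with Solr stopped.
INDEX_DIR="/tmp/solr-lock-demo/index"

# Demo setup: simulate the stale write.lock a crashed Solr leaves behind.
mkdir -p "$INDEX_DIR"
touch "$INDEX_DIR/write.lock"

# 1) Check the container log for an earlier crash, e.g. an OOM
#    (log path is an assumption -- Tomcat's usual default):
#    grep -n "OutOfMemoryError" "$CATALINA_HOME/logs/catalina.out"

# 2) With Solr stopped, remove the stale lock so the next
#    commit/optimize can obtain it again.
if [ -f "$INDEX_DIR/write.lock" ]; then
    rm "$INDEX_DIR/write.lock"
    echo "removed stale write.lock from $INDEX_DIR"
fi
```

If the lock keeps reappearing after restarts, the underlying cause (here, likely an OOM during heavy concurrent indexing) still needs to be fixed; deleting the file only clears the symptom.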
-Hoss
Thanks, Hoss.
-- regards jl