Here is my SolrConfig.xml. I am using Lucene NRT with soft commits: we
update the index every 5 seconds, soft commit every 1 second, and hard
commit every 15 minutes.

> SolrConfig.xml:
>
>
>        <indexDefaults>
>                <useCompoundFile>false</useCompoundFile>
>                <mergeFactor>10</mergeFactor>
>                <maxMergeDocs>2147483647</maxMergeDocs>
>                <maxFieldLength>10000</maxFieldLength>
>                <ramBufferSizeMB>4096</ramBufferSizeMB>
>                <maxThreadStates>10</maxThreadStates>
>                <writeLockTimeout>1000</writeLockTimeout>
>                <commitLockTimeout>10000</commitLockTimeout>
>                <lockType>single</lockType>
>
>            <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
>              <double name="forceMergeDeletesPctAllowed">0.0</double>
>              <double name="reclaimDeletesWeight">10.0</double>
>            </mergePolicy>
>
>            <deletionPolicy class="solr.SolrDeletionPolicy">
>              <str name="keepOptimizedOnly">false</str>
>              <str name="maxCommitsToKeep">0</str>
>            </deletionPolicy>
>
>        </indexDefaults>
>
>
>        <updateHandler class="solr.DirectUpdateHandler2">
>            <maxPendingDeletes>1000</maxPendingDeletes>
>             <autoCommit>
>               <maxTime>900000</maxTime>
>               <openSearcher>false</openSearcher>
>             </autoCommit>
>             <autoSoftCommit>
>               <maxTime>${inventory.solr.softcommit.duration:1000}</maxTime>
>             </autoSoftCommit>
>
>        </updateHandler>
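
The soft commit maxTime above comes from a system property with a 1000 ms
fallback, so it can be tuned without editing the config. Something like
this in the JBoss run configuration would override it (the run.conf path
is JBoss's usual location; our actual startup script may differ):

    # e.g. in $JBOSS_HOME/bin/run.conf, picked up by Solr's ${...} substitution
    JAVA_OPTS="$JAVA_OPTS -Dinventory.solr.softcommit.duration=1000"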

On Sun, Apr 1, 2012 at 7:47 PM, Gopal Patwa <gopalpa...@gmail.com> wrote:

> I am using a Solr 4.0 nightly build with NRT, and I often get a "Too
> many open files" error during auto commit. I have searched this forum,
> and what I found is that it is related to the OS ulimit setting; please
> see my ulimit settings below. I am not sure what ulimit value I should
> have for open files. ulimit -n unlimited?
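>
> To raise it, the usual route on CentOS would be /etc/security/limits.conf
> plus a matching ulimit call in the JBoss start script; the user name and
> the 65536 value below are only placeholders, not a recommendation:
>
>   # /etc/security/limits.conf (assuming JBoss runs as user "jboss")
>   jboss   soft   nofile   65536
>   jboss   hard   nofile   65536
>
>   # or directly in the start script, before java is launched
>   ulimit -n 65536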
>
> Even if I set it to a higher number, that will just delay the issue
> until the new open file limit is reached. What I have seen is that Solr
> keeps deleted index files open in the java process, which prevents our
> application server, JBoss, from shutting down gracefully because of
> those open files.
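>
> Something like this should confirm that, by listing the deleted index
> files the java process is still holding open (the pgrep pattern is only
> an example; lsof marks unlinked-but-open files as "deleted"):
>
>   JBOSS_PID=$(pgrep -f org.jboss.Main)
>   lsof -p $JBOSS_PID | grep -c deleted
>   lsof -p $JBOSS_PID | grep deleted | grep inventory/index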
>
> I have seen that this issue was recently resolved in Lucene; is that true?
>
> https://issues.apache.org/jira/browse/LUCENE-3855
>
>
> I have 3 cores with index sizes of Core1 - 70GB, Core2 - 50GB, and
> Core3 - 15GB, all with a single shard.
>
> We update the index every 5 seconds, soft commit every 1 second and hard
> commit every 15 minutes
>
> Environment: JBoss 4.2, JDK 1.6 64-bit, CentOS, JVM heap size = 24GB
>
>
> ulimit:
>
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 401408
> max locked memory       (kbytes, -l) 1024
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 4096
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 10240
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 401408
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
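>
> Note that the shell limits above may not match what the running JBoss
> java process actually got. Something like this should show the limit and
> the current descriptor count for the live process (pid lookup is only an
> example; /proc/<pid>/limits needs a reasonably recent kernel):
>
>   JBOSS_PID=$(pgrep -f org.jboss.Main)
>   grep 'Max open files' /proc/$JBOSS_PID/limits
>   ls /proc/$JBOSS_PID/fd | wc -l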
>
>
> ERROR:
>
> 2012-04-01 20:08:35,323 [] priority=ERROR app_name= thread=pool-10-thread-1 location=CommitTracker line=93 auto commit error...:org.apache.solr.common.SolrException: Error opening new searcher
>       at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1138)
>       at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1251)
>       at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:409)
>       at org.apache.solr.update.CommitTracker.run(CommitTracker.java:197)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>       at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>       at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.FileNotFoundException: /opt/mci/data/srwp01mci001/inventory/index/_4q1y_0.tip (Too many open files)
>       at java.io.RandomAccessFile.open(Native Method)
>       at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
>       at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:449)
>       at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:288)
>       at org.apache.lucene.codecs.BlockTreeTermsWriter.<init>(BlockTreeTermsWriter.java:161)
>       at org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsConsumer(Lucene40PostingsFormat.java:66)
>       at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.addField(PerFieldPostingsFormat.java:118)
>       at org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:322)
>       at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:92)
>       at org.apache.lucene.index.TermsHash.flush(TermsHash.java:117)
>       at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
>       at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
>       at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:475)
>       at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
>       at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:553)
>       at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:354)
>       at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:258)
>       at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:243)
>       at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
>       at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1091)
>
>
> Thanks
>
> Gopal
>
>
