I did that, but when I split them into files of 5 million records each, the first file went
through fine; when I started processing the second file, Solr hit an OOM
again:
org.apache.solr.common.SolrException log
SEVERE: java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:184)
        at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:194)
        at org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)
        at org.apache.lucene.index.TermsHashPerField.growParallelPostingsArray(TermsHashPerField.java:137)
        at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:440)
        at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:169)
        at org.apache.lucene.index.DocFieldProcessorPerThread.processDocument(DocFieldProcessorPerThread.java:248)
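
For reference, here is a minimal sketch of the kind of batched loading loop I mean, assuming SolrJ is used to push the documents (the URL, file name, field names, and batch size below are placeholders, not my actual setup):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

// Streams a delimited file into Solr in small batches so documents are
// handed off regularly instead of being buffered on the client all at once.
public class BatchIndexer {
    public static void main(String[] args) throws Exception {
        SolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        int batchSize = 10000;  // placeholder; tuned to available heap
        List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>(batchSize);

        BufferedReader reader = new BufferedReader(new FileReader("records_part2.csv"));
        String line;
        long id = 0;
        while ((line = reader.readLine()) != null) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", String.valueOf(id++));   // hypothetical field names
            doc.addField("body", line);
            batch.add(doc);

            if (batch.size() >= batchSize) {
                server.add(batch);   // send this batch and clear the client-side buffer
                batch.clear();
            }
        }
        reader.close();

        if (!batch.isEmpty()) {
            server.add(batch);
        }
        server.commit();     // single commit once the file is done
        server.shutdown();
    }
}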
