We are doing an autocommit every five minutes.
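For reference, a five-minute autocommit interval like the one described is typically set in solrconfig.xml along these lines (a sketch, not the poster's actual config; the openSearcher value here is an assumption):

```xml
<!-- Hard commit every 5 minutes (300000 ms). openSearcher=false keeps
     the autocommit cheap by not reopening a searcher on each commit. -->
<autoCommit>
  <maxTime>300000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
```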
--
View this message in context:
http://lucene.472066.n3.nabble.com/More-heap-usage-in-Solr-during-indexing-tp4124898p4125497.html
Sent from the Solr - User mailing list archive at Nabble.com.
Why is heap usage higher at some times during indexing? Is it due to the large index size (80M docs) or some large incoming record?
Thanks.
On 3/17/2014 12:39 PM, solr2020 wrote:
Previously we faced OOM when we tried to index 1.2M records at the same time.
Now we have divided that into two chunks and are indexing twice. So now we are
not getting OOM, but heap usage is higher. We are analyzing and trying to find
the cause to make sure we don't get OOM again.
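The two-chunk approach described above can be sketched as follows (a minimal illustration; the batch size and the indexing call are assumptions, not the poster's actual code):

```python
def chunks(records, size):
    """Yield successive batches of at most `size` records."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

# 1.2M records split into two passes of 600k each, so the whole
# set is never buffered for a single indexing request.
records = [{"id": str(n)} for n in range(1_200_000)]
for batch in chunks(records, 600_000):
    # hypothetical indexing call, e.g. solr.add(batch) with a SolrJ/pysolr
    # style client; committing per batch lets the heap used by one pass
    # be reclaimed before the next begins
    pass
```

Smaller batches trade a little throughput for a much flatter heap profile, since only one batch's documents are held in memory at a time.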
transactionIsolation=TRANSACTION_READ_COMMITTED
holdability=CLOSE_CURSORS_AT_COMMIT
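Those two settings correspond to attributes accepted by the DataImportHandler's JdbcDataSource in data-config.xml; a sketch of where they would appear (the driver and url values are placeholders, not from this thread):

```xml
<!-- driver and url are placeholders; the isolation/holdability
     attributes match the values quoted above -->
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb"
            transactionIsolation="TRANSACTION_READ_COMMITTED"
            holdability="CLOSE_CURSORS_AT_COMMIT"/>
```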