Hi all,
I have managed to successfully index around 6 million documents, but while
indexing (and even now after the indexing has stopped), I am running into a
bunch of errors.
The most common error I see is
null:org.apache.solr.common.SolrException:
Thanks Mark. I meant ConcurrentMergeScheduler and ramBufferSizeMB (not
maxBuffer). These are my settings for Merge.
<ramBufferSizeMB>960</ramBufferSizeMB>
<mergeFactor>40</mergeFactor>
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
--Shreejay
Mark Miller-3
You really should be careful about optimizes; they're generally not needed.
And optimizing is almost always wrong when done after every N documents in
a batch process. Do it at the very end or not at all. An optimize essentially
re-writes the entire index into a single segment, so you're copying the
entire index.
Thanks Erick. I will try optimizing after indexing everything. I was doing it
after every batch, and it was taking way too long to optimize (which was
expected), but it was not finishing the merge down to a smaller number of
segments (1 segment).
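For anyone following along, a single optimize at the very end can be triggered over HTTP against the update handler. A sketch, assuming a local Solr on port 8983 and a core named collection1 (both are assumptions; adjust for your setup):

```shell
# One-time optimize after all indexing is done. The core name and port
# below are illustrative. maxSegments=1 asks for a full merge down to a
# single segment; the request blocks while the merge runs, which can
# take a long time on a large index.
curl 'http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=1'
```

Issuing this once at the end avoids repeatedly rewriting the whole index after every batch.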
On Nov 9, 2012, at 1:20 PM, shreejay shreej...@gmail.com wrote:
Instead of doing an optimize, I have now changed the Merge settings by
keeping a maxBuffer = 960, a merge Factor = 40 and ConcurrentMergePolicy.
Don't you mean ConcurrentMergeScheduler?
Keep in mind that if you use the default
Thanks Everyone.
As Shawn mentioned, it was a memory issue. I reduced the amount allocated to
Java to 6 GB, and it's been working pretty well.
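For anyone else hitting this, the heap cap is set on the JVM that runs Solr. A sketch, assuming the stock Jetty start.jar layout from this era (the flags and path are illustrative, not taken from Shreejay's exact setup):

```shell
# Cap the JVM heap at 6 GB, leaving the rest of physical RAM for the OS
# page cache, which MMapDirectory depends on for fast index reads.
# -Xms is set equal to -Xmx to avoid heap-resize pauses.
java -Xms6g -Xmx6g -jar start.jar
```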
I am re-indexing one of the SolrCloud clusters. I was having trouble with
optimizing the data when I indexed last time. I am hoping optimizing will not
be an issue this time.
If you can share any logs, that would help as well.
- Mark
In addition to Shawn's comments, you might want to see:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
Lucene's use of MMapDirectory can mislead you when looking at memory usage.
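Briefly, per the linked post: MMapDirectory maps the index files into the process's virtual address space, so tools like top show huge VIRT numbers even though actual heap use is modest. The directory implementation can be pinned explicitly in solrconfig.xml if you want to rule out ambiguity; a sketch (solr.MMapDirectoryFactory is a real factory class, but on 64-bit JVMs the default directory already uses mmap, so overriding it is usually unnecessary):

```xml
<!-- In solrconfig.xml: force memory-mapped index access explicitly.
     Usually not needed on 64-bit JVMs, where this is the default. -->
<directoryFactory name="DirectoryFactory" class="solr.MMapDirectoryFactory"/>
```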
Best
Erick
On Mon, Oct 29, 2012 at 5:59 PM, Shawn Heisey s...@elyograg.org wrote:
Hi All,
I am trying to run two SolrCloud clusters with 3 and 2 shards respectively
(let's say Cloud3shards and Clouds2Shards). All servers are identical with
18GB RAM (16GB assigned to Java).
I am facing a few issues on both clouds and would be grateful if any one
else has seen / solved these.
1)
On 10/29/2012 3:26 PM, shreejay wrote:
I am trying to run two SolrCloud clusters with 3 and 2 shards respectively
(let's say Cloud3shards and Clouds2Shards). All servers are identical with
18GB RAM (16GB assigned to Java).
This bit right here sets off warning bells right away. You're only
leaving 2GB for the OS and the disk cache.
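To make Shawn's point concrete, a quick back-of-the-envelope check using the numbers from this thread (the index size is a hypothetical figure for illustration):

```python
# RAM left for the OS page cache after the JVM heap is carved out.
# MMapDirectory relies on the page cache to keep index reads fast.
total_ram_gb = 18
heap_gb = 16
page_cache_gb = total_ram_gb - heap_gb
print(page_cache_gb)  # 2

# With, say, a 60 GB index per server (hypothetical), only a tiny
# fraction of the index fits in cache at once:
index_gb = 60
print(round(page_cache_gb / index_gb * 100, 1))  # 3.3 (percent)
```

When the cacheable fraction is that small, every query risks real disk I/O, which is why shrinking the heap helped.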