In addition to what Erick and Walter correctly mentioned:
"heap usage varies from 5 gb to 12 gb . Initially it was 5 gb then increased
to 12 gb gradually and decreasing to 5 gb again. (may be because of garbage
collection)
10-12 GB maximum heap uses, allocated is 50 GB. "
Did I read it right?
Yes, why are you doing this? A suggester is designed to have a smaller set of
terms than the entire index.
I would never expect a 130 million term suggester to work. I’m astonished that
it works with 50 million terms.
We typically have about 50 thousand terms in a suggester.
Also, you haven’t
bq. I have 130 million documents and each document has a unique document ID.
I want to build a suggester on the document ID.
Why do it this way? I'm supposing you want to have someone start typing in
the doc ID then do autocomplete on it. For such a simple operation, it would
be far easier and pretty
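To make that concrete, here is a minimal SolrJ sketch of one simpler
alternative along those lines: a plain prefix lookup through the
TermsComponent, with no suggester dictionary to build at all. The URL,
collection name, field name and prefix are placeholders, not anything from
this thread:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.client.solrj.response.TermsResponse;

public class DocIdPrefixLookup {
    public static void main(String[] args) throws Exception {
        // URL, collection and field names are placeholders for this sketch.
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            SolrQuery query = new SolrQuery();
            query.setRequestHandler("/terms");  // TermsComponent handler
            query.setTerms(true);
            query.addTermsField("id");          // the unique document id field
            query.setTermsPrefix("DOC-00123");  // whatever the user has typed so far
            query.setTermsLimit(10);            // a handful of completions is enough

            QueryResponse response = client.query(query);
            for (TermsResponse.Term t : response.getTermsResponse().getTerms("id")) {
                System.out.println(t.getTerm());
            }
        }
    }
}

This runs directly against the indexed terms, so there is no side-car
dictionary to build or rebuild. Note that in SolrCloud the /terms handler has
historically needed the distributed parameters (e.g. shards.qt=/terms) to fan
out to every shard.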
I sent the log of the node to which I sent the request; I need to check the
other nodes' logs.
>> In SolrCloud an investigation is not isolated to a single Solr log: if you
>> see a timeout, I would recommend checking both of the nodes involved.
I monitored from the admin UI, but could not find any clue at the time of
Hi Yogendra,
you mentioned you are using SolrCloud.
In SolrCloud an investigation is not isolated to a single Solr log: if you
see a timeout, I would recommend checking both of the nodes involved.
When you say "heap usage is around 10 GB - 12 GB per node", do you refer to
the effective usage by
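The distinction matters: "allocated is 50 GB" normally refers to the -Xmx
ceiling, while the 10-12 GB that rises and falls with garbage collection is
the used heap. A minimal standalone Java sketch reading both numbers from the
JVM (the same values the admin UI graphs), using only java.lang.management:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // "used" is the sawtooth value (5 GB -> 12 GB -> 5 GB as GC runs);
        // "max" corresponds to -Xmx, i.e. the 50 GB that was allocated.
        System.out.printf("used:      %d MB%n", heap.getUsed() / (1024 * 1024));
        System.out.printf("committed: %d MB%n", heap.getCommitted() / (1024 * 1024));
        System.out.printf("max:       %d MB%n", heap.getMax() / (1024 * 1024));
    }
}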
I have 130 million documents and each document has a unique document ID. I
want to build a suggester on the document ID. The suggest dictionary build is
failing for 130 million documents; while testing, it was successful with 50
million documents.
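For reference, a hedged SolrJ sketch of how such a suggester build is
typically triggered; the handler path (/suggest), dictionary name
(mySuggester), collection URL and example prefix are assumptions that depend
on the actual solrconfig.xml:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class BuildSuggester {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            // (Re)build the dictionary -- the expensive step that fails at 130M ids.
            SolrQuery build = new SolrQuery();
            build.setRequestHandler("/suggest");            // assumed handler path
            build.set("suggest", true);
            build.set("suggest.dictionary", "mySuggester"); // assumed dictionary name
            build.set("suggest.build", true);
            client.query(build);

            // Query the built dictionary with the user's prefix.
            SolrQuery lookup = new SolrQuery();
            lookup.setRequestHandler("/suggest");
            lookup.set("suggest", true);
            lookup.set("suggest.dictionary", "mySuggester");
            lookup.set("suggest.q", "DOC-00123");
            System.out.println(client.query(lookup).getResponse());
        }
    }
}

Depending on the configured lookupImpl, the dictionary is built in memory
(FST-based lookups in particular), which would be consistent with a build
that survives 50 million unique IDs but fails at 130 million.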
8 nodes with 50 GB heap for each node and 600 GB RAM in total.