We CANNOT diagnose anything until you tell us the error message!

Erick, I strongly disagree that more heap is needed for bigger indexes.
Except for faceting, Lucene was designed to stream index data and
work regardless of the size of the index. Indexing is done in
RAM-buffer-sized chunks, so large updates also don’t need extra RAM.
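(For reference, that buffer is sized by ramBufferSizeMB in solrconfig.xml;
the 100 MB below is just the stock default, shown as a sketch, not a
recommendation:

    <indexConfig>
      <ramBufferSizeMB>100</ramBufferSizeMB>
    </indexConfig>
)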

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Feb 2, 2020, at 7:52 AM, Rajdeep Sahoo <rajdeepsahoo2...@gmail.com> wrote:
> 
> We have allocated 16 GB of heap space out of 24 GB.
> There are 3 Solr cores here; for one core, when the number of documents
> grows to around 4.5 lakh (450,000), this scenario happens.
> 
> 
> On Sun, 2 Feb, 2020, 9:02 PM Erick Erickson, <erickerick...@gmail.com>
> wrote:
> 
>> Allocate more heap and possibly add more RAM.
>> 
>> What are your expectations? You can't continue to
>> add documents to your Solr instance without regard to
>> how much heap you’ve allocated. You’ve put over 4x
>> the number of docs on the node. There’s no magic here.
>> You can’t continue to add docs to a Solr instance without
>> increasing the heap at some point.
>> 
>> And as far as I know, you’ve never told us how much heap you
>> _are_ allocating. The default for Java processes is 512M, which
>> is quite small, so perhaps it’s a simple matter of starting Solr
>> with the -Xmx parameter set to something larger.
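>> For example, with the stock start script (8g is only an illustrative
>> value, size it for your data):
>>
>>     bin/solr start -m 8g
>>
>> or equivalently set SOLR_HEAP="8g" in solr.in.sh, or pass -Xmx8g if you
>> launch the JVM yourself.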
>> 
>> Best,
>> Erick
>> 
>>> On Feb 2, 2020, at 10:19 AM, Rajdeep Sahoo <rajdeepsahoo2...@gmail.com>
>> wrote:
>>> 
>>> What can we do in this scenario, as the Solr master node is going down
>>> and the indexing is failing? Please provide some workaround for this
>>> issue.
>>> 
>>> On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood, <wun...@wunderwood.org>
>>> wrote:
>>> 
>>>> What message do you get about the heap space?
>>>> 
>>>> It is completely normal for Java to use all of the heap before running
>>>> a major GC. That is how the JVM works.
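>>>> If you are not sure where to look, something like this against the Solr
>>>> log usually finds it (server/logs/solr.log is the stock location; yours
>>>> may differ):
>>>>
>>>>     grep -i OutOfMemoryError server/logs/solr.log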
>>>> 
>>>> wunder
>>>> Walter Underwood
>>>> wun...@wunderwood.org
>>>> http://observer.wunderwood.org/  (my blog)
>>>> 
>>>>> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <rajdeepsahoo2...@gmail.com>
>>>> wrote:
>>>>> 
>>>>> Please reply anyone
>>>>> 
>>>>> On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <
>>>> rajdeepsahoo2...@gmail.com>
>>>>> wrote:
>>>>> 
>>>>>> This is happening as the number of indexed documents increases.
>>>>>> With 1 million docs it works fine, but when it crosses 4.5
>>>>>> million the heap space gets full.
>>>>>> 
>>>>>> 
>>>>>> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <
>>>> mich...@michaelgibney.net>
>>>>>> wrote:
>>>>>> 
>>>>>>> Rajdeep, you say that "suddenly" heap space is getting full ... does
>>>>>>> this mean that some variant of this configuration was working for you
>>>>>>> at some point, or just that the failure happens quickly?
>>>>>>> 
>>>>>>> If heap space and faceting are indeed the bottleneck, you might make
>>>>>>> sure that you have docValues enabled for your facet field fieldTypes,
>>>>>>> and perhaps set uninvertible=false.
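>>>>>>> Roughly like this in the schema, for example (the field and type
>>>>>>> names here are made up, not from your config):
>>>>>>>
>>>>>>>     <fieldType name="string" class="solr.StrField"
>>>>>>>                docValues="true" uninvertible="false"/>
>>>>>>>     <field name="facet_field" type="string" indexed="true"
>>>>>>>            stored="false"/>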
>>>>>>> 
>>>>>>> I'm not seeing where large numbers of facets initially came from in
>>>>>>> this thread? But on that topic this is perhaps relevant, regarding
>>>>>>> the potential utility of a facet cache:
>>>>>>> potential utility of a facet cache:
>>>>>>> https://issues.apache.org/jira/browse/SOLR-13807
>>>>>>> 
>>>>>>> Michael
>>>>>>> 
>>>>>>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <t...@kb.dk> wrote:
>>>>>>>> 
>>>>>>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
>>>>>>>>> I had a similar issue with a large number of facets. There is no
>>>>>>>>> way (at least that I know of) you can get an acceptable response
>>>>>>>>> time from a search engine with a high number of facets.
>>>>>>>> 
>>>>>>>> Just for the record, it is doable under specific circumstances
>>>>>>>> (static single-shard index, only String fields, Solr 4 with patch,
>>>>>>>> fixed list of facet fields):
>>>>>>>> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
>>>>>>>> 
>>>>>>>> More useful for the current case would be to play with facet.threads
>>>>>>>> and throw hardware with many CPU cores at the problem.
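>>>>>>>> For example (the thread count is just an illustration, tune it to
>>>>>>>> your CPU count and number of facet fields):
>>>>>>>>
>>>>>>>>     /select?q=*:*&facet=true&facet.field=f1&facet.field=f2&facet.threads=8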
>>>>>>>> 
>>>>>>>> - Toke Eskildsen, Royal Danish Library
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>> 
>>>> 
>> 
>> 
