Hi Satnam,

Can you please share some details about which application on top of Lucene you are using? For Solr and Elasticsearch there are recommendations and default startup scripts. If it is your own Lucene application, we would also need more details.
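As an illustration of what such default startup settings look like: a sketch of JVM flags in the style of Solr's bin/solr.in.sh. The exact flags and values vary by version, so treat this as an illustrative starting point, not an official recommendation — check the script shipped with your own installation.

```shell
# Illustrative only -- mirror what your engine's own startup script does
# (e.g. Solr's bin/solr.in.sh) and adjust based on your own GC logs.

# A small fixed heap: Lucene needs little Java heap even for a large index,
# because index files are memory-mapped by the OS, not loaded into the heap.
GC_TUNE="-Xms4g -Xmx4g \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=250 \
  -XX:+ParallelRefProcEnabled \
  -XX:+AlwaysPreTouch"
```

Note that -Xms and -Xmx are set to the same value so the heap never resizes, and the rest of the RAM is deliberately left to the operating system's page cache for the memory-mapped index files.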

Basically, Lucene itself needs very little heap to execute queries and index documents. With an index of 700 gigabytes you should still be able to use a small heap (a few gigabytes). Problems are mostly located outside of Lucene, e.g., code trying to fetch all results of a large query using TopDocs paging (the "deep paging" problem). So please share more details so we can give you some answers, and maybe also the source code where it hangs.
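To make the deep-paging point concrete, here is a pure-JDK sketch of why it is expensive. A top-N collector keeps a priority queue of the best N scores seen so far, so fetching "page 10,000" with offset-based paging means keeping offset + pageSize entries in heap per query. The class and method names here are illustrative, not Lucene's internals:

```java
import java.util.PriorityQueue;
import java.util.Random;

public class DeepPagingCost {
    // Simulates a top-N hit collector: to return hits [offset, offset+pageSize)
    // with offset-based paging, it must keep the best offset+pageSize scores.
    static PriorityQueue<Float> collectTopN(float[] scores, int numHits) {
        // Min-heap: the head is the worst score still in the top numHits.
        PriorityQueue<Float> pq = new PriorityQueue<>(numHits);
        for (float s : scores) {
            if (pq.size() < numHits) {
                pq.add(s);
            } else if (pq.peek() < s) {
                pq.poll();
                pq.add(s);
            }
        }
        return pq;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        float[] scores = new float[2_000_000];
        for (int i = 0; i < scores.length; i++) scores[i] = rnd.nextFloat();

        int pageSize = 100;
        // First page: a queue of only 100 entries.
        System.out.println(collectTopN(scores, pageSize).size());
        // Page 10,000 via offset paging: a queue of 1,000,000 entries,
        // all held in the Java heap for the duration of the query.
        System.out.println(collectTopN(scores, 10_000 * pageSize).size());
    }
}
```

This is why paging deeply by growing the requested top-N drives heap usage and GC pressure up; Lucene's IndexSearcher.searchAfter exists precisely so you can continue from the last ScoreDoc of the previous page with a constant-size queue instead.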

Uwe

On 03.01.2023 at 13:49, _ SATNAM wrote:
Hi,
The issue is that my garbage collection runs quite often. I configured my
JVM as recommended (I have gone through several articles and blogs on Lucene)
and also provided enough RAM (not so large as to trigger long GC). The main
cause of concern is that GC runs for more than 10 minutes (sometimes even 15
minutes). This makes the whole server get stuck and search stops responding.
To work around it, what I am doing right now is restarting my server (a very
bad approach). Can you please help me manage this and share your insight on
what steps or configuration I should prefer to optimize it?
My index size is 700 GB.

What configuration do you suggest for it, e.g. JVM, RAM, CPU cores, heap
size, young and old generation?
I hope to hear from you soon.


--
Uwe Schindler
Achterdiek 19, D-28357 Bremen
https://www.thetaphi.de
eMail: u...@thetaphi.de


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
