Hello,

I am using Solr (5.5.0) on HDFS in a SolrCloud of 80 nodes. The Solr
collection has numShards=80 and replicationFactor=2.

Solr JVM heap size = 20 GB
solr.hdfs.blockcache.enabled = true
solr.hdfs.blockcache.direct.memory.allocation = true
MaxDirectMemorySize = 25 GB

I am querying a solr collection with index size = 500 MB per core.

The off-heap block cache (25 GB) is large enough to hold the entire index.

Using the cursor (cursorMark) approach with rows=100,000 per page, I read 2
fields (40 bytes total per Solr doc) from the docs that satisfy the query.
The docs are sorted by "id" and then by those 2 fields.

I am not able to understand why the heap fills up and consecutive Full GCs
run with long pauses (> 30 seconds). I am using the CMS collector with
these settings:
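One possible source of the pressure is the distributed merge: for each cursor page, the coordinating node has to merge up to `rows` (id, sort-value) entries from every shard before returning the page. A back-of-the-envelope estimate of that cost (the 100-byte per-entry overhead below is an assumption for illustration, not a measured value):

```java
public class CursorPageCost {
    public static void main(String[] args) {
        int shards = 80;
        int rows = 100_000;     // rows per cursor page
        // Assumed per-entry cost on the aggregating node: doc id string,
        // sort values, and object headers (rough guess, not measured).
        int bytesPerMergeEntry = 100;

        // In a distributed query, each shard can return up to `rows`
        // candidate entries that the coordinator must merge and re-sort.
        long mergeEntries = (long) shards * rows;            // 8,000,000
        long mergeBytes = mergeEntries * bytesPerMergeEntry;

        System.out.println("entries merged per page: " + mergeEntries);
        System.out.println("approx heap per page (MB): "
                + mergeBytes / (1024 * 1024));               // ~762 MB
    }
}
```

If that estimate is in the right ballpark, every cursor page allocates hundreds of MB of short-lived objects on the coordinator, which matches old-gen fill-up when promotion outpaces CMS; a smaller rows value per page would shrink the merge proportionally.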

-XX:NewRatio=3 \
-XX:SurvivorRatio=4 \
-XX:TargetSurvivorRatio=90 \
-XX:MaxTenuringThreshold=8 \
-XX:+UseConcMarkSweepGC \
-XX:+UseParNewGC \
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
-XX:+CMSScavengeBeforeRemark \
-XX:PretenureSizeThreshold=64m \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:CMSInitiatingOccupancyFraction=50 \
-XX:CMSMaxAbortablePrecleanTime=6000 \
-XX:+CMSParallelRemarkEnabled \
-XX:+ParallelRefProcEnabled
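To see what is actually being promoted into the old generation, it may help to append GC logging flags (Java 7/8 HotSpot syntax, matching the CMS setup above; the log path is a placeholder):

```
-verbose:gc \
-XX:+PrintGCDetails \
-XX:+PrintGCDateStamps \
-XX:+PrintTenuringDistribution \
-Xloggc:/path/to/solr_gc.log
```

The tenuring distribution in particular shows whether the 100K-row responses are surviving long enough to be promoted, which would explain the Full GCs.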


Please guide me in debugging the heap usage issue.


Thanks!
