[ 
https://issues.apache.org/jira/browse/SOLR-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bidorbuy updated SOLR-10140:
----------------------------
    Description: 
We migrated from a perfectly healthy Solr 6.2.0 installation to Solr 6.4.1, and 
when we switched Solr 6.4.1 into production, load average and CPU began to thrash - 
compare Solr6.4.1.png (which shows the CPU and load-average spikes) with Solr6.2.0.png 
(very stable, predictable utilisation).

Migration changes:
- Changed luceneMatchVersion from 6.2.0 to 6.4.1 and rebuilt the index
- Changed solr.SynonymFilterFactory to solr.SynonymGraphFilterFactory
- Removed defaultSearchField and replaced it with df in solrconfig.xml
- Removed the solrQueryParser defaultOperator and replaced it with q.op in 
solrconfig.xml (both changes are sketched below)
- Increased the heap from 3G to 4G via SOLR_JAVA_MEM="-Xms4G -Xmx4G"
- Our GC tuning remains unchanged:
{code}
GC_TUNE="-XX:NewRatio=3 \
-XX:SurvivorRatio=4 \
-XX:TargetSurvivorRatio=90 \
-XX:MaxTenuringThreshold=8 \
-XX:+UseConcMarkSweepGC \
-XX:+UseParNewGC \
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
-XX:+CMSScavengeBeforeRemark \
-XX:PretenureSizeThreshold=64m \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:CMSInitiatingOccupancyFraction=50 \
-XX:CMSMaxAbortablePrecleanTime=6000 \
-XX:+CMSParallelRemarkEnabled \
-XX:+ParallelRefProcEnabled"
{code}
- I noticed that jetty.xml now includes new "InstrumentedQueuedThreadPool" and 
"InstrumentedHandler" configuration - I am unsure whether this added instrumentation 
could contribute to the CPU overhead.

Since our production load is fairly static with regard to index size (see the 
Solr6.4.1-info-*.png attachments), and since our current Solr 6.2.0 runs perfectly 
fine on the same versions of CentOS and the JDK, I can only think that a change in 
Jetty or Solr/Lucene is causing the CPU thrashing.

I would like to assist with isolating/resolving the issue, but I am not sure what 
other diagnostic information is needed (nor have I seen similar reports 
elsewhere).
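
If it helps, this is roughly what we could capture on our side to correlate the CPU 
spikes with specific threads and with GC activity - a sketch only, assuming the 
standard JDK tools are on the PATH and that SOLR_PID holds the Solr process id:
{code}
# Show per-thread CPU inside the Solr JVM (with -H the PID column lists thread ids)
top -H -p "$SOLR_PID" -b -n 1 | head -40

# Take a few thread dumps; jstack's "nid" field is the thread id in hex,
# so hot threads from top can be matched to their stack traces
for i in 1 2 3; do
  jstack "$SOLR_PID" > "solr-threaddump-$i.txt"
  sleep 10
done

# Sample GC utilisation (1s interval, 30 samples) to rule the collector in or out
jstat -gcutil "$SOLR_PID" 1000 30 > solr-gcutil.txt
{code}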
 


  was:
We migrated from a perfectly healthy Solr 6.2.0 installation to Solr 6.4.1, and 
when we switched Solr 6.4.1 into production, load average and CPU began to thrash - 
compare Solr6.4.1.png (which shows the CPU and load-average spikes) with Solr6.2.0.png 
(very stable, predictable utilisation).

Migration changes:
- Changed luceneMatchVersion from 6.2.0 to 6.4.1 and rebuilt the index
- Changed solr.SynonymFilterFactory to solr.SynonymGraphFilterFactory
- Removed defaultSearchField and replaced with df in solrconfig.xml
- Removed solrQueryParser defaultOperator and replaced with q.op in 
solrconfig.xml
- Increased the heap from 3G to 4G via SOLR_JAVA_MEM="-Xms4G -Xmx4G"
- Our GC tuning remains unchanged:
{code}
GC_TUNE="-XX:NewRatio=3 \
-XX:SurvivorRatio=4 \
-XX:TargetSurvivorRatio=90 \
-XX:MaxTenuringThreshold=8 \
-XX:+UseConcMarkSweepGC \
-XX:+UseParNewGC \
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
-XX:+CMSScavengeBeforeRemark \
-XX:PretenureSizeThreshold=64m \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:CMSInitiatingOccupancyFraction=50 \
-XX:CMSMaxAbortablePrecleanTime=6000 \
-XX:+CMSParallelRemarkEnabled \
-XX:+ParallelRefProcEnabled"
{code}

 



> Performance degradation and CPU spike when moving to Solr 6.4.1
> ---------------------------------------------------------------
>
>                 Key: SOLR-10140
>                 URL: https://issues.apache.org/jira/browse/SOLR-10140
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: Server
>    Affects Versions: 6.4.1
>         Environment: CentOS Linux release 7.3.1611 (Core)
> Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
>            Reporter: bidorbuy
>         Attachments: Solr6.2.0.png, Solr6.4.1-info-dashboard.png, 
> Solr6.4.1-info-index.png, Solr6.4.1.png
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
