[ https://issues.apache.org/jira/browse/SOLR-16555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kevin Risden updated SOLR-16555:
--------------------------------
    Description: 
SolrIndexSearcher takes the bitset from the result and tries to combine it with all the cached filter queries. Currently this duplicates the bitset multiple times, once per cached filter query. This doesn't look necessary; the code could instead operate on the bitset itself, or on a single mutable copy of it.

Lines 1219 to 1225
https://github.com/apache/solr/blob/main/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L1219
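
To illustrate the pattern, here is a minimal sketch (not the actual SolrIndexSearcher code; the helper names are made up for the example) contrasting combining the result with each cached filter via DocSet#intersection, which copies the bits every time, against ANDing the cached filters into a single cloned FixedBitSet:

{code:java}
import java.util.List;

import org.apache.lucene.util.FixedBitSet;
import org.apache.solr.search.BitDocSet;
import org.apache.solr.search.DocSet;

public class FilterCombineSketch {

  // Current pattern (roughly): every intersection() returns a new DocSet backed
  // by a freshly copied bitset, so N cached filters means N full copies.
  static DocSet combineByCloning(DocSet answer, List<DocSet> cachedFilters) {
    for (DocSet filter : cachedFilters) {
      answer = answer.intersection(filter); // new bitset allocated each iteration
    }
    return answer;
  }

  // Possible direction: copy the result bits once, then mutate that single
  // FixedBitSet in place for each cached filter (andNot() works the same way
  // for negative filters).
  static BitDocSet combineInPlace(BitDocSet answer, List<BitDocSet> cachedFilters) {
    FixedBitSet bits = answer.getBits().clone(); // one copy total
    for (BitDocSet filter : cachedFilters) {
      bits.and(filter.getBits()); // in-place, no further allocation
    }
    return new BitDocSet(bits);
  }
}
{code}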

----

I've been using async-profiler (https://github.com/jvm-profiling-tools/async-profiler) to investigate Solr performance for a client. I originally looked at CPU in the profile, and then realized I could also capture memory allocations during the same run. That is how I found this surprisingly large amount of memory allocation over a short period of time.

async-profiler is run with the following parameters, which capture CPU, memory allocation, and lock information for a 300-second window on a given PID:

{code:bash}
/opt/async-profiler/profiler.sh -a -d 300 -o jfr -e cpu,alloc,lock -f /tmp/profile.jfr PID_GOES_HERE
{code}

The resulting JFR file is usually ~100-200MB, so I'm not going to attach it here; it also contains some client-specific method names in some of the call stacks.

However, from loading the JFR in both IntelliJ and Java Mission Control, you can see some of the findings in the screenshots below:

 !Screenshot 2022-11-16 at 14.52.37.png|width=750! 

This shows ~1.06TB of memory allocated over 5 minutes in SolrIndexSearcher#getProcessedFilter.

 !Screenshot 2022-11-16 at 14.53.23.png|width=750! 

~680GB of that was allocated by BitDocSet#intersection.

 !Screenshot 2022-11-16 at 14.53.35.png! 

~315GB was allocated by BitDocSet#andNot.
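
For a sense of where those bytes come from: a bitset over this ~23 million document shard is roughly 23,000,000 / 8 ≈ 2.9MB, and an immutable-style intersection or andNot has to copy that bitset before applying the operation. A rough sketch of the per-call pattern (illustrative only, not a quote of the BitDocSet source):

{code:java}
import org.apache.lucene.util.FixedBitSet;

public class ImmutableBitsetOps {
  // Each call copies the full ~2.9MB bitset (for a ~23M-doc shard) before ANDing.
  static FixedBitSet intersection(FixedBitSet a, FixedBitSet b) {
    FixedBitSet result = a.clone(); // full copy of the bits on every call
    result.and(b);                  // AND applied to the fresh copy
    return result;
  }

  // andNot follows the same copy-then-mutate pattern.
  static FixedBitSet andNot(FixedBitSet a, FixedBitSet b) {
    FixedBitSet result = a.clone();
    result.andNot(b);
    return result;
  }
}
{code}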

Based on the CPU profiling, it is amazing to me, but the G1 garbage collector is keeping up; each of these objects is very short-lived.

This was during some load testing, so I can describe the query load in question:
* ~30 queries/second
* ~5 fq parameters per query
* so ~9000 queries in 5 minutes with ~45000 fq clauses
* 10GB heap for the Solr instance, with 128GB RAM on the node; the index fits completely in memory
* one shard on the node for testing, with ~23 million documents in the shard; optimized, so no deletes

Based on my rough calculations, that is ~24MB of heap per filter query clause 
(1.06TB/45000) or ~120MB of heap per query (assuming 5 fq per query).
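
For context, given the ~2.9MB bitset size noted above (one bit per document for ~23 million documents), ~24MB of allocation per fq clause corresponds to roughly eight full bitset-sized copies per clause. These are rough numbers, but they are consistent with the bits being copied repeatedly rather than reused.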

Since most of these are large allocations, Java Mission Control is very helpful here: it shows that a large number of the allocations happen outside of TLABs.

 !Screenshot 2022-11-17 at 13.03.21.png! 

> SolrIndexSearcher - FilterCache intersections/andNot should not clone bitsets 
> repeatedly
> ----------------------------------------------------------------------------------------
>
>                 Key: SOLR-16555
>                 URL: https://issues.apache.org/jira/browse/SOLR-16555
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: query
>            Reporter: Kevin Risden
>            Priority: Major
>         Attachments: Screenshot 2022-11-16 at 14.52.37.png, Screenshot 
> 2022-11-16 at 14.53.23.png, Screenshot 2022-11-16 at 14.53.35.png, Screenshot 
> 2022-11-17 at 13.03.21.png
>
>


