What kind of statistics? Are these stats that you could perhaps get from
faceting or the stats component instead of gathering the docs and
accumulating the stats yourself?
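As a sketch of what the stats component route looks like: with `rows=0` Solr computes min/max/sum/mean/stddev server-side and ships back no documents at all. The collection name and the numeric field `price` below are placeholders, not from this thread:

```
# Aggregate stats for a numeric field without fetching any docs
# ("mycollection" and "price" are illustrative names):
http://localhost:8983/solr/mycollection/select?q=*:*&rows=0&stats=true&stats.field=price
```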
On Jul 14, 2020, at 8:51 AM, Odysci wrote:
Hi Erick,
I agree. The 300K docs in one search is an anomaly.
But we do use 'fq' to return a large number of docs for the purposes of
generating statistics for the whole index. We do use CursorMark extensively.
Thanks!
Reinaldo
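For readers following along, the CursorMark approach mentioned here can be sketched as a generic pagination loop. This is an illustrative stub, not code from the thread: `fetch` stands in for the HTTP call to Solr's `/select` handler, and the uniqueKey field `id` is assumed. The real cursor rules (sort must include the uniqueKey, `start` must be 0, loop until the returned cursor equals the one you sent) are preserved:

```python
# Deep paging with cursorMark (Solr semantics): loop until the
# nextCursorMark returned equals the cursorMark we sent.
def iterate_with_cursor(fetch, rows=100):
    cursor = "*"                       # "*" starts a new cursor
    while True:
        resp = fetch({"q": "*:*", "rows": rows,
                      "sort": "id asc", "cursorMark": cursor})
        yield from resp["response"]["docs"]
        next_cursor = resp["nextCursorMark"]
        if next_cursor == cursor:      # unchanged cursor => no more results
            break
        cursor = next_cursor

# Stub that mimics Solr's cursor responses over 'total' fake docs,
# so the loop above can be exercised without a running server.
def make_stub(total=250, rows=100):
    def fetch(params):
        start = 0 if params["cursorMark"] == "*" else int(params["cursorMark"])
        docs = [{"id": i} for i in range(start, min(start + rows, total))]
        nxt = str(min(start + rows, total))
        return {"response": {"docs": docs}, "nextCursorMark": nxt}
    return fetch
```

In a real client, `fetch` would issue the HTTP request and parse the JSON response; everything else stays the same.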
On Tue, Jul 14, 2020 at 8:55 AM Erick Erickson wrote:
I’d add that you’re abusing Solr horribly by returning 300K documents in a
single go. Solr is built to return the top N docs where N is usually quite
small, < 100. If you allow an unlimited number of docs to be returned,
you’re simply kicking the can down the road; somebody will ask for 1,000,000.
Shawn,
thanks for the extra info.
The OOM errors were indeed because of heap space. In my case most of the GC
cycles were not full GCs; only when the heap was nearly full did a full GC
run.
I'll try out your suggestion of increasing the G1 heap region size; I've
been using 4m.
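For context, the G1 region size is a JVM flag that Solr installs typically set through `GC_TUNE` in `solr.in.sh`. The values below are illustrative, not a recommendation from this thread; note that objects larger than half a region are treated as "humongous" allocations by G1, which is one reason a larger region size can help:

```
# solr.in.sh -- values are examples only:
GC_TUNE="-XX:+UseG1GC -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=250"
```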
On 6/25/2020 2:08 PM, Odysci wrote:
I have a solrcloud setup with 12GB heap and I've been trying to optimize it
to avoid OOM errors. My index has about 30million docs and about 80GB
total, 2 shards, 2 replicas.
Have you seen the full OutOfMemoryError exception text? OOME can be
caused by problems other than heap exhaustion; Java also throws it when,
for example, the OS won't allow a new thread to be created.
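One way to capture the exact OOME cause for later analysis is to have the JVM write a heap dump when it happens. These are standard HotSpot flags; the dump path below is illustrative:

```
# Appended to SOLR_OPTS in solr.in.sh (path is an example):
SOLR_OPTS="$SOLR_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/solr/dumps"
```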
Hi,
Just summarizing:
I've experimented with different sizes of filterCache and documentCache
after removing any maxRamMB setting. Now the heap behaves as expected:
it grows, then GC (not a full one) kicks in multiple times and keeps the
used heap under control. Eventually a full GC may kick in.
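The configuration change being described (entry-count limits, no `maxRamMB`) lives in `solrconfig.xml`. A minimal sketch for Solr 8.x, with sizes that are purely illustrative rather than the poster's actual values:

```xml
<!-- solrconfig.xml: bound caches by entry count only, no maxRamMB
     (sizes here are examples, not recommendations) -->
<filterCache class="solr.CaffeineCache" size="512" autowarmCount="128"/>
<documentCache class="solr.CaffeineCache" size="1024" autowarmCount="0"/>
```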
Hi Reinaldo,
Glad that helped. I've had several sleepless nights with Solr clusters
failing spectacularly in production because of this, but I still can't say
the problem is completely gone.
Did you check in the heap dump whether you have the cache memory leaks
described in https://issues.apache.org/
Thanks.
The heap dump indicated that most of the space was occupied by the caches
(filterCache and documentCache in my case).
I followed your suggestion of removing the maxRamMB limit on filterCache
and documentCache and decreasing the number of entries allowed.
It did have a significant impact on heap usage.
I have faced similar issues, and the culprit was filterCache when using
maxRamMB. Specifically, on a sharded Solr cluster with lots of faceting
during search (which uses the filterCache in a distributed setting),
I noticed that the maxRamMB value was not respected. I had a value of
300MB set.
Hi Furkan,
I'm using solr 8.3.1 (with openjdk version "11.0.7"), with the following
cache settings:
Thanks
Reinaldo
On Thu, Jun 25, 2020 at 7:45 PM Furkan KAMACI wrote:
Hi Reinaldo,
Which version of Solr do you use and could you share your cache settings?
On the other hand, did you check here:
https://cwiki.apache.org/confluence/display/SOLR/SolrPerformanceProblems
Kind Regards,
Furkan KAMACI
On Thu, Jun 25, 2020 at 11:09 PM Odysci wrote:
Hi,
I have a solrcloud setup with 12GB heap and I've been trying to optimize it
to avoid OOM errors. My index has about 30million docs and about 80GB
total, 2 shards, 2 replicas.
In my testing setup I submit multiple queries to Solr (same node),
sequentially, and with no overlap between the documents returned.