No faceting, but we do highlighting. We have very long queries, because students
are pasting homework problems. I’ve seen 1,000-word queries, but we truncate at
40 words.
We do as-you-type results, so we also have ngram fields on the 20 million
solved homework questions. This bloats the index severely.
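As a rough sketch of why that happens (plain Java, just counting the edge n-grams a single term expands into; the helper and the min/max gram values are illustrative, not the actual schema):

import java.util.ArrayList;
import java.util.List;

public class EdgeNgramBloat {
    // Illustrative helper: the prefixes an edge n-gram filter would emit for one term.
    static List<String> edgeNgrams(String term, int minGram, int maxGram) {
        List<String> grams = new ArrayList<>();
        for (int len = minGram; len <= Math.min(maxGram, term.length()); len++) {
            grams.add(term.substring(0, len));
        }
        return grams;
    }

    public static void main(String[] args) {
        // One 10-character word becomes 10 indexed terms with minGram=1, maxGram=10,
        // so an ngram copy of 20 million questions multiplies the term count badly.
        System.out.println(edgeNgrams("derivative", 1, 10));
    }
}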
Walter Underwood wrote:
> I knew about SOLR-7433, but I’m really surprised that 200 incoming requests
> can need 4000 threads.
>
> We have four shards.
For that I would have expected at most 800 threads. Are you perhaps doing
faceting on multiple fields with …
I knew about SOLR-7433, but I’m really surprised that 200 incoming requests can
need 4000 threads.
We have four shards.
Why is there a thread per shard? HTTP can be done async: send1, send2, send3,
send4, recv1, recv2, recv3, recv4. I’ve been doing that for over a decade with
HTTPClient.
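A minimal sketch of that send-all-then-receive-all pattern, assuming the JDK 11+ java.net.http client and made-up shard URLs (Apache HttpClient or any other async client would work the same way):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class ShardFanOut {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical shard URLs; four shards as in the cluster described above.
        List<URI> shards = List.of(
                URI.create("http://shard1:8983/solr/questions/select?q=test"),
                URI.create("http://shard2:8983/solr/questions/select?q=test"),
                URI.create("http://shard3:8983/solr/questions/select?q=test"),
                URI.create("http://shard4:8983/solr/questions/select?q=test"));

        // send1..send4: fire all requests without waiting, from one caller thread.
        List<CompletableFuture<HttpResponse<String>>> pending = shards.stream()
                .map(u -> client.sendAsync(HttpRequest.newBuilder(u).build(),
                        HttpResponse.BodyHandlers.ofString()))
                .collect(Collectors.toList());

        // recv1..recv4: collect the responses in order.
        for (CompletableFuture<HttpResponse<String>> f : pending) {
            System.out.println(f.join().statusCode());
        }
    }
}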
Walter Underwood wrote:
> I set this in jetty.xml, but it still created 4000 threads.
That sets a limit on the number of threads started by Jetty to handle incoming
connections, but does not affect how many threads Solr can create. I guess you …
I’m pretty sure these OOMs are caused by uncontrolled thread creation, up to
4000 threads. That requires an additional 4 GB (1 MB per thread). It is like
Solr doesn’t use thread pools at all.
I set this in jetty.xml, but it still created 4000 threads.
wunder
Walter Underwood
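One way to sanity-check those numbers is to watch the live thread count; a minimal sketch using the standard ThreadMXBean (it reports the JVM it runs in, so you would run it inside Solr's JVM or use jstack against the Solr pid instead; the 1 MB figure assumes the default -Xss on 64-bit Linux):

import java.lang.management.ManagementFactory;

public class ThreadStackEstimate {
    public static void main(String[] args) {
        // Live threads in the current JVM, via the standard ThreadMXBean.
        int live = ManagementFactory.getThreadMXBean().getThreadCount();

        // With a default ~1 MB stack per thread, stack memory is roughly one MB
        // per live thread, allocated outside the heap.
        System.out.printf("%d live threads ~= %d MB of thread stacks%n", live, live);

        // The estimate above: 4000 threads * 1 MB each is about 4 GB on top of the heap.
        System.out.println("4000 threads * 1 MB ~= 4 GB");
    }
}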
I found the suggesters very memory hungry. I had one particularly large
index where the suggester should have been filtering a small number of
docs, but was mmap'ing the entire index. I only ever saw this behavior with
the suggesters.
On 22 November 2017 at 03:17, Walter Underwood wrote:
On 11/21/2017 9:17 AM, Walter Underwood wrote:
> All our customizations are in solr.in.sh. We’re using the one we configured
> for 6.3.0. I’ll check for any differences between that and the 6.5.1 script.
The order looks correct to me -- the arguments for the OOM killer are
listed *before* the …
Walter:
Yeah, I've seen this on occasion. IIRC, the OOM exception will be
specific to running out of stack space, or at least slightly different
than the "standard" OOM error. That would be the "smoking gun" for too
many threads.
Erick
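For reference, a minimal sketch of what that difference looks like: heap exhaustion reports "Java heap space", while running out of threads reports "unable to create new native thread". (Deliberately self-destructive; don't run it on a box you care about.)

import java.util.ArrayList;
import java.util.List;

public class ThreadExhaustion {
    public static void main(String[] args) {
        List<Thread> keep = new ArrayList<>();
        try {
            while (true) {
                // Each thread parks forever, so the count only grows.
                Thread t = new Thread(() -> {
                    try {
                        Thread.sleep(Long.MAX_VALUE);
                    } catch (InterruptedException ignored) {
                    }
                });
                t.start();
                keep.add(t);
            }
        } catch (OutOfMemoryError e) {
            // On HotSpot this prints "unable to create new native thread",
            // not the usual "Java heap space" -- the smoking gun described above.
            System.err.println("OOM message: " + e.getMessage());
        }
    }
}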
On Tue, Nov 21, 2017 at 9:00 AM, Walter Underwood wrote:
I do have one theory about the OOM. The server is running out of memory because
there are too many threads. Instead of queueing up overload in the load
balancer, it is queued in new threads waiting to run. Setting
solr.jetty.threads.max to 10,000 guarantees this will happen under overload.
bq: but those use analyzing infix, so they are search indexes, not in-memory
Sure, but they still can consume heap. Most of the index is MMapped of
course, but there are some control structures, indexes and the like
still kept on the heap.
I suppose not using the suggester would nail it though.
All our customizations are in solr.in.sh. We’re using the one we configured for
6.3.0. I’ll check for any differences between that and the 6.5.1 script.
I don’t see any arguments at all in the dashboard. I do see them in a ps
listing, right at the end.
java -server -Xms8g -Xmx8g -XX:+UseG1GC …
On 11/20/2017 6:17 PM, Walter Underwood wrote:
> When I ran load benchmarks with 6.3.0, an overloaded cluster would get super
> slow but keep functioning. With 6.5.1, we hit 100% CPU, then start getting
> OOMs. That is really bad, because it means we need to reboot every node in the
> cluster.
>
> Also, the JVM OOM hook isn’t running the process …
Hi Walter,
you can check whether the JVM OOM hook is acknowledged and set up by the JVM.
The options are "-XX:+PrintFlagsFinal -version".
You can modify your bin/solr script and tweak the function "launch_solr"
at the end of the script. Replace "-jar start.jar" with "-XX:+PrintFlagsFinal …
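An alternative sanity check, as a sketch (it reports the arguments of the JVM it runs in, so it would have to run inside the Solr JVM): the RuntimeMXBean exposes exactly the -X/-XX/-D arguments the JVM was started with, which is also what the dashboard normally displays.

import java.lang.management.ManagementFactory;

public class ShowJvmArgs {
    public static void main(String[] args) {
        // Every JVM option passed on the command line (heap sizes, GC flags,
        // and the OOM hook, if it actually made it onto the java command).
        for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
            System.out.println(arg);
        }
    }
}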