Hello,

We are migrating from Solr 4.10.1 to Solr 5.5.2. We don't use SolrCloud.

We installed the service with the installation script and kept the default
configuration, except for a few settings about logging and the GC config (the
same ones we used with Solr 4.10.1).

Today we load tested Solr 5.5.2 and got really bad performance, with some
queries taking up to 290,000 ms. (This was on our dev servers, which are
undersized; even without a load test the query times there are higher than
in production, but not by THAT much.)

The server has three cores: one of 8 GB, one of 3 GB and one of less than
1 GB. The machine has 64 GB of RAM, and Xmx and Xms are both set to 16 GB.
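For reference, the heap is set via solr.in.sh (assuming the standard service
install layout, typically /etc/default/solr.in.sh; the exact path depends on
the install options), roughly like this:

```shell
# JVM heap for Solr (machine has 64 GB of RAM)
SOLR_HEAP="16g"
# equivalent to setting it explicitly:
# SOLR_JAVA_MEM="-Xms16g -Xmx16g"
```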

We checked the JVM in VisualVM and noticed that too many threads were being
created by Jetty. The max threads setting was 10000 in jetty.xml, so we
lowered it to 400 (the same number we used with Tomcat 7).
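The change we made in server/etc/jetty.xml looks roughly like this (a sketch
from memory; the exact markup and property names may differ slightly in the
stock 5.5.2 file):

```xml
<!-- Jetty thread pool settings in server/etc/jetty.xml -->
<Get name="ThreadPool">
  <Set name="minThreads" type="int">10</Set>
  <!-- lowered from the default of 10000 -->
  <Set name="maxThreads" type="int">400</Set>
</Get>
```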

Then we ran the load test again. The queries were still very slow, and the
CPU was not heavily used: top showed all 16 cores at 20% at most (some at
just 5%). After 30 minutes of testing, VisualVM showed the threads spending
65% of their time in LRUCache.get() and 25% in LRUCache.put(). VisualVM also
showed the Solr threads mostly blocked, so we checked the thread dump in the
Solr admin interface, and the blocked threads were waiting on LRUCache.get().

Our queries use filters (the fq parameter). We use FastLRUCache for the
filter cache and LRUCache for the document cache, with an initial/max size of
512 for the filter cache and 15000 for the document cache. This may seem
small, but these are the values we use with Solr 4.10.1 in production, with
performance we consider good enough (less than 40 ms per query).

Does anyone have an idea what is wrong? Our configuration works fine with
Solr 4.10.1.

Best regards,
Elisabeth
