Hi everyone,
My Solr JVM runs out of heap space quite frequently. I'm trying to
understand Solr/Lucene's memory usage so I can address the problem
correctly. Otherwise, I feel I'm taking random shots in the dark.
I've tried the troubleshooting suggestions from previous threads. Here's what I've done:
1)
Mike, Yonik, thanks for the quick reply.
I think it is in your queries. Are you sorting on many
fields? What is a typical query? I'm not a lucene expert,
but there are lucene experts on this list.
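For context on why sorting can eat heap: each field you sort on gets an in-memory FieldCache entry with a per-document cost for the whole index. A rough back-of-the-envelope sketch (the per-entry byte costs below are illustrative assumptions, not measured Lucene figures):

```python
def fieldcache_bytes(num_docs, int_fields=0, string_fields=0, avg_term_bytes=16):
    """Rough estimate of FieldCache memory for sort fields.

    Assumptions: ~4 bytes/doc for an int field; ~8 bytes/doc of
    overhead plus the average term length for a string field.
    """
    int_cost = num_docs * 4 * int_fields
    str_cost = num_docs * (8 + avg_term_bytes) * string_fields
    return int_cost + str_cost

# e.g. a 6.1M-doc index sorted on 3 string fields:
mib = fieldcache_bytes(6_100_000, string_fields=3) / 1024**2
print(f"{mib:.0f} MiB")
```

Even under these loose assumptions, a few sort fields on a multi-million-document index can claim hundreds of megabytes of heap, which is why the question about sorting comes up first.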
Our queries do not sort by any field. However, we do make use of
FunctionQueries and a
I'm afraid I don't have the answer; I can only add that we also had this
problem. We later installed the official Tomcat binary, but still get the
"optimal performance in production environments" warning.
-Graham
We have used replication for a few weeks now and it generally works well.
I believe you'll find that commit operations cause only new segments to be
transferred, whereas optimize operations cause the entire index to be
transferred. Therefore, the amount of data transferred really depends on how
Sorry for jumping in, but I have been wondering the same thing as Ryan. On
my current index with ~6.1M docs, I restarted Solr and ran a query that
included faceting on 4 fields:
QTime: 5712
numFound: 25908
filterCache stats:
lookups : 0
hits : 0
hitratio : 0.00
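Solr reports hitratio as hits divided by lookups, showing 0.00 when there have been no lookups at all. A minimal sketch of that computation (hedged, not Solr's actual code):

```python
def hit_ratio(hits: int, lookups: int) -> float:
    """Cache hit ratio as Solr's stats page reports it:
    hits / lookups, defined as 0.0 when there are no lookups."""
    return hits / lookups if lookups else 0.0

# The stats above: 0 lookups, 0 hits
print(f"{hit_ratio(0, 0):.2f}")  # 0.00
```

Note that zero lookups even after a faceted query may just mean that this particular facet path didn't consult the filterCache; the slow QTime on a cold searcher is expected either way.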
Hi Galo,
The snapinstaller actually performs a commit as its last step, so if that
didn't work, it's not surprising that running commit separately didn't work,
either.
I would suggest running the snapinstaller and/or commit scripts with the -V
option. This will produce verbose debugging
Apologies in advance if SOLR-187 and SOLR-188 look the same -- they are the
same issue. I have been using adjusted scripts locally but hadn't used Jira
before and wasn't sure of the process. I decided to figure it out after
answering Galo's question this morning...then saw that Jeff had mentioned