What do you mean by JVM level? Run Solr on different ports on the same
machine? If you have a 32 core box would you run 2,3,4 JVMs?
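For context: running more than one Solr JVM per machine generally means giving each node its own port, heap, and solr home. A minimal sketch using the bin/solr script; the ports, heap sizes, and paths below are illustrative, not from this thread:

```shell
# Two Solr nodes on one large box, each with its own port, heap,
# and solr home directory (all values illustrative).
bin/solr start -cloud -p 8983 -m 8g -s /var/solr/node1
bin/solr start -cloud -p 7574 -m 8g -s /var/solr/node2
```

The usual motivation is that several smaller heaps tend to have shorter GC pauses than one very large heap.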
On Sun, Dec 4, 2016 at 8:46 PM, Jeff Wartes wrote:
Here’s an earlier post where I mentioned some GC investigation tools:
https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201604.mbox/%3c8f8fa32d-ec0e-4352-86f7-4b2d8a906...@whitepages.com%3E
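Before reaching for heavier tooling, the JVM's own GC log is often enough to show whether pauses line up with the slow queries. A sketch of HotSpot (Java 7/8 era) logging flags added to a Solr start command; the heap sizes and log path are illustrative:

```shell
# HotSpot (Java 7/8) GC logging flags; append to the usual start command.
# The resulting log can be read with a viewer such as GCViewer.
java -server -Xms4g -Xmx4g \
  -Xloggc:/var/log/solr/gc.log \
  -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps \
  -XX:+PrintGCApplicationStoppedTime \
  -jar start.jar
```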
In my experience, there are many aspects of the Solr/Lucene memory allocation
model that scale wi
On 12/3/2016 9:46 PM, S G wrote:
> The symptom we see is that the java clients querying Solr see response
> times in 10s of seconds (not milliseconds).
> Some numbers for the Solr Cloud:
>
> *Overall infrastructure:*
> - Only one collection
> - 16 VMs used
> - 8 shards (1 leader and 1 replica per shard)
>>> No live SolrServers available to handle this request. Zombie server list:
>>> [HOSTA_ca_1_1456429897]
>>>
>>> 9) org.apache.solr.common.SolrException:
>>> org.apache.solr.client.solrj.SolrServerException: Tried one server for
>>> read operation and it timed out, so failing fast
> > 11) org.apache.solr.common.SolrException:
> > org.apache.solr.client.solrj.SolrServerException: Tried one server for
> > read operation and it timed out, so failing fast
> >
> > 12) null:org.apache.solr.common.SolrException:
> > org.apache.solr.client.solrj.SolrServerException: Tried one server for
> > read operation and it timed out, so failing fast
>
> Why, then, are we seeing so many timeouts, and why are response times on
> the client so high?
>
> Thanks
> SG
>
>
>
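Those "Tried one server for read operation and it timed out" messages are raised client-side when the socket read timeout fires, so it helps to measure server latency separately from the client's view. One quick check, with host, port, and collection name as placeholders:

```shell
# Time one query directly against a Solr node. QTime in the response
# body covers the search handler only, while curl's time_total also
# includes network transfer and response writing. Host, port, and
# collection name are placeholders.
curl -s -w 'time_total=%{time_total}\n' \
  'http://localhost:8983/solr/mycollection/select?q=*:*&rows=0'
```

If time_total is much larger than the reported QTime, the extra time is being spent outside the search handler itself.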
>
>
> > On Dec 2, 2016, at 4:49 PM, Shawn Heisey wrote:
> >
> >> On 12/2/2016 12:01 PM, S G wrote:
> >> This post shows some stats on Solr which indicate that there might be a
> >> memory leak in there.
> >>
> >> http://stackoverflow.com/questions/40939166/is-this-a-memory-leak-in-solr
> >>
> >> Can someone please help to debug this?
> >> It might be a very good step in making Solr stable if we can fix this.
>
> +1 to what Walter said.
>
> I re
This post shows some stats on Solr which indicate that there might be a
memory leak in there.
http://stackoverflow.com/questions/40939166/is-this-a-memory-leak-in-solr
Can someone please help to debug this?
It might be a very good step in making Solr stable if we can fix this.
Thanks
SG
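One way to tell a true leak from caches simply warming up is to watch old-generation occupancy across full collections: if each full GC reclaims less than the previous one, that supports the leak theory. A sketch using standard JDK tools; the pid is a placeholder:

```shell
# Sample GC and heap utilisation every 5 seconds (PID is a placeholder).
PID=12345
jstat -gcutil "$PID" 5000

# Histogram of live objects; note that -histo:live forces a full GC.
jmap -histo:live "$PID" | head -n 20
```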
On 12/2/2016 12:01 PM, S G wrote:
> This post shows some stats on Solr which indicate that there might be a
> memory leak in there.
>
> http://stackoverflow.com/questions/40939166/is-this-a-memory-leak-in-solr
>
> Can someone please help to debug this?
> It might be a very good step in making Solr stable if we can fix this.
>
> Thanks
> SG
>
> On Dec 2, 2016, at 11:01 AM, S G wrote:
>
> Hi,
>
> This post shows some stats on Solr which indicate that there might be a
> memory leak in there.
>
> http://stackoverflow.com/questions/40939166/is-this-a-memory-leak-in-solr
I don't have a filter cache; I have disabled it completely, since I am not
using filter queries.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Memory-Leak-in-solr-4-8-1-tp4198488p4198716.html
Sent from the Solr - User mailing list archive at Nabble.com.
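For reference, the filter cache is configured in solrconfig.xml, and removing or commenting out its entry disables it. A minimal sketch matching stock Solr 4.x configs (class name and sizes as shipped in the 4.x examples):

```xml
<!-- solrconfig.xml: with no filter queries in use, the filterCache entry
     can be removed entirely, or shrunk to nothing as shown here. -->
<filterCache class="solr.FastLRUCache"
             size="0" initialSize="0" autowarmCount="0"/>
```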
On Wed, 2015-04-08 at 14:00 -0700, pras.venkatesh wrote:
> 1. 8 nodes, 4 shards(2 nodes per shard)
> 2. each node having about 55 GB of Data, in total there is 450 million
> documents in the collection. so the document size is not huge,
So ~112M docs/shard (450M / 4).
> 3. The schema has 42 fields, it gets
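For the record, a quick check of those numbers (values taken from the message above):

```shell
# 450 million documents over 4 shards, with 2 nodes per shard
# (a leader and a replica), so every node carries a full shard copy.
docs_total=450000000
shards=4
echo $(( docs_total / shards ))   # documents per shard
```

which prints 112500000, i.e. roughly 112M documents per shard on each 55 GB node.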
Hi, one of the problems is now alleviated.
The number of lines with "can't identify protocol " in the "lsof" output is
now much lower. Earlier it kept increasing up to "ulimit -n", causing a
"Too many open files" error, but now it stays at a much smaller
number. This happened after I changed max
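A quick way to watch that count over time and compare it against the descriptor ceiling; the pid is a placeholder:

```shell
# Count descriptors lsof cannot classify (typically leaked sockets)
# and show the per-process open-files limit. PID is a placeholder.
PID=12345
ulimit -n
lsof -p "$PID" 2>/dev/null | grep -c "can't identify protocol"
```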
Hi Chris,
Thanks for your reply, and sorry for the delay. Please find my replies below
in the mail.
On Sat, Dec 3, 2011 at 5:56 AM, Chris Hostetter wrote:
>
> : Till 3 days ago, we were running Solr 3.4 instance with following java
> : command line options
> : java -server -Xms2048m -Xmx4096m -Dso
: Till 3 days ago, we were running Solr 3.4 instance with following java
: command line options
: java -server -Xms2048m -Xmx4096m -Dsolr.solr.home=etc -jar start.jar
:
: Then we increased the memory with following options and restarted the
: server
: java -server -Xms4096m -Xmx10g -Dso
Hi everyone,
A couple of days back I encountered a weird problem of continuously
increasing memory consumption. I am not sure if this is a problem of java
or Solr (3.4).
Till 3 days ago, we were running Solr 3.4 instance with following java
command line options
java -server -Xms2048m -Xmx4096m