RE: Sql entity processor sortedmapbackedcache out of memory issue

2020-10-02 Thread Srinivas Kashyap
entity processor sortedmapbackedcache out of memory issue On 4/8/2019 11:47 PM, Srinivas Kashyap wrote: > I'm using DIH to index the data and the structure of the DIH is like below > for solr core: > > > 16 child entities > > > During indexing, since the number of request

Re: solr suggester.rebuild takes forever and eventually runs out of memory on production

2020-07-24 Thread Anthony Groves
totally forgot to mention our solr version, it's 7.7.3. > > -Original Message- > From: Sebastian Riemer [mailto:s.rie...@littera.eu] > Sent: Friday, 24 July 2020 09:53 > To: solr-user@lucene.apache.org > Subject: solr suggester.rebuild takes forever and ev

AW: solr suggester.rebuild takes forever and eventually runs out of memory on production

2020-07-24 Thread Sebastian Riemer
out of memory on production Dear mailing list community, we have troubles when starting the Suggester-Build on one of our production servers. 1. We execute the required query with the suggest.build parameter 2. It seems solr is taking up the task to recreate the suggester index

solr suggester.rebuild takes forever and eventually runs out of memory on production

2020-07-24 Thread Sebastian Riemer
Dear mailing list community, we have trouble when starting the Suggester-Build on one of our production servers. 1. We execute the required query with the suggest.build parameter 2. It seems solr is taking up the task to recreate the suggester index (we see that the CPU rises
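Rebuilding a suggester backed by DocumentDictionaryFactory rereads a stored field from every document in the index, which is why a build can run for hours and strain the heap. A hedged solrconfig.xml sketch (component, field, and dictionary names are illustrative, not taken from the thread) that disables implicit rebuilds so the build can be triggered deliberately off-peak:

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
    <!-- avoid rebuilding implicitly on startup or on every commit -->
    <str name="buildOnStartup">false</str>
    <str name="buildOnCommit">false</str>
  </lst>
</searchComponent>
```

With both flags false, the build only runs when a request passes `suggest.build=true` (e.g. `/suggest?suggest=true&suggest.dictionary=mySuggester&suggest.build=true`), which can be scheduled during a quiet window.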

Re: Out of memory errors with Spatial indexing

2020-07-06 Thread David Smiley
I believe you are experiencing this bug: LUCENE-5056 The fix would probably be adjusting code in here org.apache.lucene.spatial.query.SpatialArgs#calcDistanceFromErrPct ~ David Smiley Apache Lucene/Solr Search Developer

Re: Out of memory errors with Spatial indexing

2020-07-06 Thread Sunil Varma
Hi David, thanks for your response. Yes, I noticed that all the data causing the issue were at the poles. I tried the "RptWithGeometrySpatialField" field type definition but get a "Spatial context does not support S2 spatial index" error. Setting spatialContextFactory="Geo3D" I still see the original

Re: Out of memory errors with Spatial indexing

2020-07-03 Thread David Smiley
Hi Sunil, Your shape is at a pole, and I'm aware of a bug causing an exponential explosion of needed grid squares when you have polygons super-close to the pole. Might you try S2PrefixTree instead? I forget if this would fix it or not by itself. For indexing non-point data, I recommend
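Combining the suggestions in this thread, a hedged schema sketch (field type name and distErrPct are illustrative): RptWithGeometrySpatialField keeps the accurate geometry on disk while the grid index stays coarse, and the S2 prefix tree requires the Geo3D spatial context — which is what the "Spatial context does not support S2 spatial index" error in the later reply points to when Geo3D is missing:

```xml
<fieldType name="location_rpt" class="solr.RptWithGeometrySpatialField"
           spatialContextFactory="Geo3D"
           prefixTree="s2"
           distErrPct="0.025"/>
```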

Out of memory errors with Spatial indexing

2020-07-03 Thread Sunil Varma
We are seeing OOM errors when trying to index some spatial data. I believe the data itself might not be valid but it shouldn't cause the Server to crash. We see this on both Solr 7.6 and Solr 8. Below is the input that is causing the error. { "id": "bad_data_1", "spatialwkt_srpt": "LINESTRING

Re: FuzzyQuery causing Out of Memory Errors in 8.5.x

2020-04-23 Thread Colvin Cowie
https://issues.apache.org/jira/browse/SOLR-14428 On Thu, 23 Apr 2020 at 08:45, Colvin Cowie wrote: > I created a little test that fires off fuzzy queries from random UUID > strings for 5 minutes > *FIELD_NAME + ":" + UUID.randomUUID().toString().replace("-", "") + "~2"* > > The change in heap

Re: FuzzyQuery causing Out of Memory Errors in 8.5.x

2020-04-23 Thread Colvin Cowie
I created a little test that fires off fuzzy queries from random UUID strings for 5 minutes *FIELD_NAME + ":" + UUID.randomUUID().toString().replace("-", "") + "~2"* The change in heap usage is really severe. On 8.5.1 Solr went OOM almost immediately on a 512mb heap, and with a 4GB heap it only

FuzzyQuery causing Out of Memory Errors in 8.5.x

2020-04-22 Thread Colvin Cowie
Hello, I'm moving our product from 8.3.1 to 8.5.1 in dev and we've got tests failing because Solr is getting OOMEs with a 512mb heap where it was previously fine. I ran our tests on both versions with jconsole to track the heap usage. Here's a little comparison. 8.5.1 dies part way through

RE: Sql entity processor sortedmapbackedcache out of memory issue

2019-04-24 Thread Srinivas Kashyap
@lucene.apache.org Subject: RE: Sql entity processor sortedmapbackedcache out of memory issue Hi Shawn/Mikhail Khludnev, I was going through Jira https://issues.apache.org/jira/browse/SOLR-4799 and see, I can do my intended activity by specifying zipper. I tried doing it, however I'm getting error
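The zipper join from SOLR-4799 streams the parent and child result sets in parallel instead of caching the child entity in memory. A hypothetical data-config.xml sketch (table, column, and entity names are illustrative) — the essential constraint is that both queries are ordered by the join key:

```xml
<entity name="parent" processor="SqlEntityProcessor"
        query="SELECT ID, NAME FROM PARENT ORDER BY ID">
  <!-- join="zipper" walks both sorted result sets once,
       so no SortedMapBackedCache is held on the heap -->
  <entity name="child" processor="SqlEntityProcessor" join="zipper"
          query="SELECT PARENT_ID, B FROM CHILD ORDER BY PARENT_ID"
          where="PARENT_ID=parent.ID"/>
</entity>
```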

Re: Sql entity processor sortedmapbackedcache out of memory issue

2019-04-12 Thread Nitin Kumar
name="B" column="B" /> > > > > Thanks and Regards, > Srinivas Kashyap > > -Original Message- > From: Shawn Heisey > Sent: 09 April 2019 01:27 PM

RE: Sql entity processor sortedmapbackedcache out of memory issue

2019-04-12 Thread Srinivas Kashyap
and Regards, Srinivas Kashyap -Original Message- From: Shawn Heisey Sent: 09 April 2019 01:27 PM To: solr-user@lucene.apache.org Subject: Re: Sql entity processor sortedmapbackedcache out of memory issue On 4/8/2019 11:47 PM, Srinivas

Re: Sql entity processor sortedmapbackedcache out of memory issue

2019-04-09 Thread Shawn Heisey
On 4/8/2019 11:47 PM, Srinivas Kashyap wrote: I'm using DIH to index the data and the structure of the DIH is like below for solr core: 16 child entities During indexing, since the number of requests being made to database was high(to process one document 17 queries) and was utilizing most

Sql entity processor sortedmapbackedcache out of memory issue

2019-04-09 Thread Srinivas Kashyap
Physical memory system (RAM) with 5GB of it allocated to the JVM; when we do a full-import, only 17 requests are made to the database. However, it is shooting up memory consumption and driving the JVM out of memory. Out of memory happens depending on the number of records each entity is bringing

Re: unable to create new threads: out-of-memory issues

2019-02-12 Thread Erick Erickson
(my blog) > > > On Feb 12, 2019, at 6:58 AM, Martin Frank Hansen (MHQ) wrote: > > > > Hi Mikhail, > > > > Thanks for your help. I will try it. > > > > -Original Message- > > From: Mikhail Khludnev > > Sent: 12 February 2019 15

Re: unable to create new threads: out-of-memory issues

2019-02-12 Thread Walter Underwood
; To: solr-user > Subject: Re: unable to create new threads: out-of-memory issues > > 1. you can jstack to find it out. > 2. It might create a thread, I don't know. > 3. SolrClient is definitely a subject for heavy reuse. > > On Tue, Feb 12, 2019 at 5:16 PM Martin Frank H

RE: unable to create new threads: out-of-memory issues

2019-02-12 Thread Vadim Ivanov
rg > Subject: unable to create new threads: out-of-memory issues > > Hi, > > I am trying to create an index on a small Linux server running Solr-7.5.0, but > keep running into problems. > > When I try to index a file-folder of roughly 18 GB (18000 files) I get t

RE: unable to create new threads: out-of-memory issues

2019-02-12 Thread Martin Frank Hansen (MHQ)
Hi Mikhail, Thanks for your help. I will try it. -Original Message- From: Mikhail Khludnev Sent: 12 February 2019 15:54 To: solr-user Subject: Re: unable to create new threads: out-of-memory issues 1. you can jstack to find it out. 2. It might create a thread, I don't know. 3

Re: unable to create new threads: out-of-memory issues

2019-02-12 Thread Mikhail Khludnev
lient.Builder(urlString).build(); > > Thanks > > -Original Message- > From: Mikhail Khludnev > Sent: 12 February 2019 15:09 > To: solr-user > Subject: Re: unable to create new threads: out-of-memory issues > > Hello, Martin. > How do you index? Where did yo

RE: unable to create new threads: out-of-memory issues

2019-02-12 Thread Martin Frank Hansen (MHQ)
create a new thread? SolrClient solr = new HttpSolrClient.Builder(urlString).build(); Thanks -Original Message- From: Mikhail Khludnev Sent: 12 February 2019 15:09 To: solr-user Subject: Re: unable to create new threads: out-of-memory issues Hello, Martin. How do you index? Where

Re: unable to create new threads: out-of-memory issues

2019-02-12 Thread Mikhail Khludnev
Hello, Martin. How do you index? Where did you get this error? Usually it occurs in custom code with many new Thread() calls and is usually healed with thread pooling. On Tue, Feb 12, 2019 at 3:25 PM Martin Frank Hansen (MHQ) wrote: > Hi, > > I am trying to create an index on a small Linux server

unable to create new threads: out-of-memory issues

2019-02-12 Thread Martin Frank Hansen (MHQ)
Hi, I am trying to create an index on a small Linux server running Solr-7.5.0, but keep running into problems. When I try to index a file-folder of roughly 18 GB (18000 files) I get the following error from the server: java.lang.OutOfMemoryError: unable to create new native thread. From the
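`unable to create new native thread` is usually an OS per-user process/thread limit rather than heap exhaustion, which is why adding heap does not help. One common remedy — an assumption about this setup, not advice taken from the thread — is to raise the `nproc` limit for the user running Solr (values illustrative):

```
# check the current limit for the user running Solr:  ulimit -u

# /etc/security/limits.conf
solr  soft  nproc  65000
solr  hard  nproc  65000
```

The other fix discussed in the thread still applies: reuse one SolrClient rather than building a new one (and its thread pool) per request.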

Re: SOLR 7.0 DIH out of memory issue with sqlserver

2018-09-19 Thread Erick Erickson
Tanya: Good to hear. You probably want to configure hard commits as well, and in your case perhaps with openSearcher=true. Indexing is only half the problem. It's quite possible that what's happening is your index is just growing and that's pushing the boundaries of the Java heap. What I'm thinking is that

Re: SOLR 7.0 DIH out of memory issue with sqlserver

2018-09-19 Thread Tanya Bompi
Hi Erick, Thank you for the follow-up. I have resolved the issue with the increase in heapSize and I am able to set the SOLR VM to initialize with a 3G heap size and the subset of 1 mil records was fetched successfully. Although it fails with the entire 3 mil records. So something is off with

Re: SOLR 7.0 DIH out of memory issue with sqlserver

2018-09-19 Thread Erick Erickson
Has this ever worked? IOW, is this something that's changed or has just never worked? The obvious first step is to start Solr with more than 1G of memory. Solr _likes_ memory and a 1G heap is quite small. But you say: "Increasing the heap size further doesn't start the SOLR instance itself." How much

Re: SOLR 7.0 DIH out of memory issue with sqlserver

2018-09-19 Thread Tanya Bompi
Hi, I am using the Microsoft JDBC driver version 6.4 in Solr 7.4.0. I have tried removing selectMethod=Cursor and it still runs out of heap space. Has anyone faced a similar issue? Thanks Tanya On Tue, Sep 18, 2018 at 6:38 PM Shawn Heisey wrote: > On 9/18/2018 4:48 PM,

Re: SOLR 7.0 DIH out of memory issue with sqlserver

2018-09-18 Thread Shawn Heisey
On 9/18/2018 4:48 PM, Tanya Bompi wrote: I have the SOLR 7.0 setup with the DataImportHandler connecting to the sql server db. I keep getting OutOfMemory: Java Heap Space when doing a full import. The size of the records is around 3 million so not very huge. I tried the following steps and

SOLR 7.0 DIH out of memory issue with sqlserver

2018-09-18 Thread Tanya Bompi
Hi, I have the SOLR 7.0 setup with the DataImportHandler connecting to the sql server db. I keep getting OutOfMemory: Java Heap Space when doing a full import. The size of the records is around 3 million so not very huge. I tried the following steps and nothing helped thus far. 1. Setting the
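For the SQL Server case above, the usual DIH mitigation is to make the JDBC driver stream rows instead of buffering the whole 3-million-row result set. A hedged data-config.xml sketch — host, database, and credentials are placeholders; `responseBuffering=adaptive` and `selectMethod=cursor` are Microsoft JDBC driver connection properties:

```xml
<dataSource type="JdbcDataSource"
            driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
            url="jdbc:sqlserver://dbhost;databaseName=mydb;responseBuffering=adaptive;selectMethod=cursor"
            batchSize="500"
            user="..." password="..."/>
```

With adaptive buffering the driver fetches rows as DIH consumes them, so heap usage stays roughly proportional to batchSize rather than to the full result set.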

Re: 7.3.1: Query of death - all nodes ran out of memory and had to be shut down

2018-08-22 Thread Ash Ramesh
Thank you all :) We have made the necessary changes to mitigate this issue On Wed, Aug 22, 2018 at 6:01 AM Shawn Heisey wrote: > On 8/20/2018 9:55 PM, Ash Ramesh wrote: > > We ran a bunch of deep paginated queries (offset of 1,000,000) with a > > filter query. We set the timeout to 5 seconds

Re: 7.3.1: Query of death - all nodes ran out of memory and had to be shut down

2018-08-21 Thread Shawn Heisey
On 8/20/2018 9:55 PM, Ash Ramesh wrote: We ran a bunch of deep paginated queries (offset of 1,000,000) with a filter query. We set the timeout to 5 seconds and it did timeout. We aren't sure if this is what caused the irrecoverable failure, but by reading this -

Re: 7.3.1: Query of death - all nodes ran out of memory and had to be shut down

2018-08-21 Thread Erick Erickson
eone else, but we > identified deep paging as the definite reason for running out of memory or > at least grinding to semi-halt because of long stop-the-world garbage > collection pauses in an application running on a similar SolrCloud. You can > often get away without issues as long as y

Re: 7.3.1: Query of death - all nodes ran out of memory and had to be shut down

2018-08-21 Thread Jan Høydahl
lution architect Cominvent AS - www.cominvent.com > 21. aug. 2018 kl. 11:08 skrev Ere Maijala : > > Hi, > > Just my short comment here. It's difficult to say for someone else, but we > identified deep paging as the definite reason for running out of memory or at > least grinding t

Re: 7.3.1: Query of death - all nodes ran out of memory and had to be shut down

2018-08-21 Thread Ere Maijala
Hi, Just my short comment here. It's difficult to say for someone else, but we identified deep paging as the definite reason for running out of memory or at least grinding to semi-halt because of long stop-the-world garbage collection pauses in an application running on a similar SolrCloud

Re: 7.3.1: Query of death - all nodes ran out of memory and had to be shut down

2018-08-20 Thread Ash Ramesh
s very likely (but not guaranteed) that using cursors will fix > this problem. > > Best, > Erick > > > > On Mon, Aug 20, 2018 at 8:55 PM, Ash Ramesh wrote: > > Hi everyone, > > > > We ran into an issue yesterday where all our ec2 machines, running solr, >

Re: 7.3.1: Query of death - all nodes ran out of memory and had to be shut down

2018-08-20 Thread Erick Erickson
ning solr, > ran out of memory and could not heal themselves. I'll try break down what > happened here. > > *System Architecture:* > > - Solr Version: 7.3.1 > - Replica Types: TLOG/PULL > - Num Shards: 8 (default hashing mechanism) > - Doc Count: > 20m > - Index Siz

7.3.1: Query of death - all nodes ran out of memory and had to be shut down

2018-08-20 Thread Ash Ramesh
Hi everyone, We ran into an issue yesterday where all our ec2 machines, running solr, ran out of memory and could not heal themselves. I'll try break down what happened here. *System Architecture:* - Solr Version: 7.3.1 - Replica Types: TLOG/PULL - Num Shards: 8 (default hashing mechanism
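The fix the replies converge on is cursorMark: with `start=1000000`, every shard must materialize a priority queue of offset+rows entries per request, which is what exhausts the heap; a cursor carries the position forward instead. The sort must end with the uniqueKey field as a tiebreaker. A request-parameter sketch (field names illustrative):

```
# first page
q=*:*&rows=100&sort=score desc,id asc&cursorMark=*

# subsequent pages: substitute the nextCursorMark value
# returned in the previous response
q=*:*&rows=100&sort=score desc,id asc&cursorMark=<nextCursorMark>
```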

Re: Solr Nodes Killed During a ReIndexing Process on New VMs Out of Memory Error

2018-07-19 Thread THADC
Thanks, made the heap size considerably larger and it's fine now. Thank you -- Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html

Re: Solr Nodes Killed During a ReIndexing Process on New VMs Out of Memory Error

2018-07-18 Thread Shawn Heisey
On 7/18/2018 8:31 AM, THADC wrote: Thanks for the reply. I read the link you provided. I am currently not specifying a heap size with solr so my understanding is that by default it will just grow automatically. If I add more physical memory to the VM without doing anything with heap size, won't

Re: Solr Nodes Killed During a ReIndexing Process on New VMs Out of Memory Error

2018-07-18 Thread THADC
Thanks for the reply. I read the link you provided. I am currently not specifying a heap size with solr so my understanding is that by default it will just grow automatically. If I add more physical memory to the VM without doing anything with heap size, won't that possibly fix the problem?

Re: Solr Nodes Killed During a ReIndexing Process on New VMs Out of Memory Error

2018-07-18 Thread Shawn Heisey
On 7/18/2018 7:10 AM, THADC wrote: We performed a full reindex for the first time against our largest database and on two new VMs dedicated to solr indexing. We have two solr nodes (solrCloud/solr7.3) with a zookeeper cluster. Several hours into the reindexing process, both solr nodes shut down

Solr Nodes Killed During a ReIndexing Process on New VMs Out of Memory Error

2018-07-18 Thread THADC
Hi, We performed a full reindex for the first time against our largest database and on two new VMs dedicated to solr indexing. We have two solr nodes (solrCloud/solr7.3) with a zookeeper cluster. Several hours into the reindexing process, both solr nodes shut down with:

Re: Multiple consecutive wildcards (**) causes Out-of-memory

2018-02-07 Thread Bjarke Buur Mortensen
utive wildcards) it causes my > Solr to run out of memory. > > http://localhost:8983/solr/select?q=** > > Why is that? > > I realize that this is not a reasonable query to make, but the system > supports input from users, and they might by accident input this query, > causing S

Multiple consecutive wildcards (**) causes Out-of-memory

2018-02-07 Thread Bjarke Buur Mortensen
Hello list, Whenever I make a query for ** (two consecutive wildcards) it causes my Solr to run out of memory. http://localhost:8983/solr/select?q=** Why is that? I realize that this is not a reasonable query to make, but the system supports input from users, and they might by accident input
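Since the query string comes from end users, one pragmatic guard is to normalize it before it reaches Solr. This is an application-side sketch, not a Solr feature, and the rewrite policy (mapping wildcard-only input to match-all) is an assumption:

```python
import re

def sanitize(q: str) -> str:
    """Collapse runs of '*' and turn wildcard-only input into match-all."""
    q = re.sub(r"\*{2,}", "*", q)
    # A query consisting only of wildcards/whitespace would force Solr to
    # expand every term; rewrite it (policy choice) to the match-all query.
    if re.fullmatch(r"[\s*?]*", q):
        return "*:*"
    return q

print(sanitize("**"))        # -> *:*
print(sanitize("foo**bar"))  # -> foo*bar
```

Whether to rewrite, reject, or escape such input is a product decision; the point is simply to stop accidental `**` from reaching the query parser.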

Re: With 100% CPU usage giving out of memory exception and solr is not responding

2018-01-02 Thread prathap
What is your Xmx? 20GB. How many documents in your index? 12GB. What is your filterCache size? 512 MB.

Re: With 100% CPU usage giving out of memory exception and solr is not responding

2017-12-29 Thread Toke Eskildsen
prathap wrote: > ERROR - 2017-12-21 08:39:13.326; org.apache.solr.common.SolrException; > null:java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead > limit exceeded ... > INFO - 2017-12-21 08:43:43.351; org.apache.solr.core.SolrCore; > [SA_PROD_SPK_QC]

With 100% CPU usage giving out of memory exception and solr is not responding

2017-12-29 Thread prathap
ERROR - 2017-12-21 08:39:13.326; org.apache.solr.common.SolrException; null:java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead limit exceeded at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:793) at

Re: Solr5.1 delete out of memory

2017-12-18 Thread Shawn Heisey
On 12/17/2017 9:36 PM, soul wrote: hi! I'm using solr5.1. There are about 0.3 billion docs in my solr. I can insert and select docs in my solr, while failing to delete docs. It tells me that this writer hit an OutOfMemoryError : cannot commit. I am curious what causes this? The

Solr5.1 delete out of memory

2017-12-17 Thread soul
hi! I'm using solr5.1. There are about 0.3 billion docs in my solr. I can insert and select docs in my solr, while failing to delete docs. It tells me that this writer hit an OutOfMemoryError : cannot commit. I am curious what causes this? -- Sent from:

RE: DataImport Handler Out of Memory

2017-09-27 Thread Allison, Timothy B.
...@flexera.com] Sent: Wednesday, September 27, 2017 1:40 PM To: solr-user@lucene.apache.org Subject: DataImport Handler Out of Memory I am trying to create indexes using dataimport handler (Solr 5.2.1). Data is in mysql db and the number of records are more than 3.5 million. My solr server stops due to OOM

DataImport Handler Out of Memory

2017-09-27 Thread Deeksha Sharma
I am trying to create indexes using the dataimport handler (Solr 5.2.1). Data is in a mysql db and the number of records is more than 3.5 million. My solr server stops due to OOM (out of memory error). I tried starting solr with 12GB of RAM but still no luck. Also, I see that Solr fetches all
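"Solr fetches all" rows at once is the default MySQL Connector/J behavior: it buffers the entire result set in client memory. DIH's JdbcDataSource switches the driver to row-by-row streaming when batchSize is -1 (internally a fetch size of Integer.MIN_VALUE). A hedged sketch, connection details illustrative:

```xml
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://dbhost/mydb"
            batchSize="-1"
            user="..." password="..."/>
```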

Re: Out of Memory Errors

2017-06-14 Thread Susheel Kumar
d, Jun 14, 2017 at 11:46 AM Susheel Kumar <susheel2...@gmail.com> > wrote: > >> You may have gc logs saved when OOM happened. Can you draw it in GC Viewer >> or so and share. >> >> Thnx >> >> On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada < >

Re: Out of Memory Errors

2017-06-14 Thread Satya Marivada
d, Jun 14, 2017 at 11:26 AM, Satya Marivada < > satya.chaita...@gmail.com> > wrote: > > > Hi, > > > > I am getting Out of Memory Errors after a while on solr-6.3.0. > > The > -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh >

Re: Out of Memory Errors

2017-06-14 Thread Susheel Kumar
You may have gc logs saved when OOM happened. Can you draw it in GC Viewer or so and share. Thnx On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada <satya.chaita...@gmail.com> wrote: > Hi, > > I am getting Out of Memory Errors after a while on solr-6.3.0. > The -XX:OnOutOfMemo

Out of Memory Errors

2017-06-14 Thread Satya Marivada
Hi, I am getting Out of Memory Errors after a while on solr-6.3.0. The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh just kills the jvm right after. Using Jconsole, I see the nice triangle pattern, where it uses the heap and being reclaimed back. The heap size is set

Re: Solr Delete By Id Out of memory issue

2017-04-03 Thread Rohit Kanchan
Thanks everyone for replying to this issue. Just a final comment on this issue which I was closely working on. We have fixed this issue. It was a bug in our custom component which we wrote to convert delete by query to delete by id. We were using BytesRef differently, we were not making a deep
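The fix described here — taking a deep copy before storing a reused buffer — addresses a general bug class, not something Solr-specific. A minimal Python sketch (illustrative, not Solr code) of how storing a recycled buffer by reference corrupts earlier map entries, analogous to stashing a reused Lucene BytesRef without BytesRef.deepCopyOf:

```python
# A reused mutable buffer stored by reference: every map entry ends up
# aliasing the same underlying object (like the oldDeletes LinkedHashMap
# holding references to one recycled BytesRef).
reused = bytearray(b"doc-1")
seen = {}

seen[1] = reused             # stored by reference, no copy
reused[:] = b"doc-2"         # buffer is recycled for the next id
print(bytes(seen[1]))        # -> b'doc-2'  (earlier entry corrupted)

reused = bytearray(b"doc-1")
seen[2] = bytes(reused)      # deep copy at insertion time
reused[:] = b"doc-2"
print(bytes(seen[2]))        # -> b'doc-1'  (entry preserved)
```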

Re: Solr Delete By Id Out of memory issue

2017-03-27 Thread Rohit Kanchan
Thanks Erick for replying back. I have deployed changes to production, we will figure it out soon if it is still causing OOM or not. And for commits we are doing auto commits after 10K docs or 30 secs. If I get time I will try to run a local test to check if we will hit OOM because of 1K map

Re: Solr Delete By Id Out of memory issue

2017-03-27 Thread Erick Erickson
Rohit: Well, whenever I see something like "I have this custom component..." I immediately want the problem to be demonstrated without that custom component before trying to debug Solr. As Chris explained, we can't clear the 1K entries. It's hard to imagine why keeping the last 1,000 entries

Re: Solr Delete By Id Out of memory issue

2017-03-25 Thread Rohit Kanchan
I think we figured out the issue. When we were converting delete-by-query in a Solr handler we were not making a deep copy of the BytesRef. We were keeping a reference to the same object, which was causing old deletes (LinkedHashMap) to accumulate more than 1K entries. But I think it is still not clearing those 1K

Re: Solr Delete By Id Out of memory issue

2017-03-22 Thread Rohit Kanchan
For commits we are relying on auto commits. We have defined the following in configs: 1 3 false 15000 One thing which I would like to mention is that we are not calling directly deleteById from client.
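The configuration fragment above lost its XML tags in the archive. Based on the earlier mention of "auto commits after 10K docs or 30 secs" and the surviving 15000 figure, a plausible reconstruction — an assumption, not a verified copy of the original — is:

```xml
<autoCommit>
  <maxDocs>10000</maxDocs>
  <maxTime>30000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>15000</maxTime>
</autoSoftCommit>
```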

Re: Solr Delete By Id Out of memory issue

2017-03-22 Thread Chris Hostetter
: OK, The whole DBQ thing baffles the heck out of me so this may be : totally off base. But would committing help here? Or at least be worth : a test? this isn't DBQ -- the OP specifically said deleteById, and that the oldDeletes map (only used for DBI) was the problem according to the heap

Re: Solr Delete By Id Out of memory issue

2017-03-21 Thread Erick Erickson
Chris: OK, The whole DBQ thing baffles the heck out of me so this may be totally off base. But would committing help here? Or at least be worth a test? On Tue, Mar 21, 2017 at 4:28 PM, Chris Hostetter wrote: > > : Thanks for replying. We are using Solr 6.1 version.

Re: Solr Delete By Id Out of memory issue

2017-03-21 Thread Chris Hostetter
: Thanks for replying. We are using Solr 6.1 version. Even I saw that it is : bounded by 1K count, but after looking at heap dump I was amazed how can it : keep more than 1K entries. But Yes I see around 7M entries according to : heap dump and around 17G of memory occupied by BytesRef there.

Re: Solr Delete By Id Out of memory issue

2017-03-21 Thread Rohit Kanchan
ter than delete by query. It works > : great for few days but after a week these delete by id get accumulated in > : Linked hash map of UpdateLog (variable name as olddeletes). Once this map > : is full then we are seeing out of memory. > > first off: what version of Solr are you runni

Re: Solr Delete By Id Out of memory issue

2017-03-21 Thread Chris Hostetter
are : using delete by id because it is faster than delete by query. It works : great for few days but after a week these delete by id get accumulated in : Linked hash map of UpdateLog (variable name as olddeletes). Once this map : is full then we are seeing out of memory. first off: what version

Solr Delete By Id Out of memory issue

2017-03-21 Thread Rohit Kanchan
Hi All, I am looking for some help to solve an out of memory issue which we are facing. We are storing messages in solr as documents. We run a pruning job every night to delete old message documents, issuing multiple delete-by-id queries to solr. Document

RE: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-09 Thread Tim Chen
:apa...@elyograg.org] Sent: Monday, 8 August 2016 11:44 PM To: solr-user@lucene.apache.org Subject: Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory On 8/7/2016 6:53 PM, Tim Chen wrote: > Exception in thread "http-bio-8983-exec-6571" java.lang.OutOfMemor

Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-09 Thread Shawn Heisey
On 8/8/2016 11:09 AM, Ritesh Kumar (Avanade) wrote: > This is great but where can I do this change in SOLR 6 as I have > implemented CDCR. In Solr 6, the chance of using Tomcat will be near zero, and the maxThreads setting in Solr's Jetty config should already be set to 10000. If you're seeing

RE: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-08 Thread Ritesh Kumar (Avanade)
.com] Sent: 08 August 2016 21:30 To: solr-user <solr-user@lucene.apache.org> Subject: Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory Yeah, Shawn, but you, like, know something about Tomcat and actually provide useful advice ;) On Mon, Aug 8, 2016 at 6:44 AM, Shawn H

Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-08 Thread Erick Erickson
Yeah, Shawn, but you, like, know something about Tomcat and actually provide useful advice ;) On Mon, Aug 8, 2016 at 6:44 AM, Shawn Heisey wrote: > On 8/7/2016 6:53 PM, Tim Chen wrote: >> Exception in thread "http-bio-8983-exec-6571" java.lang.OutOfMemoryError: >> unable to

Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-08 Thread Shawn Heisey
On 8/7/2016 6:53 PM, Tim Chen wrote: > Exception in thread "http-bio-8983-exec-6571" java.lang.OutOfMemoryError: > unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:714) > at >

Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-07 Thread Erick Erickson
ize="512" >autowarmCount="0"/> > > -Original Message- > From: Erick Erickson [mailto:erickerick...@gmail.com] > Sent: Saturday, 6 August 2016 2:31 AM > To: solr-user > Subject: Re: Solr Cloud with 5 servers cluster failed due to Lea

RE: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-07 Thread Tim Chen
im Reference: -Original Message- From: Erick Erickson [mailto:erickerick...@gmail.com] Sent: Saturday, 6 August 2016 2:31 AM To: solr-user Subject: Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory You don't really have to worry that much

RE: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-07 Thread Tim Chen
Hi Erick, Shawn, Thanks for following this up. 1, For some reason, ramBufferSizeMB in our solrconfig.xml is not set to 100MB, but 32MB. In that case, considering we have 10G for JVM, my understanding is we should not run out of memory due to large number of documents being added to Solr

Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-05 Thread Erick Erickson
ks guys. > > Cheers, > Tim > > -Original Message- > From: Shawn Heisey [mailto:apa...@elyograg.org] > Sent: Friday, 5 August 2016 4:55 PM > To: solr-user@lucene.apache.org > Subject: Re: Solr Cloud with 5 servers cluster failed due to Leader out of > memory >

RE: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-05 Thread Tim Chen
...@elyograg.org] Sent: Friday, 5 August 2016 4:55 PM To: solr-user@lucene.apache.org Subject: Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory On 8/4/2016 8:14 PM, Tim Chen wrote: > Couple of thoughts: 1, If Leader goes down, it should just go down, > like dead down, so other serve

Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-05 Thread Shawn Heisey
On 8/4/2016 8:14 PM, Tim Chen wrote: > Couple of thoughts: 1, If Leader goes down, it should just go down, > like dead down, so other servers can do the election and choose the > new leader. This at least avoids bringing down the whole cluster. Am I > right? Supplementing what Erick told you:

Re: Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-04 Thread Erick Erickson
The fact that all the shards have the same leader is somewhat of a red herring. Until you get hundreds of shards (perhaps across a _lot_ of collections), the additional load on the leaders is hard to measure. If you really see this as a problem, consider the BALANCESHARDUNIQUE and REBALANCELEADERS

Solr Cloud with 5 servers cluster failed due to Leader out of memory

2016-08-04 Thread Tim Chen
servers. Unfortunately most of time, all the Shards have the same Leader (eg, Solr server 01). Now, If we are adding a lot of documents to Solr, and eventually Solr 01 (All Shard's Leader) throws Out of memory in Tomcat log, and service goes down (but 8983 port is still responding to telnet

Re: Solr JSON facet range out of memory exception

2016-04-11 Thread Toke Eskildsen
On Mon, 2016-04-11 at 13:31 +0430, Ali Nazemian wrote: > http: //10.102.1.5: 8983/solr/edgeIndex/select?q=*%3A*=stat_owner_id: > 122952=0=json=true=true=%7bresult: %7b > type: range, > field: stat_date, > start: 146027158386, > end: 1460271583864, > gap: 1 > %7d%7d

Re: Solr JSON facet range out of memory exception

2016-04-11 Thread Ali Nazemian
Dear Yonik, Hi, The entire index has 50k documents not the faceted one. It is just a test case right now! I used the JSON facet API, here is my query after encoding: http: //10.102.1.5: 8983/solr/edgeIndex/select?q=*%3A*=stat_owner_id: 122952=0=json=true=true=%7bresult: %7b type: range, field:
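The query string above was mangled by the archive, but the surviving values show the problem: a range facet with gap=1 over an epoch-millisecond field allocates one bucket per millisecond, so even a single day of range means 86.4 million buckets, which exhausts the heap regardless of Xmx. Assuming standard JSON Facet API parameter names, the facet body was presumably equivalent to (start/end/gap copied as shown, including the apparently truncated start value):

```json
{
  "result": {
    "type": "range",
    "field": "stat_date",
    "start": 146027158386,
    "end": 1460271583864,
    "gap": 1
  }
}
```

A gap sized to the desired resolution (e.g. 86400000 for daily buckets) keeps the bucket count tractable.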

Re: Solr JSON facet range out of memory exception

2016-04-10 Thread Yonik Seeley
On Sun, Apr 10, 2016 at 3:47 AM, Ali Nazemian wrote: > Dear all Solr users/developers, > Hi, > I am going to use Solr JSON facet range on a date field which is stored as > long millis. Unfortunately I got a java heap space exception no matter how > much memory assigned to

Solr JSON facet range out of memory exception

2016-04-10 Thread Ali Nazemian
Dear all Solr users/developers, Hi, I am going to use Solr JSON facet range on a date field which is stored as long millis. Unfortunately I get a java heap space exception no matter how much memory is assigned to the Solr Java heap! I already tested with 2g heap space for a Solr core with 50k documents!!

Re: Out of memory error during full import

2016-02-04 Thread Shawn Heisey
On 2/4/2016 12:18 AM, Srinivas Kashyap wrote: > I have implemented 'SortedMapBackedCache' in my SqlEntityProcessor for the > child entities in data-config.xml. When i try to do full import, i'm getting > OutOfMemory error(Java Heap Space). I increased the HEAP allocation to the > maximum extent

Out of memory error during full import

2016-02-04 Thread Srinivas Kashyap
Hello, I have implemented 'SortedMapBackedCache' in my SqlEntityProcessor for the child entities in data-config.xml. When I try to do a full import, I'm getting an OutOfMemory error (Java Heap Space). I increased the HEAP allocation to the maximum extent possible. Is there a workaround to do the initial

Out of memory error during full import

2016-02-03 Thread Srinivas Kashyap
Hello, I have implemented 'SortedMapBackedCache' in my SqlEntityProcessor for the child entities in data-config.xml. When I try to do a full import, I'm getting an OutOfMemory error (Java Heap Space). I increased the HEAP allocation to the maximum extent possible. Is there a workaround to do initial

Re: Help on Out of memory when using Cursor with sort on Unique Key

2015-09-09 Thread Naresh Yadav
DocValues with reindexing does not seem a viable option for me as of now... Regarding the second question on Xmx4G: I tried various options (Xmx8G, Xmx10G, Xmx12G); none worked except Xmx14G, which does not seem practical for production with 16 GB RAM. While searching I came across:

Help on Out of memory when using Cursor with sort on Unique Key

2015-09-08 Thread Naresh Yadav
Cluster details : Solr Version : solr-4.10.4 No of nodes : 2, each 16 GB RAM No of shards : 2 Replication : 1 Each node memory parameter : -Xms2g, -Xmx4g Collection details : No of docs in my collection : 12.31 million Indexed fields per document : 2 Unique key field : tids Stored fields per

Re: Help on Out of memory when using Cursor with sort on Unique Key

2015-09-08 Thread Raja Pothuganti
Hi Naresh, 1) For 'sort by' fields, have you considered using docValues=true in the schema definition? If you are changing the schema definition, you would need to redo a full reindex after backing up & deleting the current index from dataDir. Also note that adding docValues=true would increase the size of the index.
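A hypothetical schema.xml sketch of that suggestion, using the uniqueKey field name tids mentioned in the thread (the field type is an assumption; the thread does not quote the schema):

```xml
<!-- Sketch only: enabling docValues on the uniqueKey field so that the sort
     required by cursorMark reads the column-oriented on-disk docValues
     structure instead of building a heap-resident FieldCache entry.
     A full reindex is required after this change. -->
<field name="tids" type="string" indexed="true" stored="true" docValues="true"/>
<uniqueKey>tids</uniqueKey>
```

With docValues in place, the first cursorMark request no longer needs to un-invert the entire uniqueKey field into the heap, which is what drives the OOM on a 12-million-document index with a small -Xmx.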

out of memory when trying to sort by id in a 1.5 billion index

2014-11-07 Thread adfel70
this message in context: http://lucene.472066.n3.nabble.com/out-of-memory-when-trying-to-sort-by-id-in-a-1-5-billion-index-tp4168156.html Sent from the Solr - User mailing list archive at Nabble.com.

Re: out of memory when trying to sort by id in a 1.5 billion index

2014-11-07 Thread Yago Riveiro
that, but I would rather not. All other use cases I have are using a stable 8gb heap. Any other way to handle this in solr 4.8? -- View this message in context: http://lucene.472066.n3.nabble.com/out-of-memory-when-trying-to-sort-by-id-in-a-1-5-billion-index-tp4168156.html Sent from the Solr - User mailing list archive at Nabble.com.

Re: out of memory when trying to sort by id in a 1.5 billion index

2014-11-07 Thread Chris Hostetter
: For sorting DocValues are the best option I think. yep, definitely a good idea. : I have a use case for cursor paging, and when I tried to check this, I got : outOfMemory just for sorting by id. what does the field/fieldType for your uniqueKey field look like? If you aren't using

Large Transaction Logs Out of memory

2014-09-18 Thread or gerson
) since the same data is being retrieved over and over. Thanks for the help -- View this message in context: http://lucene.472066.n3.nabble.com/Large-Transaction-Logs-Out-of-memory-tp4159636.html Sent from the Solr - User mailing list archive at Nabble.com.

Re: Large Transaction Logs Out of memory

2014-09-18 Thread Shawn Heisey
get out of memory errors. It's exceptionally difficult to write programs that have defined behavior in an OOM situation. Not impossible, but *very* hard, so I'm reasonably sure that no attempt has been made in the Lucene/Solr codebase. To eliminate the OOM, you need to either make your Java heap
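Independent of heap sizing, transaction logs are rotated at hard-commit boundaries: a hard commit closes the current tlog and starts a new one, so frequent hard commits keep individual log files small. A sketch of the standard solrconfig.xml settings (the interval is illustrative, not quoted from the thread):

```xml
<!-- Illustrative values: a hard commit every 15s, without opening a new
     searcher, bounds the size of any single transaction log while keeping
     commits cheap. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```

openSearcher=false makes the hard commit a durability-only operation; visibility of new documents can then be controlled separately with autoSoftCommit.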

Re: solr/lucene 4.10 out of memory issues

2014-09-18 Thread rulinma
mark. -- View this message in context: http://lucene.472066.n3.nabble.com/solr-lucene-4-10-out-of-memory-issues-tp4158262p4159829.html Sent from the Solr - User mailing list archive at Nabble.com.

Re: solr/lucene 4.10 out of memory issues

2014-09-16 Thread Luis Carlos Guerrero
Thanks for the response. I've been working on solving some of the most evident issues, and I also added your garbage collector parameters. First of all, the Lucene field cache is being filled with some entries which are marked as 'insanity'. Some of these were related to a custom field that we use
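The exact GC flags exchanged in the thread are not quoted in this archive; the fragment below sketches the kind of G1 settings commonly recommended for Solr 4.x at the time (every value is illustrative and should be tuned against your own GC logs):

```
# Illustrative G1 settings for a Solr 4.x JVM -- not the flags from the thread:
-Xms6g -Xmx6g
-XX:+UseG1GC
-XX:+ParallelRefProcEnabled
-XX:G1HeapRegionSize=8m
-XX:MaxGCPauseMillis=250
```

Fixing Xms equal to Xmx avoids heap-resize pauses, and ParallelRefProcEnabled helps with the large reference counts typical of Lucene caches.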

Re: solr/lucene 4.10 out of memory issues

2014-09-16 Thread Luis Carlos Guerrero
I checked and these 'insanity' cached keys correspond to fields we use for both grouping and faceting. The same behavior is documented here: https://issues.apache.org/jira/browse/SOLR-4866, although I have a single shard for every replica, which the JIRA says is a setup that should not generate

solr/lucene 4.10 out of memory issues

2014-09-11 Thread Luis Carlos Guerrero
hey guys, I'm running a SolrCloud cluster consisting of five nodes. My largest index contains 2.5 million documents and occupies about 6 gigabytes of disk space. We recently switched to the latest Solr version (4.10) from version 4.4.1, which we ran successfully for about a year without any major
