RE: OutOfMemory error solr 8.4.1

2020-03-09 Thread Srinivas Kashyap
I’m 99% certain that something in your custom jar is the culprit, otherwise we’d have seen a _lot_ of these. TIMED_WAITING is usually just a listener thread, but they shouldn’t be

Re: OutOfMemory error solr 8.4.1

2020-03-09 Thread Erick Erickson
> org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66) > java.lang.Thread.run(Thread.java:748)

RE: OutOfMemory error solr 8.4.1

2020-03-09 Thread Srinivas Kashyap
I assume you recompiled the jar file? Re-using the same one compiled against 5.x is unsupported; nobody will be able to help until you recompile. Once you’ve done that, if you still have the problem you need to take a thread dump to see

Re: OutOfMemory error solr 8.4.1

2020-03-06 Thread Erick Erickson
I assume you recompiled the jar file? Re-using the same one compiled against 5.x is unsupported; nobody will be able to help until you recompile. Once you’ve done that, if you still have the problem you need to take a thread dump to see if your custom code is leaking threads; that’s my number

Re: OutOfMemory error solr 8.4.1

2020-03-06 Thread Srinivas Kashyap
Hi Erick, We have custom code (schedulers that run delta imports on our cores), which I added as a jar placed in server/solr-webapp/WEB-INF/lib. Basically we fetch the JNDI datasource configured in jetty.xml (Oracle) and create a connection

Re: OutOfMemory error solr 8.4.1

2020-03-06 Thread Erick Erickson
This one can be a bit tricky. You’re not running out of overall memory, but you are running out of memory to allocate stacks. Which implies that, for some reason, you are creating a zillion threads. Do you have any custom code? You can take a thread dump and see what your threads are doing, and
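
A thread dump is easy to capture; a minimal sketch, assuming you know Solr's process id (the PID below is a placeholder):

    # dump all thread stacks of the running Solr JVM to a file
    jstack <solr-pid> > /tmp/solr-threads.txt

    # alternative: SIGQUIT makes the JVM print the dump to its own stdout/console log
    kill -3 <solr-pid>

A leak usually shows up as the same thread name repeated hundreds of times across successive dumps.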

OutOfMemory error solr 8.4.1

2020-03-06 Thread Srinivas Kashyap
Hi All, I have recently upgraded Solr to 8.4.1 and installed it as a service on a Linux machine. Once I start the service, it stays up for 15-18 hours and then suddenly stops without us shutting it down. In solr.log I found the below error. Can somebody guide me on what values I should be increasing in

Re: SPLITSHARD throwing OutOfMemory Error

2018-10-04 Thread Zheng Lin Edwin Yeo
Hi Atita, It would be good to consider upgrading to get the benefit of features like better memory consumption and better authentication. On a side note, it is also good to upgrade to Solr 7 now, as Solr indexes can only be upgraded from the previous major release version (Solr 6) to the

Re: SPLITSHARD throwing OutOfMemory Error

2018-10-04 Thread Atita Arora
Hi Andrzej, We've been weighing a lot of other reasons to upgrade our Solr for a very long time (better authentication handling, backups using CDCR, the new replication mode), and this probably has just given us another reason to upgrade. Thank you so much for the suggestion; I think it's good to

Re: SPLITSHARD throwing OutOfMemory Error

2018-10-04 Thread Andrzej Białecki
I know it’s not much help if you’re stuck with Solr 6.1 … but Solr 7.5 comes with an alternative strategy for SPLITSHARD that doesn’t consume as much memory and consumes almost no additional disk space on the leader. This strategy can be turned on with the “splitMethod=link” parameter.
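
For reference, a minimal sketch of invoking this through SolrJ's generic request API; the collection and shard names are placeholders, and splitMethod=link is the request parameter named above:

    import org.apache.solr.client.solrj.SolrRequest;
    import org.apache.solr.client.solrj.request.GenericSolrRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;

    // assumes an existing SolrClient pointed at the cluster
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "SPLITSHARD");
    params.set("collection", "collection1");   // placeholder
    params.set("shard", "shard1");             // placeholder
    params.set("splitMethod", "link");         // hard-link based split
    GenericSolrRequest req = new GenericSolrRequest(
        SolrRequest.METHOD.GET, "/admin/collections", params);
    solrClient.request(req);

The same call can of course be made as a plain HTTP request against /admin/collections.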

Re: SPLITSHARD throwing OutOfMemory Error

2018-10-04 Thread Atita Arora
Hi Edwin, Thanks for following up on this. So here are the configs: memory 30G (20G to Solr), disk 1TB, index ~500G. I think the reason this could be happening is that during a shard split, the unsplit index plus the split index persist on the instance and may

Re: SPLITSHARD throwing OutOfMemory Error

2018-10-04 Thread Zheng Lin Edwin Yeo
Hi Atita, What is the amount of memory that you have in your system? And what is your index size? Regards, Edwin

SPLITSHARD throwing OutOfMemory Error

2018-09-25 Thread Atita Arora
Hi, I am working on a test setup with Solr 6.1.0 cloud with 1 collection sharded across 2 shards with no replication. When a SPLITSHARD command is triggered it throws "java.lang.OutOfMemoryError: Java heap space" every time. I tried this with multiple heap settings of 8, 12 & 20G, but every time it

Re: DataImportHandler OutOfMemory Mysql

2017-04-02 Thread Shawn Heisey
On 4/1/2017 4:17 PM, marotosg wrote: > I am trying to load a big table into Solr using DataImportHandler and MySQL. > I am getting an OutOfMemory error because Solr is trying to load the full > table. I have been reading different posts and tried batchSize="-1". > https://wiki.apache.org/solr/DataImportHandlerFaq

Re: DataImportHandler OutOfMemory Mysql

2017-04-02 Thread Mikhail Khludnev
into Solr using DataImportHandler and > MySQL. > I am getting an OutOfMemory error because Solr is trying to load the full > table. I have been reading different posts and tried batchSize="-1". > https://wiki.apache.org/solr/DataImportHandlerFaq > > Do you have any idea what

DataImportHandler OutOfMemory Mysql

2017-04-01 Thread marotosg
Hi, I am trying to load a big table into Solr using DataImportHandler and MySQL. I am getting an OutOfMemory error because Solr is trying to load the full table. I have been reading different posts and tried batchSize="-1". https://wiki.apache.org/solr/DataImportHandlerFaq Do you have any idea what
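
For reference, this is where batchSize="-1" goes; per the FAQ linked above it maps to fetchSize=Integer.MIN_VALUE, which makes the MySQL driver stream rows instead of buffering the whole table. A minimal data-config.xml sketch, with placeholder connection details and field names:

    <dataConfig>
      <!-- batchSize="-1" turns on MySQL result streaming -->
      <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://localhost:3306/db" user="user" password="pass"
                  batchSize="-1"/>
      <document>
        <entity name="bigtable" query="SELECT id, title FROM bigtable">
          <field column="id" name="id"/>
          <field column="title" name="title"/>
        </entity>
      </document>
    </dataConfig>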

AW: AW: AW: OutOfMemory when batchupdating from SolrJ

2016-02-22 Thread Clemens Wyss DEV
> instead of creating a new ArrayList. Will do that, although I am not hunting for nanos, at least not at the moment ;)

Re: AW: AW: OutOfMemory when batchupdating from SolrJ

2016-02-22 Thread Shawn Heisey
On 2/22/2016 1:55 AM, Clemens Wyss DEV wrote: > SolrClient solrClient = getSolrClient( coreName, true ); > Collection batch = new ArrayList(); > while ( elements.hasNext() ) > { > IIndexableElement elem = elements.next(); > SolrInputDocument doc = createSolrDocForElement( elem, provider,

AW: AW: OutOfMemory when batchupdating from SolrJ

2016-02-22 Thread Clemens Wyss DEV
> solrClient.add( documents ); // [2] is of course: solrClient.add( batch ); // [2]

AW: AW: OutOfMemory when batchupdating from SolrJ

2016-02-22 Thread Clemens Wyss DEV
executorService.submit( () -> { } ); Thanks for any advice. If needed, I can also provide the OOM heap dump ...

Re: AW: OutOfMemory when batchupdating from SolrJ

2016-02-19 Thread Shawn Heisey
On 2/19/2016 3:08 AM, Clemens Wyss DEV wrote: > The logic is somewhat this: > > SolrClient solrClient = new HttpSolrClient( coreUrl ); > while ( got more elements to index ) > { > batch = create 100 SolrInputDocuments > solrClient.add( batch ) > } How much data is going into each of those

Re: OutOfMemory when batchupdating from SolrJ

2016-02-19 Thread Susheel Kumar
Clemens, first, allocating a higher (or the right) amount of heap memory is not a workaround but bec

AW: OutOfMemory when batchupdating from SolrJ

2016-02-19 Thread Clemens Wyss DEV
Thanks Susheel, but I am having problems in, and am talking about, SolrJ, i.e. the "client side of Solr" ...

Re: OutOfMemory when batchupdating from SolrJ

2016-02-19 Thread Susheel Kumar
When you run your SolrJ client indexing program, can you increase the heap size similar to below?

AW: OutOfMemory when batchupdating from SolrJ

2016-02-19 Thread Clemens Wyss DEV
When you run your SolrJ client indexing program, can you increase the heap size similar to below? I guess it may be on your client side that you are running into OOM... or please share the exact error if the below doesn't work/is the

Re: OutOfMemory when batchupdating from SolrJ

2016-02-19 Thread Susheel Kumar
"client side buffer" ...

Re: OutOfMemory when batchupdating from SolrJ

2016-02-19 Thread Susheel Kumar
> The char[] which occupies 180MB has

AW: OutOfMemory when batchupdating from SolrJ

2016-02-19 Thread Clemens Wyss DEV
The char[] which occupies 180MB has the following "path to root": char[87690841] @ 0x7940ba658 shopproducts#... |- java.lang.Thread @ 0x7321d9b80 SolrUtil executorService

AW: OutOfMemory when batchupdating from SolrJ

2016-02-19 Thread Clemens Wyss DEV
Environment: Solr 5.4.1. I am facing OOMs when batch-updating from SolrJ. I am seeing approx 30'000(!) SolrInputDocument instances, although my batch size is 100

OutOfMemory when batchupdating from SolrJ

2016-02-19 Thread Clemens Wyss DEV
Environment: Solr 5.4.1. I am facing OOMs when batch-updating from SolrJ. I am seeing approx 30'000(!) SolrInputDocument instances, although my batch size is 100; i.e., I call solrClient.add( documents ) for every 100 documents only. So I'd expect to see at most 100 SolrInputDocuments in memory at any
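
For contrast, a bounded batching loop that keeps at most one batch reachable at a time; a minimal sketch, assuming SolrJ 5.x and a placeholder core URL. If instances still accumulate with this shape, something else is holding references (the executorService mentioned above being one candidate):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class BatchIndexer {
        public static void main(String[] args) throws Exception {
            try (SolrClient client = new HttpSolrClient("http://localhost:8983/solr/core1")) {
                List<SolrInputDocument> batch = new ArrayList<>(100);
                for (int i = 0; i < 100_000; i++) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", Integer.toString(i));
                    batch.add(doc);
                    if (batch.size() == 100) {
                        client.add(batch);  // send the batch to Solr
                        batch.clear();      // drop references so the docs can be collected
                    }
                }
                if (!batch.isEmpty()) client.add(batch);
                client.commit();
            }
        }
    }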

AW: Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered

2015-06-08 Thread Clemens Wyss DEV
Sorry for the delay - https://issues.apache.org/jira/browse/SOLR-7646

Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered

2015-06-03 Thread Shawn Heisey
On 6/3/2015 12:20 AM, Clemens Wyss DEV wrote: Context: Lucene 5.1, Java 8 on Debian. 24G of RAM, of which 16G is available for Solr. I am seeing the following OOMs: ERROR - 2015-06-03 05:17:13.317; [ customer-1-de_CH_1] org.apache.solr.common.SolrException; null:java.lang.RuntimeException:

Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered

2015-06-03 Thread Clemens Wyss DEV
Context: Lucene 5.1, Java 8 on Debian. 24G of RAM, of which 16G is available for Solr. I am seeing the following OOMs: ERROR - 2015-06-03 05:17:13.317; [ customer-1-de_CH_1] org.apache.solr.common.SolrException; null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space

AW: Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered

2015-06-03 Thread Clemens Wyss DEV
On 6/3/2015 12:20 AM, Clemens Wyss DEV wrote: Context: Lucene 5.1, Java 8 on Debian. 24G of RAM, of which 16G is available for Solr. I am seeing the following OOMs: ERROR - 2015-06-03

AW: Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered

2015-06-03 Thread Clemens Wyss DEV
Hi Mark, what exactly should I file? What needs to be added/appended to the issue? Regards, Clemens

Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered

2015-06-03 Thread Mark Miller
File a JIRA issue please. It looks like that OOM exception is getting wrapped in a RuntimeException. Bug. - Mark

Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered

2015-06-03 Thread Mark Miller
We will have to find a way to deal with this long term. Browsing the code I can see a variety of places where problematic exception handling has been introduced since this was all fixed. - Mark

Re: AW: Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered

2015-06-03 Thread Shawn Heisey
On 6/3/2015 1:41 AM, Clemens Wyss DEV wrote: The oom script just kills Solr with the KILL signal (-9) and logs the kill. I know. But my feeling is that not even this happens, i.e. the script is not being executed. At least I see no solr_oom_killer-$SOLR_PORT-$NOW.log file ... Btw: Who
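
For context, bin/solr wires the killer script to the JVM with a flag along these lines (a sketch; the exact path and arguments vary by Solr version):

    -XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs"

If that handler never runs, no solr_oom_killer log file is written, which matches what is described here.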

Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not triggered

2015-06-03 Thread Erick Erickson
We will have to find a way to deal with this long term. Browsing the code I can see a variety

Re: PermGen space OutOfMemory error when Solr is running

2015-05-31 Thread Zheng Lin Edwin Yeo
Hi, I've recently upgraded my system to 16GB RAM. While there's no more OutOfMemory due to the physical memory being full, I get java.lang.OutOfMemoryError: PermGen space. This didn't happen previously, as I think the physical memory ran out first. This occurs

Re: PermGen space OutOfMemory error when Solr is running

2015-05-31 Thread Tomasz Borek
Regards, LAFK. 2015-05-18 4:07 GMT+02:00 Zheng Lin Edwin Yeo edwinye...@gmail.com: Hi, I've recently upgraded my system to 16GB RAM. While there's no more OutOfMemory due to the physical memory being full, I get java.lang.OutOfMemoryError: PermGen space

Re: PermGen space OutOfMemory error when Solr is running

2015-05-18 Thread Zheng Lin Edwin Yeo
Hi, I've recently upgraded my system to 16GB RAM. While there's no more OutOfMemory due to the physical memory being full, I get java.lang.OutOfMemoryError: PermGen space. This didn't happen previously, as I think the physical memory ran out first. This occurs

Re: PermGen space OutOfMemory error when Solr is running

2015-05-18 Thread Tomasz Borek
Hi, I've recently upgraded my system to 16GB RAM. While there's no more OutOfMemory due to the physical memory being full, I get java.lang.OutOfMemoryError: PermGen space. This didn't happen previously, as I think the physical memory ran out first

PermGen space OutOfMemory error when Solr is running

2015-05-17 Thread Zheng Lin Edwin Yeo
Hi, I've recently upgraded my system to 16GB RAM. While there's no more OutOfMemory due to the physical memory being full, I get java.lang.OutOfMemoryError: PermGen space. This didn't happen previously, as I think the physical memory ran out first. This occurs after about 2 days of running
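
PermGen is sized independently of the heap, so raising -Xmx cannot help; a sketch of the relevant knob on Java 7 and earlier (the 256m value is illustrative):

    # PermGen is a separate region with its own cap
    java -Xmx4g -XX:MaxPermSize=256m -jar start.jar

On Java 8 and later PermGen is gone, replaced by natively allocated Metaspace, so upgrading the JVM is the other way out.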

OUTOFMEMORY

2015-03-04 Thread Rajesh
Hi, I'm using SortedMapBackedCache for my child entities. When I use this I get an OutOfMemory exception and the records are not getting indexed. I've increased my heap size to 3GB, but still the same result. Is there a way I can configure it to index 1L (100,000) records and clear the cache

OutOfMemory on 28 docs with facet.method=fc/fcs

2014-11-18 Thread Mohsin Beg Beg
Hi, I am getting OOM when faceting on numFound=28. The receiving Solr node throws the OutOfMemoryError even though there is 7GB of available heap before the faceting request is submitted. If a different Solr node is selected, that one fails too. Any suggestions? 1) Test setup is: 100

RE: OutOfMemory on 28 docs with facet.method=fc/fcs

2014-11-18 Thread Toke Eskildsen
Mohsin Beg Beg [mohsin@oracle.com] wrote: I am getting OOM when faceting on numFound=28. The receiving Solr node throws the OutOfMemoryError even though there is 7GB of available heap before the faceting request was submitted. fc and fcs faceting memory overhead is (nearly) independent of the
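
In other words, fc/fcs cost scales with the index's terms, not with the 28 hits. A sketch of trying the term-enumerating method instead, which leans on the filterCache rather than building per-field arrays (the field name is a placeholder, and solrClient is assumed to exist):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.response.QueryResponse;

    SolrQuery q = new SolrQuery("*:*");
    q.setFacet(true);
    q.addFacetField("myField");          // placeholder
    q.set("facet.method", "enum");       // enumerate terms instead of using the field cache
    QueryResponse rsp = solrClient.query(q);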

Re: OutOfMemory on 28 docs with facet.method=fc/fcs

2014-11-18 Thread Mohsin Beg Beg
Mohsin Beg Beg [mohsin@oracle.com] wrote: I am getting OOM when faceting on numFound=28. The receiving solr node throws the OutOfMemoryError

Re: OutOfMemory on 28 docs with facet.method=fc/fcs

2014-11-18 Thread Shawn Heisey
On 11/18/2014 3:06 PM, Mohsin Beg Beg wrote: Looking at SimpleFacets.java, doesn't fc/fcs iterate only over the DocSet for the fields? So assuming each field has a unique term across the 28 rows, a max of 28 * 15 unique small strings (100 bytes each) should be on the order of 1MB. For 100

RE: OutOfMemory on 28 docs with facet.method=fc/fcs

2014-11-18 Thread Toke Eskildsen
Mohsin Beg Beg [mohsin@oracle.com] wrote: Looking at SimpleFacets.java, doesn't fc/fcs iterate only over the DocSet for the fields? To get the seed for the concrete faceting resolution, yes. That still leaves the mapping and the counting structures. So assuming each field has a unique

Re: OutOfMemory on 28 docs with facet.method=fc/fcs

2014-11-18 Thread Mohsin Beg Beg
On 11/18/2014 3:06 PM, Mohsin Beg Beg wrote: Looking at SimpleFacets.java, doesn't fc/fcs iterate only over the DocSet for the fields. So

Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Mike Hugo
-1000, 1000-2000, 2000-3000) and sending each chunk to a worker in a pool of multi-threaded workers. This worked well for us with a single server. However upon upgrading to solr cloud, we've found that this quickly (within the first 4 or 5 requests) causes an OutOfMemory error on the coordinating

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Mike Hugo
an OutOfMemory error on the coordinating node that receives the query. I don't fully understand what's going on here, but it looks like the coordinating node receives the query and sends it to the shard requested. For example, given: shards=shard3&sort=id+asc&start=4000&q=*:*&rows=1000 The coordinating

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Steve Rowe
cloud, we've found that this quickly (within the first 4 or 5 requests) causes an OutOfMemory error on the coordinating node that receives the query. I don't fully understand what's going on here, but it looks like the coordinating node receives the query and sends it to the shard requested

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Mike Hugo
requests) causes an OutOfMemory error on the coordinating node that receives the query. I don't fully understand what's going on here, but it looks like the coordinating node receives the query and sends it to the shard requested. For example, given: shards=shard3&sort=id+asc&start=4000&q

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Steve Rowe
, we've found that this quickly (within the first 4 or 5 requests) causes an OutOfMemory error on the coordinating node that receives the query. I don't fully understand what's going on here, but it looks like the coordinating node receives the query and sends it to the shard requested

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Mike Hugo
of multi-threaded workers. This worked well for us with a single server. However upon upgrading to solr cloud, we've found that this quickly (within the first 4 or 5 requests) causes an OutOfMemory error on the coordinating node that receives the query. I don't fully understand what's going

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Greg Pendlebury
, 2000-3000) and sending each chunk to a worker in a pool of multi-threaded workers. This worked well for us with a single server. However upon upgrading to solr cloud, we've found that this quickly (within the first 4 or 5 requests) causes an OutOfMemory error

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Mike Hugo
) and sending each chunk to a worker in a pool of multi-threaded workers. This worked well for us with a single server. However upon upgrading to solr cloud, we've found that this quickly (within the first 4 or 5 requests) causes an OutOfMemory error on the coordinating node

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Greg Pendlebury
. However upon upgrading to solr cloud, we've found that this quickly (within the first 4 or 5 requests) causes an OutOfMemory error on the coordinating node that receives the query. I don't fully understand what's going on here, but it looks like

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Yonik Seeley
On Mon, Mar 17, 2014 at 7:14 PM, Greg Pendlebury greg.pendleb...@gmail.com wrote: > My suspicion is that it won't work in parallel. Deep paging with cursorMark does work with distributed search (assuming that's what you meant by parallel... querying sub-shards in parallel?). -Yonik
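
A minimal cursorMark loop in SolrJ (available since Solr 4.7); names are placeholders, solrClient is assumed to exist, and the sort must include the uniqueKey field. Because each next cursor comes from the previous response, pages are fetched sequentially rather than handed out to workers up front:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.params.CursorMarkParams;

    SolrQuery q = new SolrQuery("*:*");
    q.setRows(1000);
    q.setSort(SolrQuery.SortClause.asc("id"));            // uniqueKey tie-breaker
    String cursor = CursorMarkParams.CURSOR_MARK_START;   // "*"
    while (true) {
        q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
        QueryResponse rsp = solrClient.query(q);
        // ... hand rsp.getResults() to a worker ...
        String next = rsp.getNextCursorMark();
        if (cursor.equals(next)) break;                   // unchanged cursor: done
        cursor = next;
    }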

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Greg Pendlebury
Sorry, I meant one thread requesting records 1 - 1000, whilst the next thread requests 1001 - 2000 from the same ordered result set. We've observed several of our customers trying to harvest our data with multi-threaded scripts that work like this. I thought it would not work using cursor marks...

Re: Deep paging in parallel with solr cloud - OutOfMemory

2014-03-17 Thread Mike Hugo
Greg and I are talking about the same type of parallel. We do the same thing - if I know there are 10,000 results, we can chunk that up across multiple worker threads up front without having to page through the results. We know there are 10 chunks of 1,000, so we can have one thread process

OutOfMemory while indexing (PROD environment!)

2013-06-06 Thread Isaac Hebsh
Hi everyone, My SolrCloud cluster (4.3.0) came into production a few days ago. Docs are being indexed into Solr using the /update request handler, as a POST request with a text/xml content type. The collection is sharded into 36 pieces, and each shard has two replicas. There are 36 nodes (each

Re: OutOfMemory while indexing (PROD environment!)

2013-06-06 Thread Otis Gospodnetic
Hi, Try running jstat to see if the heap is full. 4GB is not much and could easily be eaten by structures used for sorting, faceting, and caching. Plug: SPM has a new feature that lets you send graphs with various metrics to the Solr mailing list. I'd personally look at the GC graphs to see if GC

Re: Spatial+Dataimport full import results in OutOfMemory for a rectangle defining a line

2013-01-22 Thread David Smiley (@MITRE.org)

Re: Spatial+Dataimport full import results in OutOfMemory for a rectangle defining a line

2013-01-22 Thread Javier Molina

Re: Spatial+Dataimport full import results in OutOfMemory for a rectangle defining a line

2013-01-21 Thread David Smiley (@MITRE.org)

Re: Spatial+Dataimport full import results in OutOfMemory for a rectangle defining a line

2013-01-21 Thread Javier Molina
rollback INFO: end_rollback

Re: Spatial+Dataimport full import results in OutOfMemory for a rectangle defining a line

2013-01-21 Thread David Smiley (@MITRE.org)

Re: Spatial+Dataimport full import results in OutOfMemory for a rectangle defining a line

2013-01-21 Thread Javier Molina

Re: java.io.IOException: Map failed :: OutOfMemory

2012-11-13 Thread Erick Erickson

Re: java.io.IOException: Map failed :: OutOfMemory

2012-11-13 Thread uwe72
if the doc exists or not. We use the functionality in SolrJ to delete a list of ids. The error always occurs during this deletion.

Re: java.io.IOException: Map failed :: OutOfMemory

2012-11-13 Thread uwe72
Environment (build 1.6.0_33-b03) Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03, mixed mode)

Re: java.io.IOException: Map failed :: OutOfMemory

2012-11-13 Thread uwe72

AW: java.io.IOException: Map failed :: OutOfMemory

2012-11-13 Thread André Widhani
Today the same exception: INFO: [] webapp=/solr path=/update params={waitSearcher=true&commit=true&wt=javabin&waitFlush=true&version=2} status=0 QTime=1009 Nov 13, 2012 2:02:27 PM org.apache.solr.core.SolrDeletionPolicy

Re: AW: java.io.IOException: Map failed :: OutOfMemory

2012-11-13 Thread uwe72

java.io.IOException: Map failed :: OutOfMemory

2012-11-12 Thread uwe72
, 2012 5:16:41 PM org.apache.solr.update.SolrIndexWriter finalize SEVERE: SolrIndexWriter was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!

Re: Sort by date field = outofmemory?

2012-07-14 Thread Lance Norskog
On Wed, Jul 11, 2012 at 4:05 AM, Bruno Mannina bmann...@free.fr wrote: Hi, some news this morning... I added the -Xms1024m option and now it works?! No OutOfMemory?! java -jar -Xms1024m -Xmx2048m start.jar

Re: Sort by date field = outofmemory?

2012-07-12 Thread Erick Erickson
I added the -Xms1024m option and now it works?! No OutOfMemory?! java -jar -Xms1024m -Xmx2048m start.jar On 11/07/2012 09:55, Bruno Mannina wrote: Hi Yury, Thanks for your answer. OK to increase memory, but I have a problem with that: I have 8GB on my computer but the JVM accepts only 2GB max

Re: Sort by date field = outofmemory?

2012-07-11 Thread Michael Della Bitta
Bruno Mannina bmann...@free.fr wrote: Hi, some news this morning... I added the -Xms1024m option and now it works?! No OutOfMemory?! java -jar -Xms1024m -Xmx2048m start.jar On 11/07/2012 09:55, Bruno Mannina wrote: Hi Yury, Thanks for your answer. OK to increase memory, but I have a problem

Re: Sort by date field = outofmemory?

2012-07-11 Thread Yury Kats
Some news this morning... I added the -Xms1024m option and now it works?! No OutOfMemory?! java -jar -Xms1024m -Xmx2048m start.jar On 11/07/2012 09:55, Bruno Mannina wrote: Hi Yury, Thanks for your answer. OK to increase memory, but I have a problem with that: I have 8GB on my computer

Sort by date field = outofmemory?

2012-07-10 Thread Bruno Mannina
Dear Solr Users, Each time I try to do a request with sort=pubdate+desc I get: GRAVE: java.lang.OutOfMemoryError: Java heap space. I use Solr 3.6; I have around 80M docs and my request gets around 160 results. Actually for my test, I use Jetty: java -jar -Xmx2g start.jar PS: If I write

Re: Sort by date field = outofmemory?

2012-07-10 Thread Bruno Mannina
To complete my question: after having this error, some fields (not all) aren't reachable, with the same error. On 10/07/2012 14:25, Bruno Mannina wrote: Dear Solr Users, Each time I try to do a request with sort=pubdate+desc I get: GRAVE: java.lang.OutOfMemoryError: Java heap space

Re: Sort by date field = outofmemory?

2012-07-10 Thread Yury Kats
Sorting is a memory-intensive operation indeed. Not sure what you are asking, but it may very well be that your only option is to give the JVM more memory. On 7/10/2012 8:25 AM, Bruno Mannina wrote: Dear Solr Users, Each time I try to do a request with sort=pubdate+desc I get: GRAVE:

Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Ralf Matulat
Good morning! Recently we ran into an OOME while optimizing our index. It looks like it's related to the nio class and memory handling. I'll try to describe the environment, the error, and what we did to solve the problem. Nevertheless, none of our approaches was successful. The

Re: Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Michael McCandless
Are you sure you are using a 64-bit JVM? Are you sure you really changed your vmem limit to unlimited? That should have resolved the OOME from mmap. Or: can you run cat /proc/sys/vm/max_map_count? This is a limit that Linux imposes on the total number of maps in a single process. But the

Re: Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Ralf Matulat
Dear Mike, thanks for your reply. Just a couple of minutes ago we found a solution, or, to be honest, where we went wrong. Our mistake was the use of ulimit. We missed that ulimit sets the vmem for each shell separately. So we set 'ulimit -v unlimited' in one shell, thinking that we've done

Re: Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Michael McCandless
OK, excellent. Thanks for bringing closure, Mike McCandless http://blog.mikemccandless.com

Re: Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Shawn Heisey
Michael, What is the best central place on an rpm-based distro (CentOS 6 in my case) to raise the vmem limit for specific user(s), assuming it's not already correct? I'm using /etc/security/limits.conf to raise the open file limit for the user that runs Solr: ncindex hard nofile
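
Presumably the same file takes address-space entries; a hedged sketch (the 'as' item is the address-space limit in KB, the values are illustrative, and the user name follows the snippet above):

    # /etc/security/limits.conf
    ncindex  hard  nofile  65535
    ncindex  soft  nofile  65535
    ncindex  hard  as      unlimited   # 'as' = address space (vmem)
    ncindex  soft  as      unlimited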

Re: Optimize fails with OutOfMemory Exception - sun.nio.ch.FileChannelImpl.map involved

2011-09-22 Thread Michael McCandless
Unfortunately I really don't know ;) Every time I set forth to figure things like this out I seem to learn some new way... Maybe someone else knows? Mike McCandless http://blog.mikemccandless.com

RE: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap getting collected?

2010-12-14 Thread Jonathan Rochkind
The second commit will bring in all changes, from both syncs. Think of the sync part as a glorified rsync of files on disk. So the files will have been copied to disk, but the in-memory index on the slave will not have

RE: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap getting collected?

2010-12-14 Thread Upayavira
The second commit will bring in all changes, from both syncs. Think of the sync part as a glorified rsync of files on disk. So the files will have been copied to disk, but the in-memory

Re: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap getting collected?

2010-12-14 Thread Jonathan Rochkind
The second commit will bring in all changes, from both syncs. Think of the sync part as a glorified rsync of files on disk. So the files will have been copied to disk, but the in-memory index on the slave

Re: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap getting collected?

2010-12-14 Thread Shawn Heisey
On 12/14/2010 9:02 AM, Jonathan Rochkind wrote: 1. Will the existing index searcher have problems because the files have been changed out from under it? 2. Will a future replication -- one in which NO new files are available on the master -- still trigger a future commit on the slave? I'm not really

Re: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap getting collected?

2010-12-14 Thread Jonathan Rochkind
Thanks Shawn, that helps explain things. So the issue there, with using maxWarmingSearchers to try to prevent out-of-control RAM/CPU usage from overlapping on-deck searchers, combined with replication... is if you're still pulling down replications very frequently but using maxWarmingSearchers to prevent

Re: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap getting collected?

2010-12-13 Thread John Russell
Thanks for the response. The date types are defined in our schema file like this: <fieldType name="date" class="solr.TrieDateField" omitNorms="true" precisionStep="0" positionIncrementGap="0"/> <!-- A Trie based date field for faster date range queries and date faceting. --> <fieldType name="tdate"

Re: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap getting collected?

2010-12-13 Thread Jonathan Rochkind
Forgive me if I've said this in this thread already, but I'm beginning to think this is the main 'mysterious' cause of Solr RAM/GC issues. Are you committing very frequently? So frequently that you commit faster than warming operations on a new Solr index can complete, and you're

Re: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap getting collected?

2010-12-13 Thread John Russell
Wow, you read my mind. We are committing very frequently; we are trying to get as close to real-time access to the stuff we put in as possible. Our current commit time is... ahem, every 4 seconds. Is that insane? I'll try ConcMarkSweep as well and see if that helps.
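
For reference, the knobs this thread circles around live in solrconfig.xml; a sketch with illustrative values, not a recommendation (element placement varies a little between versions):

    <!-- throttle hard commits to at most once a minute -->
    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxTime>60000</maxTime>
      </autoCommit>
    </updateHandler>

    <!-- cap concurrent warming searchers -->
    <maxWarmingSearchers>2</maxWarmingSearchers>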

Re: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap getting collected?

2010-12-13 Thread Yonik Seeley
On Mon, Dec 13, 2010 at 8:47 PM, John Russell jjruss...@gmail.com wrote: Wow, you read my mind. We are committing very frequently; we are trying to get as close to real-time access to the stuff we put in as possible. Our current commit time is... ahem, every 4 seconds. Is that insane?
