Sent: 09 March 2020 21:13
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory error solr 8.4.1
I’m 99% certain that something in your custom jar is the culprit, otherwise
we’d have seen a _lot_ of these. TIMED_WAITING is usually just a listener
thread, but they shouldn’t be
> org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
> java.lang.Thread.run(Thread.java:748)
>
> Thanks and Regards,
> Srinivas Kashyap
>
> -Original Message-
> From: Erick Erickson
> Sent: 06 March 2020 21:34
> To:
@lucene.apache.org
Subject: Re: OutOfMemory error solr 8.4.1
I assume you recompiled the jar file? Re-using the same one compiled against 5x
is unsupported; nobody will be able to help until you recompile.
Once you’ve done that, if you still have the problem you need to take a thread
dump to see if your custom code is leaking threads, that’s my number
Hi Erick,
We have custom code which are schedulers to run delta imports on our cores and
I have added that custom code as a jar and I have placed it on
server/solr-webapp/WEB-INF/lib. Basically we are fetching the JNDI datasource
configured in the jetty.xml(Oracle) and creating connection
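Custom scheduler code like this is a classic source of exactly this failure mode. A hypothetical, self-contained sketch (not the poster's actual code) of how creating a fresh ExecutorService per run leaks threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SchedulerLeakDemo {
    // Anti-pattern: a fresh pool per invocation that is never shut down.
    static void leakyRun(Runnable job) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.submit(job);
        // ...pool.shutdown() is never called, so the worker thread (and its
        // stack allocation) lingers for the life of the JVM.
    }

    // Fix: reuse one pool for the life of the application and shut it down on exit.
    static final ExecutorService SHARED = Executors.newFixedThreadPool(4);

    public static void main(String[] args) throws InterruptedException {
        int before = Thread.activeCount();
        for (int i = 0; i < 50; i++) leakyRun(() -> {});
        Thread.sleep(300); // give the leaked pools a moment to spin up workers
        int after = Thread.activeCount();
        // roughly one lingering thread per leaked pool
        System.out.println("live threads grew by " + (after - before));
        SHARED.shutdown();
    }
}
```

Each leaked pool keeps at least one non-daemon worker alive, and every thread costs stack space, which matches the "cannot allocate new native thread / stack" flavor of OOM described above.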
This one can be a bit tricky. You’re not running out of overall memory, but you
are running out of memory to allocate stacks. Which implies that, for some
reason, you are creating a zillion threads. Do you have any custom code?
You can take a thread dump and see what your threads are doing, and
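Besides an external jstack, a dump can be taken from inside the JVM with plain java.lang.Thread APIs; a minimal sketch that groups live threads by name pattern, which makes runaway thread creation easy to spot:

```java
import java.util.Map;
import java.util.TreeMap;

public class ThreadDumpSketch {
    public static void main(String[] args) {
        // Count live threads after normalizing digits in their names, so
        // "pool-1-thread-1", "pool-2-thread-1", ... collapse into one bucket.
        Map<String, Integer> byPattern = new TreeMap<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            String pattern = t.getName().replaceAll("\\d+", "#");
            byPattern.merge(pattern, 1, Integer::sum);
        }
        byPattern.forEach((name, n) -> System.out.println(n + "\t" + name));
    }
}
```

A leak shows up as one bucket (often "pool-#-thread-#") with a count that grows over time.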
Hi All,
I have recently upgraded Solr to 8.4.1 and have installed Solr as a service on a
Linux machine. Once I start the service, it stays up for 15-18 hours and then
suddenly stops without us shutting it down. In solr.log I found the below error.
Can somebody guide me on what values I should be increasing in
Hi Atita,
It would be good to consider upgrading to take advantage of improvements
like better memory consumption and better authentication.
On a side note, it is also good to upgrade to Solr 7 now, as Solr indexes
can only be upgraded from the previous major release version (Solr 6) to
the
Hi Andrzej,
We've been weighing a lot of other reasons to upgrade our Solr for a
very long time, like better authentication handling, backups using CDCR, and
the new replication mode, and this has probably just given us another reason
to upgrade.
Thank you so much for the suggestion, I think it's good to
I know it’s not much help if you’re stuck with Solr 6.1 … but Solr 7.5 comes
with an alternative strategy for SPLITSHARD that doesn’t consume as much memory
and consumes almost no additional disk space on the leader. This strategy
can be turned on with the “splitMethod=link” parameter.
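For reference, the parameter rides on the normal Collections API call; an illustrative request (collection and shard names are placeholders):

```
/admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1&splitMethod=link
```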
Hi Edwin,
Thanks for following up on this.
So here are the configs :
Memory - 30G - 20 G to Solr
Disk - 1TB
Index = ~ 500G
and I think the reason this could be happening is that during a shard
split, the unsplit index plus the split index persist on the instance and may
Hi Atita,
What is the amount of memory that you have in your system?
And what is your index size?
Regards,
Edwin
On Tue, 25 Sep 2018 at 22:39, Atita Arora wrote:
Hi,
I am working on a test setup with Solr 6.1.0 cloud with 1 collection
sharded across 2 shards with no replication. When a SPLITSHARD command is
triggered, it throws "java.lang.OutOfMemoryError: Java heap space" every time.
I tried this with multiple heap settings of 8, 12 & 20G, but every time it
On 4/1/2017 4:17 PM, marotosg wrote:
> I am trying to load a big table into Solr using DataImportHandler and Mysql.
> I am getting OutOfMemory error because Solr is trying to load the full
> table. I have been reading different posts and tried batchSize="-1".
> https:
Hi,
I am trying to load a big table into Solr using DataImportHandler and MySQL.
I am getting an OutOfMemory error because Solr is trying to load the full
table. I have been reading different posts and tried batchSize="-1".
https://wiki.apache.org/solr/DataImportHandlerFaq
Do you hav
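For the MySQL driver specifically, batchSize="-1" causes DIH's JdbcDataSource to set fetchSize to Integer.MIN_VALUE, which makes the driver stream rows one at a time instead of buffering the whole table in memory. An illustrative data-config fragment (driver URL, credentials, and table names are placeholders):

```xml
<dataConfig>
  <!-- batchSize="-1": MySQL streams rows instead of loading the full table -->
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb"
              user="solr" password="secret"
              batchSize="-1"/>
  <document>
    <entity name="bigtable" query="SELECT id, title FROM bigtable"/>
  </document>
</dataConfig>
```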
instead of creating a
> new ArrayList
Will do that, although I am not hunting for nanoseconds, at least not at the
moment ;)
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, 22 February 2016 15:57
To: solr-user@lucene.apache.org
Subject: Re: AW: AW: O
On 2/22/2016 1:55 AM, Clemens Wyss DEV wrote:
> SolrClient solrClient = getSolrClient( coreName, true );
> Collection<SolrInputDocument> batch = new ArrayList<>();
> while ( elements.hasNext() )
> {
> IIndexableElement elem = elements.next();
> SolrInputDocument doc = createSolrDocForElement( elem, provider,
> solrClient.add( documents ); // [2]
is of course:
solrClient.add( batch ); // [2]
-Original Message-
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Monday, 22 February 2016 09:55
To: solr-user@lucene.apache.org
Subject: AW: AW: OutOfMemory when batchupdating f
.:
executorService.submit( () -> {
} );
Thanks for any advice. If needed, I can also provide the OOM heap dump ...
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Friday, 19 February 2016 18:59
To: solr-user@lucene.apache.org
Subject: Re: AW: OutOfMemory w
On 2/19/2016 3:08 AM, Clemens Wyss DEV wrote:
> The logic is somewhat this:
>
> SolrClient solrClient = new HttpSolrClient( coreUrl );
> while ( got more elements to index )
> {
> batch = create 100 SolrInputDocuments
> solrClient.add( batch )
> }
How much data is going into each of those
el Kumar [mailto:susheel2...@gmail.com]
> Sent: Friday, 19 February 2016 17:23
> To: solr-user@lucene.apache.org
> Subject: Re: OutOfMemory when batchupdating from SolrJ
>
> Clemens,
>
> First, allocating a higher or the right amount of heap memory is not a
> workaround but bec
Thanks Susheel,
but the problems I am having are in SolrJ, i.e. the "client side
of Solr" ...
-Original Message-
From: Susheel Kumar [mailto:susheel2...@gmail.com]
Sent: Friday, 19 February 2016 17:23
To: solr-user@lucene.apache.org
Subject: Re: OutOfM
--
> From: Susheel Kumar [mailto:susheel2...@gmail.com]
> Sent: Friday, 19 February 2016 14:42
> To: solr-user@lucene.apache.org
> Subject: Re: OutOfMemory when batchupdating from SolrJ
>
> When you run your SolrJ Client Indexing program, can you increase heap
> size similar
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory when batchupdating from SolrJ
When you run your SolrJ client indexing program, can you increase the heap size
as below. I guess it may be on your client side that you are running into
OOM... or please share the exact error if the below doesn't work/is the
lient side buffer" ...
>>
>> -Original Message-
>> From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
>> Sent: Friday, 19 February 2016 11:09
>> To: solr-user@lucene.apache.org
>> Subject: AW: OutOfMemory when batchupdating from SolrJ
>
...
>
> -Original Message-
> From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
> Sent: Friday, 19 February 2016 11:09
> To: solr-user@lucene.apache.org
> Subject: AW: OutOfMemory when batchupdating from SolrJ
>
> The char[] which occupies 180MB has
Friday, 19 February 2016 11:09
To: solr-user@lucene.apache.org
Subject: AW: OutOfMemory when batchupdating from SolrJ
The char[] which occupies 180MB has the following "path to root"
char[87690841] @ 0x7940ba658 shopproducts#...
|- java.lang.Thread @ 0x7321d9b80 SolrUtil executorService
to:clemens...@mysign.ch]
Sent: Friday, 19 February 2016 09:07
To: solr-user@lucene.apache.org
Subject: OutOfMemory when batchupdating from SolrJ
Environment: Solr 5.4.1
I am facing OOMs when batch-updating from SolrJ. I am seeing approx 30'000(!)
SolrInputDocument instances, although my batch size is 100, i.e. I call
solrClient.add( documents ) for every 100 documents only. So I'd expect to see
at most 100 SolrInputDocuments in memory at any
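The intended pattern can be sketched self-contained, with plain strings standing in for SolrInputDocument; if far more instances stay live than the batch size, something else (a client-side queue, a listener, a retained reference) is still holding them:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchingSketch {
    static int peakBatchSize = 0;

    // Stand-in for solrClient.add(batch): records the batch size, then forgets it.
    static void send(List<String> batch) {
        peakBatchSize = Math.max(peakBatchSize, batch.size());
    }

    public static void main(String[] args) {
        final int BATCH = 100;
        List<String> batch = new ArrayList<>();
        for (int i = 0; i < 30_000; i++) {   // 30'000 "documents" overall
            batch.add("doc-" + i);
            if (batch.size() == BATCH) {
                send(batch);
                batch = new ArrayList<>();   // drop the reference so the old batch can be GC'd
            }
        }
        if (!batch.isEmpty()) send(batch);
        System.out.println(peakBatchSize);   // never more than 100 live per batch
    }
}
```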
Sorry for the delay - https://issues.apache.org/jira/browse/SOLR-7646
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Wednesday, 3 June 2015 17:39
To: solr-user@lucene.apache.org
Subject: Re: Solr OutOfMemory but no heap and dump and oo_solr.sh
On 6/3/2015 12:20 AM, Clemens Wyss DEV wrote:
Context: Lucene 5.1, Java 8 on debian. 24G of RAM whereof 16G available for
Solr.
I am seeing the following OOMs:
ERROR - 2015-06-03 05:17:13.317; [ customer-1-de_CH_1]
org.apache.solr.common.SolrException; null:java.lang.RuntimeException:
java.lang.OutOfMemoryError: Java heap space
To: solr-user@lucene.apache.org
Subject: Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not
triggered
On 6/3/2015 12:20 AM, Clemens Wyss DEV wrote:
Hi Mark,
what exactly should I file? What needs to be added/appended to the issue?
Regards
Clemens
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Wednesday, 3 June 2015 14:23
To: solr-user@lucene.apache.org
Subject: Re: Solr OutOfMemory but no heap
File a JIRA issue please. It looks like that OOM Exception is getting wrapped
in a RuntimeException. Bug.
- Mark
On Wed, Jun 3, 2015 at 2:20 AM Clemens Wyss DEV clemens...@mysign.ch
wrote:
Context: Lucene 5.1, Java 8 on debian. 24G of RAM whereof 16G available
for Solr.
I am seeing the following
We will have to find a way to deal with this long term. Browsing the code
I can see a variety of places where problematic exception handling has been
introduced since this was all fixed.
- Mark
On Wed, Jun 3, 2015 at 8:19 AM Mark Miller markrmil...@gmail.com wrote:
File a JIRA issue please. That
On 6/3/2015 1:41 AM, Clemens Wyss DEV wrote:
The oom script just kills Solr with the KILL signal (-9) and logs the kill.
I know. But my feeling is, that not even this happens, i.e. the script is
not being executed. At least I see no solr_oom_killer-$SOLR_PORT-$NOW.log
file ...
Btw:
Who
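For reference, the killer script only runs if the JVM was started with the corresponding flag; Solr's start script passes something along these lines (port and paths are illustrative):

```
-XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs"
```

If the flag is missing from the running process (check with `ps`), no solr_oom_killer log will ever appear.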
: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Wednesday, 3 June 2015 14:23
To: solr-user@lucene.apache.org
Subject: Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not
triggered
We will have to find a way to deal with this long term. Browsing the code I
can see a variety
regards,
LAFK
2015-05-18 4:07 GMT+02:00 Zheng Lin Edwin Yeo edwinye...@gmail.com:
Hi,
I've recently upgraded my system to 16GB RAM. While there's no more
OutOfMemory due to the physical memory being full, I get this
java.lang.OutOfMemoryError: PermGen space. This didn't happen previously,
as I think the physical memory ran out first.
This occurs after about 2 days of running
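PermGen is sized separately from the main heap, so more RAM or a larger -Xmx won't help; on Java 7 and earlier it has its own flags (Java 8 removed PermGen in favor of Metaspace). Illustrative settings:

```
-XX:PermSize=64m -XX:MaxPermSize=256m
```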
Hi,
I'm using SortedMapBackedCache for my child entities. When I use it I'm
getting an OutOfMemory exception and the records are not getting indexed. I've
increased my heap size to 3GB, but still the same result. Is there a way to
configure it to index 1L records and clear the cache
Hi,
I am getting OOM when faceting on numFound=28. The receiving Solr node throws
the OutOfMemoryError even though there is 7GB of available heap before the
faceting request was submitted. If a different Solr node is selected, that one
fails too. Any suggestions?
1) Test setup is:-
100
Mohsin Beg Beg [mohsin@oracle.com] wrote:
I am getting OOM when faceting on numFound=28. The receiving
solr node throws the OutOfMemoryError even though there is 7gb
available heap before the faceting request was submitted.
fc and fcs faceting memory overhead is (nearly) independent of the
@lucene.apache.org
Sent: Tuesday, November 18, 2014 12:34:08 PM GMT -08:00 US/Canada Pacific
Subject: RE: OutOfMemory on 28 docs with facet.method=fc/fcs
On 11/18/2014 3:06 PM, Mohsin Beg Beg wrote:
Looking at SimpleFacets.java, doesn't fc/fcs iterate only over the DocSet for
the fields? So assuming each field has a unique term across the 28 rows, a
max of 28 * 15 unique small strings (100 bytes) should be on the order of
1MB. For 100
Mohsin Beg Beg [mohsin@oracle.com] wrote:
Looking at SimpleFacets.java, doesn't fc/fcs iterate only over the DocSet for
the fields.
To get the seed for the concrete faceting resolving, yes. That still leaves the
mapping and the counting structures.
So assuming each field has a unique
@lucene.apache.org
Sent: Tuesday, November 18, 2014 2:45:46 PM GMT -08:00 US/Canada Pacific
Subject: Re: OutOfMemory on 28 docs with facet.method=fc/fcs
-1000, 1000-2000, 2000-3000)
and sending each chunk to a worker in a pool of multi-threaded workers.
This worked well for us with a single server. However upon upgrading to
solr cloud, we've found that this quickly (within the first 4 or 5
requests) causes an OutOfMemory error on the coordinating node that
receives the query. I don't fully understand what's going on here, but it
looks like the coordinating node receives the query and sends it to the
shard requested. For example, given:
shards=shard3&sort=id+asc&start=4000&q=*:*&rows=1000
The coordinating
On Mon, Mar 17, 2014 at 7:14 PM, Greg Pendlebury
greg.pendleb...@gmail.com wrote:
My suspicion is that it won't work in parallel
Deep paging with cursorMark does work with distributed search
(assuming that's what you meant by parallel... querying sub-shards
in parallel?).
-Yonik
Sorry, I meant one thread requesting records 1 - 1000, whilst the next
thread requests 1001 - 2000 from the same ordered result set. We've
observed several of our customers trying to harvest our data with
multi-threaded scripts that work like this. I thought it would not work
using cursor marks...
Greg and I are talking about the same type of parallel.
We do the same thing - if I know there are 10,000 results, we can chunk
that up across multiple worker threads up front without having to page
through the results. We know there are 10 chunks of 1,000, so we can have
one thread process
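The up-front chunking described here is just arithmetic over start/rows offsets; a self-contained sketch (SolrJ calls omitted; each worker would issue its own query with these parameters, keeping in mind the earlier caveat that large start offsets are exactly what strains a SolrCloud coordinating node):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkPlanner {
    // One worker's slice of the ordered result set: a start offset and row count.
    static final class Chunk {
        final int start, rows;
        Chunk(int start, int rows) { this.start = start; this.rows = rows; }
    }

    // Split numFound results into fixed-size pages that workers can fetch
    // independently via start/rows parameters.
    static List<Chunk> plan(int numFound, int pageSize) {
        List<Chunk> chunks = new ArrayList<>();
        for (int start = 0; start < numFound; start += pageSize) {
            chunks.add(new Chunk(start, Math.min(pageSize, numFound - start)));
        }
        return chunks;
    }

    public static void main(String[] args) {
        System.out.println(plan(10_000, 1_000).size()); // 10 chunks of 1,000 rows each
    }
}
```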
Hi everyone,
My SolrCloud cluster (4.3.0) came into production a few days ago.
Docs are being indexed into Solr using /update requestHandler, as a POST
request, containing text/xml content-type.
The collection is sharded into 36 pieces, each shard has two replicas.
There are 36 nodes (each
Hi,
Try running jstat to see if the heap is full. 4GB is not much and could
easily be eaten by structures used for sorting, faceting, and caching.
Plug: SPM has a new feature that lets you send graphs with various metrics
to the Solr mailing list. I'd personally look at the GC graphs to see if GC
View this message in context:
http://lucene.472066.n3.nabble.com/Spatial-Dataimport-full-import-results-in-OutOfMemory-for-a-rectangle-defining-a-line-tp4034928p4035372.html
Sent from the Solr - User mailing list archive at Nabble.com.
rollback
INFO: end_rollback
if the doc exists or not.
We use the functionality in SolrJ to delete a list of ids.
The error always occurs during this deletion.
Environment (build 1.6.0_33-b03) Java HotSpot(TM) 64-Bit
Server VM (build 20.8-b03, mixed mode)
@lucene.apache.org
Subject: Re: java.io.IOException: Map failed :: OutOfMemory
today the same exception:
INFO: [] webapp=/solr path=/update
params={waitSearcher=true&commit=true&wt=javabin&waitFlush=true&version=2}
status=0 QTime=1009
Nov 13, 2012 2:02:27 PM org.apache.solr.core.SolrDeletionPolicy
, 2012 5:16:41 PM org.apache.solr.update.SolrIndexWriter finalize
SEVERE: SolrIndexWriter was not closed prior to finalize(), indicates a bug
-- POSSIBLE RESOURCE LEAK!!!
On Wed, Jul 11, 2012 at 4:05 AM, Bruno Mannina bmann...@free.fr wrote:
Hi, some news this morning...
I added the -Xms1024m option and now it works?! No OutOfMemory ?!
java -jar -Xms1024m -Xmx2048m start.jar
On 11/07/2012 09:55, Bruno Mannina wrote:
Hi Yury,
Thanks for your answer.
OK for increasing memory, but I have a problem with that:
I have 8GB on my computer but the JVM accepts only 2GB max
Dear Solr Users,
Each time I try to do a request with sort=pubdate+desc
I get:
GRAVE: java.lang.OutOfMemoryError: Java heap space
I use Solr3.6, I have around 80M docs and my request gets around 160
results.
Actually for my test, I use jetty
java -jar -Xmx2g start.jar
PS: If I write
To complete my question:
after getting this error, some fields (not all) are no longer reachable, with
the same error.
On 10/07/2012 14:25, Bruno Mannina wrote:
Sorting is a memory-intensive operation indeed.
Not sure what you are asking, but it may very well be that your
only option is to give JVM more memory.
On 7/10/2012 8:25 AM, Bruno Mannina wrote:
Good morning!
Recently we slipped into an OOME by optimizing our index. It looks like
it's related to the nio class and memory handling.
I'll try to describe the environment, the error, and what we did to solve
the problem. Nevertheless, none of our approaches was successful.
The
Are you sure you are using a 64 bit JVM?
Are you sure you really changed your vmem limit to unlimited? That
should have resolved the OOME from mmap.
Or: can you run cat /proc/sys/vm/max_map_count? This is a limit on
the total number of maps in a single process, that Linux imposes. But
the
Dear Mike,
thanks for your reply.
Just a couple of minutes ago we found a solution, or, to be honest, found
where we went wrong.
Our failure was the use of ulimit. We missed that ulimit sets the vmem
for each shell separately. So we set 'ulimit -v unlimited' in one shell,
thinking that we'd done
OK, excellent. Thanks for bringing closure,
Mike McCandless
http://blog.mikemccandless.com
On Thu, Sep 22, 2011 at 9:00 AM, Ralf Matulat ralf.matu...@bundestag.de wrote:
Michael,
What is the best central place on an rpm-based distro (CentOS 6 in my
case) to raise the vmem limit for specific user(s), assuming it's not
already correct? I'm using /etc/security/limits.conf to raise the open
file limit for the user that runs Solr:
ncindex hard nofile
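A fuller sketch of such limits.conf entries (user name and values are illustrative; note that pam_limits applies these to login sessions, so a Solr started by an init system may need the limits set in its service configuration instead):

```
# /etc/security/limits.conf: illustrative entries for the user running Solr
ncindex  hard  nofile  65535
ncindex  soft  nofile  65535
ncindex  hard  as      unlimited   # 'as' is the address-space (vmem) item
ncindex  soft  as      unlimited
```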
Unfortunately I really don't know ;) Every time I set forth to figure
things like this out I seem to learn some new way...
Maybe someone else knows?
Mike McCandless
http://blog.mikemccandless.com
On Thu, Sep 22, 2011 at 2:15 PM, Shawn Heisey s...@elyograg.org wrote:
Michael,
What is the
: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap
getting collected?
The second commit will bring in all changes, from both syncs.
Think of the sync part as a glorified rsync of files on disk. So the
files will have been copied to disk, but the in memory index on the
slave will not have
On 12/14/2010 9:02 AM, Jonathan Rochkind wrote:
1. Will the existing index searcher have problems because the files
have been changed out from under it?
2. Will a future replication -- at which NO new files are available on
master -- still trigger a future commit on slave?
I'm not really
Thanks Shawn, that helps explain things.
So the issue there, with using maxSearchWarmers to try and prevent out
of control RAM/CPU usage from over-lapping on-deck, combined with
replication... is if you're still pulling down replications very
frequently but using maxSearchWarmers to prevent
Thanks for the response.
The date types are defined in our schema file like this
<fieldType name="date" class="solr.TrieDateField" omitNorms="true"
precisionStep="0" positionIncrementGap="0"/>
<!-- A Trie based date field for faster date range queries and date
faceting. -->
<fieldType name="tdate"
Forgive me if I've said this in this thread already, but I'm beginning
to think this is the main 'mysterious' cause of Solr RAM/gc issues.
Are you committing very frequently? So frequently that you commit
faster than it takes for warming operations on a new Solr index to
complete, and you're
Wow, you read my mind. We are committing very frequently. We are trying to
get as close to realtime access to the stuff we put in as possible. Our
current commit time is... ahem every 4 seconds.
Is that insane?
I'll try the ConcMarkSweep as well and see if that helps.
On Mon, Dec 13,
On Mon, Dec 13, 2010 at 8:47 PM, John Russell jjruss...@gmail.com wrote:
Wow, you read my mind. We are committing very frequently. We are trying to
get as close to realtime access to the stuff we put in as possible. Our
current commit time is... ahem every 4 seconds.
Is that insane?
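In Solr versions with near-real-time support, the usual remedy for commit-every-few-seconds clients is to stop committing from the client and let the server do it on its own schedule; an illustrative solrconfig.xml fragment (the values are examples, not recommendations from this thread):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flushes to disk for durability, doesn't open a new searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: makes documents visible to searches without the full cost -->
  <autoSoftCommit>
    <maxTime>4000</maxTime>
  </autoSoftCommit>
</updateHandler>
```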