...@lucidimagination.com]
Sent: Monday, December 13, 2010 10:41 PM
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory GC: GC overhead limit exceeded - Why isn't WeakHashMap getting collected?

On Mon, Dec 13, 2010 at 9:27 PM, Jonathan Rochkind rochk...@jhu.edu wrote:
Yonik, how
The second commit will bring in all changes, from both syncs.
Think of the sync part as a glorified rsync of files on disk. So the
files will have been copied to disk, but the in memory
On 12/14/2010 9:02 AM, Jonathan Rochkind wrote:
1. Will the existing index searcher have problems because the files
have been changed out from under it?
2. Will a future replication -- at which NO new files are available on
master -- still trigger a future commit on slave?
I'm not really
Thanks Shawn, that helps explain things.
So the issue there, with using maxWarmingSearchers to try to prevent out-of-control RAM/CPU usage from overlapping on-deck searchers, combined with replication... is if you're still pulling down replications very frequently but using maxWarmingSearchers to prevent
Thanks for the response.
The date types are defined in our schema file like this:

<fieldType name="date" class="solr.TrieDateField" omitNorms="true"
           precisionStep="0" positionIncrementGap="0"/>
<!-- A Trie based date field for faster date range queries and date faceting. -->
<fieldType name="tdate"
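For comparison only (the tdate definition above is cut off, so this is a guess at a typical shape, not your actual schema): the stock Solr example schema of that era defines the trie date variant roughly like this, with a nonzero precisionStep to speed up range queries — check your own schema.xml for the real values:

```xml
<!-- Hypothetical/typical tdate definition; precisionStep may differ in your schema -->
<fieldType name="tdate" class="solr.TrieDateField" omitNorms="true"
           precisionStep="6" positionIncrementGap="0"/>
```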
Forgive me if I've said this in this thread already, but I'm beginning
to think this is the main 'mysterious' cause of Solr RAM/gc issues.
Are you committing very frequently? So frequently that you commit
faster than it takes for warming operations on a new Solr index to
complete, and you're
Wow, you read my mind. We are committing very frequently. We are trying to
get as close to realtime access to the stuff we put in as possible. Our
current commit time is... ahem every 4 seconds.
Is that insane?
I'll try the ConcMarkSweep as well and see if that helps.
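If near-realtime visibility is the goal, one common alternative to having the client commit every few seconds is letting Solr batch commits itself via autoCommit in solrconfig.xml. A sketch, with illustrative (not recommended) thresholds:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Illustrative values: commit after 10,000 pending docs
       or 60 seconds, whichever comes first -->
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>60000</maxTime> <!-- milliseconds -->
  </autoCommit>
</updateHandler>
```

This still won't make a 4-second commit interval cheap — each commit opens and warms a new searcher — but it centralizes the policy in one place instead of in every client.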
On 12/13/2010 3:38 PM, Jonathan Rochkind wrote:
But if the problem really is just GC issues and not actually too much
RAM being used, try this JVM setting:
-XX:+UseConcMarkSweepGC
That's what I use on my shards; I've never had any visible problems with
memory or garbage collection delays. I
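For concreteness, that flag goes on the JVM command line that starts the servlet container — for example, with the Jetty that ships with Solr (heap size and paths here are illustrative, not recommendations):

```shell
# Example invocation from the Solr example directory; adjust heap to your box
java -Xmx2g -XX:+UseConcMarkSweepGC -jar start.jar
```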
On Mon, Dec 13, 2010 at 9:27 PM, Jonathan Rochkind rochk...@jhu.edu wrote:
Yonik, how will maxWarmingSearchers in this scenario affect replication? If
a slave is pulling down new indexes so quickly that the warming searchers
would ordinarily pile up, but maxWarmingSearchers is set to 1
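For reference, the setting being discussed lives in solrconfig.xml. A minimal sketch — when the limit would be exceeded, Solr refuses to open another warming searcher rather than letting them stack up:

```xml
<!-- Cap concurrent warming searchers; a commit that would exceed the cap
     gets an error instead of adding another warming searcher -->
<maxWarmingSearchers>1</maxWarmingSearchers>
```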
unfortunately I can't check the statistics page. For some reason the solr
webapp itself is only returning a directory listing.
This is very weird and makes me wonder if there's something really wonky
with your system. I'm assuming when you say "the solr webapp itself" you're
talking about
Hi John,
WeakReferences allow objects to be GC'd when there are no other (strong)
references to the object referred to.
My understanding is that WeakHashMap uses weak references for the keys
in the map.
What this means is that the keys in the map can be GC'd once there
are no other references to the
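A minimal sketch of that behavior (names are illustrative; note that System.gc() is only a hint, so the "after" size is not guaranteed):

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakKeyDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> map = new WeakHashMap<>();
        Object key = new Object();   // strong reference keeps the entry alive
        map.put(key, "value");
        System.out.println("with strong ref: " + map.size());

        key = null;                  // drop the only strong reference to the key
        System.gc();                 // a hint; collection timing is not guaranteed
        Thread.sleep(100);
        // Entry is typically reclaimed by now, but the JVM makes no hard promise
        System.out.println("after gc hint:   " + map.size());
    }
}
```

The flip side is the point of this thread: if anything else still holds a strong reference to a key, that entry can never be collected — so a WeakHashMap is no protection against a leak when the keys are retained elsewhere.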
Thanks a lot for the response.
Unfortunately I can't check the statistics page. For some reason the solr
webapp itself is only returning a directory listing. This is sometimes
fixed when I restart but if I do that I'll lose the state I have now. I can
get at the JMX interface. Can I check my