Re: unusually high 4.10.2 vs 4.3.1 RAM consumption

2015-03-10 Thread Erick Erickson
Thanks for letting us know!

Erick

On Tue, Mar 10, 2015 at 5:20 AM, Dmitry Kan  wrote:
> For the sake of completeness, just wanted to confirm that these params
> had a positive effect:
>
> -Dsolr.solr.home=cores -Xmx12000m -Djava.awt.headless=true -XX:+UseParNewGC
> -XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC
> -XX:MaxTenuringThreshold=8 -XX:CMSInitiatingOccupancyFraction=40
>
> This freed up a couple dozen GBs on the Solr server!
>
> On Tue, Feb 17, 2015 at 1:47 PM, Dmitry Kan  wrote:
>
>> Thanks Toke!
>>
>> Now I consistently see the saw-tooth pattern on two shards with the new GC
>> parameters; next I will try your suggestion.
>>
>> The current params are:
>>
>> -Xmx25600m -XX:+UseParNewGC -XX:+ExplicitGCInvokesConcurrent
>> -XX:+UseConcMarkSweepGC -XX:MaxTenuringThreshold=8
>> -XX:CMSInitiatingOccupancyFraction=40
>>
>> Dmitry
>>
>> On Tue, Feb 17, 2015 at 1:34 PM, Toke Eskildsen 
>> wrote:
>>
>>> On Tue, 2015-02-17 at 11:05 +0100, Dmitry Kan wrote:
>>> > Solr: 4.10.2 (high load, mass indexing)
>>> > Java: 1.7.0_76 (Oracle)
>>> > -Xmx25600m
>>> >
>>> >
>>> > Solr: 4.3.1 (normal load, no mass indexing)
>>> > Java: 1.7.0_11 (Oracle)
>>> > -Xmx25600m
>>> >
>>> > The RAM consumption remained the same after the load has stopped on the
>>> > 4.10.2 cluster. Manually collecting the memory on a 4.10.2 shard via
>>> > jvisualvm dropped the used RAM from 8,5G to 0,5G. But the reserved RAM as
>>> > seen by top remained at 9G level.
>>>
>>> As the JVM does not free OS memory once allocated, top just shows
>>> whatever peak it reached at some point. When you tell the JVM that it is
>>> free to use 25GB, it makes a lot of sense to allocate a fair chunk of
>>> that instead of garbage collecting if there is a period of high usage
>>> (mass indexing for example).
>>>
>>> > What else could account for such a difference -- Solr or the JVM? Can it
>>> > only be explained by the mass indexing? What is worrisome is that the
>>> > 4.10.2 shard reserves 8x what it uses.
>>>
>>> If you set your Xmx to a lot less, the JVM will probably favour more
>>> frequent garbage collections over extra heap allocation.
>>>
>>> - Toke Eskildsen, State and University Library, Denmark
>>>
>>>
>>>
>>
>>
>> --
>> Dmitry Kan
>> Luke Toolbox: http://github.com/DmitryKey/luke
>> Blog: http://dmitrykan.blogspot.com
>> Twitter: http://twitter.com/dmitrykan
>> SemanticAnalyzer: www.semanticanalyzer.info
>>
>>
>
>
> --
> Dmitry Kan
> Luke Toolbox: http://github.com/DmitryKey/luke
> Blog: http://dmitrykan.blogspot.com
> Twitter: http://twitter.com/dmitrykan
> SemanticAnalyzer: www.semanticanalyzer.info


Re: unusually high 4.10.2 vs 4.3.1 RAM consumption

2015-03-10 Thread Dmitry Kan
For the sake of completeness, just wanted to confirm that these params
had a positive effect:

-Dsolr.solr.home=cores -Xmx12000m -Djava.awt.headless=true -XX:+UseParNewGC
-XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC
-XX:MaxTenuringThreshold=8 -XX:CMSInitiatingOccupancyFraction=40

This freed up a couple dozen GBs on the Solr server!
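
For reference, here is a minimal sketch of passing the same flags at startup.
It assumes the stock Jetty start.jar launch from the Solr 4.x example
directory, which is not stated in this thread, so adjust the path and
solr.solr.home to your own layout; only the JVM flags themselves come from
the settings above:

# hypothetical launch command for a Solr 4.x node under Jetty
java -Dsolr.solr.home=cores -Djava.awt.headless=true \
     -Xmx12000m \
     -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
     -XX:+ExplicitGCInvokesConcurrent \
     -XX:MaxTenuringThreshold=8 \
     -XX:CMSInitiatingOccupancyFraction=40 \
     -jar start.jar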

On Tue, Feb 17, 2015 at 1:47 PM, Dmitry Kan  wrote:

> Thanks Toke!
>
> Now I consistently see the saw-tooth pattern on two shards with the new GC
> parameters; next I will try your suggestion.
>
> The current params are:
>
> -Xmx25600m -XX:+UseParNewGC -XX:+ExplicitGCInvokesConcurrent
> -XX:+UseConcMarkSweepGC -XX:MaxTenuringThreshold=8
> -XX:CMSInitiatingOccupancyFraction=40
>
> Dmitry
>
> On Tue, Feb 17, 2015 at 1:34 PM, Toke Eskildsen 
> wrote:
>
>> On Tue, 2015-02-17 at 11:05 +0100, Dmitry Kan wrote:
>> > Solr: 4.10.2 (high load, mass indexing)
>> > Java: 1.7.0_76 (Oracle)
>> > -Xmx25600m
>> >
>> >
>> > Solr: 4.3.1 (normal load, no mass indexing)
>> > Java: 1.7.0_11 (Oracle)
>> > -Xmx25600m
>> >
>> > The RAM consumption remained the same after the load has stopped on the
>> > 4.10.2 cluster. Manually collecting the memory on a 4.10.2 shard via
>> > jvisualvm dropped the used RAM from 8,5G to 0,5G. But the reserved RAM as
>> > seen by top remained at 9G level.
>>
>> As the JVM does not free OS memory once allocated, top just shows
>> whatever peak it reached at some point. When you tell the JVM that it is
>> free to use 25GB, it makes a lot of sense to allocate a fair chunk of
>> that instead of garbage collecting if there is a period of high usage
>> (mass indexing for example).
>>
>> > What else could account for such a difference -- Solr or the JVM? Can it
>> > only be explained by the mass indexing? What is worrisome is that the
>> > 4.10.2 shard reserves 8x what it uses.
>>
>> If you set your Xmx to a lot less, the JVM will probably favour more
>> frequent garbage collections over extra heap allocation.
>>
>> - Toke Eskildsen, State and University Library, Denmark
>>
>>
>>
>
>
> --
> Dmitry Kan
> Luke Toolbox: http://github.com/DmitryKey/luke
> Blog: http://dmitrykan.blogspot.com
> Twitter: http://twitter.com/dmitrykan
> SemanticAnalyzer: www.semanticanalyzer.info
>
>


-- 
Dmitry Kan
Luke Toolbox: http://github.com/DmitryKey/luke
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
SemanticAnalyzer: www.semanticanalyzer.info


Re: unusually high 4.10.2 vs 4.3.1 RAM consumption

2015-02-17 Thread Dmitry Kan
Thanks Toke!

Now I consistently see the saw-tooth pattern on two shards with the new GC
parameters; next I will try your suggestion.

The current params are:

-Xmx25600m -XX:+UseParNewGC -XX:+ExplicitGCInvokesConcurrent
-XX:+UseConcMarkSweepGC -XX:MaxTenuringThreshold=8
-XX:CMSInitiatingOccupancyFraction=40

Dmitry

On Tue, Feb 17, 2015 at 1:34 PM, Toke Eskildsen 
wrote:

> On Tue, 2015-02-17 at 11:05 +0100, Dmitry Kan wrote:
> > Solr: 4.10.2 (high load, mass indexing)
> > Java: 1.7.0_76 (Oracle)
> > -Xmx25600m
> >
> >
> > Solr: 4.3.1 (normal load, no mass indexing)
> > Java: 1.7.0_11 (Oracle)
> > -Xmx25600m
> >
> > The RAM consumption remained the same after the load has stopped on the
> > 4.10.2 cluster. Manually collecting the memory on a 4.10.2 shard via
> > jvisualvm dropped the used RAM from 8,5G to 0,5G. But the reserved RAM as
> > seen by top remained at 9G level.
>
> As the JVM does not free OS memory once allocated, top just shows
> whatever peak it reached at some point. When you tell the JVM that it is
> free to use 25GB, it makes a lot of sense to allocate a fair chunk of
> that instead of garbage collecting if there is a period of high usage
> (mass indexing for example).
>
> > What else could account for such a difference -- Solr or the JVM? Can it
> > only be explained by the mass indexing? What is worrisome is that the
> > 4.10.2 shard reserves 8x what it uses.
>
> If you set your Xmx to a lot less, the JVM will probably favour more
> frequent garbage collections over extra heap allocation.
>
> - Toke Eskildsen, State and University Library, Denmark
>
>
>


-- 
Dmitry Kan
Luke Toolbox: http://github.com/DmitryKey/luke
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
SemanticAnalyzer: www.semanticanalyzer.info


Re: unusually high 4.10.2 vs 4.3.1 RAM consumption

2015-02-17 Thread Toke Eskildsen
On Tue, 2015-02-17 at 11:05 +0100, Dmitry Kan wrote:
> Solr: 4.10.2 (high load, mass indexing)
> Java: 1.7.0_76 (Oracle)
> -Xmx25600m
> 
> 
> Solr: 4.3.1 (normal load, no mass indexing)
> Java: 1.7.0_11 (Oracle)
> -Xmx25600m
> 
> The RAM consumption remained the same after the load has stopped on the
> 4.10.2 cluster. Manually collecting the memory on a 4.10.2 shard via
> jvisualvm dropped the used RAM from 8,5G to 0,5G. But the reserved RAM as
> seen by top remained at 9G level.

As the JVM does not free OS memory once allocated, top just shows
whatever peak it reached at some point. When you tell the JVM that it is
free to use 25GB, it makes a lot of sense to allocate a fair chunk of
that instead of garbage collecting if there is a period of high usage
(mass indexing for example). 
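
To see that gap concretely, you can compare the heap the JVM is actually
using with the resident memory the OS reports. A sketch using standard
JDK/Linux tools, where <pid> is the Solr JVM's process id and the column
names assume the JDK 7 jstat:

# heap actually in use inside the JVM: sum of the S0U, S1U, EU and OU columns (KB)
jstat -gc <pid> 5000
# memory the OS has handed to the process and keeps reserved (RES/RSS)
top -b -n 1 -p <pid> | tail -1
# or: ps -o rss= -p <pid>

The RSS figure will usually sit at the peak heap footprint even while jstat
shows most of the heap empty.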

> What else could account for such a difference -- Solr or the JVM? Can it
> only be explained by the mass indexing? What is worrisome is that the
> 4.10.2 shard reserves 8x what it uses.

If you set your Xmx to a lot less, the JVM will probably favour more
frequent garbage collections over extra heap allocation.

- Toke Eskildsen, State and University Library, Denmark




Re: unusually high 4.10.2 vs 4.3.1 RAM consumption

2015-02-17 Thread Dmitry Kan
;) ok. Currently I'm trying the parallel GC options mentioned here:
http://comments.gmane.org/gmane.comp.jakarta.lucene.solr.user/101377

At least the saw-tooth RAM chart is starting to shape up.

On Tue, Feb 17, 2015 at 12:55 PM, Markus Jelsma 
wrote:

> I would have shared it if I had one :)
>
> -Original message-
> > From:Dmitry Kan 
> > Sent: Tuesday 17th February 2015 11:40
> > To: solr-user@lucene.apache.org
> > Subject: Re: unusually high 4.10.2 vs 4.3.1 RAM consumption
> >
> > Have you found an explanation for that?
> >
> > > On Tue, Feb 17, 2015 at 12:12 PM, Markus Jelsma <markus.jel...@openindex.io>
> > > wrote:
> >
> > > We have seen an increase between 4.8.1 and 4.10.
> > >
> > > -Original message-
> > > > From:Dmitry Kan 
> > > > Sent: Tuesday 17th February 2015 11:06
> > > > To: solr-user@lucene.apache.org
> > > > Subject: unusually high 4.10.2 vs 4.3.1 RAM consumption
> > > >
> > > > Hi,
> > > >
> > > > We are currently comparing the RAM consumption of two parallel Solr
> > > > clusters with different solr versions: 4.10.2 and 4.3.1.
> > > >
> > > > For comparable index sizes of a shard (20G and 26G), we observed 9G vs
> > > > 5.6G RAM footprint (reserved RAM as seen by top), 4.3.1 being the winner.
> > > >
> > > > We have not changed the solrconfig.xml to upgrade to 4.10.2 and have
> > > > reindexed data from scratch. The commits are all controlled on the
> > > > client, i.e. not auto-commits.
> > > >
> > > > Solr: 4.10.2 (high load, mass indexing)
> > > > Java: 1.7.0_76 (Oracle)
> > > > -Xmx25600m
> > > >
> > > >
> > > > Solr: 4.3.1 (normal load, no mass indexing)
> > > > Java: 1.7.0_11 (Oracle)
> > > > -Xmx25600m
> > > >
> > > > The RAM consumption remained the same after the load has stopped on the
> > > > 4.10.2 cluster. Manually collecting the memory on a 4.10.2 shard via
> > > > jvisualvm dropped the used RAM from 8,5G to 0,5G. But the reserved RAM as
> > > > seen by top remained at 9G level.
> > > >
> > > > This unusual spike happened during mass data indexing.
> > > >
> > > > What else could account for such a difference -- Solr or the JVM? Can it
> > > > only be explained by the mass indexing? What is worrisome is that the
> > > > 4.10.2 shard reserves 8x what it uses.
> > > >
> > > > What can be done about this?
> > > >
> > > > --
> > > > Dmitry Kan
> > > > Luke Toolbox: http://github.com/DmitryKey/luke
> > > > Blog: http://dmitrykan.blogspot.com
> > > > Twitter: http://twitter.com/dmitrykan
> > > > SemanticAnalyzer: www.semanticanalyzer.info
> > > >
> > >
> >
> >
> >
> > --
> > Dmitry Kan
> > Luke Toolbox: http://github.com/DmitryKey/luke
> > Blog: http://dmitrykan.blogspot.com
> > Twitter: http://twitter.com/dmitrykan
> > SemanticAnalyzer: www.semanticanalyzer.info
> >
>



-- 
Dmitry Kan
Luke Toolbox: http://github.com/DmitryKey/luke
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
SemanticAnalyzer: www.semanticanalyzer.info


RE: unusually high 4.10.2 vs 4.3.1 RAM consumption

2015-02-17 Thread Markus Jelsma
I would have shared it if I had one :)
 
-Original message-
> From:Dmitry Kan 
> Sent: Tuesday 17th February 2015 11:40
> To: solr-user@lucene.apache.org
> Subject: Re: unusually high 4.10.2 vs 4.3.1 RAM consumption
> 
> Have you found an explanation for that?
> 
> On Tue, Feb 17, 2015 at 12:12 PM, Markus Jelsma 
> wrote:
> 
> > We have seen an increase between 4.8.1 and 4.10.
> >
> > -Original message-
> > > From:Dmitry Kan 
> > > Sent: Tuesday 17th February 2015 11:06
> > > To: solr-user@lucene.apache.org
> > > Subject: unusually high 4.10.2 vs 4.3.1 RAM consumption
> > >
> > > Hi,
> > >
> > > We are currently comparing the RAM consumption of two parallel Solr
> > > clusters with different solr versions: 4.10.2 and 4.3.1.
> > >
> > > For comparable index sizes of a shard (20G and 26G), we observed 9G vs
> > > 5.6G RAM footprint (reserved RAM as seen by top), 4.3.1 being the winner.
> > >
> > > We have not changed the solrconfig.xml to upgrade to 4.10.2 and have
> > > reindexed data from scratch. The commits are all controlled on the
> > > client, i.e. not auto-commits.
> > >
> > > Solr: 4.10.2 (high load, mass indexing)
> > > Java: 1.7.0_76 (Oracle)
> > > -Xmx25600m
> > >
> > >
> > > Solr: 4.3.1 (normal load, no mass indexing)
> > > Java: 1.7.0_11 (Oracle)
> > > -Xmx25600m
> > >
> > > The RAM consumption remained the same after the load has stopped on the
> > > 4.10.2 cluster. Manually collecting the memory on a 4.10.2 shard via
> > > jvisualvm dropped the used RAM from 8,5G to 0,5G. But the reserved RAM as
> > > seen by top remained at 9G level.
> > >
> > > This unusual spike happened during mass data indexing.
> > >
> > > What else could account for such a difference -- Solr or the JVM? Can it
> > > only be explained by the mass indexing? What is worrisome is that the
> > > 4.10.2 shard reserves 8x what it uses.
> > >
> > > What can be done about this?
> > >
> > > --
> > > Dmitry Kan
> > > Luke Toolbox: http://github.com/DmitryKey/luke
> > > Blog: http://dmitrykan.blogspot.com
> > > Twitter: http://twitter.com/dmitrykan
> > > SemanticAnalyzer: www.semanticanalyzer.info
> > >
> >
> 
> 
> 
> -- 
> Dmitry Kan
> Luke Toolbox: http://github.com/DmitryKey/luke
> Blog: http://dmitrykan.blogspot.com
> Twitter: http://twitter.com/dmitrykan
> SemanticAnalyzer: www.semanticanalyzer.info
> 


Re: unusually high 4.10.2 vs 4.3.1 RAM consumption

2015-02-17 Thread Dmitry Kan
Have you found an explanation for that?

On Tue, Feb 17, 2015 at 12:12 PM, Markus Jelsma 
wrote:

> We have seen an increase between 4.8.1 and 4.10.
>
> -Original message-
> > From:Dmitry Kan 
> > Sent: Tuesday 17th February 2015 11:06
> > To: solr-user@lucene.apache.org
> > Subject: unusually high 4.10.2 vs 4.3.1 RAM consumption
> >
> > Hi,
> >
> > We are currently comparing the RAM consumption of two parallel Solr
> > clusters with different solr versions: 4.10.2 and 4.3.1.
> >
> > For comparable index sizes of a shard (20G and 26G), we observed 9G vs
> > 5.6G RAM footprint (reserved RAM as seen by top), 4.3.1 being the winner.
> >
> > We have not changed the solrconfig.xml to upgrade to 4.10.2 and have
> > reindexed data from scratch. The commits are all controlled on the
> > client, i.e. not auto-commits.
> >
> > Solr: 4.10.2 (high load, mass indexing)
> > Java: 1.7.0_76 (Oracle)
> > -Xmx25600m
> >
> >
> > Solr: 4.3.1 (normal load, no mass indexing)
> > Java: 1.7.0_11 (Oracle)
> > -Xmx25600m
> >
> > The RAM consumption remained the same after the load has stopped on the
> > 4.10.2 cluster. Manually collecting the memory on a 4.10.2 shard via
> > jvisualvm dropped the used RAM from 8,5G to 0,5G. But the reserved RAM as
> > seen by top remained at 9G level.
> >
> > This unusual spike happened during mass data indexing.
> >
> > What else could account for such a difference -- Solr or the JVM? Can it
> > only be explained by the mass indexing? What is worrisome is that the
> > 4.10.2 shard reserves 8x what it uses.
> >
> > What can be done about this?
> >
> > --
> > Dmitry Kan
> > Luke Toolbox: http://github.com/DmitryKey/luke
> > Blog: http://dmitrykan.blogspot.com
> > Twitter: http://twitter.com/dmitrykan
> > SemanticAnalyzer: www.semanticanalyzer.info
> >
>



-- 
Dmitry Kan
Luke Toolbox: http://github.com/DmitryKey/luke
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
SemanticAnalyzer: www.semanticanalyzer.info


RE: unusually high 4.10.2 vs 4.3.1 RAM consumption

2015-02-17 Thread Markus Jelsma
We have seen an increase between 4.8.1 and 4.10. 
 
-Original message-
> From:Dmitry Kan 
> Sent: Tuesday 17th February 2015 11:06
> To: solr-user@lucene.apache.org
> Subject: unusually high 4.10.2 vs 4.3.1 RAM consumption
> 
> Hi,
> 
> We are currently comparing the RAM consumption of two parallel Solr
> clusters with different solr versions: 4.10.2 and 4.3.1.
> 
> For comparable index sizes of a shard (20G and 26G), we observed 9G vs 5.6G
> RAM footprint (reserved RAM as seen by top), 4.3.1 being the winner.
> 
> We have not changed the solrconfig.xml to upgrade to 4.10.2 and have
> reindexed data from scratch. The commits are all controlled on the client,
> i.e. not auto-commits.
> 
> Solr: 4.10.2 (high load, mass indexing)
> Java: 1.7.0_76 (Oracle)
> -Xmx25600m
> 
> 
> Solr: 4.3.1 (normal load, no mass indexing)
> Java: 1.7.0_11 (Oracle)
> -Xmx25600m
> 
> The RAM consumption remained the same after the load has stopped on the
> 4.10.2 cluster. Manually collecting the memory on a 4.10.2 shard via
> jvisualvm dropped the used RAM from 8,5G to 0,5G. But the reserved RAM as
> seen by top remained at 9G level.
> 
> This unusual spike happened during mass data indexing.
> 
> What else could account for such a difference -- Solr or the JVM? Can it
> only be explained by the mass indexing? What is worrisome is that the
> 4.10.2 shard reserves 8x what it uses.
> 
> What can be done about this?
> 
> -- 
> Dmitry Kan
> Luke Toolbox: http://github.com/DmitryKey/luke
> Blog: http://dmitrykan.blogspot.com
> Twitter: http://twitter.com/dmitrykan
> SemanticAnalyzer: www.semanticanalyzer.info
> 


unusually high 4.10.2 vs 4.3.1 RAM consumption

2015-02-17 Thread Dmitry Kan
Hi,

We are currently comparing the RAM consumption of two parallel Solr
clusters with different solr versions: 4.10.2 and 4.3.1.

For comparable index sizes of a shard (20G and 26G), we observed 9G vs 5.6G
RAM footprint (reserved RAM as seen by top), 4.3.1 being the winner.

We have not changed the solrconfig.xml to upgrade to 4.10.2 and have
reindexed data from scratch. The commits are all controlled on the client,
i.e. not auto-commits.

Solr: 4.10.2 (high load, mass indexing)
Java: 1.7.0_76 (Oracle)
-Xmx25600m


Solr: 4.3.1 (normal load, no mass indexing)
Java: 1.7.0_11 (Oracle)
-Xmx25600m

The RAM consumption remained the same after the load has stopped on the
4.10.2 cluster. Manually collecting the memory on a 4.10.2 shard via
jvisualvm dropped the used RAM from 8,5G to 0,5G. But the reserved RAM as
seen by top remained at 9G level.
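
As a side note, the same manual collection can be triggered without the GUI.
A sketch using standard JDK 7 command-line tools, where <pid> is the shard's
JVM process id (this is roughly what the jvisualvm "Perform GC" button does):

# request an explicit full GC from outside the process
jcmd <pid> GC.run
# a live-object histogram also forces a full GC as a side effect
jmap -histo:live <pid> | head
# the RES column in top will typically stay at its earlier peak afterwards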

This unusual spike happened during mass data indexing.

What else could account for such a difference -- Solr or the JVM? Can it
only be explained by the mass indexing? What is worrisome is that the
4.10.2 shard reserves 8x what it uses.

What can be done about this?

-- 
Dmitry Kan
Luke Toolbox: http://github.com/DmitryKey/luke
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
SemanticAnalyzer: www.semanticanalyzer.info