Thanks for the replies, Shawn and Erick.

What *exactly* are you looking at that says Solr is using all your
memory?  You must be extremely specific when answering this question.
This will determine whether we should be looking for a bug or not.

A) Before I start the optimization, the server's memory usage is
consistent at around 16GB, after Solr starts up and we do some searching.
However, when I click on the optimize button, the memory usage increases
gradually until it reaches the server's maximum of 64GB. This only
happens to the collection with the 200GB index, and not to the other
collections, which have much smaller indexes (at most 1GB at the moment).


In another message thread, you indicated that your max heap was set to
14GB.  Java will only ever use that much memory for the program that is
being run, plus a relatively small amount so that Java itself can
operate.  Any significantly large resident memory allocation beyond the
max heap would be an indication of a bug in Java, not a bug in Solr.

A) I am quite curious about this as well, because in the server's Task
Manager the stated amount of memory used does not tally with the
percentage of memory in use. When I start the optimization, Task Manager
shows the JVM using only 14GB, yet the percentage of memory in use is
almost 100% of the 64GB of RAM. I have checked the other processes
running on the server and did not find any that take up a large amount
of memory; the total memory usage across all processes is only around
16GB.
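As a cross-check on Task Manager, I can also list the per-process
working sets from the command line (assuming Solr is running as
java.exe on this Windows server):

    tasklist /FI "IMAGENAME eq java.exe"

If the java.exe working set is around 14GB and no other process is
large, then the remaining "used" memory is presumably the OS caching
the index files, which would match Shawn's explanation below.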


Regards,
Edwin


On 3 January 2016 at 01:24, Erick Erickson <erickerick...@gmail.com> wrote:

> If you happen to be looking at "top" or the like, you
> might be seeing virtual memory, see Uwe's
> excellent blog here:
> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
>
> Best,
> Erick
>
> On Fri, Jan 1, 2016 at 11:46 PM, Shawn Heisey <apa...@elyograg.org> wrote:
> > On 12/31/2015 8:03 PM, Zheng Lin Edwin Yeo wrote:
> > >> But the problem I'm facing now is that during optimizing, the memory
> > >> usage of the server hit the maximum of 64GB, and I believe the
> > >> optimization could not be completed fully as there is not enough
> > >> memory, so when I check the index again, it says that it is not
> > >> optimized. Before the optimization, the memory usage was less than
> > >> 16GB, so the optimization actually uses up more than 48GB of memory.
> > >>
> > >> Is it normal for an index size of 200GB to use up so much memory
> > >> during optimization?
> >
> > What *exactly* are you looking at that says Solr is using all your
> > memory?  You must be extremely specific when answering this question.
> > This will determine whether we should be looking for a bug or not.
> >
> > It is completely normal for all modern operating systems to use all the
> > memory when the amount of data being handled is large.  Some of the
> > memory will be allocated to programs like Java/Solr, and the operating
> > system will use everything else to cache data from I/O operations on the
> > disk.  This is called the page cache.  For Solr to perform well, the
> > page cache must be large enough to effectively cache your index data.
> >
> > https://en.wikipedia.org/wiki/Page_cache
> >
> > In another message thread, you indicated that your max heap was set to
> > 14GB.  Java will only ever use that much memory for the program that is
> > being run, plus a relatively small amount so that Java itself can
> > operate.  Any significantly large resident memory allocation beyond the
> > max heap would be an indication of a bug in Java, not a bug in Solr.
> >
> > With the index size at 200GB, I would hope to have at least 128GB of
> > memory in the server, but I would *want* 256GB.  64GB may not be enough
> > for good performance.
> >
> > Thanks,
> > Shawn
> >
>
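
Thanks for the pointer to Uwe's post, Erick. If I read it correctly, a
large "memory usage" figure can simply be the 200GB of index files
mapped into the process's virtual address space. One thing I will
verify is that the collection is using the stock directory factory; in
the default solrconfig.xml it looks like this:

    <directoryFactory name="DirectoryFactory"
                      class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

On a 64-bit JVM that ends up wrapping MMapDirectory, so the mapped
index space shows up in virtual size without necessarily consuming
physical RAM.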
