On 3/5/2016 11:44 PM, YouPeng Yang wrote:
> We have been using SolrCloud 4.6 for our production search service for
> about two years. The cluster now holds 700GB of index data spread across
> 3 machines with SSDs. At the beginning everything went well, but as more
> and more business services came to rely on our search service, we have
> been haunted by a nightmare of a problem: CPU sys usage often climbs to
> 10% or even higher, and the machine eventually hangs because system
> resources are drained. We have to restart the machine manually.

One of the most common reasons for performance issues with Solr is not
having enough system memory to effectively cache the index.  Another is
running with a heap that's too small, or a heap that's really large with
ineffective garbage collection tuning.  All of these problems get worse
as query rate climbs.
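
As an illustration only (the right numbers depend entirely on your index,
query mix, and hardware), a Solr 4.x node started with Jetty's start.jar
might pin the heap to a fixed moderate size and use explicitly tuned CMS
collection instead of the JVM defaults:

  java -Xms8g -Xmx8g \
       -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
       -XX:CMSInitiatingOccupancyFraction=70 \
       -XX:+UseCMSInitiatingOccupancyOnly \
       -XX:+ParallelRefProcEnabled \
       -jar start.jar

Setting -Xms equal to -Xmx avoids heap resizing, and keeping the heap no
larger than Solr actually needs leaves the rest of the machine's RAM free
for the OS disk cache.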

Running on SSD can reduce, but not eliminate, the requirement for plenty
of system memory.

With 700GB of index data, you are likely to need somewhere between 128GB
and 512GB of memory for good performance.  If the query rate is high,
then requirements are more likely to land in the upper end of that
range.  There's no way for me to narrow that range down -- it depends on
a number of factors, and usually has to be determined through trial and
error.  If the data were on regular disks instead of SSD, I would be
recommending even more memory.
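
To see where a node actually stands, it can help to compare the size of
the index data on disk with how much memory the OS is able to devote to
its disk cache.  On Linux, something like this works (the data path is
just a placeholder for wherever your index lives):

  du -sh /var/solr/data   # total index data on this node (placeholder path)
  free -g                 # buffers/cached shows what the OS can use for caching

If the cached figure is far smaller than the index on that node, queries
will be doing a lot of actual disk reads.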

https://wiki.apache.org/solr/SolrPerformanceProblems
https://lucidworks.com/blog/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/

If you want a single number recommendation for memory size, I would
recommend starting with 256GB, and being ready to add more.  It is very
common for servers to be incapable of handling that much memory,
though.  The servers that I use for Solr max out at 64GB.

You might need to split your index onto additional machines by sharding
it, and gain the additional memory that way.
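
If you go that route, the Collections API can split an existing shard in
place; the host, collection, and shard names below are only placeholders:

  http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1

Once the new sub-shards are active, replicas of them can be placed on the
additional machines, so each node holds a smaller slice of the index and
needs less memory to cache it.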

Thanks,
Shawn
