On 1/8/2015 9:39 AM, Joseph Obernberger wrote:
> Yes - it would be 20GBytes of cache per 270GBytes of data.
That's not a lot of cache.  One rule of thumb is that you should have at
least 50% of the index size available as cache, with 100% being a lot
better.  For 270GB of index per node, that works out to roughly 135GB to
270GB of RAM left over for the OS disk cache.  The caching should happen
on the Solr server itself so there isn't a network bottleneck.  This is
one of several reasons why local storage on regular filesystems is
preferred for Solr.

> We've tried lower Xmx but we get OOM errors during faceting of large
> datasets.  Right now we're running two JVMs per physical box (2 shards
> per box), but we're going to be changing that to one JVM and one shard
> per box.

This wiki page has some info on what can cause high heap requirements
and some general ideas for what you can do about it:

http://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap

For faceting in particular, enabling docValues on the fields you facet
on is one of the more effective ways to move memory usage off the Java
heap; there is a rough schema sketch below my signature.

If you want to discuss your specific situation, we can use the list,
direct email, or the #solr IRC channel.

http://wiki.apache.org/solr/IRCChannels

Thanks,
Shawn
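
As a rough illustration of the docValues change (the field name and type
below are made up, so adjust them to match your schema, and note that
adding docValues to an existing field requires a reindex), the schema.xml
entry for a facet field would look something like this:

  <!-- Hypothetical string field used for faceting.  With docValues
       enabled, faceting reads memory-mapped index files instead of
       building large per-field structures on the Java heap. -->
  <field name="category" type="string" indexed="true" stored="true"
         docValues="true"/>

Whether this helps enough depends on which fields you facet on and how
many unique values they contain, but it is worth trying before raising
Xmx any further.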