Quickly scanned this, and from what I can tell it picks values for things like Xmx based on the memory found on the host. That's a fine first guess, but ultimately one wants control over that, adjusting it based on factors beyond just available RAM: whether sorting is used, or faceting, on how many fields, of which types, etc. etc.
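For context, the "pick Xmx from host RAM" logic in a script like that boils down to something like the following. This is a simplified sketch; the function name, the half-of-RAM heuristic, and the 8192 MB ceiling are illustrative assumptions, not the actual values in cassandra-env.sh:

```shell
# Derive a max heap size (in MB) from total system memory.
# Heuristic and ceiling are illustrative, not cassandra-env.sh's real ones.
calculate_max_heap_mb() {
    system_memory_mb=$1
    half_mem_mb=$((system_memory_mb / 2))
    # Never let the auto-computed heap grow past an absolute ceiling,
    # no matter how much RAM the host has.
    if [ "$half_mem_mb" -gt 8192 ]; then
        echo 8192
    else
        echo "$half_mem_mb"
    fi
}

calculate_max_heap_mb 4096     # small box: half of RAM -> 2048
calculate_max_heap_mb 65536    # large box: hits the ceiling -> 8192
```

The point is that RAM is the only input here, which is exactly the limitation: nothing about the index, caches, or query patterns feeds into the number.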
Just today we (Sematext) had a client with a rather large Solr index and memory issues. After about an hour of troubleshooting and looking at Solr metrics in SPM, we correlated high GC, the old-gen memory pool hitting 100%, a few other signals, and warmup queries that ended up throwing a node into OOM-land. It turned out the warmup queries were using very non-selective filter queries, which created massive entries in the filter cache.

In such situations I wouldn't want a script to assume what Xmx is needed. I'd want to set this value based on all the information I have at my disposal.

Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


On Tue, Nov 12, 2013 at 12:59 PM, Scott Stults <
sstu...@opensourceconnections.com> wrote:

> We've been using a slightly older version of this script to start Solr in
> server environments:
>
> https://github.com/apache/cassandra/blob/trunk/conf/cassandra-env.sh
>
> The thing I especially like about it is its ability to dynamically cap
> memory usage, and the garbage collection log section is a great reference
> when we need to check gc times.
>
> My question is, does anyone else use a script like this to configure the
> JVM for Solr? Would it be useful to have this as a reference in
> solr/example/etc?
>
>
> Thanks!
> -Scott
>
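For what it's worth, the compromise I'd want from such a script is for its guess to be only a fallback that an operator can always override. A minimal sketch; the SOLR_HEAP_MB variable and the 2048 MB default are assumptions for illustration, not anything an actual Solr start script defines:

```shell
# Honor an operator-supplied heap size; fall back to the script's
# computed default only when none is given. SOLR_HEAP_MB and the
# 2048 MB default are illustrative assumptions.
jvm_mem_opts() {
    default_heap_mb=2048
    heap_mb="${SOLR_HEAP_MB:-$default_heap_mb}"
    # Pin -Xms to -Xmx so the heap doesn't resize under load.
    echo "-Xms${heap_mb}m -Xmx${heap_mb}m"
}

unset SOLR_HEAP_MB
jvm_mem_opts                    # -> -Xms2048m -Xmx2048m
SOLR_HEAP_MB=6144
jvm_mem_opts                    # operator decides, script obeys
```

That keeps the convenience of an auto-sized default for the example/demo case without taking the decision away from whoever actually knows the workload.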