Yeah, that is pretty outdated. At Netflix, I was running an 8 GB heap with Solr 1.3. :-)
Every GC I know about has a stop-the-world collector as a last-ditch measure. G1GC limits the time that the world will stop: it gives up after MaxGCPauseMillis milliseconds and leaves the rest of the garbage uncollected. If it has 5 seconds' worth of work to do, it might take 10 seconds, but in 200 ms chunks. It does a lot of other stuff outside of the pauses to make the major collections more effective.

We wrote Ultraseek in Python+C because Python used reference counting and did not do garbage collection. That is the only way to have no pauses with automatic memory management.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)

> On Feb 14, 2020, at 11:35 AM, Tom Burton-West <tburt...@umich.edu> wrote:
>
> Hello,
>
> In the section on JVM tuning in the Solr 8.3 documentation
> (https://lucene.apache.org/solr/guide/8_3/jvm-settings.html#jvm-settings)
> there is a paragraph which cautions about setting heap sizes over 2 GB:
>
> "The larger the heap the longer it takes to do garbage collection. This can
> mean minor, random pauses or, in extreme cases, "freeze the world" pauses
> of a minute or more. As a practical matter, this can become a serious
> problem for heap sizes that exceed about **two gigabytes**, even if far
> more physical memory is available. On robust hardware, you may get better
> results running multiple JVMs, rather than just one with a large memory
> heap." (** added by me)
>
> I suspect this paragraph is severely outdated, but I am not a Java expert.
> It seems to be contradicted by the statement in
> https://lucene.apache.org/solr/guide/8_3/taking-solr-to-production.html#memory-and-gc-settings:
> "...values between 10 and 20 gigabytes are not uncommon for production
> servers"
>
> Are "freeze the world" pauses still an issue with modern JVMs?
> Is it still advisable to avoid heap sizes over 2 GB?
>
> Tom
> https://www.hathitrust.org/blogs/large-scale-search
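
For anyone tuning this on a current install, a minimal sketch of what the G1 settings look like in bin/solr.in.sh. The flag names are real HotSpot options; the heap size and pause target are illustrative values, not recommendations for any particular workload:

    # solr.in.sh fragment (sketch; values are illustrative only)
    SOLR_HEAP="8g"
    GC_TUNE="-XX:+UseG1GC \
      -XX:MaxGCPauseMillis=250 \
      -XX:+ParallelRefProcEnabled"

G1 treats MaxGCPauseMillis as a goal rather than a hard guarantee: it sizes its increments of work to try to stay under the target, which is the "200 ms chunks" behavior described above.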
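
To illustrate the reference-counting point: in CPython, an object whose reference count drops to zero is freed at that exact moment, with no separate collection pause. (Caveat: modern CPython also runs a cyclic collector on top of reference counting, so this pure-refcounting behavior only holds for objects that are not part of reference cycles.)

    import sys

    class Resource:
        def __del__(self):
            # With reference counting, this runs the instant the last
            # reference disappears -- no collector pause involved.
            print("freed immediately")

    r = Resource()
    print(sys.getrefcount(r))  # 2: 'r' itself plus getrefcount's argument
    r = None                   # count drops to zero; __del__ fires right here
    print("this line prints after 'freed immediately'")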