capistrant commented on code in PR #17892:
URL: https://github.com/apache/druid/pull/17892#discussion_r2034102336
##########
docs/operations/basic-cluster-tuning.md:
##########
@@ -40,11 +40,10 @@ The biggest contributions to heap usage on Historicals are:

 - Partial unmerged query results from segments
 - The stored maps for [lookups](../querying/lookups.md).

-A general rule-of-thumb for sizing the Historical heap is `(0.5GiB * number of CPU cores)`, with an upper limit of ~24GiB.
+A general rule-of-thumb for sizing the Historical heap is `(0.5GiB * number of CPU cores)`.

-This rule-of-thumb scales using the number of CPU cores as a convenient proxy for hardware size and level of concurrency (note: this formula is not a hard rule for sizing Historical heaps).
-
-Having a heap that is too large can result in excessively long GC collection pauses, the ~24GiB upper limit is imposed to avoid this.
+This is a starting point, not a hard rule for sizing Historical heaps.
+Note that with certain garbage collectors, having a large heap can result in excessively long GC pauses. For heaps larger than about 24GiB, we recommend using a collector that can handle large heaps, such as Shenandoah or ZGC.

Review Comment:
   Are we confident that these GCs work well on large-heap Historicals in the wild? I am in total agreement about lifting the suggestion of a 24GiB cap on Historical heaps, but wanted to confirm we are pushing the community somewhere they will find success. If indeed they work well out of the box, that is great, because I have experience having to tinker with G1 collector configs to get Historicals running smoothly with large heaps.
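For context on the flags being discussed: ZGC and Shenandoah are both selected with a single standard HotSpot option and are production (non-experimental) collectors as of JDK 15. As an illustrative sketch only (the 32 GiB heap size and the exact flag set are assumptions for this example, not a recommendation from the PR or the Druid docs), a Historical `jvm.config` using ZGC might look like:

```properties
# Illustrative jvm.config for a large-heap Historical (hypothetical sizing).
# Note: Druid's jvm.config expects one JVM argument per line and does not
# support comments, so strip these comment lines before actual use.
-server
-Xms32g
-Xmx32g
-XX:+UseZGC
-XX:+AlwaysPreTouch
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
```

Swapping `-XX:+UseZGC` for `-XX:+UseShenandoahGC` selects Shenandoah instead (note that Shenandoah ships in most OpenJDK distributions but not in Oracle's builds). Both collectors are designed to keep pause times low largely independent of heap size, which is the reviewer's concern with G1 on large heaps.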
