[ https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16305820#comment-16305820 ]
Erick Erickson commented on LUCENE-7976:
----------------------------------------

bq. So I think maxMergedSegmentMB should win over maxSegments passed to forceMerge

Works for me.

bq. one way to shoot yourself (3a) is enough ;)

WDYT about going one step further and deprecating maxSegments? Does having that extra knob (maxSegments) really add any value? A value of -1 for maxMergedSegmentMB would mean the same thing as the old optimize. That would avoid having to reconcile the two.

So here's what we tell users (needs to be prettied up):

1> In general, invoking forceMerge is unnecessary. Especially in a frequently updated index, the default settings should suffice, and forceMerge can actually hurt (there's a blog about that).

2> If you find you have too many deleted documents in your index, consider changing reclaimDeletesWeight in your configuration (and provide some guidance on reasonable values). See the sketch below.

3> forceMerge now respects maxMergedSegmentMB. This means that forceMerge will no longer create a one-segment index by default, although it will still purge all deleted documents.

4> If you require forceMerge to produce a single segment, you must pass maxMergedSegmentMB=-1 to the forceMerge command. Setting maxMergedSegmentMB=-1 permanently in your config is not recommended, since it will lead to excessive I/O during normal indexing. Invoking forceMerge with maxMergedSegmentMB=-1 is only recommended if you're willing and able to repeat the operation whenever the index changes; otherwise deleted documents will again pile up and occupy excessive space.

5> (assuming we deprecate maxSegments) forceMerge no longer supports maxSegments. You can approximate that behavior by choosing maxMergedSegmentMB from the total size of your index; e.g., for a roughly 50G index, maxMergedSegmentMB=10240 (10G) should yield about five segments.
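For concreteness, a minimal sketch of how these knobs map onto the current Lucene 7.x API; the index path and the values here are made up for illustration, not recommendations:

{code:java}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;
import org.apache.lucene.store.FSDirectory;

public class MergeTuningSketch {
  public static void main(String[] args) throws Exception {
    TieredMergePolicy tmp = new TieredMergePolicy();
    tmp.setMaxMergedSegmentMB(5 * 1024); // the default 5G cap discussed above
    tmp.setReclaimDeletesWeight(3.0);    // default is 2.0; higher favors merges that reclaim deletes
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer())
        .setMergePolicy(tmp);
    try (IndexWriter writer = new IndexWriter(
        FSDirectory.open(Paths.get("/path/to/index")), iwc)) {
      // As things stand today, this merges down to a single segment no matter
      // how large; under the proposal it would respect maxMergedSegmentMB instead.
      writer.forceMerge(1);
    }
  }
}
{code}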
> Add a parameter to TieredMergePolicy to merge segments that have more than X percent deleted documents
> ------------------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-7976
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7976
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Erick Erickson
>         Attachments: LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where very large indexes (on disk) are handled quite easily in a single Lucene index. This is particularly true as features like docValues move data into MMapDirectory space. The current TMP algorithm allows on the order of 50% deleted documents, as per a dev list conversation with Mike McCandless (and his blog here: https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large aggregate indexes (think many TB), solutions like "you need to distribute your collection over more shards" become very costly. Additionally, the tempting "optimize" button exacerbates the issue: once you form, say, a 100G segment (by optimizing/forceMerging), it is not eligible for merging until 97.5G of the docs in it are deleted (with the current default 5G max segment size, TMP only considers segments whose live data is under half the cap, i.e. 2.5G).
> The proposal here would be to add a new parameter to TMP, something like <maxAllowedPctDeletedInBigSegments> (no, that's not a serious name, suggestions welcome), which would default to 100 (i.e. the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 5G, the following would happen when segments were selected for merging: any segment with > 20% deleted documents would be merged or rewritten NO MATTER HOW LARGE. There are two cases (see the sketch below):
> >> The segment has < 5G "live" docs. In that case it would be merged with smaller segments to bring the resulting segment up to 5G. If no smaller segments exist, it would just be rewritten.
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). It would be rewritten into a single segment removing all deleted docs, no matter how big it is to start. The 100G example above would be rewritten to an 80G segment, for instance.
> Of course this would lead to potentially much more I/O, which is why the default would be the same behavior we see now. As it stands, though, there's no way to recover from an optimize/forceMerge except to re-index from scratch. We routinely see 200G-300G Lucene indexes "in the wild" at this point, with 10s of shards replicated 3 or more times. And that doesn't even include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A new merge policy is certainly an alternative.
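To pin down the two cases in the proposal, here is a hypothetical sketch of the selection rule. maxAllowedPctDeleted and everything else below are placeholder names for the idea in this issue, not real TieredMergePolicy API:

{code:java}
/** Hypothetical sketch of the proposed TMP selection rule; nothing here is real API. */
public class ProposedSelectionSketch {

  enum Action { LEAVE_ALONE, MERGE_WITH_SMALLER, REWRITE_SINGLY }

  static Action classify(long segBytes, double pctDeleted,
                         double maxAllowedPctDeleted, long maxSegmentBytes) {
    if (pctDeleted <= maxAllowedPctDeleted) {
      return Action.LEAVE_ALONE; // the default of 100 keeps today's behavior
    }
    long liveBytes = (long) (segBytes * (1.0 - pctDeleted / 100.0));
    if (liveBytes < maxSegmentBytes) {
      // Case 1: live docs fit under the cap. Merge with smaller segments to
      // bring the result up toward 5G, or just rewrite if none exist.
      return Action.MERGE_WITH_SMALLER;
    }
    // Case 2: live docs alone exceed the cap (a forceMerge/optimize artifact).
    // Rewrite into a single segment, purging all deletes, however large.
    return Action.REWRITE_SINGLY;
  }

  public static void main(String[] args) {
    long G = 1024L * 1024 * 1024;
    // The 100G example from the description: 20% threshold, 5G cap.
    System.out.println(classify(100 * G, 25.0, 20.0, 5 * G)); // REWRITE_SINGLY
    System.out.println(classify(4 * G, 25.0, 20.0, 5 * G));   // MERGE_WITH_SMALLER
    System.out.println(classify(100 * G, 10.0, 20.0, 5 * G)); // LEAVE_ALONE
  }
}
{code}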