[ https://issues.apache.org/jira/browse/LUCENE-1750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732687#action_12732687 ]
Jason Rutherglen commented on LUCENE-1750:
------------------------------------------

Yeah, I realized that later. So a new merge policy that inherits from
LogByteSizeMergePolicy and enforces a segment size limit should work.
Ideally, once a segment gets near enough to the limit, merging into it
stops. This was easier when the shards were in separate directories
(i.e., fill up a directory, stop when it reaches the limit, optimize
the directory, and move on).

> LogByteSizeMergePolicy doesn't keep segments under maxMergeMB
> -------------------------------------------------------------
>
>                 Key: LUCENE-1750
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1750
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Index
>    Affects Versions: 2.4.1
>            Reporter: Jason Rutherglen
>            Priority: Minor
>             Fix For: 2.9
>
>         Attachments: LUCENE-1750.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Basically I'm trying to create largish 2-4 GB shards using
> LogByteSizeMergePolicy; however, the attached unit test shows
> segments that exceed maxMergeMB.
> The goal is for segments to be merged up to 2 GB, at which point all
> merging into that segment stops and another 2 GB segment is
> started. This helps when replicating in Solr, where creating a single
> optimized 60 GB segment can starve the machine of IO and CPU.
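The capped-merge behavior discussed above (merge small segments together until a size cap is reached, then retire the segment and never merge into it again) can be sketched roughly as follows. This is a hypothetical illustration of the selection logic only; it does not use the real Lucene MergePolicy API, and all class and method names here are invented for the sketch.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of size-capped merge selection: segments are
 * grouped into merges only while the combined size stays under the cap
 * (e.g. 2 GB); a segment already at or over the cap is left alone and
 * never chosen as a merge target. Not the actual Lucene implementation.
 */
public class CappedMergeSketch {

    /** Greedily groups segment sizes (in MB) into merges capped at maxMergeMB. */
    static List<List<Long>> selectMerges(List<Long> segmentSizesMB, long maxMergeMB) {
        List<List<Long>> merges = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        long currentSize = 0;
        for (long size : segmentSizesMB) {
            if (size >= maxMergeMB) {
                // Already at/over the cap: stop merging into this segment entirely.
                continue;
            }
            if (currentSize + size > maxMergeMB && !current.isEmpty()) {
                // Adding this segment would exceed the cap: emit the merge
                // and start accumulating a new one.
                merges.add(current);
                current = new ArrayList<>();
                currentSize = 0;
            }
            current.add(size);
            currentSize += size;
        }
        if (current.size() > 1) {
            merges.add(current); // a single leftover segment needs no merge
        }
        return merges;
    }

    public static void main(String[] args) {
        // Sizes in MB; cap at 2048 MB (2 GB) as in the issue description.
        // The 2048 MB segment is skipped; the rest group under the cap.
        List<Long> sizes = List.of(500L, 800L, 700L, 2048L, 600L, 900L);
        for (List<Long> merge : selectMerges(sizes, 2048)) {
            System.out.println(merge);
        }
    }
}
```

With the sizes above, the sketch proposes merging [500, 800, 700] (2000 MB, under the cap) and [600, 900], while the existing 2048 MB segment is never selected, which matches the "stop merging into a full segment" goal of the issue.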