[ https://issues.apache.org/jira/browse/LUCENE-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14264647#comment-14264647 ]

ASF subversion and git services commented on LUCENE-6119:
---------------------------------------------------------

Commit 1649539 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1649539 ]

LUCENE-6119: CMS dynamically rate limits IO writes of each merge depending on 
incoming merge rate
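
As a minimal sketch of how an application wires this up (assuming the Lucene 5.x
API, where ConcurrentMergeScheduler exposes enableAutoIOThrottle() and
getIORateLimitMBPerSec(); the index path is illustrative):

    import java.nio.file.Paths;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.ConcurrentMergeScheduler;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.FSDirectory;

    public class AutoIOThrottleExample {
      public static void main(String[] args) throws Exception {
        ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
        // Enable auto IO throttle explicitly (it may already be on by
        // default): CMS then adjusts each merge's allowed write rate
        // from the observed incoming merge rate.
        cms.enableAutoIOThrottle();

        IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
        iwc.setMergeScheduler(cms);

        try (IndexWriter writer =
                 new IndexWriter(FSDirectory.open(Paths.get("/tmp/index")), iwc)) {
          // ... index documents ...
          // Current per-merge write cap, in MB/sec:
          System.out.println("merge IO rate limit: "
              + cms.getIORateLimitMBPerSec() + " MB/sec");
        }
      }
    }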

> Add auto-io-throttle to ConcurrentMergeScheduler
> ------------------------------------------------
>
>                 Key: LUCENE-6119
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6119
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>             Fix For: 5.0, Trunk
>
>         Attachments: LUCENE-6119.patch, LUCENE-6119.patch, LUCENE-6119.patch, 
> LUCENE-6119.patch, LUCENE-6119.patch, LUCENE-6119.patch
>
>
> This method returns the number of "incoming" bytes IW has written since
> it was opened, excluding merging.
> It tracks flushed segments, new commits (segments_N), incoming
> files/segments added by addIndexes, and newly written live docs / doc
> values update files.
> It's an easy statistic for IW to track, and it should help applications
> set smarter defaults for IO throttling (RateLimiter).
> For example, an application that does hardly any indexing but finally
> triggers a large merge can afford to heavily throttle that merge so it
> won't interfere with ongoing searches.
> But an application that's causing IW to write new bytes at 50 MB/sec
> must set a correspondingly higher IO throttle, or merges will clearly
> fall behind (see the sketch below this quote).
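
To make the quoted 50 MB/sec example concrete, here is a minimal sketch of
picking a merge write cap from an observed incoming rate using Lucene's
RateLimiter.SimpleRateLimiter; the 1.5x headroom factor and the 5 MB/sec
floor are illustrative assumptions, not values from this issue:

    import org.apache.lucene.store.RateLimiter;

    public class MergeThrottleSketch {

      // Hypothetical policy (not from the issue): cap merge writes at
      // 1.5x the incoming rate, with a 5 MB/sec floor.
      static double mergeCapMBPerSec(double incomingMBPerSec) {
        // A cap below the incoming rate guarantees merges fall behind.
        return Math.max(5.0, 1.5 * incomingMBPerSec);
      }

      public static void main(String[] args) {
        double cap = mergeCapMBPerSec(50.0); // 50 MB/sec in -> 75 MB/sec cap
        RateLimiter limiter = new RateLimiter.SimpleRateLimiter(cap);

        // A merge thread would call pause(bytes) after each chunk it
        // writes, sleeping as needed to keep the average write rate
        // under the cap.
        long pausedNs = limiter.pause(8 * 1024 * 1024); // just wrote 8 MB
        System.out.println("cap " + cap + " MB/sec, paused " + pausedNs + " ns");
      }
    }

The point of this commit is that applications no longer have to pick such a
cap by hand: CMS now derives it dynamically from the incoming merge rate.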


