AmatyaAvadhanula opened a new pull request, #13254:
URL: https://github.com/apache/druid/pull/13254

   
   
   Adds a new `sortingCost` balancer strategy that is faster than `cachingCost` while making decisions identical to `cost` in all cases.
   
   
   
   ### Description
   
   
   https://github.com/apache/druid/pull/2972 proposes a cost function for segment balancing. While this helps achieve an optimal distribution of segments across servers, it can be slow on large clusters.
   
   The `cachingCost` strategy was an attempt to make the same decisions as `cost`, but faster. However, there are a few discrepancies in the current implementation which lead to uneven distribution and slower convergence in the presence of segments with multiple granularities.
   
   This PR introduces `sortingCost`, a simple optimization of the original `cost` strategy. It computes the same cost function even in the presence of multiple granularities, while being just as fast as, if not faster than, `cachingCost`.
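   For intuition, here is a hedged sketch of the kind of sorting-based optimization described above. This is not the actual `SortingCostComputer` from the PR (which operates on segment intervals and Druid's real half-life constant); it reduces each segment to a single timestamp to show the underlying trick: the total exponential-decay cost of a candidate against a sorted set can be split at the candidate's position and read off prefix sums of `exp(±t_i / LAMBDA)`, replacing an O(n) pairwise scan with an O(log n) lookup after O(n log n) preparation.

   ```java
   import java.util.Arrays;

   // Illustrative sketch only; names and constants are hypothetical,
   // not taken from the Druid codebase.
   public class SortingCostSketch
   {
     // Illustrative decay scale, in hours. Druid's actual cost function
     // integrates an exponential decay over segment intervals.
     static final double LAMBDA = 24.0;

     private final double[] sortedTimes;
     private final double[] prefixExpPos; // prefix sums of exp(t_i / LAMBDA)
     private final double[] prefixExpNeg; // prefix sums of exp(-t_i / LAMBDA)

     public SortingCostSketch(double[] times)
     {
       sortedTimes = times.clone();
       Arrays.sort(sortedTimes);
       final int n = sortedTimes.length;
       prefixExpPos = new double[n + 1];
       prefixExpNeg = new double[n + 1];
       for (int i = 0; i < n; i++) {
         prefixExpPos[i + 1] = prefixExpPos[i] + Math.exp(sortedTimes[i] / LAMBDA);
         prefixExpNeg[i + 1] = prefixExpNeg[i] + Math.exp(-sortedTimes[i] / LAMBDA);
       }
     }

     /** Sum over all i of exp(-|t - t_i| / LAMBDA), in O(log n). */
     public double cost(double t)
     {
       final int idx = countAtMost(sortedTimes, t); // number of t_i <= t
       // For t_i <= t: exp(-(t - t_i)/L) = exp(-t/L) * exp(t_i/L)
       final double left = Math.exp(-t / LAMBDA) * prefixExpPos[idx];
       // For t_i > t: exp(-(t_i - t)/L) = exp(t/L) * exp(-t_i/L)
       final double right =
           Math.exp(t / LAMBDA) * (prefixExpNeg[sortedTimes.length] - prefixExpNeg[idx]);
       return left + right;
     }

     /** O(n) reference implementation used to sanity-check the fast path. */
     public double costBruteForce(double t)
     {
       double sum = 0;
       for (double ti : sortedTimes) {
         sum += Math.exp(-Math.abs(t - ti) / LAMBDA);
       }
       return sum;
     }

     private static int countAtMost(double[] sorted, double key)
     {
       int lo = 0, hi = sorted.length;
       while (lo < hi) {
         final int mid = (lo + hi) >>> 1;
         if (sorted[mid] <= key) {
           lo = mid + 1;
         } else {
           hi = mid;
         }
       }
       return lo;
     }
   }
   ```

   The same decomposition generalizes to interval costs, which is why sorting the server's segments once lets every candidate's total cost be evaluated without touching each held segment individually.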
   
   
   #### Add sortingCost strategy
   The perf improvements can be checked by running `SortingCostComputerTest`#`perfComparisonTest`.
   With 100k segments, cost computation is about `2000x` faster. However, the overall coordinator cycle is unlikely to be affected as drastically.
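   Assuming the new strategy is registered alongside the existing ones, it would presumably be selected the same way `cost` and `cachingCost` are today, via the `balancerStrategy` field of the coordinator dynamic config (the value `sortingCost` is the name this PR proposes, not yet a documented option):

   ```json
   {
     "balancerStrategy": "sortingCost"
   }
   ```

   posted to the coordinator's `/druid/coordinator/v1/config` endpoint.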
   
   #### Add simulation 
   Simulate with different cost strategies using 
`SegmentLoadingTest`#`testLoadAndBalanceSeveral`.
   
   Here are the results of the simulation with about 30k segments of hourly, 
weekly and yearly granularity over 500 iterations.
   Segments were loaded for 50 iterations among 3 historicals and then balanced 
for the rest after adding 2 more historicals.
   
   `cachingCost` : 317590 ms
   - Server[tier_t1__hist__1, historical, tier_t1] has 1 left to load, 0 left 
to drop, 9,498 served, 10 bytes queued, 27,930 bytes served.
   - Server[tier_t1__hist__3, historical, tier_t1] has 0 left to load, 0 left 
to drop, 9,363 served, 0 bytes queued, 28,308 bytes served.
   - Server[tier_t1__hist__2, historical, tier_t1] has 1 left to load, 0 left 
to drop, 10,193 served, 1 bytes queued, 28,490 bytes served.
   - Server[tier_t1__hist__5, historical, tier_t1] has 35 left to load, 0 left 
to drop, 14,432 served, 89 bytes queued, 43,781 bytes served.
   - Server[tier_t1__hist__4, historical, tier_t1] has 18 left to load, 0 left 
to drop, 15,914 served, 45 bytes queued, 46,451 bytes served.
   
   
   `cost` : 765574 ms
   - Server[tier_t1__hist__4, historical, tier_t1] has 16 left to load, 0 left 
to drop, 11,659 served, 25 bytes queued, 34,492 bytes served.
   - Server[tier_t1__hist__5, historical, tier_t1] has 10 left to load, 0 left 
to drop, 11,927 served, 19 bytes queued, 34,760 bytes served.
   - Server[tier_t1__hist__2, historical, tier_t1] has 12 left to load, 0 left 
to drop, 11,575 served, 12 bytes queued, 34,867 bytes served.
   - Server[tier_t1__hist__1, historical, tier_t1] has 8 left to load, 0 left 
to drop, 12,144 served, 8 bytes queued, 35,274 bytes served.
   - Server[tier_t1__hist__3, historical, tier_t1] has 11 left to load, 0 left 
to drop, 12,095 served, 29 bytes queued, 35,567 bytes served.
   
   `sortingCost` : 266421 ms
   - Server[tier_t1__hist__4, historical, tier_t1] has 12 left to load, 0 left 
to drop, 11,492 served, 21 bytes queued, 34,001 bytes served.
   - Server[tier_t1__hist__2, historical, tier_t1] has 3 left to load, 0 left 
to drop, 10,258 served, 21 bytes queued, 34,036 bytes served.
   - Server[tier_t1__hist__5, historical, tier_t1] has 28 left to load, 0 left 
to drop, 13,052 served, 37 bytes queued, 35,390 bytes served.
   - Server[tier_t1__hist__1, historical, tier_t1] has 20 left to load, 0 left 
to drop, 12,256 served, 38 bytes queued, 35,701 bytes served.
   - Server[tier_t1__hist__3, historical, tier_t1] has 8 left to load, 0 left 
to drop, 12,342 served, 17 bytes queued, 35,832 bytes served.
   
   
   
   
   <hr>
   
   
   This PR has:
   
   - [x] been self-reviewed.
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] a release note entry in the PR description.
   - [x] added Javadocs for most classes and all non-trivial methods. Linked 
related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in 
[licenses.yaml](https://github.com/apache/druid/blob/master/dev/license.md)
   - [x] added comments explaining the "why" and the intent of the code 
wherever would not be obvious for an unfamiliar reader.
   - [x] added unit tests or modified existing tests to cover new code paths, 
ensuring the threshold for [code 
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
 is met.
   - [ ] added integration tests.
   - [x] been tested in a test Druid cluster.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

