abhishekrb19 opened a new pull request, #16719: URL: https://github.com/apache/druid/pull/16719
### Problem

Currently, there is a fairness problem in the `KillUnusedSegments` duty: the duty consistently selects the same set of datasources discovered from the metadata store or dynamic config parameters. This is particularly problematic when there is a steady stream of unused segments created by retention rules and only a few kill task slots.

### Fix

To address the fairness problem, this patch adds a simple round-robin iterator to select datasources to kill. The `RoundRobinIterator` has the following properties:

1. Starts at an initial random cursor position in an ordered list of candidates.
2. Consecutive iterations on the iterator are guaranteed to be deterministic unless the set of candidates changes when `updateCandidates()` is called.
3. Guarantees that no duplicate candidates are returned in two consecutive iterations on the iterator.

In my testing, I observed that using a random index across multiple updates and iterations results in a uniform selection of datasources. The property of avoiding consecutive duplicates is an optimization that can particularly benefit smaller clusters, where task slots are limited by default.

A few other greedy, heuristic-based approaches were also considered, based on:

1. The total number of unused segments.
2. The total size of unused segments.

The problem with these approaches is that computing and determining the heuristics can be expensive and non-trivial within the scope of the kill duty. Moreover, these greedy approaches won't necessarily address the fairness problem at hand in all scenarios.

#### Miscellaneous changes

This patch also includes a few annotation fixes and refactoring/logging improvements. While testing this patch on a medium-sized cluster, I sometimes couldn't tell what was happening in the duty, so I've added some info logs for the exit conditions to improve observability. These are relatively small changes, but I'm happy to pull them out into a separate PR to make reviewing easier if needed.
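The three properties above can be sketched roughly as follows. This is a minimal illustrative sketch, not the actual `RoundRobinIterator` in this patch; the class name `RoundRobinSketch`, the two-argument constructor for fixing the initial cursor in tests, and the `String` candidate type are assumptions made for illustration only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;
import java.util.TreeSet;

/**
 * Illustrative sketch of a round-robin iterator over datasource names:
 * a random initial cursor, deterministic iteration between candidate
 * updates, and no duplicates across two consecutive iterations.
 */
public class RoundRobinSketch
{
  private List<String> candidates;
  private int cursor;
  private String lastReturned = null;

  /** Production-style constructor: random initial cursor position. */
  public RoundRobinSketch(Set<String> initialCandidates)
  {
    this(
        initialCandidates,
        initialCandidates.isEmpty() ? 0 : new Random().nextInt(initialCandidates.size())
    );
  }

  /** Testing hook: fix the initial cursor (e.g. 0) for deterministic iteration. */
  public RoundRobinSketch(Set<String> initialCandidates, int initialCursor)
  {
    this.candidates = new ArrayList<>(new TreeSet<>(initialCandidates)); // ordered candidates
    this.cursor = initialCursor;
  }

  /** Returns the next candidate and advances the cursor. */
  public String next()
  {
    lastReturned = candidates.get(cursor);
    cursor = (cursor + 1) % candidates.size();
    return lastReturned;
  }

  /**
   * Refresh the candidate set. Resumes just after the last returned candidate
   * if it survives the update, so two consecutive iterations never return a
   * duplicate even across updates.
   */
  public void updateCandidates(Set<String> newCandidates)
  {
    final List<String> updated = new ArrayList<>(new TreeSet<>(newCandidates));
    if (updated.isEmpty()) {
      cursor = 0;
    } else if (lastReturned != null) {
      final int idx = updated.indexOf(lastReturned);
      cursor = idx >= 0 ? (idx + 1) % updated.size() : cursor % updated.size();
    } else {
      cursor = cursor % updated.size();
    }
    candidates = updated;
  }

  /** Testing hook to observe the cursor, mirroring getCurrentPosition(). */
  public int getCurrentPosition()
  {
    return cursor;
  }
}
```

The key design point is that the cursor persists across both iterations and candidate updates instead of resetting, which is what gives each datasource a fair turn even when only a few kill task slots are available per run.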
#### Testing changes

Since the round-robin iterator has some notion of randomness baked in, it was difficult to write tests that are easy to follow. To that end, I've added a couple of testing hooks, `getInitialCursorPosition()` and `getCurrentPosition()`:

1. `RoundRobinIteratorTest` includes unit tests that control the initial cursor position, as well as a test that uses the built-in randomness without influencing the order of iteration.
2. `KillUnusedSegmentsTest` doesn't need to care about the randomization, as it is an implementation detail of the underlying iterator. Therefore, it sets up the round-robin iterator with an initial cursor position of 0, ensuring deterministic iteration that is straightforward to assert when multiple datasources are involved.

#### Release note

The `KillUnusedSegments` coordinator duty now uses a round-robin iterator to select datasources during each run, ensuring varied selection instead of repeatedly choosing the same set of datasources.

This PR has:

- [x] been self-reviewed.
- [x] a release note entry in the PR description.
- [x] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
- [x] added comments explaining the "why" and the intent of the code wherever it would not be obvious to an unfamiliar reader.
- [x] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for [code coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md) is met.
- [x] been tested in a test Druid cluster.
