capistrant opened a new pull request #10622:
URL: https://github.com/apache/druid/pull/10622


   ### Description
   
   
   # Background
   
   Some batch ingestion jobs dynamically identify the intervals being ingested as well as the sharding within those intervals. This dynamic discovery is very user friendly. However, we have found at my company that on our multi-tenant cluster, where many users submit ingestion jobs through a managed service that creates and submits ingestion specs, users can mistakenly start jobs that index more data than they (or we) intend in a single job. For instance, a user may start a Hadoop batch job that indexes a year of raw source data at hourly granularity. This generates up to 365 * 24 = 8,760 segment intervals, each of which may contain multiple shards. To combat this, we limit the number of segment intervals that a single HadoopIndexTask or IndexTask can create, and, where possible, the aggregate number of shards across the whole ingestion job. Doing so has allowed us to improve quality of service for our many tenants.
   
   We are now exploring upstreaming a similar implementation of our tooling. Our thought is to open this PR and gauge community interest in such a feature; I think others who run multi-tenant clusters could benefit from this if merged.
   
   # Description of feature
   
   These new configs apply only to IndexTask (non-parallel) and HadoopIndexTask. While we have not fully explored an implementation for ParallelIndexTask, it appears that it may be difficult or impossible to cleanly identify when and how to stop tasks once they hit the limits.
   
   The tuning config for the applicable tasks adds two new properties (see the sketch after this list):
   * `maxSegmentIntervalsPermitted`: the maximum number of segment intervals that a single job may dynamically identify for ingestion.
   * `maxAggregateSegmentIntervalShardsPermitted`: the maximum aggregate number of shards, across all intervals, that a job may create when sharding is discovered dynamically before ingestion.
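   
   As an illustration, here is a rough sketch of how these two optional properties might surface on a task's tuning config. Only the property names come from this PR; the class name, the constructor shape, and the null-means-unlimited convention are assumptions for the sketch, not the actual patch.
   
   ```java
   // Rough sketch only: class name, constructor shape, and defaults are
   // assumptions; just the two property names come from this PR.
   import com.fasterxml.jackson.annotation.JsonCreator;
   import com.fasterxml.jackson.annotation.JsonProperty;
   
   public class ExampleTuningConfig
   {
     // Boxed Integer so an omitted property (null) can mean "no limit",
     // leaving existing ingestion specs unaffected.
     private final Integer maxSegmentIntervalsPermitted;
     private final Integer maxAggregateSegmentIntervalShardsPermitted;
   
     @JsonCreator
     public ExampleTuningConfig(
         @JsonProperty("maxSegmentIntervalsPermitted") Integer maxSegmentIntervalsPermitted,
         @JsonProperty("maxAggregateSegmentIntervalShardsPermitted") Integer maxAggregateSegmentIntervalShardsPermitted
     )
     {
       this.maxSegmentIntervalsPermitted = maxSegmentIntervalsPermitted;
       this.maxAggregateSegmentIntervalShardsPermitted = maxAggregateSegmentIntervalShardsPermitted;
     }
   
     @JsonProperty
     public Integer getMaxSegmentIntervalsPermitted()
     {
       return maxSegmentIntervalsPermitted;
     }
   
     @JsonProperty
     public Integer getMaxAggregateSegmentIntervalShardsPermitted()
     {
       return maxAggregateSegmentIntervalShardsPermitted;
     }
   }
   ```
   
   Under that assumption, leaving either property unset preserves current behavior, keeping the feature strictly opt-in for existing ingestion specs.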
   
   It is important to note that these limits are only applied when the information is obtained at runtime by the indexing job. For segment intervals, we only enforce the limit if the spec has `null` intervals. For aggregate sharding, we only enforce the limit if we run a determine-partitions phase that scans the data to compute bucket counts for each interval. In other words, if the user supplies the intervals and sharding up front, we assume they are well aware of the scope of their ingest and do not interfere.
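   
   To make those conditions concrete, below is a minimal sketch of the kind of check a task could run after dynamic discovery. It is not the actual patch: the class, method, map shape, and exception type are all hypothetical, and only the two limit names come from this PR.
   
   ```java
   // Minimal sketch of the enforcement idea, not the actual patch: the class,
   // method, map shape, and exception type are hypothetical.
   import java.util.Map;
   
   public class DynamicLimitEnforcer
   {
     /**
      * Fails fast when dynamically discovered intervals or shard counts exceed
      * the configured limits; a null limit means "unlimited".
      *
      * @param shardsPerInterval discovered shard count keyed by interval string
      */
     public static void enforceLimits(
         final Map<String, Integer> shardsPerInterval,
         final Integer maxSegmentIntervalsPermitted,
         final Integer maxAggregateSegmentIntervalShardsPermitted
     )
     {
       if (maxSegmentIntervalsPermitted != null
           && shardsPerInterval.size() > maxSegmentIntervalsPermitted) {
         throw new IllegalStateException(
             "Job would create " + shardsPerInterval.size()
             + " segment intervals, exceeding maxSegmentIntervalsPermitted="
             + maxSegmentIntervalsPermitted
         );
       }
   
       final int totalShards =
           shardsPerInterval.values().stream().mapToInt(Integer::intValue).sum();
       if (maxAggregateSegmentIntervalShardsPermitted != null
           && totalShards > maxAggregateSegmentIntervalShardsPermitted) {
         throw new IllegalStateException(
             "Job would create " + totalShards
             + " total shards, exceeding maxAggregateSegmentIntervalShardsPermitted="
             + maxAggregateSegmentIntervalShardsPermitted
         );
       }
     }
   }
   ```
   
   A check along these lines would naturally run right after interval/partition discovery, before any segments are created, so an over-limit job fails early rather than after consuming cluster resources.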
   
   
   <hr>
   
   This PR has:
   - [ ] been self-reviewed.
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] added comments explaining the "why" and the intent of the code 
wherever would not be obvious for an unfamiliar reader.
   - [ ] added unit tests or modified existing tests to cover new code paths, 
ensuring the threshold for [code 
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
 is met.
   - [ ] added integration tests.
   - [ ] been tested in a test Druid cluster.
   
   
   <hr>
   
   ##### Key changed/added classes in this PR
   * HadoopTuningConfig
   * HadoopIndexTask
   * IndexTask
   * TuningConfig 

