capistrant opened a new pull request #11913:
URL: https://github.com/apache/druid/pull/11913


   <!-- Thanks for trying to help us make Apache Druid be the best it can be! 
Please fill out as much of the following information as is possible (where 
relevant, and remove it when irrelevant) to help make the intention and scope 
of this PR clear in order to ease review. -->
   
   <!-- Please read the doc for contribution 
(https://github.com/apache/druid/blob/master/CONTRIBUTING.md) before making 
this PR. Also, once you open a PR, please _avoid using force pushes and 
rebasing_ since these make it difficult for reviewers to see what you've 
changed in response to their reviews. See [the 'If your pull request shows 
conflicts with master' 
section](https://github.com/apache/druid/blob/master/CONTRIBUTING.md#if-your-pull-request-shows-conflicts-with-master)
 for more details. -->
   
   <!-- Replace XXXX with the id of the issue fixed in this PR. Remove this 
section if there is no corresponding issue. Don't reference the issue in the 
title of this pull-request. -->
   
   <!-- If you are a committer, follow the PR action item checklist for 
committers:
   
https://github.com/apache/druid/blob/master/dev/committer-instructions.md#pr-and-issue-action-item-checklist-for-committers.
 -->
   
   ### Description
   
   <!-- Describe the goal of this PR, what problem are you fixing. If there is 
a corresponding issue (referenced above), it's not necessary to repeat the 
description here, however, you may choose to keep one summary sentence. -->
   
   <!-- Describe your patch: what did you change in code? How did you fix the 
problem? -->
   
   <!-- If there are several relatively logically separate changes in this PR, 
create a mini-section for each of them. For example: -->
   
   Add configurations to the `index_hadoop` and `index` task type tuning configs that allow certain ingestion jobs to short circuit early if they are determined to breach the thresholds introduced by this PR. These circuit breaker configs are disabled by default.
   
   Short circuit 1: `maxSegmentsIngested` - short circuits the ingestion job if it is determined that the job will generate more segments than the number specified in the tuningConfig.
   
   Short circuit 2: `maxIntervalsIngested` - short circuits the ingestion job if it is determined that the job will generate more segment intervals than the number specified in the tuningConfig.
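   For illustration, the two new thresholds would live in the task's `tuningConfig`. The surrounding fields and the example values below are illustrative only, not taken from this PR:

   ```json
   {
     "type": "index_hadoop",
     "spec": {
       "tuningConfig": {
         "type": "hadoop",
         "maxSegmentsIngested": 5000,
         "maxIntervalsIngested": 366
       }
     }
   }
   ```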
   
   These short circuits only apply in certain scenarios:
     * index_hadoop
       * hashed partitioning
         * both the segments circuit breaker and intervals circuit breaker are 
in effect if the job has to determine partitions
       * single dim partitioning
         * only the intervals circuit breaker is in effect if the job has to 
determine intervals at runtime
     * index
       * dynamic partitioning
         * only the intervals circuit breaker is in effect if the job has to 
determine intervals at runtime
       * hashed partitioning
         * both the segments circuit breaker and intervals circuit breaker are 
in effect if the job has to determine partitions
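   The scenarios above can be sketched as a simple threshold check applied during the determine-partitions/intervals phase. This is an illustrative sketch, not the PR's actual code; all names except `maxSegmentsIngested` and `maxIntervalsIngested` are hypothetical, and the `-1` "disabled" sentinel is an assumption:

   ```java
   public class CircuitBreakerSketch
   {
     /**
      * Fails the job early if the computed segment or interval count breaches
      * the configured limits. A limit of -1 is assumed to mean "disabled".
      */
     static void checkThresholds(
         int numSegments,
         int numIntervals,
         int maxSegmentsIngested,
         int maxIntervalsIngested
     )
     {
       if (maxSegmentsIngested != -1 && numSegments > maxSegmentsIngested) {
         throw new IllegalStateException(
             "Job would create " + numSegments + " segments, exceeding limit " + maxSegmentsIngested
         );
       }
       if (maxIntervalsIngested != -1 && numIntervals > maxIntervalsIngested) {
         throw new IllegalStateException(
             "Job would create " + numIntervals + " intervals, exceeding limit " + maxIntervalsIngested
         );
       }
     }

     public static void main(String[] args)
     {
       checkThresholds(100, 10, 500, 50);   // within limits: no exception
       boolean tripped = false;
       try {
         checkThresholds(600, 10, 500, 50); // segment limit breached
       } catch (IllegalStateException e) {
         tripped = true;
       }
       System.out.println(tripped ? "tripped" : "ok"); // prints "tripped"
     }
   }
   ```

   Because the check runs after partitions/intervals are determined but before any segments are built, a misconfigured or oversized job fails fast instead of consuming cluster capacity.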
   
   Why not have both circuit breakers in effect for all batch job types?
     * First, the circuit breakers only really make sense when the intervals and/or partitions are generated at runtime. This is a spec-level config; if the spec already declares the partitions and/or intervals explicitly, the thresholds are not needed - the spec creator just needs to not submit that spec.
     * It was not obvious how this would be implemented for `index_parallel` due to the architecture of that task type.
     * The segments circuit breaker is not possible for dynamic partitioning in the `index` task type, because the partitions are generated dynamically while the segments are being generated.
     * It was not obvious whether the segments threshold was possible for single_dim in `index_hadoop`.
   
   Why is this useful?
     * Prevents jobs of unexpected or undesired size.
       * My use case is a multi-tenant cluster where data engineers ingest their own data. We have control over the submitted spec, but not over the underlying data the spec points to. These configs let us prevent jobs from creating an obscene number of segments that would hurt the quality of service for other users on the multi-tenant cluster.
     * Even when the spec owner and the data owner are the same, it may be useful for the spec owner to add these configs to guard against mistakenly creating something they did not intend (perhaps generating far more segments than expected due to a misunderstanding of the underlying data being ingested).
   
   <!--
   In each section, please describe design decisions made, including:
    - Choice of algorithms
    - Behavioral aspects. What configuration values are acceptable? How are 
corner cases and error conditions handled, such as when there are insufficient 
resources?
    - Class organization and design (how the logic is split between classes, 
inheritance, composition, design patterns)
    - Method organization and design (how the logic is split between methods, 
parameters and return types)
    - Naming (class, method, API, configuration, HTTP endpoint, names of 
emitted metrics)
   -->
   
   
   <!-- It's good to describe an alternative design (or mention an alternative 
name) for every design (or naming) decision point and compare the alternatives 
with the designs that you've implemented (or the names you've chosen) to 
highlight the advantages of the chosen designs and names. -->
   
   <!-- If there was a discussion of the design of the feature implemented in 
this PR elsewhere (e. g. a "Proposal" issue, any other issue, or a thread in 
the development mailing list), link to that discussion from this PR description 
and explain what have changed in your final design compared to your original 
proposal or the consensus version in the end of the discussion. If something 
hasn't changed since the original discussion, you can omit a detailed 
discussion of those aspects of the design here, perhaps apart from brief 
mentioning for the sake of readability of this PR description. -->
   
   <!-- Some of the aspects mentioned above may be omitted for simple and small 
changes. -->
   
   <hr>
   
   ##### Key changed/added classes in this PR
    * `JobHelper`
    * `IndexTask`
    * `HashPartitionAnalysis`
    * `HadoopTuningConfig`
    * `HadoopDruidDetermineConfigurationJob`
   
   <hr>
   
   <!-- Check the items by putting "x" in the brackets for the done things. Not 
all of these items apply to every PR. Remove the items which are not done or 
not relevant to the PR. None of the items from the checklist below are strictly 
necessary, but it would be very helpful if you at least self-review the PR. -->
   
   This PR has:
   - [ ] been self-reviewed.
      - [ ] using the [concurrency 
checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md)
 (Remove this item if the PR doesn't have any relation to concurrency.)
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] added Javadocs for most classes and all non-trivial methods. Linked 
related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in 
[licenses.yaml](https://github.com/apache/druid/blob/master/dev/license.md)
   - [ ] added comments explaining the "why" and the intent of the code 
wherever would not be obvious for an unfamiliar reader.
   - [ ] added unit tests or modified existing tests to cover new code paths, 
ensuring the threshold for [code 
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
 is met.
   - [ ] added integration tests.
   - [ ] been tested in a test Druid cluster.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


