maytasm opened a new pull request #11311:
URL: https://github.com/apache/druid/pull/11311


Fix bug: 502 Bad Gateway thrown when editing/deleting any auto compaction 
config created in 0.21.0 or earlier
   
   ### Description
   
   This is a regression introduced by https://github.com/apache/druid/pull/11144
   
   This issue only happens if the user has one or more auto compaction 
configurations created in a version before the latest (0.22.0 or later) and then 
upgrades to the latest (0.22.0 or later). In that case, once the user has 
upgraded to 0.22.0 or later:
   - The user cannot add, edit, or delete any auto compaction configuration 
(for both old datasources created prior to the upgrade and new datasources 
created after it).
   - The user also cannot change the auto compaction task ratio and max task 
slots.
   - Existing auto compaction configs created prior to the upgrade will still 
run and create compaction tasks as expected.
   
   This happens because, for an auto compaction config created before 0.22.0, 
   the spec stored in the DB will be something like:
   
```json
{
  "compactionConfigs": [
    {
      "dataSource": "wikipedia",
      "taskPriority": 25,
      "inputSegmentSizeBytes": 419430400,
      "maxRowsPerSegment": null,
      "skipOffsetFromLatest": "P1D",
      "tuningConfig": {
        "maxRowsInMemory": null,
        "maxBytesInMemory": null,
        "maxTotalRows": null,
        "splitHintSpec": null,
        "partitionsSpec": {
          "type": "hashed",
          "numShards": null,
          "partitionDimensions": [],
          "partitionFunction": "murmur3_32_abs",
          "maxRowsPerSegment": 5000000
        },
        "indexSpec": null,
        "indexSpecForIntermediatePersists": null,
        "maxPendingPersists": null,
        "pushTimeout": null,
        "segmentWriteOutMediumFactory": null,
        "maxNumConcurrentSubTasks": null,
        "maxRetry": null,
        "taskStatusCheckPeriodMs": null,
        "chatHandlerTimeout": null,
        "chatHandlerNumRetries": null,
        "maxNumSegmentsToMerge": null,
        "totalNumMergeTasks": null,
        "forceGuaranteedRollup": true,
        "type": "index_parallel"
      },
      "taskContext": null
    }
  ],
  "compactionTaskSlotRatio": 0.1,
  "maxCompactionTaskSlots": 2147483647
}
```
   When we update the auto compaction config, we read the current config from 
the DB and deserialize it into a Java object. That Java object changed in the 
latest version (0.22.0) and now has additional fields such as granularitySpec, 
ioConfig, etc. When we then try to write back to the DB, we do a 
compare-and-swap (https://github.com/apache/druid/pull/11144), which requires 
that the `current` config we read (as a Java object), converted back to bytes, 
match what is stored in the DB. However, the deserialized config now carries 
the newly added fields, so its byte representation no longer matches the stored 
bytes and the update fails.
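
   The failure mode above can be sketched in a few lines. This is a minimal, 
language-neutral illustration (not Druid's actual Java code); the field names 
`granularitySpec` and `ioConfig` mirror the fields added in 0.22.0, and the 
rest of the config is a simplified stand-in for the stored spec:

```python
import json

# Bytes stored in the DB by a pre-0.22.0 version (no granularitySpec/ioConfig).
old_db_bytes = json.dumps(
    {"dataSource": "wikipedia", "skipOffsetFromLatest": "P1D"}
).encode("utf-8")

# 0.22.0+ deserializes into an object that now carries the new fields
# (defaulting to null), so re-serializing yields different bytes.
config = json.loads(old_db_bytes)
config.setdefault("granularitySpec", None)
config.setdefault("ioConfig", None)
new_bytes = json.dumps(config).encode("utf-8")

# The compare-and-swap precondition compares the re-serialized bytes against
# the DB bytes; it fails even though the user changed nothing, which surfaces
# to the user as a 502.
cas_would_succeed = new_bytes == old_db_bytes
print(cas_would_succeed)  # False
```

   In other words, any byte-for-byte CAS precondition breaks as soon as the 
serialized form of the object evolves, even when the logical content is 
unchanged.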
   
   This PR has:
   - [ ] been self-reviewed.
      - [ ] using the [concurrency 
checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md)
 (Remove this item if the PR doesn't have any relation to concurrency.)
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] added Javadocs for most classes and all non-trivial methods. Linked 
related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in 
[licenses.yaml](https://github.com/apache/druid/blob/master/dev/license.md)
   - [ ] added comments explaining the "why" and the intent of the code 
wherever would not be obvious for an unfamiliar reader.
   - [ ] added unit tests or modified existing tests to cover new code paths, 
ensuring the threshold for [code 
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
 is met.
   - [ ] added integration tests.
   - [ ] been tested in a test Druid cluster.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


