cecemei commented on code in PR #18968:
URL: https://github.com/apache/druid/pull/18968#discussion_r2851843520
##########
server/src/main/java/org/apache/druid/server/compaction/CompactionStatus.java:
##########
@@ -346,8 +361,10 @@ static DimensionRangePartitionsSpec getEffectiveRangePartitionsSpec(DimensionRan
*/
private static class Evaluator
Review Comment:
Added a new `CheckResult` class.
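For readers following the thread, a minimal sketch of what a `CheckResult`-style value class could look like. This is an assumption for illustration only; the actual class added to `CompactionStatus.java` may differ in naming, fields, and factory methods.

```java
// Hypothetical sketch of a CheckResult value class for compaction eligibility
// checks; not the actual Druid implementation.
public class CheckResultSketch
{
  static final class CheckResult
  {
    private final boolean eligible;
    private final String reason;

    private CheckResult(boolean eligible, String reason)
    {
      this.eligible = eligible;
      this.reason = reason;
    }

    // The candidate passes this check; evaluation can continue.
    static CheckResult pass()
    {
      return new CheckResult(true, null);
    }

    // The candidate fails this check; the reason explains the skip.
    static CheckResult fail(String reason)
    {
      return new CheckResult(false, reason);
    }

    boolean isEligible()
    {
      return eligible;
    }

    String getReason()
    {
      return reason;
    }
  }

  public static void main(String[] args)
  {
    CheckResult r = CheckResult.fail("segment granularity differs from target");
    System.out.println(r.isEligible() + ": " + r.getReason());
  }
}
```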
##########
server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java:
##########
@@ -371,8 +369,11 @@ public static ClientCompactionTaskQuery createCompactionTask(
}
final Map<String, Object> autoCompactionContext = newAutoCompactionContext(config.getTaskContext());
- if (candidate.getCurrentStatus() != null) {
- autoCompactionContext.put(COMPACTION_REASON_KEY, candidate.getCurrentStatus().getReason());
+ if (CompactionMode.NOT_APPLICABLE.equals(candidate.getMode())) {
Review Comment:
True, updated.
##########
server/src/main/java/org/apache/druid/server/compaction/CompactionStatusTracker.java:
##########
@@ -80,43 +80,48 @@ public Set<String> getSubmittedTaskIds()
* This method assumes that the given candidate is eligible for compaction
* based on the current compaction config/supervisor of the datasource.
*/
- public CompactionStatus computeCompactionStatus(
Review Comment:
Don't we still want to check `final DateTime snapshotTime = segmentSnapshotTime.get()` and trigger compaction after a new segment arrives?
##########
server/src/main/java/org/apache/druid/server/compaction/CompactionCandidate.java:
##########
@@ -37,76 +39,200 @@
*/
public class CompactionCandidate
{
- private final List<DataSegment> segments;
- private final Interval umbrellaInterval;
- private final Interval compactionInterval;
- private final String dataSource;
- private final long totalBytes;
- private final int numIntervals;
-
- private final CompactionStatus currentStatus;
-
- public static CompactionCandidate from(
- List<DataSegment> segments,
- @Nullable Granularity targetSegmentGranularity
- )
+ /**
+ * Non-empty list of segments of a datasource being proposed for compaction.
+ * A proposed compaction typically contains all the segments of a single time chunk.
+ */
+ public static class ProposedCompaction
Review Comment:
Moved the search policy check out of the datasource iterator and into the job queue.
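The design change described above can be sketched roughly as follows: candidates are enqueued without a per-iterator policy check, and the job queue applies the search policy once, when a job is about to start. All names here are illustrative assumptions, not the actual Druid classes.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Predicate;

// Sketch only: the policy check lives at the queue, not inside each
// datasource iterator. Real Druid types and signatures differ.
public class JobQueuePolicySketch
{
  static List<String> drainEligible(Queue<String> proposed, Predicate<String> searchPolicy)
  {
    final List<String> started = new ArrayList<>();
    while (!proposed.isEmpty()) {
      final String candidate = proposed.poll();
      // Single policy check point, applied as jobs leave the queue.
      if (searchPolicy.test(candidate)) {
        started.add(candidate);
      }
    }
    return started;
  }

  public static void main(String[] args)
  {
    final Queue<String> q =
        new ArrayDeque<>(List.of("ds1/2024-01", "ds1/2024-02", "ds2/2024-01"));
    // Example policy: only candidates from datasource ds1 are eligible.
    System.out.println(drainEligible(q, c -> c.startsWith("ds1/")));
  }
}
```

One upside of this placement is that the policy is evaluated exactly once per job, instead of being duplicated across every datasource iterator.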
##########
indexing-service/src/main/java/org/apache/druid/indexing/compact/CompactionJobQueue.java:
##########
@@ -282,18 +281,17 @@ private boolean startJobIfPendingAndReady(
}
// Check if the job is already running, completed or skipped
- final CompactionStatus compactionStatus = getCurrentStatusForJob(job, policy);
- switch (compactionStatus.getState()) {
- case RUNNING:
+ final CompactionCandidate.TaskState candidateState = getCurrentTaskStateForJob(job);
Review Comment:
Updated to use option 1, and added `queuedIntervals` to `CompactionSimulateResult`.
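A rough sketch of the two ideas visible in this hunk and reply: jobs carry a `TaskState`, and the simulate result can expose which intervals are still queued. Apart from `TaskState` and `queuedIntervals`, every name below is an assumption for illustration.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch only: derives a queuedIntervals view from per-job task states,
// loosely mirroring what CompactionSimulateResult might report.
public class CompactionSimulateSketch
{
  enum TaskState { QUEUED, RUNNING, COMPLETED, SKIPPED }

  static List<String> queuedIntervals(Map<String, TaskState> jobStates)
  {
    // Keep only jobs that are still waiting in the queue.
    return jobStates.entrySet().stream()
                    .filter(e -> e.getValue() == TaskState.QUEUED)
                    .map(Map.Entry::getKey)
                    .toList();
  }

  public static void main(String[] args)
  {
    final Map<String, TaskState> jobs = new LinkedHashMap<>();
    jobs.put("2024-01-01/2024-01-02", TaskState.QUEUED);
    jobs.put("2024-01-02/2024-01-03", TaskState.RUNNING);
    System.out.println(queuedIntervals(jobs));  // prints [2024-01-01/2024-01-02]
  }
}
```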
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]