maytasm commented on a change in pull request #12334:
URL: https://github.com/apache/druid/pull/12334#discussion_r827641884



##########
File path: 
server/src/main/java/org/apache/druid/client/indexing/ClientCompactionIntervalSpec.java
##########
@@ -44,12 +45,30 @@
   @Nullable
   private final String sha256OfSortedSegmentIds;
 
-  public static ClientCompactionIntervalSpec fromSegments(List<DataSegment> segments)
+  public static ClientCompactionIntervalSpec fromSegments(List<DataSegment> segments, @Nullable Granularity segmentGranularity)
   {
+    Interval interval = JodaUtils.umbrellaInterval(segments.stream().map(DataSegment::getInterval).collect(Collectors.toList()));
+    if (segmentGranularity != null) {
+      // If segmentGranularity is set, the segmentGranularity of the existing segments may not align with the configured segmentGranularity.
+      // We must adjust the interval of the compaction task so that it fully covers and aligns with the configured segmentGranularity.
+      // For example,
+      // - The umbrella interval of the segments is 2015-04-11/2015-04-12 but the configured segmentGranularity is YEAR.
+      // If the compaction task's interval is 2015-04-11/2015-04-12, we can run into a race condition: after the compaction
+      // task is submitted, a new segment created outside that interval (e.g. 2015-02-11/2015-02-12) would be lost, as it is
+      // overshadowed by the compacted segment (the compacted segment has interval 2015-01-01/2016-01-01).
+      // Hence, in this case, we must adjust the compaction task interval to 2015-01-01/2016-01-01.
+      // - The segments to be compacted have MONTH segmentGranularity with the interval 2015-02-01/2015-03-01 but the configured
+      // segmentGranularity is WEEK. If the compaction task's interval is 2015-02-01/2015-03-01, the compacted segments created will be
+      // 2015-01-26/2015-02-02, 2015-02-02/2015-02-09, 2015-02-09/2015-02-16, 2015-02-16/2015-02-23, 2015-02-23/2015-03-02.
+      // This is because Druid's WEEK segments always start and end on a Monday. In the above example, 2015-01-26 and 2015-03-02
+      // are Mondays but 2015-02-01 and 2015-03-01 are not. Hence, the WEEK segments have to start on 2015-01-26 and end on 2015-03-02.
+      // If the compaction task's interval is 2015-02-01/2015-03-01, the compacted segments would cause existing data
+      // from 2015-01-26 to 2015-02-01 and from 2015-03-01 to 2015-03-02 to be lost. Hence, in this case,
+      // we must adjust the compaction task interval to 2015-01-26/2015-03-02.
+      interval = JodaUtils.umbrellaInterval(segmentGranularity.getIterable(interval));
+    }
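
The boundary-widening described in the comments above can be sketched with plain `java.time`. This is an illustrative sketch only: the class and method names (`IntervalAlignmentSketch`, `alignToWeeks`, `alignToYears`) are hypothetical and not Druid APIs, and Druid's actual implementation goes through `Granularity.getIterable` and `JodaUtils.umbrellaInterval` over Joda-Time intervals.

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

public class IntervalAlignmentSketch {
  // Widen an end-exclusive [start, end) date range to whole ISO weeks.
  // Druid's WEEK buckets start on Monday, so the start is rounded down
  // to a Monday and the exclusive end is rounded up to one.
  static LocalDate[] alignToWeeks(LocalDate start, LocalDate end) {
    LocalDate alignedStart = start.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
    LocalDate alignedEnd = end.with(TemporalAdjusters.nextOrSame(DayOfWeek.MONDAY));
    return new LocalDate[]{alignedStart, alignedEnd};
  }

  // Widen an end-exclusive [start, end) date range to whole calendar years.
  static LocalDate[] alignToYears(LocalDate start, LocalDate end) {
    LocalDate alignedStart = start.withDayOfYear(1);
    // end is exclusive, so step back one day before finding the covering year.
    LocalDate alignedEnd = end.minusDays(1).withDayOfYear(1).plusYears(1);
    return new LocalDate[]{alignedStart, alignedEnd};
  }

  public static void main(String[] args) {
    // YEAR example from the comment: 2015-04-11/2015-04-12 widens to 2015-01-01/2016-01-01.
    LocalDate[] year = alignToYears(LocalDate.of(2015, 4, 11), LocalDate.of(2015, 4, 12));
    System.out.println(year[0] + "/" + year[1]);

    // WEEK example from the comment: 2015-02-01/2015-03-01 widens to 2015-01-26/2015-03-02
    // because 2015-02-01 and 2015-03-01 are both Sundays, not Mondays.
    LocalDate[] week = alignToWeeks(LocalDate.of(2015, 2, 1), LocalDate.of(2015, 3, 1));
    System.out.println(week[0] + "/" + week[1]);
  }
}
```

Running this prints `2015-01-01/2016-01-01` and `2015-01-26/2015-03-02`, matching the two examples in the review comment.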

Review comment:
       Added some extra logs.
   Regarding `inputSegmentSizeBytes`, I think `inputSegmentSizeBytes` should actually be deprecated now that the issued compaction task can run in parallel.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
