gianm commented on code in PR #18802:
URL: https://github.com/apache/druid/pull/18802#discussion_r2583835135


##########
server/src/main/java/org/apache/druid/server/compaction/MostFragmentedIntervalFirstPolicy.java:
##########
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.server.compaction;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.druid.common.config.Configs;
+
+import javax.annotation.Nullable;
+
+/**
+ * {@link CompactionCandidateSearchPolicy} which prioritizes compaction of the
+ * intervals with the largest number of small uncompacted segments.
+ * <p>
+ * This policy favors cluster stability (by prioritizing reduction of segment
+ * count) over performance of queries on newer intervals. For the latter, use
+ * {@link NewestSegmentFirstPolicy}.
+ */
+public class MostFragmentedIntervalFirstPolicy implements CompactionCandidateSearchPolicy
+{
+  private static final long SIZE_2_GB = 2_000_000_000;
+  private static final long SIZE_10_MB = 10_000_000;
+
+  private final int minUncompactedCount;
+  private final long minUncompactedBytes;
+  private final long maxUncompactedSize;
+
+  @JsonCreator
+  public MostFragmentedIntervalFirstPolicy(
+      @JsonProperty("minUncompactedCount") @Nullable Integer minUncompactedCount,
+      @JsonProperty("minUncompactedBytes") @Nullable Long minUncompactedBytes,

Review Comment:
   Use `HumanReadableBytes`?
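   For illustration, a standalone sketch of the idea; the class and parsing rules below are hypothetical, not the real `HumanReadableBytes` implementation:

```java
// Hypothetical standalone sketch, NOT Druid's actual HumanReadableBytes:
// shows why a human-readable byte type is friendlier in JSON configs
// than raw longs such as 10_000_000.
public class HumanReadableBytesSketch
{
  // Parses "10MiB"-style strings, or a plain number of bytes.
  static long parse(String value)
  {
    final String v = value.trim();
    if (v.endsWith("GiB")) {
      return Long.parseLong(v.substring(0, v.length() - 3).trim()) << 30;
    } else if (v.endsWith("MiB")) {
      return Long.parseLong(v.substring(0, v.length() - 3).trim()) << 20;
    } else if (v.endsWith("KiB")) {
      return Long.parseLong(v.substring(0, v.length() - 3).trim()) << 10;
    } else {
      return Long.parseLong(v); // plain byte count
    }
  }

  public static void main(String[] args)
  {
    System.out.println(parse("10MiB")); // 10485760
    System.out.println(parse("2GiB"));  // 2147483648
    System.out.println(parse("500"));   // 500
  }
}
```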



##########
server/src/main/java/org/apache/druid/server/compaction/MostFragmentedIntervalFirstPolicy.java:
##########
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.server.compaction;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.druid.common.config.Configs;
+
+import javax.annotation.Nullable;
+
+/**
+ * {@link CompactionCandidateSearchPolicy} which prioritizes compaction of the
+ * intervals with the largest number of small uncompacted segments.
+ * <p>
+ * This policy favors cluster stability (by prioritizing reduction of segment
+ * count) over performance of queries on newer intervals. For the latter, use
+ * {@link NewestSegmentFirstPolicy}.
+ */
+public class MostFragmentedIntervalFirstPolicy implements CompactionCandidateSearchPolicy
+{
+  private static final long SIZE_2_GB = 2_000_000_000;
+  private static final long SIZE_10_MB = 10_000_000;
+
+  private final int minUncompactedCount;
+  private final long minUncompactedBytes;
+  private final long maxUncompactedSize;
+
+  @JsonCreator
+  public MostFragmentedIntervalFirstPolicy(
+      @JsonProperty("minUncompactedCount") @Nullable Integer minUncompactedCount,
+      @JsonProperty("minUncompactedBytes") @Nullable Long minUncompactedBytes,
+      @JsonProperty("maxUncompactedSize") @Nullable Long maxUncompactedSize
+  )
+  {
+    this.minUncompactedCount = Configs.valueOrDefault(minUncompactedCount, 100);
+    this.minUncompactedBytes = Configs.valueOrDefault(minUncompactedBytes, SIZE_10_MB);
+    this.maxUncompactedSize = Configs.valueOrDefault(maxUncompactedSize, SIZE_2_GB);
+  }
+
+  /**
+   * Minimum number of uncompacted segments that must be present in an interval
+   * to make it eligible for compaction.
+   */
+  @JsonProperty
+  public int getMinUncompactedCount()
+  {
+    return minUncompactedCount;
+  }
+
+  /**
+   * Minimum total bytes of uncompacted segments that must be present in an
+   * interval to make it eligible for compaction. Default value is {@link #SIZE_10_MB}.
+   */
+  @JsonProperty
+  public long getMinUncompactedBytes()
+  {
+    return minUncompactedBytes;
+  }
+
+  /**
+   * Maximum average size of uncompacted segments in an interval eligible for
+   * compaction. Default value is {@link #SIZE_2_GB}.
+   */
+  @JsonProperty
+  public long getMaxUncompactedSize()

Review Comment:
   The names of `minUncompactedBytes` and `maxUncompactedSize` are confusing, 
because they look very similar but one refers to total and one refers to 
average. How about `minUncompactedBytes` and 
`maxAverageUncompactedBytesPerSegment`?



##########
server/src/main/java/org/apache/druid/server/compaction/MostFragmentedIntervalFirstPolicy.java:
##########
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.server.compaction;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.druid.common.config.Configs;
+
+import javax.annotation.Nullable;
+
+/**
+ * {@link CompactionCandidateSearchPolicy} which prioritizes compaction of the
+ * intervals with the largest number of small uncompacted segments.
+ * <p>
+ * This policy favors cluster stability (by prioritizing reduction of segment
+ * count) over performance of queries on newer intervals. For the latter, use
+ * {@link NewestSegmentFirstPolicy}.
+ */
+public class MostFragmentedIntervalFirstPolicy implements CompactionCandidateSearchPolicy
+{
+  private static final long SIZE_2_GB = 2_000_000_000;
+  private static final long SIZE_10_MB = 10_000_000;
+
+  private final int minUncompactedCount;
+  private final long minUncompactedBytes;
+  private final long maxUncompactedSize;
+
+  @JsonCreator
+  public MostFragmentedIntervalFirstPolicy(
+      @JsonProperty("minUncompactedCount") @Nullable Integer minUncompactedCount,
+      @JsonProperty("minUncompactedBytes") @Nullable Long minUncompactedBytes,
+      @JsonProperty("maxUncompactedSize") @Nullable Long maxUncompactedSize
+  )
+  {
+    this.minUncompactedCount = Configs.valueOrDefault(minUncompactedCount, 100);
+    this.minUncompactedBytes = Configs.valueOrDefault(minUncompactedBytes, SIZE_10_MB);
+    this.maxUncompactedSize = Configs.valueOrDefault(maxUncompactedSize, SIZE_2_GB);
+  }
+
+  /**
+   * Minimum number of uncompacted segments that must be present in an interval
+   * to make it eligible for compaction.
+   */
+  @JsonProperty
+  public int getMinUncompactedCount()
+  {
+    return minUncompactedCount;
+  }
+
+  /**
+   * Minimum total bytes of uncompacted segments that must be present in an
+   * interval to make it eligible for compaction. Default value is {@link #SIZE_10_MB}.
+   */
+  @JsonProperty
+  public long getMinUncompactedBytes()
+  {
+    return minUncompactedBytes;
+  }
+
+  /**
+   * Maximum average size of uncompacted segments in an interval eligible for
+   * compaction. Default value is {@link #SIZE_2_GB}.
+   */
+  @JsonProperty
+  public long getMaxUncompactedSize()
+  {
+    return maxUncompactedSize;
+  }
+
+  @Override
+  public int compareCandidates(CompactionCandidate candidateA, CompactionCandidate candidateB)
+  {
+    return computePriority(candidateA) - computePriority(candidateB) > 0
+           ? 1 : -1;
+  }
+
+  @Override
+  public boolean isEligibleForCompaction(
+      CompactionCandidate candidate,
+      CompactionTaskStatus latestTaskStatus
+  )
+  {
+    final CompactionStatistics uncompacted = candidate.getUncompactedStats();
+    if (uncompacted == null) {
+      return true;
+    } else if (uncompacted.getNumSegments() < 1) {
+      return false;
+    } else {
+      return uncompacted.getNumSegments() >= minUncompactedCount
+          && uncompacted.getTotalBytes() >= minUncompactedBytes
+          && (uncompacted.getTotalBytes() / uncompacted.getNumSegments()) <= maxUncompactedSize;
+    }
+  }
+
+  /**
+   * Computes the priority of the given compaction candidate by checking the
+   * total number and average size of uncompacted segments.
+   */
+  private double computePriority(CompactionCandidate candidate)
+  {
+    final CompactionStatistics compacted = candidate.getCompactedStats();
+    final CompactionStatistics uncompacted = candidate.getUncompactedStats();
+    if (uncompacted == null || compacted == null) {
+      return 0;
+    }
+
+    final long avgUncompactedSize = Math.max(1, uncompacted.getTotalBytes() / uncompacted.getNumSegments());
+
+    // Priority increases as size decreases and number increases
+    final double normalizingFactor = 1000f;

Review Comment:
   What's the purpose of this factor? It seems like it wouldn't affect the results.
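   For what it's worth, a quick standalone check (made-up numbers, not the real classes) that a constant positive factor can't change the comparator's result:

```java
// Standalone sketch (hypothetical numbers, not Druid classes): a constant
// positive factor applied to every candidate's priority scales both sides
// of each comparison equally, so the resulting order is unchanged.
public class NormalizingFactorSketch
{
  static double priority(long numSegments, long totalBytes, double factor)
  {
    final long avgSize = Math.max(1, totalBytes / numSegments);
    return factor * numSegments / avgSize;
  }

  public static void main(String[] args)
  {
    // Same pair of candidates, compared with and without the factor of 1000
    final int withFactor =
        Double.compare(priority(100, 1_000_000, 1000), priority(50, 1_000_000, 1000));
    final int withoutFactor =
        Double.compare(priority(100, 1_000_000, 1), priority(50, 1_000_000, 1));
    System.out.println(withFactor == withoutFactor); // true
  }
}
```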



##########
server/src/main/java/org/apache/druid/server/compaction/MostFragmentedIntervalFirstPolicy.java:
##########
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.server.compaction;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.druid.common.config.Configs;
+
+import javax.annotation.Nullable;
+
+/**
+ * {@link CompactionCandidateSearchPolicy} which prioritizes compaction of the
+ * intervals with the largest number of small uncompacted segments.
+ * <p>
+ * This policy favors cluster stability (by prioritizing reduction of segment
+ * count) over performance of queries on newer intervals. For the latter, use
+ * {@link NewestSegmentFirstPolicy}.
+ */
+public class MostFragmentedIntervalFirstPolicy implements CompactionCandidateSearchPolicy
+{
+  private static final long SIZE_2_GB = 2_000_000_000;
+  private static final long SIZE_10_MB = 10_000_000;
+
+  private final int minUncompactedCount;
+  private final long minUncompactedBytes;
+  private final long maxUncompactedSize;
+
+  @JsonCreator
+  public MostFragmentedIntervalFirstPolicy(
+      @JsonProperty("minUncompactedCount") @Nullable Integer minUncompactedCount,
+      @JsonProperty("minUncompactedBytes") @Nullable Long minUncompactedBytes,
+      @JsonProperty("maxUncompactedSize") @Nullable Long maxUncompactedSize
+  )
+  {
+    this.minUncompactedCount = Configs.valueOrDefault(minUncompactedCount, 100);
+    this.minUncompactedBytes = Configs.valueOrDefault(minUncompactedBytes, SIZE_10_MB);
+    this.maxUncompactedSize = Configs.valueOrDefault(maxUncompactedSize, SIZE_2_GB);
+  }
+
+  /**
+   * Minimum number of uncompacted segments that must be present in an interval
+   * to make it eligible for compaction.
+   */
+  @JsonProperty
+  public int getMinUncompactedCount()
+  {
+    return minUncompactedCount;
+  }
+
+  /**
+   * Minimum total bytes of uncompacted segments that must be present in an
+   * interval to make it eligible for compaction. Default value is {@link #SIZE_10_MB}.
+   */
+  @JsonProperty
+  public long getMinUncompactedBytes()
+  {
+    return minUncompactedBytes;
+  }
+
+  /**
+   * Maximum average size of uncompacted segments in an interval eligible for
+   * compaction. Default value is {@link #SIZE_2_GB}.
+   */
+  @JsonProperty
+  public long getMaxUncompactedSize()
+  {
+    return maxUncompactedSize;
+  }
+
+  @Override
+  public int compareCandidates(CompactionCandidate candidateA, CompactionCandidate candidateB)
+  {
+    return computePriority(candidateA) - computePriority(candidateB) > 0
+           ? 1 : -1;
+  }
+
+  @Override
+  public boolean isEligibleForCompaction(
+      CompactionCandidate candidate,
+      CompactionTaskStatus latestTaskStatus
+  )
+  {
+    final CompactionStatistics uncompacted = candidate.getUncompactedStats();
+    if (uncompacted == null) {
+      return true;
+    } else if (uncompacted.getNumSegments() < 1) {
+      return false;
+    } else {
+      return uncompacted.getNumSegments() >= minUncompactedCount
+          && uncompacted.getTotalBytes() >= minUncompactedBytes
+          && (uncompacted.getTotalBytes() / uncompacted.getNumSegments()) <= maxUncompactedSize;
+    }
+  }
+
+  /**
+   * Computes the priority of the given compaction candidate by checking the
+   * total number and average size of uncompacted segments.

Review Comment:
   Intuitively, what does the priority "mean"? It's helpful to add that to the javadocs.
   
   Reading through, it seems like the priority boils down to 
`pow(uncompacted.getNumSegments, 2) / uncompacted.getTotalBytes`. Why that 
particular formula?
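   In case it helps, a throwaway check of that reduction with made-up numbers, ignoring the `Math.max(1, ...)` floor and long truncation in the actual code:

```java
// Throwaway check of the algebra with made-up numbers:
//   numSegments / avgSize = numSegments / (totalBytes / numSegments)
//                         = numSegments^2 / totalBytes
public class PriorityAlgebraSketch
{
  public static void main(String[] args)
  {
    final double numSegments = 500;
    final double totalBytes = 50_000_000;
    final double avgSize = totalBytes / numSegments;                // 100000.0
    final double viaAverage = numSegments / avgSize;                // priority as written
    final double viaSquare = numSegments * numSegments / totalBytes; // reduced form
    System.out.println(viaAverage == viaSquare); // true
  }
}
```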



##########
server/src/main/java/org/apache/druid/server/compaction/MostFragmentedIntervalFirstPolicy.java:
##########
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.server.compaction;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.druid.common.config.Configs;
+
+import javax.annotation.Nullable;
+
+/**
+ * {@link CompactionCandidateSearchPolicy} which prioritizes compaction of the
+ * intervals with the largest number of small uncompacted segments.
+ * <p>
+ * This policy favors cluster stability (by prioritizing reduction of segment
+ * count) over performance of queries on newer intervals. For the latter, use
+ * {@link NewestSegmentFirstPolicy}.
+ */
+public class MostFragmentedIntervalFirstPolicy implements CompactionCandidateSearchPolicy
+{
+  private static final long SIZE_2_GB = 2_000_000_000;
+  private static final long SIZE_10_MB = 10_000_000;
+
+  private final int minUncompactedCount;
+  private final long minUncompactedBytes;
+  private final long maxUncompactedSize;
+
+  @JsonCreator
+  public MostFragmentedIntervalFirstPolicy(
+      @JsonProperty("minUncompactedCount") @Nullable Integer minUncompactedCount,
+      @JsonProperty("minUncompactedBytes") @Nullable Long minUncompactedBytes,
+      @JsonProperty("maxUncompactedSize") @Nullable Long maxUncompactedSize
+  )
+  {
+    this.minUncompactedCount = Configs.valueOrDefault(minUncompactedCount, 100);
+    this.minUncompactedBytes = Configs.valueOrDefault(minUncompactedBytes, SIZE_10_MB);
+    this.maxUncompactedSize = Configs.valueOrDefault(maxUncompactedSize, SIZE_2_GB);
+  }
+
+  /**
+   * Minimum number of uncompacted segments that must be present in an interval
+   * to make it eligible for compaction.
+   */
+  @JsonProperty
+  public int getMinUncompactedCount()
+  {
+    return minUncompactedCount;
+  }
+
+  /**
+   * Minimum total bytes of uncompacted segments that must be present in an
+   * interval to make it eligible for compaction. Default value is {@link #SIZE_10_MB}.
+   */
+  @JsonProperty
+  public long getMinUncompactedBytes()
+  {
+    return minUncompactedBytes;
+  }
+
+  /**
+   * Maximum average size of uncompacted segments in an interval eligible for
+   * compaction. Default value is {@link #SIZE_2_GB}.
+   */
+  @JsonProperty
+  public long getMaxUncompactedSize()

Review Comment:
   Also, I wish this could be specified in terms of row counts rather than byte 
counts. The target segment size `rowsPerSegment` is specified as a number of 
rows, so it's most natural to specify the max average uncompacted size in terms 
of rows as well. Like, I might say that the target is 3M rows per segment but 
don't bother compacting if the segments average 2.5M already.
   
   I recognize this requires additional metadata that may not currently be available, so it doesn't have to be done in this PR. I'm just having a thought.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

