jihoonson commented on issue #9132: how auto compaction work?
URL: https://github.com/apache/druid/issues/9132#issuecomment-576832985
 
 
   Hi @smildlzj, if no compaction task is being triggered, the coordinator probably decided to skip compacting segments in some intervals. Looking at your configuration, would you please double-check the following?
   
   - Is the total segment size in the same interval smaller than 
`inputSegmentSizeBytes`?
   - Is the total number of segments in the same interval smaller than 
`maxNumSegmentsToCompact`?
   - Does the interval of the segments you want to compact overlap with the interval `(endTimeOfTheMostRecentSegment - skipOffsetFromLatest, endTimeOfTheMostRecentSegment)`?
   
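   These checks correspond to fields in the datasource's auto-compaction config submitted to the coordinator. As a rough sketch (exact field names, availability, and defaults vary by Druid version, and `"my_datasource"` here is just a placeholder), a config setting all three knobs might look like:
   
   ```json
   {
     "dataSource": "my_datasource",
     "inputSegmentSizeBytes": 419430400,
     "maxNumSegmentsToCompact": 150,
     "skipOffsetFromLatest": "P1D"
   }
   ```
   
   For example, with `skipOffsetFromLatest` set to `P1D`, any segment whose interval overlaps the one-day window ending at the end time of the most recent segment is skipped by auto compaction.
   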
   Also, the reason should be logged in the coordinator log file as below:
   
   ```java
        if (!isCompactibleSize) {
          log.warn(
              "total segment size[%d] for datasource[%s] and interval[%s] is larger than inputSegmentSize[%d]."
              + " Continue to the next interval.",
              candidates.getTotalSize(),
              candidates.segments.get(0).getDataSource(),
              candidates.segments.get(0).getInterval(),
              inputSegmentSize
          );
        }
        if (!isCompactibleNum) {
          log.warn(
              "Number of segments[%d] for datasource[%s] and interval[%s] is larger than "
              + "maxNumSegmentsToCompact[%d]. If you see lots of shards are being skipped due to too many "
              + "segments, consider increasing 'numTargetCompactionSegments' and "
              + "'druid.indexer.runner.maxZnodeBytes'. Continue to the next interval.",
              candidates.getNumSegments(),
              candidates.segments.get(0).getDataSource(),
              candidates.segments.get(0).getInterval(),
              maxNumSegmentsToCompact
          );
        }
        if (!needsCompaction) {
          log.warn(
              "Size of most of segments[%s] is larger than targetCompactionSizeBytes[%s] "
              + "for datasource[%s] and interval[%s]. Skipping compaction for this interval.",
              candidates.segments.stream().map(DataSegment::getSize).collect(Collectors.toList()),
              targetCompactionSizeBytes,
              candidates.segments.get(0).getDataSource(),
              candidates.segments.get(0).getInterval()
          );
        }
   ```
