Maplejw opened a new issue #9016: druid dynamic configuration
URL: https://github.com/apache/incubator-druid/issues/9016
 
 
   The 'sw_template_behavior' datasource generates 24 segments each day. I want those 24 segments compacted into 1 segment every day, automatically.
   This is my config and datasource:
   
![avatar](http://storage.ikeeplock.com/common/1576120164353-2198b422575f4f3ca8624ab733a16fa5.png)
   Coordinator runtime property:
   ```properties
   druid.coordinator.period.indexingPeriod=PT1800S
   ```
   The compaction config, as returned by the coordinator:
   ```sh
   curl http://172.26.51.19:8081/druid/coordinator/v1/config/compaction
   ```
   ```json
   {
     "compactionConfigs": [
       {
         "dataSource": "sw_template_behavior",
         "taskPriority": 25,
         "inputSegmentSizeBytes": 419430400,
         "targetCompactionSizeBytes": 419430400,
         "maxRowsPerSegment": null,
         "maxNumSegmentsToCompact": 150,
         "skipOffsetFromLatest": "P1D",
         "tuningConfig": null,
         "taskContext": null
       }
     ],
     "compactionTaskSlotRatio": 0.2,
     "maxCompactionTaskSlots": 5
   }
   ```
   When the 1800-second indexing period elapses, the log only shows:
   ```log
   org.apache.druid.server.coordinator.helper.DruidCoordinatorSegmentCompactor 
- Found [1] available task slots for compaction out of [1] max compaction task 
capacity
   ```
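
   If I read DruidCoordinatorSegmentCompactor correctly, the "[1] max compaction task capacity" in that log line comes from `compactionTaskSlotRatio`, not `maxCompactionTaskSlots`. A minimal sketch of that arithmetic, where the total worker capacity of 5 is a hypothetical value for my cluster and the formula is my reading of the code, not taken verbatim from the source:
   ```java
   public class TaskSlotSketch {
       public static void main(String[] args) {
           int totalWorkerCapacity = 5;          // hypothetical total indexing slots in the cluster
           double compactionTaskSlotRatio = 0.2; // from my dynamic config
           int maxCompactionTaskSlots = 5;       // from my dynamic config

           // Capacity is the ratio applied to total capacity, capped by the max,
           // and clamped to at least one slot.
           int capacity = Math.min(
               (int) (totalWorkerCapacity * compactionTaskSlotRatio),
               maxCompactionTaskSlots);
           capacity = Math.max(1, capacity);

           System.out.println(capacity); // 1, matching the "[1] max" in my log
       }
   }
   ```
   So the single slot itself looks expected for a small cluster; the confusing part is that no task is submitted into it.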
   So I checked the source code of DruidCoordinatorSegmentCompactor.java. Is iterator.hasNext() returning false? I see the iterator excludes segments that fall within the skip interval. Did I set skipOffsetFromLatest wrong?
   ```java
   while (iterator.hasNext() && numSubmittedTasks < numAvailableCompactionTaskSlots) {
       // ...
   }
   ```
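
   For what it's worth, here is how I understand skipOffsetFromLatest="P1D": segments whose interval reaches into the most recent day, measured back from the latest segment, are excluded from compaction. The timestamps below are made up for illustration, and the cutoff rule is my reading of the docs, not the Druid source:
   ```java
   import java.time.Duration;
   import java.time.Instant;

   public class SkipOffsetSketch {
       public static void main(String[] args) {
           Instant latest = Instant.parse("2019-12-12T00:00:00Z"); // end of newest segment (hypothetical)
           Instant cutoff = latest.minus(Duration.ofDays(1));      // skipOffsetFromLatest = "P1D"

           // A segment from the last day ends after the cutoff -> skipped for now
           Instant recentSegmentEnd = Instant.parse("2019-12-11T13:00:00Z");
           System.out.println(recentSegmentEnd.isAfter(cutoff));   // true -> skipped

           // A segment ending two days ago is before the cutoff -> eligible
           Instant olderSegmentEnd = Instant.parse("2019-12-10T13:00:00Z");
           System.out.println(olderSegmentEnd.isAfter(cutoff));    // false -> eligible
       }
   }
   ```
   If that is right, only the newest day's segments should be skipped, so older days' segments ought to remain eligible, and I don't see why hasNext() would be false.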
   
   Please help me. Thanks.
   
   
