JRobTS opened a new issue, #15912:
URL: https://github.com/apache/druid/issues/15912

   When a datasource's retention policy reaches further back in time than the
last coordinator-issued kill for that datasource, the coordinator will not
issue kill commands for the older segments. This is a regression from previous
versions of Druid.
   
   ### Affected Version
   
   28.0.1
   
   ### Description
   
   Suppose the Druid kill configuration is:
   ```
   druid.coordinator.kill.on=true
   druid.coordinator.kill.period=PT1H
   druid.coordinator.kill.durationToRetain=P1D
   druid.coordinator.kill.bufferPeriod=PT6H
   druid.coordinator.kill.maxSegments=10000
   ```
   
   And we have one or more datasources with a retention policy like:
   ```
   loadByPeriod P7D
   dropForever
   ```
   
   The Druid Coordinator will issue kill tasks, but only for segments in
intervals newer than the last run of KillUnusedSegments. It never issues kill
tasks for unused segments in intervals older than that last run, which means
Deep Storage will grow endlessly instead of Druid cleaning up those older
segments.
   
   The kill task should instead start with the oldest segments per datasource
and work its way forward. Any segment marked as unused (used=0) in the
druid_segments metadata table should be fair game.
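   To make the expected selection concrete, here is a minimal sketch of the
"oldest first" behavior described above, using an in-memory SQLite stand-in
for the metadata store. The table name and `used` column come from the issue
text; the simplified schema, column names, and sample rows are hypothetical,
not Druid's actual metadata layout.
   ```python
   import sqlite3

   # Hypothetical, simplified stand-in for the druid_segments metadata table.
   conn = sqlite3.connect(":memory:")
   conn.execute(
       "CREATE TABLE druid_segments "
       "(datasource TEXT, start TEXT, end TEXT, used INTEGER)"
   )
   conn.executemany(
       "INSERT INTO druid_segments VALUES (?, ?, ?, ?)",
       [
           ("wiki", "2023-01-01", "2023-01-02", 0),  # old unused: must remain killable
           ("wiki", "2024-02-01", "2024-02-02", 0),  # recently unused
           ("wiki", "2024-02-10", "2024-02-11", 1),  # still used: never killed
       ],
   )

   def unused_segments_oldest_first(conn, datasource):
       """Every used=0 segment for a datasource, ordered oldest to newest,
       with no lower bound tied to a previous kill run."""
       return conn.execute(
           "SELECT start, end FROM druid_segments "
           "WHERE datasource = ? AND used = 0 ORDER BY start",
           (datasource,),
       ).fetchall()

   print(unused_segments_oldest_first(conn, "wiki"))
   ```
   The point is that the candidate set is bounded only by used=0, so the
2023 segment is picked up even though it predates any earlier kill run.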
   
   For further details including debugging, see Slack discussion here:
   https://apachedruidworkspace.slack.com/archives/C0309C9L90D/p1707931446776669
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.