[
https://issues.apache.org/jira/browse/FLINK-36039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rui Fan resolved FLINK-36039.
-----------------------------
Fix Version/s: kubernetes-operator-1.10.0
Resolution: Fixed
Merged to main (1.10.0) via: 6a426b2ff60331b89d67371279f400f8761bf1f3
> Support clean historical event handler records in JDBC event handler
> --------------------------------------------------------------------
>
> Key: FLINK-36039
> URL: https://issues.apache.org/jira/browse/FLINK-36039
> Project: Flink
> Issue Type: Improvement
> Components: Autoscaler
> Reporter: RocMarshal
> Assignee: RocMarshal
> Priority: Minor
> Labels: pull-request-available
> Fix For: kubernetes-operator-1.10.0
>
>
> Currently, the autoscaler generates a large amount of historical data through
> its event handlers. As the system runs over a long period, the volume of
> historical data keeps growing, so it is necessary to support automatic
> cleanup of data older than a fixed period.
> Based on the creation-time timestamp, historical data could be cleaned up as
> follows:
> * Introduce the parameter {{autoscaler.standalone.jdbc-event-handler.ttl}}:
> ** Type: Duration
> ** Default value: 90 days
> ** Setting it to 0 disables the cleanup functionality.
> * In the {{JdbcAutoScalerEventHandler}} constructor, introduce a scheduled
> cleanup job. Also, add an internal interface method {{close}} to
> {{AutoScalerEventHandler}} and {{JobAutoScaler}} to stop the job and clean
> up related resources.
> * Cleanup logic:
> # Query the messages with {{create_time}} less than {{(currentTime - ttl)}}
> and find the maximum ID, {{maxId}}, in this collection.
> # Delete 4096 messages at a time from the collection with IDs less than
> {{maxId}}.
> # Wait 10 ms between each deletion batch until the cleanup is complete.
> # Scan for and delete expired data daily.
>
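The cleanup steps above can be sketched in Java. This is a minimal in-memory sketch, not the actual {{JdbcAutoScalerEventHandler}} implementation: the class and method names are hypothetical, a {{TreeMap}} stands in for the event table, IDs are assumed to grow monotonically with {{create_time}} (as with an auto-increment primary key), and the delete targets IDs up to and including {{maxId}} so the expired boundary row is removed as well.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of the proposed batched TTL cleanup; illustrative only.
class EventTtlCleanupSketch {

    static final int BATCH_SIZE = 4096; // rows deleted per batch
    static final long PAUSE_MS = 10;    // pause between batches

    /**
     * Removes events older than (now - ttl) and returns how many were deleted.
     * A ttl of 0 disables cleanup, matching the proposed config semantics.
     */
    static int cleanup(TreeMap<Long, Instant> eventsById, Instant now, Duration ttl) {
        if (ttl.isZero()) {
            return 0; // cleanup disabled
        }
        Instant cutoff = now.minus(ttl);

        // Step 1: find the maximum id among records with create_time < cutoff.
        long maxId = Long.MIN_VALUE;
        for (Map.Entry<Long, Instant> e : eventsById.entrySet()) {
            if (e.getValue().isBefore(cutoff)) {
                maxId = Math.max(maxId, e.getKey());
            }
        }
        if (maxId == Long.MIN_VALUE) {
            return 0; // nothing expired
        }

        // Step 2: delete ids up to and including maxId, BATCH_SIZE at a time.
        int deleted = 0;
        while (true) {
            Map<Long, Instant> batch = eventsById.headMap(maxId, true);
            if (batch.isEmpty()) {
                break;
            }
            Iterator<Long> it = batch.keySet().iterator();
            int n = 0;
            while (it.hasNext() && n < BATCH_SIZE) {
                it.next();
                it.remove(); // removes from the backing map via the view
                n++;
            }
            deleted += n;
            // Step 3: brief pause between batches to limit database pressure.
            try {
                Thread.sleep(PAUSE_MS);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return deleted;
    }
}
```

In the real JDBC version each batch would be a single {{DELETE ... WHERE id <= ? LIMIT ?}} statement, but the control flow (find {{maxId}}, loop in fixed-size batches, pause, stop when empty) is the same.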
--
This message was sent by Atlassian Jira
(v8.20.10#820010)