[ https://issues.apache.org/jira/browse/FLINK-28205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17558027#comment-17558027 ]

Lijie Wang commented on FLINK-28205:
------------------------------------

I think this is the same issue as FLINK-24677.

> memory leak in the timing flush of the jdbc-connector
> -----------------------------------------------------
>
>                 Key: FLINK-28205
>                 URL: https://issues.apache.org/jira/browse/FLINK-28205
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / JDBC
>    Affects Versions: 1.15.0, 1.13.6, 1.14.5
>            Reporter: michaelxiang
>            Priority: Major
>
> Bug position: 
> org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.scheduler
> When writing with the jdbc-connector, any RuntimeException thrown while the 
> scheduled thread flushes records is caught and stored, so the Flink task 
> does not fail until new data arrives. During this time, each run of the 
> scheduled thread wraps the previously stored flushException in a new 
> RuntimeException and stores the wrapper again. None of these flushException 
> objects can have their references released, so they cannot be reclaimed by 
> the GC, resulting in a memory leak.
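
The wrapping loop described above can be sketched as follows. This is a minimal, self-contained reproduction of the pattern, not the actual Flink source: `checkFlushException` and `tick` imitate the roles of `JdbcBatchingOutputFormat.checkFlushException()` and the scheduled flush task, and `simulate`/`causeChainLength` are helpers added here for illustration.

```java
// Sketch of the leak: the scheduled flush first re-throws any previously
// stored flushException wrapped in a new RuntimeException, and the catch
// block then stores that wrapper back into flushException. Every tick
// therefore adds one link to the exception cause chain, and no wrapper in
// the chain can ever be garbage-collected.
public class FlushExceptionChain {

    private static Exception flushException;

    // Imitates checkFlushException(): wrap and re-throw the stored failure.
    private static void checkFlushException() {
        if (flushException != null) {
            throw new RuntimeException(
                    "Writing records to JDBC failed.", flushException);
        }
    }

    // One simulated run of the scheduled flush task.
    private static void tick() {
        try {
            checkFlushException(); // throws a new wrapper...
            // (real code would attempt the JDBC batch flush here)
        } catch (Exception e) {
            flushException = e;    // ...which is stored again
        }
    }

    // Helper: length of the getCause() chain rooted at t.
    public static int causeChainLength(Throwable t) {
        int n = 0;
        for (Throwable c = t; c != null; c = c.getCause()) {
            n++;
        }
        return n;
    }

    // Helper: seed one failure, run the flush task `ticks` times,
    // and report how long the cause chain has grown.
    public static int simulate(int ticks) {
        flushException = new RuntimeException("initial flush failure");
        for (int i = 0; i < ticks; i++) {
            tick();
        }
        return causeChainLength(flushException);
    }

    public static void main(String[] args) {
        // 1000 ticks -> 1000 wrappers on top of the initial failure.
        System.out.println(simulate(1000)); // prints 1001
    }
}
```

With a flush interval of a few seconds, a task stuck in this state for hours accumulates tens of thousands of chained exception objects (each with its own stack trace), which matches the unbounded growth reported here.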



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
