[ https://issues.apache.org/jira/browse/FLINK-17493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17106814#comment-17106814 ]

Xintong Song commented on FLINK-17493:
--------------------------------------

FYI, there's an improvement in the upcoming 1.11 release that might solve your 
problem:
https://issues.apache.org/jira/browse/FLINK-16408

With this feature, a separate class loader is used for executing user code 
(i.e., the code of one specific job), and that class loader's life cycle ends 
when the job is terminated.
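
To illustrate the idea (this is only a minimal sketch, not Flink's actual implementation; the class and method names below are hypothetical), the point is that each job's user code is loaded through its own class loader, and that loader is closed and dereferenced when the job terminates, so its classes and whatever resources they pin can be garbage collected:

{code:java}
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: one URLClassLoader per job, closed and dropped when the
// job reaches a terminal state, so the user classes (and anything they reference,
// e.g. direct buffers held by static fields) become eligible for GC.
public class PerJobClassLoaders {

    private final Map<String, URLClassLoader> loadersByJobId = new ConcurrentHashMap<>();

    // Create (or reuse) the class loader that runs this job's user code.
    public ClassLoader loaderForJob(String jobId, URL[] userJarUrls) {
        return loadersByJobId.computeIfAbsent(
                jobId, id -> new URLClassLoader(userJarUrls, getClass().getClassLoader()));
    }

    // Called when the job terminates: close the loader and release the reference.
    public void releaseJob(String jobId) throws Exception {
        URLClassLoader loader = loadersByJobId.remove(jobId);
        if (loader != null) {
            loader.close();
        }
    }
}
{code}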

> Possible direct memory leak in cassandra sink
> ---------------------------------------------
>
>                 Key: FLINK-17493
>                 URL: https://issues.apache.org/jira/browse/FLINK-17493
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Cassandra
>    Affects Versions: 1.9.3, 1.10.0
>            Reporter: nobleyd
>            Priority: Major
>
> # The Cassandra sink uses direct memory.
>  # Start a standalone cluster (1 machine) for testing.
>  # After the cluster has started, open the Flink web UI and record the task 
> manager's memory info, specifically the direct memory part.
>  # Start a job that reads from Kafka and writes to Cassandra using the 
> Cassandra sink; you can see the direct memory count in the 'Outside JVM' 
> section go up.
>  # Stop the job; the direct memory count does not decrease (even after using 
> 'jmap -histo:live pid' to force a GC in the task manager). A programmatic way 
> to inspect the direct buffer pools is sketched after this list.
>  # Repeat several times, and the direct memory count keeps growing.
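
As a side note, direct buffer usage can also be inspected from inside the task manager JVM (independent of the web UI) via the standard BufferPoolMXBean; the snippet below only prints the JVM's buffer pool statistics and is offered as an illustrative sketch, not as part of Flink's tooling:

{code:java}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

// Prints the JVM's direct and mapped buffer pool statistics. Running this logic
// inside the task manager JVM (e.g. from a custom metrics reporter) makes it easy
// to see whether the direct buffer count/capacity keeps growing across job runs.
public class DirectBufferReport {
    public static void main(String[] args) {
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            System.out.printf("%s: count=%d used=%d bytes capacity=%d bytes%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}
{code}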



--
This message was sent by Atlassian Jira
(v8.3.4#803005)