[ https://issues.apache.org/jira/browse/FLINK-17493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17393437#comment-17393437 ]

Chris Tennant commented on FLINK-17493:
---------------------------------------

Just an FYI that I'm still running into what appears to be the same issue.

I'm using apache/flink:1.13-java11 on Kubernetes.

I set MALLOC_ARENA_MAX=2

The behavior is that off-heap memory increases every time the job is restarted 
and is never reclaimed. After some number of restarts I get the direct memory 
OOM and have to kill and relaunch the container. Increasing 
taskmanager.memory.framework.off-heap.size (I've increased it to 512M) defers 
the problem (I can survive more restarts) but doesn't fix it. 
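
For what it's worth, here is a minimal sketch (my own illustration, not taken 
from this report) of how the framework off-heap option can be set 
programmatically when trying to reproduce this in a local environment; in the 
Kubernetes setup above it would normally go into flink-conf.yaml instead, and 
MALLOC_ARENA_MAX has to be exported as an environment variable on the task 
manager container (e.g. via the pod spec), since it is a glibc setting that 
cannot be changed from inside a running JVM:

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.MemorySize;
import org.apache.flink.configuration.TaskManagerOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class OffHeapConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // taskmanager.memory.framework.off-heap.size=512m: enlarges the direct
        // memory budget, which defers the OOM but does not stop the growth.
        conf.set(TaskManagerOptions.FRAMEWORK_OFF_HEAP_MEMORY, MemorySize.parse("512m"));

        // MALLOC_ARENA_MAX=2 is a glibc environment variable; it must be set on
        // the container/process itself and cannot be applied from inside the JVM.

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf);
        // ... job definition and env.execute(...) would go here ...
    }
}
{code}

Framework off-heap memory is one of the components Flink counts towards the JVM 
direct memory limit, so raising it only gives the leaked buffers more headroom, 
which matches the "defers but doesn't fix" behavior described above.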

> Possible direct memory leak in cassandra sink
> ---------------------------------------------
>
>                 Key: FLINK-17493
>                 URL: https://issues.apache.org/jira/browse/FLINK-17493
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Cassandra
>    Affects Versions: 1.9.0, 1.10.0
>            Reporter: nobleyd
>            Priority: Minor
>              Labels: auto-deprioritized-major
>         Attachments: image-2020-05-14-21-58-59-152.png
>
>
> # The Cassandra sink uses direct memory.
>  # Start a standalone cluster (1 machine) for the test.
>  # After the cluster has started, check the Flink web UI and record the task 
> manager's memory info, specifically the direct memory figures.
>  # Start a job that reads from Kafka and writes to Cassandra using the 
> Cassandra sink; you can see the direct memory count in the 'Outside JVM' 
> part go up (a minimal sketch of such a job follows these steps).
>  # Stop the job; the direct memory count does not decrease (using 'jmap 
> -histo:live pid' to force a GC in the task manager).
>  # Repeat this several times and the direct memory count keeps growing.
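
For context, a minimal sketch of the kind of Kafka-to-Cassandra job the steps 
above describe, written against the Flink 1.13-era connector APIs; the broker, 
topic, host, keyspace and table names are placeholders of mine, not taken from 
the report:

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple1;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaToCassandraRepro {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka source (broker address, group id and topic are placeholders).
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");
        props.setProperty("group.id", "flink-17493-repro");
        DataStream<String> input = env.addSource(
                new FlinkKafkaConsumer<>("input-topic", new SimpleStringSchema(), props));

        // Wrap each record in a Tuple1 so the Cassandra sink can bind it to the query.
        DataStream<Tuple1<String>> records = input
                .map(Tuple1::of)
                .returns(Types.TUPLE(Types.STRING));

        // Cassandra sink -- the connector whose direct memory usage is reported
        // to keep growing across restarts (keyspace/table/host are placeholders).
        CassandraSink.addSink(records)
                .setHost("cassandra")
                .setQuery("INSERT INTO demo.messages (payload) VALUES (?);")
                .build();

        env.execute("FLINK-17493 repro: Kafka -> Cassandra");
    }
}
{code}

Submitting, stopping and resubmitting a job like this is what steps 4-6 above 
repeat while watching the 'Outside JVM' direct memory count in the web UI.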



