[ 
https://issues.apache.org/jira/browse/FLINK-17493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17106809#comment-17106809
 ] 

Xintong Song commented on FLINK-17493:
--------------------------------------

Hi [~destynova],

I'm not entirely sure whether your problem is the same as the others' or not.

The {{NoHostAvailableException}} you encountered seems to be unrelated. The 
error stack does not suggest anything memory related, and the exception has 
not been reported by [~nobleyd] or [~monika.h].

The observation that off-heap / direct memory grows every time the job is 
restarted might be caused by the same problem the others are seeing.

From what you described, it does not feel like a memory leak to me. If there 
were indeed a memory leak, the memory footprint should keep growing as the job 
is restarted, and eventually you should run into a metaspace / direct memory 
OOM. According to your description, the memory footprint increases initially 
but eventually becomes stable. This might be because the JVM does not release 
the memory until the limits are reached and a full GC is triggered. To verify 
this, you can try to manually trigger a full GC and see whether the off-heap / 
direct memory footprint drops.
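For example, something like the following could work (just a sketch; the 
process id is a placeholder for your TM process, and note that 
{{jmap -histo:live}} forces a full GC as a side effect):

{noformat}
# find the TaskManager JVM process id on the TM host
jps -l | grep -i taskmanager

# trigger a full GC on that process (either command works)
jcmd <tm-pid> GC.run
jmap -histo:live <tm-pid>
{noformat}

If the direct memory footprint drops after the full GC, it is most likely the 
lazy-release behavior described above rather than a real leak.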

To create a heap dump, you have to log in to the TM host machine / container / 
pod and execute jmap / jcmd directly against the JVM process.
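For example (the pid and the output path are placeholders, adjust to your 
environment):

{noformat}
# on the TM host / container / pod
jmap -dump:live,format=b,file=/tmp/taskmanager.hprof <tm-pid>

# or equivalently with jcmd
jcmd <tm-pid> GC.heap_dump /tmp/taskmanager.hprof
{noformat}

Keep in mind that direct memory itself lives outside the heap, so the dump 
mainly helps to count the {{DirectByteBuffer}} instances that reference it.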

> Possible direct memory leak in cassandra sink
> ---------------------------------------------
>
>                 Key: FLINK-17493
>                 URL: https://issues.apache.org/jira/browse/FLINK-17493
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Cassandra
>    Affects Versions: 1.9.3, 1.10.0
>            Reporter: nobleyd
>            Priority: Major
>
> # The Cassandra sink uses direct memory.
>  # Start a standalone cluster (1 machine) for testing.
>  # After the cluster has started, check the Flink web UI and record the task 
> manager's memory info, specifically the direct memory part.
>  # Start a job that reads from Kafka and writes to Cassandra using the 
> Cassandra sink; you can see the direct memory count in the 'Outside JVM' 
> part go up.
>  # Stop the job; the direct memory count does not decrease (even after using 
> 'jmap -histo:live pid' to make the task manager GC).
>  # Repeat this several times, and the direct memory count grows more and more.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
