[
https://issues.apache.org/jira/browse/FLINK-17493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17108200#comment-17108200
]
Oisín Mac Fhearaí commented on FLINK-17493:
-------------------------------------------
[~xintongsong] you might be right that it's an unrelated problem. I thought it
might be a memory allocation problem because:
# The memory usage grows over time as the job runs (especially off-heap).
# There is some stateful behaviour: the Cassandra sink works exactly once, but
never again after restarting the job, until the task manager is restarted.
I'll open a separate ticket to avoid adding confusion to this one.
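The off-heap growth described above can be observed from inside the JVM rather than only via the web UI. A minimal sketch, using the standard {{java.lang.management}} API (class name {{DirectMemoryProbe}} is hypothetical; this is a generic diagnostic, not part of Flink or the Cassandra connector):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryProbe {
    public static void main(String[] args) {
        // The "direct" pool tracks ByteBuffer.allocateDirect allocations,
        // which is what the Cassandra driver's Netty transport uses.
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}
```

Running this (e.g. via a JMX connection or a periodic log line in the task manager) before and after each job restart would show whether the "direct" pool's count and capacity keep ratcheting up, which would support the leak hypothesis.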
> Possible direct memory leak in cassandra sink
> ---------------------------------------------
>
> Key: FLINK-17493
> URL: https://issues.apache.org/jira/browse/FLINK-17493
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Cassandra
> Affects Versions: 1.9.0, 1.10.0
> Reporter: nobleyd
> Priority: Major
> Attachments: image-2020-05-14-21-58-59-152.png
>
>
> # The Cassandra sink uses direct memory.
> # Start a standalone cluster (1 machine) for testing.
> # After the cluster has started, check the Flink web UI and record the task
> manager's memory info, specifically the direct memory part.
> # Start a job that reads from Kafka and writes to Cassandra using the
> Cassandra sink; you can see the direct memory count in the 'Outside JVM'
> section go up.
> # Stop the job; the direct memory count does not decrease (even after using
> 'jmap -histo:live pid' to force a GC in the task manager).
> # Repeat several times; the direct memory count keeps growing.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)