[
https://issues.apache.org/jira/browse/FLINK-7316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16118511#comment-16118511
]
ASF GitHub Bot commented on FLINK-7316:
---------------------------------------
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/4481
OK, one test is fixed; the other is not so simple, but maybe @tillrohrmann can
help with it:
Inside `ContaineredTaskManagerParameters#create()`, we calculate the amount
of off-heap space that we need, and for YARN we use exactly this amount to set
the `-XX:MaxDirectMemorySize` JVM property, without leaving room for other
components and libraries. This worked so far for the network buffers when the
memory as a whole was set to off-/on-heap and the Flink-reserved memory was not
completely used. Now, however, if it is set to on-heap, the
`-XX:MaxDirectMemorySize` limit is too tight. I'm unsure which solution is best:
1) remove the `-XX:MaxDirectMemorySize` setting and let the JVM adjust it
automatically, or
2) add some "sane" default headroom to our off-heap usage?
The same may apply to Mesos if `ResourceProfile(cpuCores, heapMemoryInMB,
directMemoryInMB, nativeMemoryInMB)` is used. At the moment, only the other
constructors are used, which effectively leads to solution 1.
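Solution 2 could look roughly like the following sketch: pad the computed
off-heap requirement with headroom before deriving `-XX:MaxDirectMemorySize`.
All names and the headroom constants below are illustrative assumptions, not
Flink's actual `ContaineredTaskManagerParameters` API.

```java
// Hypothetical sketch of solution 2 (not Flink's actual code): reserve
// extra direct-memory headroom for Netty arenas, JDK internals, and
// third-party libraries on top of Flink's own off-heap requirement.
public class DirectMemorySizing {
    // assumed values for illustration only
    static final long MIN_HEADROOM_MB = 64;
    static final double HEADROOM_FRACTION = 0.1;

    static long maxDirectMemoryMb(long offHeapMb) {
        // at least MIN_HEADROOM_MB, or 10% of the requirement if larger
        long headroom = Math.max(MIN_HEADROOM_MB,
                (long) (offHeapMb * HEADROOM_FRACTION));
        return offHeapMb + headroom;
    }

    public static void main(String[] args) {
        // e.g. 1024 MB of network buffers -> a padded JVM limit
        System.out.println("-XX:MaxDirectMemorySize="
                + maxDirectMemoryMb(1024) + "m");
    }
}
```

The trade-off versus solution 1 is that the limit still caps runaway direct
allocations, while no longer failing the very first allocation outside of
Flink's own buffers.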
> always use off-heap network buffers
> -----------------------------------
>
> Key: FLINK-7316
> URL: https://issues.apache.org/jira/browse/FLINK-7316
> Project: Flink
> Issue Type: Sub-task
> Components: Core, Network
> Affects Versions: 1.4.0
> Reporter: Nico Kruber
> Assignee: Nico Kruber
>
> In order to send Flink buffers through Netty into the network, we need to
> make the buffers use off-heap memory. Otherwise, there will be a hidden copy
> happening in the NIO stack.
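For illustration, the two buffer kinds in plain JDK NIO (not Flink's own
`Buffer` class): when a heap `ByteBuffer` is written to a channel, the JDK
first copies its contents into a temporary direct buffer, whereas a direct
buffer is handed to the OS without that extra copy.

```java
import java.nio.ByteBuffer;

// Heap vs. direct buffers in plain JDK NIO (illustration only).
// Direct buffers live in native memory and count against the
// -XX:MaxDirectMemorySize limit discussed above.
public class BufferKinds {
    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(1024);         // backed by byte[] on the Java heap
        ByteBuffer direct = ByteBuffer.allocateDirect(1024); // native memory, no hidden copy on I/O
        System.out.println("heap.isDirect()   = " + heap.isDirect());
        System.out.println("direct.isDirect() = " + direct.isDirect());
    }
}
```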
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)