[ https://issues.apache.org/jira/browse/FLINK-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597339#comment-14597339 ]
ASF GitHub Bot commented on FLINK-2235:
---------------------------------------
Github user mxm commented on a diff in the pull request:
https://github.com/apache/flink/pull/859#discussion_r33019549
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/util/EnvironmentInformation.java ---
@@ -137,7 +137,13 @@ public static long getSizeOfFreeHeapMemoryWithDefrag() {
     */
    public static long getSizeOfFreeHeapMemory() {
        Runtime r = Runtime.getRuntime();
-       return r.maxMemory() - r.totalMemory() + r.freeMemory();
+       long maxMemory = r.maxMemory();
+       if (maxMemory == Long.MAX_VALUE) {
+           // workaround for some JVM versions
+           return r.freeMemory();
--- End diff ---
You're right. For a simple WordCount it works, but for anything more advanced
this will fail with "Too few memory segments provided".
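
For context, here is a minimal sketch of the guard under discussion (not the actual PR code; the class name and comments are invented for illustration). It shows why falling back to {{r.freeMemory()}} underestimates: that value only counts free space inside the already committed heap, not memory the JVM could still claim, so jobs that need many memory segments come up short.

{code:java}
// Sketch of the free-heap estimate, assuming the JDK 6 quirk where
// Runtime.maxMemory() can report Long.MAX_VALUE when no -Xmx is set.
public class FreeHeapEstimate {

    public static long getSizeOfFreeHeapMemory() {
        Runtime r = Runtime.getRuntime();
        long maxMemory = r.maxMemory();
        if (maxMemory == Long.MAX_VALUE) {
            // No usable upper bound: fall back to the free space in the
            // currently committed heap. As noted above, this underestimates
            // badly once a job needs more memory segments.
            return r.freeMemory();
        }
        // Normal case: headroom up to the max heap size, plus the free
        // space inside the already committed heap.
        return maxMemory - r.totalMemory() + r.freeMemory();
    }
}
{code}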
> Local Flink cluster allocates too much memory
> ---------------------------------------------
>
> Key: FLINK-2235
> URL: https://issues.apache.org/jira/browse/FLINK-2235
> Project: Flink
> Issue Type: Bug
> Components: Local Runtime, TaskManager
> Affects Versions: 0.9
> Environment: Oracle JDK: 1.6.0_65-b14-462
> Eclipse
> Reporter: Maximilian Michels
> Priority: Minor
>
> When executing a Flink job locally, the task manager gets initialized with an
> insane amount of memory. After a quick look in the code it seems that the
> call to {{EnvironmentInformation.getSizeOfFreeHeapMemoryWithDefrag()}}
> returns an incorrect estimate of the heap memory size.
> Moreover, the same user switched to Oracle JDK 1.8 and that made the error
> disappear. So I'm guessing this is some Java 1.6 quirk.
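
To make the quirk concrete, here is a hypothetical demo (all values invented for illustration) of what the old formula computes when {{Runtime.maxMemory()}} misreports the heap bound:

{code:java}
// Demo: on affected JVMs, Runtime.maxMemory() returns Long.MAX_VALUE,
// so maxMemory - totalMemory + freeMemory yields a near-2^63-byte
// "free heap", and the local TaskManager sizes its pool accordingly.
public class HeapEstimateDemo {

    public static void main(String[] args) {
        long maxMemory = Long.MAX_VALUE; // misreported upper bound
        long totalMemory = 64L << 20;    // e.g. 64 MiB currently committed
        long freeMemory = 32L << 20;     // e.g. 32 MiB free within it
        long estimate = maxMemory - totalMemory + freeMemory;
        System.out.println(estimate);    // ~9.2e18 bytes, clearly bogus
    }
}
{code}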
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)