[
https://issues.apache.org/jira/browse/FLINK-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17323745#comment-17323745
]
Flink Jira Bot commented on FLINK-1101:
---------------------------------------
This issue is assigned but has not received an update in 7 days so it has been
labeled "stale-assigned". If you are still working on the issue, please give an
update and remove the label. If you are no longer working on the issue, please
unassign so someone else may work on it. In 7 days the issue will be
automatically unassigned.
> Make memory management adaptive
> -------------------------------
>
> Key: FLINK-1101
> URL: https://issues.apache.org/jira/browse/FLINK-1101
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / Task
> Affects Versions: 0.7.0-incubating
> Reporter: Stephan Ewen
> Assignee: Stephan Ewen
> Priority: Major
> Labels: stale-assigned
>
> I suggest reworking the memory management.
> Right now, it works the following way: when a program is submitted, we
> check how many memory-consuming operations it contains (sort, hash,
> explicit cache, ...). Each one is assigned a static relative fraction of
> the memory that the TaskManager provides.
> This planning is very conservative, mostly because with the streaming
> runtime all operations may run concurrently. In practice they mostly do
> not, so we waste memory by being too conservative.
> To make the most of the available memory, I suggest making the management
> adaptive:
> - Operators need to be able to request memory bit by bit
> - Operators need to be able to release memory on request. The sorter /
> hash table / cache do this naturally by spilling.
> - Memory has to be redistributed between operators when new requesters arrive.
> This also plays nicely with the idea of leaving all non-assigned memory to
> intermediate results, to allow for maximum caching of historic intermediate
> results.
> This would solve [FLINK-170] and [FLINK-84].
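The adaptive protocol proposed above could be sketched roughly as follows. This is a minimal illustration only: all class and method names (AdaptiveMemoryManager, requestPages, releasePages, ...) are hypothetical and do not correspond to Flink's actual MemoryManager API. Operators request pages bit by bit up to a fair share, and when a new requester registers, over-quota operators are asked to release pages (e.g. by spilling):

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of an adaptive memory manager; not Flink's real API. */
class AdaptiveMemoryManager {

    /** A memory consumer that can give pages back on request, e.g. by spilling. */
    interface Consumer {
        /** Asked to free up to {@code requested} pages; returns pages actually freed. */
        long releasePages(long requested);
    }

    /** Bookkeeping for one registered consumer. */
    static final class Registration {
        final Consumer consumer;
        long heldPages;
        Registration(Consumer c) { this.consumer = c; }
    }

    private final long totalPages;
    private long freePages;
    private final List<Registration> consumers = new ArrayList<>();

    AdaptiveMemoryManager(long totalPages) {
        this.totalPages = totalPages;
        this.freePages = totalPages;
    }

    /** A new operator registers; everyone's fair share shrinks, so rebalance. */
    synchronized Registration register(Consumer c) {
        Registration r = new Registration(c);
        consumers.add(r);
        rebalance();
        return r;
    }

    /** Operators request memory bit by bit instead of a static upfront fraction. */
    synchronized long requestPages(Registration r, long pages) {
        long fairShare = totalPages / consumers.size();
        long allowed = Math.min(pages, Math.max(0, fairShare - r.heldPages));
        long granted = Math.min(allowed, freePages);
        r.heldPages += granted;
        freePages -= granted;
        return granted;
    }

    /** Ask over-quota consumers to spill down to their current fair share. */
    private void rebalance() {
        long fairShare = totalPages / consumers.size();
        for (Registration r : consumers) {
            if (r.heldPages > fairShare) {
                long freed = r.consumer.releasePages(r.heldPages - fairShare);
                r.heldPages -= freed;
                freePages += freed;
            }
        }
    }
}
```

With a single registered operator the whole budget is available to it; once a second operator registers, the first is asked to spill down to half, so memory is only partitioned when operators actually run concurrently.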
--
This message was sent by Atlassian Jira
(v8.3.4#803005)