[
https://issues.apache.org/jira/browse/FLINK-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129219#comment-17129219
]
Yu Li edited comment on FLINK-18203 at 6/9/20, 12:35 PM:
---------------------------------------------------------
Thanks for filing the JIRA [~wind_ljy]. This was actually also pointed out and
marked as a TODO item in our recent mailing-list discussion about increasing
`state.backend.fs.memory-threshold`, see [this
thread|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-increase-state-backend-fs-memory-threshold-from-1K-to-100K-tp41475p41491.html]
for more details.
Let's have a more focused discussion here.
cc [~sewen] [~yunta] [~klion26]
> Reduce objects usage in redistributing union states
> ---------------------------------------------------
>
> Key: FLINK-18203
> URL: https://issues.apache.org/jira/browse/FLINK-18203
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / Checkpointing
> Affects Versions: 1.10.1
> Reporter: Jiayi Liao
> Priority: Major
>
> {{RoundRobinOperatorStateRepartitioner#repartitionUnionState}} creates a new
> {{OperatorStreamStateHandle}} instance for every {{StreamStateHandle}} instance
> in every execution, which drives the number of new
> {{OperatorStreamStateHandle}} instances up to m * n (job vertex parallelism *
> total count of all executions' {{StreamStateHandle}} instances).
> In fact, all executions can share the same collection of
> {{StreamStateHandle}} instances, so the number of {{OperatorStreamStateHandle}}
> instances can be reduced to the count of all executions'
> {{StreamStateHandle}} instances.
> I hit this problem in production while testing a job with parallelism=10k; the
> memory pressure becomes more serious when YARN containers die and the job
> starts failing over.
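To illustrate the object-count difference described above, here is a minimal Java sketch. The class names mirror Flink's but the types and methods are simplified stand-ins, not Flink's actual implementation: wrapping each handle once per subtask allocates m * n wrappers, while sharing one wrapped list across all subtasks allocates only n.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnionStateRedistributionSketch {

    // Stand-ins for Flink's StreamStateHandle and its operator-state wrapper.
    static class StreamStateHandle { }
    static class OperatorStreamStateHandle {
        final StreamStateHandle delegate;
        OperatorStreamStateHandle(StreamStateHandle delegate) { this.delegate = delegate; }
    }

    // Current behaviour: every subtask gets its own freshly wrapped copies,
    // so a job with parallelism m and n handles allocates m * n wrappers.
    static List<List<OperatorStreamStateHandle>> perSubtaskCopies(
            List<StreamStateHandle> handles, int parallelism) {
        List<List<OperatorStreamStateHandle>> result = new ArrayList<>();
        for (int i = 0; i < parallelism; i++) {
            List<OperatorStreamStateHandle> copy = new ArrayList<>();
            for (StreamStateHandle h : handles) {
                copy.add(new OperatorStreamStateHandle(h)); // m * n allocations
            }
            result.add(copy);
        }
        return result;
    }

    // Proposed behaviour: wrap each handle exactly once and let every
    // subtask reference the same immutable list, so only n wrappers exist.
    static List<List<OperatorStreamStateHandle>> sharedCopies(
            List<StreamStateHandle> handles, int parallelism) {
        List<OperatorStreamStateHandle> shared = new ArrayList<>();
        for (StreamStateHandle h : handles) {
            shared.add(new OperatorStreamStateHandle(h));   // only n allocations
        }
        // nCopies returns an immutable view repeating the same list reference.
        return Collections.nCopies(parallelism, shared);
    }

    public static void main(String[] args) {
        List<StreamStateHandle> handles = new ArrayList<>();
        for (int i = 0; i < 3; i++) handles.add(new StreamStateHandle());

        // parallelism 4, 3 handles: 12 wrappers vs. 3 shared wrappers.
        List<List<OperatorStreamStateHandle>> copies = perSubtaskCopies(handles, 4);
        List<List<OperatorStreamStateHandle>> shared = sharedCopies(handles, 4);

        System.out.println(copies.get(0).get(0) == copies.get(1).get(0)); // false
        System.out.println(shared.get(0) == shared.get(1));               // true
    }
}
```

At parallelism 10k, as in the report above, the shared-collection variant avoids creating ten thousand redundant wrappers per state handle during failover.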
--
This message was sent by Atlassian Jira
(v8.3.4#803005)