[ https://issues.apache.org/jira/browse/FLINK-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17129256#comment-17129256 ]

Jiayi Liao commented on FLINK-18203:
------------------------------------

[~liyu]
 Mmm... my problem here is not about the {{ByteStreamStateHandle}} from union 
state. Put differently, my problem still occurs even when the task's state is 
bigger than {{state.backend.fs.memory-threshold}}.

Specifically, I think we can reduce the overhead of the new objects created in 
{{RoundRobinOperatorStateRepartitioner#repartitionUnionState}}. I've made a simple 
but not-so-elegant change to avoid this: 
[https://github.com/Jiayi-Liao/flink/blob/b71f011a050a9fa0442d9daec3f1f04bbcd17875/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/RoundRobinOperatorStateRepartitioner.java#L327].

Assuming we have a Kafka source job with parallelism=10k, Flink will create 10k 
* 10k {{OperatorStreamStateHandle}} instances in {{#repartitionUnionState}} for 
the source executions. But this can be reduced to 10k after my change.
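To make the shape of the saving concrete, here is a hedged, self-contained Java sketch. The class names mirror Flink's {{StreamStateHandle}} and {{OperatorStreamStateHandle}} but are simplified stand-ins for illustration, not the real flink-runtime types; parallelism is scaled down from 10k to 100:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Simplified stand-ins (hypothetical) for Flink's state-handle types.
class StreamStateHandle {}

class OperatorStreamStateHandle {
    static int instancesCreated = 0; // count allocations for the demo
    final List<StreamStateHandle> delegates;
    OperatorStreamStateHandle(List<StreamStateHandle> delegates) {
        this.delegates = delegates;
        instancesCreated++;
    }
}

public class UnionRepartitionSketch {
    public static void main(String[] args) {
        int parallelism = 100; // stands in for the 10k in the report
        List<StreamStateHandle> unionHandles = new ArrayList<>();
        for (int i = 0; i < parallelism; i++) {
            unionHandles.add(new StreamStateHandle());
        }

        // Naive scheme: every subtask gets its own freshly wrapped copy
        // of every handle -> parallelism * parallelism wrapper objects.
        OperatorStreamStateHandle.instancesCreated = 0;
        for (int subtask = 0; subtask < parallelism; subtask++) {
            for (StreamStateHandle h : unionHandles) {
                new OperatorStreamStateHandle(Collections.singletonList(h));
            }
        }
        System.out.println("naive wrappers:  "
                + OperatorStreamStateHandle.instancesCreated);

        // Shared scheme: wrap each handle once and let every subtask
        // reference the same immutable collection -> parallelism wrappers.
        OperatorStreamStateHandle.instancesCreated = 0;
        List<OperatorStreamStateHandle> shared = new ArrayList<>();
        for (StreamStateHandle h : unionHandles) {
            shared.add(new OperatorStreamStateHandle(
                    Collections.singletonList(h)));
        }
        List<List<OperatorStreamStateHandle>> perSubtask = new ArrayList<>();
        for (int subtask = 0; subtask < parallelism; subtask++) {
            perSubtask.add(shared); // same reference, no new wrappers
        }
        System.out.println("shared wrappers: "
                + OperatorStreamStateHandle.instancesCreated);
    }
}
```

With parallelism=100 the naive loop allocates 10,000 wrappers while the shared scheme allocates 100; at parallelism=10k the gap is 100M versus 10k, which matches the failover memory pressure described in the issue.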

> Reduce objects usage in redistributing union states
> ---------------------------------------------------
>
>                 Key: FLINK-18203
>                 URL: https://issues.apache.org/jira/browse/FLINK-18203
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Checkpointing
>    Affects Versions: 1.10.1
>            Reporter: Jiayi Liao
>            Priority: Major
>
> {{RoundRobinOperatorStateRepartitioner#repartitionUnionState}} creates a 
> new {{OperatorStreamStateHandle}} instance for every {{StreamStateHandle}} 
> instance used in every execution, which drives the number of new 
> {{OperatorStreamStateHandle}} instances up to m * n (jobvertex parallelism * 
> count of all executions' {{StreamStateHandle}}). 
> But in fact, all executions can share the same collection of 
> {{StreamStateHandle}}, and the number of {{OperatorStreamStateHandle}} 
> instances can be reduced to the count of all executions' {{StreamStateHandle}}. 
> I met this problem in production while testing a job with 
> parallelism=10k; the memory problem gets more serious when YARN 
> containers die and the job starts failing over.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
