[
https://issues.apache.org/jira/browse/BEAM-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Micah Wylde reassigned BEAM-5724:
---------------------------------
Assignee: Micah Wylde
> Beam creates too many sdk_worker processes with --sdk-worker-parallelism=stage
> ------------------------------------------------------------------------------
>
> Key: BEAM-5724
> URL: https://issues.apache.org/jira/browse/BEAM-5724
> Project: Beam
> Issue Type: Improvement
> Components: runner-flink
> Reporter: Micah Wylde
> Assignee: Micah Wylde
> Priority: Major
> Labels: portability-flink
>
> In the Flink portable runner, we currently support two options for SDK worker
> parallelism (i.e., how many Python worker processes we run). The default is one
> worker per TaskManager; with --sdk-worker-parallelism=stage you get one per stage.
> However, for complex pipelines with many Beam operators that get fused into a
> single Flink task, this can produce hundreds of worker processes per TaskManager.
> Flink uses the notion of task slots to limit resource utilization on a machine; I
> think Beam should respect those limits as well. Ideally we'd produce a single
> Python worker per task slot/Flink operator chain.
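To make the scaling concrete, here is a small sketch (not Beam code) of the worker counts per TaskManager under the two existing modes and the proposed per-task-slot mode. The function name and the example numbers (200 fused stages, 8 task slots) are illustrative assumptions, not measurements from a real pipeline.

```python
def workers_per_taskmanager(mode, fused_stages, task_slots):
    """Sketch of SDK worker process count per TaskManager for each mode."""
    if mode == "default":
        # current default: one SDK worker per TaskManager
        return 1
    if mode == "stage":
        # current --sdk-worker-parallelism=stage: one worker per fused stage
        return fused_stages
    if mode == "slot":
        # proposed: one worker per task slot / operator chain
        return task_slots
    raise ValueError("unknown mode: %s" % mode)

# A complex pipeline with 200 fused stages on a TM with 8 task slots:
print(workers_per_taskmanager("stage", 200, 8))  # -> 200 processes
print(workers_per_taskmanager("slot", 200, 8))   # -> 8 processes
```

Under "stage" mode the process count grows with pipeline complexity, while the proposed "slot" mode caps it at the resource limit Flink already enforces.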
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)