[ https://issues.apache.org/jira/browse/BEAM-5724?focusedWorklogId=160154&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-160154 ]

ASF GitHub Bot logged work on BEAM-5724:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Oct/18 19:55
            Start Date: 29/Oct/18 19:55
    Worklog Time Spent: 10m 
      Work Description: mwylde commented on a change in pull request #6835: 
[BEAM-5724] Generalize flink executable context to allow more than 1 worker 
process per task manager 
URL: https://github.com/apache/beam/pull/6835#discussion_r229075009
 
 

 ##########
 File path: 
runners/flink/src/main/java/org/apache/beam/runners/flink/FlinkJobServerDriver.java
 ##########
 @@ -90,9 +89,9 @@ String getFlinkMasterUrl() {
       name = "--sdk-worker-parallelism",
       usage = "Default parallelism for SDK worker processes (see portable 
pipeline options)"
     )
-    String sdkWorkerParallelism = 
PortablePipelineOptions.SDK_WORKER_PARALLELISM_PIPELINE;
+    Long sdkWorkerParallelism = 1L;
 
-    String getSdkWorkerParallelism() {
+    Long getSdkWorkerParallelism() {
 
 Review comment:
   Actually, I think I misunderstood your comment about -1. Currently we use 
null to indicate that this config was not set in a particular place. There are 
two places this config can be set (as a job server option or as a pipeline 
option in Python), and we need to be able to merge them and set a default (1) 
if neither sets it. The existing code uses boxed primitives and null for 
that, and I've followed that precedent. I think it'd be worth re-examining 
this configuration code, but that's probably out of scope for this PR.
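
   The merge-with-null-as-unset behavior described above could be sketched 
roughly as follows. This is a hypothetical helper for illustration, not the 
actual Beam code; the precedence order (job server option over pipeline 
option) is an assumption:

```java
// Hypothetical sketch: merge a boxed config value from two sources,
// where null means "not set in that place", falling back to a default.
public class WorkerParallelismConfig {
  private static final long DEFAULT_SDK_WORKER_PARALLELISM = 1L;

  // Assumed precedence: job server option, then pipeline option, then default.
  static long resolveSdkWorkerParallelism(Long jobServerOption, Long pipelineOption) {
    if (jobServerOption != null) {
      return jobServerOption;
    }
    if (pipelineOption != null) {
      return pipelineOption;
    }
    return DEFAULT_SDK_WORKER_PARALLELISM;
  }

  public static void main(String[] args) {
    // Neither source set: fall back to the default of 1.
    System.out.println(resolveSdkWorkerParallelism(null, null)); // 1
    // Both set: the job server option wins under the assumed precedence.
    System.out.println(resolveSdkWorkerParallelism(4L, 2L));     // 4
  }
}
```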
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 160154)
    Time Spent: 3h 20m  (was: 3h 10m)

> Beam creates too many sdk_worker processes with --sdk-worker-parallelism=stage
> ------------------------------------------------------------------------------
>
>                 Key: BEAM-5724
>                 URL: https://issues.apache.org/jira/browse/BEAM-5724
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-flink
>            Reporter: Micah Wylde
>            Assignee: Micah Wylde
>            Priority: Major
>              Labels: portability-flink
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> In the Flink portable runner, we currently support two options for SDK worker 
> parallelism (how many Python worker processes we run). The default is one per 
> taskmanager, and with --sdk-worker-parallelism=stage you get one per stage. 
> However, for complex pipelines with many Beam operators that get fused into a 
> single Flink task, this can produce hundreds of worker processes per taskmanager.
> Flink uses the notion of task slots to limit resource utilization on a machine; I 
> think that Beam should try to respect those limits as well. Ideally we'd 
> produce a single Python worker per task slot/Flink operator chain.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
