[ https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17033758#comment-17033758 ]

Stephan Ewen commented on FLINK-15959:
--------------------------------------

I think Xintong's suggestion is a good direction. There is a related ticket 
about a maximum value for the number of TaskManagers.
I can see that adding a {{min}} (default 0) and a {{max}} (default MAX_INT) to 
the configuration makes sense, with the ResourceManager allocating within 
those boundaries.
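
As a rough illustration, such bounds could be declared with Flink's typed 
ConfigOptions builder; the option keys below are hypothetical, not actual 
Flink configuration keys:

{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

/** Sketch only: hypothetical worker-count bounds, not actual Flink options. */
public class WorkerCountOptions {

    /** Lower bound: the ResourceManager keeps at least this many TaskExecutors alive. */
    public static final ConfigOption<Integer> MIN_WORKER_NUM =
            ConfigOptions.key("resourcemanager.min-workers")
                    .intType()
                    .defaultValue(0);

    /** Upper bound: the ResourceManager never allocates more than this many TaskExecutors. */
    public static final ConfigOption<Integer> MAX_WORKER_NUM =
            ConfigOptions.key("resourcemanager.max-workers")
                    .intType()
                    .defaultValue(Integer.MAX_VALUE);
}
{code}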

Concerning spread-out scheduling: this option is mainly intended to support 
static standalone sessions, where jobs share the session and are balanced 
across the cluster. I am hesitant to adjust many parts of the system back 
from the dynamic approach toward the static approach.


> Add TaskExecutor number option in FlinkYarnSessionCli
> -----------------------------------------------------
>
>                 Key: FLINK-15959
>                 URL: https://issues.apache.org/jira/browse/FLINK-15959
>             Project: Flink
>          Issue Type: New Feature
>          Components: Runtime / Coordination
>    Affects Versions: 1.11.0
>            Reporter: YufeiLiu
>            Priority: Major
>
> Flink removed the `-n` option after FLIP-6, changing to a model where the 
> ResourceManager starts a new worker when required. But I think maintaining a 
> TaskExecutor count option is still necessary. These workers would start 
> immediately when the ResourceManager starts and would not be released even 
> if all their slots are free.
> Here are some reasons:
> # Users usually know how many resources a single job needs; initializing all 
> workers when the cluster starts can speed up the startup process.
> # Jobs are scheduled in topological order; the next operator is not scheduled 
> until the prior execution's slot is allocated. In some cases the 
> TaskExecutors therefore start in several batches, which can slow down startup.
> # Flink supports 
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] (spread out 
> tasks evenly across all available registered TaskManagers), but it only takes 
> effect once all TMs are registered. Starting all TMs at the beginning would 
> solve this problem.
> *Suggestion:*
> I only changed YarnResourceManager: start all containers in the `initialize` 
> stage, and don't complete slot requests until the minimum number of slots 
> are registered (see the sketch below).
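> A minimal sketch of that change, assuming the `MIN_WORKER_NUM` option above; 
> method and field names are illustrative, not the actual YarnResourceManager 
> API:
> {code:java}
> // Sketch only: eagerly request the minimum number of YARN containers at
> // startup instead of waiting for slot requests to trigger allocation.
> @Override
> protected void initialize() throws ResourceManagerException {
>     int minWorkers = flinkConfig.getInteger(WorkerCountOptions.MIN_WORKER_NUM);
>     for (int i = 0; i < minWorkers; i++) {
>         requestYarnContainer(); // assumed helper that asks YARN for one container
>     }
> }
> {code}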



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
