KarmaGYZ commented on PR #17873:
URL: https://github.com/apache/flink/pull/17873#issuecomment-1139227543

   > > > > https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/finegrained_resource/#notice
   > > > 
   > > > 
   > > > ah, I hadn't realized this! Thanks for the reference! I'll give it a thorough read over the weekend.
   > > > The doc section you linked says that slot sharing groups are not enforced to be in the same slot. However, our use case is slightly different: we want to calculate the max slots required, so that the cluster has enough resources before deploying. IIUC, different slot sharing groups can't be scheduled onto the same slot, so adding slot sharing groups increases the number of slots required, i.e. `max slots required = sum(max parallelism of each slot sharing group)`. We aren't concerned about whether operators in the same sharing group are deployed together or not.
   > > > Is my understanding correct?
   > > 
   > > 
   > > I'm afraid not. For example, if you put two operators `A` and `B` (each with parallelism 1) into a slot sharing group, we can still deploy them into two physical slots.
   > 
   > Ahh, good to know! One follow-up question: does this scenario happen often, or does it only happen if **fine-grained resource management** is configured? In our experience the calculation formula has been correct so far, but we only use the `.slotSharingGroup` API, not the fine-grained resource management feature.
   > 
   > > For example, if you put two operators A and B (each with parallelism 1) into a slot sharing group, we can still deploy them into two physical slots.
   > 
   > Just for my knowledge, in what situation can this happen?
   
   This scenario will not occur in current Flink, but it can occur in the future.
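   
   To make the arithmetic concrete, here is a minimal sketch of how the quoted formula plays out, assuming a hypothetical two-group job (the class name `SlotSharingGroupSketch`, the group names `group-a`/`group-b`, the operators, and the parallelisms are illustrative assumptions, not taken from this PR):
   
   ```java
   // Hypothetical job with two slot sharing groups; group names and parallelisms are made up.
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
   
   public class SlotSharingGroupSketch {
       public static void main(String[] args) throws Exception {
           StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
   
           env.fromElements(1, 2, 3)
                   .slotSharingGroup("group-a")   // non-parallel source, assigned to group-a
                   .map(x -> x * 2)
                   .setParallelism(3)
                   .slotSharingGroup("group-a")   // max parallelism in group-a is 3
                   .map(x -> x + 1)
                   .setParallelism(2)
                   .slotSharingGroup("group-b")   // max parallelism in group-b is 2
                   .print()
                   .setParallelism(1)
                   .slotSharingGroup("group-b");
   
           env.execute("slot-sharing-group-sketch");
       }
   }
   ```
   
   Applying the quoted formula to this sketch gives `max slots required = 3 (group-a) + 2 (group-b) = 5`, since operators in different slot sharing groups cannot share a slot.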

