HeartSaVioR edited a comment on pull request #29630:
URL: https://github.com/apache/spark/pull/29630#issuecomment-686870943


   I don't think Spark has the concept of a "cluster", unless you use standalone mode. More specifically, there's no strong relation between applications, and there's no control plane on the Spark side to manage all applications in the cluster. The cluster is actually the resource scheduler's cluster.
   
   If the rationale behind SPARK-32097 and SPARK-32135 is to make SHS cluster-wise, then the concept of a "cluster" probably needs to be defined and introduced first, instead of adding workaround fixes.
   
   Everyone has a different view of what "cluster-wise" means. For example, if SHS is cluster-wise and supports multiple clusters, I would prefer an "isolated view" per cluster: select a cluster first, then see a view filtered to that cluster. I wouldn't prefer listing all applications from all clusters in the same list, which is the opposite of the view you're proposing in SPARK-32135. So this is not a trivial thing; it warrants discussion, including whether we really want to make SHS "cluster-wise" at all.

