Github user zhonghaihua commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10794#discussion_r53579600
  
    --- Diff: 
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
    @@ -169,6 +172,24 @@ private[yarn] class YarnAllocator(
       }
     
       /**
    +   * Init `executorIdCounter`
    +   */
    +  def initExecutorIdCounter(): Unit = {
    +    val port = sparkConf.getInt("spark.yarn.am.port", 0)
    +    SparkHadoopUtil.get.runAsSparkUser { () =>
    +      val init = RpcEnv.create(
    +        "executorIdCounterInit",
    +        Utils.localHostName,
    +        port,
    +        sparkConf,
    +        new SecurityManager(sparkConf))
    +      val driver = init.setupEndpointRefByURI(driverUrl)
    --- End diff --
    
Hi @andrewor14 , `driverRef` doesn't work in this case. As I understand it, 
`driverRef` (whose endpoint name is `YarnScheduler`) exchanges messages with 
`YarnSchedulerEndpoint`, whereas we need to get the max executor id from 
`CoarseGrainedSchedulerBackend.DriverEndpoint`, whose endpoint name is 
`CoarseGrainedScheduler`.
    
    So, I think we need a method to initialize `executorIdCounter`. And as 
you said, we should add a detailed comment referencing SPARK-12864 in that 
method to explain why this is necessary. What's your opinion?

