Github user jerryshao commented on the issue:

    https://github.com/apache/spark/pull/21663
  
    Can you please describe the usage scenario for this under standalone mode? 
I understand why it is needed in YARN mode: Hadoop and Spark are two separate 
distributions, so they may be built and run with different JVMs. But the 
standalone cluster manager is shipped as part of the Spark package, so I'm not 
sure what the real use case is here.
    
    Also, I'm not sure whether there is an issue with RPC communication. For 
example, if the standalone cluster manager is running on JDK 7 and the Spark 
application on JDK 8, can a JDK 8-serialized message from an executor be read 
by a JDK 7 worker? Have you tried this?
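    
    One way to check this outside of a full cluster would be a minimal 
cross-JVM round-trip of a Java-serialized message: write the bytes with one 
JDK and read them back with another. This is only a sketch; the message class 
below is hypothetical (real control messages live in 
`org.apache.spark.deploy.DeployMessages`), and it assumes plain 
`java.io.Serializable` semantics rather than Spark's actual RPC layer.
    
    ```scala
    import java.io.{FileInputStream, FileOutputStream, ObjectInputStream, ObjectOutputStream}
    
    // Hypothetical stand-in for an RPC control message.
    case class RegisterExecutorMsg(executorId: String, cores: Int) extends Serializable
    
    object SerdeCheck {
      def main(args: Array[String]): Unit = args(0) match {
        case "write" =>
          // Run this step on the newer JVM (e.g. JDK 8).
          val out = new ObjectOutputStream(new FileOutputStream("msg.bin"))
          try out.writeObject(RegisterExecutorMsg("exec-1", 4)) finally out.close()
        case "read" =>
          // Run this step on the older JVM (e.g. JDK 7), using the same compiled classes.
          val in = new ObjectInputStream(new FileInputStream("msg.bin"))
          try println(in.readObject()) finally in.close()
      }
    }
    ```
    
    Running the "write" step under one JVM and the "read" step under the 
other would at least show whether the serialized form itself survives the 
version gap, independent of the cluster manager.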


