Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/21663
I see, thanks for verifying. I'm neutral on this fix, because I don't see a strong requirement for the feature compared to running on YARN; usually in standalone mode we deploy a standalone cluster.
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/21663
@jerryshao My Spark application is built on top of JDK 10, but the standalone cluster manager is running with JDK 8, which cannot run classes compiled for JDK 10. Java 7 support has been removed since Spark 2.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/21663
Can you please describe the usage scenario for this under standalone mode? I understand why it is needed in YARN mode: Hadoop and Spark are two separate distributions, so they may be built and run with different JVMs.
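For context, the YARN case referenced above is already covered by existing configuration: the application master and executors can be pointed at a JVM other than the one the cluster daemons run on via `spark.yarn.appMasterEnv.*` and `spark.executorEnv.*`. A hedged sketch (the JDK install path is illustrative, not from this thread):

```shell
# Sketch only: run a Spark app on YARN under a different JDK than the
# cluster daemons, by overriding JAVA_HOME for the AM and the executors.
# /opt/jdk-10 is a hypothetical install path on the cluster nodes.
spark-submit \
  --master yarn \
  --conf spark.yarn.appMasterEnv.JAVA_HOME=/opt/jdk-10 \
  --conf spark.executorEnv.JAVA_HOME=/opt/jdk-10 \
  --class com.example.Main \
  app.jar
```

The PR under discussion asks for an analogous per-application JVM override in standalone mode, where no such knob exists for the worker-launched executors.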
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21663
Can one of the admins verify this patch?
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org