zjf2012 commented on a change in pull request #23560: [SPARK-26632][Core]
Separate Thread Configurations of Driver and Executor
URL: https://github.com/apache/spark/pull/23560#discussion_r264951407
##########
File path:
core/src/main/scala/org/apache/spark/network/netty/SparkTransportConf.scala
##########
@@ -39,13 +39,18 @@ object SparkTransportConf {
*/
def fromSparkConf(_conf: SparkConf, module: String, numUsableCores: Int = 0): TransportConf = {
val conf = _conf.clone
-
- // Specify thread configuration based on our JVM's allocation of cores (rather than necessarily
- // assuming we have all the machine's cores).
- // NB: Only set if serverThreads/clientThreads not already set.
+ val executorId = conf.get("spark.executor.id", "")
+ val isDriver = executorId == SparkContext.DRIVER_IDENTIFIER ||
+ executorId == SparkContext.LEGACY_DRIVER_IDENTIFIER
+ val role = if (isDriver) "driver" else "executor"
Review comment:
@vanzin, I just changed the code to fix the potential issue of a daemon mistakenly
picking up the executor's configuration. Each executor has its executor id set in
spark.executor.id, which lets me determine whether the process is an executor or a
daemon. The default role is None, and only RpcEnv sets the role when driver- or
executor-specific configurations are present. Thanks.
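
For illustration, a minimal sketch of the role-aware fallback lookup described above;
the key pattern spark.<role>.<module>.io.<suffix> and the helper getThreadCount are
assumptions for this sketch, not the exact code in this PR:

import org.apache.spark.SparkConf

// Hypothetical sketch: prefer a role-specific thread setting and fall back to the
// module-wide one. "role" is None for daemons, Some("driver") or Some("executor")
// when RpcEnv knows which process the TransportConf is being built for.
def getThreadCount(conf: SparkConf, role: Option[String], module: String,
    suffix: String, numUsableCores: Int): Int = {
  // Module-wide setting, defaulting to the JVM's usable cores.
  val fallback = conf.getInt(s"spark.$module.io.$suffix", numUsableCores)
  // Role-specific setting wins when present; otherwise use the fallback.
  role.map(r => conf.getInt(s"spark.$r.$module.io.$suffix", fallback))
    .getOrElse(fallback)
}

For example, getThreadCount(conf, Some("driver"), "rpc", "serverThreads", 8) would
read spark.driver.rpc.io.serverThreads if set, otherwise spark.rpc.io.serverThreads,
otherwise 8; with role = None (a daemon), only the module-wide key is consulted.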