vanzin commented on a change in pull request #23560: [SPARK-26632][Core] Separate Thread Configurations of Driver and Executor
URL: https://github.com/apache/spark/pull/23560#discussion_r264339458
##########
File path: core/src/main/scala/org/apache/spark/network/netty/SparkTransportConf.scala
##########
@@ -39,13 +39,18 @@ object SparkTransportConf {
    */
   def fromSparkConf(_conf: SparkConf, module: String, numUsableCores: Int = 0): TransportConf = {
     val conf = _conf.clone
-
-    // Specify thread configuration based on our JVM's allocation of cores (rather than necessarily
-    // assuming we have all the machine's cores).
-    // NB: Only set if serverThreads/clientThreads not already set.
+    val executorId = conf.get("spark.executor.id", "")
+    val isDriver = executorId == SparkContext.DRIVER_IDENTIFIER ||
+      executorId == SparkContext.LEGACY_DRIVER_IDENTIFIER
+    val role = if (isDriver) "driver" else "executor"
Review comment:
The problem is when you do have the configuration set. Your code does not enforce that, in the case of the shuffle service, the driver or executor configuration is never even looked at, and that's the problem.

So if some admin uses the same defaults conf file for apps and for the daemons, for example, you end up with the daemons using the wrong configuration (the external shuffle service doesn't set `spark.executor.id`, so this code would classify it as an "executor" and apply the executor thread settings to the daemon).

It's much better to be explicit about which configuration you want, just like the module, which is already an explicit argument to this method.
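
To make that concrete, here is a rough sketch (not a worked-out patch; the `role` parameter and the `spark.<role>.<module>.io.*` key names are only placeholders) of what passing the role explicitly could look like:

```scala
import scala.collection.JavaConverters._

import org.apache.spark.SparkConf
import org.apache.spark.network.util.{ConfigProvider, TransportConf}

object SparkTransportConfSketch {

  private val MAX_DEFAULT_NETTY_THREADS = 8

  /**
   * Sketch only: the caller states the role ("driver" / "executor"), or passes None for
   * processes such as the external shuffle service, which should not pick up the
   * driver/executor thread settings at all. Key names are illustrative.
   */
  def fromSparkConf(
      _conf: SparkConf,
      module: String,
      numUsableCores: Int = 0,
      role: Option[String] = None): TransportConf = {
    val conf = _conf.clone
    val defaultThreads = defaultNumThreads(numUsableCores)

    // Prefer the role-specific key (e.g. spark.driver.rpc.io.serverThreads), then the
    // module-level key, then the default derived from the usable cores.
    def numThreads(kind: String): Int = {
      val moduleLevel = conf.getInt(s"spark.$module.io.$kind", defaultThreads)
      role.map(r => conf.getInt(s"spark.$r.$module.io.$kind", moduleLevel)).getOrElse(moduleLevel)
    }

    conf.set(s"spark.$module.io.serverThreads", numThreads("serverThreads").toString)
    conf.set(s"spark.$module.io.clientThreads", numThreads("clientThreads").toString)

    new TransportConf(module, new ConfigProvider {
      override def get(name: String): String = conf.get(name)
      override def get(name: String, defaultValue: String): String = conf.get(name, defaultValue)
      override def getAll(): java.lang.Iterable[java.util.Map.Entry[String, String]] =
        conf.getAll.toMap.asJava.entrySet()
    })
  }

  private def defaultNumThreads(numUsableCores: Int): Int = {
    val availableCores =
      if (numUsableCores > 0) numUsableCores else Runtime.getRuntime.availableProcessors()
    math.min(availableCores, MAX_DEFAULT_NETTY_THREADS)
  }
}
```

With something like that, driver/executor startup code would pass `Some("driver")` or `Some("executor")` and the shuffle service would pass `None`, so there's no guessing based on `spark.executor.id`.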