mridulm commented on code in PR #41746:
URL: https://github.com/apache/spark/pull/41746#discussion_r1249165894
##########
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala:
##########
@@ -311,13 +311,24 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
decommissionExecutors(Array((executorId, v._1)), v._2, v._3)
unknownExecutorsPendingDecommission.invalidate(executorId)
})
+ // propagate current log level to new executor only if flag is true
+ if (conf.get(EXECUTOR_ALLOW_SYNC_LOG_LEVEL)) {
+      data.executorEndpoint.send(RefreshExecutor(Map("logLevel" -> Utils.getLogLevel)))
+ }
Review Comment:
To clarify, I meant this change at this specific location alone -
* Set the log level in SparkContext.conf when initializing and/or when the
user calls `setLogLevel`.
* When executors are initialized, they already pull the conf from the driver -
at that point, apply the log level.
If/when the user changes the log level dynamically, that would still need to
be propagated to executors (as the executor(s) have already been initialized) -
but we don't need to push it each time a new executor comes up: only when the
level is changed.
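The pattern being suggested can be sketched in plain Scala. This is an illustrative model only - the names (`Conf`, `Executor`, `Driver`) are hypothetical stand-ins, not Spark's actual `SparkConf`/`CoarseGrainedSchedulerBackend` API: new executors read the level from the conf they already pull at startup, and a push happens only on a dynamic change.

```scala
import scala.collection.mutable

// Hypothetical stand-in for the driver-side conf (not Spark's SparkConf).
final case class Conf(settings: mutable.Map[String, String]) {
  def set(k: String, v: String): Unit = settings(k) = v
  def get(k: String): Option[String] = settings.get(k)
}

// A new executor applies the log level from the conf it pulled at init,
// so no extra RPC is needed when it comes up.
final class Executor(conf: Conf) {
  var logLevel: String = conf.get("spark.log.level").getOrElse("INFO")
  def refresh(level: String): Unit = logLevel = level
}

final class Driver {
  val conf = Conf(mutable.Map.empty[String, String])
  private val executors = mutable.ListBuffer.empty[Executor]

  def launchExecutor(): Executor = {
    val e = new Executor(conf) // level comes from the pulled conf
    executors += e
    e
  }

  // Only a dynamic change pushes the level to already-running executors.
  def setLogLevel(level: String): Unit = {
    conf.set("spark.log.level", level)
    executors.foreach(_.refresh(level))
  }
}
```

Used this way, launching a new executor never triggers a push; only `setLogLevel` does, and it reaches both running executors (via refresh) and future ones (via the conf).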
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]