vinodkc commented on code in PR #41746:
URL: https://github.com/apache/spark/pull/41746#discussion_r1250119121
##########
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala:
##########
@@ -311,13 +311,24 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
decommissionExecutors(Array((executorId, v._1)), v._2, v._3)
unknownExecutorsPendingDecommission.invalidate(executorId)
})
+ // propagate current log level to new executor only if flag is true
+ if (conf.get(EXECUTOR_ALLOW_SYNC_LOG_LEVEL)) {
+ data.executorEndpoint.send(RefreshExecutor(Map("logLevel" -> Utils.getLogLevel)))
+ }
Review Comment:
@mridulm , Done. When executors are initialized, the driver's log level is now applied on the executor.
@grundprinzip , If the user turns on the sync, it overrides the executor log configuration only when `spark.log.level` is set on the `SparkContext` or `sc.setLogLevel()` is explicitly called with a log level different from the default configured in log4j2, so there is no side effect in that scenario.
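
For context, a minimal sketch of the user-facing flow discussed here (not the PR's actual implementation). The config key `spark.executor.syncLogLevel.enabled` is a placeholder assumed for illustration, standing in for whatever config entry `EXECUTOR_ALLOW_SYNC_LOG_LEVEL` resolves to:

```scala
// Minimal sketch of the behaviour discussed above, not the PR's implementation.
// "spark.executor.syncLogLevel.enabled" is a placeholder for the real config
// key behind EXECUTOR_ALLOW_SYNC_LOG_LEVEL.
import org.apache.spark.sql.SparkSession

object SyncLogLevelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sync-log-level-sketch")
      .master("local[2]")
      .config("spark.executor.syncLogLevel.enabled", "true") // placeholder key
      .getOrCreate()

    // With the sync flag on, an explicit driver-side log level change like
    // this is what would be propagated to executors when they register.
    spark.sparkContext.setLogLevel("DEBUG")

    spark.range(10).count()
    spark.stop()
  }
}
```

If neither `spark.log.level` nor `sc.setLogLevel()` is used, executors keep whatever the default log4j2 configuration provides.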