cloud-fan commented on a change in pull request #26624:
URL: https://github.com/apache/spark/pull/26624#discussion_r427024809
##########
File path: core/src/main/scala/org/apache/spark/executor/Executor.scala
##########
@@ -320,7 +321,12 @@ private[spark] class Executor(
val taskId = taskDescription.taskId
val threadName = s"Executor task launch worker for task $taskId"
- private val taskName = taskDescription.name
+ val taskName = taskDescription.name
+ val mdcProperties = taskDescription.properties.asScala
+ .filter(_._1.startsWith("mdc.")).map { item =>
+ val key = item._1.substring(4)
+ (key, item._2)
+ }.toMap
Review comment:
we don't really need to look up the resulting map by key, so building a `Map` is unnecessary. Calling `toSeq`
and keeping it as `Seq[(String, String)]` seems good enough.
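
A rough sketch of what the suggestion above might look like (standalone, with a plain `Seq` standing in for `taskDescription.properties.asScala`, which in the actual code wraps a Java `Properties`):

```scala
// Hypothetical stand-in for the task's properties; in Executor.scala these
// come from taskDescription.properties.asScala.
val properties: Seq[(String, String)] =
  Seq("mdc.appName" -> "myApp", "spark.executor.cores" -> "4")

// Keep the MDC pairs as a Seq[(String, String)] instead of a Map,
// since the result is only iterated over, never looked up by key.
val mdcProperties: Seq[(String, String)] = properties
  .filter { case (key, _) => key.startsWith("mdc.") }
  .map { case (key, value) => (key.stripPrefix("mdc."), value) }

// mdcProperties now contains only the "mdc."-prefixed entries,
// with the prefix removed from each key.
```

This is only a sketch of the reviewer's idea, not the exact change merged into the PR.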
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]