pan3793 commented on PR #45729: URL: https://github.com/apache/spark/pull/45729#issuecomment-2038083361
@gengliangwang I see the SPIP doc says:

> Spark identifiers (e.g., query ID, executor ID, task ID) will be [tagged using ThreadContext](https://logging.apache.org/log4j/2.x/manual/thread-context.html#fish-tagging), e.g., ThreadContext.set(EXECUTOR_ID, executorId).

It seems this is not implemented in this PR; in the migration PRs we still inject Spark identifiers such as `APP_ID` manually into each message.

Another question: since we use the `enum LogKey` to track all known MDC keys, is it possible to inject custom keys? For example, users may have custom labels on their Spark nodes, and they may also want to aggregate logs by those custom labels.
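For context, the MDC "fish tagging" the SPIP describes can be sketched with a minimal thread-local map. This only mimics the shape of Log4j2's `ThreadContext` API; the class and key names below are illustrative, not Spark's actual `LogKey` values:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of MDC-style "fish tagging": each thread carries a
// key/value context that a logging layout can append to every message,
// so call sites no longer interpolate identifiers by hand.
public class MdcSketch {
    private static final ThreadLocal<Map<String, String>> CTX =
        ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) {
        CTX.get().put(key, value);
    }

    static Map<String, String> snapshot() {
        return Collections.unmodifiableMap(new HashMap<>(CTX.get()));
    }

    // Stands in for a log layout that merges the context into each line.
    static String format(String message) {
        return message + " " + snapshot();
    }

    public static void main(String[] args) {
        put("executor_id", "42");      // analogous to ThreadContext.put(EXECUTOR_ID, ...)
        put("node_label", "gpu-pool"); // a hypothetical user-defined custom key
        System.out.println(format("Task started")); // message plus all context tags
    }
}
```

With this shape, a custom key is just another map entry, which is why a closed `enum LogKey` raises the question above.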