pan3793 commented on PR #45729:

   @gengliangwang I see the SPIP docs say
   > Spark identifiers (e.g., query ID, executor ID, task ID) will be tagged, e.g., `ThreadContext.set(EXECUTOR_ID, executorId)`.
   It seems this is not implemented in this PR; in the migration PRs, we still inject Spark identifiers like APP_ID manually in each message. Another question: since we use the `enum LogKey` to track all known MDC keys, is it possible to inject custom keys? For example, users may have custom labels on the Spark nodes and may also want to aggregate logs by those custom labels.
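   For reference, the MDC pattern the SPIP describes (per-thread key/value context attached to every log line) can be sketched with plain Java; this is only an illustration of the mechanism, not Spark's or Log4j2's actual API, and the class and key names below are hypothetical. A custom key would simply be an extra entry alongside the known `LogKey`-style identifiers:

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Minimal sketch of an MDC-style per-thread log context (the pattern
   // behind Log4j2's ThreadContext). Names are illustrative only.
   public class MdcSketch {
       private static final ThreadLocal<Map<String, String>> CTX =
               ThreadLocal.withInitial(HashMap::new);

       static void put(String key, String value) {
           CTX.get().put(key, value);
       }

       // Render a message with all context entries appended, roughly the way
       // a layout pattern would emit MDC keys for real structured logging.
       static String render(String message) {
           return message + " " + CTX.get();
       }

       public static void main(String[] args) {
           put("EXECUTOR_ID", "7");        // a known, LogKey-style identifier
           put("node_label", "gpu-pool");  // a hypothetical custom key
           System.out.println(render("Task started"));
       }
   }
   ```

   Under this pattern, custom keys need no enum registration; whether Spark's `LogKey` design permits that is exactly the open question above.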

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
