[
https://issues.apache.org/jira/browse/SPARK-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910941#comment-17910941
]
melin commented on SPARK-8981:
------------------------------
Currently, Spark only adds the taskName to the MDC. Could the executorId be
added to the MDC as well? I plan to write logs to Kafka via a Kafka appender,
and then periodically write the Kafka data to S3 for consumption.
[https://aws.github.io/aws-emr-containers-best-practices/troubleshooting/docs/where-to-look-for-spark-logs/]
Executor Logs -
s3://my_s3_log_location/${virtual-cluster-id}/jobs/${job-id}/containers/${spark-application-id}/${spark-job-id-driver-executor-id}/(stderr.gz/stdout.gz)
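As a sketch of what that setup could look like, here is a minimal log4j2
properties fragment for a Kafka appender, assuming Log4j2's KafkaAppender and
the Kafka client are on the executor classpath. `%X{mdc.taskName}` is the MDC
key Spark already sets; `%X{mdc.executorId}` is the hypothetical key this
comment is requesting, and the topic and broker names are placeholders.

```properties
# Sketch only: route executor logs to Kafka with task/executor context.
# %X{mdc.executorId} is the key proposed in this comment, not yet set by Spark.
appender.kafka.type = Kafka
appender.kafka.name = kafka
appender.kafka.topic = spark-executor-logs
appender.kafka.property.type = Property
appender.kafka.property.name = bootstrap.servers
appender.kafka.property.value = kafka-broker:9092
appender.kafka.layout.type = PatternLayout
appender.kafka.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p [%X{mdc.taskName}] [%X{mdc.executorId}] %c: %m%n

rootLogger.level = info
rootLogger.appenderRef.kafka.ref = kafka
```

A downstream consumer could then partition the Kafka data by the MDC fields
when writing to the S3 layout shown above.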
> Add MDC support in Executor
> ---------------------------
>
> Key: SPARK-8981
> URL: https://issues.apache.org/jira/browse/SPARK-8981
> Project: Spark
> Issue Type: New Feature
> Components: Spark Core
> Reporter: Paweł Kopiczko
> Assignee: Izek Greenfield
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.1.0
>
>
> It would be nice to have, because it's good to have logs in one file when
> using log agents (like Logentries) in standalone mode. It also allows
> configuring a rolling file appender without a mess when multiple applications
> are running.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]