Github user kalvinnchau commented on the issue:
https://github.com/apache/spark/pull/17404
Agreed — I made it configurable mainly to avoid changing the default, but we were just setting it to the app name anyway. Are you thinking of using `spark.app.name` in place of `Task`?
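For context, this is roughly how we invoke it today — a sketch only: the property name `spark.mesos.task.labels` and the `key:value` syntax are assumptions based on this PR's discussion, and `my-streaming-job` / `my-app.jar` are placeholder names.

```shell
# Hypothetical submission: tag the Mesos tasks with the app name so
# downstream tooling can group driver and executor tasks together.
spark-submit \
  --master mesos://master:5050 \
  --deploy-mode client \
  --conf spark.app.name=my-streaming-job \
  --conf "spark.mesos.task.labels=appName:my-streaming-job" \
  my-app.jar
```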
We run a lot of our streaming jobs in client mode, and as we add more and more tenants to our cluster we'd like to be able to label which job belongs to whom, so we can use those labels to tag the logs coming from each job for centralized logging.
We're trying to link the driver logs with the executor logs. When we query the Mesos API for all the running tasks/frameworks, there's no easy way to link drivers to executors. Looking at the `framework_id` doesn't work, since the driver's `framework_id` is the parent framework (Marathon, in this case), and the executors expose no information through that API that we can use to link them back to the driver.
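To make the linkage concrete, here's a small sketch of what a shared label would buy us: grouping tasks returned by the Mesos master's `/tasks` endpoint by a common label value. The JSON shape (`labels` as a list of `key`/`value` pairs) follows the Mesos operator API; the `spark.app.name` label key and the task ids are hypothetical.

```python
from collections import defaultdict

def label_value(task, key):
    """Return the value of the label `key` on a Mesos task dict, or None."""
    for label in task.get("labels", []):
        if label.get("key") == key:
            return label.get("value")
    return None

def group_tasks_by_label(tasks, key="spark.app.name"):
    """Bucket task ids (driver and executors alike) by a shared label value."""
    groups = defaultdict(list)
    for task in tasks:
        value = label_value(task, key)
        if value is not None:
            groups[value].append(task["id"])
    return dict(groups)

# Example: the driver and an executor share one label value, so they
# end up in the same bucket; an unlabeled task is ignored.
tasks = [
    {"id": "driver-1", "labels": [{"key": "spark.app.name", "value": "etl"}]},
    {"id": "exec-1",   "labels": [{"key": "spark.app.name", "value": "etl"}]},
    {"id": "other",    "labels": []},
]
print(group_tasks_by_label(tasks))  # {'etl': ['driver-1', 'exec-1']}
```

Without a label like this, nothing in the `/tasks` response ties an executor task back to the driver's framework.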
Should I make a new JIRA/PR and discuss it there? Or should we discuss it
here before I open those up?