Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4571#issuecomment-74427338
Overall, this looks good to me. I like the idea of failing here, since
otherwise it can be very confusing to users why their settings are not being
picked up. To avoid surprises, I think it's best not to backport this patch
(once merged) into older versions and to enforce the new behavior only going
forward.
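
To make the fail-fast intent concrete, here is a rough sketch of the kind of
check I have in mind. Everything here is illustrative rather than taken from
the patch: `SettingsValidator`, `validateSettings`, and the particular config
keys are hypothetical stand-ins.

```scala
import org.apache.spark.SparkConf

// Hypothetical helper: fail loudly when a setting arrives too late to take
// effect, instead of silently ignoring it.
object SettingsValidator {
  // Illustrative list of settings that only apply if set before startup.
  private val mustBeSetBeforeStart =
    Seq("spark.driver.memory", "spark.driver.extraJavaOptions")

  def validateSettings(conf: SparkConf, alreadyStarted: Boolean): Unit = {
    if (alreadyStarted) {
      mustBeSetBeforeStart.filter(conf.contains).foreach { key =>
        throw new IllegalArgumentException(
          s"$key was set but cannot take effect after startup; " +
            "set it before launching the application.")
      }
    }
  }
}
```

Throwing eagerly like this trades a hard, visible failure for the silent
misconfiguration users currently hit, which is the point of the patch.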
One other issue: it's not well documented in the existing code base what
kind of error propagation we expect when initializing a Spark environment
on the executors. It would be good to verify that such a failure gets thrown
up the chain in a way that is presented nicely to the user. I wouldn't block
the patch on that (it's not newly introduced here), but nicer internal
documentation about the expectations callers should have when a SparkEnv
fails to initialize would be welcome.
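
As a sketch of the contract I'd like to see documented (the object and method
names below are hypothetical stand-ins, not the real SparkEnv factory):

```scala
import org.apache.spark.{SparkConf, SparkException}

object ExecutorEnvBootstrap {
  /**
   * Initialize the executor-side environment.
   *
   * Expected contract for callers: on failure this throws a SparkException
   * wrapping the root cause. Callers should let it propagate so the executor
   * exits and the scheduler can surface a readable error to the user, rather
   * than continuing with a half-constructed environment.
   */
  def createExecutorEnv(conf: SparkConf): Unit = {
    try {
      // ... the real initialization (serializer, block manager, etc.) ...
      require(conf != null, "conf must not be null")
    } catch {
      case e: Exception =>
        throw new SparkException("Failed to initialize executor environment", e)
    }
  }
}
```

Even a short comment spelling out that contract on the real initialization
path would go a long way.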