Github user florianverhein commented on the pull request:
https://github.com/apache/spark/pull/5257#issuecomment-87570414
Some things to think about:
- Do we want an option for this (e.g. as for ganglia)? I haven't added one,
as I think it would be confusing at the moment: a user would assume the
option enables the HDFS NFS gateway on the cluster, but as far as I'm aware,
spark-ec2 doesn't do that yet (#6601). So I think the option would be better
added as part of that work.
- Further, since the ports are only opened to the authorized address, I don't
see a problem with enabling this by default for now.
I have tested this with a spark-ec2 cluster running the gateway (i.e. with
these settings, I can mount HDFS on my local machine - which is really
handy!)
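For reference, the local mount looks roughly like the following, per the standard Hadoop NFS gateway instructions. This is a sketch, not part of the PR: `<master-public-dns>` is a placeholder for the spark-ec2 master's address, and it assumes the gateway is already running on the cluster (which this PR does not set up).

```shell
# Create a local mount point for HDFS.
sudo mkdir -p /mnt/hdfs

# Mount the HDFS root exported by the NFS gateway on the cluster master.
# The gateway only supports NFSv3; nolock/noatime are the commonly
# recommended options for it.
sudo mount -t nfs -o vers=3,proto=tcp,nolock,noatime <master-public-dns>:/ /mnt/hdfs
```

This only works if the gateway's ports (111 for portmapper, 2049 for nfs, 4242 for mountd) are reachable from the client, which is what opening them to the authorized address achieves.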