[ 
https://issues.apache.org/jira/browse/SPARK-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14033001#comment-14033001
 ] 

Andrew Ash commented on SPARK-2157:
-----------------------------------

[~epakhomov] you were looking at making the ports for HttpBroadcast and 
HttpFileServer configurable in SPARK-1174 and SPARK-1176.  From looking at your 
pull requests, though, it appears these were never merged.

Have you been running a patched version of Spark instead?  I'm interested in 
how you've been dealing with this issue.

Many thanks!
Andrew

> Can't write tight firewall rules for Spark
> ------------------------------------------
>
>                 Key: SPARK-2157
>                 URL: https://issues.apache.org/jira/browse/SPARK-2157
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy, Spark Core
>    Affects Versions: 1.0.0
>            Reporter: Andrew Ash
>            Priority: Critical
>
> In order to run Spark in places with strict firewall rules, you need to be 
> able to specify every port that's used between all parts of the stack.
> Per the [network activity section of the 
> docs|http://spark.apache.org/docs/latest/spark-standalone.html#configuring-ports-for-network-security]
>  most of the ports are configurable, but there are a few ports that aren't 
> configurable.
> We need to make every port pinnable to a specific value, so that Spark can 
> run in highly locked-down environments.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
