Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/5722#issuecomment-136971153
  
    @andrewor14 my short summary of the problem, and the appeal of a change, is 
this: right now, if you set a port to P, Spark will always try ports P to 
P+(N-1), where N is `spark.port.maxRetries`. This makes sense for some ports 
(executors, etc.), though you can imagine cases where you want executors to 
try 9000-9010 and web UIs to try 8000-8080, say, to match firewall rules -- 
that is, a different N per service. More importantly, imagine running the web 
UI on port 80; it doesn't make sense to march through 81, 82, 83, as those are 
different well-known ports.
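    To make the current behavior concrete, here's a minimal sketch 
(illustrative only, not Spark's actual retry code):

```scala
// Hypothetical helper: given a starting port P and spark.port.maxRetries = N,
// the candidate ports are always P, P+1, ..., P+(N-1), regardless of which
// service is binding.
def candidatePorts(start: Int, maxRetries: Int): Seq[Int] =
  (0 until maxRetries).map(start + _)

// e.g. candidatePorts(80, 4) marches through 80, 81, 82, 83 -- stepping on
// other well-known ports, which is exactly the web-UI-on-port-80 problem.
```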
    
    The problem is indeed that it's a fairly invasive change. I think it would 
go something like this:
    - If a port is specified as a plain number, continue to try subsequent 
ports according to `spark.port.maxRetries`, for backwards compatibility
    - Deprecate `spark.port.maxRetries`, I think
    - Introduce new "[min,max]" range syntax for ports
    - Change all the port handling code everywhere to handle `(Int,Int)`
    - Change port arg parsing code to handle both cases
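    The parsing part of the steps above could be sketched roughly like this 
(hypothetical names and syntax, assuming a "[min,max]" form; not actual Spark 
code):

```scala
// Hypothetical parser: turn a port setting into an inclusive (min, max)
// range. A bare number keeps today's behavior of trying P..P+(N-1), where
// N is spark.port.maxRetries; the new "[min,max]" syntax gives an explicit
// range per service.
object PortRange {
  private val Range = """\[(\d+),(\d+)\]""".r

  def parse(setting: String, maxRetries: Int): (Int, Int) =
    setting.trim match {
      case Range(min, max) => (min.toInt, max.toInt)
      case n               => (n.toInt, n.toInt + maxRetries - 1)
    }
}
```

The rest of the work would then be threading the resulting `(Int, Int)` 
through all the port-handling code, which is where the invasiveness comes in.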
    
    I admit I think this would be a good change; I'm also put off by how much 
change it would entail.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
