Github user vanzin commented on the pull request:

    https://github.com/apache/spark/pull/5085#issuecomment-84476359
  
    Just to clarify: personally, I still think there's nothing wrong here, and that 
anybody who uses Spark the way it's meant to be used (i.e. make-distribution.sh 
+ calling bin/spark-submit) is unaffected by the recent changes. Nishkam feels 
strongly that I've broken his use case, even though I consider his use case 
technically unsupported, since he's modifying the launch scripts. But if he can 
change the library in a way that achieves many of the original goals and still 
supports his use case, then I'm fine with it.
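    
    For reference, the "expected" flow mentioned above looks roughly like the 
following; the flags shown are illustrative examples, not the exact ones 
discussed in this PR:
    
        # build a binary distribution from the source tree
        ./make-distribution.sh --tgz --name custom -Phadoop-2.4 -Pyarn
    
        # unpack the tarball, then launch applications only through the bundled script
        ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
          --master local[4] lib/spark-examples-*.jar 100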
    
    Pulling more logic into the shell scripts goes against those goals, but it 
feels like a small enough change that I'm willing to overlook it.

