Github user steveloughran commented on the pull request:

    https://github.com/apache/spark/pull/8758#issuecomment-140319344
  
    I came across this problem (and filed the JIRA) when I was trying to set up a properties file:
    
    {code}
    sbin/spark-history-server --properties-file ../testclusters/devix/spark-defaults.conf
    {code}
    
    Instead of that specific config being picked up, I got bounced with a usage message.
    
    On the JIRA I listed some obvious options; the bash fixes are simple, but not the ones I'd prefer. I think having HistoryServerArguments do the parsing is the appropriate strategy:
    
    1. There's already code in HistoryServerArguments to parse arguments; it's just not being invoked here (see the sketch after this list).
    1. Doing anything in the bash script will simply spread parsing across .sh and Scala code, which is bad for maintenance.
    1. If it is all done in a single Scala class, it's easy to write unit tests to check that parsing works.
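
    To make that concrete, here is a rough sketch of the approach, with all flag handling living in one Scala class. It is illustrative only; the class and member names (HistoryArgsSketch, propertiesFile, logDir) are made up and do not mirror the actual HistoryServerArguments code.

    {code}
    // Illustrative sketch only, not the real HistoryServerArguments implementation.
    class HistoryArgsSketch(args: Array[String]) {

      // Settings recognised by this sketch; both are optional.
      var propertiesFile: Option[String] = None
      var logDir: Option[String] = None

      parse(args.toList)

      // Consume the argument list recursively so every flag is handled in one place.
      private def parse(remaining: List[String]): Unit = remaining match {
        case "--properties-file" :: value :: tail =>
          propertiesFile = Some(value)
          parse(tail)
        case ("--dir" | "-d") :: value :: tail =>
          logDir = Some(value)
          parse(tail)
        case Nil =>
          ()  // nothing left to parse
        case unknown :: _ =>
          printUsageAndExit(unknown)
      }

      private def printUsageAndExit(arg: String): Unit = {
        System.err.println(
          s"""Unrecognised argument: $arg
             |Usage: HistoryServer [--dir DIR] [--properties-file FILE]""".stripMargin)
        System.exit(1)
      }
    }
    {code}

    A class like that is also trivially unit-testable: feed it a synthetic argv such as Array("--properties-file", "conf/spark-defaults.conf") and assert on the resulting fields, with no bash involved.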
    
    W.r.t. the current pull request: it doesn't work for me. I don't want to specify a log dir. In fact, given that I was testing the SPARK-1537 YARN timeline server integration, I don't even have a log dir, just a URL to the ATS instance.


