Github user abraithwaite commented on the pull request:

    https://github.com/apache/spark/pull/2573#issuecomment-140622568
  
    Hello!
    
    I was reading the explanation and I'm still not quite sure I understand the 
    reasoning.  I spent a bit too long trying to figure out how to configure the 
    executors to log to the correct HDFS directory.
    
    How exactly does a Spark application connect _directly_ to a Spark history 
    server?  It's my understanding (correct me if I'm wrong) that the application 
    writes its event logs to a directory and the history server reads that 
    directory.  So even if you had two history servers, each of them would still 
    only have a single log directory configuration parameter, no?
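    
    To make that concrete, my (possibly wrong) mental model from the config pages 
    is that the only link between the two is agreeing on a directory, e.g. in 
    spark-defaults.conf (the hdfs:// path below is just a placeholder):
    
        # application side: write event logs here
        spark.eventLog.enabled           true
        spark.eventLog.dir               hdfs:///shared/spark-logs
        # history server side: read event logs from the same place
        spark.history.fs.logDirectory    hdfs:///shared/spark-logs
    
    i.e. the application writes its event log under spark.eventLog.dir and the 
    history server picks up whatever shows up under spark.history.fs.logDirectory, 
    so pointing both at the same path is the whole "connection", as far as I can 
    tell.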
    
    Clearly, the docs should at least be clarified on the monitoring page.  
    https://spark.apache.org/docs/latest/monitoring.html has no mention of 
    spark.eventLog.dir (although it does mention spark.eventLog.enabled).  It 
    seems intuitive that the directory the application logs to and the directory 
    the history server reads from would be the same property.
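    
    For anyone else who hits this: assuming the properties above are set, the 
    intended wiring seems to be to point the history server at that same 
    directory, e.g. via spark-env.sh on whichever host runs it (paths are 
    placeholders again):
    
        # conf/spark-env.sh
        export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///shared/spark-logs"
    
        # then start it
        ./sbin/start-history-server.sh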
    
    /cc @andrewor14 

