Github user tgravescs commented on the pull request:

    https://github.com/apache/spark/pull/3409#issuecomment-65241674
  
    memory should not be specified through the java options (I filed a separate jira to support that - I would expect to end up with something like spark.yarn.am.memory for it). As I mentioned in one of my original comments, we should do a check like the one SparkConf already does for spark.executor.extraJavaOptions and reject heap settings in the java options.
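
    A minimal sketch of the kind of check I mean, assuming a hypothetical spark.yarn.am.extraJavaOptions key (the name is illustrative only, nothing is decided yet):

    ```scala
    import org.apache.spark.SparkConf

    // Sketch only: reject heap-size flags in a hypothetical AM java-options
    // key, mirroring the check SparkConf already does for
    // spark.executor.extraJavaOptions.
    def validateAmJavaOpts(conf: SparkConf): Unit = {
      val amOptsKey = "spark.yarn.am.extraJavaOptions" // illustrative name
      conf.getOption(amOptsKey).foreach { opts =>
        if (opts.contains("-Xmx") || opts.contains("-Xms")) {
          throw new IllegalArgumentException(
            s"$amOptsKey is not allowed to alter memory settings (-Xms/-Xmx); " +
              "use a dedicated option like spark.yarn.am.memory instead.")
        }
      }
    }
    ```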
    
    I can see valid use cases for wanting to set the java options or classpath for an application master differently than for the driver, and I don't see any harm in allowing users to do that. Most users wouldn't have to set anything for these. The reason I brought up doing all 3 at once is that I don't want to do one and then have someone ask for the others two weeks later; I would rather have it consistent across executor/driver/am. But since there has been so much discussion back and forth, perhaps we should do just the java options for now and see if there is a need for the others. I will also file a separate jira to look at making the driver.extra* settings apply consistently across the board; I find it inconsistent that driver.extraClassPath applies to the AM but the others don't.
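
    To illustrate the consistency I'm after, this is roughly how the three keys would line up; the driver and executor keys already exist, while the spark.yarn.am.* one is just the name I'd expect (not final):

    ```scala
    import org.apache.spark.SparkConf

    // Illustrative sketch: parallel java-options keys for all three roles.
    // spark.yarn.am.extraJavaOptions is a proposed name, not an existing key.
    val conf = new SparkConf()
      .set("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails")
      .set("spark.driver.extraJavaOptions", "-XX:+PrintGCDetails")
      .set("spark.yarn.am.extraJavaOptions", "-XX:+PrintGCDetails") // proposed
    ```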
    
    @vanzin I see what you were getting at now. You were just wondering if we should do something specific to hadoop configs. The thing is, you can do a lot of stuff in the configs and it's deployment-specific. Did you have something in mind?


