Github user ryan-williams commented on the pull request:

    https://github.com/apache/spark/pull/3525#issuecomment-73589151
  
    Thanks for resurrecting this, @srowen. I think there are 3 potentially 
separable changes here, in rough order of increasing contentiousness:
    
    1. Make `spark.yarn.executor.memoryOverhead` take a "memory string" (e.g. 
`1g` or `100m`; see the sketch after this list),
        * defaulting to `m` as the unit for backwards-compatibility.
    1. General refactoring of "memory string" handling in conf parameters in 
some other parts of the codebase.
    1. Addition of `spark.yarn.executor.memoryOverheadFraction`
    
    If people prefer, I can make a separate PR combining 1 and 2, or 
individual PRs for each of them.
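
    For concreteness, here's a minimal sketch of what 1 (and the shared 
parsing from 2) could look like: accept `1g`/`100m`-style strings, but treat 
a bare number as megabytes so existing configs keep their old meaning. 
`MemoryString` and `toMb` are hypothetical names for illustration, not 
Spark's actual implementation:

    ```scala
    // Hypothetical sketch (not Spark's actual code): parse a "memory string"
    // into megabytes, treating a bare number as MB for backwards-compatibility.
    object MemoryString {
      def toMb(str: String): Int = {
        val s = str.trim.toLowerCase
        if (s.nonEmpty && s.forall(_.isDigit)) {
          s.toInt                                  // legacy form: plain MB count
        } else {
          val (num, unit) = s.splitAt(s.length - 1)
          unit match {
            case "k" => (num.toLong / 1024).toInt  // round KB down to whole MB
            case "m" => num.toInt
            case "g" => num.toInt * 1024
            case "t" => num.toInt * 1024 * 1024
            case _   => throw new IllegalArgumentException(s"Invalid memory string: $str")
          }
        }
      }
    }

    // e.g. MemoryString.toMb("384") == 384; toMb("1g") == 1024; toMb("100m") == 100
    ```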
    
    Re: 3, I am sympathetic to @tgravescs's arguments, but I worry that we 
are conflating [wishing Spark's configuration surface area were smaller] with 
[forcing users to build custom Sparks to reach real, existing niches of that 
configuration surface area (in this case, to change a hard-coded `.07` whose 
ideal value reasonable people can disagree on)]. Additionally, the arguments 
about possible confusion over the interplay between `memoryOverhead` and 
`memoryOverheadFraction` apply just as well to the status quo, imho.
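
    To make that interplay concrete, here's a hedged sketch of the precedence 
under 3. `spark.yarn.executor.memoryOverheadFraction` is the setting this PR 
proposes (not an existing one), and the `0.07` default and `384` MB floor 
stand in for the values that are currently hard-coded:

    ```scala
    import org.apache.spark.SparkConf

    object OverheadSketch {
      // 384 MB floor, mirroring the allocator's existing minimum.
      val MemoryOverheadMin = 384

      def executorMemoryOverheadMb(conf: SparkConf, executorMemoryMb: Int): Int = {
        // Proposed setting; 0.07 stands in for the currently hard-coded factor.
        val fraction = conf.getDouble("spark.yarn.executor.memoryOverheadFraction", 0.07)
        val fractionBased = math.max((fraction * executorMemoryMb).toInt, MemoryOverheadMin)
        // An explicit memoryOverhead still takes precedence, exactly as it does
        // today; the fraction only replaces the hard-coded default.
        conf.getInt("spark.yarn.executor.memoryOverhead", fractionBased)
      }
    }
    ```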
    
    However, I don't feel strongly either way at this point, so I'm fine 
dropping 3 if that's what the consensus here prefers.
    


