Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/4683#issuecomment-75128247
  
    @zhzhan the problem is not creating 'alpha' features, but consuming them 
from elsewhere. These APIs don't exist at all until recent versions of YARN, so 
Spark needs another build profile and module to even compile this code without 
breaking existing users. 
    
    It won't work for people using Spark with anything older than the latest 
YARN. That much is OK; it's only usable by people willing to build or package 
Spark themselves, though it won't benefit most Spark users yet. But that 
happens any time you want to add a feature that only works with a newer 
version of, say, Hadoop: you just have to accept it if you really need the 
advanced functionality.
    
    The build complexity is a bit painful. YARN alpha support was only 
recently dropped, and maintaining the two was a headache. The overhead is 
smaller for this kind of feature, but it's a modest nice-to-have, and it may 
end up requiring yet another implementation if the API changes again between, 
say, 2.6 and 2.7. If it were vital, it might be something that just has to be 
done, but IMHO it's reasonable to wait for stable APIs when the feature is a 
nice-to-have. Hence, anything that can be done to get the required YARN APIs 
blessed as stable for 2.7 seems like the best use of time.
    
    Last question: I know the build already dodges around some tiny differences 
in API across YARN versions with reflection. Is that feasible here?
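    For reference, the reflection trick being alluded to boils down to looking 
a method up at runtime instead of linking against it at compile time, so the 
code still loads on older YARN versions that lack the API. A minimal sketch of 
that pattern in plain Java (the class and method names here are stand-ins, not 
actual YARN or Spark identifiers):

    ```java
    import java.lang.reflect.Method;

    public class ReflectionShim {
        // Look up a method by name at runtime. Returns null when running
        // against an older version of the class that lacks it, instead of
        // failing at compile/link time the way a direct call would.
        static Method tryGetMethod(Class<?> cls, String name, Class<?>... args) {
            try {
                return cls.getMethod(name, args);
            } catch (NoSuchMethodException e) {
                return null;
            }
        }

        public static void main(String[] args) {
            // Stand-in for a "newer API": String.chars() exists on Java 8+,
            // so this lookup succeeds and the method can be invoked.
            Method present = tryGetMethod(String.class, "chars");
            System.out.println(present != null);   // true

            // A method that doesn't exist in this runtime yields null,
            // letting the caller fall back to older behavior gracefully.
            Method missing = tryGetMethod(String.class, "noSuchApi");
            System.out.println(missing == null);   // true
        }
    }
    ```

    The caveat is that this only stretches so far: it papers over a missing 
method or two, not whole new classes and subsystems.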

