[ https://issues.apache.org/jira/browse/SPARK-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014592#comment-14014592 ]

Sean Owen commented on SPARK-1954:
----------------------------------

How about having the profiles just set a default Hadoop / YARN version? e.g. 
hadoop-2.3 sets hadoop.version=2.3.0. I think this was briefly discussed, and 
the idea was to make sure the user explicitly sets a version, but I'm not sure 
this side effect was considered at the time.
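
For illustration only, such a version-defaulting profile might look roughly like 
the sketch below. The profile id follows the hadoop-2.3 naming above; a real 
profile would likely also set related properties (e.g. a matching yarn.version), 
which is an assumption here:

    <!-- Sketch: a profile that only supplies a default hadoop.version. -->
    <!-- An explicit -Dhadoop.version on the command line would still override it. -->
    <profile>
      <id>hadoop-2.3</id>
      <properties>
        <hadoop.version>2.3.0</hadoop.version>
      </properties>
    </profile>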

This would make the regular Maven build work in an IDE like IntelliJ. I think it 
would be better to run directly off the project's Maven build rather than 
repeatedly re-exporting the SBT build.

> Make it easier to get Spark on YARN code to compile in IntelliJ
> ---------------------------------------------------------------
>
>                 Key: SPARK-1954
>                 URL: https://issues.apache.org/jira/browse/SPARK-1954
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 1.0.0
>            Reporter: Sandy Ryza
>
> When loading a project through a Maven pom, IntelliJ allows switching on 
> profiles, but, to my knowledge, doesn't provide a way to set arbitrary 
> properties. 
> To get Spark-on-YARN code to compile in IntelliJ, I need to manually change 
> the hadoop.version in the root pom.xml to 2.2.0 or higher.  This is very 
> cumbersome when switching branches.
> It would be really helpful to add a profile, which IntelliJ can switch on, 
> that sets the Hadoop version.
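
For context, the manual workaround the description refers to is normally handled 
on the command line rather than in the IDE, along the lines of the sketch below 
(the yarn profile and the hadoop.version property exist in the Spark build; the 
version number and extra flags here are just an example):

    # Hypothetical invocation: override hadoop.version explicitly when building
    # Spark for YARN, which IntelliJ's profile switcher cannot do on its own.
    mvn -Pyarn -Dhadoop.version=2.2.0 -DskipTests clean package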



--
This message was sent by Atlassian JIRA
(v6.2#6252)
