spark git commit: SPARK-5136 [DOCS] Improve documentation around setting up Spark IntelliJ project

2015-01-09 Thread pwendell
Repository: spark
Updated Branches:
  refs/heads/master b4034c3f8 -> 547df9771


SPARK-5136 [DOCS] Improve documentation around setting up Spark IntelliJ project

This PR simply points to the IntelliJ wiki page instead of also including 
IntelliJ notes in the docs. The intent, however, is also to update the wiki 
page with updated tips. This is the text I propose for the IntelliJ section 
on the wiki. I realize it omits some of the existing instructions on the 
wiki about enabling Hive, but I think those are actually optional.

--

IntelliJ supports both Maven- and SBT-based projects. It is recommended, 
however, to import Spark as a Maven project. Choose Import Project... from 
the File menu, and select the `pom.xml` file in the Spark root directory.

It is fine to leave all settings at their default values in the Maven import 
wizard, with two caveats. First, it is usually useful to enable Import Maven 
projects automatically, since changes to the project structure will then 
automatically update the IntelliJ project.

Second, note the step that prompts you to choose active Maven build profiles. 
As documented above, some build configurations require specific profiles to 
be enabled. The same profiles that are enabled with `-P[profile name]` above 
may be enabled on this screen. For example, if developing for Hadoop 2.4 with 
YARN support, enable the `yarn` and `hadoop-2.4` profiles; a command-line 
equivalent is sketched below.
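
For reference, these wizard selections correspond to the same profiles passed 
to a plain Maven build from the shell; a minimal sketch, where the 
`hadoop.version` value is an assumed example chosen to match the profile:

```
# Command-line equivalent of enabling the yarn and hadoop-2.4 profiles in
# the IntelliJ import wizard; hadoop.version here is an assumed example value.
$ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
```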

These selections can be changed later by accessing the Maven Projects tool 
window from the View menu, and expanding the Profiles section.

Rebuild Project can fail the first time the project is compiled, because 
generated source files are not created automatically. Try clicking the 
Generate Sources and Update Folders For All Projects button in the Maven 
Projects tool window to generate these sources manually; see the sketch 
below for a command-line alternative.
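
A rough command-line analogue of that button (an assumption, not part of the 
original instructions) is to run the standard Maven `generate-sources` 
lifecycle phase with the same profiles enabled, then re-sync the project in 
IntelliJ:

```
# Assumed shell analogue of the "Generate Sources and Update Folders For All
# Projects" button; run from the Spark root directory with matching profiles.
$ mvn -Pyarn -Phadoop-2.4 generate-sources
```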

Compilation may fail with an error like `scalac: bad option: 
-P:/home/jakub/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar`. 
If so, go to Preferences > Build, Execution, Deployment > Scala Compiler and 
clear the Additional compiler options field. Compilation will then work, 
although the option will return whenever the project is reimported.

Author: Sean Owen so...@cloudera.com

Closes #3952 from srowen/SPARK-5136 and squashes the following commits:

f3baa66 [Sean Owen] Point to new IJ / Eclipse wiki link
016b7df [Sean Owen] Point to IntelliJ wiki page instead of also including 
IntelliJ notes in the docs


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/547df977
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/547df977
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/547df977

Branch: refs/heads/master
Commit: 547df97715580f99ae573a49a86da12bf20cbc3d
Parents: b4034c3
Author: Sean Owen so...@cloudera.com
Authored: Fri Jan 9 09:35:46 2015 -0800
Committer: Patrick Wendell pwend...@gmail.com
Committed: Fri Jan 9 09:35:46 2015 -0800

--
 docs/building-spark.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/547df977/docs/building-spark.md
--
diff --git a/docs/building-spark.md b/docs/building-spark.md
index c1bcd91..fb93017 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -151,9 +151,10 @@ Thus, the full flow for running continuous-compilation of the `core` submodule m
  $ mvn scala:cc
 ```
 
-# Using With IntelliJ IDEA
+# Building Spark with IntelliJ IDEA or Eclipse
 
-This setup works fine in IntelliJ IDEA 11.1.4. After opening the project via the pom.xml file in the project root folder, you only need to activate either the hadoop1 or hadoop2 profile in the Maven Properties popout. We have not tried Eclipse/Scala IDE with this.
+For help in setting up IntelliJ IDEA or Eclipse for Spark development, and troubleshooting, refer to the
+[wiki page for IDE setup](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-IDESetup).
 
 # Building Spark Debian Packages
 





spark git commit: SPARK-5136 [DOCS] Improve documentation around setting up Spark IntelliJ project

2015-01-09 Thread pwendell
Repository: spark
Updated Branches:
  refs/heads/branch-1.2 71471bd79 -> 2f4e73d8f


SPARK-5136 [DOCS] Improve documentation around setting up Spark IntelliJ project

This PR simply points to the IntelliJ wiki page instead of also including 
IntelliJ notes in the docs. The intent, however, is also to update the wiki 
page with updated tips. This is the text I propose for the IntelliJ section 
on the wiki. I realize it omits some of the existing instructions on the 
wiki about enabling Hive, but I think those are actually optional.

--

IntelliJ supports both Maven- and SBT-based projects. It is recommended, 
however, to import Spark as a Maven project. Choose Import Project... from 
the File menu, and select the `pom.xml` file in the Spark root directory.

It is fine to leave all settings at their default values in the Maven import 
wizard, with two caveats. First, it is usually useful to enable Import Maven 
projects automatically, since changes to the project structure will then 
automatically update the IntelliJ project.

Second, note the step that prompts you to choose active Maven build profiles. 
As documented above, some build configurations require specific profiles to 
be enabled. The same profiles that are enabled with `-P[profile name]` above 
may be enabled on this screen. For example, if developing for Hadoop 2.4 with 
YARN support, enable the `yarn` and `hadoop-2.4` profiles.

These selections can be changed later by accessing the Maven Projects tool 
window from the View menu, and expanding the Profiles section.

Rebuild Project can fail the first time the project is compiled, because 
generated source files are not created automatically. Try clicking the 
Generate Sources and Update Folders For All Projects button in the Maven 
Projects tool window to generate these sources manually.

Compilation may fail with an error like `scalac: bad option: 
-P:/home/jakub/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar`. 
If so, go to Preferences > Build, Execution, Deployment > Scala Compiler and 
clear the Additional compiler options field. Compilation will then work, 
although the option will return whenever the project is reimported.

Author: Sean Owen so...@cloudera.com

Closes #3952 from srowen/SPARK-5136 and squashes the following commits:

f3baa66 [Sean Owen] Point to new IJ / Eclipse wiki link
016b7df [Sean Owen] Point to IntelliJ wiki page instead of also including 
IntelliJ notes in the docs

(cherry picked from commit 547df97715580f99ae573a49a86da12bf20cbc3d)
Signed-off-by: Patrick Wendell pwend...@gmail.com


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2f4e73d8
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2f4e73d8
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/2f4e73d8

Branch: refs/heads/branch-1.2
Commit: 2f4e73d8f55c9a59dbd28b95688c3a09b44773a9
Parents: 71471bd
Author: Sean Owen so...@cloudera.com
Authored: Fri Jan 9 09:35:46 2015 -0800
Committer: Patrick Wendell pwend...@gmail.com
Committed: Fri Jan 9 09:36:06 2015 -0800

--
 docs/building-spark.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/2f4e73d8/docs/building-spark.md
--
diff --git a/docs/building-spark.md b/docs/building-spark.md
index 72a9bfd..a4b3472 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -161,9 +161,10 @@ Thus, the full flow for running continuous-compilation of the `core` submodule m
  $ mvn scala:cc
 ```
 
-# Using With IntelliJ IDEA
+# Building Spark with IntelliJ IDEA or Eclipse
 
-This setup works fine in IntelliJ IDEA 11.1.4. After opening the project via the pom.xml file in the project root folder, you only need to activate either the hadoop1 or hadoop2 profile in the Maven Properties popout. We have not tried Eclipse/Scala IDE with this.
+For help in setting up IntelliJ IDEA or Eclipse for Spark development, and troubleshooting, refer to the
+[wiki page for IDE setup](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-IDESetup).
 
 # Building Spark Debian Packages
 

