[spark] Git Push Summary

2017-04-14 Thread pwendell
Repository: spark
Updated Tags:  refs/tags/v2.1.1-rc3 [created] 2ed19cff2




[2/2] spark git commit: Preparing development version 2.1.2-SNAPSHOT

2017-04-14 Thread pwendell
Preparing development version 2.1.2-SNAPSHOT


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2a3e50e2
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2a3e50e2
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/2a3e50e2

Branch: refs/heads/branch-2.1
Commit: 2a3e50e24b1c99bb12cd42d4c648213852dd26bf
Parents: 2ed19cf
Author: Patrick Wendell 
Authored: Fri Apr 14 15:37:47 2017 -0700
Committer: Patrick Wendell 
Committed: Fri Apr 14 15:37:47 2017 -0700

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/java8-tests/pom.xml  | 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 launcher/pom.xml  | 2 +-
 mesos/pom.xml | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 yarn/pom.xml  | 2 +-
 39 files changed, 40 insertions(+), 40 deletions(-)
--
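
These twin commits ([1/2] tags the v2.1.1-rc3 release, [2/2] moves branch-2.1 on to the
next SNAPSHOT) each rewrite the version string in every module's pom.xml. A minimal
sketch of such a tree-wide bump — hypothetical helper code, not Spark's actual release
scripts, which drive Maven directly — might look like:

import java.nio.file.{Files, Paths}
import scala.collection.JavaConverters._

// Hypothetical helper, not Spark's release tooling: rewrite the first
// <version> tag of every pom.xml found under the current directory.
object BumpPomVersions {
  def main(args: Array[String]): Unit = {
    val Array(oldV, newV) = args  // e.g. "2.1.1" "2.1.2-SNAPSHOT"
    val quoted = java.util.regex.Pattern.quote(oldV)
    Files.walk(Paths.get("."))
      .iterator().asScala
      .filter(_.getFileName.toString == "pom.xml")
      .foreach { pom =>
        val text = new String(Files.readAllBytes(pom), "UTF-8")
        // Only the first <version> occurrence is the parent/project version;
        // later occurrences belong to dependencies and must stay untouched.
        val updated = text.replaceFirst(
          s"<version>$quoted</version>", s"<version>$newV</version>")
        Files.write(pom, updated.getBytes("UTF-8"))
      }
  }
}

Run from the repository root, e.g. `BumpPomVersions 2.1.1 2.1.2-SNAPSHOT`. The R,
Python, and docs version strings seen in the diffstat above are maintained separately.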


http://git-wip-us.apache.org/repos/asf/spark/blob/2a3e50e2/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 1ceda7b..2d461ca 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.1.1
+Version: 2.1.2
 Title: R Frontend for Apache Spark
 Description: The SparkR package provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/2a3e50e2/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index cc290c0..6e092ef 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.1.1</version>
+    <version>2.1.2-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/2a3e50e2/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index ccf4b27..77a4b64 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.1.1</version>
+    <version>2.1.2-SNAPSHOT</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/2a3e50e2/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 98a2324..1a2d85a 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.1.1</version>
+    <version>2.1.2-SNAPSHOT</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/2a3e50e2/common/network-yarn/pom.xml
--
diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index dc1ad14..7a57e89 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.1.1</version>
+    <version>2.1.2-SNAPSHOT</version>

[1/2] spark git commit: Preparing Spark release v2.1.1-rc3

2017-04-14 Thread pwendell
Repository: spark
Updated Branches:
  refs/heads/branch-2.1 6f715c01d -> 2a3e50e24


Preparing Spark release v2.1.1-rc3


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2ed19cff
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2ed19cff
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/2ed19cff

Branch: refs/heads/branch-2.1
Commit: 2ed19cff2f6ab79a718526e5d16633412d8c4dd4
Parents: 6f715c0
Author: Patrick Wendell 
Authored: Fri Apr 14 15:37:43 2017 -0700
Committer: Patrick Wendell 
Committed: Fri Apr 14 15:37:43 2017 -0700

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/java8-tests/pom.xml  | 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 launcher/pom.xml  | 2 +-
 mesos/pom.xml | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 yarn/pom.xml  | 2 +-
 39 files changed, 40 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/2ed19cff/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 2d461ca..1ceda7b 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.1.2
+Version: 2.1.1
 Title: R Frontend for Apache Spark
 Description: The SparkR package provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/2ed19cff/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 6e092ef..cc290c0 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.1.2-SNAPSHOT</version>
+    <version>2.1.1</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/2ed19cff/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 77a4b64..ccf4b27 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.1.2-SNAPSHOT</version>
+    <version>2.1.1</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/2ed19cff/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 1a2d85a..98a2324 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.1.2-SNAPSHOT</version>
+    <version>2.1.1</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/2ed19cff/common/network-yarn/pom.xml
--
diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index 7a57e89..dc1ad14 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@

spark git commit: [SPARK-20243][TESTS] DebugFilesystem.assertNoOpenStreams thread race

2017-04-14 Thread hvanhovell
Repository: spark
Updated Branches:
  refs/heads/branch-2.1 bca7ce285 -> 6f715c01d


[SPARK-20243][TESTS] DebugFilesystem.assertNoOpenStreams thread race

## What changes were proposed in this pull request?

Synchronize access to the openStreams map.
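
The summary is terse; the race it fixes is that `ConcurrentHashMap` only makes
individual operations atomic, while `assertNoOpenStreams` needs the size check, the
iteration, and the `head` access to be atomic as a group against concurrent
`open`/`close` calls. A minimal sketch of the locking pattern adopted (illustrative
names, not the Spark code itself):

import scala.collection.mutable

// Sketch: every access goes through the map's own monitor.
object OpenResourceTracker {
  private val open = mutable.Map.empty[Int, Throwable]

  def add(id: Int): Unit = open.synchronized {
    open.put(id, new Throwable())  // remember where the resource was opened
  }

  def remove(id: Int): Unit = open.synchronized {
    open.remove(id)
  }

  // The size check, the iteration, and the head access all happen under the
  // same lock, so no concurrent remove() can invalidate them mid-way.
  def assertNoneOpen(): Unit = open.synchronized {
    if (open.nonEmpty) {
      open.values.foreach(_.printStackTrace())
      throw new IllegalStateException(
        s"${open.size} possibly leaked resources", open.values.head)
    }
  }
}

Holding one monitor across the whole assertion is what the patch below achieves with
`openStreams.synchronized`.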

## How was this patch tested?

Existing tests.

Author: Bogdan Raducanu 

Closes #17592 from bogdanrdc/SPARK-20243.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6f715c01
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/6f715c01
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/6f715c01

Branch: refs/heads/branch-2.1
Commit: 6f715c01dd09db52866fd93ff49eb206d157f8c3
Parents: bca7ce2
Author: Bogdan Raducanu 
Authored: Mon Apr 10 17:34:15 2017 +0200
Committer: Herman van Hovell 
Committed: Fri Apr 14 15:49:02 2017 +0200

--
 .../org/apache/spark/DebugFilesystem.scala  | 26 
 1 file changed, 16 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/6f715c01/core/src/test/scala/org/apache/spark/DebugFilesystem.scala
--
diff --git a/core/src/test/scala/org/apache/spark/DebugFilesystem.scala b/core/src/test/scala/org/apache/spark/DebugFilesystem.scala
index 72aea84..91355f7 100644
--- a/core/src/test/scala/org/apache/spark/DebugFilesystem.scala
+++ b/core/src/test/scala/org/apache/spark/DebugFilesystem.scala
@@ -20,7 +20,6 @@ package org.apache.spark
 import java.io.{FileDescriptor, InputStream}
 import java.lang
 import java.nio.ByteBuffer
-import java.util.concurrent.ConcurrentHashMap
 
 import scala.collection.JavaConverters._
 import scala.collection.mutable
@@ -31,21 +30,29 @@ import org.apache.spark.internal.Logging
 
 object DebugFilesystem extends Logging {
   // Stores the set of active streams and their creation sites.
-  private val openStreams = new ConcurrentHashMap[FSDataInputStream, Throwable]()
+  private val openStreams = mutable.Map.empty[FSDataInputStream, Throwable]
 
-  def clearOpenStreams(): Unit = {
+  def addOpenStream(stream: FSDataInputStream): Unit = openStreams.synchronized {
+    openStreams.put(stream, new Throwable())
+  }
+
+  def clearOpenStreams(): Unit = openStreams.synchronized {
     openStreams.clear()
   }
 
-  def assertNoOpenStreams(): Unit = {
-    val numOpen = openStreams.size()
+  def removeOpenStream(stream: FSDataInputStream): Unit = openStreams.synchronized {
+    openStreams.remove(stream)
+  }
+
+  def assertNoOpenStreams(): Unit = openStreams.synchronized {
+    val numOpen = openStreams.values.size
     if (numOpen > 0) {
-      for (exc <- openStreams.values().asScala) {
+      for (exc <- openStreams.values) {
         logWarning("Leaked filesystem connection created at:")
         exc.printStackTrace()
       }
       throw new IllegalStateException(s"There are $numOpen possibly leaked file streams.",
-        openStreams.values().asScala.head)
+        openStreams.values.head)
     }
   }
 }
@@ -60,8 +67,7 @@ class DebugFilesystem extends LocalFileSystem {
 
   override def open(f: Path, bufferSize: Int): FSDataInputStream = {
     val wrapped: FSDataInputStream = super.open(f, bufferSize)
-    openStreams.put(wrapped, new Throwable())
-
+    addOpenStream(wrapped)
     new FSDataInputStream(wrapped.getWrappedStream) {
       override def setDropBehind(dropBehind: lang.Boolean): Unit = wrapped.setDropBehind(dropBehind)
 
@@ -98,7 +104,7 @@ class DebugFilesystem extends LocalFileSystem {
 
       override def close(): Unit = {
         wrapped.close()
-        openStreams.remove(wrapped)
+        removeOpenStream(wrapped)
       }
 
       override def read(): Int = wrapped.read()





spark git commit: [SPARK-20318][SQL] Use Catalyst type for min/max in ColumnStat for ease of estimation

2017-04-14 Thread wenchen
Repository: spark
Updated Branches:
  refs/heads/master 7536e2849 -> fb036c441


[SPARK-20318][SQL] Use Catalyst type for min/max in ColumnStat for ease of estimation

## What changes were proposed in this pull request?

Currently, when estimating predicates like col > literal or col = literal, we
update min or max in column stats based on the literal value. However, the
literal value is of the Catalyst (internal) type, while min/max are of the
external type, so for the next predicate we again need a type conversion to
compare and update column stats. This is awkward and causes many unnecessary
conversions during estimation.

To solve this, we use the Catalyst type for min/max in `ColumnStat`. Note that
the persistent format in the metastore is still the external type, so there is
no inconsistency for statistics in the metastore.
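
As a concrete illustration of the internal-vs-external distinction (a sketch using the
catalyst utility this patch itself imports; assumes spark-catalyst on the classpath,
e.g. pasted into a Spark REPL, and is not the actual `ColumnStat` code):

import java.sql.{Date, Timestamp}
import org.apache.spark.sql.catalyst.util.DateTimeUtils

// External (Row-level) values, as the old min/max were stored:
val externalDate: Date = Date.valueOf("2017-04-14")
val externalTs: Timestamp = Timestamp.valueOf("2017-04-14 15:37:47")

// Internal Catalyst representations, as min/max are stored after this patch:
val internalDate: Int = DateTimeUtils.fromJavaDate(externalDate)    // days since epoch
val internalTs: Long = DateTimeUtils.fromJavaTimestamp(externalTs)  // microseconds since epoch

With min/max kept in the internal form, a predicate's literal (already internal) can be
compared and written back without round-tripping through `java.sql` types on every
estimation step.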

This PR also fixes a bug for boolean type in `IN` conditions.

## How was this patch tested?

The changes to ColumnStat are covered by existing tests.
For the bug fix, a new test for boolean type in `IN` conditions is added.

Author: wangzhenhua 

Closes #17630 from wzhfy/refactorColumnStat.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/fb036c44
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/fb036c44
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/fb036c44

Branch: refs/heads/master
Commit: fb036c4413c2cd4d90880d080f418ec468d6c0fc
Parents: 7536e28
Author: wangzhenhua 
Authored: Fri Apr 14 19:16:47 2017 +0800
Committer: Wenchen Fan 
Committed: Fri Apr 14 19:16:47 2017 +0800

--
 .../sql/catalyst/plans/logical/Statistics.scala | 95 +---
 .../statsEstimation/EstimationUtils.scala   | 30 ++-
 .../statsEstimation/FilterEstimation.scala  | 68 +-
 .../plans/logical/statsEstimation/Range.scala   | 70 +++
 .../statsEstimation/FilterEstimationSuite.scala | 41 +
 .../statsEstimation/JoinEstimationSuite.scala   | 15 ++--
 .../ProjectEstimationSuite.scala| 21 ++---
 .../command/AnalyzeColumnCommand.scala  |  8 +-
 .../spark/sql/StatisticsCollectionSuite.scala   | 19 ++--
 .../spark/sql/hive/HiveExternalCatalog.scala|  4 +-
 10 files changed, 189 insertions(+), 182 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/fb036c44/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala
--
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala
index f24b240..3d4efef 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala
@@ -25,6 +25,7 @@ import org.apache.spark.internal.Logging
 import org.apache.spark.sql.{AnalysisException, Row}
 import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.catalyst.expressions.aggregate._
+import org.apache.spark.sql.catalyst.util.DateTimeUtils
 import org.apache.spark.sql.types._
 import org.apache.spark.util.Utils
 
@@ -74,11 +75,10 @@ case class Statistics(
  * Statistics collected for a column.
  *
  * 1. Supported data types are defined in `ColumnStat.supportsType`.
- * 2. The JVM data type stored in min/max is the external data type (used in Row) for the
- * corresponding Catalyst data type. For example, for DateType we store java.sql.Date, and for
- * TimestampType we store java.sql.Timestamp.
- * 3. For integral types, they are all upcasted to longs, i.e. shorts are stored as longs.
- * 4. There is no guarantee that the statistics collected are accurate. Approximation algorithms
+ * 2. The JVM data type stored in min/max is the internal data type for the corresponding
+ *    Catalyst data type. For example, the internal type of DateType is Int, and that the internal
+ *    type of TimestampType is Long.
+ * 3. There is no guarantee that the statistics collected are accurate. Approximation algorithms
  *    (sketches) might have been used, and the data collected can also be stale.
  *
  * @param distinctCount number of distinct values
@@ -104,22 +104,43 @@ case class ColumnStat(
   /**
    * Returns a map from string to string that can be used to serialize the column stats.
    * The key is the name of the field (e.g. "distinctCount" or "min"), and the value is the string
-   * representation for the value. The deserialization side is defined in [[ColumnStat.fromMap]].
+   * representation for the value. min/max values are converted to the external data type. For
+   *

[spark-website] Git Push Summary

2017-04-14 Thread srowen
Repository: spark-website
Updated Branches:
  refs/heads/add_more_intellij_instructions [deleted] fe9e3a88c




[1/2] spark-website git commit: add intellij information

2017-04-14 Thread srowen
Repository: spark-website
Updated Branches:
  refs/heads/asf-site d39c4ecac -> fe9e3a88c


add intellij information


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/cbe2a9b8
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/cbe2a9b8
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/cbe2a9b8

Branch: refs/heads/asf-site
Commit: cbe2a9b863e520a511136fdf6df67e2a07d5cc14
Parents: d39c4ec
Author: samelamin 
Authored: Thu Apr 13 16:30:17 2017 +0100
Committer: samelamin 
Committed: Thu Apr 13 16:30:17 2017 +0100

--
 developer-tools.md | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/spark-website/blob/cbe2a9b8/developer-tools.md
--
diff --git a/developer-tools.md b/developer-tools.md
index 547d8aa..239be34 100644
--- a/developer-tools.md
+++ b/developer-tools.md
@@ -292,6 +292,7 @@ so, open the "Project Settings" and select "Modules". Based on your selected Mav
 may need to add source folders to the following modules:
 - spark-hive: add v0.13.1/src/main/scala
 - spark-streaming-flume-sink: add target\scala-2.10\src_managed\main\compiled_avro
+- spark-catalyst: add target\scala-2.11\src_managed\main
 - Compilation may fail with an error like "scalac: bad option: 
 -P:/home/jakub/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar".
 If so, go to Preferences > Build, Execution, Deployment > Scala Compiler and clear the "Additional 





[2/2] spark-website git commit: regenerate html pointing to 2.11

2017-04-14 Thread srowen
regenerate html pointing to 2.11


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/fe9e3a88
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/fe9e3a88
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/fe9e3a88

Branch: refs/heads/asf-site
Commit: fe9e3a88cfc56c1b10f9814bdeb28f9f1ff47286
Parents: cbe2a9b
Author: samelamin 
Authored: Thu Apr 13 17:09:28 2017 +0100
Committer: samelamin 
Committed: Thu Apr 13 17:09:28 2017 +0100

--
 developer-tools.md| 2 +-
 site/developer-tools.html | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark-website/blob/fe9e3a88/developer-tools.md
--
diff --git a/developer-tools.md b/developer-tools.md
index 239be34..7c14de8 100644
--- a/developer-tools.md
+++ b/developer-tools.md
@@ -291,7 +291,7 @@ In these cases, you may need to add source locations explicitly to compile the e
 so, open the "Project Settings" and select "Modules". Based on your selected Maven profiles, you 
 may need to add source folders to the following modules:
 - spark-hive: add v0.13.1/src/main/scala
-- spark-streaming-flume-sink: add target\scala-2.10\src_managed\main\compiled_avro
+- spark-streaming-flume-sink: add target\scala-2.11\src_managed\main\compiled_avro
 - spark-catalyst: add target\scala-2.11\src_managed\main
 - Compilation may fail with an error like "scalac: bad option: 
 -P:/home/jakub/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar".
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/fe9e3a88/site/developer-tools.html
--
diff --git a/site/developer-tools.html b/site/developer-tools.html
index a44bfde..ffa1240 100644
--- a/site/developer-tools.html
+++ b/site/developer-tools.html
@@ -328,7 +328,7 @@ JIRA number of the issue you're working on as well as its title.
 
 For the problem described above, we might add the following:
 
-// [SPARK-zz][CORE] Fix an issue
+// [SPARK-zz][CORE] Fix an issue
 ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.SomeClass.this")
 
 Otherwise, you will have to resolve those incompatibilities before opening or
@@ -463,7 +463,8 @@ so, open the Project Settings and select Modules. Ba
 may need to add source folders to the following modules:
 
   spark-hive: add v0.13.1/src/main/scala
-  spark-streaming-flume-sink: add target\scala-2.10\src_managed\main\compiled_avro
+  spark-streaming-flume-sink: add target\scala-2.11\src_managed\main\compiled_avro
+  spark-catalyst: add target\scala-2.11\src_managed\main
 
   
   Compilation may fail with an error like scalac: bad option: 





[1/2] spark-website git commit: add intellij information

2017-04-14 Thread srowen
Repository: spark-website
Updated Branches:
  refs/heads/add_more_intellij_instructions [created] fe9e3a88c


add intellij information


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/cbe2a9b8
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/cbe2a9b8
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/cbe2a9b8

Branch: refs/heads/add_more_intellij_instructions
Commit: cbe2a9b863e520a511136fdf6df67e2a07d5cc14
Parents: d39c4ec
Author: samelamin 
Authored: Thu Apr 13 16:30:17 2017 +0100
Committer: samelamin 
Committed: Thu Apr 13 16:30:17 2017 +0100

--
 developer-tools.md | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/spark-website/blob/cbe2a9b8/developer-tools.md
--
diff --git a/developer-tools.md b/developer-tools.md
index 547d8aa..239be34 100644
--- a/developer-tools.md
+++ b/developer-tools.md
@@ -292,6 +292,7 @@ so, open the "Project Settings" and select "Modules". Based on your selected Mav
 may need to add source folders to the following modules:
 - spark-hive: add v0.13.1/src/main/scala
 - spark-streaming-flume-sink: add target\scala-2.10\src_managed\main\compiled_avro
+- spark-catalyst: add target\scala-2.11\src_managed\main
 - Compilation may fail with an error like "scalac: bad option: 
 -P:/home/jakub/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar".
 
 If so, go to Preferences > Build, Execution, Deployment > Scala Compiler and clear the "Additional 

