Repository: spark
Updated Branches:
refs/heads/master bb2bb0cf6 -> f2eb070ac
Updated doc for spark.closure.serializer to indicate that only the Java serializer works.
See discussion from
http://apache-spark-developers-list.1001551.n3.nabble.com/bug-using-kryo-as-closure-serializer-td6473.html
Author:
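Given the doc change above, a safe configuration keeps Kryo (if desired) for data serialization while leaving the closure serializer at its Java default. A sketch in spark-defaults.conf form (illustrative, not part of the commit):

```
# Kryo may be used for serializing shuffled/cached data
spark.serializer    org.apache.spark.serializer.KryoSerializer
# Do NOT override spark.closure.serializer; per the doc update,
# only the default Java serializer works for closures.
```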
Repository: spark
Updated Branches:
refs/heads/master 73b0cbcc2 -> 3292e2a71
SPARK-1721: Reset the thread classLoader in the Mesos Executor
This is because Mesos calls it with a different environment or something; the
result is that the Spark jar is missing and it can't load classes.
This
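The pattern behind this fix can be sketched in plain Java (a hypothetical illustration, not the actual Spark Mesos executor code): reset the thread's context classloader to the loader that loaded the application's own classes, so subsequent reflective class lookups can find the application jar.

```java
public class ContextClassLoaderReset {
    public static void main(String[] args) {
        // The loader that actually loaded our classes (in Spark's case,
        // the loader that can see the Spark jar).
        ClassLoader appLoader = ContextClassLoaderReset.class.getClassLoader();

        // A thread started by an external system (e.g. a Mesos-launched
        // executor) may carry a different or minimal context classloader;
        // reset it before doing any class loading by name.
        Thread.currentThread().setContextClassLoader(appLoader);

        System.out.println(
            Thread.currentThread().getContextClassLoader() == appLoader);
    }
}
```

After the reset, any code that resolves classes via the thread context classloader sees the same classes as the application itself.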
Repository: spark
Updated Branches:
refs/heads/master 3292e2a71 -> a975a19f2
[SPARK-1504], [SPARK-1505], [SPARK-1558] Updated Spark Streaming guide
- SPARK-1558: Updated custom receiver guide to match the new API
- SPARK-1504: Added deployment and monitoring subsection to streaming
-
Repository: spark
Updated Branches:
refs/heads/branch-1.0 80f4360e7 -> 1fac4ecbd
[SPARK-1504], [SPARK-1505], [SPARK-1558] Updated Spark Streaming guide
- SPARK-1558: Updated custom receiver guide to match the new API
- SPARK-1504: Added deployment and monitoring subsection to streaming
Repository: spark
Updated Branches:
refs/heads/master a975a19f2 -> cf0a8f020
[SPARK-1681] Include datanucleus jars in Spark Hive distribution
This copies the datanucleus jars over from `lib_managed` into `dist/lib`, if
any. The `CLASSPATH` must also be updated to reflect this change.
Author:
Repository: spark
Updated Branches:
refs/heads/branch-1.0 a5f765cab -> 32c960a01
http://git-wip-us.apache.org/repos/asf/spark/blob/32c960a0/mllib/src/test/scala/org/apache/spark/mllib/tree/DecisionTreeSuite.scala
--
diff --git
Repository: spark
Updated Branches:
refs/heads/branch-1.0 32c960a01 -> 2853e56f6
[SPARK-1678][SPARK-1679] In-memory compression bug fix and made compression
configurable, disabled by default
In-memory compression is now configurable in `SparkConf` by the
Repository: spark
Updated Branches:
refs/heads/master 98750a74d -> 6d721c5f7
[SPARK-1678][SPARK-1679] In-memory compression bug fix and made compression
configurable, disabled by default
In-memory compression is now configurable in `SparkConf` by the
`spark.sql.inMemoryCompression.enabled`
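Per the message, the flag lives in `SparkConf` and is disabled by default; enabling it would look like the following spark-defaults.conf fragment (a sketch, with only the property name taken from the commit text):

```
# In-memory columnar compression for Spark SQL; off unless set explicitly
spark.sql.inMemoryCompression.enabled    true
```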
Repository: spark
Updated Branches:
refs/heads/branch-1.0 2853e56f6 -> 4708eff67
[SPARK-1735] Add the missing special profiles to make-distribution.sh
73b0cbcc241cca3d318ff74340e80b02f884acbd introduced a few special profiles that
are not covered in `make-distribution.sh`. This affects