spark git commit: [CORE][TESTS] minor fix of JavaSerializerSuite

2015-12-18 Thread rxin
Repository: spark Updated Branches: refs/heads/master 0370abdfd -> 40e52a27c [CORE][TESTS] minor fix of JavaSerializerSuite No JIRA is created. The original test passed because the class cast is lazy (it only happens when the object's method is invoked). Author: Jeff Zhang

spark git commit: [SPARK-12413] Fix Mesos ZK persistence

2015-12-18 Thread sarutak
Repository: spark Updated Branches: refs/heads/master 40e52a27c -> 2bebaa39d [SPARK-12413] Fix Mesos ZK persistence I believe this fixes SPARK-12413. I'm currently running an integration test to verify. Author: Michael Gummelt Closes #10366 from

spark git commit: [SPARK-12413] Fix Mesos ZK persistence

2015-12-18 Thread sarutak
Repository: spark Updated Branches: refs/heads/branch-1.6 9177ea383 -> df0231952 [SPARK-12413] Fix Mesos ZK persistence I believe this fixes SPARK-12413. I'm currently running an integration test to verify. Author: Michael Gummelt Closes #10366 from

spark git commit: [SPARK-10500][SPARKR] sparkr.zip cannot be created if /R/lib is unwritable

2015-12-18 Thread shivaram
Repository: spark Updated Branches: refs/heads/branch-1.5 a8d14cc06 -> d2f71c27c [SPARK-10500][SPARKR] sparkr.zip cannot be created if /R/lib is unwritable Backport https://github.com/apache/spark/pull/9390 and https://github.com/apache/spark/pull/9744 to branch-1.5. Author: Sun Rui

spark git commit: [SPARK-12054] [SQL] Consider nullability of expression in codegen

2015-12-18 Thread davies
Repository: spark Updated Branches: refs/heads/master ee444fe4b -> 4af647c77 [SPARK-12054] [SQL] Consider nullability of expression in codegen This could simplify the generated code for expressions that are not nullable. This PR fixes many bugs related to nullability. Author: Davies Liu
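The idea behind the change above can be sketched in plain Python. This is a hypothetical toy generator, not Spark's actual codegen; the variable names (`leftIsNull`, `rightIsNull`) are illustrative. When an expression is declared non-nullable, no null-check branch needs to be emitted at all:

```python
def gen_add(left_nullable: bool, right_nullable: bool) -> str:
    """Emit pseudo-Java for `left + right`, guarding only nullable inputs."""
    lines = []
    if left_nullable or right_nullable:
        guards = []
        if left_nullable:
            guards.append("leftIsNull")
        if right_nullable:
            guards.append("rightIsNull")
        lines.append(f"boolean isNull = {' || '.join(guards)};")
        lines.append("long value = isNull ? -1L : left + right;")
    else:
        # Neither side can be null, so no conditional is generated at all.
        lines.append("boolean isNull = false;")
        lines.append("long value = left + right;")
    return "\n".join(lines)
```

Treating a non-nullable expression as nullable is merely slower, but the reverse (skipping a needed guard) is a correctness bug, which is why nullability tracking has to be exact.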

spark git commit: [SPARK-11619][SQL] cannot use UDTF in DataFrame.selectExpr

2015-12-18 Thread yhuai
Repository: spark Updated Branches: refs/heads/master 278281828 -> ee444fe4b [SPARK-11619][SQL] cannot use UDTF in DataFrame.selectExpr Description of the problem, from cloud-fan: Actually this line:
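For context on the entry above: a table-generating function (UDTF) maps one input row to zero or more output rows, unlike a scalar UDF. A minimal stand-in in plain Python (no Spark required, names are illustrative) shows the shape:

```python
def explode_row(row):
    """UDTF-style function: one row in, many rows out."""
    name, tags = row
    for tag in tags:
        yield (name, tag)

rows = [("a", ["x", "y"]), ("b", [])]
# Flat-map the generator over the input, as a UDTF in a projection would.
result = [out for row in rows for out in explode_row(row)]
```

Because the output cardinality differs per row, a UDTF cannot be planned like an ordinary column expression, which is why `selectExpr` needs special handling for it.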

spark git commit: [SPARK-12350][CORE] Don't log errors when requested stream is not found.

2015-12-18 Thread vanzin
Repository: spark Updated Branches: refs/heads/master ea59b0f3a -> 278281828 [SPARK-12350][CORE] Don't log errors when requested stream is not found. If a client requests a non-existent stream, just send a failure message back, without logging any error on the server side (since it's not a
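A rough sketch of the behavior described above (function and names are illustrative, not Spark's transport API): a missing stream is a client-side condition, so the server replies with a failure message rather than logging an error:

```python
import logging

log = logging.getLogger("transport")

def handle_stream_request(streams: dict, stream_id: str):
    """Look up a stream; reply with failure instead of logging an error."""
    chunk = streams.get(stream_id)
    if chunk is None:
        # Intentionally no log.error here: the server is healthy,
        # the client simply asked for something that does not exist.
        return ("failure", f"Stream {stream_id} was not found")
    return ("success", chunk)
```

The design choice is about log hygiene: server-side error logs should signal server problems, not routine client mistakes.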

spark git commit: [SPARK-12218][SQL] Invalid splitting of nested AND expressions in Data Source filter API

2015-12-18 Thread yhuai
Repository: spark Updated Branches: refs/heads/branch-1.6 df0231952 -> 1dc71ec77 [SPARK-12218][SQL] Invalid splitting of nested AND expressions in Data Source filter API JIRA: https://issues.apache.org/jira/browse/SPARK-12218 When creating filters for Parquet/ORC, we should not push nested

spark git commit: [SPARK-12218][SQL] Invalid splitting of nested AND expressions in Data Source filter API

2015-12-18 Thread yhuai
Repository: spark Updated Branches: refs/heads/master 4af647c77 -> 41ee7c57a [SPARK-12218][SQL] Invalid splitting of nested AND expressions in Data Source filter API JIRA: https://issues.apache.org/jira/browse/SPARK-12218 When creating filters for Parquet/ORC, we should not push nested AND
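The bug in the three entries above can be demonstrated with a toy translator (a simplified conceptual model, not Spark's real `DataSourceStrategy` code). An `AND` below the top level must convert as a whole: if either side is unsupported by the data source, the entire `AND` is unsupported, because the remaining half would only be re-checked for top-level conjuncts after the scan:

```python
def translate(expr):
    """Convert a tuple-encoded predicate tree to a source filter, or None."""
    kind = expr[0]
    if kind == "and":
        # The fix: do NOT split a nested AND. Pushing only one side of it
        # under an OR/NOT would change query results.
        left, right = translate(expr[1]), translate(expr[2])
        return ("and", left, right) if left and right else None
    if kind == "or":
        left, right = translate(expr[1]), translate(expr[2])
        return ("or", left, right) if left and right else None
    if kind == "supported":
        return expr
    return None  # unsupported leaf

# (a AND unsupported) OR b: must not be pushed down at all.
tree = ("or", ("and", ("supported", "a"), ("unsup",)), ("supported", "b"))
```

Pushing just `a OR b` here would drop rows, since the original predicate accepts any row satisfying `b` alone only through the OR, not through a conjunct that was silently discarded.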

spark git commit: [SPARK-12218][SQL] Invalid splitting of nested AND expressions in Data Source filter API

2015-12-18 Thread yhuai
Repository: spark Updated Branches: refs/heads/branch-1.5 d2f71c27c -> afffe24c0 [SPARK-12218][SQL] Invalid splitting of nested AND expressions in Data Source filter API JIRA: https://issues.apache.org/jira/browse/SPARK-12218 When creating filters for Parquet/ORC, we should not push nested

spark git commit: [SPARK-11985][STREAMING][KINESIS][DOCS] Update Kinesis docs

2015-12-18 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 6eba65525 -> 2377b707f [SPARK-11985][STREAMING][KINESIS][DOCS] Update Kinesis docs - Provide an example of the `message handler` - Provide a note on KPL record de-aggregation - Fix typos Author: Burak Yavuz Closes #9970 from

spark git commit: [SPARK-11985][STREAMING][KINESIS][DOCS] Update Kinesis docs

2015-12-18 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-1.6 bd33d4ee8 -> eca401ee5 [SPARK-11985][STREAMING][KINESIS][DOCS] Update Kinesis docs - Provide an example of the `message handler` - Provide a note on KPL record de-aggregation - Fix typos Author: Burak Yavuz Closes #9970

spark git commit: [SPARK-12345][CORE] Do not send SPARK_HOME through Spark submit REST interface

2015-12-18 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 007a32f90 -> ba9332edd [SPARK-12345][CORE] Do not send SPARK_HOME through Spark submit REST interface It is usually an invalid location on the remote machine executing the job. It is picked up by the Mesos support in cluster mode, and most
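A minimal sketch of the fix above (illustrative names, not the actual `RestSubmissionClient` code): strip `SPARK_HOME` from the environment forwarded in the cluster-mode submission, since that path only makes sense on the submitting machine:

```python
def build_submit_env(local_env: dict) -> dict:
    """Environment forwarded to the cluster, minus machine-local paths."""
    # SPARK_HOME points at the submitter's local install; on the remote
    # executor host it is usually an invalid location, so drop it.
    return {k: v for k, v in local_env.items() if k != "SPARK_HOME"}

env = build_submit_env({"SPARK_HOME": "/opt/spark", "SPARK_ENV_LOADED": "1"})
```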

spark git commit: [SPARK-12411][CORE] Decrease executor heartbeat timeout to match heartbeat interval

2015-12-18 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master 60da0e11f -> 0514e8d4b [SPARK-12411][CORE] Decrease executor heartbeat timeout to match heartbeat interval Previously, the RPC timeout was the default network timeout, which is the same value the driver uses to determine dead executors.
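A back-of-the-envelope illustration of the timing problem above (the numbers are assumed defaults for the sketch, not read from a real config): if a single heartbeat RPC can block for the full network timeout, the executor's heartbeat thread falls many intervals behind, which is exactly when the driver starts counting it as dead:

```python
heartbeat_interval_s = 10   # e.g. spark.executor.heartbeatInterval
network_timeout_s = 120     # old per-ask timeout (network timeout)

# Worst case with the old timeout: one stuck ask spans 12 intervals.
missed_intervals_old = network_timeout_s // heartbeat_interval_s
# With the ask timeout lowered to the heartbeat interval, a stuck ask
# fails fast and at most one interval is missed before retrying.
missed_intervals_new = heartbeat_interval_s // heartbeat_interval_s
```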

[3/3] spark git commit: Revert "[SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with Mesos cluster mode."

2015-12-18 Thread andrewor14
Revert "[SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with Mesos cluster mode." This reverts commit ad8c1f0b840284d05da737fb2cc5ebf8848f4490. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a78a91f4

[1/3] spark git commit: Revert "[SPARK-12413] Fix Mesos ZK persistence"

2015-12-18 Thread andrewor14
Repository: spark Updated Branches: refs/heads/master ba9332edd -> a78a91f4d Revert "[SPARK-12413] Fix Mesos ZK persistence" This reverts commit 2bebaa39d9da33bc93ef682959cd42c1968a6a3e. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit:

[2/3] spark git commit: Revert "[SPARK-12345][MESOS] Properly filter out SPARK_HOME in the Mesos REST server"

2015-12-18 Thread andrewor14
Revert "[SPARK-12345][MESOS] Properly filter out SPARK_HOME in the Mesos REST server" This reverts commit 8184568810e8a2e7d5371db2c6a0366ef4841f70. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8a9417bc Tree:

spark git commit: [SPARK-12404][SQL] Ensure objects passed to StaticInvoke is Serializable

2015-12-18 Thread marmbrus
Repository: spark Updated Branches: refs/heads/master 41ee7c57a -> 6eba65525 [SPARK-12404][SQL] Ensure objects passed to StaticInvoke is Serializable Now `StaticInvoke` receives `Any` as an object; `StaticInvoke` itself can be serialized, but sometimes the object passed in is not serializable. For
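A loose Python analogue of the check implied above (Spark's actual check is on the JVM side with `java.io.Serializable`; here `pickle` stands in, and the helper name is invented): verify that a value handed to a serializable expression can itself be serialized before embedding it:

```python
import pickle

def require_serializable(obj):
    """Fail early if `obj` cannot survive serialization."""
    try:
        pickle.dumps(obj)
    except Exception as e:
        raise TypeError(f"object {obj!r} is not serializable") from e
    return obj
```

Failing at construction time gives a clear error at the call site, instead of an opaque serialization failure later when the expression is shipped to executors.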

spark git commit: Revert "[SPARK-12365][CORE] Use ShutdownHookManager where Runtime.getRuntime.addShutdownHook() is called"

2015-12-18 Thread andrewor14
Repository: spark Updated Branches: refs/heads/branch-1.6 1dc71ec77 -> 3b903e44b Revert "[SPARK-12365][CORE] Use ShutdownHookManager where Runtime.getRuntime.addShutdownHook() is called" This reverts commit 4af64385b085002d94c54d11bbd144f9f026bbd8. Project:

spark git commit: [SPARK-12091] [PYSPARK] Deprecate the JAVA-specific deserialized storage levels

2015-12-18 Thread davies
Repository: spark Updated Branches: refs/heads/master a78a91f4d -> 499ac3e69 [SPARK-12091] [PYSPARK] Deprecate the JAVA-specific deserialized storage levels The current default storage level of the Python persist API is MEMORY_ONLY_SER. This is different from the default level MEMORY_ONLY in the
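The rationale for the entry above: PySpark data is always pickled before it reaches the JVM, so the `*_SER` storage-level variants add nothing in Python. A deprecation shim might map them like this (an illustrative sketch, not the actual pyspark code):

```python
DEPRECATED_LEVELS = {
    "MEMORY_ONLY_SER": "MEMORY_ONLY",
    "MEMORY_ONLY_SER_2": "MEMORY_ONLY_2",
    "MEMORY_AND_DISK_SER": "MEMORY_AND_DISK",
    "MEMORY_AND_DISK_SER_2": "MEMORY_AND_DISK_2",
}

def resolve_level(name: str) -> str:
    """Map a deprecated Java-specific level to its Python equivalent."""
    if name in DEPRECATED_LEVELS:
        # A real shim would also emit a DeprecationWarning here.
        return DEPRECATED_LEVELS[name]
    return name
```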