Repository: spark
Updated Branches:
refs/heads/master 0370abdfd -> 40e52a27c
[CORE][TESTS] minor fix of JavaSerializerSuite
No JIRA was created.
The original test passed because the class cast is lazy (it occurs only when
the object's method is invoked).
Author: Jeff Zhang
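The laziness described above is a consequence of JVM type erasure: a cast to a generic type is unchecked, so the `ClassCastException` surfaces only when a mis-typed element is actually used. A minimal standalone sketch (plain Scala, no Spark involved):

```scala
// Due to erasure, casting List[Int] to List[String] succeeds immediately:
val ints: List[Int] = List(1, 2, 3)
val cast: List[String] = ints.asInstanceOf[List[String]]  // no exception here

// The ClassCastException appears only when an element is used as a String:
val failedLazily =
  try { cast.head.length; false }
  catch { case _: ClassCastException => true }

println(s"cast succeeded eagerly, failed lazily: $failedLazily")
```

This is why a test that only performs the cast, without invoking anything on the result, can pass spuriously.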
Repository: spark
Updated Branches:
refs/heads/master 40e52a27c -> 2bebaa39d
[SPARK-12413] Fix Mesos ZK persistence
I believe this fixes SPARK-12413. I'm currently running an integration test to
verify.
Author: Michael Gummelt
Closes #10366 from
Repository: spark
Updated Branches:
refs/heads/branch-1.6 9177ea383 -> df0231952
[SPARK-12413] Fix Mesos ZK persistence
I believe this fixes SPARK-12413. I'm currently running an integration test to
verify.
Author: Michael Gummelt
Closes #10366 from
Repository: spark
Updated Branches:
refs/heads/branch-1.5 a8d14cc06 -> d2f71c27c
[SPARK-10500][SPARKR] sparkr.zip cannot be created if /R/lib is unwritable
Backport https://github.com/apache/spark/pull/9390 and
https://github.com/apache/spark/pull/9744 to branch-1.5.
Author: Sun Rui
Repository: spark
Updated Branches:
refs/heads/master ee444fe4b -> 4af647c77
[SPARK-12054] [SQL] Consider nullability of expression in codegen
This could simplify the generated code for expressions that are not nullable.
This PR fixes many bugs related to nullability.
Author: Davies Liu
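To illustrate the simplification, here is a hypothetical hand-written analogue of the generated code (not Spark's actual codegen output): when an input may be null, every evaluation needs a guard; once the expression is proven non-nullable, the guard can be elided and primitives used directly.

```scala
// Nullable inputs force a null guard on every evaluation (sketch):
def addNullable(a: java.lang.Integer, b: java.lang.Integer): java.lang.Integer =
  if (a == null || b == null) null
  else java.lang.Integer.valueOf(a.intValue + b.intValue)

// If nullability analysis proves both sides non-null, the guard disappears:
def addNonNull(a: Int, b: Int): Int = a + b

println(addNullable(null, 1))  // null
println(addNonNull(2, 3))      // 5
```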
Repository: spark
Updated Branches:
refs/heads/master 278281828 -> ee444fe4b
[SPARK-11619][SQL] cannot use UDTF in DataFrame.selectExpr
Description of the problem from cloud-fan
Actually this line:
Repository: spark
Updated Branches:
refs/heads/master ea59b0f3a -> 278281828
[SPARK-12350][CORE] Don't log errors when requested stream is not found.
If a client requests a non-existent stream, just send a failure message
back, without logging any error on the server side (since it's not a
Repository: spark
Updated Branches:
refs/heads/branch-1.6 df0231952 -> 1dc71ec77
[SPARK-12218][SQL] Invalid splitting of nested AND expressions in Data Source
filter API
JIRA: https://issues.apache.org/jira/browse/SPARK-12218
When creating filters for Parquet/ORC, we should not push nested
Repository: spark
Updated Branches:
refs/heads/master 4af647c77 -> 41ee7c57a
[SPARK-12218][SQL] Invalid splitting of nested AND expressions in Data Source
filter API
JIRA: https://issues.apache.org/jira/browse/SPARK-12218
When creating filters for Parquet/ORC, we should not push nested AND
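The unsoundness is easy to see with plain boolean logic: a nested `AND` under a `NOT` cannot be split into independently pushed conjuncts, because `NOT(a AND b)` equals `NOT(a) OR NOT(b)`, not `NOT(a) AND NOT(b)`. A standalone sketch (plain Scala collections, not Spark's filter API):

```scala
// Rows of (a, b) boolean attributes:
val rows = Seq((true, true), (true, false), (false, true), (false, false))

// Correct predicate: NOT(a AND b)
val kept = rows.filter { case (a, b) => !(a && b) }

// Wrongly split into two pushed-down conjuncts: NOT(a) AND NOT(b)
val wronglySplit = rows.filter { case (a, b) => !a && !b }

println(kept.size)          // 3 rows survive
println(wronglySplit.size)  // only 1: rows are incorrectly dropped
```

Pushing the split conjuncts to Parquet/ORC would silently drop rows that the original predicate accepts.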
Repository: spark
Updated Branches:
refs/heads/branch-1.5 d2f71c27c -> afffe24c0
[SPARK-12218][SQL] Invalid splitting of nested AND expressions in Data Source
filter API
JIRA: https://issues.apache.org/jira/browse/SPARK-12218
When creating filters for Parquet/ORC, we should not push nested
Repository: spark
Updated Branches:
refs/heads/master 6eba65525 -> 2377b707f
[SPARK-11985][STREAMING][KINESIS][DOCS] Update Kinesis docs
- Provide an example on `message handler`
- Provide a note on KPL record de-aggregation
- Fix typos
Author: Burak Yavuz
Closes #9970 from
Repository: spark
Updated Branches:
refs/heads/branch-1.6 bd33d4ee8 -> eca401ee5
[SPARK-11985][STREAMING][KINESIS][DOCS] Update Kinesis docs
- Provide an example on `message handler`
- Provide a note on KPL record de-aggregation
- Fix typos
Author: Burak Yavuz
Closes #9970
Repository: spark
Updated Branches:
refs/heads/master 007a32f90 -> ba9332edd
[SPARK-12345][CORE] Do not send SPARK_HOME through Spark submit REST interface
It is usually an invalid location on the remote machine executing the job.
It is picked up by the Mesos support in cluster mode, and most
Repository: spark
Updated Branches:
refs/heads/master 60da0e11f -> 0514e8d4b
[SPARK-12411][CORE] Decrease executor heartbeat timeout to match heartbeat
interval
Previously, the RPC timeout was the default network timeout, which is the same
value the driver uses to determine dead executors.
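A sketch of the settings involved (these are real Spark configuration keys; the values shown are what I believe were the defaults at the time):

```
# spark-defaults.conf (sketch)

# Executors send heartbeats to the driver at this interval:
spark.executor.heartbeatInterval  10s

# Previously the heartbeat RPC also used the generic network timeout,
# the same threshold the driver uses to declare an executor dead:
spark.network.timeout             120s
```

With both timeouts equal, a single hung heartbeat RPC could outlive the executor-liveness window instead of failing fast and being retried.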
Revert "[SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with
Mesos cluster mode."
This reverts commit ad8c1f0b840284d05da737fb2cc5ebf8848f4490.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a78a91f4
Repository: spark
Updated Branches:
refs/heads/master ba9332edd -> a78a91f4d
Revert "[SPARK-12413] Fix Mesos ZK persistence"
This reverts commit 2bebaa39d9da33bc93ef682959cd42c1968a6a3e.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Revert "[SPARK-12345][MESOS] Properly filter out SPARK_HOME in the Mesos REST
server"
This reverts commit 8184568810e8a2e7d5371db2c6a0366ef4841f70.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8a9417bc
Tree:
Repository: spark
Updated Branches:
refs/heads/master 41ee7c57a -> 6eba65525
[SPARK-12404][SQL] Ensure objects passed to StaticInvoke is Serializable
Now `StaticInvoke` receives `Any` as an object; `StaticInvoke` itself can be
serialized, but sometimes the object passed in is not serializable.
For
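The underlying failure mode is standard Java serialization: an object whose class does not implement `Serializable` fails only when serialization is actually attempted, e.g. when it is shipped as part of a plan or closure. A minimal, Spark-free sketch (the `Helper`/`Wrapper` names are hypothetical):

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

class Helper                               // note: does NOT extend Serializable
case class Wrapper(h: Helper)              // serializable shell, bad field inside

val out = new ObjectOutputStream(new ByteArrayOutputStream())
val failed =
  try { out.writeObject(Wrapper(new Helper)); false }
  catch { case _: NotSerializableException => true }

println(s"serialization failed: $failed")
```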
Repository: spark
Updated Branches:
refs/heads/branch-1.6 1dc71ec77 -> 3b903e44b
Revert "[SPARK-12365][CORE] Use ShutdownHookManager where
Runtime.getRuntime.addShutdownHook() is called"
This reverts commit 4af64385b085002d94c54d11bbd144f9f026bbd8.
Project:
Repository: spark
Updated Branches:
refs/heads/master a78a91f4d -> 499ac3e69
[SPARK-12091] [PYSPARK] Deprecate the JAVA-specific deserialized storage levels
The current default storage level of the Python persist API is MEMORY_ONLY_SER.
This is different from the default level MEMORY_ONLY in the