Author: pwendell
Date: Fri Oct 12 05:16:35 2018
New Revision: 30008
Log:
Apache Spark 2.4.1-SNAPSHOT-2018_10_11_22_02-1961f8e docs
[This commit notification would consist of 1472 parts,
which exceeds the limit of 50 parts, so it was shortened to the summary.]
Repository: spark
Updated Branches:
refs/heads/branch-2.4 e80ab130e -> 1961f8e62
[SPARK-25690][SQL] Analyzer rule HandleNullInputsForUDF does not stabilize and
can be applied infinitely
## What changes were proposed in this pull request?
The HandleNullInputsForUDF rule can generate new If
Repository: spark
Updated Branches:
refs/heads/master c9d7d83ed -> 368513048
[SPARK-25690][SQL] Analyzer rule HandleNullInputsForUDF does not stabilize and
can be applied infinitely
## What changes were proposed in this pull request?
The HandleNullInputsForUDF rule can generate new If node
Author: pwendell
Date: Fri Oct 12 03:17:31 2018
New Revision: 30007
Log:
Apache Spark 3.0.0-SNAPSHOT-2018_10_11_20_03-39872af docs
[This commit notification would consist of 1481 parts,
which exceeds the limit of 50 parts, so it was shortened to the summary.]
Repository: spark
Updated Branches:
refs/heads/master 39872af88 -> c9d7d83ed
[SPARK-25388][TEST][SQL] Detect incorrect nullable of DataType in the result
## What changes were proposed in this pull request?
This PR correctly causes an assertion failure when an incorrect nullable of a
DataType in
Author: pwendell
Date: Fri Oct 12 01:18:58 2018
New Revision: 30006
Log:
Apache Spark 2.3.3-SNAPSHOT-2018_10_11_18_02-5324a85 docs
[This commit notification would consist of 1443 parts,
which exceeds the limit of 50 parts, so it was shortened to the summary.]
Author: pwendell
Date: Fri Oct 12 01:18:07 2018
New Revision: 30005
Log:
Apache Spark 2.4.1-SNAPSHOT-2018_10_11_18_02-e80ab13 docs
[This commit notification would consist of 1472 parts,
which exceeds the limit of 50 parts, so it was shortened to the summary.]
Repository: spark
Updated Branches:
refs/heads/master a00181418 -> 39872af88
[SPARK-25684][SQL] Organize header related codes in CSV datasource
## What changes were proposed in this pull request?
1. Move `CSVDataSource.makeSafeHeader` to `CSVUtils.makeSafeHeader` (as is).
- Historically
Author: pwendell
Date: Thu Oct 11 23:16:40 2018
New Revision: 30004
Log:
Apache Spark 3.0.0-SNAPSHOT-2018_10_11_16_02-a001814 docs
[This commit notification would consist of 1481 parts,
which exceeds the limit of 50 parts, so it was shortened to the summary.]
Repository: spark
Updated Branches:
refs/heads/master 69f5e9cce -> a00181418
[SPARK-25598][STREAMING][BUILD][TEST-MAVEN] Remove flume connector in Spark 3
## What changes were proposed in this pull request?
Removes all vestiges of Flume in the build, for Spark 3.
I don't think this needs Jenkins config changes.
## How was this patch tested?
Existing tests.
Closes
http://git-wip-us.apache.org/repos/asf/spark/blob/a0018141/external/flume-sink/src/test/scala/org/apache/spark/streaming/flume/sink/SparkSinkSuite.scala
http://git-wip-us.apache.org/repos/asf/spark/blob/a0018141/external/flume/src/test/scala/org/apache/spark/streaming/flume/FlumePollingStreamSuite.scala
Repository: spark
Updated Branches:
refs/heads/branch-2.4 cd4065596 -> e80ab130e
[SPARK-25674][SQL] If the records are incremented by more than 1 at a time, the
number of bytes might rarely ever get updated
## What changes were proposed in this pull request?
If the records are incremented by
Repository: spark
Updated Branches:
refs/heads/branch-2.3 7102aeeb2 -> 5324a85a2
[SPARK-25674][SQL] If the records are incremented by more than 1 at a time, the
number of bytes might rarely ever get updated
## What changes were proposed in this pull request?
If the records are incremented by
Repository: spark
Updated Branches:
refs/heads/master adf648b5b -> 69f5e9cce
[SPARK-25674][SQL] If the records are incremented by more than 1 at a time, the
number of bytes might rarely ever get updated
## What changes were proposed in this pull request?
If the records are incremented by more
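The truncated description above concerns a metric that is only flushed when the record count lands exactly on a multiple of an update period; if records arrive in increments larger than 1, the exact multiple can be stepped over and the bytes metric rarely (or never) updates. A minimal sketch of the problem, with illustrative names rather than Spark's actual metrics code:

```scala
val UPDATE_PERIOD = 1000L

// Buggy check: flush the bytes metric only when the running record count
// is an exact multiple of the period.
def shouldUpdateExact(records: Long): Boolean =
  records % UPDATE_PERIOD == 0

// Safer check: flush whenever at least a full period has elapsed since the
// last flush, regardless of the increment size.
def shouldUpdateSince(records: Long, lastUpdated: Long): Boolean =
  records - lastUpdated >= UPDATE_PERIOD

// Counting by 2 from 1 keeps the count odd, so it never equals an (even)
// multiple of 1000 and the exact-multiple check never fires at all.
val counts    = Iterator.iterate(1L)(_ + 2).take(5000).toSeq
val exactHits = counts.count(shouldUpdateExact)              // never fires
val sinceHits = counts.count(c => shouldUpdateSince(c, 0L))  // fires once c >= 1000
```

The elapsed-since-last-update form is robust to any step size, which is the shape of fix the JIRA title calls for.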
Repository: spark
Updated Branches:
refs/heads/master 1bb63ae51 -> adf648b5b
[SPARK-25615][SQL][TEST] Improve the test runtime of KafkaSinkSuite: streaming
write to non-existing topic
## What changes were proposed in this pull request?
Specify `kafka.max.block.ms` to 10 seconds while
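For context on the option named above: in Spark's Kafka sink, options prefixed with `kafka.` are passed through to the underlying Kafka producer, and the producer's `max.block.ms` bounds how long a send blocks waiting for metadata (for example, for a topic that does not exist; the default is 60 seconds). A hedged sketch of lowering it in a streaming write follows; the server address, topic, and checkpoint path are illustrative, and this needs a running broker plus a streaming DataFrame `df` with a string `value` column:

```scala
// Sketch: cap the producer's metadata wait at 10s so a streaming write to a
// missing topic fails fast instead of blocking for the 60s default.
val query = df.writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // illustrative address
  .option("kafka.max.block.ms", "10000")               // 10 seconds
  .option("topic", "nonexistent-topic")                // illustrative topic
  .option("checkpointLocation", "/tmp/ckpt")           // illustrative path
  .start()
```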
Repository: spark
Updated Branches:
refs/heads/master 65f75db61 -> 1bb63ae51
[SPARK-24109][CORE] Remove class SnappyOutputStreamWrapper
## What changes were proposed in this pull request?
Remove SnappyOutputStreamWrapper and other workarounds now that the new Snappy
release fixes these issues.
See also
Repository: spark
Updated Branches:
refs/heads/master 8115e6b26 -> 65f75db61
[MINOR][SQL] Remove redundant semicolons
## What changes were proposed in this pull request?
Remove redundant semicolons in SortMergeJoinExec, thanks.
## How was this patch tested?
N/A
Closes #22695 from
Repository: spark
Updated Branches:
refs/heads/master 83e19d5b8 -> 8115e6b26
[SPARK-25662][SQL][TEST] Refactor DataSourceReadBenchmark to use main method
## What changes were proposed in this pull request?
1. Refactor DataSourceReadBenchmark
## How was this patch tested?
Manually tested
Author: pwendell
Date: Thu Oct 11 19:17:33 2018
New Revision: 30002
Log:
Apache Spark 3.0.0-SNAPSHOT-2018_10_11_12_03-83e19d5 docs
[This commit notification would consist of 1482 parts,
which exceeds the limit of 50 parts, so it was shortened to the summary.]
Repository: spark
Updated Branches:
refs/heads/master 80813e198 -> 83e19d5b8
[SPARK-25700][SQL] Creates ReadSupport in only Append Mode in Data Source V2
write path
## What changes were proposed in this pull request?
This PR proposes to avoid creating a `ReadSupport` and reading the schema when it