svn commit: r30008 - in /dev/spark/2.4.1-SNAPSHOT-2018_10_11_22_02-1961f8e-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-10-11 Thread pwendell
Author: pwendell Date: Fri Oct 12 05:16:35 2018 New Revision: 30008 Log: Apache Spark 2.4.1-SNAPSHOT-2018_10_11_22_02-1961f8e docs [This commit notification would consist of 1472 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]

spark git commit: [SPARK-25690][SQL] Analyzer rule HandleNullInputsForUDF does not stabilize and can be applied infinitely

2018-10-11 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.4 e80ab130e -> 1961f8e62 [SPARK-25690][SQL] Analyzer rule HandleNullInputsForUDF does not stabilize and can be applied infinitely ## What changes were proposed in this pull request? The HandleNullInputsForUDF rule can generate new If

spark git commit: [SPARK-25690][SQL] Analyzer rule HandleNullInputsForUDF does not stabilize and can be applied infinitely

2018-10-11 Thread lixiao
Repository: spark Updated Branches: refs/heads/master c9d7d83ed -> 368513048 [SPARK-25690][SQL] Analyzer rule HandleNullInputsForUDF does not stabilize and can be applied infinitely ## What changes were proposed in this pull request? The HandleNullInputsForUDF rule can generate new If node
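The summary above describes an analyzer rule that keeps rewriting its own output, so fixed-point iteration never converges. A minimal Python sketch of the idea (names and structures are illustrative only; Spark's real rule is Scala and operates on Catalyst expression trees):

```python
# Miniature model of why a non-idempotent analyzer rule never stabilizes
# under fixed-point iteration. ("null_check", e) stands in for the
# If(IsNull(input), null, udf) wrapper the rule introduces.

def buggy_rule(expr):
    # Wraps unconditionally, so every pass produces a new, bigger tree.
    return ("null_check", expr)

def fixed_rule(expr):
    # Idempotent: leaves an already-wrapped expression alone.
    if isinstance(expr, tuple) and expr[0] == "null_check":
        return expr
    return ("null_check", expr)

def to_fixed_point(rule, expr, max_iter=10):
    """Apply `rule` until the tree stops changing, as the analyzer does."""
    for _ in range(max_iter):
        new = rule(expr)
        if new == expr:
            return expr, True   # stabilized
        expr = new
    return expr, False          # hit the iteration cap without converging
```

With `buggy_rule` the driver hits the iteration cap; with `fixed_rule` it stabilizes after a single wrap.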

svn commit: r30007 - in /dev/spark/3.0.0-SNAPSHOT-2018_10_11_20_03-39872af-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-10-11 Thread pwendell
Author: pwendell Date: Fri Oct 12 03:17:31 2018 New Revision: 30007 Log: Apache Spark 3.0.0-SNAPSHOT-2018_10_11_20_03-39872af docs [This commit notification would consist of 1481 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]

spark git commit: [SPARK-25388][TEST][SQL] Detect incorrect nullable of DataType in the result

2018-10-11 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 39872af88 -> c9d7d83ed [SPARK-25388][TEST][SQL] Detect incorrect nullable of DataType in the result ## What changes were proposed in this pull request? This PR can correctly cause assertion failure when incorrect nullable of DataType in
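The change above makes tests fail when a result's declared nullability does not match its data. A hedged Python sketch of that kind of check (field layout and names are hypothetical; Spark's actual test helper works on Catalyst `DataType`s in Scala):

```python
# Sketch: walk result rows against a declared schema and raise when a
# field declared non-nullable actually holds a null.

def assert_nullable_consistent(schema, rows):
    """schema: list of (name, nullable) pairs; rows: sequences of values."""
    for row in rows:
        for (name, nullable), value in zip(schema, row):
            if value is None and not nullable:
                raise AssertionError(
                    f"field '{name}' is declared non-nullable but is null")
```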

svn commit: r30006 - in /dev/spark/2.3.3-SNAPSHOT-2018_10_11_18_02-5324a85-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-10-11 Thread pwendell
Author: pwendell Date: Fri Oct 12 01:18:58 2018 New Revision: 30006 Log: Apache Spark 2.3.3-SNAPSHOT-2018_10_11_18_02-5324a85 docs [This commit notification would consist of 1443 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]

svn commit: r30005 - in /dev/spark/2.4.1-SNAPSHOT-2018_10_11_18_02-e80ab13-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-10-11 Thread pwendell
Author: pwendell Date: Fri Oct 12 01:18:07 2018 New Revision: 30005 Log: Apache Spark 2.4.1-SNAPSHOT-2018_10_11_18_02-e80ab13 docs [This commit notification would consist of 1472 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]

spark git commit: [SPARK-25684][SQL] Organize header related codes in CSV datasource

2018-10-11 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master a00181418 -> 39872af88 [SPARK-25684][SQL] Organize header related codes in CSV datasource ## What changes were proposed in this pull request? 1. Move `CSVDataSource.makeSafeHeader` to `CSVUtils.makeSafeHeader` (as is). - Historically

svn commit: r30004 - in /dev/spark/3.0.0-SNAPSHOT-2018_10_11_16_02-a001814-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-10-11 Thread pwendell
Author: pwendell Date: Thu Oct 11 23:16:40 2018 New Revision: 30004 Log: Apache Spark 3.0.0-SNAPSHOT-2018_10_11_16_02-a001814 docs [This commit notification would consist of 1481 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]

[3/3] spark git commit: [SPARK-25598][STREAMING][BUILD][TEST-MAVEN] Remove flume connector in Spark 3

2018-10-11 Thread srowen
[SPARK-25598][STREAMING][BUILD][TEST-MAVEN] Remove flume connector in Spark 3 ## What changes were proposed in this pull request? Removes all vestiges of Flume in the build, for Spark 3. I don't think this needs Jenkins config changes. ## How was this patch tested? Existing tests. Closes

[2/3] spark git commit: [SPARK-25598][STREAMING][BUILD][TEST-MAVEN] Remove flume connector in Spark 3

2018-10-11 Thread srowen
http://git-wip-us.apache.org/repos/asf/spark/blob/a0018141/external/flume-sink/src/test/scala/org/apache/spark/streaming/flume/sink/SparkSinkSuite.scala -- diff --git

[1/3] spark git commit: [SPARK-25598][STREAMING][BUILD][TEST-MAVEN] Remove flume connector in Spark 3

2018-10-11 Thread srowen
Repository: spark Updated Branches: refs/heads/master 69f5e9cce -> a00181418 http://git-wip-us.apache.org/repos/asf/spark/blob/a0018141/external/flume/src/test/scala/org/apache/spark/streaming/flume/FlumePollingStreamSuite.scala

spark git commit: [SPARK-25674][SQL] If the records are incremented by more than 1 at a time, the number of bytes might rarely ever get updated

2018-10-11 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-2.4 cd4065596 -> e80ab130e [SPARK-25674][SQL] If the records are incremented by more than 1 at a time, the number of bytes might rarely ever get updated ## What changes were proposed in this pull request? If the records are incremented by

spark git commit: [SPARK-25674][SQL] If the records are incremented by more than 1 at a time, the number of bytes might rarely ever get updated

2018-10-11 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-2.3 7102aeeb2 -> 5324a85a2 [SPARK-25674][SQL] If the records are incremented by more than 1 at a time, the number of bytes might rarely ever get updated ## What changes were proposed in this pull request? If the records are incremented by

spark git commit: [SPARK-25674][SQL] If the records are incremented by more than 1 at a time, the number of bytes might rarely ever get updated

2018-10-11 Thread srowen
Repository: spark Updated Branches: refs/heads/master adf648b5b -> 69f5e9cce [SPARK-25674][SQL] If the records are incremented by more than 1 at a time, the number of bytes might rarely ever get updated ## What changes were proposed in this pull request? If the records are incremented by more
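The bug described above arises when a byte-count refresh is gated on the record count hitting an exact multiple of an update period: a bulk increment can jump over every multiple forever. A simplified Python sketch of the bug and a threshold-based fix (class and field names are illustrative, not Spark's):

```python
# Sketch of the metric-update bug. The buggy path only refreshes bytes
# when the record count lands exactly on a multiple of the period, which
# increments larger than 1 can skip indefinitely.
UPDATE_PERIOD = 1000

class ReadMetrics:
    def __init__(self):
        self.records = 0
        self.bytes_read = 0
        self._next_update = UPDATE_PERIOD

    def inc_records_buggy(self, n, current_bytes):
        self.records += n
        if self.records % UPDATE_PERIOD == 0:   # misses e.g. 999 -> 1001
            self.bytes_read = current_bytes

    def inc_records_fixed(self, n, current_bytes):
        self.records += n
        if self.records >= self._next_update:   # threshold, not modulo
            self.bytes_read = current_bytes
            self._next_update = self.records + UPDATE_PERIOD
```

Stepping 999 then +2 records, the buggy path never updates `bytes_read`; the threshold path updates as soon as the count crosses 1000.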

spark git commit: [SPARK-25615][SQL][TEST] Improve the test runtime of KafkaSinkSuite: streaming write to non-existing topic

2018-10-11 Thread srowen
Repository: spark Updated Branches: refs/heads/master 1bb63ae51 -> adf648b5b [SPARK-25615][SQL][TEST] Improve the test runtime of KafkaSinkSuite: streaming write to non-existing topic ## What changes were proposed in this pull request? Specify `kafka.max.block.ms` to 10 seconds while
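The speedup above comes from bounding how long the Kafka producer blocks when the target topic does not exist. A hedged sketch of the option map such a test might pass to the Kafka sink (broker address and topic are hypothetical; in Spark's Kafka integration, `kafka.`-prefixed keys are forwarded to the underlying producer):

```python
# Sketch: sink options for a streaming write to a topic that may not exist.
# kafka.max.block.ms bounds how long the producer's send() blocks before
# failing; 10 s here instead of the producer default of 60 s, which is
# what made the test slow.
kafka_options = {
    "kafka.bootstrap.servers": "localhost:9092",  # hypothetical broker
    "topic": "nonexistent-topic",                 # hypothetical topic
    "kafka.max.block.ms": "10000",
}
```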

spark git commit: [SPARK-24109][CORE] Remove class SnappyOutputStreamWrapper

2018-10-11 Thread srowen
Repository: spark Updated Branches: refs/heads/master 65f75db61 -> 1bb63ae51 [SPARK-24109][CORE] Remove class SnappyOutputStreamWrapper ## What changes were proposed in this pull request? Remove SnappyOutputStreamWrapper and other workaround now that new Snappy fixes these. See also

spark git commit: [MINOR][SQL] remove Redundant semicolons

2018-10-11 Thread srowen
Repository: spark Updated Branches: refs/heads/master 8115e6b26 -> 65f75db61 [MINOR][SQL] remove Redundant semicolons ## What changes were proposed in this pull request? remove Redundant semicolons in SortMergeJoinExec, thanks. ## How was this patch tested? N/A Closes #22695 from

spark git commit: [SPARK-25662][SQL][TEST] Refactor DataSourceReadBenchmark to use main method

2018-10-11 Thread dbtsai
Repository: spark Updated Branches: refs/heads/master 83e19d5b8 -> 8115e6b26 [SPARK-25662][SQL][TEST] Refactor DataSourceReadBenchmark to use main method ## What changes were proposed in this pull request? 1. Refactor DataSourceReadBenchmark ## How was this patch tested? Manually tested

svn commit: r30002 - in /dev/spark/3.0.0-SNAPSHOT-2018_10_11_12_03-83e19d5-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-10-11 Thread pwendell
Author: pwendell Date: Thu Oct 11 19:17:33 2018 New Revision: 30002 Log: Apache Spark 3.0.0-SNAPSHOT-2018_10_11_12_03-83e19d5 docs [This commit notification would consist of 1482 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]

spark git commit: [SPARK-25700][SQL] Creates ReadSupport in only Append Mode in Data Source V2 write path

2018-10-11 Thread dongjoon
Repository: spark Updated Branches: refs/heads/master 80813e198 -> 83e19d5b8 [SPARK-25700][SQL] Creates ReadSupport in only Append Mode in Data Source V2 write path ## What changes were proposed in this pull request? This PR proposes to avoid to make a readsupport and read schema when it