spark git commit: [SPARK-25521][SQL] Job id showing null in the logs when insert into command Job is finished.

2018-10-04 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 5ae20cf1a -> 459700727 [SPARK-25521][SQL] Job id showing null in the logs when insert into command Job is finished. ## What changes were proposed in this pull request? As part of the insert command in FileFormatWriter, a job context is
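The notification is truncated. Purely as a hypothetical sketch of the area involved (the tracker string and stage id below are placeholders, not the actual fix), this shows how a file-write job can be given a concrete Hadoop `JobID` up front so that the log line emitted when the job is committed carries a real id instead of null:

```scala
import java.text.SimpleDateFormat
import java.util.{Date, Locale}

import org.apache.hadoop.mapreduce.{JobID, TaskAttemptID, TaskID, TaskType}

object JobIdSketch {
  def main(args: Array[String]): Unit = {
    // Derive a job tracker id from a timestamp, similar in spirit to how
    // Spark names write jobs; the exact format here is an assumption.
    val jobTrackerId =
      new SimpleDateFormat("yyyyMMddHHmmss", Locale.US).format(new Date())

    val stageId = 0                                  // placeholder stage id
    val jobId = new JobID(jobTrackerId, stageId)

    // Task and attempt ids derived from the job id; these are what a Hadoop
    // task attempt context is built from during the write.
    val taskId = new TaskID(jobId, TaskType.MAP, 0)
    val taskAttemptId = new TaskAttemptID(taskId, 0)

    // With a concrete JobID in hand, the commit log message can reference it
    // rather than printing null.
    println(s"Write Job $jobId committed (task attempt $taskAttemptId)")
  }
}
```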

svn commit: r29886 - in /dev/spark/3.0.0-SNAPSHOT-2018_10_04_20_02-44c1e1a-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-10-04 Thread pwendell
Author: pwendell Date: Fri Oct 5 03:16:51 2018 New Revision: 29886 Log: Apache Spark 3.0.0-SNAPSHOT-2018_10_04_20_02-44c1e1a docs [This commit notification would consist of 1485 parts, which exceeds the limit of 50, so it was shortened to this summary.]

spark git commit: Revert "[SPARK-25408] Move to more idiomatic Java 8"

2018-10-04 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 44c1e1ab1 -> 5ae20cf1a Revert "[SPARK-25408] Move to more idiomatic Java 8" This reverts commit 44c1e1ab1c26560371831b1593f96f30344c4363. Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit:

spark git commit: [SPARK-25408] Move to more idiomatic Java 8

2018-10-04 Thread srowen
Repository: spark Updated Branches: refs/heads/master 8113b9c96 -> 44c1e1ab1 [SPARK-25408] Move to more idiomatic Java 8 While working on another PR, I noticed that there is quite a lot of legacy Java in there that can be cleaned up. For example, the use of features from Java 8, such as: -

spark git commit: [SPARK-25605][TESTS] Run cast string to timestamp tests for a subset of timezones

2018-10-04 Thread lixiao
Repository: spark Updated Branches: refs/heads/master f27d96b9f -> 8113b9c96 [SPARK-25605][TESTS] Run cast string to timestamp tests for a subset of timezones ## What changes were proposed in this pull request? The test `cast string to timestamp` used to run for all time zones. So it run
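The body is cut off, but the general technique of exercising a time-zone-sensitive test against a random sample of zones rather than every available one can be sketched with plain JDK calls. The sample size of 50 is an arbitrary assumption, not the value used in the PR:

```scala
import java.util.TimeZone

import scala.util.Random

object TimezoneSampleSketch {
  def main(args: Array[String]): Unit = {
    val allZones = TimeZone.getAvailableIDs.toSeq

    // Running against every zone (600+) is slow; a random subset still gives
    // cross-zone coverage at a fraction of the cost.
    val sampled = Random.shuffle(allZones).take(50)

    sampled.foreach { zoneId =>
      val tz = TimeZone.getTimeZone(zoneId)
      // A real test would set this zone and cast strings to timestamps here.
      println(s"would run 'cast string to timestamp' under $zoneId " +
        s"(offset ${tz.getRawOffset / 3600000} h)")
    }
  }
}
```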

spark git commit: [SPARK-25606][TEST] Reduce DateExpressionsSuite test time costs in Jenkins

2018-10-04 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 85a93595d -> f27d96b9f [SPARK-25606][TEST] Reduce DateExpressionsSuite test time costs in Jenkins ## What changes were proposed in this pull request? Reduce `DateExpressionsSuite.Hour` test time costs in Jenkins by reducing iteration

spark git commit: [SPARK-25609][TESTS] Reduce time of test for SPARK-22226

2018-10-04 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 3ae4f07de -> 85a93595d [SPARK-25609][TESTS] Reduce time of test for SPARK-22226 ## What changes were proposed in this pull request? The PR changes the test introduced for SPARK-22226, so that we don't run analysis and optimization on the

spark git commit: [SPARK-17159][STREAM] Significant speed up for running spark streaming against Object store.

2018-10-04 Thread srowen
Repository: spark Updated Branches: refs/heads/master 95ae20946 -> 3ae4f07de [SPARK-17159][STREAM] Significant speed up for running spark streaming against Object store. ## What changes were proposed in this pull request? Original work by Steve Loughran. Based on #17745. This is a minimal
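The notification is truncated; as a rough, hypothetical illustration of why listing strategy matters on object stores (where every metadata call is an HTTP round trip), the sketch below contrasts per-file status lookups with reusing the `FileStatus` objects already returned by a single `listStatus` call. It is not the actual patch, just the general idea:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ListingSketch {
  // Expensive on object stores: one extra metadata round trip per file.
  def modTimesSlow(fs: FileSystem, dir: Path): Seq[Long] =
    fs.listStatus(dir).toSeq.map(s => fs.getFileStatus(s.getPath).getModificationTime)

  // Cheaper: the single listing already carries the modification times.
  def modTimesFast(fs: FileSystem, dir: Path): Seq[Long] =
    fs.listStatus(dir).toSeq.map(_.getModificationTime)

  def main(args: Array[String]): Unit = {
    val fs = FileSystem.getLocal(new Configuration())
    val dir = new Path(args.headOption.getOrElse("."))
    println(s"mod times: ${modTimesFast(fs, dir).take(5).mkString(", ")}")
  }
}
```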

spark git commit: [SPARK-25479][TEST] Refactor DatasetBenchmark to use main method

2018-10-04 Thread dongjoon
Repository: spark Updated Branches: refs/heads/master 71c24aad3 -> 95ae20946 [SPARK-25479][TEST] Refactor DatasetBenchmark to use main method ## What changes were proposed in this pull request? Refactor `DatasetBenchmark` to use main method. Generate benchmark result: ```sh
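The shell command showing how results are generated is cut off. Purely as an illustration of the main-method benchmark style (the object and the timing helper below are invented for this sketch, not Spark's benchmark framework), a self-contained micro-benchmark driven from `main` might look like:

```scala
object DatasetBenchmarkSketch {
  // Tiny timing helper standing in for a real benchmark framework.
  private def time(name: String, iters: Int)(body: => Unit): Unit = {
    val start = System.nanoTime()
    var i = 0
    while (i < iters) { body; i += 1 }
    val elapsedMs = (System.nanoTime() - start) / 1e6
    println(f"$name%-30s $elapsedMs%10.1f ms ($iters iterations)")
  }

  def main(args: Array[String]): Unit = {
    val data = (0 until 1000000).toArray

    // Each case is a named measurement; because the whole thing runs from a
    // main method, the output can be redirected into a results file.
    time("sum via while loop", 10) {
      var s = 0L; var i = 0
      while (i < data.length) { s += data(i); i += 1 }
    }
    time("sum via Array.map/sum", 10) {
      data.map(_.toLong).sum
    }
  }
}
```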

svn commit: r29877 - in /dev/spark/3.0.0-SNAPSHOT-2018_10_04_08_02-71c24aa-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-10-04 Thread pwendell
Author: pwendell Date: Thu Oct 4 15:17:17 2018 New Revision: 29877 Log: Apache Spark 3.0.0-SNAPSHOT-2018_10_04_08_02-71c24aa docs [This commit notification would consist of 1485 parts, which exceeds the limit of 50, so it was shortened to this summary.]

svn commit: r29874 - in /dev/spark/2.4.1-SNAPSHOT-2018_10_04_06_02-c9bb83a-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-10-04 Thread pwendell
Author: pwendell Date: Thu Oct 4 13:16:49 2018 New Revision: 29874 Log: Apache Spark 2.4.1-SNAPSHOT-2018_10_04_06_02-c9bb83a docs [This commit notification would consist of 1472 parts, which exceeds the limit of 50, so it was shortened to this summary.]

spark git commit: [SPARK-25602][SQL] SparkPlan.getByteArrayRdd should not consume the input when not necessary

2018-10-04 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.4 0763b758d -> c9bb83a7d [SPARK-25602][SQL] SparkPlan.getByteArrayRdd should not consume the input when not necessary ## What changes were proposed in this pull request? In `SparkPlan.getByteArrayRdd`, we should only call `it.hasNext`

spark git commit: [SPARK-25602][SQL] SparkPlan.getByteArrayRdd should not consume the input when not necessary

2018-10-04 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 927e52793 -> 71c24aad3 [SPARK-25602][SQL] SparkPlan.getByteArrayRdd should not consume the input when not necessary ## What changes were proposed in this pull request? In `SparkPlan.getByteArrayRdd`, we should only call `it.hasNext` when
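The description is truncated; the general pitfall it alludes to can be shown with a plain iterator. In a take-style loop, probing `hasNext` before checking the count can force the source to compute one extra element, so the quota check should short-circuit first. The iterator and limit below are made up for illustration and are not Spark's `getByteArrayRdd`:

```scala
object LazyTakeSketch {
  // Collect at most `n` elements (n < 0 means "all"), without touching the
  // iterator once the quota is reached.
  def takeUpTo[T](iter: Iterator[T], n: Int): Seq[T] = {
    val buf = scala.collection.mutable.ArrayBuffer.empty[T]
    var count = 0
    // Check the count first: once it reaches n, hasNext is never called, so
    // whatever work producing the next element entails never runs.
    while ((n < 0 || count < n) && iter.hasNext) {
      buf += iter.next()
      count += 1
    }
    buf.toSeq
  }

  def main(args: Array[String]): Unit = {
    // An iterator whose hasNext has a visible side effect, to show the
    // difference the ordering of the loop condition makes.
    var probes = 0
    val source = Iterator.from(1)
    val wrapped = new Iterator[Int] {
      def hasNext: Boolean = { probes += 1; source.hasNext }
      def next(): Int = source.next()
    }
    println(takeUpTo(wrapped, 3))             // elements 1, 2, 3
    println(s"hasNext probed $probes times")  // 3, not 4
  }
}
```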