spark git commit: [SPARK-24997][SQL] Enable support of MINUS ALL

2018-08-02 Thread lixiao
Repository: spark Updated Branches: refs/heads/master b0d6967d4 -> 19a453191 [SPARK-24997][SQL] Enable support of MINUS ALL ## What changes were proposed in this pull request? Enable support for MINUS ALL, which was previously gated at AstBuilder. ## How was this patch tested? Added tests in
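For context, a minimal sketch of the now-enabled syntax: MINUS ALL behaves like EXCEPT ALL and keeps duplicate rows. The views and the local SparkSession below are illustrative only, not part of the patch.

```
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("minus-all-demo").getOrCreate()
import spark.implicits._

// Hypothetical views: t1 contains the value 1 twice, t2 removes one occurrence of it.
Seq(1, 1, 2).toDF("c").createOrReplaceTempView("t1")
Seq(1).toDF("c").createOrReplaceTempView("t2")

// MINUS ALL keeps duplicates (multiset difference), so the result should contain rows 1 and 2.
spark.sql("SELECT c FROM t1 MINUS ALL SELECT c FROM t2").show()
```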

spark git commit: [SPARK-24788][SQL] RelationalGroupedDataset.toString with unresolved exprs should not fail

2018-08-02 Thread lixiao
Repository: spark Updated Branches: refs/heads/master f45d60a5a -> b0d6967d4 [SPARK-24788][SQL] RelationalGroupedDataset.toString with unresolved exprs should not fail ## What changes were proposed in this pull request? In the current master, `toString` throws an exception when

spark git commit: [SPARK-25002][SQL] Avro: revise the output record namespace

2018-08-02 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 73dd6cf9b -> f45d60a5a [SPARK-25002][SQL] Avro: revise the output record namespace ## What changes were proposed in this pull request? Currently the output namespace starts with ".", e.g. `.topLevelRecord` Although it is valid

svn commit: r28523 - in /dev/spark/2.3.3-SNAPSHOT-2018_08_02_22_01-8080c93-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-08-02 Thread pwendell
Author: pwendell Date: Fri Aug 3 05:15:41 2018 New Revision: 28523 Log: Apache Spark 2.3.3-SNAPSHOT-2018_08_02_22_01-8080c93 docs [This commit notification would consist of 1443 parts, which exceeds the limit of 50 parts, so it was shortened to this summary.]

spark git commit: [SPARK-24966][SQL] Implement precedence rules for set operations.

2018-08-02 Thread lixiao
Repository: spark Updated Branches: refs/heads/master b3f2911ee -> 73dd6cf9b [SPARK-24966][SQL] Implement precedence rules for set operations. ## What changes were proposed in this pull request? Currently the set operations INTERSECT, UNION and EXCEPT are assigned the same precedence. This
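As a hedged sketch of the change (the views and SparkSession named `spark` are assumptions for illustration): following the SQL standard, INTERSECT is now expected to bind tighter than UNION and EXCEPT, so mixed queries no longer group strictly left to right.

```
// Assumes a SparkSession named `spark`.
import spark.implicits._
Seq(1, 2).toDF("c").createOrReplaceTempView("t1")
Seq(2, 3).toDF("c").createOrReplaceTempView("t2")
Seq(3, 4).toDF("c").createOrReplaceTempView("t3")

// Under the new precedence rules this is evaluated as t1 UNION (t2 INTERSECT t3),
// giving rows 1, 2 and 3, rather than ((t1 UNION t2) INTERSECT t3), which would give only 3.
spark.sql("SELECT c FROM t1 UNION SELECT c FROM t2 INTERSECT SELECT c FROM t3").show()
```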

spark git commit: [PYSPARK] Updates to Accumulators

2018-08-02 Thread irashid
Repository: spark Updated Branches: refs/heads/branch-2.3 5b187a85a -> 8080c937d [PYSPARK] Updates to Accumulators (cherry picked from commit 15fc2372269159ea2556b028d4eb8860c4108650) Project: http://git-wip-us.apache.org/repos/asf/spark/repo Commit:

svn commit: r28522 - in /dev/spark/2.4.0-SNAPSHOT-2018_08_02_20_01-b3f2911-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-08-02 Thread pwendell
Author: pwendell Date: Fri Aug 3 03:15:49 2018 New Revision: 28522 Log: Apache Spark 2.4.0-SNAPSHOT-2018_08_02_20_01-b3f2911 docs [This commit notification would consist of 1470 parts, which exceeds the limit of 50 parts, so it was shortened to this summary.]

spark git commit: [SPARK-24945][SQL] Switching to uniVocity 2.7.3

2018-08-02 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master 7cf16a7fa -> b3f2911ee [SPARK-24945][SQL] Switching to uniVocity 2.7.3 ## What changes were proposed in this pull request? In this PR, I propose upgrading the uniVocity parser from **2.6.3** to **2.7.3**. The newer version includes a fix

spark git commit: [SPARK-24773] Avro: support logical timestamp type with different precisions

2018-08-02 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master 29077a1d1 -> 7cf16a7fa [SPARK-24773] Avro: support logical timestamp type with different precisions ## What changes were proposed in this pull request? Support reading/writing Avro logical timestamp type with different precisions
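A rough sketch of round-tripping a timestamp column through Avro; it assumes the Avro data source (`format("avro")`) is on the classpath, a SparkSession named `spark`, and a throwaway output path.

```
import java.sql.Timestamp
import spark.implicits._

val df = Seq(Timestamp.valueOf("2018-08-02 12:00:00")).toDF("ts")
df.write.format("avro").mode("overwrite").save("/tmp/avro-ts-demo")

// The timestamp column should round-trip through Avro's logical timestamp types.
spark.read.format("avro").load("/tmp/avro-ts-demo").printSchema()
```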

spark git commit: [SPARK-24795][CORE][FOLLOWUP] Combine BarrierTaskContext with BarrierTaskContextImpl

2018-08-02 Thread meng
Repository: spark Updated Branches: refs/heads/master bbdcc3bf6 -> 29077a1d1 [SPARK-24795][CORE][FOLLOWUP] Combine BarrierTaskContext with BarrierTaskContextImpl ## What changes were proposed in this pull request? According to https://github.com/apache/spark/pull/21758#discussion_r206746905

spark git commit: [SPARK-22219][SQL] Refactor code to get a value for "spark.sql.codegen.comments"

2018-08-02 Thread srowen
Repository: spark Updated Branches: refs/heads/master d0bc3ed67 -> bbdcc3bf6 [SPARK-22219][SQL] Refactor code to get a value for "spark.sql.codegen.comments" ## What changes were proposed in this pull request? This PR refactors code to get a value for "spark.sql.codegen.comments" by
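For context, the flag itself is set like any other Spark configuration; a minimal sketch, assuming it is supplied at session build time, looks like this.

```
import org.apache.spark.sql.SparkSession

// Enabling "spark.sql.codegen.comments" makes the generated Java code include comments,
// which helps when inspecting whole-stage codegen output while debugging.
val spark = SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.codegen.comments", "true")
  .getOrCreate()
```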

svn commit: r28520 - in /dev/spark/2.4.0-SNAPSHOT-2018_08_02_16_02-d0bc3ed-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-08-02 Thread pwendell
Author: pwendell Date: Thu Aug 2 23:16:01 2018 New Revision: 28520 Log: Apache Spark 2.4.0-SNAPSHOT-2018_08_02_16_02-d0bc3ed docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50 parts, so it was shortened to this summary.]

spark git commit: [SPARK-24896][SQL] Uuid should produce different values for each execution in streaming query

2018-08-02 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master efef55388 -> d0bc3ed67 [SPARK-24896][SQL] Uuid should produce different values for each execution in streaming query ## What changes were proposed in this pull request? `Uuid`'s results depend on the random seed given during analysis. Thus
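As a small illustration (not the patch itself), `uuid()` is a built-in SQL function; with this change, a streaming query should produce fresh values on every execution instead of repeating values derived from a seed fixed at analysis time. Assumes a SparkSession named `spark`.

```
val df = spark.range(3).selectExpr("id", "uuid() AS u")
df.show(false)   // each row gets a distinct UUID; in streaming, values now differ per execution
```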

spark git commit: [SPARK-24705][SQL] ExchangeCoordinator broken when duplicate exchanges reused

2018-08-02 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 02f967795 -> efef55388 [SPARK-24705][SQL] ExchangeCoordinator broken when duplicate exchanges reused ## What changes were proposed in this pull request? In the current master, `EnsureRequirements` sets the number of exchanges in

spark git commit: [SPARK-23908][SQL] Add transform function.

2018-08-02 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 0df6bf882 -> 02f967795 [SPARK-23908][SQL] Add transform function. ## What changes were proposed in this pull request? This PR adds the `transform` function, which transforms each element in an array using the given function. Optionally we can take the
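For reference, a small sketch of the function in SQL (assuming a SparkSession named `spark`); the optional second lambda argument is the element index.

```
// Add 1 to every element of the array.
spark.sql("SELECT transform(array(1, 2, 3), x -> x + 1) AS plus_one").show()

// The optional two-argument form also receives the element index.
spark.sql("SELECT transform(array(10, 20, 30), (x, i) -> x + i) AS with_index").show()
```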

svn commit: r28518 - in /dev/spark/2.4.0-SNAPSHOT-2018_08_02_12_02-0df6bf8-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-08-02 Thread pwendell
Author: pwendell Date: Thu Aug 2 19:16:55 2018 New Revision: 28518 Log: Apache Spark 2.4.0-SNAPSHOT-2018_08_02_12_02-0df6bf8 docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]

spark git commit: [BUILD] Fix lint-python.

2018-08-02 Thread ueshin
Repository: spark Updated Branches: refs/heads/master 38e4699c9 -> 0df6bf882 [BUILD] Fix lint-python. ## What changes were proposed in this pull request? This PR fixes lint-python. ``` ./python/pyspark/accumulators.py:231:9: E306 expected 1 blank line before a nested definition, found 0

spark git commit: [SPARK-24820][SPARK-24821][CORE] Fail fast when submitted job contains a barrier stage with unsupported RDD chain pattern

2018-08-02 Thread meng
Repository: spark Updated Branches: refs/heads/master ad2e63662 -> 38e4699c9 [SPARK-24820][SPARK-24821][CORE] Fail fast when submitted job contains a barrier stage with unsupported RDD chain pattern ## What changes were proposed in this pull request? Check on job submit to make sure we
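For context, a barrier stage is created through `RDD.barrier()`; a minimal sketch of a supported usage pattern (assuming a SparkSession named `spark`) is shown below, while unsupported RDD chain patterns are now rejected at job submission instead of failing later.

```
import org.apache.spark.BarrierTaskContext

val rdd = spark.sparkContext.parallelize(1 to 8, numSlices = 2)
val doubled = rdd.barrier().mapPartitions { iter =>
  val ctx = BarrierTaskContext.get()
  ctx.barrier()            // all tasks in the barrier stage reach this point together
  iter.map(_ * 2)
}
doubled.collect()
```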

spark git commit: [SPARK-24598][DOCS] State in the documentation the behavior when arithmetic operations cause overflow

2018-08-02 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 15fc23722 -> ad2e63662 [SPARK-24598][DOCS] State in the documentation the behavior when arithmetic operations cause overflow ## What changes were proposed in this pull request? According to the discussion in
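As a quick illustration of the documented behavior (my reading of the semantics being described, not part of the patch): integer arithmetic is expected to wrap around on overflow like Java primitives rather than raise an error. Assumes a SparkSession named `spark`.

```
// Int.MaxValue + 1 is expected to wrap around to Int.MinValue
// (Java-style two's-complement arithmetic) rather than fail.
spark.sql("SELECT 2147483647 + 1 AS wrapped").show()
```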

svn commit: r28514 - in /dev/spark/2.4.0-SNAPSHOT-2018_08_02_08_02-f04cd67-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-08-02 Thread pwendell
Author: pwendell Date: Thu Aug 2 15:16:27 2018 New Revision: 28514 Log: Apache Spark 2.4.0-SNAPSHOT-2018_08_02_08_02-f04cd67 docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50 parts, so it was shortened to this summary.]

spark git commit: [MINOR] remove dead code in ExpressionEvalHelper

2018-08-02 Thread srowen
Repository: spark Updated Branches: refs/heads/master d182b3d34 -> f04cd6709 [MINOR] remove dead code in ExpressionEvalHelper ## What changes were proposed in this pull request? This addresses https://github.com/apache/spark/pull/21236/files#r207078480 both

spark git commit: [SPARK-24742] Fix NullPointerException in Field Metadata

2018-08-02 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master 7be6fc3c7 -> d182b3d34 [SPARK-24742] Fix NullPointerException in Field Metadata ## What changes were proposed in this pull request? This pull request provides a fix for SPARK-24742: SQL field Metadata was throwing an exception in the
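For context, field metadata can legitimately contain null values; a minimal sketch of the kind of value involved (the key name is hypothetical, and the use of `MetadataBuilder` here is my assumption about the relevant API):

```
import org.apache.spark.sql.types.{MetadataBuilder, StringType, StructField}

// A metadata entry whose value is null; hashing or comparing such metadata
// previously could throw a NullPointerException.
val md = new MetadataBuilder().putNull("comment").build()
val field = StructField("name", StringType, nullable = true, metadata = md)
println(field.metadata.hashCode())
```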

spark git commit: [SPARK-24742] Fix NullPointerException in Field Metadata

2018-08-02 Thread srowen
Repository: spark Updated Branches: refs/heads/master 46110a589 -> 7be6fc3c7 [SPARK-24742] Fix NullPointerException in Field Metadata ## What changes were proposed in this pull request? This pull request provides a fix for SPARK-24742: SQL field Metadata was throwing an exception in the

spark git commit: [SPARK-24865][FOLLOW-UP] Remove AnalysisBarrier LogicalPlan Node

2018-08-02 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master a65736996 -> 46110a589 [SPARK-24865][FOLLOW-UP] Remove AnalysisBarrier LogicalPlan Node ## What changes were proposed in this pull request? Remove the AnalysisBarrier LogicalPlan node, which is useless now. ## How was this patch tested?

spark git commit: [SPARK-14540][CORE] Fix remaining major issues for Scala 2.12 Support

2018-08-02 Thread srowen
Repository: spark Updated Branches: refs/heads/master 275415777 -> a65736996 [SPARK-14540][CORE] Fix remaining major issues for Scala 2.12 Support ## What changes were proposed in this pull request? This PR addresses issues 2 and 3 in this

spark git commit: [SPARK-24795][CORE][FOLLOWUP] Kill all running tasks when a task in a barrier stage fails

2018-08-02 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 57d994994 -> 275415777 [SPARK-24795][CORE][FOLLOWUP] Kill all running tasks when a task in a barrier stage fails ## What changes were proposed in this pull request? Kill all running tasks when a task in a barrier stage fails in the middle.

svn commit: r28508 - in /dev/spark/2.4.0-SNAPSHOT-2018_08_02_00_02-57d9949-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-08-02 Thread pwendell
Author: pwendell Date: Thu Aug 2 07:16:48 2018 New Revision: 28508 Log: Apache Spark 2.4.0-SNAPSHOT-2018_08_02_00_02-57d9949 docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50 parts, so it was shortened to this summary.]

spark git commit: [SPARK-24557][ML] ClusteringEvaluator support array input

2018-08-02 Thread meng
Repository: spark Updated Branches: refs/heads/master 166f34618 -> 57d994994 [SPARK-24557][ML] ClusteringEvaluator support array input ## What changes were proposed in this pull request? ClusteringEvaluator support array input ## How was this patch tested? added tests Author: zhengruifeng
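A rough sketch of the newly supported input shape, using a tiny hypothetical dataset with an array-typed features column (assumes a SparkSession named `spark`):

```
import org.apache.spark.ml.evaluation.ClusteringEvaluator
import spark.implicits._

// Two well-separated clusters encoded as array<double> features plus predicted cluster labels.
val predictions = Seq(
  (Array(0.0, 0.0), 0), (Array(0.1, 0.1), 0),
  (Array(9.0, 9.0), 1), (Array(9.1, 9.1), 1)
).toDF("features", "prediction")

val evaluator = new ClusteringEvaluator()
  .setFeaturesCol("features")
  .setPredictionCol("prediction")
println(evaluator.evaluate(predictions))   // silhouette score, close to 1.0 for this data
```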

spark git commit: [SPARK-24957][SQL][FOLLOW-UP] Clean the code for AVERAGE

2018-08-02 Thread lixiao
Repository: spark Updated Branches: refs/heads/master c9914cf04 -> 166f34618 [SPARK-24957][SQL][FOLLOW-UP] Clean the code for AVERAGE ## What changes were proposed in this pull request? This PR refactors the code for AVERAGE using the DSL. ## How was this patch tested? N/A Author: Xiao Li