Repository: spark
Updated Branches:
  refs/heads/master a489567e3 -> 3ad99f166


[SPARK-18146][SQL] Avoid using Union to chain together create table and repair partition commands

## What changes were proposed in this pull request?

The behavior of `Union` over runnable commands is not well defined here, so it is safer to execute these commands explicitly, in order. The remaining use of `Union` in this way will be removed by https://github.com/apache/spark/pull/15633
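The guarantee the patch relies on can be sketched in plain Scala, without Spark. The names `createTable` and `repairPartitions` below are illustrative stand-ins, not Spark's API: the point is that issuing the two commands as separate, sequential `executePlan` calls fixes their execution order, whereas combining them as children of a Union-style operator leaves the order up to the planner.

```scala
// Minimal sketch (no Spark dependencies) of executing two dependent
// commands strictly in order. The command names are hypothetical.
object SequentialCommands {
  def run(): List[String] = {
    val executed = scala.collection.mutable.ListBuffer.empty[String]

    // Stand-ins for CreateTable and AlterTableRecoverPartitionsCommand.
    def createTable(): Unit = executed += "create table"
    def repairPartitions(): Unit = executed += "repair partitions"

    // After the patch: each command runs to completion before the next
    // starts, so the repair always sees the table the create produced.
    createTable()
    repairPartitions()
    executed.toList
  }

  def main(args: Array[String]): Unit = {
    println(run().mkString(", "))
  }
}
```

Sequencing the calls makes the dependency explicit: the partition-recovery command can only see the table once the create command has committed it to the metastore.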

## How was this patch tested?

Existing tests.

cc yhuai cloud-fan

Author: Eric Liang <[email protected]>
Author: Eric Liang <[email protected]>

Closes #15665 from ericl/spark-18146.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3ad99f16
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3ad99f16
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3ad99f16

Branch: refs/heads/master
Commit: 3ad99f166494950665c137fd5dea636afa0feb10
Parents: a489567
Author: Eric Liang <[email protected]>
Authored: Sun Oct 30 20:27:38 2016 +0800
Committer: Wenchen Fan <[email protected]>
Committed: Sun Oct 30 20:27:38 2016 +0800

----------------------------------------------------------------------
 .../scala/org/apache/spark/sql/DataFrameWriter.scala    | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/3ad99f16/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
index 7ff3522..11dd1df 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
@@ -388,16 +388,14 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {
           partitionColumnNames = partitioningColumns.getOrElse(Nil),
           bucketSpec = getBucketSpec
         )
-        val createCmd = CreateTable(tableDesc, mode, Some(df.logicalPlan))
-        val cmd = if (tableDesc.partitionColumnNames.nonEmpty &&
+        df.sparkSession.sessionState.executePlan(
+          CreateTable(tableDesc, mode, Some(df.logicalPlan))).toRdd
+        if (tableDesc.partitionColumnNames.nonEmpty &&
             df.sparkSession.sqlContext.conf.manageFilesourcePartitions) {
          // Need to recover partitions into the metastore so our saved data is visible.
-          val recoverPartitionCmd = AlterTableRecoverPartitionsCommand(tableDesc.identifier)
-          Union(createCmd, recoverPartitionCmd)
-        } else {
-          createCmd
+          df.sparkSession.sessionState.executePlan(
+            AlterTableRecoverPartitionsCommand(tableDesc.identifier)).toRdd
         }
-        df.sparkSession.sessionState.executePlan(cmd).toRdd
     }
   }
 


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
