GitHub user fjh100456 opened a pull request:

    https://github.com/apache/spark/pull/22707

    [SPARK-25717][SQL] Insert overwrite on a recreated external, partitioned table may produce incorrect query results

    ## What changes were proposed in this pull request?
    
    Consider the following scenario:
    
    ```
    spark.range(100).createTempView("temp")
    (0 until 3).foreach { _ =>
      spark.sql("drop table if exists tableA")
      spark.sql("create table if not exists tableA(a int) partitioned by (p 
int) location 'file:/e:/study/warehouse/tableA'")
      spark.sql("insert overwrite table tableA partition(p=1) select * from 
temp")
      spark.sql("select count(1) from tableA where p=1").show
    }
    ```
    
    We expect the count to be 100 on every iteration, but the actual results are as follows:
    
    ```
    +--------+
    |count(1)|
    +--------+
    |     100|
    +--------+
    
    +--------+
    |count(1)|
    +--------+
    |     200|
    +--------+
    
    +--------+
    |count(1)|
    +--------+
    |     300|
    +--------+
    ```
    
    When Spark executes an `insert overwrite` command, it first looks up the existing partition in the metastore and then deletes that partition's data from the file system.
    
    But for a recreated external, partitioned table, the partitions were all removed from the metastore by the `drop table` command while the data files were left on disk. Because the partition is no longer registered, `insert overwrite` never deletes the old files; each run writes new files alongside the stale ones, so the historical rows accumulate and the query returns 100, 200, 300.
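    Until the fix lands, one workaround is to delete the leftover data explicitly before recreating the table. A minimal sketch in a spark-shell session, assuming the table location from the example above (the path and variable names here are illustrative, not part of this patch):
    
    ```
    import org.apache.hadoop.fs.Path
    
    // For an external table, `drop table` removes only the metastore entry;
    // the partition directories under the table location stay on disk.
    val location = new Path("file:/e:/study/warehouse/tableA")
    val fs = location.getFileSystem(spark.sparkContext.hadoopConfiguration)
    
    spark.sql("drop table if exists tableA")
    // Remove the leftover files so the next insert overwrite starts clean.
    if (fs.exists(location)) {
      fs.delete(location, /* recursive = */ true)
    }
    ```
    
    With this cleanup in place, the loop in the example returns 100 on every iteration.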
    
    ## How was this patch tested?
    Manual test.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/fjh100456/spark InsertOverwriteCommit

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/22707.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #22707
    
----
commit 8e1b1be1ac7f7cc4fe31ccf99fb92dd7d8fc8918
Author: fjh100456 <fu.jinhua6@...>
Date:   2018-10-12T09:45:39Z

    [SPARK-25717][SQL] Insert overwrite on a recreated external, partitioned table may produce incorrect query results.
    
    ## What changes were proposed in this pull request?
    
    When Spark executes an `insert overwrite` command, it first looks up the existing partition in the metastore and then deletes that partition's data from the file system.
    
    But for a recreated external, partitioned table, the partitions were all removed from the metastore by the `drop table` command while the data files were left on disk. Because the partition is no longer registered, `insert overwrite` never deletes the old files, so the historical data is preserved and the query results are incorrect.
    
    ## How was this patch tested?
    Manual test.

----

