yangz created SPARK-24194:
-----------------------------

             Summary: HadoopFsRelation cannot overwrite a path that is also being read from
                 Key: SPARK-24194
                 URL: https://issues.apache.org/jira/browse/SPARK-24194
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 2.4.0
         Environment: spark master
            Reporter: yangz
             Fix For: 2.4.0


When running
{code:java}
INSERT OVERWRITE TABLE territory_count_compare select * from
territory_count_compare where shop_count!=real_shop_count
{code}
and territory_count_compare is a parquet-backed table, the query fails with the error:

Cannot overwrite a path that is also being read from
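For reference, a minimal self-contained reproduction of the setup described above might look roughly like this (the schema and the CREATE TABLE statement are invented here for illustration; only the table name and the failing INSERT OVERWRITE come from the report):

{code:java}
// Hypothetical repro sketch; the schema and setup are assumptions, not from the report.
import org.apache.spark.sql.SparkSession

object SelfOverwriteRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("self-overwrite-repro")
      .enableHiveSupport()
      .getOrCreate()

    // A parquet datasource table (HadoopFsRelation), not a plain Hive table.
    spark.sql(
      "CREATE TABLE territory_count_compare (shop_count INT, real_shop_count INT) USING parquet")
    spark.sql("INSERT INTO territory_count_compare VALUES (1, 1), (2, 3)")

    // Reading and overwriting the same parquet-backed table in one statement
    // fails with: "Cannot overwrite a path that is also being read from"
    spark.sql(
      """INSERT OVERWRITE TABLE territory_count_compare
        |SELECT * FROM territory_count_compare
        |WHERE shop_count != real_shop_count""".stripMargin)

    spark.stop()
  }
}
{code}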

The file MetastoreDataSourceSuite.scala already contains a test case that exercises this pattern:

{code:java}
table(tableName).write.mode(SaveMode.Overwrite).insertInto(tableName)
{code}
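For context, the same self-insert can also be written against the DataFrameWriter API outside the test suite. A hypothetical standalone sketch (assuming the table already exists as a parquet datasource table):

{code:java}
// Hypothetical sketch of the same self-insert via the DataFrameWriter API.
import org.apache.spark.sql.{SaveMode, SparkSession}

object SelfInsertIntoSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("self-insertInto-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Read the table and overwrite it in the same job; for a parquet
    // datasource table this hits the same
    // "Cannot overwrite a path that is also being read from" error.
    spark.table("territory_count_compare")
      .write
      .mode(SaveMode.Overwrite)
      .insertInto("territory_count_compare")

    spark.stop()
  }
}
{code}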

However, when territory_count_compare is a plain Hive table, the same statement runs without error.

So I think the cause is that when doing an insert overwrite into a HadoopFsRelation with a static partition, Spark deletes the output partition first, before the query runs. The deletion should instead happen when the job is committed.
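To illustrate the proposed ordering only (plain file-system code with invented names, not Spark's actual commit protocol): the new output would be staged first, and the old partition data would be removed only when the job commits, so the source stays readable while the query runs.

{code:java}
// Illustrative sketch only; not Spark's real insert/commit code path.
// The point is the ordering: stage the new output first, and delete the
// old partition data only at commit time.
import java.nio.file.{Files, Path}
import scala.collection.JavaConverters._

object DeferredOverwriteSketch {

  // Current (problematic) ordering for a HadoopFsRelation with a static partition:
  //   1. delete the target partition directory up front
  //   2. run the query, which can then no longer read its own input
  //
  // Proposed ordering:
  //   1. run the query, writing its output into a staging directory
  //   2. at job commit, delete the old partition data and move the staged output in
  def overwriteAtCommit(target: Path, writeQueryOutput: Path => Unit): Unit = {
    // Stage next to the target so the final move stays on one file system.
    val staging = Files.createTempDirectory(target.getParent, ".staging-")
    writeQueryOutput(staging)   // the query can still read the untouched target
    deleteRecursively(target)   // the old data is removed only at "commit" time
    Files.move(staging, target) // publish the staged output
  }

  private def deleteRecursively(p: Path): Unit = {
    if (Files.isDirectory(p)) {
      Files.list(p).iterator().asScala.toList.foreach(deleteRecursively)
    }
    Files.deleteIfExists(p)
  }
}
{code}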


