[
https://issues.apache.org/jira/browse/SPARK-24194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193329#comment-17193329
]
CHC commented on SPARK-24194:
-----------------------------
I hit this on Spark 3.0.0 as well, when running:
{code:java}
insert overwrite tmp.spark3_snap partition(dt='2020-09-10')
select id from tmp.spark3_snap where dt='2020-09-09';
{code}
the query fails with: "Error in query: Cannot overwrite a path that is also being read
from.;"
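A workaround that is commonly used for this class of error (a sketch, not something confirmed in this ticket; the staging table name is hypothetical) is to materialize the source rows into a separate table first, so the overwrite no longer reads from the path it writes to:
{code:java}
-- Hypothetical staging table: materialize the rows first...
create table tmp.spark3_snap_stage as
select id from tmp.spark3_snap where dt='2020-09-09';

-- ...then overwrite the target partition from the staging copy.
insert overwrite table tmp.spark3_snap partition(dt='2020-09-10')
select id from tmp.spark3_snap_stage;

drop table tmp.spark3_snap_stage;
{code}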
> HadoopFsRelation cannot overwrite a path that is also being read from
> ---------------------------------------------------------------------
>
> Key: SPARK-24194
> URL: https://issues.apache.org/jira/browse/SPARK-24194
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 3.1.0
> Environment: spark master
> Reporter: yangz
> Priority: Minor
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> When
> {code:java}
> INSERT OVERWRITE TABLE territory_count_compare select * from
> territory_count_compare where shop_count!=real_shop_count
> {code}
> and territory_count_compare is a table stored as Parquet, the statement fails with the error:
> Cannot overwrite a path that is also being read from
>
> And MetastoreDataSourceSuite.scala contains a test case for this pattern:
>
> {code:java}
> table(tableName).write.mode(SaveMode.Overwrite).insertInto(tableName)
> {code}
>
> But when territory_count_compare is a plain Hive table, there is no error.
> So I think the cause is that when INSERT OVERWRITE writes to a HadoopFsRelation
> with a static partition, Spark deletes the target partition up front, before the
> read has finished. The deletion should instead happen when the job commits.
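For the DataFrame API form quoted above, one way to break the read-write cycle (a sketch under assumptions, not a fix verified against this ticket) is to truncate the plan's lineage before the overwrite, e.g. with localCheckpoint, so the write no longer references the files it is about to delete:
{code:java}
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.col

// Hypothetical workaround sketch: materialize the filtered rows so the
// logical plan of the write no longer reads the target table's files.
val snapshot = spark.table("territory_count_compare")
  .filter(col("shop_count") =!= col("real_shop_count"))
  .localCheckpoint()  // cuts the lineage back to the source files

snapshot.write.mode(SaveMode.Overwrite).insertInto("territory_count_compare")
{code}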
--
This message was sent by Atlassian Jira
(v8.3.4#803005)