[ https://issues.apache.org/jira/browse/HIVE-9389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422883#comment-15422883 ]
Murali Parimi commented on HIVE-9389:
-------------------------------------
I don't think this issue is fixed in version 1.2 either. Is anyone tracking
this?
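
For anyone hitting this in the meantime, explicitly clearing the target
directory before the insert appears to avoid the stale files. A minimal
sketch, assuming the path from the report below and that the directory is
safe to remove wholesale:

{code:sql}
-- Workaround sketch: delete any previous output ourselves, since the
-- OVERWRITE may leave old files such as 000000_1 behind (this bug).
-- -f suppresses the error when the directory does not exist yet.
dfs -rm -r -f hdfs://nameservice/path;

FROM myview
INSERT OVERWRITE DIRECTORY 'hdfs://nameservice/path/'
SELECT COUNT(DISTINCT mycol);
{code}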
> INSERT OVERWRITE DIRECTORY fails to delete old data files
> ---------------------------------------------------------
>
> Key: HIVE-9389
> URL: https://issues.apache.org/jira/browse/HIVE-9389
> Project: Hive
> Issue Type: Bug
> Affects Versions: 0.13.1
> Environment: CDH 5.3.0, non-secure hdfs, perm checking off
> Reporter: Andy Skelton
>
> {code:sql}
> FROM myview
> INSERT OVERWRITE DIRECTORY 'hdfs://nameservice/path/'
> SELECT COUNT(DISTINCT mycol);
> {code}
> This query always produces one row. Sometimes the output is two files,
> {{000000_0}} and {{000000_1}}, one of which is empty. We have also seen new
> results in {{000000_0}} while stale results from a previous run remained in
> {{000000_1}}.
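> A quick way to check for the stale file (a hypothetical sketch, reusing the
> path above) is to list the output directory after each run:
> {code}
> -- From the Hive CLI: a leftover 000000_1 with an older timestamp than
> -- 000000_0 points at stale data that the overwrite failed to delete.
> dfs -ls hdfs://nameservice/path/;
> {code}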
> We were alerted to this because Sqoop exports the output files in filename
> order: it wrote the new value first and then overwrote it with the stale
> one, which triggered an alert when the value in our database stopped
> increasing.