[ https://issues.apache.org/jira/browse/IMPALA-3558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dimitris Tsirogiannis resolved IMPALA-3558.
-------------------------------------------
    Resolution: Not A Bug

Not an Impala issue. This is related to 
https://issues.apache.org/jira/browse/HADOOP-13230. 

> DROP TABLE PURGE on S3A table may not delete externally written files
> ---------------------------------------------------------------------
>
>                 Key: IMPALA-3558
>                 URL: https://issues.apache.org/jira/browse/IMPALA-3558
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Catalog
>    Affects Versions: Impala 2.6.0
>            Reporter: Sailesh Mukil
>            Assignee: Aaron Fabbri
>              Labels: s3
>
> To reproduce, do the following (see the command sketch after this description):
> * In Hive, "create table purge_test_s3 (x int) location 
> 's3a://[bucket]/purge_test_s3';"
> * Use the AWS CLI or the AWS web interface to copy files to the 
> above-mentioned location.
> * In Hive, "drop table purge_test_s3 purge;"
> The Metastore logs say:
> 2016-05-20 17:01:41,259 INFO hive.metastore.hivemetastoressimpl: 
> [pool-4-thread-103]: Not moving s3a://[bucket]/purge_test_s3 to trash
> 2016-05-20 17:01:41,364 INFO hive.metastore.hivemetastoressimpl: 
> [pool-4-thread-103]: Deleted the diretory s3a://[bucket]/purge_test_s3
> However, the files are still there. Oddly, the Hadoop S3A 
> connector reads those files correctly but is unable to delete them.
> If we copy the files with the Hadoop CLI instead of the AWS CLI or the 
> AWS web interface, "drop table ... purge" works just fine; inserting the 
> files through Hive works fine as well.
> The root cause of the problem has been found and is explained in 
> Aaron's comment below.
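
For reference, a minimal command-line sketch of the reproduction described above. This is an illustrative reconstruction, not taken from the issue: it assumes the hive and aws CLIs are installed and configured, data.txt names a hypothetical local file, and [bucket] remains a placeholder as in the original.

    # 1. Create the table over an S3A location.
    hive -e "create table purge_test_s3 (x int) location 's3a://[bucket]/purge_test_s3';"

    # 2. Copy a file into that location out-of-band; this is the path
    #    that leaves files behind after DROP TABLE ... PURGE.
    aws s3 cp data.txt s3://[bucket]/purge_test_s3/data.txt

    # 2b. Copying through the Hadoop CLI instead is the variant the
    #     report says works correctly:
    # hadoop fs -put data.txt s3a://[bucket]/purge_test_s3/

    # 3. Drop the table with PURGE, then check whether the externally
    #    written file survived.
    hive -e "drop table purge_test_s3 purge;"
    aws s3 ls s3://[bucket]/purge_test_s3/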


