[ https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894833#comment-16894833 ]
Jason Dere commented on HIVE-22040:
-----------------------------------

Hi [~xiepengjie], thanks for submitting the patch. What version of Hive are you working with? The patch does not apply on Hive master because the source files have moved to standalone-metastore/metastore-server and standalone-metastore/metastore-common on master. I also was not able to get TestDropPartitions to run properly on the master branch; it seems there have been some changes to how this test is run that I am not aware of.

I would recommend adding the following qfile test to TestMiniLlapCliDriver to simulate the failure scenario you mentioned:

{noformat}
create table delete_parent_path (c1 int) partitioned by (year string, month string, day string) location 'hdfs:///tmp/delete_parent_path';
alter table delete_parent_path add partition (year='2019', month='07', day='01');
dfs -rm -r hdfs:///tmp/delete_parent_path/year=2019/month=07;
alter table delete_parent_path drop partition (year='2019', month='07', day='01');
{noformat}

However, I noticed that this no longer fails on Hive master; it appears this has already been fixed by HIVE-17472. I do agree with the change you made in Warehouse.isEmpty(). Could you either remove the existing test case or replace it with the one I suggested?
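Side note on the fix under discussion: the guard in Warehouse.isEmpty() can be sketched roughly as below. This is a standalone approximation, not Hive's actual code — the ContentLookup interface and its count are hypothetical stand-ins for FileSystem.getContentSummary(), whose file/directory counts the real method inspects. The point is only the catch: a parent path that was already removed out-of-band should read as "empty" instead of failing the drop.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.UncheckedIOException;

public class IsEmptyGuard {
    // Hypothetical stand-in for FileSystem.getContentSummary():
    // returns files + directories under a path, or throws if the path is gone.
    interface ContentLookup {
        long fileAndDirCount(String path) throws IOException;
    }

    // Guarded isEmpty(): a path whose parent was already deleted
    // (e.g. via `hadoop fs -rm -r`) is treated as empty, not as an error.
    static boolean isEmpty(ContentLookup fs, String path) {
        try {
            // A bare directory counts itself, so <= 1 means no children.
            return fs.fileAndDirCount(path) <= 1;
        } catch (FileNotFoundException e) {
            return true; // nothing left to delete; the drop can proceed
        } catch (IOException e) {
            throw new UncheckedIOException(e); // other I/O failures still surface
        }
    }

    public static void main(String[] args) {
        ContentLookup missing = p -> { throw new FileNotFoundException(p); };
        ContentLookup withFiles = p -> 3L;
        System.out.println(isEmpty(missing, "/tmp/gone"));      // prints true
        System.out.println(isEmpty(withFiles, "/tmp/present")); // prints false
    }
}
```

With this shape, the failure scenario above (metadata already deleted from the metastore, directory already deleted from HDFS) no longer bubbles a FileNotFoundException back to the client.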
> Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exist
> ------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-22040
>                 URL: https://issues.apache.org/jira/browse/HIVE-22040
>             Project: Hive
>          Issue Type: Improvement
>          Components: Metastore
>    Affects Versions: 1.2.1, 2.0.0, 3.0.0
>            Reporter: xiepengjie
>            Assignee: xiepengjie
>            Priority: Major
>         Attachments: HIVE-22040.01.patch, HIVE-22040.patch
>
> I created a managed table with multiple partition columns. When I try to drop a partition whose parent path does not exist, an exception is thrown with 'Failed to delete parent: File does not exist'. The partition's metadata in MySQL has been deleted, but the exception is still thrown, so the statement fails when connecting to HiveServer2 via JDBC from Java. This problem also exists on the master branch; I think it is very unfriendly and we should fix it.
>
> Example:
> -- First, create a managed table with multiple partition columns, and add a partition:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day string);
> alter table t1 add partition(year='2019', month='07', day='01');
> {code}
> -- Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> {code}
> -- Third, when I try to drop the partition, the metastore throws the exception 'Failed to delete parent: File does not exist':
> {code:java}
> alter table t1 drop partition (year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
> Failed to delete parent: File does not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111)
> (state=08S01,code=1)
> {code}

-- This message was sent by Atlassian JIRA (v7.6.14#76016)