[
https://issues.apache.org/jira/browse/HIVE-24882?focusedWorklogId=565576&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-565576
]
ASF GitHub Bot logged work on HIVE-24882:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 13/Mar/21 00:00
Start Date: 13/Mar/21 00:00
Worklog Time Spent: 10m
Work Description: nareshpr opened a new pull request #2069:
URL: https://github.com/apache/hive/pull/2069
### What changes were proposed in this pull request?
A compaction reattempt should clear the files left behind by the previous failed
attempt before running the current attempt.
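
For illustration, a minimal sketch of the intended cleanup, assuming a hypothetical helper built on the Hadoop `FileSystem` API (the actual change lives in `CompactorMR`; the class and method names below are made up for the example):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CompactionAttemptCleanup {
  /**
   * Hypothetical helper: before a re-attempted compaction task starts writing,
   * remove any output left behind by a previous, failed attempt so that the
   * ORC writer does not hit FileAlreadyExistsException when creating the file.
   */
  static void clearStaleAttemptOutput(Configuration conf, Path bucketFile) throws IOException {
    FileSystem fs = bucketFile.getFileSystem(conf);
    if (fs.exists(bucketFile)) {
      // Recursively delete the stale file (or directory) from the old attempt.
      fs.delete(bucketFile, true);
    }
  }
}
```

With a cleanup along these lines, a task re-attempt would recreate the temporary delete-delta bucket files from scratch instead of failing on the leftovers.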
### Why are the changes needed?
A compaction task can be pre-empted by YARN for resources, or a transient network
issue can cause HDFS read failures; when the task is re-attempted on a new
NodeManager it can succeed. Currently, the re-attempt fails because of stale files
left behind by the old, failed attempt.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
No test case is included.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 565576)
Remaining Estimate: 0h
Time Spent: 10m
> Compaction task reattempt fails with FileAlreadyExistsException for
> DeleteEventWriter
> -------------------------------------------------------------------------------------
>
> Key: HIVE-24882
> URL: https://issues.apache.org/jira/browse/HIVE-24882
> Project: Hive
> Issue Type: Bug
> Reporter: Naresh P R
> Assignee: Naresh P R
> Priority: Major
> Time Spent: 10m
> Remaining Estimate: 0h
>
> If the first attempt of a compaction task is pre-empted by YARN or fails because of
> environmental issues, re-attempted tasks fail with FileAlreadyExistsException
> {noformat}
> Error: org.apache.hadoop.fs.FileAlreadyExistsException: /warehouse/tablespace/managed/hive/test.db/acid_table/dept=cse/_tmp_xxx/delete_delta_0000001_0000010/bucket_00000
> at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:380)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2453)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2351)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:774)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:462)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
> at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
> at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:278)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1211)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1190)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1128)
> at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:531)
> at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:528)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:542)
> at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:469)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
> at org.apache.orc.impl.PhysicalFsWriter.<init>(PhysicalFsWriter.java:95)
> at org.apache.orc.impl.WriterImpl.<init>(WriterImpl.java:177)
> at org.apache.hadoop.hive.ql.io.orc.WriterImpl.<init>(WriterImpl.java:94)
> at org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:378)
> at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRawRecordWriter(OrcOutputFormat.java:299)
> at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.getDeleteEventWriter(CompactorMR.java:1084)
> at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:995)
> at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:958){noformat}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)