[ https://issues.apache.org/jira/browse/HDFS-198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13886051#comment-13886051 ]

sukhendu chakraborty commented on HDFS-198:
-------------------------------------------

I am seeing the lease-expired error for partitioned Hive tables on CDH 4.5 (MR1). My use case is similar to Sujesh's above: I use dynamic date partitioning over one year (365 partitions), but have about 1B rows (roughly 300GB of data for that year), and I also cluster the data in each partition into 32 buckets.
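
For context, the load follows the usual dynamic-partition-plus-bucketing pattern, roughly like the sketch below (the trn_dt partition key matches the paths in the trace; the table and column names and the SET values are only placeholders for illustration, not the exact ones from my job):

    -- Settings typically needed for per-date dynamic partitioning with bucketing (values illustrative)
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;
    SET hive.exec.max.dynamic.partitions=1000;
    SET hive.exec.max.dynamic.partitions.pernode=400;
    SET hive.enforce.bucketing=true;

    -- Hypothetical target table: one partition per trn_dt, 32 buckets per partition
    CREATE TABLE txn_bucketed (txn_id BIGINT, amount DOUBLE)
    PARTITIONED BY (trn_dt STRING)
    CLUSTERED BY (txn_id) INTO 32 BUCKETS;

    -- Dynamic-partition insert over the full year of data (partition column last in the SELECT)
    INSERT OVERWRITE TABLE txn_bucketed PARTITION (trn_dt)
    SELECT txn_id, amount, trn_dt FROM txn_staging;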

Here is part of the error trace:
3:58:18.531 PM  ERROR   org.apache.hadoop.hdfs.DFSClient
Failed to close file /tmp/hive-user/hive_2014-01-29_15-33-51_510_4099525102053071439/_task_tmp.-ext-10000/trn_dt=20090531/_tmp.000012_0
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tmp/hive-user/hive_2014-01-29_15-33-51_510_4099525102053071439/_task_tmp.-ext-10000/trn_dt=20090531/_tmp.000012_0: File does not exist. Holder DFSClient_NONMAPREDUCE_-1745484980_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2543)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2535)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2601)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2578)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:556)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)

        at org.apache.hadoop.ipc.Client.call(Client.java:1238)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
        at $Proxy10.complete(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
        at $Proxy10.complete(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:330)
        at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:1796)
        at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:1783)
        at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:709)
        at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:726)
        at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:561)
        at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2399)
        at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2415)
        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)

> org.apache.hadoop.dfs.LeaseExpiredException during dfs write
> ------------------------------------------------------------
>
>                 Key: HDFS-198
>                 URL: https://issues.apache.org/jira/browse/HDFS-198
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client, namenode
>            Reporter: Runping Qi
>
> Many long-running, CPU-intensive map tasks failed due to
> org.apache.hadoop.dfs.LeaseExpiredException.
> See [a comment 
> below|https://issues.apache.org/jira/browse/HDFS-198?focusedCommentId=12910298&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12910298]
>  for the exceptions from the log:


