[ https://issues.apache.org/jira/browse/HADOOP-2669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12570892#action_12570892 ]

Pete Wyckoff commented on HADOOP-2669:
--------------------------------------

Hi Raghu, here's the grep for the file from the NameNode log. Note that both times 
this happened, a reduce failed on a machine because it couldn't allocate a 
directory there: the machine was out of space. I'll send you the stack 
trace. thx, pete



hadoopNN] logs > grep "tmp/1050617226/ds=2008-02-18/SomeTable/part-00015" hadoop-root-namenode-hadoopNN.facebook.com.log
2008-02-20 04:04:34,434 WARN org.apache.hadoop.dfs.StateChange: DIR* 
FSDirectory.unprotectedDelete: failed to remove 
/tmp/1050617226/ds=2008-02-18/SomeTable/part-00015 because it does not exist
2008-02-20 04:50:43,307 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_2853191854023133101
2008-02-20 04:58:40,735 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_2092683246371438874
2008-02-20 05:06:40,676 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_-6864374713288003568
2008-02-20 05:15:00,448 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_6785822354554528111
2008-02-20 05:23:05,122 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_-1002253031864643087
2008-02-20 05:42:58,850 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_4868356428049530206
2008-02-20 05:51:30,124 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_4502317329036147265
2008-02-20 05:59:41,655 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_-1028912444083540626
2008-02-20 06:07:59,222 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_-2305314790692595413
2008-02-20 06:15:52,835 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_6988580537244455759
2008-02-20 06:23:58,683 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
28 on 9000, call addBlock(/tmp/1050617226/ds=2008-02-18/SomeTable/part-00015, 
DFSClient_task_200802191501_0676_r_000015_1) from SomeIP:45453: error: org.apache.hadoop.dfs.LeaseExpiredException: No lease 
on /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015
org.apache.hadoop.dfs.LeaseExpiredException: No lease on 
/tmp/1050617226/ds=2008-02-18/SomeTable/part-00015
2008-02-20 06:44:24,151 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_3760541177395872782
2008-02-20 06:52:26,050 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_-3341039163005140779
2008-02-20 07:00:16,378 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_6227348648842177155
2008-02-20 07:08:08,215 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_1906369823207034882
2008-02-20 07:16:01,895 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_2621171097816659457
2008-02-20 07:24:33,046 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 
on 9000, call addBlock(/tmp/1050617226/ds=2008-02-18/SomeTable/part-00015, 
DFSClient_task_200802191501_0676_r_000015_2) from SomeOtherIP:48442: error: org.apache.hadoop.dfs.LeaseExpiredException: No 
lease on /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015
org.apache.hadoop.dfs.LeaseExpiredException: No lease on 
/tmp/1050617226/ds=2008-02-18/SomeTable/part-00015
2008-02-20 07:47:14,155 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_-7287451503662587350
2008-02-20 07:55:21,732 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_7927573534765844605
2008-02-20 08:03:14,563 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_3932465634538367837
2008-02-20 08:11:08,936 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_-3190439497258007802
2008-02-20 08:19:01,456 INFO org.apache.hadoop.dfs.StateChange: BLOCK* 
NameSystem.allocateBlock: /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015. 
blk_2730573925522501767
2008-02-20 08:27:02,055 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
17 on 9000, call addBlock(/tmp/1050617226/ds=2008-02-18/SomeTable/part-00015, 
DFSClient_task_200802191501_0676_r_000015_3) from YetAnotherIP:51184: error: org.apache.hadoop.dfs.LeaseExpiredException: No 
lease on /tmp/1050617226/ds=2008-02-18/SomeTable/part-00015
org.apache.hadoop.dfs.LeaseExpiredException: No lease on 
/tmp/1050617226/ds=2008-02-18/SomeTable/part-00015
2008-02-20 09:26:58,886 WARN org.apache.hadoop.dfs.StateChange: DIR* 
NameSystem.internalReleaseCreate: attempt to release a create lock on 
/tmp/1050617226/ds=2008-02-18/SomeTable/part-00015 file does not exist.
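
For context on where the client dies: the stack trace quoted below fails inside
SequenceFile's block-compressed writer while it asks the NameNode for another
block. A minimal sketch of that kind of writer follows (hypothetical paths and
Text keys/values, not the actual job; it only shows the shape of the write path):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class BlockCompressedCopy {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical input/output paths standing in for the job's part files.
    Path in = new Path("/tmp/input/part-00000");
    Path out = new Path("/tmp/output/part-00000");

    SequenceFile.Reader reader = new SequenceFile.Reader(fs, in, conf);
    // CompressionType.BLOCK routes writes through BlockCompressWriter,
    // the class that shows up in the stack trace below.
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, out, Text.class, Text.class,
        SequenceFile.CompressionType.BLOCK);
    try {
      Text key = new Text();
      Text value = new Text();
      while (reader.next(key, value)) {
        // "some processing" on the record would happen here.
        writer.append(key, value);  // eventually reaches DFSClient and addBlock()
      }
    } finally {
      reader.close();
      writer.close();  // flushes the final compressed block
    }
  }
}

Each DFS block the client fills triggers another addBlock() call to the NameNode,
which is the string of NameSystem.allocateBlock lines above; once the lease has
expired, the next addBlock() is the call that comes back with the
LeaseExpiredException.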




> DFS client lost lease during writing into DFS files
> ---------------------------------------------------
>
>                 Key: HADOOP-2669
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2669
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Runping Qi
>
> I have a program that reads a block-compressed sequence file, does some 
> processing on the records, and writes the processed records into another 
> block-compressed sequence file.
> During execution of the program, I got the following exception: 
> org.apache.hadoop.ipc.RemoteException: 
> org.apache.hadoop.dfs.LeaseExpiredException: No lease on xxxxx/part-00000
>         at 
> org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:976)
>         at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:293)
>         at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:379)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:596)
>         at org.apache.hadoop.ipc.Client.call(Client.java:482)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:184)
>         at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
>         at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:1554)
>         at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1500)
>         at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1626)
>         at 
> org.apache.hadoop.dfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:1602)
>         at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:140)
>         at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:100)
>         at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:86)
>         at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:39)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at 
> org.apache.hadoop.io.SequenceFile$BlockCompressWriter.writeBuffer(SequenceFile.java:1181)
>         at 
> org.apache.hadoop.io.SequenceFile$BlockCompressWriter.sync(SequenceFile.java:1198)
>         at 
> org.apache.hadoop.io.SequenceFile$BlockCompressWriter.append(SequenceFile.java:1248)
>         at 
> org.apache.hadoop.mapred.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:69)
>      

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
