Hi,

I am really stuck on this issue. If I decrease the number of max map tasks
to something like 4, then it runs fine. Does anyone have a clue what the
issue is?
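
For reference, the cap I am changing is the standard per-TaskTracker setting
in mapred-site.xml (assuming a classic MRv1 setup here):

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <!-- 10 reproduces the failure; 4 runs fine -->
    <value>4</value>
  </property>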

Thanks
Sudhan S

---------- Forwarded message ----------
From: Sudharsan Sampath <sudha...@gmail.com>
Date: Fri, Nov 4, 2011 at 5:10 PM
Subject: Re: HDFS error : Could not Complete file
To: hdfs-u...@hadoop.apache.org


Hi,

Thanks for the reply.

There's no delete command issued from the client code. For your reference, I
have attached the program used to reproduce this bug. The input is a simple
CSV file with 2 million entries.
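
In case the attachment gets stripped by the list: the reproducer is
essentially a map-only job that fans every record out to n named outputs
through the old org.apache.hadoop.mapred.lib.MultipleOutputs API. A rough
sketch follows (class and property names here are illustrative, not
necessarily those in the attachment):

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.lib.MultipleOutputs;

public class TestMultipleOutputs {

  public static class EmitMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, LongWritable, Text> {

    private MultipleOutputs mos;
    private int n;

    @Override
    public void configure(JobConf job) {
      mos = new MultipleOutputs(job);
      n = job.getInt("test.num.outputs", 10); // illustrative knob
    }

    @Override
    public void map(LongWritable key, Text value,
        OutputCollector<LongWritable, Text> out, Reporter reporter)
        throws IOException {
      // Emit each input record unchanged to Output0 ... Output(n-1);
      // these become the Output0-m-00000 style files seen in the logs.
      for (int i = 0; i < n; i++) {
        mos.getCollector("Output" + i, reporter).collect(key, value);
      }
    }

    @Override
    public void close() throws IOException {
      mos.close(); // flushes and completes every named-output file
    }
  }

  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(TestMultipleOutputs.class);
    conf.setJobName("TestMultipleOutputs");
    conf.setMapperClass(EmitMapper.class);
    conf.setNumReduceTasks(0); // map-only
    conf.setOutputKeyClass(LongWritable.class);
    conf.setOutputValueClass(Text.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    for (int i = 0; i < 10; i++) {
      MultipleOutputs.addNamedOutput(conf, "Output" + i,
          TextOutputFormat.class, LongWritable.class, Text.class);
    }
    JobClient.runJob(conf);
  }
}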

Thanks
Sudhan S


On Fri, Nov 4, 2011 at 4:42 PM, Uma Maheswara Rao G 72686 <mahesw...@huawei.com> wrote:

> Looks like the folder was deleted before the file was completed.
> In HDFS, files can be deleted at any time; the application needs to take
> care of file completeness depending on its usage.
> Do you have any DFSClient-side logs from MapReduce showing when exactly the
> delete command was issued?
> ----- Original Message -----
> From: Sudharsan Sampath <sudha...@gmail.com>
> Date: Friday, November 4, 2011 2:48 pm
> Subject: HDFS error : Could not Complete file
> To: hdfs-u...@hadoop.apache.org
>
> > Hi,
> >
> > I have a simple map-reduce program [map only :)] that reads the input and
> > emits the same to n outputs, on a single-node cluster with max map tasks
> > set to 10 on a 16-core machine.
> >
> > After a while the tasks begin to fail with the following exception
> > log.
> > 2011-01-01 03:17:52,149 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=temp,temp ip=/x.x.x.x cmd=delete src=/TestMultipleOuputs1320394241986/_temporary/_attempt_201101010256_0006_m_000000_2 dst=null perm=null
> > 2011-01-01 03:17:52,156 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: addStoredBlock request received for blk_7046642930904717718_23143 on x.x.x.x:<port> size 66148 But it does not belong to any file.
> > 2011-01-01 03:17:52,156 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: failed to complete /TestMultipleOuputs1320394241986/_temporary/_attempt_201101010256_0006_m_000000_2/Output0-m-00000 because dir.getFileBlocks() is null and pendingFile is null
> > 2011-01-01 03:17:52,156 INFO org.apache.hadoop.ipc.Server: IPC Server handler 12 on 9000, call complete(/TestMultipleOuputs1320394241986/_temporary/_attempt_201101010256_0006_m_000000_2/Output0-m-00000, DFSClient_attempt_201101010256_0006_m_000000_2) from x.x.x.x:<port> error: java.io.IOException: Could not complete write to file /TestMultipleOuputs1320394241986/_temporary/_attempt_201101010256_0006_m_000000_2/Output0-m-00000 by DFSClient_attempt_201101010256_0006_m_000000_2
> > java.io.IOException: Could not complete write to file /TestMultipleOuputs1320394241986/_temporary/_attempt_201101010256_0006_m_000000_2/Output0-m-00000 by DFSClient_attempt_201101010256_0006_m_000000_2
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:497)
> >         at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >         at java.lang.reflect.Method.invoke(Method.java:597)
> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:396)
> >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)
> >
> >
> > Looks like there's a delete command recorded in the FSNamesystem audit
> > log just before it errors out stating it could not complete the write to
> > a file inside the deleted directory.
> >
> > Any clue on what could have gone wrong?
> >
> > Thanks
> > Sudharsan S
> >
>
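
Below is a minimal standalone sketch of the race described above (hypothetical
paths, not the attached job): HDFS lets any client delete a directory while a
file under it is still open, and the writer only finds out at close(), which
fails with the same "Could not complete write to file" error as in the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteWhileWriting {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/tmp/deleteRaceDemo");   // hypothetical directory
    FSDataOutputStream out = fs.create(new Path(dir, "part-00000"));
    out.writeBytes("some data\n");

    // Any other client (or a framework cleanup step) may do this at any time;
    // HDFS does not block the delete just because a writer is still open.
    fs.delete(dir, true);

    // The writer only notices here: complete() fails on the NameNode and
    // close() surfaces "java.io.IOException: Could not complete write to file ...".
    out.close();
  }
}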

Attachment: TestMultipleOutputs.java
