Looks like a network error on the DataNode during the checkDiskError operation.
Does your DataNode use network mounts for storage? If so, it's worth
checking the mounts.
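To make that check concrete, here is a minimal sketch for verifying whether a DataNode storage directory sits on a network filesystem. The `DATA_DIR` path is an assumption (it defaults to `/` so the sketch runs anywhere); substitute the actual directories from your `dfs.data.dir` setting.

```shell
# Assumed example path -- replace with your dfs.data.dir values.
DATA_DIR=${DATA_DIR:-/}

# Filesystem type backing the directory; nfs/cifs would indicate a network mount.
df -PT "$DATA_DIR" | awk 'NR>1 {print $2}'

# List any network mounts currently active on the node.
mount | grep -Ei 'nfs|cifs' || echo "no network mounts found"
```

If the storage directories turn out to be local, the broken pipe more likely reflects the downstream DataNode or client closing the connection mid-write, so the network path between pipeline nodes is the next thing to examine.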


On 10/16/10 8:44 AM, "[email protected]"
<[email protected]> wrote:

> From: "Sharma, Avani" <[email protected]>
> Date: Sat, 16 Oct 2010 07:40:51 -0700
> To: "[email protected]" <[email protected]>
> Subject: java.net.SocketException: Broken pipe
> 
> I get the error below when dumping a 50G file on one of my Hadoop (0.20.2)
> clusters. It worked fine on another one, though. From my research this seems
> more like a network problem. How can I go about resolving it, and what
> should I look for on my cluster to debug this?
> 
> 2010-10-15 06:01:50,014 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
> PacketResponder blk_-1170040697541244894_3431 1 Exception
> java.net.SocketException: Broken pipe
>         at java.net.SocketOutputStream.socketWrite0(Native Method)
>         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
>         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
>         at java.io.DataOutputStream.writeLong(DataOutputStream.java:207)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:132)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:899)
>         at java.lang.Thread.run(Thread.java:619)
> 
> 2010-10-15 06:01:50,016 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
> IOException in BlockReceiver.run():
> java.net.SocketException: Broken pipe
>         at java.net.SocketOutputStream.socketWrite0(Native Method)
>         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
>         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
>         at java.io.DataOutputStream.writeLong(DataOutputStream.java:207)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:132)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1001)
>         at java.lang.Thread.run(Thread.java:619)
> 2010-10-15 06:01:50,017 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
> checkDiskError: exception:
> java.net.SocketException: Broken pipe
>         at java.net.SocketOutputStream.socketWrite0(Native Method)
>         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
>         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
>         at java.io.DataOutputStream.writeLong(DataOutputStream.java:207)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:132)
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1001)
>         at java.lang.Thread.run(Thread.java:619)
> 
> Thanks,
> Avani Sharma



