[ https://issues.apache.org/jira/browse/HADOOP-3707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12612677#action_12612677 ]

Raghu Angadi commented on HADOOP-3707:
--------------------------------------

Advantages of the current patch:

* fixes a real problem observed by users and increases the robustness of DFS.
* is certainly an improvement over what we have.
* has no regressions.
* does not slow down any NameNode activity.
* in my opinion, does not increase or decrease the complexity or change the
nature of the big beast "FSNamesystem".
* once it works well, the counter can be used for other scheduling activities.
* I don't think the "approx" in the name should be much of a distraction: the
counter is as accurate as it can be, and we deal with small departures from
accuracy in case of errors. It is only guilty of living with uncertainty :).

Of course we can change the patch; for example, we can increase the "roll
interval" from 5 minutes.
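
To make the counter idea concrete, here is a rough sketch, with the caveat that
the class and field names below are invented for illustration and do not match
the actual patch: the NameNode keeps, per datanode, an approximate estimate of
how much data has already been scheduled for writes, and rolls that estimate
every interval so that errors expire instead of accumulating.

{noformat}
// Hypothetical illustration only; names do not match the real patch.
class ApproxScheduledBytes {
  private final long rollIntervalMs;   // e.g. 5 minutes, as discussed above
  private long current;                // bytes scheduled in the current interval
  private long previous;               // bytes scheduled in the previous interval
  private long lastRollTime = System.currentTimeMillis();

  ApproxScheduledBytes(long rollIntervalMs) {
    this.rollIntervalMs = rollIntervalMs;
  }

  /** Called when a block is scheduled to this datanode. */
  synchronized void add(long bytes) {
    rollIfNeeded();
    current += bytes;
  }

  /** Used when estimating how much space is really left on the datanode. */
  synchronized long get() {
    rollIfNeeded();
    return current + previous;
  }

  private void rollIfNeeded() {
    long now = System.currentTimeMillis();
    if (now - lastRollTime > rollIntervalMs) {
      previous = current;   // anything older than two intervals is forgotten
      current = 0;
      lastRollTime = now;
    }
  }
}
{noformat}

With a counter like this, the usable space on a target can be treated as roughly
(reported free space - scheduled bytes), so the NameNode stops piling new writes
onto a node that is almost full.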

> Frequent DiskOutOfSpaceException on almost-full datanodes
> ---------------------------------------------------------
>
>                 Key: HADOOP-3707
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3707
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Koji Noguchi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.2, 0.18.0, 0.19.0
>
>         Attachments: HADOOP-3707-branch-017.patch, 
> HADOOP-3707-branch-017.patch, HADOOP-3707-trunk.patch, 
> HADOOP-3707-trunk.patch, HADOOP-3707-trunk.patch
>
>
> On a datanode which is completely full (leaving reserve space), we
> frequently see the target node reporting:
> {noformat}
> 2008-07-07 16:54:44,707 INFO org.apache.hadoop.dfs.DataNode: Receiving block 
> blk_3328886742742952100 src: /11.1.11.111:22222 dest: /11.1.11.111:22222
> 2008-07-07 16:54:44,708 INFO org.apache.hadoop.dfs.DataNode: writeBlock 
> blk_3328886742742952100 received exception 
> org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient 
> space for an additional block
> 2008-07-07 16:54:44,708 ERROR org.apache.hadoop.dfs.DataNode: 
> 33.3.33.33:22222:DataXceiver: 
> org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient 
> space for an additional block
>         at 
> org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getNextVolume(FSDataset.java:444)
>         at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:716)
>         at 
> org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:2187)
>         at 
> org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1113)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:976)
>         at java.lang.Thread.run(Thread.java:619)
> {noformat}
> Sender reporting:
> {noformat}
> 2008-07-07 16:54:44,712 INFO org.apache.hadoop.dfs.DataNode: 
> 11.1.11.111:22222:Exception writing block blk_3328886742742952100 to mirror 
> 33.3.33.33:22222
> java.io.IOException: Broken pipe
>         at sun.nio.ch.FileDispatcher.write0(Native Method)
>         at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
>         at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
>         at sun.nio.ch.IOUtil.write(IOUtil.java:75)
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
>         at 
> org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:53)
>         at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:140)
>         at 
> org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:144)
>         at 
> org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:105)
>         at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>         at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at 
> org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveChunk(DataNode.java:2292)
>         at 
> org.apache.hadoop.dfs.DataNode$BlockReceiver.receivePacket(DataNode.java:2411)
>         at 
> org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2476)
>         at 
> org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1204)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:976)
>         at java.lang.Thread.run(Thread.java:619)
> {noformat}
> Since it's not happening constantly, my guess is that whenever the datanode
> gets a small amount of space available, the namenode over-assigns blocks,
> which can fail the block pipeline.
> (Note: before 0.17, the namenode was much slower in assigning blocks.)
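
For context on where the exception in the first trace comes from: the datanode
picks a volume for each incoming block and gives up when no volume has room for
one more full-sized block. The snippet below is a simplified, illustrative
version of that check (the Volume interface and class names are made up, not
the actual FSDataset code).

{noformat}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.util.DiskChecker;

// Minimal stand-in for the real volume type; only the free-space query matters.
interface Volume {
  long getAvailable() throws IOException;
}

// Simplified round-robin volume chooser: it fails exactly when no volume can
// hold one more full block, which is the DiskOutOfSpaceException in the trace.
class VolumeChooser {
  private final List<Volume> volumes;
  private int next = 0;

  VolumeChooser(List<Volume> volumes) {
    this.volumes = volumes;
  }

  synchronized Volume getNextVolume(long blockSize) throws IOException {
    for (int i = 0; i < volumes.size(); i++) {
      Volume v = volumes.get(next);
      next = (next + 1) % volumes.size();
      if (v.getAvailable() >= blockSize) {
        return v;
      }
    }
    throw new DiskChecker.DiskOutOfSpaceException(
        "Insufficient space for an additional block");
  }
}
{noformat}

A burst of writes scheduled against a small amount of freed space can push every
volume past this check at once, which is consistent with the intermittent
failures described above.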

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
