[ https://issues.apache.org/jira/browse/HDFS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13043882#comment-13043882 ]

Jonathan Hsieh commented on HDFS-1787:
--------------------------------------

Nicholas,

Thanks for the review! 

{quote}
Why DFSOutputStream.internalDNErrors is an AtomicInteger but 
DFSInputStream.internalDNErrors is an int?
{quote}

In the internalDNErrors case, the AtomicInteger is used by the ResponseProcessor, 
which is a private Daemon/thread class.  I generally avoid non-static inner 
classes (they are hard to test) and had assumed this one was static.  If it 
were, it would require a final reference to the shared counter.  However, in 
this case it is a non-static inner class, so a plain int suffices.  Both you 
and ATM are correct.  It has been changed.
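To illustrate the distinction (a minimal sketch with illustrative names, not the actual DFSOutputStream/DFSInputStream code): a static nested worker cannot see the enclosing instance's fields, so shared mutable state must be passed in through a final reference such as an AtomicInteger, while a non-static inner class holds an implicit reference to its enclosing instance and can update a plain int field directly.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for DFSOutputStream.
class OutputStreamLike {
    // Shared with a *static* nested worker: needs a final reference,
    // hence a mutable holder such as AtomicInteger.
    final AtomicInteger internalDNErrors = new AtomicInteger();

    static class StaticResponseProcessor implements Runnable {
        private final AtomicInteger errors; // final reference to shared state
        StaticResponseProcessor(AtomicInteger errors) { this.errors = errors; }
        public void run() { errors.incrementAndGet(); }
    }
}

// Hypothetical stand-in for DFSInputStream.
class InputStreamLike {
    // A *non-static* inner class can reach this field through the
    // implicit reference to the enclosing instance, so int is enough.
    int internalDNErrors = 0;

    class InnerResponseProcessor implements Runnable {
        public void run() { internalDNErrors++; }
    }
}

public class CounterDemo {
    public static void main(String[] args) {
        OutputStreamLike out = new OutputStreamLike();
        new OutputStreamLike.StaticResponseProcessor(out.internalDNErrors).run();

        InputStreamLike in = new InputStreamLike();
        in.new InnerResponseProcessor().run();

        System.out.println(out.internalDNErrors.get() + " " + in.internalDNErrors);
    }
}
```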

{quote}
BlockReader.maxTransfersErrors is not used.
{quote}

Removed.

{quote}
What if Text.readString(..) throws an exception?
+      } else if (status == Status.ERROR)  {
+        String errmsg = Text.readString(in);
+        throw new InternalDataNodeException("Got error on read due to "
{quote} 

Text.readString can throw IOException.  The InternalDataNodeException thrown on 
the next line is also a subclass of IOException.  Behavior-wise, both would use 
essentially the same error recovery path.
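A minimal sketch of that point (illustrative names; checkStatus and recover stand in for the real read path, they are not the HDFS code): because InternalDataNodeException extends IOException, a failure inside Text.readString(in) and the explicitly thrown InternalDataNodeException both land in the same catch clause.

```java
import java.io.IOException;

// Mirrors the real class only in that it subclasses IOException.
class InternalDataNodeException extends IOException {
    InternalDataNodeException(String msg) { super(msg); }
}

public class ErrorPathDemo {
    // Hypothetical stand-in for the status-checking read path.
    static void checkStatus(boolean readStringFails) throws IOException {
        if (readStringFails) {
            throw new IOException("readString failed"); // what Text.readString may throw
        }
        String errmsg = "Not enough xcievers";          // normally Text.readString(in)
        throw new InternalDataNodeException("Got error on read due to " + errmsg);
    }

    static String recover(boolean readStringFails) {
        try {
            checkStatus(readStringFails);
            return "ok";
        } catch (IOException e) {
            // Single recovery path covers both exception types.
            return "recovered: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(recover(false));
        System.out.println(recover(true));
    }
}
```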

{quote}
It is better to add a catch-block for InternalDataNodeException.
} catch (Throwable t) {
+      // Special case where we transfer max transfers error information to client.
+      try {
+        if (t instanceof InternalDataNodeException) {
+          // send error message.
{quote}
Done.
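The suggested shape looks roughly like the following (a hedged sketch with illustrative names; the wire-level send is simulated by a field rather than the real DataTransferProtocol response): inside the broad catch (Throwable), the InternalDataNodeException case is special-cased so its message reaches the client, and the reporting itself is wrapped in its own try so it can never kill the serving thread.

```java
import java.io.IOException;

class InternalDataNodeException extends IOException {
    InternalDataNodeException(String msg) { super(msg); }
}

public class CatchBlockDemo {
    // Operation that may throw checked exceptions.
    interface Op { void run() throws Throwable; }

    // Stand-in for the error message written back over the wire.
    static String lastErrorSentToClient;

    static void handle(Op op) {
        try {
            op.run();
        } catch (Throwable t) {
            // Special case where we transfer max transfers error information to client.
            try {
                if (t instanceof InternalDataNodeException) {
                    lastErrorSentToClient = t.getMessage(); // send ERROR status + message
                } else {
                    lastErrorSentToClient = null;           // generic error handling
                }
            } catch (Throwable ignored) {
                // Error reporting must never take down the server thread.
            }
        }
    }

    public static void main(String[] args) {
        handle(() -> { throw new InternalDataNodeException("not enough xceivers"); });
        System.out.println(lastErrorSentToClient);
    }
}
```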

{quote}
new public methods need javadoc.
{quote}
Ah, I missed one in InternalDataNodeException.  Done.

{quote}
Please use junit 4.
{quote}
Done.

{quote}
Please don't make white space change here for easing the review.
{quote}

I missed a few, and will clean them up.

Question: There was one checkstyle-type whitespace change that wasn't 
associated with a code change.  Should that be reverted, or is it a welcome 
change?



> "Not enough xcievers" error should propagate to client
> ------------------------------------------------------
>
>                 Key: HDFS-1787
>                 URL: https://issues.apache.org/jira/browse/HDFS-1787
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node
>    Affects Versions: 0.23.0
>            Reporter: Todd Lipcon
>            Assignee: Jonathan Hsieh
>              Labels: newbie
>             Fix For: 0.23.0
>
>         Attachments: hdfs-1787.2.patch, hdfs-1787.3.patch, hdfs-1787.3.patch, 
> hdfs-1787.patch
>
>
> We find that users often run into the default transceiver limits in the DN. 
> Putting aside the inherent issues with xceiver threads, it would be nice if 
> the "xceiver limit exceeded" error propagated to the client. Currently, 
> clients simply see an EOFException which is hard to interpret, and have to go 
> slogging through DN logs to find the underlying issue.
> The data transfer protocol should be extended to either have a special error 
> code for "not enough xceivers" or should have some error code for generic 
> errors with which a string can be attached and propagated.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
