[ 
https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378986#comment-15378986
 ] 

Zhihua Deng edited comment on HBASE-16212 at 7/15/16 8:09 AM:
--------------------------------------------------------------

I added log details to DFSInputStream#seek(long targetPos):
{code:borderStyle=solid}
if (pos > targetPos) {
  DFSClient.LOG.info(dfsClient.getClientName() + " seek " +
      getCurrentDatanode() + " for " + getCurrentBlock() +
      ". pos: " + pos + ", targetPos: " + targetPos);
}
{code}
The attached file 'regionserver-dfsinputstream.log' shows the process.

Also, one of the datanodes throws an exception caused by this abrupt close on the client side:
{code:borderStyle=solid}
java.net.SocketException: Original Exception : java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
        at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:427)
        at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:492)
        at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:607)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:579)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:759)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:706)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:551)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Connection reset by peer
        ... 13 more
{code}
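
To make the cost of a backward seek concrete, here is a hypothetical, much-simplified model of the behavior (the class name, fields, and counter are invented for illustration; the real DFSInputStream logic is considerably more involved): a forward seek can be served by the existing block reader, while a backward seek discards it, so the next read must open a new connection to the datanode.
{code:borderStyle=solid}
// Hypothetical, simplified model of backward-seek cost; NOT the real
// HDFS implementation. Only illustrates why pos > targetPos is expensive.
public class SeekModel {
    long pos;                 // current position in the stream
    int connectionsOpened = 1; // one connection open initially

    // Returns true if the existing reader could be reused.
    boolean seek(long targetPos) {
        if (targetPos >= pos) {
            // Forward seek: skip bytes on the current reader.
            pos = targetPos;
            return true;
        }
        // Backward seek: the current reader is discarded, and a fresh
        // connection to the datanode is opened for the next read.
        connectionsOpened++;
        pos = targetPos;
        return false;
    }

    public static void main(String[] args) {
        SeekModel s = new SeekModel();
        s.seek(111506876L); // forward: reader reused
        s.seek(111506843L); // backward by 33 bytes: new connection
        System.out.println(s.connectionsOpened);
    }
}
{code}
In this model, a workload that repeatedly seeks a few bytes backward opens one new connection per seek, which matches the connection churn and datanode log spam described in the issue.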





> Many Connections are created by wrong seeking pos on InputStream
> ----------------------------------------------------------------
>
>                 Key: HBASE-16212
>                 URL: https://issues.apache.org/jira/browse/HBASE-16212
>             Project: HBase
>          Issue Type: Improvement
>    Affects Versions: 1.1.2
>            Reporter: Zhihua Deng
>         Attachments: HBASE-16212.patch, regionserver-dfsinputstream.log
>
>
> As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
> suffers from logging the same message repeatedly. After adding a log statement 
> to DFSInputStream, it outputs the following:
> 2016-07-10 21:31:42,147 INFO [B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK] for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 111506876, targetPos: 111506843
> ...
> As the pos of this input stream is larger than targetPos (the position being 
> sought), a new connection to the datanode is created and the older one is 
> closed as a consequence. When such wrong seeks are frequent, the datanode's 
> block scanner info messages spam the logs, and many connections to the same 
> datanode are created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
