[
https://issues.apache.org/jira/browse/HDFS-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13850274#comment-13850274
]
Hadoop QA commented on HDFS-5671:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12619041/5671.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include
any new or modified tests.
Please justify why no new tests are needed for this
patch.
Also please list what manual steps were performed to
verify this patch.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in
hadoop-hdfs-project/hadoop-hdfs.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/5740//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5740//console
This message is automatically generated.
> When an HBase RegionServer requests a block from a DataNode and a
> "java.io.IOException" occurs, the failed TCP socket is not closed (it is left
> in status "CLOSE_WAIT" on DataNode port 1004)
> -----------------------------------------------------------------------------
>
> Key: HDFS-5671
> URL: https://issues.apache.org/jira/browse/HDFS-5671
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client
> Affects Versions: 2.2.0
> Environment: hadoop-2.2.0
> java version "1.6.0_31"
> Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
> Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
> Linux 2.6.32-358.14.1.el6.x86_64 #1 SMP Tue Jul 16 23:51:20 UTC 2013 x86_64
> x86_64 x86_64 GNU/Linux
> Reporter: JamesLi
> Priority: Critical
> Attachments: 5671.patch
>
>
> lsof -i TCP:1004 | grep -c CLOSE_WAIT
> 18235
> When an HBase RegionServer requests a file's block from a DataNode on port
> 1004 and the request fails with "java.io.IOException: Got error for
> OP_READ_BLOCK, Block token is expired.", the TCP socket the RegionServer was
> using is not closed.
> I think the problem is in the method DatanodeInfo blockSeekTo(long target)
> of class DFSInputStream.
> The connection the RegionServer uses is the BlockReader:
> blockReader = getBlockReader(targetAddr, chosenNode, src, blk,
> accessToken, offsetIntoBlock, blk.getNumBytes() - offsetIntoBlock,
> buffersize, verifyChecksum, dfsClient.clientName);
> If this connection fails, the RegionServer fetches a new access token and
> retries, but the old connection is never closed.
> I think a small piece of code is needed to close the old connection when the
> exception happens:
> if (blockReader != null) {
>     try {
>         blockReader.close();
>     } catch (IOException exc) {
>         DFSClient.LOG.error("Close connection to " + targetAddr + " failed");
>     }
>     blockReader = null;  // drop the reference even if close() failed
> }
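> To show where this fits, below is a minimal, self-contained sketch of the
> retry pattern described above. The names here (connect, readBlock, the token
> strings) are hypothetical stand-ins for illustration, not the actual Hadoop
> 2.2 source:
> import java.io.Closeable;
> import java.io.IOException;
>
> public class BlockSeekSketch {
>
>     /** Stand-in for BlockReader: a Closeable that owns a TCP socket. */
>     interface BlockReader extends Closeable {
>         void readBlock(String accessToken) throws IOException;
>     }
>
>     private BlockReader blockReader;
>
>     /** Stand-in for the connect step inside getBlockReader(). */
>     private BlockReader connect(String targetAddr) throws IOException {
>         return new BlockReader() {
>             public void readBlock(String accessToken) throws IOException {
>                 if (!"fresh".equals(accessToken)) {
>                     throw new IOException(
>                         "Got error for OP_READ_BLOCK, Block token is expired.");
>                 }
>             }
>             public void close() { /* would close the underlying socket */ }
>         };
>     }
>
>     String blockSeekTo(String targetAddr) throws IOException {
>         int refetchToken = 1;               // allow one retry with a fresh token
>         String accessToken = "expired";
>         while (true) {
>             try {
>                 blockReader = connect(targetAddr);   // TCP connection established
>                 blockReader.readBlock(accessToken);  // may fail after connecting
>                 return targetAddr;
>             } catch (IOException ex) {
>                 // The proposed fix: the connection already exists when the
>                 // token check fails, so close it before retrying; otherwise
>                 // its socket lingers in CLOSE_WAIT.
>                 if (blockReader != null) {
>                     try {
>                         blockReader.close();
>                     } catch (IOException closeExc) {
>                         System.err.println(
>                             "Close connection to " + targetAddr + " failed");
>                     }
>                     blockReader = null;
>                 }
>                 if (refetchToken-- > 0) {
>                     accessToken = "fresh";  // stand-in for a new access token
>                 } else {
>                     throw ex;
>                 }
>             }
>         }
>     }
> }
> With the close in place, each failed attempt releases its socket before the
> retry, so the CLOSE_WAIT count reported by lsof should stay flat across
> token refreshes.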
--
This message was sent by Atlassian JIRA
(v6.1.4#6159)