[
https://issues.apache.org/jira/browse/HDFS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13036704#comment-13036704
]
Hadoop QA commented on HDFS-1965:
---------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12479878/hdfs-1965.txt
against trunk revision 1125217.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9)
warnings.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
-1 core tests. The patch failed these core unit tests:
org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
org.apache.hadoop.hdfs.TestHDFSTrash
+1 contrib tests. The patch passed contrib unit tests.
+1 system test framework. The patch passed system test framework compile.
Test results:
https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/599//testReport/
Findbugs warnings:
https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/599//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output:
https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/599//console
This message is automatically generated.
> IPCs done using block token-based tickets can't reuse connections
> -----------------------------------------------------------------
>
> Key: HDFS-1965
> URL: https://issues.apache.org/jira/browse/HDFS-1965
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: security
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Priority: Critical
> Fix For: 0.22.0
>
> Attachments: hdfs-1965.txt, hdfs-1965.txt
>
>
> This is the reason that TestFileConcurrentReaders has been failing a lot.
> Reproducing a comment from HDFS-1057:
> The test has a thread which continually re-opens the file which is being
> written to. Since the file's in the middle of being written, it makes an RPC
> to the DataNode in order to determine the visible length of the file. This
> RPC is authenticated using the block token which came back in the
> LocatedBlocks object as the security ticket.
> When this RPC hits the IPC layer, it looks at its existing connections and
> sees none that can be reused: each re-open fetches a fresh block token, so
> the ticket never matches an existing connection. Hence it reconnects, and we
> end up with hundreds or thousands of IPC connections to the DataNode.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira