[ https://issues.apache.org/jira/browse/HDFS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13036554#comment-13036554 ]
Todd Lipcon commented on HDFS-1965:
-----------------------------------
I implemented option (b) and have a test case showing that it fixes the
problem...
BUT: the real DFSInputStream code seems to call RPC.stopProxy() after it
uses the proxy, which should also avoid this issue. Adding that call to my
test case makes it pass without any other fix, so there's still some
mystery about why the real code hits this problem.
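For reference, here is roughly the flow I mean, sketched against the
0.22-era API (the factory-method name and parameters are from memory, so
they may not match exactly):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSClient;
    import org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
    import org.apache.hadoop.hdfs.protocol.LocatedBlock;
    import org.apache.hadoop.ipc.RPC;

    public class VisibleLengthSketch {
      // Ask the datanode for the visible length of an under-construction
      // block, always tearing down the block-token-authenticated proxy.
      static long visibleLength(DatanodeInfo datanode, Configuration conf,
          int socketTimeout, LocatedBlock blk) throws IOException {
        ClientDatanodeProtocol cdp = null;
        try {
          cdp = DFSClient.createClientDatanodeProtocolProxy(
              datanode, conf, socketTimeout, blk);
          return cdp.getReplicaVisibleLength(blk.getBlock());
        } finally {
          if (cdp != null) {
            RPC.stopProxy(cdp);  // should release the per-token connection
          }
        }
      }
    }

If stopProxy really does close the underlying connection, each re-open
cleans up after itself, which is what makes the leaked connections
surprising.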
> IPCs done using block token-based tickets can't reuse connections
> -----------------------------------------------------------------
>
> Key: HDFS-1965
> URL: https://issues.apache.org/jira/browse/HDFS-1965
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: security
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Priority: Critical
> Fix For: 0.22.0
>
>
> This is the reason that TestFileConcurrentReaders has been failing a lot.
> Reproducing a comment from HDFS-1057:
> The test has a thread which continually re-opens the file that is being
> written to. Since the file is still in the middle of being written, each
> re-open makes an RPC to the DataNode to determine the visible length of
> the file. This RPC is authenticated using, as its security ticket, the
> block token that came back in the LocatedBlocks object.
> When this RPC hits the IPC layer, it looks at its existing connections and
> finds none that can be reused, since the block token differs between
> requests. Hence it reconnects, and we end up with hundreds or thousands of
> IPC connections to the datanode.
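> A toy model of the cache miss (a hypothetical Key class, not the actual
> org.apache.hadoop.ipc.Client code): assume the connection cache key pairs
> the remote address with the caller's ticket and compares tickets by
> identity. Each re-open wraps a fresh block token in a fresh ticket object,
> so no lookup ever hits:
>
>     import java.net.InetSocketAddress;
>     import java.util.HashMap;
>     import java.util.Map;
>
>     public class ConnectionCacheMissDemo {
>       static final class Key {
>         final InetSocketAddress addr;
>         final Object ticket;  // stands in for the block-token ticket
>         Key(InetSocketAddress addr, Object ticket) {
>           this.addr = addr;
>           this.ticket = ticket;
>         }
>         @Override public boolean equals(Object o) {
>           if (!(o instanceof Key)) return false;
>           Key k = (Key) o;
>           // identity comparison: two re-opens carry distinct ticket
>           // objects even for the same block, so this never matches
>           return addr.equals(k.addr) && ticket == k.ticket;
>         }
>         @Override public int hashCode() {
>           return addr.hashCode() ^ System.identityHashCode(ticket);
>         }
>       }
>
>       public static void main(String[] args) {
>         Map<Key, String> connections = new HashMap<Key, String>();
>         InetSocketAddress dn =
>             InetSocketAddress.createUnresolved("datanode1", 50020);
>         for (int i = 0; i < 1000; i++) {
>           Object freshTicket = new Object();  // new token per re-open
>           Key key = new Key(dn, freshTicket);
>           if (!connections.containsKey(key)) {  // never hits
>             connections.put(key, "connection #" + connections.size());
>           }
>         }
>         System.out.println(connections.size() + " connections");  // 1000
>       }
>     }
>
> (In the real client the ticket is a UserGroupInformation, whose equals()
> is, if memory serves, identity-based on the underlying Subject, which is
> why value-equal block tokens still miss.)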
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira