[
https://issues.apache.org/jira/browse/HDFS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13038072#comment-13038072
]
Todd Lipcon commented on HDFS-1965:
-----------------------------------
I think in trunk it's not possible, since the connection is only opened lazily
by the actual RPC to the DataNode, and it won't close while that call is
outstanding.
In 0.22, it's possible that it will open one connection for the
getProtocolVersion() call and a second one for the actual RPC. Unless I'm
missing something, that should only be an efficiency issue and not a
correctness issue. Do you agree?
> IPCs done using block token-based tickets can't reuse connections
> -----------------------------------------------------------------
>
> Key: HDFS-1965
> URL: https://issues.apache.org/jira/browse/HDFS-1965
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: security
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Priority: Critical
> Fix For: 0.22.0
>
> Attachments: hdfs-1965-0.22.txt, hdfs-1965.txt, hdfs-1965.txt,
> hdfs-1965.txt
>
>
> This is the reason that TestFileConcurrentReaders has been failing a lot.
> Reproducing a comment from HDFS-1057:
> The test has a thread which continually re-opens the file which is being
> written to. Since the file's in the middle of being written, it makes an RPC
> to the DataNode in order to determine the visible length of the file. This
> RPC is authenticated using the block token which came back in the
> LocatedBlocks object as the security ticket.
> When this RPC hits the IPC layer, it looks at its existing connections and
> sees none that can be re-used, since the block token differs between the two
> requesters. Hence, it reconnects, and we end up with hundreds or thousands of
> IPC connections to the datanode.