[
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272836#comment-15272836
]
Chris Nauroth commented on HBASE-9393:
--------------------------------------
Does the above question amount to asking whether or not it is safe for multiple
threads to read concurrently from a {{DFSInputStream}}? If so, then I can
provide some feedback.
One case to consider is positional read, which is the following method
signature:
{code}
int read(long position, byte[] buffer, int offset, int length) throws
IOException;
{code}
If all calling threads are using positional read, then it's correct that each
caller would operate on its own unique instance of {{BlockReader}}, backed by
its own dedicated socket connection to a DataNode.
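For illustration, here is a minimal sketch of that pattern: several threads share a single stream and issue only positional reads, so no external locking is needed. The file path, offsets, buffer size, and thread count are all hypothetical.
{code}
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcurrentPreadSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Hypothetical path; substitute a real HDFS file to run this.
    try (FSDataInputStream in = fs.open(new Path("/tmp/example-file"))) {
      ExecutorService pool = Executors.newFixedThreadPool(4);
      for (int t = 0; t < 4; t++) {
        final long position = t * 64 * 1024L; // each thread reads its own offset
        pool.submit(() -> {
          byte[] buf = new byte[64 * 1024];
          try {
            // Positional read: does not move the stream's file pointer,
            // so no external locking is needed across threads.
            int n = in.read(position, buf, 0, buf.length);
            System.out.println("read " + n + " bytes at offset " + position);
          } catch (IOException e) {
            e.printStackTrace();
          }
        });
      }
      pool.shutdown();
      pool.awaitTermination(1, TimeUnit.MINUTES);
    }
  }
}
{code}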
Another case to consider is any of the other non-positional read APIs. These
APIs are thread-safe at the method level via {{synchronized}}; however, that is
not sufficient to guarantee isolation for a sequence of multiple method calls,
such as {{seek}} + {{read}} or {{skip}} + {{read}}. That would require
external synchronization. Even with external synchronization, there would be a
risk of thrashing if multiple threads were trying to read drastically different
positions spanning block boundaries. The {{DFSInputStream}} would have to keep
updating its single {{BlockReader}} instance and related state to point at the
new position. That amounts to additional NameNode RPC and connection
reestablishment to a DataNode (unless a cached connection is available).
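To illustrate the external synchronization the sequential path would need, here is a minimal sketch. The wrapper class and method name are hypothetical, not an HDFS or HBase API.
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;

// Hypothetical wrapper: makes the seek + read pair atomic with respect to
// other callers of the same wrapper instance.
public class LockedSequentialReader {
  private final FSDataInputStream in;

  public LockedSequentialReader(FSDataInputStream in) {
    this.in = in;
  }

  // Without this method-level lock, another thread's seek could move the
  // shared file pointer between our seek and our read.
  public synchronized int readAt(long position, byte[] buf) throws IOException {
    in.seek(position);
    return in.read(buf, 0, buf.length);
  }
}
{code}
Note that even with this locking, each {{readAt}} call at a drastically different position can still force the stream to re-resolve block locations and reconnect, which is the thrashing risk described above.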
Caveat: I don't know HBase code well enough to comment authoritatively on its
usage patterns. If HBase code additionally relies on locking the stream
objects for mutual exclusion around its own higher-level operations, then
that's another concern.
I hope this helps.
> HBase does not close a closed socket, resulting in many CLOSE_WAIT
> --------------------------------------------------------------------
>
> Key: HBASE-9393
> URL: https://issues.apache.org/jira/browse/HBASE-9393
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.94.2, 0.98.0, 1.0.1.1, 1.1.2
> Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node,
> 7279 regions
> Reporter: Avi Zrachya
> Assignee: Ashish Singhi
> Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-9393.patch, HBASE-9393.v1.patch,
> HBASE-9393.v10.patch, HBASE-9393.v11.patch, HBASE-9393.v12.patch,
> HBASE-9393.v13.patch, HBASE-9393.v14.patch, HBASE-9393.v15.patch,
> HBASE-9393.v15.patch, HBASE-9393.v2.patch, HBASE-9393.v3.patch,
> HBASE-9393.v4.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch,
> HBASE-9393.v5.patch, HBASE-9393.v6.patch, HBASE-9393.v6.patch,
> HBASE-9393.v6.patch, HBASE-9393.v7.patch, HBASE-9393.v8.patch,
> HBASE-9393.v9.patch
>
>
> HBase does not close a dead connection with the datanode.
> This results in over 60K sockets in CLOSE_WAIT, and at some point HBase can
> no longer connect to the datanode because there are too many mapped sockets
> from one host to another on the same port.
> The example below shows a low CLOSE_WAIT count because we had to restart
> HBase to resolve the problem; over time it increases to 60-100K sockets in
> CLOSE_WAIT.
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root 17255 17219 0 12:26 pts/0 00:00:00 grep 21592
> hbase 21592 1 17 Aug29 ? 03:29:06
> /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m
> -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
> -Dhbase.log.dir=/var/log/hbase
> -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...