[ 
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15197254#comment-15197254
 ] 

Ashish Singhi commented on HBASE-9393:
--------------------------------------

Thanks for the comments [~busbey]. Sorry for the late response.

bq. Can we do the initialization in the constructor (and make the related 
instance variables final) rather than rely on doing this lazy initialization?
We cannot make them final because their values are assigned at two different 
places in the method, depending on a condition, and the Java compiler will not 
allow that for a final field.
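
For illustration, a minimal sketch of that constraint (a hypothetical class, 
not the actual patch code): a field that is assigned lazily inside a method, 
in two different branches, cannot be declared final.

{code:java}
// Hypothetical class, not the patch code: shows why 'final' is rejected when
// the field is assigned lazily inside a method rather than in the constructor.
public class LazyInitExample {

  // Declaring this 'final' would not compile: a final field must be
  // definitely assigned exactly once, in the constructor or an initializer.
  private /* final */ String streamSource;

  public void open(boolean useHedgedRead) {
    if (useHedgedRead) {
      this.streamSource = "hedged";   // first assignment site
    } else {
      this.streamSource = "regular";  // second assignment site
    }
  }
}
{code}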

bq. Can we catch specific exceptions instead of Exception?
Addressed.
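
For reference, an illustrative sketch of the general change (the class and 
method names here are mine, not the patch code): catch the specific exceptions 
a call can actually throw instead of a blanket catch (Exception e).

{code:java}
import java.io.IOException;
import java.io.InputStream;

// Illustrative only, not the patch code.
public class NarrowCatchExample {

  static int readFirstByte(InputStream in) {
    try {
      return in.read();
    } catch (IOException e) {  // specific exception, not catch (Exception e)
      System.err.println("read failed: " + e.getMessage());
      return -1;
    }
  }
}
{code}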

bq. this doesn't need to reference the JIRA (both instances). Should specify 
that implementers should make it threadsafe (since we use it without locks in 
various places)
Addressed.
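
Below is a rough sketch of the kind of note added for implementers (the 
interface name and javadoc wording are illustrative, not the committed code): 
since the object is used without external locks in several code paths, 
implementations must be thread-safe.

{code:java}
/** Illustrative interface, not the committed code. */
public interface StreamUnbuffer {

  /**
   * Releases any socket or buffer held by the underlying stream.
   * Implementations must be thread-safe; callers may invoke this
   * concurrently from multiple threads without holding a lock.
   */
  void unbuffer();
}
{code}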

bq. This call means that openReader can only be called within a lock for fsdis. 
The javadocs should say so. (do all uses already do this?)

bq. This call means that createReader(FileSystem, Path, 
FSDataInputStreamWrapper, long, CacheConfig, Configuration) can only be called 
within a lock for fsdis, javadocs should say so. (do all uses already do this?)

To my knowledge, concurrency here is already handled by HDFS. Each client has 
its own BlockReader and socket, so even if the first client unbuffers its 
socket, it does not affect block reads by the other clients.
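
As a sketch of that argument (assuming a Hadoop version where 
FSDataInputStream.unbuffer() is available; the path below is hypothetical): 
two readers opened on the same file each get their own stream, BlockReader 
and socket, so unbuffering one does not disturb reads on the other.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UnbufferIsolationSketch {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/example-hfile");  // hypothetical path

    FSDataInputStream readerA = fs.open(file);
    FSDataInputStream readerB = fs.open(file);

    byte[] buf = new byte[4096];
    readerA.read(buf, 0, buf.length);
    readerA.unbuffer();                // drops readerA's socket/buffers only

    readerB.read(buf, 0, buf.length);  // readerB is unaffected
    readerA.close();
    readerB.close();
  }
}
{code}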

Please review and let me know your thoughts.
Thanks.

> Hbase does not closing a closed socket resulting in many CLOSE_WAIT 
> --------------------------------------------------------------------
>
>                 Key: HBASE-9393
>                 URL: https://issues.apache.org/jira/browse/HBASE-9393
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.94.2, 0.98.0
>         Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 
> 7279 regions
>            Reporter: Avi Zrachya
>            Assignee: Ashish Singhi
>            Priority: Critical
>             Fix For: 2.0.0
>
>         Attachments: HBASE-9393.patch, HBASE-9393.v1.patch, 
> HBASE-9393.v10.patch, HBASE-9393.v11.patch, HBASE-9393.v12.patch, 
> HBASE-9393.v13.patch, HBASE-9393.v14.patch, HBASE-9393.v15.patch, 
> HBASE-9393.v2.patch, HBASE-9393.v3.patch, HBASE-9393.v4.patch, 
> HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch, 
> HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v6.patch, 
> HBASE-9393.v7.patch, HBASE-9393.v8.patch, HBASE-9393.v9.patch
>
>
> HBase does not close a dead connection with the datanode.
> This results in over 60K CLOSE_WAIT sockets, and at some point HBase cannot 
> connect to the datanode because there are too many mapped sockets from one 
> host to another on the same port.
> The example below shows a low CLOSE_WAIT count because we had to restart 
> hbase to solve the problem; over time it increases to 60-100K sockets 
> in CLOSE_WAIT.
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root     17255 17219  0 12:26 pts/0    00:00:00 grep 21592
> hbase    21592     1 17 Aug29 ?        03:29:06 
> /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m 
> -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode 
> -Dhbase.log.dir=/var/log/hbase 
> -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
