[
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15160133#comment-15160133
]
Sean Busbey commented on HBASE-9393:
------------------------------------
Several of your log messages are at ERROR, but they don't give an operator any
idea of what to do next: no pointer to a root cause, no corrective action, and
no way to get more information. Could we add that kind of guidance, or else
downgrade them to INFO or WARN?
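For instance, the reflection-failure message could carry a hint about impact and next steps. A sketch only: the wording and the WARN level are just suggestions, and {{ActionableLogExample}} is a standalone stand-in class, not part of the patch.

```java
import java.util.logging.Logger;

// Sketch: the same failure, logged at WARN with a hint about the impact and a
// pointer to where the operator can read more. Stand-in class, not patch code.
public class ActionableLogExample {
  private static final Logger LOG = Logger.getLogger("FSDataInputStreamWrapper");

  // Build the suggested message; separated out so the wording is easy to test.
  public static String message(Class<?> streamClass) {
    return "Failed to find 'unbuffer' method in class " + streamClass
        + "; sockets may accumulate in CLOSE_WAIT state. This is expected on"
        + " Hadoop versions without CanUnbuffer; see HBASE-9393 for details.";
  }

  public static void main(String[] args) {
    LOG.warning(message(java.io.InputStream.class));
  }
}
```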
{code}
+  private static void setInstanceOfCanUnbuffer(boolean instanceOfCanUnbuffer) {
+    FSDataInputStreamWrapper.instanceOfCanUnbuffer = instanceOfCanUnbuffer;
+  }
+
+  private static void setUnbufferMethod(Method unbuffer) {
+    FSDataInputStreamWrapper.unbuffer = unbuffer;
+  }
{code}
Why are we making these assignments indirectly via methods?
{code}
+  if (FSDataInputStreamWrapper.instanceOfCanUnbuffer == null) {
+    // To ensure we compute whether the stream is instance of CanUnbuffer only once.
+    FSDataInputStreamWrapper.setInstanceOfCanUnbuffer(false);
+    Class<?>[] streamInterfaces = streamClass.getInterfaces();
+    for (Class c : streamInterfaces) {
+      if (c.getCanonicalName().toString().equals("org.apache.hadoop.fs.CanUnbuffer")) {
+        try {
+          FSDataInputStreamWrapper.setUnbufferMethod(streamClass.getDeclaredMethod("unbuffer"));
+        } catch (Exception e) {
+          LOG.error("Failed to find 'unbuffer' method in class " + streamClass, e);
+          return;
+        }
+        FSDataInputStreamWrapper.setInstanceOfCanUnbuffer(true);
+        break;
+      }
+    }
+  }
{code}
This doesn't look like it will behave correctly in the presence of concurrency.
Can we do the reflection setup in a static initializer?
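Something along these lines could work. This is only a sketch: {{UnbufferSupport}} is a standalone stand-in for the real {{FSDataInputStreamWrapper}}, and it assumes probing for the interface by name is acceptable. Class loading guarantees the static initializer runs once, before any use, so the two fields are published safely without extra synchronization.

```java
import java.lang.reflect.Method;

// Sketch: resolve the optional CanUnbuffer support once, in a static
// initializer. JLS class-initialization semantics make the assignments
// happen-before any caller sees the class, so no race is possible.
public class UnbufferSupport {
  // Assigned exactly once during class initialization; null means unsupported.
  private static final Class<?> CAN_UNBUFFER_CLASS;
  private static final Method UNBUFFER_METHOD;

  static {
    Class<?> clazz = null;
    Method method = null;
    try {
      // Probe by name so this class still loads on Hadoop versions that
      // predate the CanUnbuffer interface.
      clazz = Class.forName("org.apache.hadoop.fs.CanUnbuffer");
      method = clazz.getMethod("unbuffer");
    } catch (ClassNotFoundException | NoSuchMethodException e) {
      // Interface or method absent: leave unbuffer support disabled.
    }
    CAN_UNBUFFER_CLASS = clazz;
    UNBUFFER_METHOD = method;
  }

  /** Invokes unbuffer() on the stream if supported; returns whether it ran. */
  public static boolean tryUnbuffer(Object stream) {
    if (CAN_UNBUFFER_CLASS == null || !CAN_UNBUFFER_CLASS.isInstance(stream)) {
      return false;
    }
    try {
      UNBUFFER_METHOD.invoke(stream);
      return true;
    } catch (ReflectiveOperationException e) {
      return false;
    }
  }
}
```

Making both fields {{final}} also documents the single-assignment intent, which answers the earlier question about the setter indirection: with a static initializer the setters are unnecessary.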
{code}
+  if (FSDataInputStreamWrapper.instanceOfCanUnbuffer) {
+    try {
+      FSDataInputStreamWrapper.unbuffer.invoke(wrappedStream);
+    } catch (Exception e) {
+      LOG.error("Failed to unbuffer the stream so possibly there may be a TCP socket "
+          + "connection left open in CLOSE_WAIT state for this RegionServer.", e);
+    }
+  }
{code}
This should probably have an else clause that similarly gives the warning. In
that case, it should probably give a pointer to this issue.
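The call site with an else clause might look like this. A sketch only: {{UnbufferCallSite}} is a hypothetical stand-in that takes the wrapper's fields as parameters so it runs standalone, and the message wording is just a suggestion.

```java
import java.lang.reflect.Method;
import java.util.logging.Logger;

// Sketch of the unbuffer call site with the suggested else branch.
// The real code would read FSDataInputStreamWrapper's fields directly;
// here they are parameters so the snippet is self-contained.
public class UnbufferCallSite {
  private static final Logger LOG = Logger.getLogger("UnbufferCallSite");

  /** Returns true if unbuffer was invoked, false on either warning path. */
  public static boolean unbuffer(Object wrappedStream, Method unbufferMethod,
      boolean instanceOfCanUnbuffer) {
    if (instanceOfCanUnbuffer) {
      try {
        unbufferMethod.invoke(wrappedStream);
        return true;
      } catch (Exception e) {
        LOG.severe("Failed to unbuffer the stream, so a TCP socket connection "
            + "may be left open in CLOSE_WAIT state for this RegionServer; "
            + "see HBASE-9393: " + e);
        return false;
      }
    } else {
      // Suggested else clause: same operator guidance, plus the issue pointer.
      LOG.warning("Stream does not support unbuffer, so TCP socket connections "
          + "may accumulate in CLOSE_WAIT state for this RegionServer; "
          + "see HBASE-9393 for background.");
      return false;
    }
  }
}
```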
> Hbase does not closing a closed socket resulting in many CLOSE_WAIT
> --------------------------------------------------------------------
>
> Key: HBASE-9393
> URL: https://issues.apache.org/jira/browse/HBASE-9393
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.94.2, 0.98.0
> Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node,
> 7279 regions
> Reporter: Avi Zrachya
> Assignee: Ashish Singhi
> Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-9393.patch, HBASE-9393.v1.patch,
> HBASE-9393.v10.patch, HBASE-9393.v11.patch, HBASE-9393.v12.patch,
> HBASE-9393.v2.patch, HBASE-9393.v3.patch, HBASE-9393.v4.patch,
> HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch,
> HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v6.patch,
> HBASE-9393.v7.patch, HBASE-9393.v8.patch, HBASE-9393.v9.patch
>
>
> HBase does not close a dead connection with the datanode.
> This results in over 60K sockets in CLOSE_WAIT, and at some point HBase can
> no longer connect to the datanode because too many sockets are mapped from
> one host to another on the same port.
> The example below shows a low CLOSE_WAIT count because we had to restart
> HBase to solve the problem; over time it will increase to 60-100K sockets in
> CLOSE_WAIT.
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root 17255 17219 0 12:26 pts/0 00:00:00 grep 21592
> hbase 21592 1 17 Aug29 ? 03:29:06
> /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m
> -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
> -Dhbase.log.dir=/var/log/hbase
> -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)