[
https://issues.apache.org/jira/browse/HBASE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14606914#comment-14606914
]
Rocju commented on HBASE-13437:
-------------------------------
After this bug was fixed, I found a new one: the Thrift call throws an exception:
org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    at org.apache.hadoop.hbase.thrift2.generated.THBaseService$Client.recv_getScannerResults(THBaseService.java:703)
    at org.apache.hadoop.hbase.thrift2.generated.THBaseService$Client.getScannerResults(THBaseService.java:688)
    at com.nl.test.ScanTask.run(ScanTask.java:47)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:152)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
    ... 9 more
=================================================================
I found that in org.apache.hadoop.hbase.util.ConnectionCache's inner class,
connInfo.updateAccessTime() is never called, so the connection is closed once
maxIdleTime elapses. The cause is that org.apache.hadoop.hbase.client.HTablePool's
findOrCreateTable method returns the cached instance from
HTableInterface table = tables.get(tableName), which does not renew the
connection's access time.
So I modified org.apache.hadoop.hbase.client.HTablePool.java:
private HTableInterface findOrCreateTable(String tableName) {
  // Previously the pooled instance was returned without renewing the
  // connection's access time:
  // HTableInterface table = tables.get(tableName);
  // if (table == null) {
  //   table = createHTable(tableName);
  // }
  // return table;
  return createHTable(tableName);
}
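
To illustrate why the cached lookup starves the access time, here is a minimal,
self-contained sketch of the interaction (the ConnectionInfo class, pooledTables
map, and method bodies below are stand-ins for illustration, not the real
ConnectionCache/HTablePool code): the pooled path never touches lastAccessTime,
so the idle chore eventually closes the connection while clients are still using
the pooled table, which then surfaces as the read timeout above.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

public class IdleEvictionSketch {

  /** Stand-in for ConnectionCache's inner ConnectionInfo. */
  static class ConnectionInfo {
    volatile long lastAccessTime = System.currentTimeMillis();

    void updateAccessTime() {
      lastAccessTime = System.currentTimeMillis();
    }

    /** True when the connection has been idle longer than maxIdleTimeMs. */
    boolean timedOut(long maxIdleTimeMs) {
      return System.currentTimeMillis() - lastAccessTime > maxIdleTimeMs;
    }
  }

  static final ConnectionInfo connInfo = new ConnectionInfo();
  static final Map<String, String> pooledTables = new HashMap<>();

  /** Path that goes through the connection cache: refreshes the access time. */
  static String createHTable(String tableName) {
    connInfo.updateAccessTime();            // cache lookup renews the idle timer
    pooledTables.put(tableName, "table:" + tableName);
    return pooledTables.get(tableName);
  }

  /** Pooled path: returns the cached table and never touches connInfo. */
  static String findOrCreateTable(String tableName) {
    String table = pooledTables.get(tableName);
    if (table == null) {
      table = createHTable(tableName);
    }
    return table;                           // lastAccessTime stays stale
  }

  public static void main(String[] args) throws Exception {
    long maxIdleTimeMs = TimeUnit.SECONDS.toMillis(2);  // 10 minutes in the real server
    findOrCreateTable("t1");                // first call populates the pool
    Thread.sleep(3000);
    findOrCreateTable("t1");                // served from the pool, no refresh
    // The idle chore would now see the connection as timed out and close it,
    // even though a request just used it -- the scenario behind the read timeout.
    System.out.println("timedOut: " + connInfo.timedOut(maxIdleTimeMs));  // true
  }
}

The workaround above trades pooling for correctness: calling createHTable on
every request refreshes the access time, at the cost of constructing a new
HTableInterface each time. Keeping the pool but renewing the access time on a
cache hit would be an alternative.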
> ThriftServer leaks ZooKeeper connections
> ----------------------------------------
>
> Key: HBASE-13437
> URL: https://issues.apache.org/jira/browse/HBASE-13437
> Project: HBase
> Issue Type: Bug
> Components: Thrift
> Affects Versions: 0.98.8
> Reporter: Winger Pun
> Assignee: Winger Pun
> Fix For: 2.0.0, 1.1.0, 0.98.13, 1.0.2, 1.2.0
>
> Attachments: HBASE-13437_1.patch, HBASE-13437_1.patch,
> hbase-13437-fix.patch
>
>
> HBase ThriftServer caches ZooKeeper connections in memory using
> org.apache.hadoop.hbase.util.ConnectionCache. This class has a mechanism
> called a chore that cleans up connections that have been idle for too long
> (the default is 10 minutes). But the timedOut method, which tests whether a
> connection has been idle longer than maxIdleTime, always returns false, so
> the ZooKeeper connection is never released. If we send a request to the
> ThriftServer every maxIdleTime, it will soon be holding thousands of
> ZooKeeper connections.
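
For reference, this is roughly the cleanup pattern the description refers to: a
periodic chore walks the cached connections and closes the ones whose timedOut
check fires. A minimal, self-contained sketch of that pattern (class, field, and
method names are stand-ins, not the actual ConnectionCache code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ConnectionChoreSketch {

  static class CachedConnection {
    volatile long lastAccessTime = System.currentTimeMillis();

    /** True once the connection has been idle longer than maxIdleTimeMs. */
    boolean timedOut(long maxIdleTimeMs) {
      // If this check always returned false (the bug this issue describes),
      // the chore below would never close anything and ZooKeeper connections
      // would pile up.
      return System.currentTimeMillis() - lastAccessTime > maxIdleTimeMs;
    }

    void close() {
      System.out.println("closing idle connection");
    }
  }

  public static void main(String[] args) throws Exception {
    final long maxIdleTimeMs = 500; // 10 minutes in the real server
    final Map<String, CachedConnection> connections = new ConcurrentHashMap<>();
    connections.put("user-1", new CachedConnection());

    // The "chore": periodically scan the cache and close idle connections.
    ScheduledExecutorService chore = Executors.newSingleThreadScheduledExecutor();
    chore.scheduleAtFixedRate(() -> {
      for (Map.Entry<String, CachedConnection> e : connections.entrySet()) {
        if (e.getValue().timedOut(maxIdleTimeMs)) {
          e.getValue().close();
          connections.remove(e.getKey());
        }
      }
    }, maxIdleTimeMs, maxIdleTimeMs, TimeUnit.MILLISECONDS);

    Thread.sleep(2000);                       // let the chore run a few times
    System.out.println("remaining: " + connections.size()); // 0 when timedOut works
    chore.shutdownNow();
  }
}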