[ https://issues.apache.org/jira/browse/HBASE-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13669005#comment-13669005 ]

Hadoop QA commented on HBASE-8630:
----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585151/8630-trunk-v1.txt
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
                        Please justify why no new tests are needed for this 
patch.
                        Also please list what manual steps were performed to 
verify this patch.

    {color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

    {color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100 characters.

    {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

    {color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5861//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5861//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5861//console

This message is automatically generated.
                
> Share Socket Connections for different HConnectionImplementations
> -----------------------------------------------------------------
>
>                 Key: HBASE-8630
>                 URL: https://issues.apache.org/jira/browse/HBASE-8630
>             Project: HBase
>          Issue Type: Improvement
>          Components: Client
>    Affects Versions: 0.94.3
>            Reporter: cuijianwei
>         Attachments: 8630-trunk-v1.txt
>
>
> In org.apache.hadoop.hbase.ipc.HBaseClient.java, socket connections are 
> pooled in a map as:
> {code} protected final PoolMap<ConnectionId, Connection> connections; {code}
> The hashCode of ConnectionId is defined as:
> {code}     public int hashCode() {
>       return (address.hashCode() + PRIME * (
>                   PRIME * System.identityHashCode(protocol) ^
>              (ticket == null ? 0 : ticket.hashCode()) )) ^ rpcTimeout;
>     } {code}
> As we can see, ticket.hashCode() contributes to the hashCode of ConnectionId. 
> For HBase without authentication, the ticket will be a HadoopUser; for HBase 
> with authentication, it will be a SecureHadoopUser. Neither HadoopUser nor 
> SecureHadoopUser overrides the hashCode() method, so two tickets have the 
> same hashCode only when they refer to the same object.
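> For example, the following standalone sketch (with a hypothetical Ticket 
> class standing in for HadoopUser/SecureHadoopUser, which likewise inherit 
> Object.hashCode()) illustrates why two tickets for the same principal still 
> hash differently:
> {code}
> public class IdentityHashDemo {
>   // Hypothetical stand-in for HadoopUser/SecureHadoopUser:
>   // no hashCode()/equals() override.
>   static class Ticket {
>     final String name;
>     Ticket(String name) { this.name = name; }
>   }
>   public static void main(String[] args) {
>     Ticket a = new Ticket("hbase");
>     Ticket b = new Ticket("hbase");
>     // Same principal name, but Object.hashCode() is identity-based, so the
>     // two instances almost certainly hash differently and are never equal.
>     System.out.println(a.hashCode() == b.hashCode()); // very likely false
>     System.out.println(a.equals(b));                  // false
>   }
> }
> {code}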
> On the other hand, when we use HTable to access HBase, we first invoke 
> HBaseRPC.waitForProxy(...) to create a proxy for the region server, as 
> follows:
> {code}              server = (HRegionInterface) HBaseRPC.waitForProxy(
>                   serverInterfaceClass, HRegionInterface.VERSION,
>                   address, this.conf,
>                   this.maxRPCAttempts, this.rpcTimeout, this.rpcTimeout); 
> {code}
> Then HBaseRPC.getProxy(...) will be called, as follows:
> {code}public static VersionedProtocol getProxy(Class<? extends 
> VersionedProtocol> protocol,
>       long clientVersion, InetSocketAddress addr, Configuration conf,
>       SocketFactory factory, int rpcTimeout) throws IOException {
>     return getProxy(protocol, clientVersion, addr,
>         User.getCurrent(), conf, factory, rpcTimeout);
>   } {code}
> We can see that User.getCurrent() is invoked to generate the ticket used to 
> build the socket connection. User.getCurrent() is defined as:
> {code}
> public static User getCurrent() throws IOException {
>     User user;
>     if (IS_SECURE_HADOOP) {
>       user = new SecureHadoopUser();
>     } else {
>       user = new HadoopUser();
>     }
>     if (user.getUGI() == null) {
>       return null;
>     }
>     return user;
>   }
> {code}
> Therefore, we get a different ticket each time we create a proxy for the same 
> region server, so these proxies cannot share the already created socket 
> connections and will open new ones even if they use the same 
> HBaseConfiguration.
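> To make this concrete, here is a small sketch (assuming, as described above, 
> that neither HadoopUser nor SecureHadoopUser overrides hashCode(), and that 
> User lives in org.apache.hadoop.hbase.security as in the 0.94 client):
> {code}
> import java.io.IOException;
> import org.apache.hadoop.hbase.security.User;
> public class TicketHashDemo {
>   public static void main(String[] args) throws IOException {
>     // Two separate calls return two distinct User objects for the same user.
>     User ticket1 = User.getCurrent();
>     User ticket2 = User.getCurrent();
>     // Identity-based hash codes, so the values will almost certainly differ;
>     // ConnectionId.hashCode() therefore differs too, and the PoolMap lookup
>     // misses, opening a new socket instead of reusing the pooled one.
>     System.out.println(ticket1.hashCode() == ticket2.hashCode()); // likely false
>   }
> }
> {code}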
> We can use the following test case to validate the description above:
> {code}
> public static void main(String args[]) throws Exception {
>     Configuration conf = HBaseConfiguration.create();
>     for (int i = 0;; ++i) {
>       HTable table = new HTable(conf, TestTable.testTableName);
>       table.close();
>     }
> }
> {code}
> Each time we close the HTable, the region server proxies it created are 
> closed because the underlying HConnectionImplementation is closed. However, 
> the socket connections that were created are not closed; they stay open, 
> waiting to be shared in the future. When we create an HTable in the next 
> iteration, we create the server proxy again, get a new ticket and 
> consequently create new socket connections. The socket connections created 
> in the previous iteration can never be reused. As the loop goes on, thousands 
> of sockets are opened to the region servers, until we get an exception 
> indicating that no more sockets can be created.
> To fix the problem, maybe we could use ticket.getName().hashCode() instead of 
> ticket.hashCode()?
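> For illustration only, a minimal sketch of what that change could look like 
> in ConnectionId.hashCode(), reusing the fields from the snippet quoted above 
> (the actual patch, 8630-trunk-v1.txt, may differ):
> {code}
>     @Override
>     public int hashCode() {
>       // Hash the ticket by principal name rather than by object identity, so
>       // tickets produced by different User.getCurrent() calls for the same
>       // user map to the same pooled connection.
>       int ticketHash = (ticket == null) ? 0 : ticket.getName().hashCode();
>       return (address.hashCode() + PRIME * (
>                   PRIME * System.identityHashCode(protocol) ^ ticketHash)) ^ rpcTimeout;
>     }
> {code}
> For the connections to actually be shared, ConnectionId.equals() would also 
> need to treat tickets with the same name as equal, since the PoolMap lookup 
> relies on equals() as well as hashCode().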

