[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2017-08-10 Thread Robert Yokota (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16122377#comment-16122377
 ] 

Robert Yokota commented on HBASE-16388:
---

Using "hbase.client.perserver.requests.threshold" would be much easier if there 
were metrics for the value that is tested against the threshold.  I've 
submitted HBASE-18559 with a possible way to do this, but perhaps there is a 
better way.
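
The value being compared against the threshold is the number of concurrent requests the 
client currently has outstanding to a given region server. Absent a built-in metric, an 
application can approximate it by tracking its own in-flight counts. The sketch below is 
only a hypothetical application-side approximation (plain string server names); it is not 
the HBase client's internal counter cache and may drift from it.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Application-side approximation of the per-region-server in-flight request
 * count, i.e. the value the client compares against
 * hbase.client.perserver.requests.threshold. This is NOT the client's
 * internal counter cache; it only sees requests the application wraps.
 */
public class PerServerInFlightTracker {
  private final Map<String, AtomicInteger> inFlight = new ConcurrentHashMap<>();

  /** Call just before sending a request to the given server; returns the new count. */
  public int begin(String server) {
    return inFlight.computeIfAbsent(server, s -> new AtomicInteger()).incrementAndGet();
  }

  /** Call when the request completes (success or failure). */
  public void end(String server) {
    AtomicInteger counter = inFlight.get(server);
    if (counter != null) {
      counter.decrementAndGet();
    }
  }

  /** Snapshot suitable for exporting as one gauge per server. */
  public Map<String, Integer> snapshot() {
    Map<String, Integer> copy = new ConcurrentHashMap<>();
    inFlight.forEach((server, counter) -> copy.put(server, counter.get()));
    return copy;
  }
}
{code}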

> Prevent client threads being blocked by only one slow region server
> ---
>
> Key: HBASE-16388
> URL: https://issues.apache.org/jira/browse/HBASE-16388
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16388-branch-1-v1.patch, 
> HBASE-16388-branch-1-v2.patch, HBASE-16388-v1.patch, HBASE-16388-v2.patch, 
> HBASE-16388-v2.patch, HBASE-16388-v2.patch, HBASE-16388-v2.patch, 
> HBASE-16388-v3.patch
>
>
> It is a common use case for HBase users to have several threads/handlers in 
> their service, each handler with its own Table/HTable instance. Users generally 
> assume the handlers are independent and do not interact with each other.
> However, in an extreme case, if a region server is very slow, every request to 
> this RS will time out, and the handlers of the user's service may be occupied 
> by the long-waiting requests, so even requests belonging to other RSs will 
> also time out.
> For example: 
> Suppose we have 100 handlers in a client service (timeout is 1000 ms) and HBase 
> has 10 region servers whose average response time is 50 ms. If no region server 
> is slow, we can handle 2000 requests per second.
> Now this service's QPS is 1000, and one region server is so slow that every 
> request to it times out. Users hope that only 10% of requests fail, and that 
> the other 90% still see a 50 ms response time, because only 10% of requests are 
> routed to the slow RS. However, every second we get 100 long-waiting requests, 
> which is exactly enough to occupy all 100 handlers. So all handlers are blocked 
> and the availability of the service drops to almost zero.
> To prevent this, we can limit the maximum number of concurrent requests to one 
> RS at the process level. Requests exceeding the limit immediately throw 
> ServerBusyException (extends DoNotRetryIOE) to the user. In the above case, if 
> we set this limit to 20, only 20 handlers will be occupied and the other 80 
> handlers can still serve requests to other RSs. The availability of the service 
> is 90%, as expected.
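
As a minimal sketch of how the limit described above is used from application code. The 
table name, row key and the chosen limit of 20 are illustrative, not defaults; the 
exception class is ServerTooBusyException, the name the feature was committed under after 
the rename requested later in this thread.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.ipc.ServerTooBusyException;
import org.apache.hadoop.hbase.util.Bytes;

public class PerServerLimitExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // Limit this client process to 20 concurrent requests per region server,
    // as in the example above (illustrative value, not the default).
    conf.setInt("hbase.client.perserver.requests.threshold", 20);

    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("example_table"))) {
      try {
        table.get(new Get(Bytes.toBytes("example-row")));
      } catch (ServerTooBusyException e) {
        // Thrown on the client side once the per-server limit is exceeded;
        // it extends DoNotRetryIOException, so it fails fast and frees the
        // handler thread for requests to other, healthy region servers.
        System.err.println("Per-server request limit reached: " + e.getMessage());
      }
    }
  }
}
{code}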



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-10-18 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15587510#comment-15587510
 ] 

Phil Yang commented on HBASE-16388:
---

Yes, this is a client-only change covering all RPC requests, while AP's limit 
only covers part of the operations in HTable.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-10-18 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15587002#comment-15587002
 ] 

Mikhail Antonov commented on HBASE-16388:
-

[~yangzhe1991]

A good one, I missed that. Is it fair to say that the primary motivation is 
that the global, per-region and per-server limits in AP are flawed since they 
are only ever enforced on the write path (going through AP#submit() / the 
buffered mutator)?

With this being a client-only change, I'd consider backporting it to 1.3. 
Anything that reduces the blast radius of a bad RS is an important reliability 
fix IMO.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491629#comment-15491629
 ] 

Hudson commented on HBASE-16388:


FAILURE: Integrated in Jenkins build HBase-1.4 #416 (See 
[https://builds.apache.org/job/HBase-1.4/416/])
HBASE-16388 Prevent client threads being blocked by only one slow region 
(stack: rev 069d1f73fa7e1a2b4c21ba95dea867d077a51068)
* (add) 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/ServerTooBusyException.java
* (edit) hbase-common/src/main/resources/hbase-default.xml
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java




[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-14 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491443#comment-15491443
 ] 

Jerry He commented on HBASE-16388:
--

bq.  because the counter cache is a static object.

Got it, thanks.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491289#comment-15491289
 ] 

Hudson commented on HBASE-16388:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1601 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1601/])
HBASE-16388 Prevent client threads being blocked by only one slow region 
(stack: rev 8ef6c76344127f2f4d2f9536d87fa6fc7b5c7132)
* (edit) hbase-common/src/main/resources/hbase-default.xml
* (add) 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/ServerTooBusyException.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java




[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-14 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491232#comment-15491232
 ] 

Phil Yang commented on HBASE-16388:
---

Thanks for the reviews and for committing.

I'm not familiar with the release strategy. I didn't make a patch for 
branch-1.3 because I thought we would not add new features to 1.3. If we need 
this feature in 1.3, I can make a patch for it tomorrow (UTC+8).



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-14 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491225#comment-15491225
 ] 

Phil Yang commented on HBASE-16388:
---

As stated in hbase-default.xml, the limit is process-level even if the user 
uses multiple HConnections, because the counter cache is a static object.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-14 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491167#comment-15491167
 ] 

Jerry He commented on HBASE-16388:
--

Good feature! Just need the release notes and the hbase-default.xml description 
to be clearer. The threshold limit is per Connection/HConnection, as compared to 
a situation where a multi-threaded client opens multiple HConnections.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15490850#comment-15490850
 ] 

stack commented on HBASE-16388:
---

Findbugs issue is unrelated:

Code  Warning
RV    Return value of java.util.concurrent.CountDownLatch.await(long, TimeUnit) 
ignored in org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper()

Looks like an attempt at fixing this elsewhere did not work.

I tried the tests locally and they passed. Need to dig in on why these tests 
are failing; they seem unrelated to this patch. Going to commit.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489843#comment-15489843
 ] 

Hadoop QA commented on HBASE-16388:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 4m 6s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 5s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
58s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
36s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 11s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
20m 28s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 57s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 56s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 51s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
51s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 156m 

[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15487161#comment-15487161
 ] 

stack commented on HBASE-16388:
---

Ok. Expert config. If you are seeing SBE, then you have set 
hbase.client.perserver.requests.threshold down from its default value, so you 
know about it.

The patch has rotted because of HBASE-16445 "Refactor and reimplement 
RpcClient". Please redo it, and change ServerBusyException to 
ServerTooBusyException just so it has the same form as RegionTooBusyException, 
and I'm +. Thanks [~yangzhe1991]



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-12 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486310#comment-15486310
 ] 

Phil Yang commented on HBASE-16388:
---

If a user connects to a cluster with N region servers, we expect that if one 
region server is slow and all requests to it time out, then at a higher level 
that monitors the service using the HBase client, only 1/N of requests should 
fail or time out. If the failed requests are more than 1/N, especially much 
more than 1/N, the limit should be changed to a lower number.

And if there is no slow server in the cluster but clients still throw SBE, 
there may be two reasons: the limit is too strict, or the regions are not 
balanced and much more than 1/N of the keys belong to one server.

So the proper limit should be several times 1/N * threadNumber: too high may 
reduce availability when there are slow servers, too low may reject normal 
requests.
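
As a worked example of this rule of thumb, using the numbers from the issue 
description: with threadNumber = 100 client threads and N = 10 region servers, 
1/N * threadNumber = 100 / 10 = 10 concurrent requests per server under an even 
key distribution. Setting the limit to a small multiple of that, e.g. 
2 * 10 = 20 (the value used in the description), leaves headroom for moderate 
skew while still preventing one slow server from tying up more than 20 of the 
100 handler threads.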



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486221#comment-15486221
 ] 

stack commented on HBASE-16388:
---

Where does the user get feedback on how well or how badly their setting of 
this config is doing, [~yangzhe1991]?

On SBE, I confused it with RegionTooBusyException.java. Pardon me; I should 
have seen that it is a new exception, so ignore my remark. Answer the above 
question and I'll commit (after changing the name of SBE to 
ServerTooBusyException and adding detail on this new config). Thanks 
[~yangzhe1991]





[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-12 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486102#comment-15486102
 ] 

Phil Yang commented on HBASE-16388:
---

We limit the number of concurrent requests. The suitable value depends on the 
number of RSs and the number of threads accessing the Table API, but we only 
know the number of RSs, not the number of threads, so this should be set by 
users according to their own environment.

SBE is thrown from the client side; we don't send the request at all, so this 
is a client-only fix. Once we have AsyncTable and users use it in an async way, 
which means users' threads will not be blocked, this conf is still useful to 
prevent too many pending requests from causing an OOM. And of course for 
AsyncTable users this limit can be much larger than for blocking Table users.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484159#comment-15484159
 ] 

stack commented on HBASE-16388:
---

[~yangzhe1991] Can this be on by default rather than requiring config to 
enable it? Maybe on by default in master, but in branch-1 you have to enable 
it? It already 'complains' loudly when up against the limit by throwing 
ServerBusyException, but maybe the SBE message should say that we are up 
against the configured limit, name the config to raise it ... and note that 
the SBE is coming from the client side rather than the server (which may not 
be immediately obvious)?

Nice test.

Needs a release note because this is an important issue with a nice fix.

Thanks





[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-12 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483425#comment-15483425
 ] 

Phil Yang commented on HBASE-16388:
---

[~stack] Any concerns? Thanks.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15473734#comment-15473734
 ] 

Hadoop QA commented on HBASE-16388:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 47s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 25s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
51s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 149m 22s {color} 
| {color:black} {color} |

[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15473426#comment-15473426
 ] 

Hadoop QA commented on HBASE-16388:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 6m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} master passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
36m 32s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 57s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 8s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 124m 5s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 204m 27s {color} 
| {color:black} {color} |
\\
\\
|| R

[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-24 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434405#comment-15434405
 ] 

Phil Yang commented on HBASE-16388:
---

Any comments? Thanks



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-18 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427537#comment-15427537
 ] 

Phil Yang commented on HBASE-16388:
---

The failed tests are unrelated and pass locally.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426338#comment-15426338
 ] 

Hadoop QA commented on HBASE-16388:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
56s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 0s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 44s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 34s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
43s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 128m 25

[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-17 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15425959#comment-15425959
 ] 

Phil Yang commented on HBASE-16388:
---

Will we use a non-blocking Stub in the blocking Table interface? This patch only 
handles BlockingStub. I think AsyncTable won't block users' threads, so we do not 
need any per-server limit on concurrent requests in a Stub that is only used by 
AsyncTable.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-17 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15425911#comment-15425911
 ] 

Duo Zhang commented on HBASE-16388:
---

How do you deal with the async call? Admittedly it is not used right now, and I 
plan to reimplement it...



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423990#comment-15423990
 ] 

Hadoop QA commented on HBASE-16388:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} master passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} master passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 48s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 94m 14s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
45s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 157m 43s {color} 
| {color:black} {color} |
\\
\

[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422508#comment-15422508
 ] 

Hadoop QA commented on HBASE-16388:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 13m 14s 
{color} | {color:red} Docker failed to build yetus/hbase:date2016-08-16. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823847/HBASE-16388-v2.patch |
| JIRA Issue | HBASE-16388 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3102/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.





[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-12 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418955#comment-15418955
 ] 

Phil Yang commented on HBASE-16388:
---

A further discussion:
The best way to prevent client threads from being blocked by Table operations is 
to support an async HBase client. After HBASE-13784, users can adopt a Staged 
Event-Driven Architecture to solve this more gracefully. If users must use 
blocking mode, this issue at least avoids the extreme case.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-11 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418295#comment-15418295
 ] 

Phil Yang commented on HBASE-16388:
---

It is configurable (the default is 2147483647, which means disabled), and 
choosing the threshold should take your concerns into account. The main objective 
is to prevent a slow region server from blocking all client threads, so the 
threshold should be much higher than the average number of threads accessing one 
server; it mainly depends on the number of threads in the client.

For example, if we have 100 threads in the client and 5 region servers, the 
threshold should be much higher than 20 to avoid failing requests on 
unbalanced-but-working servers; we can set it to 50, so if one server is slow we 
still have half of the threads working well. If we have only one region server, 
the threshold is probably meaningless and we have to disable it.
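
For later readers, here is a minimal sketch of what setting that threshold on the 
client could look like. The configuration key comes from this thread; the value 
of 50 follows the 100-threads / 5-region-servers example above; the class, table 
and row names are illustrative only and nothing here is copied from the patch.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PerServerThresholdExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // 100 client threads over 5 region servers average out to 20 per server,
    // so pick a limit well above that; 50 matches the example above.
    conf.setInt("hbase.client.perserver.requests.threshold", 50);
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("my_table"))) {
      table.get(new Get(Bytes.toBytes("row1")));
    }
  }
}
{code}

With the default of Integer.MAX_VALUE the check is effectively disabled, which 
matches the single-region-server case described above.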



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-11 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418234#comment-15418234
 ] 

Allan Yang commented on HBASE-16388:


I don't think throttling the threads to one server is reasonable. What if there 
is only one region server serving? Or what if the regions are not balanced and 
some region servers host a lot more regions than others? Threads going to such a 
server will naturally be far more numerous than those going to the others.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417105#comment-15417105
 ] 

Hadoop QA commented on HBASE-16388:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 49s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} master passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} master passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} master passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 37s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 42s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 1s {color} | 
{color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 113m 25s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
54s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 177m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tes

[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-11 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416910#comment-15416910
 ] 

Phil Yang commented on HBASE-16388:
---

Uploaded a patch for master. The logic is very simple. In the test case we should 
test Put on its own because it uses AsyncProcess, which wraps exceptions. The 
patch also changes the description of the configs used by AsyncProcess: 
hbase.client.max.*.tasks only affects mutation operations, not all operations in 
HTable.
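
A hedged illustration of that distinction, assuming an already-open Table named 
table pointed at a saturated region server; the exception types shown are how I 
would expect the failure to surface, not copied from the attached test.

{code:java}
import org.apache.hadoop.hbase.DoNotRetryIOException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BusyServerHandling {
  // Sketch only: 'table' is assumed to be an open Table.
  static void readAndWrite(Table table) throws Exception {
    byte[] row = Bytes.toBytes("row1");
    try {
      table.get(new Get(row));
    } catch (DoNotRetryIOException e) {
      // A Get over the per-server limit should surface the busy error directly
      // (ServerBusyException extends DoNotRetryIOException).
    }
    try {
      table.put(new Put(row).addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"),
          Bytes.toBytes("v")));
    } catch (RetriesExhaustedWithDetailsException e) {
      // A Put goes through AsyncProcess, which wraps per-request failures;
      // the underlying causes are reachable through the details it carries.
    }
  }
}
{code}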



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416563#comment-15416563
 ] 

stack commented on HBASE-16388:
---

Thank you [~yangzhe1991]. Start simple, I'd say. It will be hard enough to test. 
It would be sweet if we could avoid another layer of accounting in the client. 
There is a bunch already going on in AsyncProcess, by region and by server. Can 
this be leveraged?

I like the way you characterize queue-per-client as being at the other end of the 
spectrum from what this patch is trying to do. Looking forward to it, Phil Yang.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416561#comment-15416561
 ] 

stack commented on HBASE-16388:
---

That might work [~aoxiang]. We already have Netty on the client side ([~jurmous]?).



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-10 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416456#comment-15416456
 ] 

Phil Yang commented on HBASE-16388:
---

[~stack] Thanks for your reply.

{quote}
Where will you throttle the requests in the process?
{quote}
At a lower level, probably in AbstractRpcClient. Use a static Guava cache (it 
helps us clean up removed RSs) holding an AtomicLong per server address. One 
question is whether we need to differentiate RPC requests to the master from 
requests to RSs, and requests to the meta table from requests to user tables. If 
not, the logic will be simple. I think the number of concurrent requests to the 
master or the meta table is not very high, so a suitable threshold should not 
fail requests to the master or the meta table, right?
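
To make the idea concrete, a very rough sketch of that per-server bookkeeping as 
I read it (my own reading, not the attached patch): a static Guava cache maps 
each server address to an in-flight counter, and a call is rejected once the 
counter would exceed the threshold. DoNotRetryIOException stands in here for the 
ServerBusyException this issue proposes; the class and method names are 
illustrative.

{code:java}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.hadoop.hbase.DoNotRetryIOException;

public class PerServerCallLimiter {
  // One counter per server address; idle entries expire, which takes care of
  // region servers that have been removed from the cluster.
  private static final LoadingCache<InetSocketAddress, AtomicLong> IN_FLIGHT =
      CacheBuilder.newBuilder()
          .expireAfterAccess(1, TimeUnit.HOURS)
          .build(new CacheLoader<InetSocketAddress, AtomicLong>() {
            @Override
            public AtomicLong load(InetSocketAddress addr) {
              return new AtomicLong();
            }
          });

  static void checkAndIncrement(InetSocketAddress addr, long threshold)
      throws IOException {
    AtomicLong counter = IN_FLIGHT.getUnchecked(addr);
    if (counter.incrementAndGet() > threshold) {
      counter.decrementAndGet();
      // stand-in for ServerBusyException (which extends DoNotRetryIOException)
      throw new DoNotRetryIOException("Too many concurrent requests to " + addr);
    }
  }

  static void decrement(InetSocketAddress addr) {
    IN_FLIGHT.getUnchecked(addr).decrementAndGet();
  }
}
{code}

The caller would invoke checkAndIncrement before issuing the blocking RPC and 
decrement in a finally block; whether master and meta requests share these 
counters is exactly the open question above.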

{quote}
do a queue per client on the regionserver
{quote}
Sounds reasonable; it can prevent a server from being "DDoSed" by a client, while 
in this issue we prevent a client from being blocked by a slow server. They are 
two approaches that both improve availability in extreme cases.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-10 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416419#comment-15416419
 ] 

binlijin commented on HBASE-16388:
--

Maybe this would be easier with NettyRpcServer, by just adding a throttle 
handler? With Netty we can control writes to and reads from the client (see the 
sketch after the links below).

http://normanmaurer.me/presentations/2014-facebook-eng-netty/slides.html#9.0

http://normanmaurer.me/presentations/2014-facebook-eng-netty/slides.html#33.0
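
A very rough Netty 4 sketch of the kind of throttle handler suggested here, in 
case it helps the discussion. The class and method bodies are illustrative only, 
it assumes exactly one response write per accepted request, and nothing below is 
taken from HBase code.

{code:java}
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.util.ReferenceCountUtil;
import java.util.concurrent.atomic.AtomicInteger;

public class ClientThrottleHandler extends ChannelDuplexHandler {
  private final int maxInFlight;
  private final AtomicInteger inFlight = new AtomicInteger();

  public ClientThrottleHandler(int maxInFlight) {
    this.maxInFlight = maxInFlight;
  }

  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    if (inFlight.incrementAndGet() > maxInFlight) {
      inFlight.decrementAndGet();
      ReferenceCountUtil.release(msg); // drop the request instead of queueing it
      // a real server would also write back a "busy" response here
      return;
    }
    super.channelRead(ctx, msg);
  }

  @Override
  public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
      throws Exception {
    // assumes one outbound write per accepted request
    inFlight.decrementAndGet();
    super.write(ctx, msg, promise);
  }
}
{code}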







[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416281#comment-15416281
 ] 

stack commented on HBASE-16388:
---

Tell us more, [~yangzhe1991]. Where will you throttle the requests in the 
process? Hopefully it is not more on top of AsyncProcess (smile). But it sounds 
great.

Related, an interesting suggestion I heard last week was to do a queue per 
client on the regionserver. This would help when a 'bad client' tends to hog or 
swap the RegionServer... When the 'bad client' fills its queue, we can reject 
further requests. Meantime, other clients can make progress.



[jira] [Commented] (HBASE-16388) Prevent client threads being blocked by only one slow region server

2016-08-10 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415245#comment-15415245
 ] 

Phil Yang commented on HBASE-16388:
---

I'll upload a patch this week that adds a new configuration, 
hbase.client.perserver.requests.threshold, to limit the max concurrent requests 
to one RS at the process level. Comments are welcome, thanks.
