[jira] [Updated] (HBASE-16857) RateLimiter may fail during parallel scan execution

2016-10-17 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-16857:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Oops, it's a duplicate of HBASE-16699.

> RateLimiter may fail during parallel scan execution
> 
>
> Key: HBASE-16857
> URL: https://issues.apache.org/jira/browse/HBASE-16857
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
> Environment: hbase.quota.enabled=true 
> hbase.quota.refresh.period=5000 
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: HBASE-16857.patch
>
>
> Steps to reproduce using Phoenix (the easiest way to run many parallel 
> scans):
> 1. Create a table:
> {code}
> create table "abc" (id bigint not null primary key, name varchar) 
> salt_buckets=50; 
> {code}
> 2. Set a quota from the HBase shell:
> {code}
> set_quota TYPE => THROTTLE, TABLE => 'abc', LIMIT => '10G/sec' 
> {code}
> 3. In Phoenix, run:
> {code}
>  select * from "abc"; 
> {code}
> The query fails with a ThrottlingException. 
> Sometimes it takes several runs to reproduce.
> That happens because of the logic in DefaultOperationQuota. First we run 
> limiter.checkQuota, which may set available to Long.MAX_VALUE; after that 
> we run limiter.grabQuota, which reduces available by 1000 (is that the 
> scan overhead?), and in close() we add this 1000 back. 
> When a number of parallel scans are executing, there is a chance that one 
> thread runs limiter.checkQuota right before a second thread runs close(). 
> The addition overflows, available becomes negative, and the next check 
> simply fails. 
> This behavior was introduced in HBASE-13686.
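The race described above can be sketched as a minimal standalone model. This is not the actual HBase RateLimiter code; the `available` field and the 1000-unit overhead are taken from the description, and the interleaving is simplified to its two essential steps:

```java
// Minimal model of the suspected race: one thread's checkQuota() sets the
// available quota to Long.MAX_VALUE (effectively unlimited), and another
// thread's close() then adds back its 1000-unit scan overhead. The long
// addition wraps around silently, leaving available negative.
public class OverflowSketch {
    static long available;

    public static void main(String[] args) {
        // Thread A: checkQuota() decides the limit is effectively unbounded.
        available = Long.MAX_VALUE;

        // Thread B: close() returns its 1000-unit overhead, unaware that
        // available was just reset. Long.MAX_VALUE + 1000 overflows.
        available += 1000;

        // available is now a large negative number, so the next
        // checkQuota() sees "no quota left" and throws ThrottlingException.
        System.out.println(available < 0);  // prints "true"
    }
}
```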



--
This message was sent by Atlassian JIRA (v6.3.4#6332)


[jira] [Updated] (HBASE-16857) RateLimiter may fail during parallel scan execution

2016-10-17 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-16857:

Attachment: HBASE-16857.patch

A simple patch that prevents overflow during consume().
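A hedged sketch of what such an overflow guard could look like. This illustrates the general technique (saturating instead of wrapping when quota is given back), not the attached patch; the class, method, and field names are assumptions for the example:

```java
// Sketch of an overflow-safe "return quota" operation: clamp the result
// at Long.MAX_VALUE instead of letting long addition wrap to a negative.
public class SafeConsume {
    private long available;

    public SafeConsume(long available) {
        this.available = available;
    }

    // A negative amount returns quota; guard against overflowing upward.
    public void consume(long amount) {
        long next = available - amount;
        if (amount < 0 && next < available) {
            // available - amount wrapped past Long.MAX_VALUE; saturate.
            available = Long.MAX_VALUE;
        } else {
            available = next;
        }
    }

    public long getAvailable() {
        return available;
    }

    public static void main(String[] args) {
        SafeConsume limiter = new SafeConsume(Long.MAX_VALUE);
        limiter.consume(-1000);  // returning quota must not go negative
        System.out.println(limiter.getAvailable() == Long.MAX_VALUE);  // prints "true"
    }
}
```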



[jira] [Updated] (HBASE-16857) RateLimiter may fail during parallel scan execution

2016-10-17 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-16857:

Status: Patch Available  (was: Open)
