[jira] [Commented] (HBASE-15328) Unvalidated Redirect in HMaster

2017-02-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875539#comment-15875539
 ] 

Hudson commented on HBASE-15328:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2543 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2543/])
HBASE-15328 sanity check the redirect used to send master info requests 
(busbey: rev d7ffa0013bde592bb035ce5306c09883a192989f)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> Unvalidated Redirect in HMaster
> ---
>
> Key: HBASE-15328
> URL: https://issues.apache.org/jira/browse/HBASE-15328
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-15328.0.patch, HBASE-15328.1.patch
>
>
> See the OWASP page on why we should clean it up someday:
> https://www.owasp.org/index.php/Unvalidated_Redirects_and_Forwards_Cheat_Sheet
> Here is where we do the redirect:
> {code}
> @Override
> public void doGet(HttpServletRequest request,
> HttpServletResponse response) throws ServletException, IOException {
>   String redirectUrl = request.getScheme() + "://"
> + request.getServerName() + ":" + regionServerInfoPort
> + request.getRequestURI();
>   response.sendRedirect(redirectUrl);
> }
>   }
> {code}
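
For illustration only, a hedged sketch of the kind of sanity checking the commit
above describes ("sanity check the redirect used to send master info requests");
the class name, constructor wiring, and exact checks are assumptions, not the
actual HMaster patch:

{code}
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Hypothetical sketch: only redirect to the co-hosted RegionServer info port. */
public class MasterRedirectSketch extends HttpServlet {
  private final String masterHostname;    // hostname the master believes it runs on
  private final int regionServerInfoPort; // configured info port, -1 if none

  public MasterRedirectSketch(String masterHostname, int regionServerInfoPort) {
    this.masterHostname = masterHostname;
    this.regionServerInfoPort = regionServerInfoPort;
  }

  @Override
  public void doGet(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    // Refuse to redirect when no valid info port is configured.
    if (regionServerInfoPort <= 0 || regionServerInfoPort > 65535) {
      response.sendError(HttpServletResponse.SC_NOT_FOUND, "No RegionServer info port configured");
      return;
    }
    // Refuse to redirect when the requested host does not match this master,
    // so a forged Host header cannot steer the redirect somewhere else.
    if (!masterHostname.equalsIgnoreCase(request.getServerName())) {
      response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Unexpected Host header");
      return;
    }
    String redirectUrl = request.getScheme() + "://"
        + masterHostname + ":" + regionServerInfoPort + request.getRequestURI();
    response.sendRedirect(redirectUrl);
  }
}
{code}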



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17210) Set timeout on trying rowlock according to client's RPC timeout

2017-02-20 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17210:
--
Attachment: HBASE-17210.branch-1.v02.patch

> Set timeout on trying rowlock according to client's RPC timeout
> ---
>
> Key: HBASE-17210
> URL: https://issues.apache.org/jira/browse/HBASE-17210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17120.v1.patch, HBASE-17210.branch-1.v01.patch, 
> HBASE-17210.branch-1.v02.patch, HBASE-17210.v02.patch, HBASE-17210.v03.patch, 
> HBASE-17210.v04.patch, HBASE-17210.v04.patch
>
>
> Currently, when we want to get a row lock, the timeout is fixed (default 30s). 
> But requests from clients have different RPC timeout settings. We can use 
> the client's deadline to set the timeout on tryLock.
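
For illustration, a minimal sketch of the idea (not the actual patch): bound the
row-lock wait by the caller's remaining RPC deadline instead of a fixed 30s. The
deadline plumbing and class name here are hypothetical:

{code}
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/** Hypothetical sketch: bound row-lock waits by the caller's remaining RPC deadline. */
public class DeadlineBoundedRowLock {
  private static final long DEFAULT_WAIT_MS = 30_000L; // previous fixed default

  private final ReentrantLock lock = new ReentrantLock();

  /**
   * @param clientDeadlineMs absolute deadline (epoch millis) of the client's RPC,
   *                         or a value <= 0 if the client did not send one.
   */
  public void lockRow(long clientDeadlineMs) throws IOException, InterruptedException {
    long waitMs = DEFAULT_WAIT_MS;
    if (clientDeadlineMs > 0) {
      // Wait no longer than the time the client is still willing to wait.
      waitMs = Math.min(waitMs, clientDeadlineMs - System.currentTimeMillis());
    }
    if (waitMs <= 0 || !lock.tryLock(waitMs, TimeUnit.MILLISECONDS)) {
      throw new IOException("Timed out waiting for row lock before client deadline");
    }
  }

  public void unlockRow() {
    lock.unlock();
  }
}
{code}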



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17672) "Grant should set access rights appropriately" test fails

2017-02-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875532#comment-15875532
 ] 

Hadoop QA commented on HBASE-17672:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 6s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 6s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 10s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 56s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
7s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853670/HBASE-17672.patch |
| JIRA Issue | HBASE-17672 |
| Optional Tests |  asflicense  unit  rubocop  ruby_lint  |
| uname | Linux 5ba46d5e1ee0 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / d7ffa00 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5780/testReport/ |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5780/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> "Grant should set access rights appropriately" test fails
> -
>
> Key: HBASE-17672
> URL: https://issues.apache.org/jira/browse/HBASE-17672
> Project: HBase
>  Issue Type: Test
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-17672.patch
>
>
> The following test failure is reproducible after HBASE-17472 went in:
> {code}
>   1) Failure:
> test_Grant_should_set_access_rights_appropriately(Hbase::SecureAdminMethodsTest)
> [./src/test/ruby/hbase/security_admin_test.rb:66:in 
> `test_Grant_should_set_access_rights_appropriately'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:154:in 
> `user_permission'
>  
> file:/Users/tyu/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:136:in 
> `user_permission'
>  ./src/test/ruby/hbase/security_admin_test.rb:65:in 
> `test_Grant_should_set_access_rights_appropriately'
>  org/jruby/RubyProc.java:270:in `call'
>  org/jruby/RubyKernel.java:2105:in `send'
>  org/jruby/RubyArray.java:1620:in `each'
>  org/jruby/RubyArray.java:1620:in `each']:
> {code}
> [~openinx]:
> Can you take a look ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-17672) "Grant should set access rights appropriately" test fails

2017-02-20 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875452#comment-15875452
 ] 

Zheng Hu edited comment on HBASE-17672 at 2/21/17 6:53 AM:
---

Sorry about the failed unit test. I've uploaded a patch to fix the failing case; 
all Ruby tests pass on my local machine. 

Could you help review and commit it? Thanks. 


was (Author: openinx):
Sorry about the failed unit test. I've uploaded a patch to fix the failing case; 
all Ruby tests pass on my local machine. 

> "Grant should set access rights appropriately" test fails
> -
>
> Key: HBASE-17672
> URL: https://issues.apache.org/jira/browse/HBASE-17672
> Project: HBase
>  Issue Type: Test
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-17672.patch
>
>
> The following test failure is reproducible after HBASE-17472 went in:
> {code}
>   1) Failure:
> test_Grant_should_set_access_rights_appropriately(Hbase::SecureAdminMethodsTest)
> [./src/test/ruby/hbase/security_admin_test.rb:66:in 
> `test_Grant_should_set_access_rights_appropriately'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:154:in 
> `user_permission'
>  
> file:/Users/tyu/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:136:in 
> `user_permission'
>  ./src/test/ruby/hbase/security_admin_test.rb:65:in 
> `test_Grant_should_set_access_rights_appropriately'
>  org/jruby/RubyProc.java:270:in `call'
>  org/jruby/RubyKernel.java:2105:in `send'
>  org/jruby/RubyArray.java:1620:in `each'
>  org/jruby/RubyArray.java:1620:in `each']:
> {code}
> [~openinx]:
> Can you take a look ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17672) "Grant should set access rights appropriately" test fails

2017-02-20 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17672:
-
Fix Version/s: 2.0.0
Affects Version/s: 2.0.0
   Status: Patch Available  (was: Open)

> "Grant should set access rights appropriately" test fails
> -
>
> Key: HBASE-17672
> URL: https://issues.apache.org/jira/browse/HBASE-17672
> Project: HBase
>  Issue Type: Test
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-17672.patch
>
>
> The following test failure is reproducible after HBASE-17472 went in:
> {code}
>   1) Failure:
> test_Grant_should_set_access_rights_appropriately(Hbase::SecureAdminMethodsTest)
> [./src/test/ruby/hbase/security_admin_test.rb:66:in 
> `test_Grant_should_set_access_rights_appropriately'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:154:in 
> `user_permission'
>  
> file:/Users/tyu/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:136:in 
> `user_permission'
>  ./src/test/ruby/hbase/security_admin_test.rb:65:in 
> `test_Grant_should_set_access_rights_appropriately'
>  org/jruby/RubyProc.java:270:in `call'
>  org/jruby/RubyKernel.java:2105:in `send'
>  org/jruby/RubyArray.java:1620:in `each'
>  org/jruby/RubyArray.java:1620:in `each']:
> {code}
> [~openinx]:
> Can you take a look ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17672) "Grant should set access rights appropriately" test fails

2017-02-20 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17672:
-
Attachment: HBASE-17672.patch

> "Grant should set access rights appropriately" test fails
> -
>
> Key: HBASE-17672
> URL: https://issues.apache.org/jira/browse/HBASE-17672
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Zheng Hu
> Attachments: HBASE-17672.patch
>
>
> The following test failure is reproducible after HBASE-17472 went in:
> {code}
>   1) Failure:
> test_Grant_should_set_access_rights_appropriately(Hbase::SecureAdminMethodsTest)
> [./src/test/ruby/hbase/security_admin_test.rb:66:in 
> `test_Grant_should_set_access_rights_appropriately'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:154:in 
> `user_permission'
>  
> file:/Users/tyu/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:136:in 
> `user_permission'
>  ./src/test/ruby/hbase/security_admin_test.rb:65:in 
> `test_Grant_should_set_access_rights_appropriately'
>  org/jruby/RubyProc.java:270:in `call'
>  org/jruby/RubyKernel.java:2105:in `send'
>  org/jruby/RubyArray.java:1620:in `each'
>  org/jruby/RubyArray.java:1620:in `each']:
> {code}
> [~openinx]:
> Can you take a look ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17672) "Grant should set access rights appropriately" test fails

2017-02-20 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu reassigned HBASE-17672:


Assignee: Zheng Hu

> "Grant should set access rights appropriately" test fails
> -
>
> Key: HBASE-17672
> URL: https://issues.apache.org/jira/browse/HBASE-17672
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Zheng Hu
>
> The following test failure is reproducible after HBASE-17472 went in:
> {code}
>   1) Failure:
> test_Grant_should_set_access_rights_appropriately(Hbase::SecureAdminMethodsTest)
> [./src/test/ruby/hbase/security_admin_test.rb:66:in 
> `test_Grant_should_set_access_rights_appropriately'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:154:in 
> `user_permission'
>  
> file:/Users/tyu/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:136:in 
> `user_permission'
>  ./src/test/ruby/hbase/security_admin_test.rb:65:in 
> `test_Grant_should_set_access_rights_appropriately'
>  org/jruby/RubyProc.java:270:in `call'
>  org/jruby/RubyKernel.java:2105:in `send'
>  org/jruby/RubyArray.java:1620:in `each'
>  org/jruby/RubyArray.java:1620:in `each']:
> {code}
> [~openinx]:
> Can you take a look ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17672) "Grant should set access rights appropriately" test fails

2017-02-20 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875452#comment-15875452
 ] 

Zheng Hu commented on HBASE-17672:
--

Sorry about the failed unit test. I've uploaded a patch to fix the failing case; 
all Ruby tests pass on my local machine. 

> "Grant should set access rights appropriately" test fails
> -
>
> Key: HBASE-17672
> URL: https://issues.apache.org/jira/browse/HBASE-17672
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> The following test failure is reproducible after HBASE-17472 went in:
> {code}
>   1) Failure:
> test_Grant_should_set_access_rights_appropriately(Hbase::SecureAdminMethodsTest)
> [./src/test/ruby/hbase/security_admin_test.rb:66:in 
> `test_Grant_should_set_access_rights_appropriately'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:154:in 
> `user_permission'
>  
> file:/Users/tyu/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:136:in 
> `user_permission'
>  ./src/test/ruby/hbase/security_admin_test.rb:65:in 
> `test_Grant_should_set_access_rights_appropriately'
>  org/jruby/RubyProc.java:270:in `call'
>  org/jruby/RubyKernel.java:2105:in `send'
>  org/jruby/RubyArray.java:1620:in `each'
>  org/jruby/RubyArray.java:1620:in `each']:
> {code}
> [~openinx]:
> Can you take a look ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17673) Monitored RPC Handler not show in the WebUI

2017-02-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875442#comment-15875442
 ] 

Hadoop QA commented on HBASE-17673:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 47s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 98m 34s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 140m 36s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853652/HBASE-17673.patch |
| JIRA Issue | HBASE-17673 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 448d293df65a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 22fa1cd3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5778/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5778/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Monitored RPC Handler not show in the WebUI
> ---
>
> Key: HBASE-17673
> URL: https://issues.apache.org/jira/browse/HBASE-17673
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0, 1.2.4, 1.1.8
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
> Attachments: HBASE-17673-branch-1.patch, HBASE-17673.patch
>
>
> This issue 

[jira] [Commented] (HBASE-17673) Monitored RPC Handler not show in the WebUI

2017-02-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875436#comment-15875436
 ] 

Hadoop QA commented on HBASE-17673:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
9s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
0s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 7s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 7s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 92m 9s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 124m 12s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:e01ee2f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853654/HBASE-17673-branch-1.patch
 |
| JIRA Issue | HBASE-17673 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 2cfbd45de4d7 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/hbase.sh |
| git revision | branch-1 / 45357c0 |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_121 

[jira] [Updated] (HBASE-17409) Re-fix XSS request issue in JMXJsonServlet

2017-02-20 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-17409:
-
Fix Version/s: (was: 1.1.8)
   1.1.9

> Re-fix XSS request issue in JMXJsonServlet
> --
>
> Key: HBASE-17409
> URL: https://issues.apache.org/jira/browse/HBASE-17409
> Project: HBase
>  Issue Type: Sub-task
>  Components: security, UI
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-17409.001.patch, HBASE-17409.002.patch
>
>
> I have a patch here which should mitigate the XSS issue in this servlet 
> without the use of owasp.
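
As a generic illustration only (not the contents of these patches), one common
way to mitigate reflected XSS in a JSONP-style servlet without pulling in the
OWASP encoder is to whitelist the callback parameter before echoing it; the
class and method names below are hypothetical:

{code}
import java.util.regex.Pattern;

/** Hypothetical sketch: accept only safe JSONP callback names before echoing them. */
public final class CallbackSanitizer {
  // Letters, digits, underscore and dots only -- anything else is rejected.
  private static final Pattern SAFE_CALLBACK = Pattern.compile("[A-Za-z0-9_.]+");

  private CallbackSanitizer() {}

  /** Returns the callback if it is safe to echo back, otherwise null. */
  public static String sanitize(String callback) {
    if (callback == null || !SAFE_CALLBACK.matcher(callback).matches()) {
      return null; // caller should answer with an error instead of reflecting input
    }
    return callback;
  }
}
{code}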



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17149) Procedure V2 - Fix nonce submission to avoid unnecessary calling coprocessor multiple times

2017-02-20 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-17149:
-
Fix Version/s: (was: 1.1.8)
   1.1.9

> Procedure V2 - Fix nonce submission to avoid unnecessary calling coprocessor 
> multiple times
> ---
>
> Key: HBASE-17149
> URL: https://issues.apache.org/jira/browse/HBASE-17149
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: 17149.branch-1.incomplete.txt, 
> HBASE-17149-addendum.v1-master.patch, HBASE-17149.master.001.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.002.patch, 
> HBASE-17149.master.002.patch, HBASE-17149.master.003.patch, 
> HBASE-17149.v1-branch-1.1.patch, HBASE-17149.v1-branch-1.2.patch, 
> HBASE-17149.v1-branch-1.3.patch, HBASE-17149.v1-branch-1.patch, nonce.patch
>
>
> Instead of having all the logic in submitProcedure(), split it into 
> registerNonce() + submitProcedure().
> This way we can avoid calling the coprocessor twice and have clean 
> submit logic, knowing that there will be only one submission.
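
A hedged sketch of the shape of that split, using the method names from the
description; the nonce bookkeeping and submit body are simplified stand-ins,
not the ProcedureExecutor API:

{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch of splitting nonce registration from procedure submission. */
public class ProcedureSubmitterSketch {
  private final Set<Long> seenNonces = ConcurrentHashMap.newKeySet();

  /**
   * Registers the nonce up front. Returns false if this nonce was already seen,
   * in which case the caller must not run coprocessor hooks or submit again.
   */
  public boolean registerNonce(long nonce) {
    return seenNonces.add(nonce);
  }

  /** Called only after registerNonce() returned true, so it runs exactly once. */
  public long submitProcedure(Runnable procedure) {
    // coprocessor pre-hook would run here, exactly once per nonce
    procedure.run();
    // coprocessor post-hook would run here
    return System.nanoTime(); // stand-in for a procedure id
  }
}
{code}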



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17238) Wrong in-memory hbase:meta location causing SSH failure

2017-02-20 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-17238:
-
Fix Version/s: (was: 1.1.8)
   1.1.9

> Wrong in-memory hbase:meta location causing SSH failure
> ---
>
> Key: HBASE-17238
> URL: https://issues.apache.org/jira/browse/HBASE-17238
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 1.1.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
>Priority: Critical
> Fix For: 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-17238.v1-branch-1.1.patch, 
> HBASE-17238.v1-branch-1.patch, HBASE-17238.v2-branch-1.1.patch
>
>
> In HBase 1.x, if HMaster#assignMeta() assigns a non-DEFAULT_REPLICA_ID 
> hbase:meta region, it wrongly updates the DEFAULT_REPLICA_ID hbase:meta 
> region in-memory.  This causes the in-memory region state to hold the wrong RS 
> information for the default replica of hbase:meta.  One of the problems we saw 
> is that the wrong type of SSH could be chosen, causing further problems.
> {code}
> void assignMeta(MonitoredTask status, Set 
> previouslyFailedMetaRSs, int replicaId)
>   throws InterruptedException, IOException, KeeperException {
> // Work on meta region
> ...
> if (replicaId == HRegionInfo.DEFAULT_REPLICA_ID) {
>   status.setStatus("Assigning hbase:meta region");
> } else {
>   status.setStatus("Assigning hbase:meta region, replicaId " + replicaId);
> }
> // Get current meta state from zk.
> RegionStates regionStates = assignmentManager.getRegionStates();
> RegionState metaState = 
> MetaTableLocator.getMetaRegionState(getZooKeeper(), replicaId);
> HRegionInfo hri = 
> RegionReplicaUtil.getRegionInfoForReplica(HRegionInfo.FIRST_META_REGIONINFO,
> replicaId);
> ServerName currentMetaServer = metaState.getServerName();
> ...
> boolean rit = this.assignmentManager.
>   processRegionInTransitionAndBlockUntilAssigned(hri);
> boolean metaRegionLocation = metaTableLocator.verifyMetaRegionLocation(
>   this.getConnection(), this.getZooKeeper(), timeout, replicaId);
> ...
> } else {
>   // Region already assigned. We didn't assign it. Add to in-memory state.
>   regionStates.updateRegionState(
> HRegionInfo.FIRST_META_REGIONINFO, State.OPEN, currentMetaServer); 
> <<--- Wrong region to update -->>
>   this.assignmentManager.regionOnline(
> HRegionInfo.FIRST_META_REGIONINFO, currentMetaServer); <<--- Wrong 
> region to update -->>
> }
> ...
> {code}
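
For clarity, the correction implied by the inline markers is to update the
replica-specific {{hri}} rather than always FIRST_META_REGIONINFO. A simplified
fragment of that idea, reusing the identifiers from the snippet above (not the
actual patch):

{code}
} else {
  // Region already assigned by someone else. Record the replica-specific hri,
  // not FIRST_META_REGIONINFO, so the default replica's location is left alone.
  regionStates.updateRegionState(hri, State.OPEN, currentMetaServer);
  this.assignmentManager.regionOnline(hri, currentMetaServer);
}
{code}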
> Here is the problem scenario:
> Step 1: the master fails over (a fresh start could have the same issue) and finds 
> the default replica of hbase:meta on rs1.
> {noformat}
> 2016-11-26 00:06:36,590 INFO org.apache.hadoop.hbase.master.ServerManager: 
> AssignmentManager hasn't finished failover cleanup; waiting
> 2016-11-26 00:06:36,591 INFO org.apache.hadoop.hbase.master.HMaster: 
> hbase:meta with replicaId 0 assigned=0, rit=false, 
> location=rs1,60200,1480103147220
> {noformat}
> Step 2: the master finds that replica 1 of hbase:meta is unassigned; therefore 
> HMaster#assignMeta() is called and assigns the replica 1 region to rs2, but at 
> the end it wrongly updates the in-memory state of the default replica to rs2
> {noformat}
> 2016-11-26 00:08:21,741 DEBUG org.apache.hadoop.hbase.master.RegionStates: 
> Onlined 1588230740 on rs2,60200,1480102993815 {ENCODED => 1588230740, NAME => 
> 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
> 2016-11-26 00:08:21,741 INFO org.apache.hadoop.hbase.master.RegionStates: 
> Offlined 1588230740 from rs1,60200,1480103147220
> 2016-11-26 00:08:21,741 INFO org.apache.hadoop.hbase.master.HMaster: 
> hbase:meta with replicaId 1 assigned=0, rit=false, 
> location=rs2,60200,1480102993815
> {noformat}
> Step 3: now rs1 is down, master needs to choose which SSH to call 
> (MetaServerShutdownHandler or normal ServerShutdownHandler) - in this case, 
> MetaServerShutdownHandler should be chosen; however, due to wrong in-memory 
> location, normal ServerShutdownHandler was chosen:
> {noformat}
> 2016-11-26 00:08:33,995 INFO 
> org.apache.hadoop.hbase.zookeeper.RegionServerTracker: RegionServer ephemeral 
> node deleted, processing expiration [rs1,60200,1480103147220]
> 2016-11-26 00:08:33,998 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: based on AM, current 
> region=hbase:meta,,1.1588230740 is on server=rs2,60200,1480102993815 server 
> being checked: rs1,60200,1480103147220
> 2016-11-26 00:08:34,001 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> Added=rs1,60200,1480103147220 to dead servers, submitted shutdown handler to 
> be executed meta=false
> {noformat}
> Step 4: Wrong SSH was chosen. Due to accessing hbase:meta failure, 

[jira] [Updated] (HBASE-17341) Add a timeout during replication endpoint termination

2017-02-20 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-17341:
-
Fix Version/s: (was: 1.1.8)
   1.1.9

> Add a timeout during replication endpoint termination
> -
>
> Key: HBASE-17341
> URL: https://issues.apache.org/jira/browse/HBASE-17341
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 0.98.24, 1.1.9
>
> Attachments: HBASE-17341.branch-1.1.v1.patch, 
> HBASE-17341.branch-1.1.v2.patch, HBASE-17341.master.v1.patch, 
> HBASE-17341.master.v2.patch
>
>
> In ReplicationSource#terminate(), a Future is obtained from 
> ReplicationEndpoint#stop().  Future.get() is then called, but can potentially 
> hang there if something went wrong in the endpoint stop().
> Hanging there has serious implications, because the thread could potentially 
> be the ZK event thread (e.g. watcher calls 
> ReplicationSourceManager#removePeer() -> ReplicationSource#terminate() -> 
> blocked).  This means no other events in the ZK event queue will get 
> processed, which for HBase means other ZK watches such as replication watch 
> notifications, snapshot watch notifications, even RegionServer shutdown will 
> all get blocked.
> The short term fix addressed here is to simply add a timeout for 
> Future.get().  But the severe consequences seen here perhaps suggest a 
> broader refactoring of the ZKWatcher usage in HBase is in order, to protect 
> against situations like this.
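
A minimal sketch of the short-term fix described above: bound the wait on the
endpoint's stop() future so a shared event thread can never hang forever. The
timeout handling and class name are illustrative assumptions:

{code}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/** Hypothetical sketch: never block a shared event thread forever on endpoint stop(). */
public final class BoundedTermination {
  private BoundedTermination() {}

  public static void awaitStop(Future<?> stopFuture, long timeoutMs) {
    try {
      stopFuture.get(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      // Give up waiting instead of wedging the ZK event thread; cancel best-effort.
      stopFuture.cancel(true);
    } catch (ExecutionException e) {
      // stop() itself failed; nothing left to wait for.
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{code}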



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15983) Replication improperly discards data from end-of-wal in some cases.

2017-02-20 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15983:
-
Fix Version/s: (was: 1.1.9)
   1.1.10

> Replication improperly discards data from end-of-wal in some cases.
> ---
>
> Key: HBASE-15983
> URL: https://issues.apache.org/jira/browse/HBASE-15983
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.0, 1.0.0, 1.1.0, 1.2.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.1, 0.98.23, 1.2.5, 1.1.10
>
>
> In some particular deployments, the Replication code believes it has
> reached EOF for a WAL prior to successfully parsing all bytes known to
> exist in a cleanly closed file.
> The underlying issue is that several different underlying problems with a WAL 
> reader are all treated as end-of-file by the code in ReplicationSource that 
> decides if a given WAL is completed or needs to be retried.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17534) SecureBulkLoadClient squashes DoNotRetryIOExceptions from the server

2017-02-20 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-17534:
-
Fix Version/s: (was: 1.1.9)
   1.1.10

> SecureBulkLoadClient squashes DoNotRetryIOExceptions from the server
> 
>
> Key: HBASE-17534
> URL: https://issues.apache.org/jira/browse/HBASE-17534
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 1.4.0, 1.3.1, 1.2.5, 1.1.10
>
> Attachments: HBASE-17534.001.branch-1.patch, 
> HBASE-17534.002.branch-1.patch, HBASE-17534.003.branch-1.patch
>
>
> While writing some tests against 1.x, I noticed that what should have been a 
> DoNotRetryIOException sent to the client from a RegionServer was getting 
> retried until it reached the hbase client retries limit.
> Upon inspection, I found that the SecureBulkLoadClient was wrapping all 
> Exceptions from the RPC as an IOException. I believe this is creating a case 
> where the RPC system doesn't notice that there's a DNRIOException wrapped 
> beneath it, thinking it's a transient error.
> This results in clients having to wait for the retry limit to be reached 
> before they get acknowledgement that something failed.
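
To illustrate the general pattern only (not the actual SecureBulkLoadClient
change): rethrow DoNotRetryIOException unchanged instead of blanket-wrapping
everything, so the retry machinery can still see the "do not retry" signal:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.DoNotRetryIOException;

/** Hypothetical sketch: preserve non-retriable exceptions instead of wrapping them. */
public final class ExceptionPropagation {
  private ExceptionPropagation() {}

  public static IOException toIOException(Throwable serverError) {
    if (serverError instanceof DoNotRetryIOException) {
      return (DoNotRetryIOException) serverError; // keep the "do not retry" signal intact
    }
    if (serverError instanceof IOException) {
      return (IOException) serverError;
    }
    return new IOException(serverError); // only wrap what has no IOException type
  }
}
{code}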



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17513) Thrift Server 1 uses different QOP settings than RPC and Thrift Server 2 and can easily be misconfigured so there is no encryption when the operator expects it.

2017-02-20 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-17513:
-
Fix Version/s: (was: 1.1.9)
   1.1.10

> Thrift Server 1 uses different QOP settings than RPC and Thrift Server 2 and 
> can easily be misconfigured so there is no encryption when the operator 
> expects it.
> 
>
> Key: HBASE-17513
> URL: https://issues.apache.org/jira/browse/HBASE-17513
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, security, Thrift, Usability
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 0.98.15, 1.0.3, 1.1.3
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.1, 1.2.5, 1.1.10
>
>
> As of HBASE-14400 the setting {{hbase.thrift.security.qop}} was unified to 
> behave the same as the general HBase RPC protection. However, this only 
> happened for the Thrift2 server. The Thrift server found in the thrift 
> package (aka Thrift Server 1) still hard codes the old configs of 'auth', 
> 'auth-int', and 'auth-conf'.
> Additionally, these Quality of Protection (qop) settings are used only by the 
> SASL transport. If a user configures the HBase Thrift Server to make use of 
> the HTTP transport (to enable doAs proxying e.g. for Hue) then a QOP setting 
> of 'privacy' or 'auth-conf' won't get them encryption as expected.
> We should
> 1) update {{hbase-thrift/src/main/.../thrift/ThriftServerRunner}} to rely on 
> {{SaslUtil}} to use the same 'authentication', 'integrity', 'privacy' configs 
> in a backward compatible way
> 2) also have ThriftServerRunner warn when both {{hbase.thrift.security.qop}} 
> and {{hbase.regionserver.thrift.http}} are set, since the latter will cause 
> the former to be ignored. (users should be directed to 
> {{hbase.thrift.ssl.enabled}} and related configs to ensure their transport is 
> encrypted when using the HTTP transport.)
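
As a rough sketch of point 1 above, under assumed names (the real SaslUtil API
may differ): map both the new values and the legacy ones onto SASL QOP strings
so existing configurations keep working:

{code}
/** Hypothetical sketch: accept both old and new QOP config values for Thrift Server 1. */
public final class QopConfigSketch {
  private QopConfigSketch() {}

  /** Maps 'authentication'/'integrity'/'privacy' (and the legacy values) to SASL QOP. */
  public static String toSaslQop(String configured) {
    switch (configured.toLowerCase()) {
      case "authentication":
      case "auth":
        return "auth";
      case "integrity":
      case "auth-int":
        return "auth-int";
      case "privacy":
      case "auth-conf":
        return "auth-conf";
      default:
        throw new IllegalArgumentException("Unknown hbase.thrift.security.qop: " + configured);
    }
  }
}
{code}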



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17172) Optimize mob compaction with _del files

2017-02-20 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875425#comment-15875425
 ] 

huaxiang sun commented on HBASE-17172:
--

Thanks [~jingcheng.du]. I will think more about it and come back in a day or 
two.

> Optimize mob compaction with _del files
> ---
>
> Key: HBASE-17172
> URL: https://issues.apache.org/jira/browse/HBASE-17172
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Fix For: 2.0.0
>
> Attachments: HBASE-17172-master-001.patch, 
> HBASE-17172.master.001.patch, HBASE-17172.master.002.patch, 
> HBASE-17172.master.003.patch
>
>
> Today, when there is a _del file in mobdir, every mob file is recompacted 
> during major mob compaction. This causes lots of IO and slows down major mob 
> compaction (it may take months to finish). This needs to be improved. A few 
> ideas are: 
> 1) Do not compact all _del files into one; instead, compact them into groups 
> keyed by startKey. Then use each mob file's firstKey/startKey to check whether 
> a _del file needs to be included for that partition.
> 2) Based on the time range of the _del file, compaction of files newer than 
> that time range does not need to include the _del file.
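
As a hedged sketch of idea 2 only: skip a _del file for mob files that are
strictly newer than it. The types below are simplified stand-ins, not the MOB
compactor's classes:

{code}
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: skip _del files whose time range ends before a mob file begins. */
public final class DelFilePruning {
  /** Simplified stand-in for a file with a [minTs, maxTs] timestamp range. */
  public static final class TimeRangedFile {
    final String name;
    final long minTs;
    final long maxTs;
    public TimeRangedFile(String name, long minTs, long maxTs) {
      this.name = name; this.minTs = minTs; this.maxTs = maxTs;
    }
  }

  /** Returns only the _del files that could affect cells in the given mob file. */
  public static List<TimeRangedFile> relevantDelFiles(
      TimeRangedFile mobFile, List<TimeRangedFile> delFiles) {
    List<TimeRangedFile> result = new ArrayList<>();
    for (TimeRangedFile del : delFiles) {
      // Deletes only mask cells written at or before the delete's timestamps,
      // so a mob file strictly newer than the _del file can be left untouched.
      if (mobFile.minTs <= del.maxTs) {
        result.add(del);
      }
    }
    return result;
  }
}
{code}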



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17672) "Grant should set access rights appropriately" test fails

2017-02-20 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875416#comment-15875416
 ] 

Zheng Hu commented on HBASE-17672:
--

[~tedyu], thanks for the reminder. Let me fix it. 

> "Grant should set access rights appropriately" test fails
> -
>
> Key: HBASE-17672
> URL: https://issues.apache.org/jira/browse/HBASE-17672
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>
> The following test failure is reproducible after HBASE-17472 went in:
> {code}
>   1) Failure:
> test_Grant_should_set_access_rights_appropriately(Hbase::SecureAdminMethodsTest)
> [./src/test/ruby/hbase/security_admin_test.rb:66:in 
> `test_Grant_should_set_access_rights_appropriately'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:154:in 
> `user_permission'
>  
> file:/Users/tyu/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
>  /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:136:in 
> `user_permission'
>  ./src/test/ruby/hbase/security_admin_test.rb:65:in 
> `test_Grant_should_set_access_rights_appropriately'
>  org/jruby/RubyProc.java:270:in `call'
>  org/jruby/RubyKernel.java:2105:in `send'
>  org/jruby/RubyArray.java:1620:in `each'
>  org/jruby/RubyArray.java:1620:in `each']:
> {code}
> [~openinx]:
> Can you take a look ?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17671) HBase Thrift2 OutOfMemory

2017-02-20 Thread Bingbing Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bingbing Wang updated HBASE-17671:
--
Attachment: ClassHistogram.png

heap dump details

> HBase Thrift2 OutOfMemory
> -
>
> Key: HBASE-17671
> URL: https://issues.apache.org/jira/browse/HBASE-17671
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.6
> Environment: Product
>Reporter: Bingbing Wang
>Priority: Critical
> Attachments: ClassHistogram.png, hbase-site.xml, hbase-thrift2.log, 
> log_gc.log.0.zip
>
>
> We have an HBase Thrift2 server deployed on Windows; the physical 
> view basically looks like:
> QueryEngine <==> HBase Thrift2 <==> HBase cluster
> Here QueryEngine is a C++ application, and the HBase cluster has about 50 
> nodes (CDH 5.3.3, namely HBase version 0.98.6).
> Our Thrift2 Java options looks like:
> -server -Xms4096m -Xmx4096m -XX:MaxDirectMemorySize=8192m 
> -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:+ParallelRefProcEnabled 
> -XX:G1HeapRegionSize=4M -XX:InitiatingHeapOccupancyPercent=40 
> -XX:+PrintAdaptiveSizePolicy -XX:+PrintPromotionFailure 
> -Dhbase.log.dir=d:\vhayu\thrift2\log -verbose:gc -XX:+PrintGCDateStamps 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:PrintFLSStatistics=1 
> -Xloggc:log_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 
> -XX:GCLogFileSize=200M -Dhbase.log.file=hbase-thrift2.log  
> -Dhbase.home.dir=D:\vhayu\thrift2\hbase0.98 -Dhbase.id.str=root -Dlog4j.info 
> -Dhbase.root.logger=INFO,DRFA -cp 
> "d:\vhayu\thrift2\hbase0.98\*;d:\vhayu\thrift2\conf" 
> org.apache.hadoop.hbase.thrift2.ThriftServer -b 127.0.0.1 -f framed start
> The symptom is that after running for some time, Thrift2 sometimes reports 
> OOM and a heap dump (.hprof) file is generated. The consequence is always 
> high latency from the HBase cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-17671) HBase Thrift2 OutOfMemory

2017-02-20 Thread Bingbing Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875393#comment-15875393
 ] 

Bingbing Wang edited comment on HBASE-17671 at 2/21/17 5:43 AM:


Yes, I have checked the hprof, and most of the heap is the writeBuffer in 
org.apache.thrift.transport.TFramedTransport. Most writeBuffers have exceeded 
128M. I am very curious why such large writeBuffers are allocated and not 
recycled in time. Please see the attached ClassHistogram.png.

Yes, we do close scanners on time. We can confirm this because we use a C++ 
auto-destructor when leaving scope to ensure all scanners are closed. We have 
fixed such bugs before, so there should be no scanner leak in our application.

Previously we used CMS, but hit many Java GC issues. Later we switched to 
G1GC and made some adjustments, and now the issue occurs less often than before.


was (Author: wbb1975):
Yes, I have checked the hprof, and most of the heap is the writeBuffer in 
org.apache.thrift.transport.TFramedTransport. Most writeBuffers have exceeded 
128M. I am very curious why such large writeBuffers are allocated and not 
recycled in time.

Yes, we do close scanners on time. We can confirm this because we use a C++ 
auto-destructor when leaving scope to ensure all scanners are closed. We have 
fixed such bugs before, so there should be no scanner leak in our application.

Previously we used CMS, but hit many Java GC issues. Later we switched to 
G1GC and made some adjustments, and now the issue occurs less often than before.

> HBase Thrift2 OutOfMemory
> -
>
> Key: HBASE-17671
> URL: https://issues.apache.org/jira/browse/HBASE-17671
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.6
> Environment: Product
>Reporter: Bingbing Wang
>Priority: Critical
> Attachments: hbase-site.xml, hbase-thrift2.log, log_gc.log.0.zip
>
>
> We have an HBase Thrift2 server deployed on Windows; the physical 
> view basically looks like:
> QueryEngine <==> HBase Thrift2 <==> HBase cluster
> Here QueryEngine is a C++ application, and the HBase cluster has about 50 
> nodes (CDH 5.3.3, namely HBase version 0.98.6).
> Our Thrift2 Java options looks like:
> -server -Xms4096m -Xmx4096m -XX:MaxDirectMemorySize=8192m 
> -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:+ParallelRefProcEnabled 
> -XX:G1HeapRegionSize=4M -XX:InitiatingHeapOccupancyPercent=40 
> -XX:+PrintAdaptiveSizePolicy -XX:+PrintPromotionFailure 
> -Dhbase.log.dir=d:\vhayu\thrift2\log -verbose:gc -XX:+PrintGCDateStamps 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:PrintFLSStatistics=1 
> -Xloggc:log_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 
> -XX:GCLogFileSize=200M -Dhbase.log.file=hbase-thrift2.log  
> -Dhbase.home.dir=D:\vhayu\thrift2\hbase0.98 -Dhbase.id.str=root -Dlog4j.info 
> -Dhbase.root.logger=INFO,DRFA -cp 
> "d:\vhayu\thrift2\hbase0.98\*;d:\vhayu\thrift2\conf" 
> org.apache.hadoop.hbase.thrift2.ThriftServer -b 127.0.0.1 -f framed start
> The symptom is that after running for some time, Thrift2 sometimes reports 
> OOM and a heap dump (.hprof) file is generated. The consequence is always 
> high latency from the HBase cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17671) HBase Thrift2 OutOfMemory

2017-02-20 Thread Bingbing Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875393#comment-15875393
 ] 

Bingbing Wang commented on HBASE-17671:
---

Yes, I have checked the hprof, and most of the heap is the writeBuffer in 
org.apache.thrift.transport.TFramedTransport. Most writeBuffers have exceeded 
128M. I am very curious why such large writeBuffers are allocated and not 
recycled in time.

Yes, we do close scanners on time. We can confirm this because we use a C++ 
auto-destructor when leaving scope to ensure all scanners are closed. We have 
fixed such bugs before, so there should be no scanner leak in our application.

Previously we used CMS, but hit many Java GC issues. Later we switched to 
G1GC and made some adjustments, and now the issue occurs less often than before.

> HBase Thrift2 OutOfMemory
> -
>
> Key: HBASE-17671
> URL: https://issues.apache.org/jira/browse/HBASE-17671
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.6
> Environment: Product
>Reporter: Bingbing Wang
>Priority: Critical
> Attachments: hbase-site.xml, hbase-thrift2.log, log_gc.log.0.zip
>
>
> We have an HBase Thrift2 server deployed on Windows; the physical 
> view basically looks like:
> QueryEngine <==> HBase Thrift2 <==> HBase cluster
> Here QueryEngine is a C++ application, and the HBase cluster has about 50 
> nodes (CDH 5.3.3, namely HBase version 0.98.6).
> Our Thrift2 Java options looks like:
> -server -Xms4096m -Xmx4096m -XX:MaxDirectMemorySize=8192m 
> -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:+ParallelRefProcEnabled 
> -XX:G1HeapRegionSize=4M -XX:InitiatingHeapOccupancyPercent=40 
> -XX:+PrintAdaptiveSizePolicy -XX:+PrintPromotionFailure 
> -Dhbase.log.dir=d:\vhayu\thrift2\log -verbose:gc -XX:+PrintGCDateStamps 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:PrintFLSStatistics=1 
> -Xloggc:log_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 
> -XX:GCLogFileSize=200M -Dhbase.log.file=hbase-thrift2.log  
> -Dhbase.home.dir=D:\vhayu\thrift2\hbase0.98 -Dhbase.id.str=root -Dlog4j.info 
> -Dhbase.root.logger=INFO,DRFA -cp 
> "d:\vhayu\thrift2\hbase0.98\*;d:\vhayu\thrift2\conf" 
> org.apache.hadoop.hbase.thrift2.ThriftServer -b 127.0.0.1 -f framed start
> The symptom is that after running for some time, Thrift2 sometimes reports 
> OOM and a heap dump (.hprof) file is generated. The consequence is always 
> high latency from the HBase cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17673) Monitored RPC Handler not show in the WebUI

2017-02-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875387#comment-15875387
 ] 

Ted Yu commented on HBASE-17673:


lgtm

> Monitored RPC Handler not show in the WebUI
> ---
>
> Key: HBASE-17673
> URL: https://issues.apache.org/jira/browse/HBASE-17673
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0, 1.2.4, 1.1.8
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
> Attachments: HBASE-17673-branch-1.patch, HBASE-17673.patch
>
>
> This issue was fixed once in HBASE-14674, but I noticed that almost all 
> RS in our production environment still have this problem. The strange thing is 
> that newly started servers do not seem to be affected. After digging for a 
> while, I realized the {{CircularFifoBuffer}} introduced by HBASE-10312 is the 
> root cause. Each RPC handler's monitoredTask is created only once; if the 
> server is flooded with tasks, the RPC monitoredTask can be purged by the 
> CircularFifoBuffer and is then never visible in the WebUI.
> So my solution is to keep a separate list for RPC monitoredTasks. This is OK 
> since the number of RPC handlers is fixed; it won't increase 
> or decrease during the lifetime of the server.
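
A minimal sketch of the proposed separation, under hypothetical names: RPC
handler tasks live in their own fixed list so the bounded buffer of ordinary
tasks can never evict them:

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.commons.collections.buffer.CircularFifoBuffer;

/** Hypothetical sketch: RPC handler tasks live outside the bounded task buffer. */
public class TaskRegistrySketch {
  // Ordinary tasks: bounded, old entries are silently dropped when full.
  private final CircularFifoBuffer generalTasks = new CircularFifoBuffer(1000);
  // RPC handler tasks: one per handler, fixed for the server's lifetime, never purged.
  private final List<Object> rpcTasks = Collections.synchronizedList(new ArrayList<>());

  public void registerGeneralTask(Object task) {
    synchronized (generalTasks) {
      generalTasks.add(task);
    }
  }

  public void registerRpcTask(Object task) {
    rpcTasks.add(task);
  }

  /** The WebUI renders both collections, so handler tasks always stay visible. */
  public List<Object> tasksForWebUi() {
    List<Object> all = new ArrayList<>(rpcTasks);
    synchronized (generalTasks) {
      all.addAll(generalTasks);
    }
    return all;
  }
}
{code}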



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875362#comment-15875362
 ] 

ramkrishna.s.vasudevan commented on HBASE-17623:


Just saw your other comment. You are running with CMS. Can you try with G1?  

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: CHIA-PING TSAI
>Assignee: CHIA-PING TSAI
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}
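
To illustrate improvement #1 in isolation (a generic sketch, not HFileBlock's
actual writer): a ByteArrayOutputStream subclass that exposes its backing array
so the same buffer can be reused per block, with a copy made only when the block
must be cached (improvement #2):

{code}
import java.io.ByteArrayOutputStream;

/**
 * Hypothetical sketch: expose the internal buffer of a ByteArrayOutputStream so the
 * same backing array can be reused across blocks instead of copying via toByteArray().
 */
public class ReusableByteArrayOutputStream extends ByteArrayOutputStream {
  public ReusableByteArrayOutputStream(int initialCapacity) {
    super(initialCapacity);
  }

  /** Backing array; only the first size() bytes are valid. No copy is made. */
  public byte[] getBuffer() {
    return buf;
  }

  /** Rewinds the write position so the buffer (and its capacity) can be reused. */
  public void resetForReuse() {
    reset(); // keeps the allocated array, only resets the count
  }
}
{code}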



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875361#comment-15875361
 ] 

ramkrishna.s.vasudevan commented on HBASE-17623:


Are you using G1 or CMS? If G1 are you going with default settings?

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: CHIA-PING TSAI
>Assignee: CHIA-PING TSAI
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17673) Monitored RPC Handler not show in the WebUI

2017-02-20 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-17673:
---
Description: 
This issue has been fixed once in HBASE-14674. But I noticed that almost all 
RS in our production environment still have this problem. The strange thing is 
that newly started servers do not seem to be affected. After digging for a 
while, I realized that the {{CircularFifoBuffer}} introduced by HBASE-10312 is 
the root cause. The RPC handler's monitoredTask is only created once; if the 
server is flooded with tasks, the RPC monitoredTask can be purged by the 
CircularFifoBuffer and then never becomes visible in the WebUI.
So my solution is to create a separate list for the RPC monitoredTasks. It is 
OK to do so since the RPC handlers remain at a fixed number; it won't increase 
or decrease during the lifetime of the server.

  was:
This issue has been fixed once in HBASE-14674. But, I noticed that almost all 
RS in our production environment still have this problem. Strange thing is that 
newly started servers seems do not affected. Digging for a while, then I 
realize the {{CircularFifoBuffer}} introduced by HBASE-10312 is the root cause. 
The RPC hander's monitoredTask only create once, if the server is flooded with 
tasks, RPC monitoredTask can be purged by CircularFifoBuffer, and then never 
visible in WebUI.
So my solution is create a list for RPC monitoredTask separately. It is OK to 
do so since the RPC handlers remain in a fixed number. It won't increase or 
decrease during the lifetime of the server.


> Monitored RPC Handler not show in the WebUI
> ---
>
> Key: HBASE-17673
> URL: https://issues.apache.org/jira/browse/HBASE-17673
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0, 1.2.4, 1.1.8
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
> Attachments: HBASE-17673-branch-1.patch, HBASE-17673.patch
>
>
> This issue has been fixed once in HBASE-14674. But I noticed that almost all 
> RS in our production environment still have this problem. The strange thing 
> is that newly started servers do not seem to be affected. After digging for a 
> while, I realized that the {{CircularFifoBuffer}} introduced by HBASE-10312 
> is the root cause. The RPC handler's monitoredTask is only created once; if 
> the server is flooded with tasks, the RPC monitoredTask can be purged by the 
> CircularFifoBuffer and then never becomes visible in the WebUI.
> So my solution is to create a separate list for the RPC monitoredTasks. It is 
> OK to do so since the RPC handlers remain at a fixed number; it won't 
> increase or decrease during the lifetime of the server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17673) Monitored RPC Handler not show in the WebUI

2017-02-20 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-17673:
---
Description: 
This issue has been fixed once in HBASE-14674. But I noticed that almost all 
RS in our production environment still have this problem. The strange thing is 
that newly started servers do not seem to be affected. After digging for a 
while, I realized that the {{CircularFifoBuffer}} introduced by HBASE-10312 is 
the root cause. The RPC handler's monitoredTask is only created once; if the 
server is flooded with tasks, the RPC monitoredTask can be purged by the 
CircularFifoBuffer and then never becomes visible in the WebUI.
So my solution is to create a separate list for the RPC monitoredTasks. It is 
OK to do so since the RPC handlers remain at a fixed number; it won't increase 
or decrease during the lifetime of the server.

  was:
This issue has been fixed once in HBASE-14674. But, I noticed that almost all 
RS in our production environment still have this problem. Strange thing is that 
newly started servers seems do not affected. Digging for a while, then I 
realize the {{CircularFifoBuffer}} introduced by HBASE-10312 is the root cause. 
The RPC hander's monitoredTask only create once, if the server is flooded with 
tasks, RPC monitoredTask can be purged by CircularFifoBuffer, and then never 
visible in WebUI.
So my solution is create a list for RPC monitoredTask sepreately. It is OK to 
do so since the RPC handlers remain in a fixed number. It won't increase or 
decrease during the lifetime of the server.


> Monitored RPC Handler not show in the WebUI
> ---
>
> Key: HBASE-17673
> URL: https://issues.apache.org/jira/browse/HBASE-17673
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0, 1.2.4, 1.1.8
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
> Attachments: HBASE-17673-branch-1.patch, HBASE-17673.patch
>
>
> This issue has been fixed once in HBASE-14674. But I noticed that almost all 
> RS in our production environment still have this problem. The strange thing 
> is that newly started servers do not seem to be affected. After digging for a 
> while, I realized that the {{CircularFifoBuffer}} introduced by HBASE-10312 
> is the root cause. The RPC handler's monitoredTask is only created once; if 
> the server is flooded with tasks, the RPC monitoredTask can be purged by the 
> CircularFifoBuffer and then never becomes visible in the WebUI.
> So my solution is to create a separate list for the RPC monitoredTasks. It is 
> OK to do so since the RPC handlers remain at a fixed number; it won't 
> increase or decrease during the lifetime of the server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17673) Monitored RPC Handler not show in the WebUI

2017-02-20 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-17673:
---
Attachment: HBASE-17673-branch-1.patch

> Monitored RPC Handler not show in the WebUI
> ---
>
> Key: HBASE-17673
> URL: https://issues.apache.org/jira/browse/HBASE-17673
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0, 1.2.4, 1.1.8
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
> Attachments: HBASE-17673-branch-1.patch, HBASE-17673.patch
>
>
> This issue has been fixed once in HBASE-14674. But I noticed that almost all 
> RS in our production environment still have this problem. The strange thing 
> is that newly started servers do not seem to be affected. After digging for a 
> while, I realized that the {{CircularFifoBuffer}} introduced by HBASE-10312 
> is the root cause. The RPC handler's monitoredTask is only created once; if 
> the server is flooded with tasks, the RPC monitoredTask can be purged by the 
> CircularFifoBuffer and then never becomes visible in the WebUI.
> So my solution is to create a separate list for the RPC monitoredTasks. It is 
> OK to do so since the RPC handlers remain at a fixed number; it won't 
> increase or decrease during the lifetime of the server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17673) Monitored RPC Handler not show in the WebUI

2017-02-20 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-17673:
---
Attachment: HBASE-17673.patch

> Monitored RPC Handler not show in the WebUI
> ---
>
> Key: HBASE-17673
> URL: https://issues.apache.org/jira/browse/HBASE-17673
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0, 1.2.4, 1.1.8
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
> Attachments: HBASE-17673.patch
>
>
> This issue has been fixed once in HBASE-14674. But I noticed that almost all 
> RS in our production environment still have this problem. The strange thing 
> is that newly started servers do not seem to be affected. After digging for a 
> while, I realized that the {{CircularFifoBuffer}} introduced by HBASE-10312 
> is the root cause. The RPC handler's monitoredTask is only created once; if 
> the server is flooded with tasks, the RPC monitoredTask can be purged by the 
> CircularFifoBuffer and then never becomes visible in the WebUI.
> So my solution is to create a separate list for the RPC monitoredTasks. It is 
> OK to do so since the RPC handlers remain at a fixed number; it won't 
> increase or decrease during the lifetime of the server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17673) Monitored RPC Handler not show in the WebUI

2017-02-20 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-17673:
---
Status: Patch Available  (was: Open)

> Monitored RPC Handler not show in the WebUI
> ---
>
> Key: HBASE-17673
> URL: https://issues.apache.org/jira/browse/HBASE-17673
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.8, 1.2.4, 2.0.0, 3.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
>
> This issue has been fixed once in HBASE-14674. But I noticed that almost all 
> RS in our production environment still have this problem. The strange thing 
> is that newly started servers do not seem to be affected. After digging for a 
> while, I realized that the {{CircularFifoBuffer}} introduced by HBASE-10312 
> is the root cause. The RPC handler's monitoredTask is only created once; if 
> the server is flooded with tasks, the RPC monitoredTask can be purged by the 
> CircularFifoBuffer and then never becomes visible in the WebUI.
> So my solution is to create a separate list for the RPC monitoredTasks. It is 
> OK to do so since the RPC handlers remain at a fixed number; it won't 
> increase or decrease during the lifetime of the server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17673) Monitored RPC Handler not show in the WebUI

2017-02-20 Thread Allan Yang (JIRA)
Allan Yang created HBASE-17673:
--

 Summary: Monitored RPC Handler not show in the WebUI
 Key: HBASE-17673
 URL: https://issues.apache.org/jira/browse/HBASE-17673
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.8, 1.2.4, 2.0.0, 3.0.0
Reporter: Allan Yang
Assignee: Allan Yang
Priority: Minor


This issue has been fixed once in HBASE-14674. But I noticed that almost all 
RS in our production environment still have this problem. The strange thing is 
that newly started servers do not seem to be affected. After digging for a 
while, I realized that the {{CircularFifoBuffer}} introduced by HBASE-10312 is 
the root cause. The RPC handler's monitoredTask is only created once; if the 
server is flooded with tasks, the RPC monitoredTask can be purged by the 
CircularFifoBuffer and then never becomes visible in the WebUI.
So my solution is to create a separate list for the RPC monitoredTasks. It is 
OK to do so since the RPC handlers remain at a fixed number; it won't increase 
or decrease during the lifetime of the server.
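
For illustration only, a rough sketch of the idea (the class and method names 
below are hypothetical and much simplified, not the real TaskMonitor API): RPC 
handler tasks go into their own plain list that is never purged, while ordinary 
tasks keep flowing through the size-bounded circular buffer.

{code:title=Separate list for RPC handler tasks (sketch)|borderStyle=solid}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

public class SimpleTaskMonitor {
  // RPC handler tasks: fixed in number, registered once, never purged.
  private final List<String> rpcTasks = Collections.synchronizedList(new ArrayList<String>());
  // Ordinary tasks: bounded buffer, oldest entries are dropped when full.
  private final Deque<String> tasks = new ArrayDeque<String>();
  private final int maxTasks;

  public SimpleTaskMonitor(int maxTasks) {
    this.maxTasks = maxTasks;
  }

  public void registerRpcTask(String task) {
    rpcTasks.add(task);
  }

  public synchronized void registerTask(String task) {
    if (tasks.size() >= maxTasks) {
      tasks.removeFirst(); // this purge is what was hiding the RPC handlers before
    }
    tasks.addLast(task);
  }

  /** What the web UI would render: RPC handler tasks are always present. */
  public synchronized List<String> getTasks() {
    List<String> all = new ArrayList<String>(rpcTasks);
    all.addAll(tasks);
    return all;
  }
}
{code}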



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17672) "Grant should set access rights appropriately" test fails

2017-02-20 Thread Ted Yu (JIRA)
Ted Yu created HBASE-17672:
--

 Summary: "Grant should set access rights appropriately" test fails
 Key: HBASE-17672
 URL: https://issues.apache.org/jira/browse/HBASE-17672
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu


The following test failure is reproducible after HBASE-17472 went in:
{code}
  1) Failure:
test_Grant_should_set_access_rights_appropriately(Hbase::SecureAdminMethodsTest)
[./src/test/ruby/hbase/security_admin_test.rb:66:in 
`test_Grant_should_set_access_rights_appropriately'
 /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:154:in 
`user_permission'
 
file:/Users/tyu/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
 `each'
 /Users/tyu/trunk/hbase-shell/src/main/ruby/hbase/security.rb:136:in 
`user_permission'
 ./src/test/ruby/hbase/security_admin_test.rb:65:in 
`test_Grant_should_set_access_rights_appropriately'
 org/jruby/RubyProc.java:270:in `call'
 org/jruby/RubyKernel.java:2105:in `send'
 org/jruby/RubyArray.java:1620:in `each'
 org/jruby/RubyArray.java:1620:in `each']:
{code}
[~openinx]:
Can you take a look?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17210) Set timeout on trying rowlock according to client's RPC timeout

2017-02-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875333#comment-15875333
 ] 

Hadoop QA commented on HBASE-17210:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
51s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 2s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 20s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_121. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 20s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_121. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_80. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 24s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_80. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 49s 
{color} | {color:red} The patch causes 39 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 37s 
{color} | {color:red} The patch causes 39 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 26s 
{color} | {color:red} The patch causes 39 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 15s 
{color} | {color:red} The patch causes 39 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 3s 
{color} | {color:red} The patch causes 39 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 51s 
{color} | {color:red} The patch causes 39 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 39s 
{color} | {color:red} The patch causes 39 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 28s 
{color} | {color:red} The patch causes 39 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 18s 
{color} | {color:red} The patch causes 39 errors with Hadoop v2.7.1. {color} |
| {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 23s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 29s 
{color} | {color:red} hbase-server-jdk1.8.0_121 

[jira] [Commented] (HBASE-17671) HBase Thrift2 OutOfMemory

2017-02-20 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875332#comment-15875332
 ] 

Phil Yang commented on HBASE-17671:
---

Have you checked the hprof file? Which objects are the most numerous in it? 
Do you close the scanner after scanning enough rows?
BTW, for a small heap, CMS may be better than G1.
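
For reference, a minimal client-side sketch of opening, draining and, 
crucially, closing a Thrift2 scanner. The method names come from the thrift2 
IDL; the table name, port and batch size are placeholders, so verify against 
your generated client.

{code:title=Thrift2 scanner close (sketch)|borderStyle=solid}
import java.nio.ByteBuffer;
import java.util.List;
import org.apache.hadoop.hbase.thrift2.generated.THBaseService;
import org.apache.hadoop.hbase.thrift2.generated.TResult;
import org.apache.hadoop.hbase.thrift2.generated.TScan;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class Thrift2ScanExample {
  public static void main(String[] args) throws Exception {
    // Framed transport, matching the "-f framed" server option above.
    TFramedTransport transport = new TFramedTransport(new TSocket("127.0.0.1", 9090));
    transport.open();
    THBaseService.Client client = new THBaseService.Client(new TBinaryProtocol(transport));

    int scannerId = client.openScanner(ByteBuffer.wrap("mytable".getBytes("UTF-8")), new TScan());
    try {
      List<TResult> rows;
      while (!(rows = client.getScannerRows(scannerId, 100)).isEmpty()) {
        // process rows ...
      }
    } finally {
      // If this is skipped, the server-side scannerMap keeps the scanner (and
      // its buffers) alive until it times out, which can pile up on the heap.
      client.closeScanner(scannerId);
      transport.close();
    }
  }
}
{code}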

> HBase Thrift2 OutOfMemory
> -
>
> Key: HBASE-17671
> URL: https://issues.apache.org/jira/browse/HBASE-17671
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.6
> Environment: Product
>Reporter: Bingbing Wang
>Priority: Critical
> Attachments: hbase-site.xml, hbase-thrift2.log, log_gc.log.0.zip
>
>
> We have a HBase Thrift2 server deployed on Windows, basically the physical 
> view looks like:
> QueryEngine <==> HBase Thrift2 <==> HBase cluster
> Here QueryEngine is a C++ application, and the HBase cluster is an 
> approximately 50-node HBase cluster (CDH 5.3.3, namely HBase version 0.98.6).
> Our Thrift2 Java options looks like:
> -server -Xms4096m -Xmx4096m -XX:MaxDirectMemorySize=8192m 
> -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:+ParallelRefProcEnabled 
> -XX:G1HeapRegionSize=4M -XX:InitiatingHeapOccupancyPercent=40 
> -XX:+PrintAdaptiveSizePolicy -XX:+PrintPromotionFailure 
> -Dhbase.log.dir=d:\vhayu\thrift2\log -verbose:gc -XX:+PrintGCDateStamps 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:PrintFLSStatistics=1 
> -Xloggc:log_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 
> -XX:GCLogFileSize=200M -Dhbase.log.file=hbase-thrift2.log  
> -Dhbase.home.dir=D:\vhayu\thrift2\hbase0.98 -Dhbase.id.str=root -Dlog4j.info 
> -Dhbase.root.logger=INFO,DRFA -cp 
> "d:\vhayu\thrift2\hbase0.98\*;d:\vhayu\thrift2\conf" 
> org.apache.hadoop.hbase.thrift2.ThriftServer -b 127.0.0.1 -f framed start
> The symptom is that after running for some time, Thrift2 sometimes reports an 
> OOM and a heap dump (.hprof) file is generated. This always results in high 
> latency from the HBase cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17671) HBase Thrift2 OutOfMemory

2017-02-20 Thread Bingbing Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875327#comment-15875327
 ] 

Bingbing Wang commented on HBASE-17671:
---

The Thrift2 heap dump file is very big (about 4G, and more than 180M even after 
compression), so I couldn't upload it.

I have used MemoryAnalyzer to parse the heap dump, and it showed that much of 
the memory is held by scannerMap and org.apache.thrift.transport.TFramedTransport 
writeBuffer. It is not clear why so many writeBuffers are not freed.

Could you guys take a look and give some explanation?

> HBase Thrift2 OutOfMemory
> -
>
> Key: HBASE-17671
> URL: https://issues.apache.org/jira/browse/HBASE-17671
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.6
> Environment: Product
>Reporter: Bingbing Wang
>Priority: Critical
> Attachments: hbase-site.xml, hbase-thrift2.log, log_gc.log.0.zip
>
>
> We have a HBase Thrift2 server deployed on Windows, basically the physical 
> view looks like:
> QueryEngine <==> HBase Thrift2 <==> HBase cluster
> Here QueryEngine is a C++ application, and the HBase cluster is an 
> approximately 50-node HBase cluster (CDH 5.3.3, namely HBase version 0.98.6).
> Our Thrift2 Java options looks like:
> -server -Xms4096m -Xmx4096m -XX:MaxDirectMemorySize=8192m 
> -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:+ParallelRefProcEnabled 
> -XX:G1HeapRegionSize=4M -XX:InitiatingHeapOccupancyPercent=40 
> -XX:+PrintAdaptiveSizePolicy -XX:+PrintPromotionFailure 
> -Dhbase.log.dir=d:\vhayu\thrift2\log -verbose:gc -XX:+PrintGCDateStamps 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:PrintFLSStatistics=1 
> -Xloggc:log_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 
> -XX:GCLogFileSize=200M -Dhbase.log.file=hbase-thrift2.log  
> -Dhbase.home.dir=D:\vhayu\thrift2\hbase0.98 -Dhbase.id.str=root -Dlog4j.info 
> -Dhbase.root.logger=INFO,DRFA -cp 
> "d:\vhayu\thrift2\hbase0.98\*;d:\vhayu\thrift2\conf" 
> org.apache.hadoop.hbase.thrift2.ThriftServer -b 127.0.0.1 -f framed start
> The symptom is that after running for some time, Thrift2 sometimes reports an 
> OOM and a heap dump (.hprof) file is generated. This always results in high 
> latency from the HBase cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17671) HBase Thrift2 OutOfMemory

2017-02-20 Thread Bingbing Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bingbing Wang updated HBASE-17671:
--
Attachment: hbase-thrift2.log

HBase Thrift2 log file

> HBase Thrift2 OutOfMemory
> -
>
> Key: HBASE-17671
> URL: https://issues.apache.org/jira/browse/HBASE-17671
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.6
> Environment: Product
>Reporter: Bingbing Wang
>Priority: Critical
> Attachments: hbase-site.xml, hbase-thrift2.log, log_gc.log.0.zip
>
>
> We have a HBase Thrift2 server deployed on Windows, basically the physical 
> view looks like:
> QueryEngine <==> HBase Thrift2 <==> HBase cluster
> Here QueryEngine is a C++ application, and the HBase cluster is an 
> approximately 50-node HBase cluster (CDH 5.3.3, namely HBase version 0.98.6).
> Our Thrift2 Java options looks like:
> -server -Xms4096m -Xmx4096m -XX:MaxDirectMemorySize=8192m 
> -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:+ParallelRefProcEnabled 
> -XX:G1HeapRegionSize=4M -XX:InitiatingHeapOccupancyPercent=40 
> -XX:+PrintAdaptiveSizePolicy -XX:+PrintPromotionFailure 
> -Dhbase.log.dir=d:\vhayu\thrift2\log -verbose:gc -XX:+PrintGCDateStamps 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:PrintFLSStatistics=1 
> -Xloggc:log_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 
> -XX:GCLogFileSize=200M -Dhbase.log.file=hbase-thrift2.log  
> -Dhbase.home.dir=D:\vhayu\thrift2\hbase0.98 -Dhbase.id.str=root -Dlog4j.info 
> -Dhbase.root.logger=INFO,DRFA -cp 
> "d:\vhayu\thrift2\hbase0.98\*;d:\vhayu\thrift2\conf" 
> org.apache.hadoop.hbase.thrift2.ThriftServer -b 127.0.0.1 -f framed start
> The symptom is that after running for some time, Thrift2 sometimes reports an 
> OOM and a heap dump (.hprof) file is generated. This always results in high 
> latency from the HBase cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17671) HBase Thrift2 OutOfMemory

2017-02-20 Thread Bingbing Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bingbing Wang updated HBASE-17671:
--
Attachment: log_gc.log.0.zip

Thrift2 GC log

> HBase Thrift2 OutOfMemory
> -
>
> Key: HBASE-17671
> URL: https://issues.apache.org/jira/browse/HBASE-17671
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.6
> Environment: Product
>Reporter: Bingbing Wang
>Priority: Critical
> Attachments: hbase-site.xml, log_gc.log.0.zip
>
>
> We have a HBase Thrift2 server deployed on Windows, basically the physical 
> view looks like:
> QueryEngine <==> HBase Thrift2 <==> HBase cluster
> Here QueryEngine is a C++ application, and the HBase cluster is an 
> approximately 50-node HBase cluster (CDH 5.3.3, namely HBase version 0.98.6).
> Our Thrift2 Java options looks like:
> -server -Xms4096m -Xmx4096m -XX:MaxDirectMemorySize=8192m 
> -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:+ParallelRefProcEnabled 
> -XX:G1HeapRegionSize=4M -XX:InitiatingHeapOccupancyPercent=40 
> -XX:+PrintAdaptiveSizePolicy -XX:+PrintPromotionFailure 
> -Dhbase.log.dir=d:\vhayu\thrift2\log -verbose:gc -XX:+PrintGCDateStamps 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:PrintFLSStatistics=1 
> -Xloggc:log_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 
> -XX:GCLogFileSize=200M -Dhbase.log.file=hbase-thrift2.log  
> -Dhbase.home.dir=D:\vhayu\thrift2\hbase0.98 -Dhbase.id.str=root -Dlog4j.info 
> -Dhbase.root.logger=INFO,DRFA -cp 
> "d:\vhayu\thrift2\hbase0.98\*;d:\vhayu\thrift2\conf" 
> org.apache.hadoop.hbase.thrift2.ThriftServer -b 127.0.0.1 -f framed start
> The symptom is that after running for some time, Thrift2 sometimes reports an 
> OOM and a heap dump (.hprof) file is generated. This always results in high 
> latency from the HBase cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17671) HBase Thrift2 OutOfMemory

2017-02-20 Thread Bingbing Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bingbing Wang updated HBASE-17671:
--
Attachment: hbase-site.xml

Thrift2 Configuration file

> HBase Thrift2 OutOfMemory
> -
>
> Key: HBASE-17671
> URL: https://issues.apache.org/jira/browse/HBASE-17671
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 0.98.6
> Environment: Product
>Reporter: Bingbing Wang
>Priority: Critical
> Attachments: hbase-site.xml
>
>
> We have a HBase Thrift2 server deployed on Windows, basically the physical 
> view looks like:
> QueryEngine <==> HBase Thrift2 <==> HBase cluster
> Here QueryEngine is a C++ application, and the HBase cluster is an 
> approximately 50-node HBase cluster (CDH 5.3.3, namely HBase version 0.98.6).
> Our Thrift2 Java options looks like:
> -server -Xms4096m -Xmx4096m -XX:MaxDirectMemorySize=8192m 
> -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:+ParallelRefProcEnabled 
> -XX:G1HeapRegionSize=4M -XX:InitiatingHeapOccupancyPercent=40 
> -XX:+PrintAdaptiveSizePolicy -XX:+PrintPromotionFailure 
> -Dhbase.log.dir=d:\vhayu\thrift2\log -verbose:gc -XX:+PrintGCDateStamps 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:PrintFLSStatistics=1 
> -Xloggc:log_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 
> -XX:GCLogFileSize=200M -Dhbase.log.file=hbase-thrift2.log  
> -Dhbase.home.dir=D:\vhayu\thrift2\hbase0.98 -Dhbase.id.str=root -Dlog4j.info 
> -Dhbase.root.logger=INFO,DRFA -cp 
> "d:\vhayu\thrift2\hbase0.98\*;d:\vhayu\thrift2\conf" 
> org.apache.hadoop.hbase.thrift2.ThriftServer -b 127.0.0.1 -f framed start
> The symptom is that after running for some time, Thrift2 sometimes reports an 
> OOM and a heap dump (.hprof) file is generated. This always results in high 
> latency from the HBase cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17210) Set timeout on trying rowlock according to client's RPC timeout

2017-02-20 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17210:
--
Attachment: HBASE-17210.branch-1.v01.patch

> Set timeout on trying rowlock according to client's RPC timeout
> ---
>
> Key: HBASE-17210
> URL: https://issues.apache.org/jira/browse/HBASE-17210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17120.v1.patch, HBASE-17210.branch-1.v01.patch, 
> HBASE-17210.v02.patch, HBASE-17210.v03.patch, HBASE-17210.v04.patch, 
> HBASE-17210.v04.patch
>
>
> Now, when we want to get a row lock, the timeout is fixed and defaults to 
> 30s. But requests from clients have different RPC timeout settings. We can 
> use the client's deadline to set the timeout on tryLock.
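
For context, a minimal sketch of the idea (a standalone example with 
illustrative names; the actual patch touches HBase's row-lock acquisition, not 
this class): bound the lock wait by whatever time the client's deadline still 
allows, capped by the old 30s default.

{code:title=Deadline-bounded row lock (sketch)|borderStyle=solid}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class RowLockTimeoutExample {
  private final ReentrantLock rowLock = new ReentrantLock();

  /**
   * @param deadlineMillis absolute deadline derived from the client's RPC timeout
   * @return true if the lock was acquired before the deadline
   */
  boolean tryRowLock(long deadlineMillis) throws InterruptedException {
    long defaultWaitMillis = TimeUnit.SECONDS.toMillis(30); // the old fixed default
    long remaining = deadlineMillis - System.currentTimeMillis();
    long waitMillis = Math.max(0, Math.min(defaultWaitMillis, remaining));
    return rowLock.tryLock(waitMillis, TimeUnit.MILLISECONDS);
  }
}
{code}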



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17671) HBase Thrift2 OutOfMemory

2017-02-20 Thread Bingbing Wang (JIRA)
Bingbing Wang created HBASE-17671:
-

 Summary: HBase Thrift2 OutOfMemory
 Key: HBASE-17671
 URL: https://issues.apache.org/jira/browse/HBASE-17671
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Affects Versions: 0.98.6
 Environment: Product
Reporter: Bingbing Wang
Priority: Critical


We have a HBase Thrift2 server deployed on Windows, basically the physical view 
looks like:
QueryEngine <==> HBase Thrift2 <==> HBase cluster
Here QueryEngine is a C++ application, and the HBase cluster is an approximately 
50-node HBase cluster (CDH 5.3.3, namely HBase version 0.98.6).

Our Thrift2 Java options looks like:
-server -Xms4096m -Xmx4096m -XX:MaxDirectMemorySize=8192m 
-XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:+ParallelRefProcEnabled 
-XX:G1HeapRegionSize=4M -XX:InitiatingHeapOccupancyPercent=40 
-XX:+PrintAdaptiveSizePolicy -XX:+PrintPromotionFailure 
-Dhbase.log.dir=d:\vhayu\thrift2\log -verbose:gc -XX:+PrintGCDateStamps 
-XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:PrintFLSStatistics=1 
-Xloggc:log_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 
-XX:GCLogFileSize=200M -Dhbase.log.file=hbase-thrift2.log  
-Dhbase.home.dir=D:\vhayu\thrift2\hbase0.98 -Dhbase.id.str=root -Dlog4j.info 
-Dhbase.root.logger=INFO,DRFA -cp 
"d:\vhayu\thrift2\hbase0.98\*;d:\vhayu\thrift2\conf" 
org.apache.hadoop.hbase.thrift2.ThriftServer -b 127.0.0.1 -f framed start

The symptom is that after running for some time, Thrift2 sometimes reports an 
OOM and a heap dump (.hprof) file is generated. This always results in high 
latency from the HBase cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17472) Correct the semantic of permission grant

2017-02-20 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875290#comment-15875290
 ] 

Zheng Hu commented on HBASE-17472:
--

Many thanks for [~Apache9]'s help. I have updated this issue and added a release note.

> Correct the semantic of  permission grant
> -
>
> Key: HBASE-17472
> URL: https://issues.apache.org/jira/browse/HBASE-17472
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17472.branch-1.3.v6.patch, 
> HBASE-17472.branch-1.v6.patch, HBASE-17472.branch-1.v7.patch, 
> HBASE-17472.master.v6.patch, HBASE-17472.master.v6.patch, 
> HBASE-17472.master.v7.patch, HBASE-17472.v1.patch, HBASE-17472.v2.patch, 
> HBASE-17472.v3.patch, HBASE-17472.v4.patch, HBASE-17472.v5.patch
>
>
> Currently, HBase grant operation has following semantic:
> {code}
> hbase(main):019:0> grant 'hbase_tst', 'RW', 'ycsb'
> 0 row(s) in 0.0960 seconds
> hbase(main):020:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
> 
>  hbase_tst   default,ycsb,,: 
> [Permission:actions=READ,WRITE]   
>   
>   
> 1 row(s) in 0.0550 seconds
> hbase(main):021:0> grant 'hbase_tst', 'CA', 'ycsb'
> 0 row(s) in 0.0820 seconds
> hbase(main):022:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
>  hbase_tst   default,ycsb,,: 
> [Permission: actions=CREATE,ADMIN]
>   
>   
> 1 row(s) in 0.0490 seconds
> {code}  
> A later grant will replace previously granted permissions, which has confused 
> many HBase administrators.
> It seems more reasonable for HBase to merge multiple granted permissions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17472) Correct the semantic of permission grant

2017-02-20 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17472:
-
Release Note: Before this patch, a later grant would override previously 
granted permissions, and the previous permissions were LOST. This issue 
redefines the grant semantics: for the master branch, a later grant is merged 
with the previously granted permissions. For branch-1.4, grant keeps the 
override behavior for compatibility, and a grant overload with a 
mergeExistingPermission flag is provided.
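
To make the merge semantics concrete, a small standalone illustration (plain 
Java enums only, not the actual TablePermission/AccessControlLists code): under 
merge semantics the second grant is unioned with the first instead of replacing 
it.

{code:title=Merge vs. override (illustrative only)|borderStyle=solid}
import java.util.EnumSet;

public class GrantMergeExample {
  enum Action { READ, WRITE, CREATE, ADMIN, EXEC }

  /** Merge semantics: the new grant is unioned with what the user already has. */
  static EnumSet<Action> merge(EnumSet<Action> existing, EnumSet<Action> granted) {
    EnumSet<Action> result = EnumSet.copyOf(existing);
    result.addAll(granted);
    return result;
  }

  public static void main(String[] args) {
    EnumSet<Action> perms = EnumSet.of(Action.READ, Action.WRITE);    // grant 'RW'
    perms = merge(perms, EnumSet.of(Action.CREATE, Action.ADMIN));    // grant 'CA'
    System.out.println(perms); // [READ, WRITE, CREATE, ADMIN], not just [CREATE, ADMIN]
  }
}
{code}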

> Correct the semantic of  permission grant
> -
>
> Key: HBASE-17472
> URL: https://issues.apache.org/jira/browse/HBASE-17472
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17472.branch-1.3.v6.patch, 
> HBASE-17472.branch-1.v6.patch, HBASE-17472.branch-1.v7.patch, 
> HBASE-17472.master.v6.patch, HBASE-17472.master.v6.patch, 
> HBASE-17472.master.v7.patch, HBASE-17472.v1.patch, HBASE-17472.v2.patch, 
> HBASE-17472.v3.patch, HBASE-17472.v4.patch, HBASE-17472.v5.patch
>
>
> Currently, HBase grant operation has following semantic:
> {code}
> hbase(main):019:0> grant 'hbase_tst', 'RW', 'ycsb'
> 0 row(s) in 0.0960 seconds
> hbase(main):020:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
> 
>  hbase_tst   default,ycsb,,: 
> [Permission:actions=READ,WRITE]   
>   
>   
> 1 row(s) in 0.0550 seconds
> hbase(main):021:0> grant 'hbase_tst', 'CA', 'ycsb'
> 0 row(s) in 0.0820 seconds
> hbase(main):022:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
>  hbase_tst   default,ycsb,,: 
> [Permission: actions=CREATE,ADMIN]
>   
>   
> 1 row(s) in 0.0490 seconds
> {code}  
> A later grant will replace previously granted permissions, which has confused 
> many HBase administrators.
> It seems more reasonable for HBase to merge multiple granted permissions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-20 Thread CHIA-PING TSAI (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875260#comment-15875260
 ] 

CHIA-PING TSAI commented on HBASE-17623:


hi [~anoop.hbase]
||statistic||before||after||
|elapsed(s)|11756|11081|
|young GC count|15947|6337|
|young total GC time(s)|868|928|
|old GC count|160|135|
|old total GC time(s)|920|1197|
|total pause time(s)|893|952|

I ran the test for 3 hours with 1TB of data. This patch yields a lower GC 
count but a higher total pause time.
If we want all objects to die in the young generation, should baosInMemory be 
re-created for building the next block?
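
As an aside, a minimal sketch of the reuse idea under discussion (the class and 
method names below are illustrative, not the actual patch): keep writing into 
one growable buffer across blocks, expose it without copying, and allocate a 
fresh array only when a block really has to be handed to the block cache.

{code:title=Buffer-reuse sketch (hypothetical names)|borderStyle=solid}
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class ReusableByteArrayOutputStream extends ByteArrayOutputStream {

  /** Returns the internal buffer without copying; valid bytes are [0, size()). */
  public byte[] getBuffer() {
    return buf;
  }

  /** Copy only when the caller must retain the bytes, e.g. for cache-on-write. */
  public byte[] copyForCaching() {
    return Arrays.copyOf(buf, size());
  }

  /** Reset for the next block; the backing array is kept and reused. */
  public void startNewBlock() {
    reset();
  }
}
{code}

Whether the backing array should instead be re-created per block (so it dies 
young) is exactly the GC trade-off measured above.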

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: CHIA-PING TSAI
>Assignee: CHIA-PING TSAI
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17478) Avoid sending FSUtilization reports to master when quota support is not enabled

2017-02-20 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17478:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks again for the review, Ted!

> Avoid sending FSUtilization reports to master when quota support is not 
> enabled
> ---
>
> Key: HBASE-17478
> URL: https://issues.apache.org/jira/browse/HBASE-17478
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: HBASE-16961
>
> Attachments: HBASE-17478.001.HBASE-16961.patch, 
> HBASE-17478.002.HBASE-16961.patch, HBASE-17478.003.HBASE-16961.patch
>
>
> Trivial little change to make sure that the RS's do not send the filesystem 
> utilization reports to the master when hbase.quota.enabled=false and, 
> similarly, that the master gracefully handles these reports when the feature 
> is not enabled.
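
For illustration, a tiny sketch of the guard being described (a hypothetical 
class; the real change lives in the region server chore and the master-side 
handling): check the hbase.quota.enabled switch before building or sending 
anything.

{code:title=Quota-enabled guard (sketch)|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;

public class FsUtilizationReportGuard {
  static final String QUOTA_CONF_KEY = "hbase.quota.enabled";

  static boolean quotasEnabled(Configuration conf) {
    return conf.getBoolean(QUOTA_CONF_KEY, false);
  }

  void chore(Configuration conf) {
    if (!quotasEnabled(conf)) {
      // Feature is off: skip the report entirely instead of calling the master.
      return;
    }
    // ... compute per-region filesystem utilization and report it to the master ...
  }
}
{code}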



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17472) Correct the semantic of permission grant

2017-02-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874913#comment-15874913
 ] 

Hudson commented on HBASE-17472:


SUCCESS: Integrated in Jenkins build HBase-1.4 #636 (See 
[https://builds.apache.org/job/HBase-1.4/636/])
HBASE-17472: Correct the semantic of permission grant (zhangduo: rev 
45357c078d566ebd2f32594d73f7ad35feebe6dc)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlClient.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* (edit) hbase-protocol/src/main/protobuf/AccessControl.proto
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/TablePermission.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* (edit) 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AccessControlProtos.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestTablePermissions.java


> Correct the semantic of  permission grant
> -
>
> Key: HBASE-17472
> URL: https://issues.apache.org/jira/browse/HBASE-17472
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17472.branch-1.3.v6.patch, 
> HBASE-17472.branch-1.v6.patch, HBASE-17472.branch-1.v7.patch, 
> HBASE-17472.master.v6.patch, HBASE-17472.master.v6.patch, 
> HBASE-17472.master.v7.patch, HBASE-17472.v1.patch, HBASE-17472.v2.patch, 
> HBASE-17472.v3.patch, HBASE-17472.v4.patch, HBASE-17472.v5.patch
>
>
> Currently, HBase grant operation has following semantic:
> {code}
> hbase(main):019:0> grant 'hbase_tst', 'RW', 'ycsb'
> 0 row(s) in 0.0960 seconds
> hbase(main):020:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
> 
>  hbase_tst   default,ycsb,,: 
> [Permission:actions=READ,WRITE]   
>   
>   
> 1 row(s) in 0.0550 seconds
> hbase(main):021:0> grant 'hbase_tst', 'CA', 'ycsb'
> 0 row(s) in 0.0820 seconds
> hbase(main):022:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
>  hbase_tst   default,ycsb,,: 
> [Permission: actions=CREATE,ADMIN]
>   
>   
> 1 row(s) in 0.0490 seconds
> {code}  
> A later grant will replace previously granted permissions, which has confused 
> many HBase administrators.
> It seems more reasonable for HBase to merge multiple granted permissions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17472) Correct the semantic of permission grant

2017-02-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874729#comment-15874729
 ] 

Hudson commented on HBASE-17472:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2539 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2539/])
HBASE-17472: Correct the semantic of permission grant (zhangduo: rev 
22fa1cd3df3ed16ddbc0336ac2e52964c1e22665)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlUtil.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlClient.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestTablePermissions.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/TablePermission.java
* (edit) 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AccessControlProtos.java
* (edit) hbase-protocol/src/main/protobuf/AccessControl.proto
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java


> Correct the semantic of  permission grant
> -
>
> Key: HBASE-17472
> URL: https://issues.apache.org/jira/browse/HBASE-17472
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17472.branch-1.3.v6.patch, 
> HBASE-17472.branch-1.v6.patch, HBASE-17472.branch-1.v7.patch, 
> HBASE-17472.master.v6.patch, HBASE-17472.master.v6.patch, 
> HBASE-17472.master.v7.patch, HBASE-17472.v1.patch, HBASE-17472.v2.patch, 
> HBASE-17472.v3.patch, HBASE-17472.v4.patch, HBASE-17472.v5.patch
>
>
> Currently, HBase grant operation has following semantic:
> {code}
> hbase(main):019:0> grant 'hbase_tst', 'RW', 'ycsb'
> 0 row(s) in 0.0960 seconds
> hbase(main):020:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
> 
>  hbase_tst   default,ycsb,,: 
> [Permission:actions=READ,WRITE]   
>   
>   
> 1 row(s) in 0.0550 seconds
> hbase(main):021:0> grant 'hbase_tst', 'CA', 'ycsb'
> 0 row(s) in 0.0820 seconds
> hbase(main):022:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
>  hbase_tst   default,ycsb,,: 
> [Permission: actions=CREATE,ADMIN]
>   
>   
> 1 row(s) in 0.0490 seconds
> {code}  
> A later grant will replace previously granted permissions, which has confused 
> many HBase administrators.
> It seems more reasonable for HBase to merge multiple granted permissions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17210) Set timeout on trying rowlock according to client's RPC timeout

2017-02-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874659#comment-15874659
 ] 

Ted Yu commented on HBASE-17210:


+1

> Set timeout on trying rowlock according to client's RPC timeout
> ---
>
> Key: HBASE-17210
> URL: https://issues.apache.org/jira/browse/HBASE-17210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17120.v1.patch, HBASE-17210.v02.patch, 
> HBASE-17210.v03.patch, HBASE-17210.v04.patch, HBASE-17210.v04.patch
>
>
> Now, when we want to get a row lock, the timeout is fixed and defaults to 
> 30s. But requests from clients have different RPC timeout settings. We can 
> use the client's deadline to set the timeout on tryLock.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874496#comment-15874496
 ] 

Anoop Sam John commented on HBASE-17662:


bq.  if (inWalReplay.get()) 
This check and the set and reset of the inWalReplay state will be done from one 
thread only, right? I may be wrong. If so, do we really need an AtomicBoolean, 
or is a plain boolean OK? The above if sits on every normal write path, and now 
we add an AtomicBoolean read, which is not that cheap. If the state is accessed 
from multiple threads and an atomic/volatile is unavoidable, I suggest we add 
this boolean check after the size check, i.e. after if 
(this.active.keySize() > inmemoryFlushSize). 
Otherwise every write op will end up doing this AtomicBoolean read.
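
In other words, something along these lines (a standalone sketch with made-up 
field values, not the actual CompactingMemStore code): do the cheap size 
comparison first, so the atomic read only happens on the rare path that would 
actually trigger an in-memory flush.

{code:title=Check ordering sketch|borderStyle=solid}
import java.util.concurrent.atomic.AtomicBoolean;

public class InMemoryFlushCheck {
  private final AtomicBoolean inWalReplay = new AtomicBoolean(false);
  private final long inmemoryFlushSize = 64L * 1024 * 1024; // placeholder threshold

  /** Size check first; the AtomicBoolean read is off the common write path. */
  boolean shouldFlushInMemory(long activeKeySize) {
    if (activeKeySize > inmemoryFlushSize) {
      return !inWalReplay.get();
    }
    return false;
  }
}
{code}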


> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch
>
>
> When replaying the edits from the WAL, the region's updateLock is not taken, 
> because single-threaded operation is assumed. However, the thread-safety of 
> the in-memory flush of CompactingMemStore relies on taking the region's 
> updateLock. 
> The in-memory flush can be skipped during replay (everything is flushed to 
> disk just after the replay anyway). Therefore it is acceptable to simply skip 
> the in-memory flush action while the updates come in as part of the WAL 
> replay.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17472) Correct the semantic of permission grant

2017-02-20 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17472:
--
  Resolution: Fixed
Hadoop Flags: Incompatible change,Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-1.

Thanks [~openinx] for your contribution. And please fill in the release note, 
as this is an incompatible change.

> Correct the semantic of  permission grant
> -
>
> Key: HBASE-17472
> URL: https://issues.apache.org/jira/browse/HBASE-17472
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17472.branch-1.3.v6.patch, 
> HBASE-17472.branch-1.v6.patch, HBASE-17472.branch-1.v7.patch, 
> HBASE-17472.master.v6.patch, HBASE-17472.master.v6.patch, 
> HBASE-17472.master.v7.patch, HBASE-17472.v1.patch, HBASE-17472.v2.patch, 
> HBASE-17472.v3.patch, HBASE-17472.v4.patch, HBASE-17472.v5.patch
>
>
> Currently, HBase grant operation has following semantic:
> {code}
> hbase(main):019:0> grant 'hbase_tst', 'RW', 'ycsb'
> 0 row(s) in 0.0960 seconds
> hbase(main):020:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
> 
>  hbase_tst   default,ycsb,,: 
> [Permission:actions=READ,WRITE]   
>   
>   
> 1 row(s) in 0.0550 seconds
> hbase(main):021:0> grant 'hbase_tst', 'CA', 'ycsb'
> 0 row(s) in 0.0820 seconds
> hbase(main):022:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
>  hbase_tst   default,ycsb,,: 
> [Permission: actions=CREATE,ADMIN]
>   
>   
> 1 row(s) in 0.0490 seconds
> {code}  
> A later grant will replace previously granted permissions, which has confused 
> many HBase administrators.
> It seems more reasonable for HBase to merge multiple granted permissions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17210) Set timeout on trying rowlock according to client's RPC timeout

2017-02-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874440#comment-15874440
 ] 

Hadoop QA commented on HBASE-17210:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 23s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 107m 49s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 155m 54s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853537/HBASE-17210.v04.patch 
|
| JIRA Issue | HBASE-17210 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux c23cef02eef5 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / d08bafa |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5775/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5775/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Set timeout on trying rowlock according to client's RPC timeout
> ---
>
> Key: HBASE-17210
> URL: https://issues.apache.org/jira/browse/HBASE-17210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17120.v1.patch, HBASE-17210.v02.patch, 
> HBASE-17210.v03.patch, HBASE-17210.v04.patch, 

[jira] [Created] (HBASE-17670) LruBlockCache with VictimHandler should evict from bucket cache also when eviction by filename happens

2017-02-20 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-17670:
--

 Summary: LruBlockCache with VictimHandler should evict from bucket 
cache also when eviction by filename happens
 Key: HBASE-17670
 URL: https://issues.apache.org/jira/browse/HBASE-17670
 Project: HBase
  Issue Type: Bug
  Components: BlockCache, BucketCache
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0


When a VictimHandler is used, we use the bucket cache to cache the blocks that 
are evicted. So in the case where we close the hfile and call 
evictBlocksByHfileName, I think it still makes sense to call evict on the 
victimHandler as well. Otherwise the victimHandler is just going to occupy the 
space until the eviction thread in that victim handler clears it. 
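
To make the described behaviour concrete, here is a minimal, self-contained 
sketch (FileEvictableCache, PrimaryCacheSketch, and blocksPerFile are 
illustrative names, not the actual LruBlockCache/BucketCache code): evicting a 
file's blocks from the primary cache also forwards the eviction to the victim 
cache.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only, not the real LruBlockCache: a primary cache that
// forwards file-level eviction to its victim (bucket) cache as well.
interface FileEvictableCache {
  int evictBlocksByHfileName(String hfileName);
}

class PrimaryCacheSketch implements FileEvictableCache {
  private final Map<String, Integer> blocksPerFile = new ConcurrentHashMap<>();
  private final FileEvictableCache victimHandler; // e.g. the bucket cache; may be null

  PrimaryCacheSketch(FileEvictableCache victimHandler) {
    this.victimHandler = victimHandler;
  }

  @Override
  public int evictBlocksByHfileName(String hfileName) {
    // evict this cache's own blocks for the closed hfile
    Integer own = blocksPerFile.remove(hfileName);
    int evicted = (own == null) ? 0 : own;
    // also evict from the victim cache, instead of leaving those blocks around
    // until the victim cache's own eviction thread reclaims the space
    if (victimHandler != null) {
      evicted += victimHandler.evictBlocksByHfileName(hfileName);
    }
    return evicted;
  }
}
{code}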



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HBASE-17670) LruBlockCache with VictimHandler should evict from bucket cache also when eviction by filename happens

2017-02-20 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan resolved HBASE-17670.

Resolution: Not A Problem

Sorry, I just saw that the code already does it.

> LruBlockCache with VictimHandler should evict from bucket cache also when 
> eviction by filename happens
> --
>
> Key: HBASE-17670
> URL: https://issues.apache.org/jira/browse/HBASE-17670
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache, BucketCache
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.0.0
>
>
> When a VictimHandler is used, we use the bucket cache to cache the blocks that 
> are evicted. So in the case where we close the hfile and call 
> evictBlocksByHfileName, I think it still makes sense to call evict on the 
> victimHandler as well. Otherwise the victimHandler is just going to occupy the 
> space until the eviction thread in that victim handler clears it. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17663) Remove the unused imports throughout the code base

2017-02-20 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel updated HBASE-17663:
--
Status: Patch Available  (was: In Progress)

> Remove the unused imports throughout the code base
> --
>
> Key: HBASE-17663
> URL: https://issues.apache.org/jira/browse/HBASE-17663
> Project: HBase
>  Issue Type: Task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Attachments: HBASE-17663.master.001.patch
>
>
> Currently there are a lot of unused imports throughout the code base. They 
> should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17472) Correct the semantic of permission grant

2017-02-20 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874252#comment-15874252
 ] 

Duo Zhang commented on HBASE-17472:
---

Oh, sorry. Will commit it this evening.

Thanks.

> Correct the semantic of  permission grant
> -
>
> Key: HBASE-17472
> URL: https://issues.apache.org/jira/browse/HBASE-17472
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17472.branch-1.3.v6.patch, 
> HBASE-17472.branch-1.v6.patch, HBASE-17472.branch-1.v7.patch, 
> HBASE-17472.master.v6.patch, HBASE-17472.master.v6.patch, 
> HBASE-17472.master.v7.patch, HBASE-17472.v1.patch, HBASE-17472.v2.patch, 
> HBASE-17472.v3.patch, HBASE-17472.v4.patch, HBASE-17472.v5.patch
>
>
> Currently, the HBase grant operation has the following semantics:
> {code}
> hbase(main):019:0> grant 'hbase_tst', 'RW', 'ycsb'
> 0 row(s) in 0.0960 seconds
> hbase(main):020:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
> 
>  hbase_tst   default,ycsb,,: 
> [Permission:actions=READ,WRITE]   
>   
>   
> 1 row(s) in 0.0550 seconds
> hbase(main):021:0> grant 'hbase_tst', 'CA', 'ycsb'
> 0 row(s) in 0.0820 seconds
> hbase(main):022:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
>  hbase_tst   default,ycsb,,: 
> [Permission: actions=CREATE,ADMIN]
>   
>   
> 1 row(s) in 0.0490 seconds
> {code}  
> A later grant replaces previously granted permissions, which confuses most 
> HBase administrators.
> It seems more reasonable for HBase to merge multiple granted permissions.
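
As an aside for readers, the merge semantics being asked for amount to a set 
union over the granted actions. The sketch below is purely illustrative (the 
Action enum and GrantMergeSketch class are hypothetical stand-ins, not the 
HBase access-control code): a later 'CA' grant adds to, rather than replaces, 
an earlier 'RW' grant.

{code}
import java.util.EnumSet;

// Hypothetical illustration of merge-on-grant semantics; not the actual
// HBase access-control implementation.
public class GrantMergeSketch {
  enum Action { READ, WRITE, EXEC, CREATE, ADMIN }

  /** Merge newly granted actions into the existing set instead of replacing it. */
  static EnumSet<Action> grant(EnumSet<Action> existing, EnumSet<Action> granted) {
    EnumSet<Action> merged = EnumSet.copyOf(existing);
    merged.addAll(granted); // set union, so earlier grants are preserved
    return merged;
  }

  public static void main(String[] args) {
    EnumSet<Action> rw = EnumSet.of(Action.READ, Action.WRITE);   // first grant: 'RW'
    EnumSet<Action> ca = EnumSet.of(Action.CREATE, Action.ADMIN); // second grant: 'CA'
    System.out.println(grant(rw, ca)); // [READ, WRITE, CREATE, ADMIN]
  }
}
{code}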



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17210) Set timeout on trying rowlock according to client's RPC timeout

2017-02-20 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17210:
--
Attachment: HBASE-17210.v04.patch

Retrying; the flaky test failures seem unrelated.

> Set timeout on trying rowlock according to client's RPC timeout
> ---
>
> Key: HBASE-17210
> URL: https://issues.apache.org/jira/browse/HBASE-17210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17120.v1.patch, HBASE-17210.v02.patch, 
> HBASE-17210.v03.patch, HBASE-17210.v04.patch, HBASE-17210.v04.patch
>
>
> Currently, when we try to acquire a row lock, the timeout is fixed (30s by 
> default). But requests from clients can carry different RPC timeout settings. 
> We can use the client's deadline to set the timeout on tryLock.
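
A rough sketch of the idea, assuming the server can see the client's deadline 
(the names deadlineMillis and DEFAULT_ROWLOCK_WAIT_MS are illustrative, and a 
bare ReentrantLock stands in for HRegion's row locks; this is not the patch 
itself):

{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class RowLockTimeoutSketch {
  // the current fixed default wait
  static final long DEFAULT_ROWLOCK_WAIT_MS = 30_000L;

  /** Wait for the row lock no longer than the client is still willing to wait. */
  static boolean tryRowLock(ReentrantLock rowLock, long deadlineMillis)
      throws InterruptedException {
    long remaining = deadlineMillis - System.currentTimeMillis();
    long waitMs = Math.min(DEFAULT_ROWLOCK_WAIT_MS, Math.max(0, remaining));
    return rowLock.tryLock(waitMs, TimeUnit.MILLISECONDS);
  }
}
{code}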



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17263) Netty based rpc server impl

2017-02-20 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874234#comment-15874234
 ] 

binlijin commented on HBASE-17263:
--

ping [~anoop.hbase] [~ram_krish]

>   Netty based rpc server impl
> -
>
> Key: HBASE-17263
> URL: https://issues.apache.org/jira/browse/HBASE-17263
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0
>
> Attachments: HBASE-17263.patch, HBASE-17263_v2.patch, 
> HBASE-17263_v3.patch, HBASE-17263_v4.patch, HBASE-17263_v5.patch, 
> HBASE-17263_v6.patch, HBASE-17263_v7.patch, HBASE-17263_v8.patch
>
>
> An RPC server implementation based on Netty 4, which provides better performance.
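
For context, a Netty 4 server is built around a ServerBootstrap with 
boss/worker event loop groups and a channel pipeline. The sketch below is a 
generic, heavily simplified bootstrap, not the HBASE-17263 patch; the handler 
and port are placeholders.

{code}
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyRpcServerSketch {
  public static void main(String[] args) throws InterruptedException {
    EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
    EventLoopGroup workers = new NioEventLoopGroup(); // handles socket I/O
    try {
      ServerBootstrap bootstrap = new ServerBootstrap()
          .group(boss, workers)
          .channel(NioServerSocketChannel.class)
          .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
              // a real RPC server would add frame decoders and a request
              // dispatcher here; this placeholder just ignores input
              ch.pipeline().addLast(new SimpleChannelInboundHandler<Object>() {
                @Override
                protected void channelRead0(ChannelHandlerContext ctx, Object msg) {
                  // no-op placeholder
                }
              });
            }
          });
      ChannelFuture bound = bootstrap.bind(16020).sync(); // placeholder port
      bound.channel().closeFuture().sync();
    } finally {
      boss.shutdownGracefully();
      workers.shutdownGracefully();
    }
  }
}
{code}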



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17669) Implement async mergeRegion/splitRegion methods.

2017-02-20 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-17669:


 Summary: Implement async mergeRegion/splitRegion methods.
 Key: HBASE-17669
 URL: https://issues.apache.org/jira/browse/HBASE-17669
 Project: HBase
  Issue Type: Sub-task
  Components: Admin, asyncclient, Client
Affects Versions: 2.0.0
Reporter: Zheng Hu
Assignee: Zheng Hu
 Fix For: 2.0.0


RT



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17668) Implement async assgin/offline/move/unassign methods

2017-02-20 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17668:
-
Description: 
Implement the following methods for the async admin client: 

1.  assign region; 
2.  unassign region; 
3.  offline region; 
4.  move region;


  was:
Implement following methods for async admin client: 

1.  assign region; 
2.  unassign region; 
3.  offline region; 
4.  move regioin;



> Implement async assgin/offline/move/unassign methods
> 
>
> Key: HBASE-17668
> URL: https://issues.apache.org/jira/browse/HBASE-17668
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Affects Versions: 2.0.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
>
> Implement the following methods for the async admin client: 
> 1.  assign region; 
> 2.  unassign region; 
> 3.  offline region; 
> 4.  move region;



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17668) Implement async assgin/offline/move/unassign methods

2017-02-20 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-17668:


 Summary: Implement async assgin/offline/move/unassign methods
 Key: HBASE-17668
 URL: https://issues.apache.org/jira/browse/HBASE-17668
 Project: HBase
  Issue Type: Sub-task
  Components: Admin, asyncclient, Client
Affects Versions: 2.0.0
Reporter: Zheng Hu
Assignee: Zheng Hu
 Fix For: 2.0.0


Implement the following methods for the async admin client: 

1.  assign region; 
2.  unassign region; 
3.  offline region; 
4.  move regioin;




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17667) Implement async flush/compact region methods

2017-02-20 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-17667:


 Summary: Implement  async  flush/compact region methods
 Key: HBASE-17667
 URL: https://issues.apache.org/jira/browse/HBASE-17667
 Project: HBase
  Issue Type: Sub-task
  Components: Admin, asyncclient, Client
Affects Versions: 2.0.0
Reporter: Zheng Hu
Assignee: Zheng Hu
 Fix For: 2.0.0


Implement the following methods for the async admin (sketched below): 

{code}
1. flush table ; 
2. flush region; 
3. compact table;
4. compact region;
5. compact region server; 
6. major compact for table; 
7. major compact for region; 
8. major compact for CF;
9. major compact for specific region and specific CF; 
{code}
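
The interface sketch below shows one plausible shape for these methods (names 
and parameter types are illustrative; the actual AsyncAdmin API may differ): 
each call returns a CompletableFuture instead of blocking the caller.

{code}
import java.util.concurrent.CompletableFuture;

// Illustrative only; not the final AsyncAdmin signatures.
interface AsyncAdminFlushCompactSketch {
  CompletableFuture<Void> flush(String tableName);                 // 1. flush table
  CompletableFuture<Void> flushRegion(byte[] regionName);          // 2. flush region
  CompletableFuture<Void> compact(String tableName);               // 3. compact table
  CompletableFuture<Void> compactRegion(byte[] regionName);        // 4. compact region
  CompletableFuture<Void> compactRegionServer(String serverName);  // 5. compact region server
  CompletableFuture<Void> majorCompact(String tableName);          // 6. major compact table
  CompletableFuture<Void> majorCompactRegion(byte[] regionName);   // 7. major compact region
  CompletableFuture<Void> majorCompact(String tableName, byte[] family);        // 8. per CF
  CompletableFuture<Void> majorCompactRegion(byte[] regionName, byte[] family); // 9. region + CF
}
{code}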




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17472) Correct the semantic of permission grant

2017-02-20 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874200#comment-15874200
 ] 

Zheng Hu commented on HBASE-17472:
--

Ping [~Apache9], [~enis], [~busbey], Thanks. 

> Correct the semantic of  permission grant
> -
>
> Key: HBASE-17472
> URL: https://issues.apache.org/jira/browse/HBASE-17472
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17472.branch-1.3.v6.patch, 
> HBASE-17472.branch-1.v6.patch, HBASE-17472.branch-1.v7.patch, 
> HBASE-17472.master.v6.patch, HBASE-17472.master.v6.patch, 
> HBASE-17472.master.v7.patch, HBASE-17472.v1.patch, HBASE-17472.v2.patch, 
> HBASE-17472.v3.patch, HBASE-17472.v4.patch, HBASE-17472.v5.patch
>
>
> Currently, the HBase grant operation has the following semantics:
> {code}
> hbase(main):019:0> grant 'hbase_tst', 'RW', 'ycsb'
> 0 row(s) in 0.0960 seconds
> hbase(main):020:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
> 
>  hbase_tst   default,ycsb,,: 
> [Permission:actions=READ,WRITE]   
>   
>   
> 1 row(s) in 0.0550 seconds
> hbase(main):021:0> grant 'hbase_tst', 'CA', 'ycsb'
> 0 row(s) in 0.0820 seconds
> hbase(main):022:0> user_permission 'ycsb'
> User 
> Namespace,Table,Family,Qualifier:Permission   
>   
>   
>  hbase_tst   default,ycsb,,: 
> [Permission: actions=CREATE,ADMIN]
>   
>   
> 1 row(s) in 0.0490 seconds
> {code}  
> A later grant replaces previously granted permissions, which confuses most 
> HBase administrators.
> It seems more reasonable for HBase to merge multiple granted permissions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17662) Disable in-memory flush when replaying from WAL

2017-02-20 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-17662:

Summary: Disable in-memory flush when replaying from WAL  (was: Disable 
in-memory flush when eplaying from WAL)

> Disable in-memory flush when replaying from WAL
> ---
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch
>
>
> When replaying the edits from the WAL, the region's updateLock is not taken, 
> because single-threaded operation is assumed. However, the thread safety of 
> the in-memory flush of CompactingMemStore relies on taking the region's 
> updateLock. 
> The in-memory flush can be skipped at replay time (everything is flushed to 
> disk just after the replay anyway). Therefore it is acceptable to simply skip 
> the in-memory flush action while the updates arrive as part of the replay 
> from the WAL.
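
A minimal sketch of that idea, assuming a per-store replay flag (the names 
replayingFromWAL and checkAndFlushInMemory are illustrative, not the exact 
CompactingMemStore code):

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only: gate the in-memory flush behind a flag that is set while the
// region is replaying edits from the WAL.
class InMemoryFlushGateSketch {
  private final AtomicBoolean replayingFromWAL = new AtomicBoolean(false);

  void startReplayingFromWAL() { replayingFromWAL.set(true); }
  void stopReplayingFromWAL()  { replayingFromWAL.set(false); }

  void checkAndFlushInMemory(long activeSegmentSize, long inMemoryFlushThreshold) {
    if (replayingFromWAL.get()) {
      return; // skip: the region's updateLock is not held during replay
    }
    if (activeSegmentSize > inMemoryFlushThreshold) {
      flushInMemory(); // push the active segment into the compaction pipeline
    }
  }

  private void flushInMemory() { /* omitted in this sketch */ }
}
{code}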



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17662) Disable in-memory flush when eplaying from WAL

2017-02-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874187#comment-15874187
 ] 

ramkrishna.s.vasudevan commented on HBASE-17662:


Small comment:
{code}
// update the stores that we are done replaying
for (Store store : stores) {
  ((HStore) store).stopReplayingFromWAL();
}
{code}
Should this be in a try/finally block that encloses
{code}
if (ServerRegionReplicaUtil.shouldReplayRecoveredEdits(this)) {
{code}
The rest looks good to me.
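
To make the suggestion concrete, the structure being proposed would look 
roughly like this (a sketch only, simplified from HRegion's replay path; 
startReplayingFromWAL is assumed here as the counterpart of 
stopReplayingFromWAL):

{code}
if (ServerRegionReplicaUtil.shouldReplayRecoveredEdits(this)) {
  try {
    for (Store store : stores) {
      ((HStore) store).startReplayingFromWAL();
    }
    // ... replay the recovered edits here ...
  } finally {
    // ensure every store leaves replay mode even if the replay throws
    for (Store store : stores) {
      ((HStore) store).stopReplayingFromWAL();
    }
  }
}
{code}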

> Disable in-memory flush when eplaying from WAL
> --
>
> Key: HBASE-17662
> URL: https://issues.apache.org/jira/browse/HBASE-17662
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
> Attachments: HBASE-17662-V02.patch
>
>
> When replaying the edits from the WAL, the region's updateLock is not taken, 
> because single-threaded operation is assumed. However, the thread safety of 
> the in-memory flush of CompactingMemStore relies on taking the region's 
> updateLock. 
> The in-memory flush can be skipped at replay time (everything is flushed to 
> disk just after the replay anyway). Therefore it is acceptable to simply skip 
> the in-memory flush action while the updates arrive as part of the replay 
> from the WAL.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17655) Removing MemStoreScanner and SnapshotScanner

2017-02-20 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874183#comment-15874183
 ] 

Eshcar Hillel commented on HBASE-17655:
---

Thanks, [~ram_krish].
While working on HBASE-17339 it occurred to me that it is very difficult to 
debug the scanners when there are layers upon layers of scanners and key-value 
heaps. Some simplification is required here to keep the code maintainable.

> Removing MemStoreScanner and SnapshotScanner
> 
>
> Key: HBASE-17655
> URL: https://issues.apache.org/jira/browse/HBASE-17655
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17655-V01.patch
>
>
> With CompactingMemstore becoming the new default, a store comprises multiple 
> memory segments and not just 1-2. MemStoreScanner encapsulates the scanning 
> of segments in the memory part of the store. SnapshotScanner is used to scan 
> the snapshot segment upon flush to disk.
> Having the scanner logic scattered across multiple classes (StoreScanner, 
> SegmentScanner, MemStoreScanner, SnapshotScanner) makes maintenance and 
> debugging challenging, and not always for a good reason.
> For example, MemStoreScanner has a KeyValueHeap (KVH). When creating the 
> store scanner which also has a KVH, this makes a KVH inside a KVH. Reasoning 
> about the correctness of the methods supported by the scanner (seek, next, 
> hasNext, peek, etc.) is hard, and debugging them is cumbersome. 
> In addition, by removing the MemStoreScanner layer we allow store scanner to 
> filter out each one of the memory scanners instead of either taking them all 
> (in most cases) or discarding them all (rarely).
> SnapshotScanner is a simplified version of SegmentScanner as it is used only 
> in a specific context. However it is an additional implementation of the same 
> logic with no real advantage of improved performance.
> Therefore, I suggest removing both MemStoreScanner and SnapshotScanner. The 
> code is adjusted to handle the list of segment scanners they encapsulate.
> This fits well with the current code, since in most cases a list of scanners 
> is expected at some point, so passing the actual list of segment scanners is 
> more natural than wrapping a single (high-level) scanner with 
> Collections.singletonList(...).
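
The direction described above can be illustrated with a small sketch 
(MemStoreSketch and its SegmentScanner interface are placeholders, not the 
real HBase classes): the memstore hands out its segment scanners as a flat 
list, so the store scanner can filter each one individually instead of 
receiving a single wrapped MemStoreScanner via Collections.singletonList(...).

{code}
import java.util.ArrayList;
import java.util.List;

// Placeholder types; sketch of the refactoring direction only.
class MemStoreSketch {
  interface SegmentScanner {
    boolean shouldUse(long readPoint); // e.g. based on the segment's time range
  }

  private final List<SegmentScanner> segmentScanners = new ArrayList<>();

  /** Return the segment scanners directly instead of one wrapping MemStoreScanner. */
  List<SegmentScanner> getScanners(long readPoint) {
    List<SegmentScanner> result = new ArrayList<>();
    for (SegmentScanner s : segmentScanners) {
      if (s.shouldUse(readPoint)) { // filter per segment instead of all-or-nothing
        result.add(s);
      }
    }
    return result;
  }
}
{code}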



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17655) Removing MemStoreScanner and SnapshotScanner

2017-02-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874174#comment-15874174
 ] 

ramkrishna.s.vasudevan commented on HBASE-17655:


Once we have the composite memstore as the default, even the snapshotScanner 
that is created for the composite memstore is nothing but a MemstoreScanner. In 
that case SnapshotScanner is redundant. SnapshotScanner was in fact added for 
the case where, when the snapshot was made of a pipeline, an instance of 
snapshotScanner was created that had to be explicitly closed so that the 
segments were not returned. Will check RB once.

> Removing MemStoreScanner and SnapshotScanner
> 
>
> Key: HBASE-17655
> URL: https://issues.apache.org/jira/browse/HBASE-17655
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17655-V01.patch
>
>
> With CompactingMemstore becoming the new default, a store comprises multiple 
> memory segments and not just 1-2. MemStoreScanner encapsulates the scanning 
> of segments in the memory part of the store. SnapshotScanner is used to scan 
> the snapshot segment upon flush to disk.
> Having the scanner logic scattered across multiple classes (StoreScanner, 
> SegmentScanner, MemStoreScanner, SnapshotScanner) makes maintenance and 
> debugging challenging, and not always for a good reason.
> For example, MemStoreScanner has a KeyValueHeap (KVH). When creating the 
> store scanner which also has a KVH, this makes a KVH inside a KVH. Reasoning 
> about the correctness of the methods supported by the scanner (seek, next, 
> hasNext, peek, etc.) is hard, and debugging them is cumbersome. 
> In addition, by removing the MemStoreScanner layer we allow store scanner to 
> filter out each one of the memory scanners instead of either taking them all 
> (in most cases) or discarding them all (rarely).
> SnapshotScanner is a simplified version of SegmentScanner as it is used only 
> in a specific context. However it is an additional implementation of the same 
> logic with no real advantage of improved performance.
> Therefore, I suggest removing both MemStoreScanner and SnapshotScanner. The 
> code is adjusted to handle the list of segment scanners they encapsulate.
> This fits well with the current code, since in most cases a list of scanners 
> is expected at some point, so passing the actual list of segment scanners is 
> more natural than wrapping a single (high-level) scanner with 
> Collections.singletonList(...).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17210) Set timeout on trying rowlock according to client's RPC timeout

2017-02-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15874172#comment-15874172
 ] 

Hadoop QA commented on HBASE-17210:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 35s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 120m 54s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 165m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853521/HBASE-17210.v04.patch 
|
| JIRA Issue | HBASE-17210 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux f5036f16806b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / d08bafa |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5774/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5774/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5774/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Set timeout on trying rowlock according to client's RPC timeout
> ---
>
> Key: HBASE-17210
> URL: https://issues.apache.org/jira/browse/HBASE-17210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Phil Yang
>Assignee: Phil Yang
>