[jira] [Commented] (HBASE-20501) Change the Hadoop minimum version to 2.7.1

2018-05-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488469#comment-16488469
 ] 

Sean Busbey commented on HBASE-20501:
-

Release note:

{code}

HBase is no longer able to maintain compatibility with Apache Hadoop versions 
that are no longer receiving updates. This release raises the minimum supported 
version to Hadoop 2.7.1. Downstream users are strongly advised to upgrade to 
the latest Hadoop 2.7 maintenance release.

Downstream users of earlier HBase versions are similarly advised to upgrade to 
Hadoop 2.7.1+. When doing so, it is especially important to follow the guidance 
from [the HBase Reference Guide's Hadoop 
section](http://hbase.apache.org/book.html#hadoop) on replacing the Hadoop 
artifacts bundled with HBase. 
{code}

> Change the Hadoop minimum version to 2.7.1
> --
>
> Key: HBASE-20501
> URL: https://issues.apache.org/jira/browse/HBASE-20501
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Andrew Purtell
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.5.0
>
> Attachments: HBASE-20501.0.patch
>
>
> See discussion thread on dev@ "[DISCUSS] Branching for HBase 1.5 and Hadoop 
> minimum version update (to 2.7)"
> Consensus
> * This is a needed change due to the practicalities of having Hadoop as a 
> dependency
> * Let's move up the minimum supported version of Hadoop to 2.7.1.
> * Update documentation (support matrix, compatibility discussion) to call 
> this out.
> * Be sure to call out this change in the release notes.
> * Take the opportunity to remind users about our callout "Replace the Hadoop 
> Bundled With HBase!" recommending users upgrade their Hadoop if < 2.7.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20501) Change the Hadoop minimum version to 2.7.1

2018-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488472#comment-16488472
 ] 

Hadoop QA commented on HBASE-20501:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/12931/console in case of 
problems.


> Change the Hadoop minimum version to 2.7.1
> --
>
> Key: HBASE-20501
> URL: https://issues.apache.org/jira/browse/HBASE-20501
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Andrew Purtell
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.5.0
>
> Attachments: HBASE-20501.0.patch
>
>
> See discussion thread on dev@ "[DISCUSS] Branching for HBase 1.5 and Hadoop 
> minimum version update (to 2.7)"
> Consensus
> * This is a needed change due to the practicalities of having Hadoop as a 
> dependency
> * Let's move up the minimum supported version of Hadoop to 2.7.1.
> * Update documentation (support matrix, compatibility discussion) to call 
> this out.
> * Be sure to call out this change in the release notes.
> * Take the opportunity to remind users about our callout "Replace the Hadoop 
> Bundled With HBase!" recommending users upgrade their Hadoop if < 2.7.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20633) Dropping a table containing a disable violation policy fails to remove the quota upon table delete

2018-05-23 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-20633:
--

 Summary: Dropping a table containing a disable violation policy 
fails to remove the quota upon table delete
 Key: HBASE-20633
 URL: https://issues.apache.org/jira/browse/HBASE-20633
 Project: HBase
  Issue Type: Bug
Reporter: Nihal Jain
Assignee: Nihal Jain


 
{code:java}
  private void setQuotaAndThenDropTable(SpaceViolationPolicy policy) throws Exception {
    Put put = new Put(Bytes.toBytes("to_reject"));
    put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), Bytes.toBytes("to"),
      Bytes.toBytes("reject"));

    // Do puts until we violate the space policy passed in as the parameter
    final TableName tn = writeUntilViolationAndVerifyViolation(policy, put);

    // Now, drop the table
    TEST_UTIL.deleteTable(tn);
    LOG.debug("Successfully deleted table {}", tn);

    // Now re-create the table
    TEST_UTIL.createTable(tn, Bytes.toBytes(SpaceQuotaHelperForTests.F1));
    LOG.debug("Successfully re-created table {}", tn);

    // Put some rows now: should not violate as table/quota was dropped
    verifyNoViolation(policy, tn, put);
  }
{code}

 * When we drop a table, upon completion the quota subsystem triggers removal of the DISABLE policy, which causes the system to enable the table
{noformat}
2018-05-18 18:08:58,189 DEBUG [PEWorker-13] 
procedure.DeleteTableProcedure(130): delete 
'testSetQuotaAndThenDropTableWithDisable19' completed
2018-05-18 18:08:58,191 INFO  [PEWorker-13] procedure2.ProcedureExecutor(1265): 
Finished pid=328, state=SUCCESS; DeleteTableProcedure 
table=testSetQuotaAndThenDropTableWithDisable19 in 271msec
2018-05-18 18:08:58,321 INFO  [regionserver/ba4cba1aa13d:0.Chore.1] 
client.HBaseAdmin$14(844): Started enable of 
testSetQuotaAndThenDropTableWithDisable19{noformat}

 * But, since the table has already been dropped, the enable procedure rolls back
{noformat}
2018-05-18 18:08:58,427 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46443] 
procedure2.ProcedureExecutor(884): Stored pid=329, 
state=RUNNABLE:ENABLE_TABLE_PREPARE; EnableTableProcedure 
table=testSetQuotaAndThenDropTableWithDisable19
2018-05-18 18:08:58,430 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46443] 
master.MasterRpcServices(1141): Checking to see if procedure is done pid=329
2018-05-18 18:08:58,451 INFO  [PEWorker-10] procedure2.ProcedureExecutor(1359): 
Rolled back pid=329, state=ROLLEDBACK, 
exception=org.apache.hadoop.hbase.TableNotFoundException via 
master-enable-table:org.apache.hadoop.hbase.TableNotFoundException: 
testSetQuotaAndThenDropTableWithDisable19; EnableTableProcedure 
table=testSetQuotaAndThenDropTableWithDisable19 exec-time=124msec
2018-05-18 18:08:58,533 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46443] 
master.MasterRpcServices(1141): Checking to see if procedure is done pid=329
2018-05-18 18:08:58,535 INFO  [regionserver/ba4cba1aa13d:0.Chore.1] 
client.HBaseAdmin$TableFuture(3652): Operation: ENABLE, Table Name: 
default:testSetQuotaAndThenDropTableWithDisable19 failed with 
testSetQuotaAndThenDropTableWithDisable19{noformat}

 * Since the quota manager fails to enable the table (i.e. to lift the DISABLE violation policy), it does not remove the policy, causing problems if the table is re-created (see the sketch below)
{noformat}
2018-05-18 18:08:58,536 ERROR [regionserver/ba4cba1aa13d:0.Chore.1] 
quotas.RegionServerSpaceQuotaManager(210): Failed to disable space violation 
policy for testSetQuotaAndThenDropTableWithDisable19. This table will remain in 
violation.
 org.apache.hadoop.hbase.TableNotFoundException: 
testSetQuotaAndThenDropTableWithDisable19
 at 
org.apache.hadoop.hbase.master.procedure.EnableTableProcedure.prepareEnable(EnableTableProcedure.java:323)
 at 
org.apache.hadoop.hbase.master.procedure.EnableTableProcedure.executeFromState(EnableTableProcedure.java:98)
 at 
org.apache.hadoop.hbase.master.procedure.EnableTableProcedure.executeFromState(EnableTableProcedure.java:49)
 at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
 at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
 at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1472)
 at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1240)
 at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:75)
 at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1760){noformat}
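
For illustration, a guard along these lines would avoid the stuck policy: if the table no longer exists there is nothing to enable, so only the remembered in-memory policy should be dropped. This is a hypothetical sketch, not the actual RegionServerSpaceQuotaManager code; the Admin-based existence check and the enforcedPolicies map are stand-ins.

{code:java}
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.quotas.SpaceViolationPolicy;

// Hypothetical sketch, not the actual RegionServerSpaceQuotaManager code.
class PolicyCleanupSketch {
  private final Admin admin;
  private final ConcurrentHashMap<TableName, SpaceViolationPolicy> enforcedPolicies =
      new ConcurrentHashMap<>();

  PolicyCleanupSketch(Admin admin) {
    this.admin = admin;
  }

  void disableViolationPolicy(TableName table) throws IOException {
    if (!admin.tableExists(table)) {
      // The table was dropped: there is nothing to enable. Just forget the
      // DISABLE policy so a re-created table starts with a clean slate.
      enforcedPolicies.remove(table);
      return;
    }
    // The table still exists: lift the DISABLE policy by enabling it.
    admin.enableTable(table);
    enforcedPolicies.remove(table);
  }
}
{code}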



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20501) Change the Hadoop minimum version to 2.7.1

2018-05-23 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20501:

Status: Patch Available  (was: In Progress)

v0
  - updates hadoop prerequisites list to call out dropping of pre-2.7.1 in each 
next minor release
  - updates compatibility discussion to call out lack of dependency 
compatibility for Hadoop
  - updates our test personality to stop looking at earlier hadoop versions for 
branch-1 and future minor 1.y branches

I checked the refguide by rendering it locally. Reviewers should probably look at 
the rendered document that precommit posts up.

I checked the personality changes by swapping out maven for the echo command.

e.g. branch-1.4 as used in nightly:

{code}
$ test-patch --personality=dev-support/hbase-personality.sh --empty-patch 
--plugins=maven,hadoopcheck --mvn-cmd=$(which echo) --branch=branch-1.4 --robot
{code}

branch-1 quick hadoop check (used in precommit):

{code}
$ test-patch --personality=dev-support/hbase-personality.sh --empty-patch 
--plugins=maven,hadoopcheck --mvn-cmd=$(which echo) --branch=branch-1 --robot 
--quick-hadoopcheck
{code}

branch-2.0:
{code}
test-patch --personality=dev-support/hbase-personality.sh --empty-patch 
--plugins=maven,hadoopcheck --mvn-cmd=$(which echo) --branch=branch-2.0 --robot
{code}

I tried it on the current release branches and a local feature branch. (Note that 
{{--robot}} means the local git repo will be moved to the given branch. I had 
the patch on a feature branch and just changed back to it after each test.)

> Change the Hadoop minimum version to 2.7.1
> --
>
> Key: HBASE-20501
> URL: https://issues.apache.org/jira/browse/HBASE-20501
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Andrew Purtell
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.5.0
>
> Attachments: HBASE-20501.0.patch
>
>
> See discussion thread on dev@ "[DISCUSS] Branching for HBase 1.5 and Hadoop 
> minimum version update (to 2.7)"
> Consensus
> * This is a needed change due to the practicalities of having Hadoop as a 
> dependency
> * Let's move up the minimum supported version of Hadoop to 2.7.1.
> * Update documentation (support matrix, compatibility discussion) to call 
> this out.
> * Be sure to call out this change in the release notes.
> * Take the opportunity to remind users about our callout "Replace the Hadoop 
> Bundled With HBase!" recommending users upgrade their Hadoop if < 2.7.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20588) Space quota change after quota violation doesn't seem to take in effect

2018-05-23 Thread Nihal Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488450#comment-16488450
 ] 

Nihal Jain edited comment on HBASE-20588 at 5/24/18 5:39 AM:
-

This patch fixes the whitespaces.  [^HBASE-20588.master.005.patch] 


was (Author: nihaljain.cs):
Thus patch fixes the whitespaces.  [^HBASE-20588.master.005.patch] 

> Space quota change after quota violation doesn't seem to take in effect
> ---
>
> Key: HBASE-20588
> URL: https://issues.apache.org/jira/browse/HBASE-20588
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.0.0
>Reporter: Biju Nair
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20588.master.001.patch, 
> HBASE-20588.master.002.patch, HBASE-20588.master.003.patch, 
> HBASE-20588.master.004.patch, HBASE-20588.master.005.patch
>
>
> Steps followed 
>  - Through {{hbase shell}}
> {noformat}
> set_quota TYPE => SPACE, TABLE => 'TestTable', LIMIT => '2M', POLICY => 
> NO_INSERTS{noformat}
>  - Run {{PE}} until the quota is reached
> {noformat}
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred 
> --rows=2000 sequentialWrite 1{noformat}
>  - Through {{HBase}} shell
> {noformat}
> set_quota TYPE => SPACE, TABLE => 'TestTable', LIMIT => NONE{noformat}
> - Through {{HBase}} shell verify the effective Quotas
> {noformat}
> > list_quotas
> OWNER                                               QUOTAS                    
>                                                                               
>                                              
> 0 row(s)
> Took 0.0365 seconds{noformat}
>  - Wait for some time (at least 5 mins) and try to add data to the table
> {noformat}
> > put 'TestTable','r1','info0:0','v1'
> ERROR: org.apache.hadoop.hbase.quotas.SpaceLimitingException: NO_INSERTS Puts 
> are disallowed due to a space quota.
> at 
> org.apache.hadoop.hbase.quotas.policies.NoInsertsViolationPolicyEnforcement.check(NoInsertsViolationPolicyEnforcement.java:47){noformat}
> To resolve the issue, {{RSes}} need to be restarted which points to in memory 
> data not getting reset. 
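
For reference, the shell steps quoted above can also be driven from Java. A minimal sketch against the HBase 2.x quota Admin API (assuming a cluster reachable from the default configuration; the write phase between the two calls is elided):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.SpaceViolationPolicy;

public class SpaceQuotaRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("TestTable");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // set_quota TYPE => SPACE, TABLE => 'TestTable', LIMIT => '2M', POLICY => NO_INSERTS
      admin.setQuota(QuotaSettingsFactory.limitTableSpace(
          tn, 2L * 1024 * 1024, SpaceViolationPolicy.NO_INSERTS));
      // ... run PE / puts until the quota is violated ...
      // set_quota TYPE => SPACE, TABLE => 'TestTable', LIMIT => NONE
      admin.setQuota(QuotaSettingsFactory.removeTableSpaceLimit(tn));
    }
  }
}
{code}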



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20588) Space quota change after quota violation doesn't seem to take in effect

2018-05-23 Thread Nihal Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-20588:
---
Attachment: HBASE-20588.master.005.patch

> Space quota change after quota violation doesn't seem to take in effect
> ---
>
> Key: HBASE-20588
> URL: https://issues.apache.org/jira/browse/HBASE-20588
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.0.0
>Reporter: Biju Nair
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20588.master.001.patch, 
> HBASE-20588.master.002.patch, HBASE-20588.master.003.patch, 
> HBASE-20588.master.004.patch, HBASE-20588.master.005.patch
>
>
> Steps followed 
>  - Through {{hbase shell}}
> {noformat}
> set_quota TYPE => SPACE, TABLE => 'TestTable', LIMIT => '2M', POLICY => 
> NO_INSERTS{noformat}
>  - Run {{PE}} until the quota is reached
> {noformat}
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred 
> --rows=2000 sequentialWrite 1{noformat}
>  - Through {{HBase}} shell
> {noformat}
> set_quota TYPE => SPACE, TABLE => 'TestTable', LIMIT => NONE{noformat}
> - Through {{HBase}} shell verify the effective Quotas
> {noformat}
> > list_quotas
> OWNER                                               QUOTAS                    
>                                                                               
>                                              
> 0 row(s)
> Took 0.0365 seconds{noformat}
>  - Wait for some time (at least 5 mins) and try to add data to the table
> {noformat}
> > put 'TestTable','r1','info0:0','v1'
> ERROR: org.apache.hadoop.hbase.quotas.SpaceLimitingException: NO_INSERTS Puts 
> are disallowed due to a space quota.
> at 
> org.apache.hadoop.hbase.quotas.policies.NoInsertsViolationPolicyEnforcement.check(NoInsertsViolationPolicyEnforcement.java:47){noformat}
> To resolve the issue, {{RSes}} need to be restarted which points to in memory 
> data not getting reset. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20588) Space quota change after quota violation doesn't seem to take in effect

2018-05-23 Thread Nihal Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-20588:
---
Attachment: (was: HBASE-20588.master.001.patch)

> Space quota change after quota violation doesn't seem to take in effect
> ---
>
> Key: HBASE-20588
> URL: https://issues.apache.org/jira/browse/HBASE-20588
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.0.0
>Reporter: Biju Nair
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20588.master.001.patch, 
> HBASE-20588.master.002.patch, HBASE-20588.master.003.patch, 
> HBASE-20588.master.004.patch, HBASE-20588.master.005.patch
>
>
> Steps followed 
>  - Through {{hbase shell}}
> {noformat}
> set_quota TYPE => SPACE, TABLE => 'TestTable', LIMIT => '2M', POLICY => 
> NO_INSERTS{noformat}
>  - Run {{PE}} until the quota is reached
> {noformat}
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred 
> --rows=2000 sequentialWrite 1{noformat}
>  - Through {{HBase}} shell
> {noformat}
> set_quota TYPE => SPACE, TABLE => 'TestTable', LIMIT => NONE{noformat}
> - Through {{HBase}} shell verify the effective Quotas
> {noformat}
> > list_quotas
> OWNER                                               QUOTAS                    
>                                                                               
>                                              
> 0 row(s)
> Took 0.0365 seconds{noformat}
>  - Wait for some time (at least 5 mins) and try to add data to the table
> {noformat}
> > put 'TestTable','r1','info0:0','v1'
> ERROR: org.apache.hadoop.hbase.quotas.SpaceLimitingException: NO_INSERTS Puts 
> are disallowed due to a space quota.
> at 
> org.apache.hadoop.hbase.quotas.policies.NoInsertsViolationPolicyEnforcement.check(NoInsertsViolationPolicyEnforcement.java:47){noformat}
> To resolve the issue, {{RSes}} need to be restarted which points to in memory 
> data not getting reset. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20588) Space quota change after quota violation doesn't seem to take in effect

2018-05-23 Thread Nihal Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488450#comment-16488450
 ] 

Nihal Jain commented on HBASE-20588:


This patch fixes the whitespaces.  [^HBASE-20588.master.005.patch] 

> Space quota change after quota violation doesn't seem to take in effect
> ---
>
> Key: HBASE-20588
> URL: https://issues.apache.org/jira/browse/HBASE-20588
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.0.0
>Reporter: Biju Nair
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20588.master.001.patch, 
> HBASE-20588.master.002.patch, HBASE-20588.master.003.patch, 
> HBASE-20588.master.004.patch, HBASE-20588.master.005.patch
>
>
> Steps followed 
>  - Through {{hbase shell}}
> {noformat}
> set_quota TYPE => SPACE, TABLE => 'TestTable', LIMIT => '2M', POLICY => 
> NO_INSERTS{noformat}
>  - Run {{PE}} until the quota is reached
> {noformat}
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred 
> --rows=2000 sequentialWrite 1{noformat}
>  - Through {{HBase}} shell
> {noformat}
> set_quota TYPE => SPACE, TABLE => 'TestTable', LIMIT => NONE{noformat}
> - Through {{HBase}} shell verify the effective Quotas
> {noformat}
> > list_quotas
> OWNER                                               QUOTAS                    
>                                                                               
>                                              
> 0 row(s)
> Took 0.0365 seconds{noformat}
>  - Wait for some time (at least 5 mins) and try to add data to the table
> {noformat}
> > put 'TestTable','r1','info0:0','v1'
> ERROR: org.apache.hadoop.hbase.quotas.SpaceLimitingException: NO_INSERTS Puts 
> are disallowed due to a space quota.
> at 
> org.apache.hadoop.hbase.quotas.policies.NoInsertsViolationPolicyEnforcement.check(NoInsertsViolationPolicyEnforcement.java:47){noformat}
> To resolve the issue, {{RSes}} need to be restarted which points to in memory 
> data not getting reset. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20501) Change the Hadoop minimum version to 2.7.1

2018-05-23 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20501:

Attachment: HBASE-20501.0.patch

> Change the Hadoop minimum version to 2.7.1
> --
>
> Key: HBASE-20501
> URL: https://issues.apache.org/jira/browse/HBASE-20501
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Andrew Purtell
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.5.0
>
> Attachments: HBASE-20501.0.patch
>
>
> See discussion thread on dev@ "[DISCUSS] Branching for HBase 1.5 and Hadoop 
> minimum version update (to 2.7)"
> Consensus
> * This is a needed change due to the practicalities of having Hadoop as a 
> dependency
> * Let's move up the minimum supported version of Hadoop to 2.7.1.
> * Update documentation (support matrix, compatibility discussion) to call 
> this out.
> * Be sure to call out this change in the release notes.
> * Take the opportunity to remind users about our callout "Replace the Hadoop 
> Bundled With HBase!" recommending users upgrade their Hadoop if < 2.7.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488423#comment-16488423
 ] 

Hudson commented on HBASE-20627:


Results for branch branch-2
[build #775 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0, 1.4.5
>
> Attachments: 20627.branch-1.txt, 20627.v1.txt, 20627.v2.txt, 
> 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.
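
In other words, the fix is to invoke the hooks next to the permission check. A sketch of the resulting ordering inside RSGroupAdminEndpoint, combining the two quoted snippets (the postRemoveRSGroup call and the surrounding method shape are assumptions, not quotes from the patch):

{code:java}
// Hypothetical ordering, not the committed patch: permission check first,
// then pre hook, operation, post hook.
void removeRSGroup(String groupName) throws IOException {
  checkPermission("removeRSGroup");
  if (master.getMasterCoprocessorHost() != null) {
    master.getMasterCoprocessorHost().preRemoveRSGroup(groupName);
  }
  groupAdminServer.removeRSGroup(groupName);
  if (master.getMasterCoprocessorHost() != null) {
    master.getMasterCoprocessorHost().postRemoveRSGroup(groupName);
  }
}
{code}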



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20595) Remove the concept of 'special tables' from rsgroups

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488420#comment-16488420
 ] 

Hudson commented on HBASE-20595:


Results for branch branch-2
[build #775 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> Remove the concept of 'special tables' from rsgroups
> 
>
> Key: HBASE-20595
> URL: https://issues.apache.org/jira/browse/HBASE-20595
> Project: HBase
>  Issue Type: Task
>  Components: Region Assignment, rsgroup
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 2.0.1, 1.4.5
>
> Attachments: HBASE-20595-branch-1.patch, HBASE-20595.patch, 
> HBASE-20595.patch
>
>
> Regionserver groups needs to specially handle what it calls "special tables", 
> tables upon which core or other modular functionality depends. They need to 
> be excluded from normal rsgroup processing during bootstrap to avoid circular 
> dependencies or errors due to insufficiently initialized state. I think we 
> also want to ensure that such tables are always given a rsgroup assignment 
> with nonzero servers. (IIRC another issue already raises that point, we can 
> link it later.)
> Special tables include:
> * The system tables in the 'hbase:' namespace
> * The ACL table if the AccessController coprocessor is installed
> * The Labels table if the VisibilityController coprocessor is installed
> * The Quotas table if the FS quotas feature is active
> Either we need a facility where "special tables" can be registered, which 
> should be in core. Or, we institute a blanket rule that core and all 
> extensions that need a "special table" must put them into the 'hbase:' 
> namespace, so the TableName#isSystemTable() test will return TRUE for all, 
> and then rsgroups simply needs to test for that.
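
For the blanket-rule option, the test is already cheap. A small illustration of TableName#isSystemTable() (assuming the conventional 'hbase:acl' name for the ACL table):

{code:java}
import org.apache.hadoop.hbase.TableName;

public class SystemTableCheck {
  public static void main(String[] args) {
    // Tables in the 'hbase:' namespace report true, everything else false.
    System.out.println(TableName.valueOf("hbase:acl").isSystemTable());   // true
    System.out.println(TableName.valueOf("default:t1").isSystemTable()); // false
  }
}
{code}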



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20564) Tighter ByteBufferKeyValue Cell Comparator

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488422#comment-16488422
 ] 

Hudson commented on HBASE-20564:


Results for branch branch-2
[build #775 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> Tighter ByteBufferKeyValue Cell Comparator
> --
>
> Key: HBASE-20564
> URL: https://issues.apache.org/jira/browse/HBASE-20564
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.1
>
> Attachments: 0001-HBASE-20564-addendum.txt, 
> 0001-HBASE-20564-addendum.txt, 0001-HBASE-20564-addendum2.branch-2.0.patch, 
> 0002-HBASE-20564-addendum.branch-2.0.patch, 1.4.pe.write.0510.96203.cpu.svg, 
> 2.p3.write2.0514.104236.cpu.svg, 2.pe.write.135142.cpu.svg, 20564.addendum, 
> HBASE-20564.branch-2.0.001.patch, HBASE-20564.branch-2.0.002.patch, 
> HBASE-20564.branch-2.patch, hits.png
>
>
> Comparing Cells in hbase2 takes almost 3x the CPU.
> In hbase1, its a keyValue backed by a byte array caching a few important 
> values.. In hbase2, its a NoTagByteBufferChunkKeyValue(?) deserializing the 
> row/family/qualifier lengths repeatedly.
> I tried making a purposed comparator -- one that was not generic -- and it 
> seemed to have a nicer profile coming close to hbase1 in percentage used 
> (I'll post graphs) when I ran it in my perpetual memstore filler (See scripts 
> attached to HBASE-20483). It doesn't work when I try to run it on cluster. 
> Let me run unit tests to see if it can figure what I have wrong.
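
To make the caching point concrete: hbase1's KeyValue decodes the row/family/qualifier lengths once and keeps them, so a compare only touches the key bytes. A toy sketch of that idea (purely illustrative, not HBase code, with an invented buffer layout):

{code:java}
import java.nio.ByteBuffer;

// Toy illustration: decode lengths once at construction instead of
// re-reading them from the backing buffer on every comparison.
final class CachedLengthCell {
  private final ByteBuffer buf;
  private final int rowLength; // cached, not re-decoded per compare

  CachedLengthCell(ByteBuffer buf) {
    this.buf = buf;
    this.rowLength = buf.getShort(0); // invented layout: 2-byte row length
  }

  int compareRows(CachedLengthCell other) {
    int common = Math.min(rowLength, other.rowLength);
    for (int i = 0; i < common; i++) {
      int d = (buf.get(2 + i) & 0xff) - (other.buf.get(2 + i) & 0xff);
      if (d != 0) {
        return d;
      }
    }
    return rowLength - other.rowLength;
  }
}
{code}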



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488424#comment-16488424
 ] 

Hudson commented on HBASE-20597:


Results for branch branch-2
[build #775 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 
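
The general shape of the fix is to guard both the allocation and the close of the shared reference with one lock, so a losing allocation cannot leak. A generic sketch (illustrative only, not the HBaseReplicationEndpoint code):

{code:java}
import java.util.concurrent.Callable;

// Illustrative holder: allocation and close are serialized on 'this'.
final class SharedWatcherHolder<W extends AutoCloseable> {
  private W watcher; // shared reference, guarded by 'this'

  synchronized W getOrCreate(Callable<W> factory) throws Exception {
    if (watcher == null) {
      watcher = factory.call(); // only one thread can allocate and store
    }
    return watcher;
  }

  synchronized void closeAndClear() throws Exception {
    if (watcher != null) {
      watcher.close();
      watcher = null;
    }
  }
}
{code}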



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20620) Tighter ByteBufferKeyValue Cell Comparator; part 2

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488421#comment-16488421
 ] 

Hudson commented on HBASE-20620:


Results for branch branch-2
[build #775 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/775//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> Tighter ByteBufferKeyValue Cell Comparator; part 2
> --
>
> Key: HBASE-20620
> URL: https://issues.apache.org/jira/browse/HBASE-20620
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.1
>
> Attachments: HBASE-20620.branch-2.0.001.patch, 
> HBASE-20620.branch-2.0.002.patch, HBASE-20620.branch-2.0.003.patch, hits.png
>
>
> This is a follow-on from HBASE-20564 "Tighter ByteBufferKeyValue Cell 
> Comparator". In this issue, we make a stripped-down comparator that we deploy 
> in one location only, as memstore Comparator. HBASE-20564 operated on 
> CellComparator/Impl and got us 5-10k more throughput on top of a baseline 40k 
> or so. A purposed stripped-down ByteBufferKeyValue comparator that fosters 
> better inlining gets us from 45-50k up to about 75k (1.4 does 110-115k no-WAL 
> PE writes). Data coming. Log of profiling kept here: 
> https://docs.google.com/document/d/1vZ_k6_pNR1eQxID5u1xFihuPC7FkPaJQW8c4M5eA2AQ/edit#



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20603) Histogram metrics should reset min and max

2018-05-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488400#comment-16488400
 ] 

Lars Hofhansl commented on HBASE-20603:
---

I agree. Server lifetime metrics are pretty much useless.

> Histogram metrics should reset min and max
> --
>
> Key: HBASE-20603
> URL: https://issues.apache.org/jira/browse/HBASE-20603
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Attachments: HBASE-20603-branch-1-WIP.patch
>
>
> It's weird that the bins are reset at every monitoring interval but min and 
> max are tracked over the lifetime of the process. Makes it impossible to set 
> alarms on max value as they'll never shut off unless the process is 
> restarted. Histogram metrics should reset min and max at snapshot time too.
> For discussion.
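
A minimal sketch of interval-scoped min/max (illustrative, not the HBase metrics implementation): track them with atomics and swap sentinel values back in when the snapshot is taken, so an alarm on max can clear in the next interval.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

final class IntervalMinMax {
  private final AtomicLong min = new AtomicLong(Long.MAX_VALUE);
  private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

  void update(long value) {
    min.accumulateAndGet(value, Math::min);
    max.accumulateAndGet(value, Math::max);
  }

  /** Returns {min, max} for the interval just ended and resets both. */
  long[] snapshotAndReset() {
    return new long[] {
        min.getAndSet(Long.MAX_VALUE),
        max.getAndSet(Long.MIN_VALUE) };
  }
}
{code}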



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488394#comment-16488394
 ] 

Hudson commented on HBASE-20597:


Results for branch branch-2.0
[build #340 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20564) Tighter ByteBufferKeyValue Cell Comparator

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488393#comment-16488393
 ] 

Hudson commented on HBASE-20564:


Results for branch branch-2.0
[build #340 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Tighter ByteBufferKeyValue Cell Comparator
> --
>
> Key: HBASE-20564
> URL: https://issues.apache.org/jira/browse/HBASE-20564
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.1
>
> Attachments: 0001-HBASE-20564-addendum.txt, 
> 0001-HBASE-20564-addendum.txt, 0001-HBASE-20564-addendum2.branch-2.0.patch, 
> 0002-HBASE-20564-addendum.branch-2.0.patch, 1.4.pe.write.0510.96203.cpu.svg, 
> 2.p3.write2.0514.104236.cpu.svg, 2.pe.write.135142.cpu.svg, 20564.addendum, 
> HBASE-20564.branch-2.0.001.patch, HBASE-20564.branch-2.0.002.patch, 
> HBASE-20564.branch-2.patch, hits.png
>
>
> Comparing Cells in hbase2 takes almost 3x the CPU.
> In hbase1, its a keyValue backed by a byte array caching a few important 
> values.. In hbase2, its a NoTagByteBufferChunkKeyValue(?) deserializing the 
> row/family/qualifier lengths repeatedly.
> I tried making a purposed comparator -- one that was not generic -- and it 
> seemed to have a nicer profile coming close to hbase1 in percentage used 
> (I'll post graphs) when I ran it in my perpetual memstore filler (See scripts 
> attached to HBASE-20483). It doesn't work when I try to run it on cluster. 
> Let me run unit tests to see if it can figure what I have wrong.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20620) Tighter ByteBufferKeyValue Cell Comparator; part 2

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488392#comment-16488392
 ] 

Hudson commented on HBASE-20620:


Results for branch branch-2.0
[build #340 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Tighter ByteBufferKeyValue Cell Comparator; part 2
> --
>
> Key: HBASE-20620
> URL: https://issues.apache.org/jira/browse/HBASE-20620
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.1
>
> Attachments: HBASE-20620.branch-2.0.001.patch, 
> HBASE-20620.branch-2.0.002.patch, HBASE-20620.branch-2.0.003.patch, hits.png
>
>
> This is a follow-on from HBASE-20564 "Tighter ByteBufferKeyValue Cell 
> Comparator". In this issue, we make a stripped-down comparator that we deploy 
> in one location only, as memstore Comparator. HBASE-20564 operated on 
> CellComparator/Impl and got us 5-10k more throughput on top of a baseline 40k 
> or so. A purposed stripped-down ByteBufferKeyValue comparator that fosters 
> better inlining gets us from 45-50k up to about 75k (1.4 does 110-115k no-WAL 
> PE writes). Data coming. Log of profiling kept here: 
> https://docs.google.com/document/d/1vZ_k6_pNR1eQxID5u1xFihuPC7FkPaJQW8c4M5eA2AQ/edit#



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20595) Remove the concept of 'special tables' from rsgroups

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488391#comment-16488391
 ] 

Hudson commented on HBASE-20595:


Results for branch branch-2.0
[build #340 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/340//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Remove the concept of 'special tables' from rsgroups
> 
>
> Key: HBASE-20595
> URL: https://issues.apache.org/jira/browse/HBASE-20595
> Project: HBase
>  Issue Type: Task
>  Components: Region Assignment, rsgroup
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 1.5.0, 2.0.1, 1.4.5
>
> Attachments: HBASE-20595-branch-1.patch, HBASE-20595.patch, 
> HBASE-20595.patch
>
>
> Regionserver groups needs to specially handle what it calls "special tables", 
> tables upon which core or other modular functionality depends. They need to 
> be excluded from normal rsgroup processing during bootstrap to avoid circular 
> dependencies or errors due to insufficiently initialized state. I think we 
> also want to ensure that such tables are always given a rsgroup assignment 
> with nonzero servers. (IIRC another issue already raises that point, we can 
> link it later.)
> Special tables include:
> * The system tables in the 'hbase:' namespace
> * The ACL table if the AccessController coprocessor is installed
> * The Labels table if the VisibilityController coprocessor is installed
> * The Quotas table if the FS quotas feature is active
> Either we need a facility where "special tables" can be registered, which 
> should be in core. Or, we institute a blanket rule that core and all 
> extensions that need a "special table" must put them into the 'hbase:' 
> namespace, so the TableName#isSystemTable() test will return TRUE for all, 
> and then rsgroups simply needs to test for that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20601) Add multiPut support and other miscellaneous to PE

2018-05-23 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488385#comment-16488385
 ] 

Allan Yang commented on HBASE-20601:


{quote}
This commit is not right?
{quote}

I forgot to rebase when pulling. This patch was committed in 

commit 85901754f82f0b60ade1356f5e45c6e2595f3107
Author: Allan Yang 
Date: Thu May 24 09:11:45 2018 +0800

HBASE-20601 Add multiPut support and other miscellaneous to PE


But another merge commit was generated, as you mentioned. It is not right but 
does no harm. What can I do? Do I need to revert it?

I will be careful next time, sorry for the inconvenience.

 

> Add multiPut support and other miscellaneous to PE
> --
>
> Key: HBASE-20601
> URL: https://issues.apache.org/jira/browse/HBASE-20601
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20601.002.patch, HBASE-20601.003.patch, 
> HBASE-20601.branch-2.002.patch, HBASE-20601.branch-2.003.patch, 
> HBASE-20601.branch-2.004.patch, HBASE-20601.branch-2.005.patch, 
> HBASE-20601.branch-2.006.patch, HBASE-20601.branch-2.patch, HBASE-20601.patch
>
>
> Add some useful stuff and some refinement to the PE tool
> 1. Add multiPut support
> Though we have BufferedMutator, sometimes we need to benchmark batch puts of a 
> certain size.
> Set --multiPut=number to enable batch put (meanwhile, --autoflush needs to be 
> set to false); a sketch of the batching this measures follows below.
> 2. Add Connection Number support
> Before, there was only one parameter to control the connections used by 
> threads: oneCon=true means all threads use one connection, false means each 
> thread has its own connection.
> When the thread number is high and oneCon=false, we noticed high context 
> switch frequency on the machine PE runs on, disturbing the benchmark results 
> (each connection has its own netty worker threads, 2*CPU IIRC).
> So, added a new parameter connCount to PE. Setting --connCount=2 means all 
> threads will share 2 connections.
> 3. Add avg RT and avg TPS/QPS statistics for all threads
> Useful when we want to measure the total throughput of the cluster.
> 4. Delete some redundant code
> Now RandomWriteTest is inherited from SequentialWrite.
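
Roughly what --multiPut=N exercises on the client side is batching through Table.put(List<Put>), as referenced in item 1 above. A sketch (illustrative only; PE wires this up internally and the column names here are invented):

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

class MultiPutSketch {
  // Sends one batch of 'multiPut' rows in a single Table.put() call.
  static void writeBatch(Table table, int multiPut, long startRow) throws Exception {
    List<Put> batch = new ArrayList<>(multiPut);
    for (int i = 0; i < multiPut; i++) {
      Put p = new Put(Bytes.toBytes(startRow + i));
      p.addColumn(Bytes.toBytes("info0"), Bytes.toBytes("0"), Bytes.toBytes("v"));
      batch.add(p);
    }
    table.put(batch); // one batched round trip per multiPut rows
  }
}
{code}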



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20589) Don't need to assign meta to a new RS when standby master become active

2018-05-23 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-20589:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master, branch-2 and branch-2.0. Thanks [~stack] [~Apache9] for 
reviewing.

> Don't need to assign meta to a new RS when standby master become active
> ---
>
> Key: HBASE-20589
> URL: https://issues.apache.org/jira/browse/HBASE-20589
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-20589.branch-2.0.patch, 
> HBASE-20589.master.001.patch, HBASE-20589.master.002.patch, 
> HBASE-20589.master.003.patch, HBASE-20589.master.003.patch, 
> HBASE-20589.master.004.patch, HBASE-20589.master.005.patch, 
> HBASE-20589.master.006.patch, HBASE-20589.master.007.patch, 
> HBASE-20589.master.008.patch, HBASE-20589.master.008.patch, 
> HBASE-20589.master.009.patch
>
>
> I found this problem when I wrote a ut for HBASE-20569. Now the master's 
> finishActiveMasterInitialization introduces a new 
> RecoverMetaProcedure (HBASE-18261) which has a sub-procedure AssignProcedure. 
> AssignProcedure will skip assigning a region when the region's state is OPEN 
> and the server is online. But the new region state node is created with state 
> OFFLINE, so it will assign meta to a new RS, and kill the old RS when the old 
> RS reports to the master. This makes the master initialization take a long 
> time. I will attach a ut to show this. FYI [~stack]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20589) Don't need to assign meta to a new RS when standby master become active

2018-05-23 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-20589:
---
Attachment: HBASE-20589.branch-2.0.patch

> Don't need to assign meta to a new RS when standby master become active
> ---
>
> Key: HBASE-20589
> URL: https://issues.apache.org/jira/browse/HBASE-20589
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-20589.branch-2.0.patch, 
> HBASE-20589.master.001.patch, HBASE-20589.master.002.patch, 
> HBASE-20589.master.003.patch, HBASE-20589.master.003.patch, 
> HBASE-20589.master.004.patch, HBASE-20589.master.005.patch, 
> HBASE-20589.master.006.patch, HBASE-20589.master.007.patch, 
> HBASE-20589.master.008.patch, HBASE-20589.master.008.patch, 
> HBASE-20589.master.009.patch
>
>
> I found this problem when I wrote a ut for HBASE-20569. Now the master's 
> finishActiveMasterInitialization introduces a new 
> RecoverMetaProcedure (HBASE-18261) which has a sub-procedure AssignProcedure. 
> AssignProcedure will skip assigning a region when the region's state is OPEN 
> and the server is online. But the new region state node is created with state 
> OFFLINE, so it will assign meta to a new RS, and kill the old RS when the old 
> RS reports to the master. This makes the master initialization take a long 
> time. I will attach a ut to show this. FYI [~stack]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20424) Allow writing WAL to local and remote cluster concurrently

2018-05-23 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20424:
--
Attachment: HBASE-20424-HBASE-19064-v3.patch

> Allow writing WAL to local and remote cluster concurrently
> --
>
> Key: HBASE-20424
> URL: https://issues.apache.org/jira/browse/HBASE-20424
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-20424-HBASE-19064-v1.patch, 
> HBASE-20424-HBASE-19064-v2.patch, HBASE-20424-HBASE-19064-v3.patch, 
> HBASE-20424-HBASE-19064.patch
>
>
> For better performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20624) Race in ReplicationSource which causes walEntryFilter being null when creating new shipper

2018-05-23 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20624:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-2.

Thanks [~zghaobac] for reviewing.

> Race in ReplicationSource which causes walEntryFilter being null when 
> creating new shipper
> --
>
> Key: HBASE-20624
> URL: https://issues.apache.org/jira/browse/HBASE-20624
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20624.patch
>
>
> We initialize the ReplicationSource in a background thread to avoid blocking 
> peer modification. When enqueueLog is called and we want to start a new 
> shipper, we check whether the replicationEndpoint is initialized and, if so, 
> start a new shipper. But the initialize thread sets up walEntryFilter after 
> replicationEndpoint, so it is possible that when we start a new shipper the 
> walEntryFilter is still null, causing an NPE (see the sketch below).
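
One way to close this kind of initialization race, sketched generically (not the committed patch): publish a single flag only after every field is set, and have enqueueLog test that flag instead of one field.

{code:java}
// Illustrative only; Object stands in for the real endpoint/filter types.
final class SourceInitSketch {
  private volatile Object replicationEndpoint;
  private volatile Object walEntryFilter;
  private volatile boolean initialized; // set last, read by enqueueLog()

  void initialize() {
    replicationEndpoint = new Object();
    walEntryFilter = new Object();
    initialized = true; // safe publication point: everything is ready
  }

  void enqueueLog() {
    if (!initialized) {
      return; // do not start a shipper until both fields are set
    }
    // ... start a shipper using replicationEndpoint and walEntryFilter ...
  }
}
{code}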



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17992) The snapShot TimeoutException causes the cleanerChore thread to fail to complete the archive correctly

2018-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488349#comment-16488349
 ] 

Hadoop QA commented on HBASE-17992:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HBASE-17992 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.7.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-17992 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868476/hbase-17992-master.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12928/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> The snapShot TimeoutException causes the cleanerChore thread to fail to 
> complete the archive correctly
> --
>
> Key: HBASE-17992
> URL: https://issues.apache.org/jira/browse/HBASE-17992
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.10, 1.3.0
>Reporter: Bo Cui
>Priority: Major
> Attachments: hbase-17992-0.98.patch, hbase-17992-1.3.patch, 
> hbase-17992-master.patch, hbase-17992.patch
>
>
> The problem is that when the snapshot hits a TimeoutException or other 
> exception, /hbase/.hbase-snapshot/tmp is not deleted correctly, which causes 
> the cleanerChore to fail to complete the archive correctly.
> Modifying the configuration parameter (hbase.snapshot.master.timeout.millis = 
> 60) only reduces the probability of the problem occurring.
> So the solution to the problem is: on multi-threaded exceptions or 
> TimeoutExceptions, the Main-thread must wait until all the tasks are finished 
> or canceled before it clears 
> /hbase/.hbase-snapshot/tmp/snapshotName. Otherwise a task is likely to write 
> /hbase/.hbase-snapshot/tmp/snapshotName/region-manifest afterwards (see the 
> sketch below).
> The problem exists in disabledTableSnapshot and enabledTableSnapshot; because 
> I'm currently using disabledTableSnapshot, I provide the patch for 
> disabledTableSnapshot.
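
A sketch of the proposed waiting behaviour (illustrative only; the pool, timeout, and helper shape are assumptions, not the attached patch):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class SnapshotCleanupSketch {
  // After a timeout/failure, wait for (or cancel) all region-manifest tasks
  // before deleting the snapshot tmp dir, so no task re-creates files later.
  static void cleanupAfterFailure(ExecutorService manifestPool, FileSystem fs,
      Path snapshotTmpDir) throws Exception {
    manifestPool.shutdownNow(); // cancel outstanding manifest writers
    if (!manifestPool.awaitTermination(60, TimeUnit.SECONDS)) {
      throw new IllegalStateException("manifest tasks still running");
    }
    fs.delete(snapshotTmpDir, true); // now safe to clear tmp
  }
}
{code}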



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-17992) The snapShot TimeoutException causes the cleanerChore thread to fail to complete the archive correctly

2018-05-23 Thread Bo Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488338#comment-16488338
 ] 

Bo Cui edited comment on HBASE-17992 at 5/24/18 2:54 AM:
-

I looked at the latest code.

The problem no longer exists in 1.0.2 and 1.3.1.

It has been solved by SnapshotFileCache.
{code:java}
See HBASE-16464{code}
{code:java}
// code placeholder

List<String> getSnapshotsInProgress() throws IOException {
  List<String> snapshotInProgress = Lists.newArrayList();
  // only add those files to the cache, but not to the known snapshots
  Path snapshotTmpDir = new Path(snapshotDir,
      SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME);
  // only add those files to the cache, but not to the known snapshots
  FileStatus[] running = FSUtils.listStatus(fs, snapshotTmpDir);
  if (running != null) {
    for (FileStatus run : running) {
      try {
        snapshotInProgress.addAll(fileInspector.filesUnderSnapshot(run.getPath()));
      } catch (CorruptedSnapshotException e) {
        // See HBASE-16464
        if (e.getCause() instanceof FileNotFoundException) {
          // If the snapshot is not in progress, we will delete it
          if (!fs.exists(new Path(run.getPath(),
              SnapshotDescriptionUtils.SNAPSHOT_IN_PROGRESS))) {
            fs.delete(run.getPath(), true);
            LOG.warn("delete the " + run.getPath() + " due to exception:", e.getCause());
          }
        } else {
          throw e;
        }
      }
    }
  }
  return snapshotInProgress;
}
{code}


was (Author: bo cui):
i looked at the latest code.

it has not existed in 1.0.2 and 1.3.1.
{code:java}
See HBASE-16464{code}
{code:java}
// code placeholder

List<String> getSnapshotsInProgress() throws IOException {
  List<String> snapshotInProgress = Lists.newArrayList();
  // only add those files to the cache, but not to the known snapshots
  Path snapshotTmpDir = new Path(snapshotDir,
      SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME);
  // only add those files to the cache, but not to the known snapshots
  FileStatus[] running = FSUtils.listStatus(fs, snapshotTmpDir);
  if (running != null) {
    for (FileStatus run : running) {
      try {
        snapshotInProgress.addAll(fileInspector.filesUnderSnapshot(run.getPath()));
      } catch (CorruptedSnapshotException e) {
        // See HBASE-16464
        if (e.getCause() instanceof FileNotFoundException) {
          // If the snapshot is not in progress, we will delete it
          if (!fs.exists(new Path(run.getPath(),
              SnapshotDescriptionUtils.SNAPSHOT_IN_PROGRESS))) {
            fs.delete(run.getPath(), true);
            LOG.warn("delete the " + run.getPath() + " due to exception:", e.getCause());
          }
        } else {
          throw e;
        }
      }
    }
  }
  return snapshotInProgress;
}
{code}

> The snapShot TimeoutException causes the cleanerChore thread to fail to 
> complete the archive correctly
> --
>
> Key: HBASE-17992
> URL: https://issues.apache.org/jira/browse/HBASE-17992
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.10, 1.3.0
>Reporter: Bo Cui
>Priority: Major
> Attachments: hbase-17992-0.98.patch, hbase-17992-1.3.patch, 
> hbase-17992-master.patch, hbase-17992.patch
>
>
> The problem is that when the snapshot hits a TimeoutException or another 
> exception, /hbase/.hbase-snapshot/tmp is not deleted correctly, which 
> prevents the cleanerChore from completing the archive correctly.
> Modifying the configuration parameter (hbase.snapshot.master.timeout.millis = 
> 60) only reduces the probability of the problem occurring.
> So the solution is: on multi-threaded exceptions or TimeoutExceptions, the 
> main thread must wait until all the tasks are finished or canceled before it 
> clears /hbase/.hbase-snapshot/tmp/snapshotName. Otherwise a task is likely to 
> still write /hbase/.hbase-snapshot/tmp/snapshotName/region-manifest.
> The problem exists in both disabledTableSnapshot and enabledTableSnapshot; 
> because I'm currently using disabledTableSnapshot, I provide the patch for 
> disabledTableSnapshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17992) The snapShot TimeoutException causes the cleanerChore thread to fail to complete the archive correctly

2018-05-23 Thread Bo Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488338#comment-16488338
 ] 

Bo Cui commented on HBASE-17992:


I looked at the latest code.

The problem no longer exists in 1.0.2 and 1.3.1.
{code:java}
See HBASE-16464{code}
{code:java}
List<String> getSnapshotsInProgress() throws IOException {
  List<String> snapshotInProgress = Lists.newArrayList();
  // only add those files to the cache, but not to the known snapshots
  Path snapshotTmpDir = new Path(snapshotDir,
      SnapshotDescriptionUtils.SNAPSHOT_TMP_DIR_NAME);
  // only add those files to the cache, but not to the known snapshots
  FileStatus[] running = FSUtils.listStatus(fs, snapshotTmpDir);
  if (running != null) {
    for (FileStatus run : running) {
      try {
        snapshotInProgress.addAll(fileInspector.filesUnderSnapshot(run.getPath()));
      } catch (CorruptedSnapshotException e) {
        // See HBASE-16464
        if (e.getCause() instanceof FileNotFoundException) {
          // If the snapshot is not in progress, we will delete it
          if (!fs.exists(new Path(run.getPath(),
              SnapshotDescriptionUtils.SNAPSHOT_IN_PROGRESS))) {
            fs.delete(run.getPath(), true);
            LOG.warn("delete the " + run.getPath() + " due to exception:",
                e.getCause());
          }
        } else {
          throw e;
        }
      }
    }
  }
  return snapshotInProgress;
}
{code}

> The snapShot TimeoutException causes the cleanerChore thread to fail to 
> complete the archive correctly
> --
>
> Key: HBASE-17992
> URL: https://issues.apache.org/jira/browse/HBASE-17992
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.10, 1.3.0
>Reporter: Bo Cui
>Priority: Major
> Attachments: hbase-17992-0.98.patch, hbase-17992-1.3.patch, 
> hbase-17992-master.patch, hbase-17992.patch
>
>
> The problem is that when the snapshot hits a TimeoutException or another 
> exception, /hbase/.hbase-snapshot/tmp is not deleted correctly, which 
> prevents the cleanerChore from completing the archive correctly.
> Modifying the configuration parameter (hbase.snapshot.master.timeout.millis = 
> 60) only reduces the probability of the problem occurring.
> So the solution is: on multi-threaded exceptions or TimeoutExceptions, the 
> main thread must wait until all the tasks are finished or canceled before it 
> clears /hbase/.hbase-snapshot/tmp/snapshotName. Otherwise a task is likely to 
> still write /hbase/.hbase-snapshot/tmp/snapshotName/region-manifest.
> The problem exists in both disabledTableSnapshot and enabledTableSnapshot; 
> because I'm currently using disabledTableSnapshot, I provide the patch for 
> disabledTableSnapshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20625) refactor some WALCellCodec related code

2018-05-23 Thread Jingyun Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488335#comment-16488335
 ] 

Jingyun Tian commented on HBASE-20625:
--

Fixed the failed UTs.

> refactor some WALCellCodec related code
> ---
>
> Key: HBASE-20625
> URL: https://issues.apache.org/jira/browse/HBASE-20625
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-20625.master.001.patch, 
> HBASE-20625.master.002.patch
>
>
> Currently I'm working on exporting HLog to another FileSystem, and I found 
> that the code of WALCellCodec and its related classes is not that clean. 
> There are several TODOs, so I tried to refactor the code based on these 
> TODOs, e.g.:
> {code}
>   // TODO: it sucks that compression context is in WAL.Entry. It'd be nice if it was here.
>   //       Dictionary could be gotten by enum; initially, based on enum, context would create
>   //       an array of dictionaries.
>   static class BaosAndCompressor extends ByteArrayOutputStream implements ByteStringCompressor {
>     public ByteString toByteString() {
>       // We need this copy to create the ByteString as the byte[] 'buf' is not immutable. We reuse
>       // them.
>       return ByteString.copyFrom(this.buf, 0, this.count);
>     }
> {code}
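
As a side note on the copy above: a minimal standalone demo (plain JDK, not
HBase code) of why the defensive copy matters when the underlying buffer is
reused:
{code:java}
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class BufferReuseDemo {
  public static void main(String[] args) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    baos.write('a');
    baos.write('b');
    // toByteArray() copies, just like ByteString.copyFrom(buf, 0, count).
    byte[] copied = baos.toByteArray();
    // reset() sets count back to 0 but keeps the same internal buf for reuse.
    baos.reset();
    baos.write('x');
    baos.write('y');
    // The copy still holds the first payload; an aliased view of buf would
    // now silently read the new bytes instead.
    System.out.println(Arrays.toString(copied)); // prints [97, 98]
  }
}
{code}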



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20625) refactor some WALCellCodec related code

2018-05-23 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-20625:
-
Attachment: HBASE-20625.master.002.patch

> refactor some WALCellCodec related code
> ---
>
> Key: HBASE-20625
> URL: https://issues.apache.org/jira/browse/HBASE-20625
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-20625.master.001.patch, 
> HBASE-20625.master.002.patch
>
>
> Currently I'm working on exporting HLog to another FileSystem, and I found 
> that the code of WALCellCodec and its related classes is not that clean. 
> There are several TODOs, so I tried to refactor the code based on these 
> TODOs, e.g.:
> {code}
>   // TODO: it sucks that compression context is in WAL.Entry. It'd be nice if it was here.
>   //       Dictionary could be gotten by enum; initially, based on enum, context would create
>   //       an array of dictionaries.
>   static class BaosAndCompressor extends ByteArrayOutputStream implements ByteStringCompressor {
>     public ByteString toByteString() {
>       // We need this copy to create the ByteString as the byte[] 'buf' is not immutable. We reuse
>       // them.
>       return ByteString.copyFrom(this.buf, 0, this.count);
>     }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20331) clean up shaded packaging for 2.1

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488322#comment-16488322
 ] 

Hudson commented on HBASE-20331:


Results for branch HBASE-20331
[build #18 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/18/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/18//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/18//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/18//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/18//artifacts/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> clean up shaded packaging for 2.1
> -
>
> Key: HBASE-20331
> URL: https://issues.apache.org/jira/browse/HBASE-20331
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client, mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
>
> polishing pass on shaded modules for 2.0 based on trying to use them in more 
> contexts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488316#comment-16488316
 ] 

Hadoop QA commented on HBASE-20632:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
51s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
32s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m  6s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20632 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924860/20632.v1.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux aef279b6f247 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c253f8f809 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 

[jira] [Commented] (HBASE-20601) Add multiPut support and other miscellaneous to PE

2018-05-23 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488311#comment-16488311
 ] 

Guanghao Zhang commented on HBASE-20601:


commit c253f8f8090160c01a98a006aa280a320abeaef1
Merge: 8590175 9fbce16
Author: Allan Yang 
Date: Thu May 24 09:14:31 2018 +0800

Merge remote-tracking branch 'origin/master'

 

[~allan163] This merge commit doesn't look right?

> Add multiPut support and other miscellaneous to PE
> --
>
> Key: HBASE-20601
> URL: https://issues.apache.org/jira/browse/HBASE-20601
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20601.002.patch, HBASE-20601.003.patch, 
> HBASE-20601.branch-2.002.patch, HBASE-20601.branch-2.003.patch, 
> HBASE-20601.branch-2.004.patch, HBASE-20601.branch-2.005.patch, 
> HBASE-20601.branch-2.006.patch, HBASE-20601.branch-2.patch, HBASE-20601.patch
>
>
> Add some useful stuff and some refinement to the PE tool
> 1. Add multiPut support
> Though we have BufferedMutator, sometimes we need to benchmark batch puts of 
> a certain size.
> Set --multiPut=number to enable batch put (meanwhile, --autoflush needs to 
> be set to false)
> 2. Add Connection Number support
> Before, there was only one parameter to control the connections used by 
> threads: oneCon=true means all threads use one connection, false means each 
> thread has its own connection.
> When the thread count is high and oneCon=false, we noticed a high context 
> switch frequency on the machine PE runs on, disturbing the benchmark results 
> (each connection has its own netty worker threads, 2*CPU IIRC).  
> So, we added a new parameter connCount to PE. Setting --connCount=2 means 
> all threads will share 2 connections.
> 3. Add avg RT and avg TPS/QPS statistics for all threads
> Useful when we want to measure the total throughput of the cluster
> 4. Delete some redundant code
> Now RandomWriteTest inherits from SequentialWrite.
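
An illustrative invocation combining the new knobs (flag spellings are taken
from the description above; check the PE help output of your build, since
exact names and casing may differ):
{noformat}
hbase pe --autoflush=false --multiPut=100 --connCount=2 --nomapred randomWrite 20{noformat}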



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-20501) Change the Hadoop minimum version to 2.7.1

2018-05-23 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-20501 started by Sean Busbey.
---
> Change the Hadoop minimum version to 2.7.1
> --
>
> Key: HBASE-20501
> URL: https://issues.apache.org/jira/browse/HBASE-20501
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Andrew Purtell
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.5.0
>
>
> See discussion thread on dev@ "[DISCUSS] Branching for HBase 1.5 and Hadoop 
> minimum version update (to 2.7)"
> Consensus
> * This is a needed change due to the practicalities of having Hadoop as a 
> dependency
> * Let's move up the minimum supported version of Hadoop to 2.7.1.
> * Update documentation (support matrix, compatibility discussion) to call 
> this out.
> * Be sure to call out this change in the release notes.
> * Take the opportunity to remind users about our callout "Replace the Hadoop 
> Bundled With HBase!" recommending users upgrade their Hadoop if < 2.7.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20601) Add multiPut support and other miscellaneous to PE

2018-05-23 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-20601:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Add multiPut support and other miscellaneous to PE
> --
>
> Key: HBASE-20601
> URL: https://issues.apache.org/jira/browse/HBASE-20601
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20601.002.patch, HBASE-20601.003.patch, 
> HBASE-20601.branch-2.002.patch, HBASE-20601.branch-2.003.patch, 
> HBASE-20601.branch-2.004.patch, HBASE-20601.branch-2.005.patch, 
> HBASE-20601.branch-2.006.patch, HBASE-20601.branch-2.patch, HBASE-20601.patch
>
>
> Add some useful stuff and some refinement to the PE tool
> 1. Add multiPut support
> Though we have BufferedMutator, sometimes we need to benchmark batch puts of 
> a certain size.
> Set --multiPut=number to enable batch put (meanwhile, --autoflush needs to 
> be set to false)
> 2. Add Connection Number support
> Before, there was only one parameter to control the connections used by 
> threads: oneCon=true means all threads use one connection, false means each 
> thread has its own connection.
> When the thread count is high and oneCon=false, we noticed a high context 
> switch frequency on the machine PE runs on, disturbing the benchmark results 
> (each connection has its own netty worker threads, 2*CPU IIRC).  
> So, we added a new parameter connCount to PE. Setting --connCount=2 means 
> all threads will share 2 connections.
> 3. Add avg RT and avg TPS/QPS statistics for all threads
> Useful when we want to measure the total throughput of the cluster
> 4. Delete some redundant code
> Now RandomWriteTest inherits from SequentialWrite.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488297#comment-16488297
 ] 

Andrew Purtell edited comment on HBASE-20632 at 5/24/18 1:53 AM:
-

bq. It would be better if system tables are not restricted to a rsgroup so that 
they can be serviced by any available region server in the cluster making it 
more available.

No, that breaks the intent of the design. -1 to this, please, or at least make 
it configurable, and off by default.

But if all servers in the system RSgroup go down, and then one or more come 
back, and the system tables are still not redeployed, that is a bug.



was (Author: apurtell):
bq. It would be better if system tables are not restricted to a rsgroup so that 
they can be serviced by any available region server in the cluster making it 
more available.

No, that breaks the intent of the design. -1 to this, please

But if all servers in the system RSgroup go down, and then one or more come 
back, and the system tables are still not redeployed, that is a bug.


> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Assignee: Ted Yu
>Priority: Critical
> Attachments: 20632.v1.txt
>
>
> This was done on a local cluster (non-HDFS) and the following are the steps
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In {{1.1.x}}, the cluster recovers fine. It looks like {{meta}} is assigned 
> to a {{dummy}} regionserver, and it gets assigned when the regionserver is 
> restarted. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down
> {noformat}
> 1588230740hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20390) IMC Default Parameters for 2.0.0

2018-05-23 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488303#comment-16488303
 ] 

Chia-Ping Tsai commented on HBASE-20390:


{quote}Should I commit this to branch-2.0.1
{quote}
There is no branch-2.0.1 :) After you push the patch to branch-2.0 and 
branch-2, it will be included in the 2.0.1 and 2.1 releases.
{code:java}
- private static final double IN_MEMORY_FLUSH_THRESHOLD_FACTOR_DEFAULT = 0.1;
+ private static final double IN_MEMORY_FLUSH_THRESHOLD_FACTOR_DEFAULT = 0.014;

- public static final int COMPACTING_MEMSTORE_THRESHOLD_DEFAULT = 4;
+ public static final int COMPACTING_MEMSTORE_THRESHOLD_DEFAULT = 2;{code}
Could you also sync the above changes to hbase docs?

 

 

> IMC Default Parameters for 2.0.0
> 
>
> Key: HBASE-20390
> URL: https://issues.apache.org/jira/browse/HBASE-20390
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
>Priority: Major
> Attachments: HBASE-20390-branch-2.0-01.patch, 
> HBASE-20390-branch-2.0-01.patch, HBASE-20390.branch-2.0.002.patch, 
> HBASE-20390.branch-2.0.003.patch, HBase 2.0 performance evaluation - 
> throughput SSD_HDD.pdf, hits.ihc.png
>
>
> Setting new default parameters for in-memory compaction based on performance 
> tests done in HBASE-20188 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20624) Race in ReplicationSource which causes walEntryFilter being null when creating new shipper

2018-05-23 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488302#comment-16488302
 ] 

Guanghao Zhang commented on HBASE-20624:


+1

> Race in ReplicationSource which causes walEntryFilter being null when 
> creating new shipper
> --
>
> Key: HBASE-20624
> URL: https://issues.apache.org/jira/browse/HBASE-20624
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20624.patch
>
>
> We initialize the ReplicationSource in a background thread to avoid blocking 
> the peer modification. And when enqueueLog is called and we want to start a 
> new shipper, we will check whether the replicationEndpoint is initialized; 
> if so, we will start a new shipper. But in the initialization thread, we 
> initialize walEntryFilter after replicationEndpoint, so it is possible that 
> when we start a new shipper, the walEntryFilter is still null, causing an NPE.
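
A minimal standalone sketch of the ordering idea (illustrative only, not the
attached patch; the Object field types are placeholders): with volatile
fields, publishing the filter before the endpoint guarantees that any reader
observing a non-null endpoint also observes a non-null filter.
{code:java}
public class InitOrderDemo {
  // Written by the background init thread, read by enqueueLog().
  private volatile Object walEntryFilter;
  private volatile Object replicationEndpoint;

  void initialize() {
    // Assign the filter BEFORE the endpoint: the volatile write/read pair on
    // replicationEndpoint then makes the earlier filter write visible too.
    walEntryFilter = new Object();
    replicationEndpoint = new Object();
  }

  void enqueueLog() {
    if (replicationEndpoint != null) {
      // Safe only because of the write order above.
      assert walEntryFilter != null;
    }
  }
}
{code}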



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20603) Histogram metrics should reset min and max

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481166#comment-16481166
 ] 

Andrew Purtell edited comment on HBASE-20603 at 5/24/18 1:52 AM:
-

Parking a WIP patch that does what I want, but only touches 
hbase-metrics-api's Histogram. I don't want to touch FastLongHistogram because 
it has some legacy use. Every histogram metric derived from types in 
hbase-metrics-api (the majority, including the call time metrics we are most 
interested in) picks up the behavioral change.


was (Author: apurtell):
Parking a WIP patch that does what I want, but only touches 
hbase-metrics-api's Histogram. I don't want to touch FastLongHistogram because 
it has a lot of legacy use all over the place. Every histogram metric derived 
from types in hbase-metrics-api (the majority, including the call time metrics 
we are most interested in) picks up the behavioral change.

> Histogram metrics should reset min and max
> --
>
> Key: HBASE-20603
> URL: https://issues.apache.org/jira/browse/HBASE-20603
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Attachments: HBASE-20603-branch-1-WIP.patch
>
>
> It's weird that the bins are reset at every monitoring interval but min and 
> max are tracked over the lifetime of the process. Makes it impossible to set 
> alarms on max value as they'll never shut off unless the process is 
> restarted. Histogram metrics should reset min and max at snapshot time too.
> For discussion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20501) Change the Hadoop minimum version to 2.7.1

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488301#comment-16488301
 ] 

Andrew Purtell commented on HBASE-20501:


Not at all! Assigned to you [~busbey]

> Change the Hadoop minimum version to 2.7.1
> --
>
> Key: HBASE-20501
> URL: https://issues.apache.org/jira/browse/HBASE-20501
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Andrew Purtell
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.5.0
>
>
> See discussion thread on dev@ "[DISCUSS] Branching for HBase 1.5 and Hadoop 
> minimum version update (to 2.7)"
> Consensus
> * This is a needed change due to the practicalities of having Hadoop as a 
> dependency
> * Let's move up the minimum supported version of Hadoop to 2.7.1.
> * Update documentation (support matrix, compatibility discussion) to call 
> this out.
> * Be sure to call out this change in the release notes.
> * Take the opportunity to remind users about our callout "Replace the Hadoop 
> Bundled With HBase!" recommending users upgrade their Hadoop if < 2.7.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20501) Change the Hadoop minimum version to 2.7.1

2018-05-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reassigned HBASE-20501:
--

Assignee: Sean Busbey  (was: Andrew Purtell)

> Change the Hadoop minimum version to 2.7.1
> --
>
> Key: HBASE-20501
> URL: https://issues.apache.org/jira/browse/HBASE-20501
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Andrew Purtell
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.5.0
>
>
> See discussion thread on dev@ "[DISCUSS] Branching for HBase 1.5 and Hadoop 
> minimum version update (to 2.7)"
> Consensus
> * This is a needed change due to the practicalities of having Hadoop as a 
> dependency
> * Let's move up the minimum supported version of Hadoop to 2.7.1.
> * Update documentation (support matrix, compatibility discussion) to call 
> this out.
> * Be sure to call out this change in the release notes.
> * Take the opportunity to remind users about our callout "Replace the Hadoop 
> Bundled With HBase!" recommending users upgrade their Hadoop if < 2.7.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488299#comment-16488299
 ] 

Andrew Purtell commented on HBASE-20632:


Thanks for the tentative patch Ted but in order to know if it will work or not 
we will need a unit test that reproduces the problem. Then it is easy to 
confirm the fix works - the test will pass. 

> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Assignee: Ted Yu
>Priority: Critical
> Attachments: 20632.v1.txt
>
>
> This was done on a local cluster (non-HDFS) and the following are the steps
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In {{1.1.x}}, the cluster recovers fine. It looks like {{meta}} is assigned 
> to a {{dummy}} regionserver, and it gets assigned when the regionserver is 
> restarted. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down
> {noformat}
> 1588230740hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488297#comment-16488297
 ] 

Andrew Purtell commented on HBASE-20632:


bq. It would be better if system tables are not restricted to a rsgroup so that 
they can be serviced by any available region server in the cluster making it 
more available.

No, that breaks the intent of the design. -1 to this, please

But if all servers in the system RSgroup go down, and then one or more come 
back, and the system tables are still not redeployed, that is a bug.


> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Assignee: Ted Yu
>Priority: Critical
> Attachments: 20632.v1.txt
>
>
> This was done on a local cluster (non-HDFS) and the following are the steps
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In {{1.1.x}}, the cluster recovers fine. It looks like {{meta}} is assigned 
> to a {{dummy}} regionserver, and it gets assigned when the regionserver is 
> restarted. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down
> {noformat}
> 1588230740hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20501) Change the Hadoop minimum version to 2.7.1

2018-05-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488294#comment-16488294
 ] 

Sean Busbey commented on HBASE-20501:
-

mind if I do this [~apurtell]?

> Change the Hadoop minimum version to 2.7.1
> --
>
> Key: HBASE-20501
> URL: https://issues.apache.org/jira/browse/HBASE-20501
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 1.5.0
>
>
> See discussion thread on dev@ "[DISCUSS] Branching for HBase 1.5 and Hadoop 
> minimum version update (to 2.7)"
> Consensus
> * This is a needed change due to the practicalities of having Hadoop as a 
> dependency
> * Let's move up the minimum supported version of Hadoop to 2.7.1.
> * Update documentation (support matrix, compatibility discussion) to call 
> this out.
> * Be sure to call out this change in the release notes.
> * Take the opportunity to remind users about our callout "Replace the Hadoop 
> Bundled With HBase!" recommending users upgrade their Hadoop if < 2.7.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488295#comment-16488295
 ] 

Andrew Purtell commented on HBASE-20627:


bq. Compilation errors against old hadoop releases were not related:
This is HBASE-20501

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0, 1.4.5
>
> Attachments: 20627.branch-1.txt, 20627.v1.txt, 20627.v2.txt, 
> 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.
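
A self-contained sketch of the shape of the problem and of the relocation
(hypothetical names; the committed patch may order the permission check and
hooks differently):
{code:java}
public class HookOrderDemo {
  interface Hooks {
    void pre();
    void post();
  }

  static void checkPermission(boolean allowed) {
    if (!allowed) {
      throw new SecurityException("denied");
    }
  }

  // Buggy shape: the pre hook is only reached after a permission check in a
  // different layer succeeds, so a denied call never fires it.
  static void hooksBehindCheck(Hooks hooks, boolean allowed) {
    checkPermission(allowed);
    hooks.pre(); // skipped entirely when checkPermission throws
    // ... remove the RS group ...
    hooks.post();
  }

  // Relocated shape: the endpoint owns both hooks, so they wrap the whole
  // call path in one predictable place.
  static void hooksAtEndpoint(Hooks hooks, boolean allowed) {
    hooks.pre();
    checkPermission(allowed);
    // ... remove the RS group ...
    hooks.post();
  }
}
{code}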



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20627:
---
   Resolution: Fixed
Fix Version/s: 1.4.5
   Status: Resolved  (was: Patch Available)

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0, 1.4.5
>
> Attachments: 20627.branch-1.txt, 20627.v1.txt, 20627.v2.txt, 
> 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488290#comment-16488290
 ] 

Ted Yu commented on HBASE-20627:


Compilation errors against old hadoop releases were not related:
{code}
[ERROR] 
/testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java:[49,39]
 cannot find symbol
[ERROR] symbol:   class BlockStoragePolicy
[ERROR] location: package org.apache.hadoop.hdfs.protocol
{code}

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.branch-1.txt, 20627.v1.txt, 20627.v2.txt, 
> 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20603) Histogram metrics should reset min and max

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481166#comment-16481166
 ] 

Andrew Purtell edited comment on HBASE-20603 at 5/24/18 1:36 AM:
-

Parking a WIP patch that does what I want, but only touches 
hbase-metrics-api's Histogram. I don't want to touch FastLongHistogram because 
it has a lot of legacy use all over the place. Every histogram metric derived 
from types in hbase-metrics-api (the majority, including the call time metrics 
we are most interested in) picks up the behavioral change.


was (Author: apurtell):
Parking a WIP patch that does what I want, but only touches 
hbase-metrics-api's Histogram. I don't want to touch FastLongHistogram because 
it has a lot of legacy use all over the place. We are most interested in 
applying the proposed new semantics to call metrics, so we will propose that 
change separately, as an update of call metrics to use hbase-metrics-api's 
Histogram, at some future time if this is accepted. 

> Histogram metrics should reset min and max
> --
>
> Key: HBASE-20603
> URL: https://issues.apache.org/jira/browse/HBASE-20603
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Attachments: HBASE-20603-branch-1-WIP.patch
>
>
> It's weird that the bins are reset at every monitoring interval but min and 
> max are tracked over the lifetime of the process. Makes it impossible to set 
> alarms on max value as they'll never shut off unless the process is 
> restarted. Histogram metrics should reset min and max at snapshot time too.
> For discussion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20603) Histogram metrics should reset min and max

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488287#comment-16488287
 ] 

Andrew Purtell commented on HBASE-20603:


[~elserj] [~stack] [~lhofhansl] [~enis] 
I'm thinking of this metrics behavioral change for 1.5, 2.1, and 3.0. It makes 
thresholded alerting on min or max values measured over the reporting interval 
possible; otherwise the alarm cannot be deasserted until the process restarts. 
It is definitely a change in semantics, but it is debatable whether the 
original behavior was what was intended. There are some comments around the 
code that allude to expecting only the count of all measured data points to 
carry over to the next monitoring interval. (Which, after this patch, is in 
fact the state of things.)
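
For concreteness, a minimal standalone sketch of the reset-at-snapshot
behavior (an illustration, not the attached WIP patch):
{code:java}
import java.util.concurrent.atomic.AtomicLong;

/**
 * Min and max are returned and reset atomically at snapshot time, so a spike
 * alarms for one reporting interval and then clears instead of latching
 * until the process restarts.
 */
public class ResettingMinMax {
  private final AtomicLong min = new AtomicLong(Long.MAX_VALUE);
  private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

  public void update(long value) {
    min.accumulateAndGet(value, Math::min);
    max.accumulateAndGet(value, Math::max);
  }

  /** Called once per monitoring interval by the metrics reporter. */
  public long[] snapshotAndReset() {
    long curMin = min.getAndSet(Long.MAX_VALUE);
    long curMax = max.getAndSet(Long.MIN_VALUE);
    return new long[] { curMin, curMax };
  }
}
{code}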

> Histogram metrics should reset min and max
> --
>
> Key: HBASE-20603
> URL: https://issues.apache.org/jira/browse/HBASE-20603
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Attachments: HBASE-20603-branch-1-WIP.patch
>
>
> It's weird that the bins are reset at every monitoring interval but min and 
> max are tracked over the lifetime of the process. Makes it impossible to set 
> alarms on max value as they'll never shut off unless the process is 
> restarted. Histogram metrics should reset min and max at snapshot time too.
> For discussion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20632:
---
Attachment: 20632.v1.txt

> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Priority: Critical
> Attachments: 20632.v1.txt
>
>
> This was done on a local cluster (non-HDFS) and the following are the steps
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In {{1.1.x}}, the cluster recovers fine. It looks like {{meta}} is assigned 
> to a {{dummy}} regionserver, and it gets assigned when the regionserver is 
> restarted. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down
> {noformat}
> 1588230740hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20632:
---
Attachment: (was: 20632.v1.txt)

> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Priority: Critical
> Attachments: 20632.v1.txt
>
>
> This was done on a local cluster (non-HDFS) and the following are the steps
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In {{1.1.x}}, the cluster recovers fine. It looks like {{meta}} is assigned 
> to a {{dummy}} regionserver, and it gets assigned when the regionserver is 
> restarted. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down
> {noformat}
> 1588230740hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20632:
---
Assignee: Ted Yu
  Status: Patch Available  (was: Open)

> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Assignee: Ted Yu
>Priority: Critical
> Attachments: 20632.v1.txt
>
>
> This was done on a local cluster (non-HDFS) and the following are the steps
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In {{1.1.x}}, the cluster recovers fine. It looks like {{meta}} is assigned 
> to a {{dummy}} regionserver, and it gets assigned when the regionserver is 
> restarted. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down
> {noformat}
> 1588230740hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20548) Master fails to startup on large clusters, refreshing block distribution

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488276#comment-16488276
 ] 

Andrew Purtell commented on HBASE-20548:


Tests crashed in precommit due to an environmental issue. Let me check it 
locally. Looks like the attached patch is good for branch-1 and branch-1.4. 
What about branch-2 and master? Is the same issue there, or are branch-2 and 
master too different?

> Master fails to startup on large clusters, refreshing block distribution
> 
>
> Key: HBASE-20548
> URL: https://issues.apache.org/jira/browse/HBASE-20548
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.4.4
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Major
> Fix For: 1.5.0, 1.4.5
>
> Attachments: HBASE-20548.branch-1.4.001.patch
>
>
> On our large clusters, the master has failed to start up within the 
> specified time and aborted itself, since it was still initializing HDFS 
> block distribution. Enable table also takes time for larger tables, for the 
> same reason. My proposal is to refresh the HDFS block distribution at the 
> end of master initialization and not in retainAssignment()'s 
> createCluster(). This would address HBASE-16570's intention but avoid the 
> problems we ran into.
> cc [~aoxiang] [~tedyu]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488273#comment-16488273
 ] 

Ted Yu commented on HBASE-20632:


Attaching tentative patch.

> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Priority: Critical
> Attachments: 20632.v1.txt
>
>
> This was done on a local cluster (non-HDFS) and the following are the steps
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In {{1.1.x}}, the cluster recovers fine. It looks like {{meta}} is assigned 
> to a {{dummy}} regionserver, and it gets assigned when the regionserver is 
> restarted. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down
> {noformat}
> 1588230740hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20632:
---
Attachment: 20632.v1.txt

> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Priority: Critical
> Attachments: 20632.v1.txt
>
>
> This was done on a local cluster (non-HDFS) and the following are the steps
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In {{1.1.x}}, the cluster recovers fine. It looks like {{meta}} is assigned 
> to a {{dummy}} regionserver, and it gets assigned when the regionserver is 
> restarted. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down
> {noformat}
> 1588230740hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488272#comment-16488272
 ] 

Ted Yu commented on HBASE-20632:


Looking at RSGroupBasedLoadBalancer, 
{code}
  private List<ServerName> filterOfflineServers(RSGroupInfo RSGroupInfo,
      List<ServerName> onlineServers) {
    if (RSGroupInfo != null) {
      return filterServers(RSGroupInfo.getServers(), onlineServers);
{code}
If none of the servers in the RS group to which the system table belongs is 
online, an empty List is returned.
{code}
  // if no server is available, assign to bogus so it ends up in RIT
  if(!assignments.containsKey(LoadBalancer.BOGUS_SERVER_NAME)) {
assignments.put(LoadBalancer.BOGUS_SERVER_NAME, new ArrayList<>());
  }
  assignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
{code}
The system region would be associated with BOGUS_SERVER_NAME and stay in 
transition.
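
A minimal sketch of one possible mitigation (illustrative only, with an assumed 
wrapper method; not a committed fix): fall back to the full online-server list 
when a system table's RS group has no live members, rather than parking the 
region on BOGUS_SERVER_NAME.
{code}
  // Sketch: prefer the group's servers, but never hand back an empty candidate
  // list. filterOfflineServers/filterServers are the RSGroupBasedLoadBalancer
  // helpers quoted above; candidateServers is a hypothetical wrapper.
  private List<ServerName> candidateServers(RSGroupInfo groupInfo,
      List<ServerName> onlineServers) {
    List<ServerName> filtered = filterOfflineServers(groupInfo, onlineServers);
    // Empty means every server in the group is down; let any live RS host the
    // region so it does not end up on BOGUS_SERVER_NAME in RIT.
    return filtered.isEmpty() ? onlineServers : filtered;
  }
{code}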

> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Priority: Critical
>
> This was done on a local cluster (non-HDFS); the following are the steps:
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In the {{1.1.x}} version, the cluster recovers fine. It looks like {{meta}} is 
> assigned to a {{dummy}} regionserver, and when the regionserver gets restarted 
> it gets assigned. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down:
> {noformat}
> 1588230740 hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20624) Race in ReplicationSource which causes walEntryFilter being null when creating new shipper

2018-05-23 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488252#comment-16488252
 ] 

Duo Zhang commented on HBASE-20624:
---

Ping [~zghaobac].

> Race in ReplicationSource which causes walEntryFilter being null when 
> creating new shipper
> --
>
> Key: HBASE-20624
> URL: https://issues.apache.org/jira/browse/HBASE-20624
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20624.patch
>
>
> We initialize the ReplicationSource in a background thread to avoid blocking 
> peer modification. When enqueueLog is called and we want to start a new 
> shipper, we check whether the replicationEndpoint is initialized; if so, we 
> start a new shipper. But in the initialization thread we initialize 
> walEntryFilter after replicationEndpoint, so it is possible that when we 
> start a new shipper the walEntryFilter is still null, causing an NPE.
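
A minimal, self-contained sketch of the safe-publication pattern implied above 
(field and method names are stand-ins, not the actual patch): publish every 
field the shipper path reads before flipping a volatile flag, so a racing 
enqueueLog can never observe replicationEndpoint set while walEntryFilter is 
still null.
{code}
public class SourceInitSketch {
  private volatile boolean initialized = false;
  private Object replicationEndpoint; // stand-ins for the real HBase types
  private Object walEntryFilter;

  void backgroundInit() {
    replicationEndpoint = new Object();
    walEntryFilter = new Object();
    initialized = true; // volatile write publishes both assignments above
  }

  void enqueueLog(String log) {
    if (!initialized) {
      return; // too early; a later call will start the shipper
    }
    // Both fields are guaranteed non-null here, so no NPE when creating the
    // shipper.
    System.out.println("start shipper for " + log + " using " + walEntryFilter);
  }
}
{code}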



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488250#comment-16488250
 ] 

Andrew Purtell edited comment on HBASE-19722 at 5/24/18 12:46 AM:
--

Thanks for the updates to the patch, it looks good and we could commit it.

bq. I believe it makes sense to implement it. I will try to implement a 
version based on the Lossy Counting TopK algorithm.

That sounds awesome [~xucang]. Indeed, that would be an impressive contribution.




was (Author: apurtell):
Thanks for the updates to the patch, it looks and we could commit it.

bq.  I believe it makes sense to implement it. I will try to implement a 
version based on Lossy counting TopK algrithm.  

That sounds awesome [~xucang] . Indeed would be an impressive contribution



> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.
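
As background for the "can be implemented as a coprocessor" note above, a 
hedged sketch of that shape (assuming the HBase 2.x coprocessor API; the class 
name and in-memory map are illustrative, and a real implementation would 
publish through the HBase metrics framework rather than hold a plain map):
{code}
import java.net.InetAddress;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.ipc.RpcServer;

public class MetaQueryStatsObserver implements RegionCoprocessor, RegionObserver {
  // client hostname -> request count while this RS hosts meta
  private final ConcurrentHashMap<String, LongAdder> byClient = new ConcurrentHashMap<>();

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public void preGetOp(ObserverContext<RegionCoprocessorEnvironment> ctx, Get get,
      List<Cell> results) {
    if (ctx.getEnvironment().getRegionInfo().isMetaRegion()) {
      String client = RpcServer.getRemoteAddress()
          .map(InetAddress::getHostName).orElse("unknown");
      byClient.computeIfAbsent(client, k -> new LongAdder()).increment();
    }
  }
}
{code}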



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488250#comment-16488250
 ] 

Andrew Purtell edited comment on HBASE-19722 at 5/24/18 12:47 AM:
--

Thanks for the updates to the patch, it looks good and we could commit it.

bq. I believe it makes sense to implement it. I will try to implement a 
version based on the Lossy Counting TopK algorithm.

That sounds awesome [~xucang]. Indeed, that would be an impressive contribution. 
I've not tried to dynamically unregister counter metrics, so I can't advise 
whether that is possible. That's the first thing to check.




was (Author: apurtell):
Thanks for the updates to the patch, it looks good and we could commit it.

bq.  I believe it makes sense to implement it. I will try to implement a 
version based on Lossy counting TopK algrithm.  

That sounds awesome [~xucang] . Indeed would be an impressive contribution



> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488250#comment-16488250
 ] 

Andrew Purtell commented on HBASE-19722:


Thanks for the updates to the patch, it looks good and we could commit it.

bq. I believe it makes sense to implement it. I will try to implement a 
version based on the Lossy Counting TopK algorithm.

That sounds awesome [~xucang]. Indeed, that would be an impressive contribution.



> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488249#comment-16488249
 ] 

Hadoop QA commented on HBASE-20627:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
29s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} hbase-rsgroup: The patch generated 2 new + 4 unchanged 
- 2 fixed = 6 total (was 6) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  0m 
54s{color} | {color:red} The patch causes 44 errors with Hadoop v2.4.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  1m 
52s{color} | {color:red} The patch causes 44 errors with Hadoop v2.5.2. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
41s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HBASE-20589) Don't need to assign meta to a new RS when standby master become active

2018-05-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488248#comment-16488248
 ] 

stack commented on HBASE-20589:
---

Good one. +1 for patch and for branch-2.0. Thanks.

> Don't need to assign meta to a new RS when standby master become active
> ---
>
> Key: HBASE-20589
> URL: https://issues.apache.org/jira/browse/HBASE-20589
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-20589.master.001.patch, 
> HBASE-20589.master.002.patch, HBASE-20589.master.003.patch, 
> HBASE-20589.master.003.patch, HBASE-20589.master.004.patch, 
> HBASE-20589.master.005.patch, HBASE-20589.master.006.patch, 
> HBASE-20589.master.007.patch, HBASE-20589.master.008.patch, 
> HBASE-20589.master.008.patch, HBASE-20589.master.009.patch
>
>
> I found this problem when writing a ut for HBASE-20569. Now the master's 
> finishActiveMasterInitialization introduces a new RecoverMetaProcedure 
> (HBASE-18261), which has a sub procedure AssignProcedure. AssignProcedure 
> will skip assigning a region when the region's state is OPEN and its server 
> is online. But the new region state node is created with state OFFLINE, so it 
> will assign meta to a new RS, and kill the old RS when the old RS reports to 
> the master. This makes the master initialization take a long time. I will 
> attach a ut to show this. FYI [~stack]
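
A hedged sketch of the skip condition being described (names approximate the 
master code paths; this is not the actual patch):
{code}
  // Only (re)assign meta when its last known location is no longer serving:
  // if the recovered region state says OPEN and that server is still online,
  // skip the AssignProcedure so the old RS is neither bypassed nor killed.
  boolean shouldAssignMeta(RegionState metaState, ServerManager serverManager) {
    return !(metaState.isOpened()
        && serverManager.isServerOnline(metaState.getServerName()));
  }
{code}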



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20605) Exclude new Azure Storage FileSystem from SecureBulkLoadEndpoint permission check

2018-05-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488245#comment-16488245
 ] 

stack commented on HBASE-20605:
---

If no one has tried it, yeah, put it in the unsupported until proved 
otherwise. Thanks

> Exclude new Azure Storage FileSystem from SecureBulkLoadEndpoint permission 
> check
> -
>
> Key: HBASE-20605
> URL: https://issues.apache.org/jira/browse/HBASE-20605
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.5
>
> Attachments: HBASE-20605.001.branch-1.patch
>
>
> Some folks in Hadoop are working on landing a new FileSystem from the Azure 
> team: HADOOP-15407
> At present, this FileSystem doesn't support permissions, which causes the 
> SecureBulkLoadEndpoint to balk because the staging directory doesn't have 
> the proper 711 permissions.
> We have a static list of FileSystem schemes for which we skip this check. I 
> have a patch on an HBase 1.1-ish branch which:
>  # Adds the new FileSystem scheme
>  # Makes this list configurable for the future
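
A hedged sketch of what point 2 could look like (the property key and method 
are assumptions for illustration, not the committed patch):
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

public class BulkLoadSchemeCheck {
  // Hypothetical key; the actual configuration name is defined by the patch.
  static final String IGNORE_SCHEMES_KEY =
      "hbase.secure.bulkload.ignore.permission.check.filesystems";

  /** True if the staging-directory permission check should be skipped. */
  static boolean skipPermissionCheck(Configuration conf, URI fsUri) {
    // Defaults here are examples of FileSystems without POSIX-style permissions.
    for (String scheme : conf.getStrings(IGNORE_SCHEMES_KEY, "wasb", "swift")) {
      if (scheme.equalsIgnoreCase(fsUri.getScheme())) {
        return true;
      }
    }
    return false;
  }
}
{code}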



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20627:
---
Status: Patch Available  (was: Reopened)

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.branch-1.txt, 20627.v1.txt, 20627.v2.txt, 
> 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.
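
A hedged sketch of the shape after the relocation (the ordering is an 
assumption based on the description above, not the committed patch): the 
endpoint invokes the hooks itself, so they no longer live inside 
RSGroupAdminServer.
{code}
// In RSGroupAdminEndpoint (sketch): fire the pre hook before the permission
// check so a denied caller can no longer skip it, keep the admin logic in
// groupAdminServer, and fire the post hook only after success.
void removeRSGroup(String name) throws IOException {
  if (master.getMasterCoprocessorHost() != null) {
    master.getMasterCoprocessorHost().preRemoveRSGroup(name);
  }
  checkPermission("removeRSGroup");
  groupAdminServer.removeRSGroup(name);
  if (master.getMasterCoprocessorHost() != null) {
    master.getMasterCoprocessorHost().postRemoveRSGroup(name);
  }
}
{code}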



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reopened HBASE-20627:


> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.branch-1.txt, 20627.v1.txt, 20627.v2.txt, 
> 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20627:
---
Attachment: 20627.branch-1.txt

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.branch-1.txt, 20627.v1.txt, 20627.v2.txt, 
> 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20627:
---
Attachment: (was: 20627.branch-1.txt)

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.branch-1.txt, 20627.v1.txt, 20627.v2.txt, 
> 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20478) move import checks from hbaseanti to checkstyle

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488231#comment-16488231
 ] 

Hudson commented on HBASE-20478:


Results for branch HBASE-20478
[build #4 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20478/4/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20478/4//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20478/4//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20478/4//JDK8_Nightly_Build_Report_(Hadoop3)/]


(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> move import checks from hbaseanti to checkstyle
> ---
>
> Key: HBASE-20478
> URL: https://issues.apache.org/jira/browse/HBASE-20478
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Mike Drob
>Priority: Minor
> Attachments: HBASE-20478.0.patch, HBASE-20478.1.patch, 
> HBASE-20478.2.patch, HBASE-20478.3.patch, HBASE-20478.4.patch, 
> HBASE-20478.WIP.2.patch, HBASE-20478.WIP.2.patch, HBASE-20478.WIP.patch, 
> HBASE-anti-check.patch
>
>
> came up in discussion on HBASE-20332. our check of "don't do this" things in 
> the codebase doesn't log the specifics of complaints anywhere, which forces 
> those who want to follow up to reverse engineer the check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488229#comment-16488229
 ] 

Hudson commented on HBASE-20597:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1113 (See 
[https://builds.apache.org/job/HBase-1.2-IT/1113/])
HBASE-20597 Use a lock to serialize access to a shared reference to (apurtell: 
rev 8040c0ca7696dfa776ee8449750279aa91c3fbd4)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HBaseReplicationEndpoint.java


> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 
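
A minimal, self-contained sketch of the locking described in the title (names 
are stand-ins; the real code guards a ZooKeeperWatcher field in 
HBaseReplicationEndpoint):
{code}
public class WatcherHolder {
  private final Object zkwLock = new Object();
  private AutoCloseable zkw; // stand-in for the ZooKeeperWatcher reference

  AutoCloseable get() throws Exception {
    synchronized (zkwLock) {
      if (zkw == null) {
        zkw = openWatcher(); // only one thread can allocate; no leaked instances
      }
      return zkw;
    }
  }

  void resetOnFailure() throws Exception {
    synchronized (zkwLock) {
      if (zkw != null) {
        zkw.close(); // close under the same lock before replacing the reference
      }
      zkw = openWatcher();
    }
  }

  private AutoCloseable openWatcher() {
    return () -> { }; // placeholder; the real code constructs a ZooKeeperWatcher
  }
}
{code}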



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488227#comment-16488227
 ] 

Hudson commented on HBASE-20597:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #410 (See 
[https://builds.apache.org/job/HBase-1.3-IT/410/])
HBASE-20597 Use a lock to serialize access to a shared reference to (apurtell: 
rev b50e149804afeeb91bc561d2e2ae1ca6fe33c7d3)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HBaseReplicationEndpoint.java


> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-23 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488223#comment-16488223
 ] 

Xu Cang commented on HBASE-19722:
-

I thought about the comments regarding TopK client metrics again. I believe it 
makes sense to implement it. I will try to implement a version based on the 
Lossy Counting TopK algorithm 
([https://micvog.files.wordpress.com/2015/06/approximate_freq_count_over_data_streams_vldb_2002.pdf]).
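
For reference, a hedged sketch of the lossy-counting idea from that paper 
(Manku & Motwani, VLDB 2002); the class and parameter names are illustrative, 
not from any HBase patch:
{code}
import java.util.HashMap;
import java.util.Map;

public class LossyCounter {
  private static final class Entry {
    long count;        // observed occurrences since insertion
    final long delta;  // maximum possible undercount at insertion time
    Entry(long count, long delta) { this.count = count; this.delta = delta; }
  }

  private final Map<String, Entry> table = new HashMap<>();
  private final long bucketWidth; // ceil(1 / epsilon)
  private long n = 0;             // total items seen

  public LossyCounter(double epsilon) {
    this.bucketWidth = (long) Math.ceil(1.0 / epsilon);
  }

  public void offer(String item) {
    final long bucket = n / bucketWidth + 1; // current bucket id (1-based)
    Entry e = table.get(item);
    if (e != null) {
      e.count++;
    } else {
      table.put(item, new Entry(1L, bucket - 1));
    }
    n++;
    if (n % bucketWidth == 0) {
      // At each bucket boundary, drop entries that cannot be frequent:
      // their true count is at most count + delta <= bucket <= epsilon * n.
      table.values().removeIf(en -> en.count + en.delta <= bucket);
    }
  }

  /** Estimated counts; each true count lies in [count, count + delta]. */
  public Map<String, Long> estimates() {
    Map<String, Long> out = new HashMap<>();
    table.forEach((k, v) -> out.put(k, v.count));
    return out;
  }
}
{code}
Feeding it, say, per-client hostnames from meta requests would keep memory 
bounded at roughly O(1/epsilon) entries while still surfacing the heavy hitters.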
 

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19722) Implement a meta query statistics metrics source

2018-05-23 Thread Xu Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488223#comment-16488223
 ] 

Xu Cang edited comment on HBASE-19722 at 5/23/18 11:56 PM:
---

I thought about the comments regarding TopK client metrics again. I believe it 
makes sense to implement it. I will try to implement a version based on the 
Lossy Counting TopK algorithm 
([https://micvog.files.wordpress.com/2015/06/approximate_freq_count_over_data_streams_vldb_2002.pdf]).
 


was (Author: xucang):
I thought comments regarding TopK client metrics again. I believe it makes 
sense to implement it. I will try to implement a version based on Lossy 
counting TopK algrithm.  
([https://micvog.files.wordpress.com/2015/06/approximate_freq_count_over_data_streams_vldb_2002.pdf)]
 

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488219#comment-16488219
 ] 

Andrew Purtell commented on HBASE-20627:


+1 on branch-1 patch, assuming tests pass

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.branch-1.txt, 20627.v1.txt, 20627.v2.txt, 
> 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20597:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488217#comment-16488217
 ] 

Andrew Purtell commented on HBASE-20597:


To https://git-wip-us.apache.org/repos/asf/hbase.git
   498f3bf953..1b70763b9e  branch-1 -> branch-1
   66941d70bb..8040c0ca76  branch-1.2 -> branch-1.2
   94001b35a7..b50e149804  branch-1.3 -> branch-1.3
   cd6397be6b..7182df3bd3  branch-1.4 -> branch-1.4
   12d75724d7..60dcef289b  branch-2 -> branch-2
   86a9b80ff4..6ecb444208  branch-2.0 -> branch-2.0
   3a805074a2..9fbce1668b  master -> master


> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20627:
---
Attachment: 20627.branch-1.txt

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.branch-1.txt, 20627.v1.txt, 20627.v2.txt, 
> 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488204#comment-16488204
 ] 

Ted Yu commented on HBASE-20627:


There're many hunks in conflict when applying the patch to branch-1.

Working to resolve them.

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.v1.txt, 20627.v2.txt, 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488188#comment-16488188
 ] 

Biju Nair edited comment on HBASE-20632 at 5/23/18 11:29 PM:
-

Additional observation: with {{rsgroup}} enabled but the new {{rsgroup}} not 
created, the cluster recovers fine from failure of the RSes hosting the system 
tables when the RS restarts.

It would be better if {{system}} tables were not restricted to a {{rsgroup}}, so 
that they could be serviced by any available region server in the cluster, making 
the cluster more available.


was (Author: gsbiju):
Additional observation: Enabling {{rsgroup}} , and not creating the new 
{{rsgroup}} , the cluster recovers fine from failure of RSes hosting the system 
table when the RS recovers.

It would be better if {{system}} tables are not restricted to a {{rsgroup}} so 
that they can be serviced by any available region server  in the cluster making 
it more available.

> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Priority: Critical
>
> This was done on a local cluster (non-HDFS); the following are the steps:
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In the {{1.1.x}} version, the cluster recovers fine. It looks like {{meta}} is 
> assigned to a {{dummy}} regionserver, and when the regionserver gets restarted 
> it gets assigned. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down:
> {noformat}
> 1588230740 hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20632:
---
Priority: Critical  (was: Major)

> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Priority: Critical
>
> This was done on a local cluster (non-HDFS); the following are the steps:
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In the {{1.1.x}} version, the cluster recovers fine. It looks like {{meta}} is 
> assigned to a {{dummy}} regionserver, and when the regionserver gets restarted 
> it gets assigned. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down:
> {noformat}
> 1588230740 hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488188#comment-16488188
 ] 

Biju Nair commented on HBASE-20632:
---

Additional observation: with {{rsgroup}} enabled but the new {{rsgroup}} not 
created, the cluster recovers fine from failure of the RSes hosting the system 
tables when the RS recovers.

It would be better if {{system}} tables were not restricted to a {{rsgroup}}, so 
that they could be serviced by any available region server in the cluster, making 
the cluster more available.

> Failure of RSes belonging to RSgroup for System tables makes the cluster 
> unavailable
> 
>
> Key: HBASE-20632
> URL: https://issues.apache.org/jira/browse/HBASE-20632
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 3.0.0
>Reporter: Biju Nair
>Priority: Critical
>
> This was done on a local cluster (non-HDFS); the following are the steps:
>  * Start a single node cluster and start an additional RS using 
> {{local-regionservers.sh}}
>  * Through hbase shell add a new rs group
>  * 
> {noformat}
> hbase(main):001:0> add_rsgroup 'test_rsgroup'
> Took 0.5503 seconds
> hbase(main):002:0> list_rsgroups
> NAME SERVER / TABLE
> test_rsgroup
> default server dob2-r3n13:16020
> server dob2-r3n13:16022
> table hbase:meta
> table hbase:acl
> table hbase:quota
> table hbase:namespace
> table hbase:rsgroup
> 2 row(s)
> Took 0.0419 seconds{noformat}
>  * Move one of the region servers to the new {{rsgroup}}
>  * 
> {noformat}
> hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
> Took 6.4894 seconds
> hbase(main):005:0> exit{noformat}
>  * Stop the regionserver which is left in the {{default}} rsgroup
>  * 
> {noformat}
> local-regionservers.sh stop 2{noformat}
> The cluster becomes unusable even if the region server is restarted or even 
> if all the services were brought down and brought up.
> In the {{1.1.x}} version, the cluster recovers fine. It looks like {{meta}} is 
> assigned to a {{dummy}} regionserver, and when the regionserver gets restarted 
> it gets assigned. The following is what we can see in the {{master}} UI when the 
> {{rs}} is down:
> {noformat}
> 1588230740 hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
> 18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20597) Use a lock to serialize access to a shared reference to ZooKeeperWatcher in HBaseReplicationEndpoint

2018-05-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488189#comment-16488189
 ] 

Lars Hofhansl commented on HBASE-20597:
---

+1

> Use a lock to serialize access to a shared reference to ZooKeeperWatcher in 
> HBaseReplicationEndpoint
> 
>
> Key: HBASE-20597
> URL: https://issues.apache.org/jira/browse/HBASE-20597
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2, 1.4.4
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 2.0.1, 1.4.5
>
> Attachments: HBASE-20597-branch-1.patch, HBASE-20597.patch
>
>
> The code that closes down a ZKW that fails to initialize when attempting to 
> connect to the remote cluster is not MT safe and can in theory leak 
> ZooKeeperWatcher instances. The allocation of a new ZKW and store to the 
> reference is not atomic. Might have concurrent allocations with only one 
> winning store, leading to leaked ZKW instances. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488180#comment-16488180
 ] 

Ted Yu commented on HBASE-20627:


[~stack]:
Do you want this in branch-2.0?

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.v1.txt, 20627.v2.txt, 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20632) Failure of RSes belonging to RSgroup for System tables makes the cluster unavailable

2018-05-23 Thread Biju Nair (JIRA)
Biju Nair created HBASE-20632:
-

 Summary: Failure of RSes belonging to RSgroup for System tables 
makes the cluster unavailable
 Key: HBASE-20632
 URL: https://issues.apache.org/jira/browse/HBASE-20632
 Project: HBase
  Issue Type: Bug
  Components: master, regionserver
Affects Versions: 3.0.0
Reporter: Biju Nair


This was done on a local cluster (non-HDFS); the following are the steps:
 * Start a single node cluster and start an additional RS using 
{{local-regionservers.sh}}
 * Through hbase shell add a new rs group
 * 
{noformat}
hbase(main):001:0> add_rsgroup 'test_rsgroup'
Took 0.5503 seconds
hbase(main):002:0> list_rsgroups
NAME SERVER / TABLE
test_rsgroup
default server dob2-r3n13:16020
server dob2-r3n13:16022
table hbase:meta
table hbase:acl
table hbase:quota
table hbase:namespace
table hbase:rsgroup
2 row(s)
Took 0.0419 seconds{noformat}

 * Move one of the region servers to the new {{rsgroup}}
 * 
{noformat}
hbase(main):004:0> move_servers_rsgroup 'test_rsgroup',['dob2-r3n13:16020']
Took 6.4894 seconds
hbase(main):005:0> exit{noformat}

 * Stop the regionserver which is left in the {{default}} rsgroup
 * 
{noformat}
local-regionservers.sh stop 2{noformat}

The cluster becomes unusable even if the region server is restarted or even if 
all the services were brought down and brought up.

In the {{1.1.x}} version, the cluster recovers fine. It looks like {{meta}} is 
assigned to a {{dummy}} regionserver, and when the regionserver gets restarted 
it gets assigned. The following is what we can see in the {{master}} UI when the 
{{rs}} is down:
{noformat}
1588230740  hbase:meta,,1.1588230740 state=PENDING_OPEN, ts=Wed May 23 
18:24:01 EDT 2018 (1s ago), server=localhost,1,1{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488175#comment-16488175
 ] 

Andrew Purtell commented on HBASE-20627:


This applies to rsgroups in all branches, right? Please backport to all active 
branches: master, branch-2, branch-2.0, branch-1, branch-1.4. Thanks.

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.v1.txt, 20627.v2.txt, 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488178#comment-16488178
 ] 

Andrew Purtell commented on HBASE-20627:


If you don't want to do branch-1, that's fine; please just let me know and I'll 
do it.

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.v1.txt, 20627.v2.txt, 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20631) B: Merge command enhancements

2018-05-23 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-20631:
-

 Summary: B: Merge command enhancements 
 Key: HBASE-20631
 URL: https://issues.apache.org/jira/browse/HBASE-20631
 Project: HBase
  Issue Type: New Feature
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov


Currently, merge supports only a list of backup ids, which users must provide. 
Date-range merges seem more convenient for users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20630) B: Delete command enhancements

2018-05-23 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-20630:
-

 Summary: B: Delete command enhancements
 Key: HBASE-20630
 URL: https://issues.apache.org/jira/browse/HBASE-20630
 Project: HBase
  Issue Type: New Feature
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov


Make the command more usable. Currently, the user needs to provide a list of 
backup ids to delete. It would be nice to have more convenient options, such as 
deleting all backups which are older than XXX days, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20627:
---
   Resolution: Fixed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

Thanks for the review, Andrew

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
> Attachments: 20627.v1.txt, 20627.v2.txt, 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If permission check fails, the pre hook wouldn't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20601) Add multiPut support and other miscellaneous to PE

2018-05-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488147#comment-16488147
 ] 

Ted Yu commented on HBASE-20601:


Allan:
Can you commit this to the master branch and resolve?

> Add multiPut support and other miscellaneous to PE
> --
>
> Key: HBASE-20601
> URL: https://issues.apache.org/jira/browse/HBASE-20601
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: HBASE-20601.002.patch, HBASE-20601.003.patch, 
> HBASE-20601.branch-2.002.patch, HBASE-20601.branch-2.003.patch, 
> HBASE-20601.branch-2.004.patch, HBASE-20601.branch-2.005.patch, 
> HBASE-20601.branch-2.006.patch, HBASE-20601.branch-2.patch, HBASE-20601.patch
>
>
> Add some useful features and refinements to the PE tool.
> 1. Add multiPut support
> Though we have BufferedMutator, sometimes we need to benchmark batched puts of 
> a certain size.
> Set --multiPut=number to enable batch puts (meanwhile, --autoflush needs to be 
> set to false).
> 2. Add connection number support
> Before, there was only one parameter to control the connections used by 
> threads: oneCon=true means all threads use one connection, false means each 
> thread has its own connection.
> When the thread count is high and oneCon=false, we noticed a high context-switch 
> frequency on the machine PE runs on, disturbing the benchmark results (each 
> connection has its own netty worker threads, 2*CPU IIRC).
> So a new parameter, connCount, was added to PE; setting --connCount=2 means all 
> threads will share 2 connections.
> 3. Add avg RT and avg TPS/QPS statistics for all threads
> Useful when we want to measure the total throughput of the cluster.
> 4. Delete some redundant code
> Now RandomWriteTest inherits from SequentialWrite.
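
For illustration, an invocation combining the options above might look like the 
following (flag spellings taken from this description; treat the exact syntax 
as an assumption and check the tool's usage output):
{noformat}
hbase org.apache.hadoop.hbase.PerformanceEvaluation --oneCon=false --connCount=2 \
  --autoflush=false --multiPut=100 randomWrite 20
{noformat}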



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20588) Space quota change after quota violation doesn't seem to take in effect

2018-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488146#comment-16488146
 ] 

Hadoop QA commented on HBASE-20588:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
24s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 16 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}120m 
45s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20588 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924818/HBASE-20588.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux f7aa32adf1a3 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 304d3e6fa9 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12922/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12922/testReport/ |
| Max. process+thread count | 4448 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-20331) clean up shaded packaging for 2.1

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488122#comment-16488122
 ] 

Hudson commented on HBASE-20331:


Results for branch HBASE-20331
[build #17 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/17/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/17//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/17//console].


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/17//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/17//console].


> clean up shaded packaging for 2.1
> -
>
> Key: HBASE-20331
> URL: https://issues.apache.org/jira/browse/HBASE-20331
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client, mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
>
> polishing pass on shaded modules for 2.0 based on trying to use them in more 
> contexts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488113#comment-16488113
 ] 

Andrew Purtell commented on HBASE-20627:


Thanks!

+1


> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20627.v1.txt, 20627.v2.txt, 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If the permission check fails, the pre hook won't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20390) IMC Default Parameters for 2.0.0

2018-05-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488105#comment-16488105
 ] 

stack commented on HBASE-20390:
---

+1 for branch-2.0. +1 on patch.

> IMC Default Parameters for 2.0.0
> 
>
> Key: HBASE-20390
> URL: https://issues.apache.org/jira/browse/HBASE-20390
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
>Priority: Major
> Attachments: HBASE-20390-branch-2.0-01.patch, 
> HBASE-20390-branch-2.0-01.patch, HBASE-20390.branch-2.0.002.patch, 
> HBASE-20390.branch-2.0.003.patch, HBase 2.0 performance evaluation - 
> throughput SSD_HDD.pdf, hits.ihc.png
>
>
> Setting new default parameters for in-memory compaction based on performance 
> tests done in HBASE-20188 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20594) provide utility to compare old and new descriptors

2018-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488102#comment-16488102
 ] 

Hadoop QA commented on HBASE-20594:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
34s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-checkstyle {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
35s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
49s{color} | {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 49s{color} 
| {color:red} hbase-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  2m 
40s{color} | {color:red} patch has 24 errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  1m 
56s{color} | {color:red} The patch causes 24 errors with Hadoop v2.6.5. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  3m 
48s{color} | {color:red} The patch causes 24 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  5m 
46s{color} | {color:red} The patch causes 24 errors with Hadoop v3.0.0. {color} 
|
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-checkstyle {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} hbase-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hbase-checkstyle in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} hbase-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 45s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Commented] (HBASE-20588) Space quota change after quota violation doesn't seem to take in effect

2018-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488101#comment-16488101
 ] 

Hadoop QA commented on HBASE-20588:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
34s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
17s{color} | {color:red} hbase-server: The patch generated 2 new + 4 unchanged 
- 0 fixed = 6 total (was 4) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 15 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
36s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 19s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m  3s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.security.token.TestZKSecretWatcher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20588 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924813/HBASE-20588.master.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 24c90434385d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 304d3e6fa9 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12921/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| whitespace | 

[jira] [Commented] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488097#comment-16488097
 ] 

Hadoop QA commented on HBASE-20627:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
32s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 48s{color} 
| {color:red} hbase-rsgroup generated 3 new + 104 unchanged - 0 fixed = 107 
total (was 104) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
36s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 11s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
20s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20627 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924829/20627.v3.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 8607f7c16b96 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 079f168c5c |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| javac | 

[jira] [Assigned] (HBASE-20542) Better heap utilization for IMC with MSLABs

2018-05-23 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel reassigned HBASE-20542:
-

Assignee: Eshcar Hillel

> Better heap utilization for IMC with MSLABs
> ---
>
> Key: HBASE-20542
> URL: https://issues.apache.org/jira/browse/HBASE-20542
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
>Priority: Major
>
> Following HBASE-20188 we realized that in-memory compaction combined with 
> MSLABs may suffer from heap under-utilization due to internal fragmentation. 
> This jira presents a solution to circumvent this problem. The main idea is 
> to have each update operation check whether it would cause overflow in the 
> active segment *before* writing the new value (instead of checking the size 
> after the write completes); if it would, the active segment is atomically 
> swapped with a new empty segment and pushed (full-yet-not-overflowed) to the 
> compaction pipeline. Later on the IMC daemon runs its compaction operation 
> (flatten index/merge indices/data compaction) in the background. Some subtle 
> concurrency issues must be handled with care; we elaborate on them next.
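> A minimal sketch of the check-before-write flow, using hypothetical segment 
> APIs (willOverflow, swapActive, pipeline) rather than the actual patch:
> {code}
> // Each update checks for overflow BEFORE writing, not after.
> void add(Cell cell) {
>   while (true) {
>     Segment active = this.active;
>     if (active.willOverflow(cell)) {
>       // Atomically install a fresh empty segment; only the winner of the
>       // swap pushes the full-yet-not-overflowed segment to the pipeline,
>       // where the IMC daemon will later compact it in the background.
>       if (swapActive(active, newEmptySegment())) {
>         pipeline.push(active);
>       }
>       continue;  // retry the write against the new active segment
>     }
>     active.add(cell);
>     return;
>   }
> }
> {code}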



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20594) provide utility to compare old and new descriptors

2018-05-23 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20594:
--
Attachment: HBASE-20594.v3.patch

> provide utility to compare old and new descriptors
> --
>
> Key: HBASE-20594
> URL: https://issues.apache.org/jira/browse/HBASE-20594
> Project: HBase
>  Issue Type: Improvement
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Attachments: HBASE-20594.patch, HBASE-20594.v2.patch, 
> HBASE-20594.v3.patch
>
>
> HBASE-20567 gives us hooks that give both the old and new descriptor in 
> pre/postModify* events, but comparing them is still cumbersome. We should 
> provide users some kind of utility for this.
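> As a rough sketch of what such a utility might look like (the helper name 
> and equality check are illustrative, not the committed API):
> {code}
> // Hypothetical helper: list column families that differ between the old
> // and new descriptors handed to pre/postModify* hooks. Imports assumed:
> // org.apache.hadoop.hbase.client.TableDescriptor and
> // ColumnFamilyDescriptor; java.util.Set and TreeSet.
> static Set<String> changedFamilies(TableDescriptor oldTd, TableDescriptor newTd) {
>   Set<String> changed = new TreeSet<>();
>   for (ColumnFamilyDescriptor newCf : newTd.getColumnFamilies()) {
>     ColumnFamilyDescriptor oldCf = oldTd.getColumnFamily(newCf.getName());
>     if (oldCf == null || ColumnFamilyDescriptor.COMPARATOR.compare(oldCf, newCf) != 0) {
>       changed.add(newCf.getNameAsString());
>     }
>   }
>   return changed;  // covers added/modified families; deletions need a reverse pass
> }
> {code}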



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20390) IMC Default Parameters for 2.0.0

2018-05-23 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488062#comment-16488062
 ] 

Eshcar Hillel commented on HBASE-20390:
---

QA passed successfully.
Recall that these parameters showed performance improvement wrt the previous 
default params, and we plan to continue the work to reduce internal 
fragmentation in HBASE-20542.
I plan to commit this to master.
Should I commit this to branch-2.1.0 [~Apache9]?
Should I commit this to branch-2.0.1 [~chia7712]?
Should I commit this to branch-2.0 [~stack]?

> IMC Default Parameters for 2.0.0
> 
>
> Key: HBASE-20390
> URL: https://issues.apache.org/jira/browse/HBASE-20390
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
>Priority: Major
> Attachments: HBASE-20390-branch-2.0-01.patch, 
> HBASE-20390-branch-2.0-01.patch, HBASE-20390.branch-2.0.002.patch, 
> HBASE-20390.branch-2.0.003.patch, HBase 2.0 performance evaluation - 
> throughput SSD_HDD.pdf, hits.ihc.png
>
>
> Setting new default parameters for in-memory compaction based on performance 
> tests done in HBASE-20188 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20628) SegmentScanner does over-comparing when one flushing

2018-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488046#comment-16488046
 ] 

Hadoop QA commented on HBASE-20628:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
52s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 19s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 13s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.regionserver.TestWalAndCompactingMemStoreFlush |
|   | hadoop.hbase.regionserver.TestCompactingToCellFlatMapMemStore |
|   | hadoop.hbase.TestAcidGuaranteesWithEagerPolicy |
|   | hadoop.hbase.regionserver.TestCompactingMemStore |
|   | hadoop.hbase.regionserver.TestHStore |
|   | hadoop.hbase.TestIOFencing |
|   | hadoop.hbase.TestAcidGuaranteesWithBasicPolicy |
|   | hadoop.hbase.regionserver.TestMajorCompaction |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:369877d |
| JIRA Issue | HBASE-20628 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924802/HBASE-20628.branch-2.0.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 33d4626c356e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HBASE-20627) Relocate RS Group pre/post hooks from RSGroupAdminServer to RSGroupAdminEndpoint

2018-05-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20627:
---
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

> Relocate RS Group pre/post hooks from RSGroupAdminServer to 
> RSGroupAdminEndpoint
> 
>
> Key: HBASE-20627
> URL: https://issues.apache.org/jira/browse/HBASE-20627
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20627.v1.txt, 20627.v2.txt, 20627.v3.txt
>
>
> Currently RS Group pre/post hooks are called from RSGroupAdminServer.
> e.g. RSGroupAdminServer#removeRSGroup :
> {code}
>   if (master.getMasterCoprocessorHost() != null) {
> master.getMasterCoprocessorHost().preRemoveRSGroup(name);
>   }
> {code}
> RSGroupAdminServer#removeRSGroup is called by RSGroupAdminEndpoint :
> {code}
> checkPermission("removeRSGroup");
> groupAdminServer.removeRSGroup(request.getRSGroupName());
> {code}
> If the permission check fails, the pre hook won't be called.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

