[jira] [Commented] (HBASE-20741) Split of a region with replicas creates all daughter regions and its replica in same server

2018-09-03 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602627#comment-16602627
 ] 

ramkrishna.s.vasudevan commented on HBASE-20741:


I had to push this as 2 commits because only the test case went in with the 
first commit. I think I missed some arg on the command line. 

> Split of a region with replicas creates all daughter regions and its replica 
> in same server
> ---
>
> Key: HBASE-20741
> URL: https://issues.apache.org/jira/browse/HBASE-20741
> Project: HBase
>  Issue Type: Bug
>  Components: read replicas
>Affects Versions: 3.0.0, 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-20741.patch, HBASE-20741_1.patch, 
> HBASE-20741_2.patch, HBASE-20741_2.patch, HBASE-20741_3.patch, 
> HBASE-20741_4.patch, HBASE-20741_5.patch, HBASE-20741_5.patch
>
>
> Generally it is better that, when a parent region splits, the daughter 
> regions are created on the same target server. 
> But we do the same for the replicas too, so all the replica regions are 
> created on the same target server. We should ideally be doing a round robin, 
> and only the primary daughter region should be opened on the intended target 
> server (where the parent was previously opened).
> [~huaxiang] FYI.
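
A minimal sketch of the round-robin placement described above (illustrative
only, not the committed patch; the helper and its wiring are assumed, while
ServerName is the real HBase class):

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.ServerName;

public class ReplicaPlacementSketch {
  /**
   * Picks servers for a split's daughter region and its replicas: the
   * primary daughter (replica 0) stays on the parent's server, while the
   * replicas are spread round-robin over the remaining live servers.
   */
  static List<ServerName> placeDaughter(ServerName parentServer,
      List<ServerName> liveServers, int numReplicas) {
    List<ServerName> targets = new ArrayList<>();
    targets.add(parentServer); // primary daughter keeps locality
    int idx = Math.max(0, liveServers.indexOf(parentServer));
    for (int replicaId = 1; replicaId < numReplicas; replicaId++) {
      idx = (idx + 1) % liveServers.size();
      if (liveServers.get(idx).equals(parentServer) && liveServers.size() > 1) {
        // avoid colocating a replica with the primary daughter
        idx = (idx + 1) % liveServers.size();
      }
      targets.add(liveServers.get(idx));
    }
    return targets;
  }
}
{code}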



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20741) Split of a region with replicas creates all daughter regions and its replica in same server

2018-09-03 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602621#comment-16602621
 ] 

ramkrishna.s.vasudevan commented on HBASE-20741:


All the failing tests pass. Pushed to master. Thanks for the reviews 
[~huaxiang], [~saint@gmail.com] & [~yuzhih...@gmail.com].
[~saint@gmail.com], [~Apache9]
Do you want this in 2.0 and 2.2 branches?

> Split of a region with replicas creates all daughter regions and its replica 
> in same server
> ---
>
> Key: HBASE-20741
> URL: https://issues.apache.org/jira/browse/HBASE-20741
> Project: HBase
>  Issue Type: Bug
>  Components: read replicas
>Affects Versions: 3.0.0, 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-20741.patch, HBASE-20741_1.patch, 
> HBASE-20741_2.patch, HBASE-20741_2.patch, HBASE-20741_3.patch, 
> HBASE-20741_4.patch, HBASE-20741_5.patch, HBASE-20741_5.patch
>
>
> Generally it is better that, when a parent region splits, the daughter 
> regions are created on the same target server. 
> But we do the same for the replicas too, so all the replica regions are 
> created on the same target server. We should ideally be doing a round robin, 
> and only the primary daughter region should be opened on the intended target 
> server (where the parent was previously opened).
> [~huaxiang] FYI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21102) ServerCrashProcedure should select target server where no other replicas exist for the current region

2018-09-03 Thread huaxiang sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602613#comment-16602613
 ] 

huaxiang sun commented on HBASE-21102:
--

[~ram_krish], I was out this weekend, will take a look tomorrow morning PST, 
thanks.

> ServerCrashProcedure should select target server where no other replicas 
> exist for the current region
> -
>
> Key: HBASE-21102
> URL: https://issues.apache.org/jira/browse/HBASE-21102
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 3.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-21102_1.patch, HBASE-21102_2.patch, 
> HBASE-21102_3.patch, HBASE-21102_initial.patch
>
>
> Currently, when a server hosting region replicas crashes, there is no 
> guarantee that the target server selected for the replica region assignment 
> does not already host another replica of the region being assigned. What 
> happens currently is that we assign randomly, and later the LB comes along, 
> identifies these cases, and does a MOVE for such regions. It would be better 
> if we could identify target servers that at least minimally ensure replicas 
> are not colocated.
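
A hedged sketch of the selection constraint being asked for (the picker and
its inputs are illustrative, not the actual ServerCrashProcedure code; the
replica-equality check is shown via RegionReplicaUtil.isReplicasForSameRegion,
substitute whatever comparison the code base provides):

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionReplicaUtil;

public class ReplicaAwareTargetPicker {
  /**
   * Picks a candidate server hosting no other replica of {@code region};
   * falls back to the first candidate if every server already has one
   * (the balancer can still MOVE it later).
   */
  static ServerName pick(RegionInfo region, List<ServerName> candidates,
      Map<ServerName, List<RegionInfo>> regionsByServer) {
    for (ServerName sn : candidates) {
      boolean colocated = false;
      for (RegionInfo hosted :
          regionsByServer.getOrDefault(sn, Collections.emptyList())) {
        if (RegionReplicaUtil.isReplicasForSameRegion(hosted, region)) {
          colocated = true;
          break;
        }
      }
      if (!colocated) {
        return sn;
      }
    }
    return candidates.get(0); // all colocated; let the balancer fix it later
  }
}
{code}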



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21030) Correct javadoc for append operation

2018-09-03 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602606#comment-16602606
 ] 

Toshihiro Suzuki commented on HBASE-21030:
--

[~stack] Can I push it to branch-2.0? I've already pushed to all branches 
except branch-2.0.

> Correct javadoc for append operation
> 
>
> Key: HBASE-21030
> URL: https://issues.apache.org/jira/browse/HBASE-21030
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Nihal Jain
>Assignee: Subrat Mishra
>Priority: Minor
>  Labels: beginner, beginners
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 1.4.8, 1.2.7, 2.1.1
>
> Attachments: HBASE-21030.master.001.patch
>
>
> The doc for {{append}} operation is incorrect. (see {{@param append}} in the 
> code snippet below or 
> [Table.java#L566|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L566])
> {code:java}
>   /**
>* Appends values to one or more columns within a single row.
>* 
>* This operation guaranteed atomicity to readers. Appends are done
>* under a single row lock, so write operations to a row are synchronized, 
> and
>* readers are guaranteed to see this operation fully completed.
>*
>* @param append object that specifies the columns and amounts to be used
>*  for the increment operations
>* @throws IOException e
>* @return values of columns after the append operation (maybe null)
>*/
> {code}
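
For contrast, a corrected version of that javadoc might read roughly as
follows (a sketch of the intended fix, not necessarily the committed wording):

{code:java}
  /**
   * Appends values to one or more columns within a single row.
   * <p>
   * This operation guarantees atomicity to readers. Appends are done
   * under a single row lock, so write operations to a row are
   * synchronized, and readers are guaranteed to see this operation
   * fully completed.
   *
   * @param append object that specifies the columns and values to be
   *          appended
   * @throws IOException e
   * @return values of columns after the append operation (maybe null)
   */
{code}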



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21030) Correct javadoc for append operation

2018-09-03 Thread Anoop Sam John (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602598#comment-16602598
 ] 

Anoop Sam John commented on HBASE-21030:


We need to commit to all relevant branches. Can you please do that for the 
missing branch(es)?

> Correct javadoc for append operation
> 
>
> Key: HBASE-21030
> URL: https://issues.apache.org/jira/browse/HBASE-21030
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Nihal Jain
>Assignee: Subrat Mishra
>Priority: Minor
>  Labels: beginner, beginners
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 1.4.8, 1.2.7, 2.1.1
>
> Attachments: HBASE-21030.master.001.patch
>
>
> The doc for {{append}} operation is incorrect. (see {{@param append}} in the 
> code snippet below or 
> [Table.java#L566|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L566])
> {code:java}
>   /**
>* Appends values to one or more columns within a single row.
>* 
>* This operation guaranteed atomicity to readers. Appends are done
>* under a single row lock, so write operations to a row are synchronized, 
> and
>* readers are guaranteed to see this operation fully completed.
>*
>* @param append object that specifies the columns and amounts to be used
>*  for the increment operations
>* @throws IOException e
>* @return values of columns after the append operation (maybe null)
>*/
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20892) [UI] Start / End keys are empty on table.jsp

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602596#comment-16602596
 ] 

Hadoop QA commented on HBASE-20892:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}124m 
57s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20892 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938183/HBASE-20892.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 54cd6ee0bfff 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dc79029966 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14292/testReport/ |
| Max. process+thread count | 4765 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14292/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> [UI] Start / End keys are empty on table.jsp
> 
>
> Key: HBASE-20892
> URL: https://issues.apache.org/jira/browse/HBASE-20892
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
>Priority: Major
>  Labels: web-ui
> Attachments: HBASE-20892.master.001.patch, 
> HBASE-20892.master.001.patch, new_table_jsp.png, table.jsp.png
>
>
> When viewing table.jsp?name=TestTable, I found that the Start / End keys for 
> all the regions were simply dashes without real values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21144:
--
Attachment: HBASE-21144-v1.patch

> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144-v1.patch, HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for the meta table are on the same machine:
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to 
> hang forever...
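
A hedged sketch of the kind of stable wait the fix is after: poll until the
replica actually has a location instead of trusting a single waitForAssignment
call (the helper is illustrative; the location lookup would be backed by the
AssignmentManager's region states):

{code:java}
import java.util.function.Supplier;

import org.apache.hadoop.hbase.ServerName;

public class WaitForLocationSketch {
  /**
   * Polls a location supplier until it returns non-null or the timeout
   * expires; returns null if the region never got a location.
   */
  static ServerName waitForLocation(Supplier<ServerName> location,
      long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    ServerName sn = location.get();
    while (sn == null && System.currentTimeMillis() < deadline) {
      Thread.sleep(100); // generous interval for slow build machines
      sn = location.get();
    }
    return sn;
  }
}
{code}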



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-03 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602583#comment-16602583
 ] 

Duo Zhang commented on HBASE-21144:
---

Our build machines are slow so I use a large wait time. Let me fix the 
checkstyle issue and commit it to master to see if it works.

> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for the meta table are on the same machine:
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to 
> hang forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21129) Clean up duplicate codes in #equals and #hashCode methods of Filter

2018-09-03 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-21129:
--
  Resolution: Resolved
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Clean up duplicate codes in #equals and #hashCode methods of Filter
> ---
>
> Key: HBASE-21129
> URL: https://issues.apache.org/jira/browse/HBASE-21129
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21129.master.001.patch, 
> HBASE-21129.master.002.patch, HBASE-21129.master.003.patch, 
> HBASE-21129.master.004.patch, HBASE-21129.master.005.patch, 
> HBASE-21129.master.006.patch, HBASE-21129.master.007.patch, 
> HBASE-21129.master.008.patch
>
>
> It is a follow-up to HBASE-19008, aiming to clean up duplicate code in the 
> #equals and #hashCode methods. 
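
A minimal sketch of the kind of deduplication meant here (an assumed shape,
not the actual patch): define equals/hashCode once in a shared base class, in
terms of the filter's serialized form, instead of repeating the pair in every
Filter subclass.

{code:java}
import java.io.IOException;
import java.util.Arrays;

/** Illustrative base class; the real hierarchy is in org.apache.hadoop.hbase.filter. */
public abstract class FilterBaseSketch {
  /** Serialized form of the filter; Filter subclasses already provide this. */
  public abstract byte[] toByteArray() throws IOException;

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (obj == null || getClass() != obj.getClass()) {
      return false;
    }
    try {
      // same class + identical serialized form => equal filters
      return Arrays.equals(toByteArray(), ((FilterBaseSketch) obj).toByteArray());
    } catch (IOException e) {
      return false;
    }
  }

  @Override
  public int hashCode() {
    try {
      return Arrays.hashCode(toByteArray());
    } catch (IOException e) {
      return 0;
    }
  }
}
{code}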



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21129) Clean up duplicate codes in #equals and #hashCode methods of Filter

2018-09-03 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602575#comment-16602575
 ] 

Reid Chan commented on HBASE-21129:
---

Thanks Ted, pushed to master branch and branch-2.

> Clean up duplicate codes in #equals and #hashCode methods of Filter
> ---
>
> Key: HBASE-21129
> URL: https://issues.apache.org/jira/browse/HBASE-21129
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21129.master.001.patch, 
> HBASE-21129.master.002.patch, HBASE-21129.master.003.patch, 
> HBASE-21129.master.004.patch, HBASE-21129.master.005.patch, 
> HBASE-21129.master.006.patch, HBASE-21129.master.007.patch, 
> HBASE-21129.master.008.patch
>
>
> It is a follow-up to HBASE-19008, aiming to clean up duplicate code in the 
> #equals and #hashCode methods. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21129) Clean up duplicate codes in #equals and #hashCode methods of Filter

2018-09-03 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602561#comment-16602561
 ] 

Ted Yu commented on HBASE-21129:


The failed test doesn't involve Filter.

Go ahead with the commit.

> Clean up duplicate codes in #equals and #hashCode methods of Filter
> ---
>
> Key: HBASE-21129
> URL: https://issues.apache.org/jira/browse/HBASE-21129
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21129.master.001.patch, 
> HBASE-21129.master.002.patch, HBASE-21129.master.003.patch, 
> HBASE-21129.master.004.patch, HBASE-21129.master.005.patch, 
> HBASE-21129.master.006.patch, HBASE-21129.master.007.patch, 
> HBASE-21129.master.008.patch
>
>
> It is a follow-up to HBASE-19008, aiming to clean up duplicate code in the 
> #equals and #hashCode methods. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21129) Clean up duplicate codes in #equals and #hashCode methods of Filter

2018-09-03 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602559#comment-16602559
 ] 

Reid Chan commented on HBASE-21129:
---

Any more comments, Ted?

> Clean up duplicate codes in #equals and #hashCode methods of Filter
> ---
>
> Key: HBASE-21129
> URL: https://issues.apache.org/jira/browse/HBASE-21129
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21129.master.001.patch, 
> HBASE-21129.master.002.patch, HBASE-21129.master.003.patch, 
> HBASE-21129.master.004.patch, HBASE-21129.master.005.patch, 
> HBASE-21129.master.006.patch, HBASE-21129.master.007.patch, 
> HBASE-21129.master.008.patch
>
>
> It is a follow-up to HBASE-19008, aiming to clean up duplicate code in the 
> #equals and #hashCode methods. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21146) (2.0) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21146:
---
Fix Version/s: (was: 2.2.0)
   (was: 1.5.0)
   (was: 3.0.0)
   2.0.3

> (2.0) Add ability for HBase Canary to ignore a configurable number of 
> ZooKeeper down nodes
> --
>
> Key: HBASE-21146
> URL: https://issues.apache.org/jira/browse/HBASE-21146
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: David Manning
>Priority: Minor
> Fix For: 2.0.3
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)
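
A hedged sketch of the proposed check (the flag name comes from the request
above; everything else here is assumed):

{code:java}
import java.util.List;
import java.util.function.Predicate;

public class ZookeeperCanarySketch {
  /**
   * Returns true (healthy) when the number of unreachable ZooKeeper
   * servers is at most the permitted count; with permittedFailures=1 a
   * 5-node ensemble still passes when 4 nodes are reachable.
   */
  static boolean ensembleHealthy(List<String> zkServers,
      int permittedFailures, Predicate<String> reachable) {
    int failures = 0;
    for (String server : zkServers) {
      if (!reachable.test(server)) {
        failures++;
      }
    }
    return failures <= permittedFailures;
  }
}
{code}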



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21145) (2.1) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned HBASE-21145:
--

Assignee: Duo Zhang  (was: David Manning)

> (2.1) Add ability for HBase Canary to ignore a configurable number of 
> ZooKeeper down nodes
> --
>
> Key: HBASE-21145
> URL: https://issues.apache.org/jira/browse/HBASE-21145
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: Duo Zhang
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21145) (2.1) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21145:
---
Fix Version/s: (was: 2.2.0)
   (was: 1.5.0)
   (was: 3.0.0)
   2.1.1

> (2.1) Add ability for HBase Canary to ignore a configurable number of 
> ZooKeeper down nodes
> --
>
> Key: HBASE-21145
> URL: https://issues.apache.org/jira/browse/HBASE-21145
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: Duo Zhang
>Priority: Minor
> Fix For: 2.1.1
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21147) (1.4) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602558#comment-16602558
 ] 

Josh Elser commented on HBASE-21147:


[~apurtell], I see 1.4.7 should pass, but I didn't want to mess up branch-1.4. 
The branch-1 patch applies cleanly. I can commit there with your go-ahead.

> (1.4) Add ability for HBase Canary to ignore a configurable number of 
> ZooKeeper down nodes
> --
>
> Key: HBASE-21147
> URL: https://issues.apache.org/jira/browse/HBASE-21147
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 1.4.8
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21146) (2.0) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned HBASE-21146:
--

Assignee: stack  (was: David Manning)

> (2.0) Add ability for HBase Canary to ignore a configurable number of 
> ZooKeeper down nodes
> --
>
> Key: HBASE-21146
> URL: https://issues.apache.org/jira/browse/HBASE-21146
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: stack
>Priority: Minor
> Fix For: 2.0.3
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21126) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21126:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Add ability for HBase Canary to ignore a configurable number of ZooKeeper 
> down nodes
> 
>
> Key: HBASE-21126
> URL: https://issues.apache.org/jira/browse/HBASE-21126
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: David Manning
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21147) (1.4) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21147:
---
Fix Version/s: (was: 2.2.0)
   (was: 1.5.0)
   (was: 3.0.0)
   1.4.8

> (1.4) Add ability for HBase Canary to ignore a configurable number of 
> ZooKeeper down nodes
> --
>
> Key: HBASE-21147
> URL: https://issues.apache.org/jira/browse/HBASE-21147
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 1.4.8
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21147) (1.4) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)
Josh Elser created HBASE-21147:
--

 Summary: (1.4) Add ability for HBase Canary to ignore a 
configurable number of ZooKeeper down nodes
 Key: HBASE-21147
 URL: https://issues.apache.org/jira/browse/HBASE-21147
 Project: HBase
  Issue Type: Improvement
  Components: canary, Zookeeper
Affects Versions: 1.0.0, 3.0.0, 2.0.0
Reporter: David Manning
Assignee: David Manning
 Fix For: 3.0.0, 1.5.0, 2.2.0
 Attachments: HBASE-21126.branch-1.001.patch, 
HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt

When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
-treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
server in the ensemble. If any server is unavailable or unresponsive, the 
canary will exit with a failure code.

If we use the Canary to gauge server health, and alert accordingly, this can be 
too strict. For example, in a 5-node ZooKeeper cluster, having one node down is 
safe and expected in rolling upgrades/patches.

This is a request to allow the Canary to take another parameter:
{code:java}
-permittedZookeeperFailures <N>{code}
If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.

(This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21147) (1.4) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned HBASE-21147:
--

Assignee: Andrew Purtell  (was: David Manning)

> (1.4) Add ability for HBase Canary to ignore a configurable number of 
> ZooKeeper down nodes
> --
>
> Key: HBASE-21147
> URL: https://issues.apache.org/jira/browse/HBASE-21147
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 1.4.8
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21145) (2.1) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)
Josh Elser created HBASE-21145:
--

 Summary: (2.1) Add ability for HBase Canary to ignore a 
configurable number of ZooKeeper down nodes
 Key: HBASE-21145
 URL: https://issues.apache.org/jira/browse/HBASE-21145
 Project: HBase
  Issue Type: Improvement
  Components: canary, Zookeeper
Affects Versions: 1.0.0, 3.0.0, 2.0.0
Reporter: David Manning
Assignee: David Manning
 Fix For: 3.0.0, 1.5.0, 2.2.0
 Attachments: HBASE-21126.branch-1.001.patch, 
HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt

When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
-treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
server in the ensemble. If any server is unavailable or unresponsive, the 
canary will exit with a failure code.

If we use the Canary to gauge server health, and alert accordingly, this can be 
too strict. For example, in a 5-node ZooKeeper cluster, having one node down is 
safe and expected in rolling upgrades/patches.

This is a request to allow the Canary to take another parameter:
{code:java}
-permittedZookeeperFailures <N>{code}
If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.

(This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21146) (2.0) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)
Josh Elser created HBASE-21146:
--

 Summary: (2.0) Add ability for HBase Canary to ignore a 
configurable number of ZooKeeper down nodes
 Key: HBASE-21146
 URL: https://issues.apache.org/jira/browse/HBASE-21146
 Project: HBase
  Issue Type: Improvement
  Components: canary, Zookeeper
Affects Versions: 1.0.0, 3.0.0, 2.0.0
Reporter: David Manning
Assignee: David Manning
 Fix For: 3.0.0, 1.5.0, 2.2.0
 Attachments: HBASE-21126.branch-1.001.patch, 
HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt

When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
-treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
server in the ensemble. If any server is unavailable or unresponsive, the 
canary will exit with a failure code.

If we use the Canary to gauge server health, and alert accordingly, this can be 
too strict. For example, in a 5-node ZooKeeper cluster, having one node down is 
safe and expected in rolling upgrades/patches.

This is a request to allow the Canary to take another parameter:
{code:java}
-permittedZookeeperFailures <N>{code}
If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.

(This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21126) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21126:
---
Fix Version/s: (was: 1.3.3)

> Add ability for HBase Canary to ignore a configurable number of ZooKeeper 
> down nodes
> 
>
> Key: HBASE-21126
> URL: https://issues.apache.org/jira/browse/HBASE-21126
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: David Manning
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21126) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21126:
---
Fix Version/s: (was: 1.4.8)

> Add ability for HBase Canary to ignore a configurable number of ZooKeeper 
> down nodes
> 
>
> Key: HBASE-21126
> URL: https://issues.apache.org/jira/browse/HBASE-21126
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: David Manning
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21126) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21126:
---
Fix Version/s: (was: 1.3.0)
   (was: 2.0.0)
   1.4.8
   2.2.0
   1.3.3
   1.5.0

> Add ability for HBase Canary to ignore a configurable number of ZooKeeper 
> down nodes
> 
>
> Key: HBASE-21126
> URL: https://issues.apache.org/jira/browse/HBASE-21126
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: David Manning
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 1.4.8
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21126) Add ability for HBase Canary to ignore a configurable number of ZooKeeper down nodes

2018-09-03 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602555#comment-16602555
 ] 

Josh Elser commented on HBASE-21126:


This looks good to me. Thanks for the branch-1 patch too, David.

Will spin out some clones of this issue to get RM approval where applicable.

> Add ability for HBase Canary to ignore a configurable number of ZooKeeper 
> down nodes
> 
>
> Key: HBASE-21126
> URL: https://issues.apache.org/jira/browse/HBASE-21126
> Project: HBase
>  Issue Type: Improvement
>  Components: canary, Zookeeper
>Affects Versions: 1.0.0, 3.0.0, 2.0.0
>Reporter: David Manning
>Assignee: David Manning
>Priority: Minor
> Fix For: 3.0.0, 1.3.0, 2.0.0
>
> Attachments: HBASE-21126.branch-1.001.patch, 
> HBASE-21126.master.001.patch, HBASE-21126.master.002.patch, 
> HBASE-21126.master.003.patch, zookeeperCanaryLocalTestValidation.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running org.apache.hadoop.hbase.tool.Canary with args -zookeeper 
> -treatFailureAsError, the Canary will try to get a znode from each ZooKeeper 
> server in the ensemble. If any server is unavailable or unresponsive, the 
> canary will exit with a failure code.
> If we use the Canary to gauge server health, and alert accordingly, this can 
> be too strict. For example, in a 5-node ZooKeeper cluster, having one node 
> down is safe and expected in rolling upgrades/patches.
> This is a request to allow the Canary to take another parameter:
> {code:java}
> -permittedZookeeperFailures <N>{code}
> If N=1 in the 5-node ZooKeeper ensemble example, then the Canary will still 
> pass if 4 ZooKeeper nodes are reachable, but fail if 3 or fewer are reachable.
> (This is my first Jira posting... sorry if I messed anything up.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20892) [UI] Start / End keys are empty on table.jsp

2018-09-03 Thread Guangxu Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602553#comment-16602553
 ] 

Guangxu Cheng commented on HBASE-20892:
---

Re-trigger QA bot.

> [UI] Start / End keys are empty on table.jsp
> 
>
> Key: HBASE-20892
> URL: https://issues.apache.org/jira/browse/HBASE-20892
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
>Priority: Major
>  Labels: web-ui
> Attachments: HBASE-20892.master.001.patch, 
> HBASE-20892.master.001.patch, new_table_jsp.png, table.jsp.png
>
>
> When viewing table.jsp?name=TestTable, I found that the Start / End keys for 
> all the regions were simply dashes without real values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20892) [UI] Start / End keys are empty on table.jsp

2018-09-03 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-20892:
--
Attachment: HBASE-20892.master.001.patch

> [UI] Start / End keys are empty on table.jsp
> 
>
> Key: HBASE-20892
> URL: https://issues.apache.org/jira/browse/HBASE-20892
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Ted Yu
>Assignee: Guangxu Cheng
>Priority: Major
>  Labels: web-ui
> Attachments: HBASE-20892.master.001.patch, 
> HBASE-20892.master.001.patch, new_table_jsp.png, table.jsp.png
>
>
> When viewing table.jsp?name=TestTable, I found that the Start / End keys for 
> all the regions were simply dashes without real values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21127) TableRecordReader need to handle cursor result too

2018-09-03 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602538#comment-16602538
 ] 

Guanghao Zhang commented on HBASE-21127:


ping [~stack] for 2.0 and [~apurtell] for 1.4.

> TableRecordReader need to handle cursor result too
> --
>
> Key: HBASE-21127
> URL: https://issues.apache.org/jira/browse/HBASE-21127
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.0, 1.5.0, 2.0.1, 2.2.0, 1.4.7
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21127.master.001.patch, 
> HBASE-21127.master.002.patch, HBASE-21127.master.002.patch
>
>
> TableRecordReaderImpl needs to handle cursor results too. If not, 
> nextKeyValue may return false and miss some data when it gets a cursor result.
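
A hedged sketch of the fix being described (not the committed patch): keep
reading when the scanner hands back a cursor-only result instead of treating
it as end-of-data. Result#isCursor() is the existing client API; the
surrounding reader class is illustrative.

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;

/** Illustrative reader fragment, not the actual TableRecordReaderImpl. */
class CursorAwareReaderSketch {
  private final ResultScanner scanner;
  private Result value;

  CursorAwareReaderSketch(ResultScanner scanner) {
    this.scanner = scanner;
  }

  boolean nextKeyValue() throws IOException {
    Result result = scanner.next();
    // A cursor result carries scan progress, not row data: skip it and
    // keep reading instead of reporting end-of-data.
    while (result != null && result.isCursor()) {
      result = scanner.next();
    }
    if (result == null) {
      return false; // genuinely no more rows
    }
    value = result;
    return true;
  }

  Result getCurrentValue() {
    return value;
  }
}
{code}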



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21127) TableRecordReader need to handle cursor result too

2018-09-03 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602534#comment-16602534
 ] 

Duo Zhang commented on HBASE-21127:
---

+1.

> TableRecordReader need to handle cursor result too
> --
>
> Key: HBASE-21127
> URL: https://issues.apache.org/jira/browse/HBASE-21127
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.0, 1.5.0, 2.0.1, 2.2.0, 1.4.7
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21127.master.001.patch, 
> HBASE-21127.master.002.patch, HBASE-21127.master.002.patch
>
>
> TableRecordReaderImpl needs to handle cursor results too. If not, 
> nextKeyValue may return false and miss some data when it gets a cursor result.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20952) Re-visit the WAL API

2018-09-03 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602527#comment-16602627
 ] 

Josh Elser edited comment on HBASE-20952 at 9/4/18 1:38 AM:


[~yuzhih...@gmail.com] wanna put this on reviewboard? I'm expecting significant 
feedback :)

Also, patch needs an explanation covering:
 * What is the "new API"?
 * What was the reasoning behind this API?
 * What's this "ListWAL" thing?

Thanks.


was (Author: elserj):
[~yuzhih...@gmail.com] wanna put this on reviewboard? I'm expecting significant 
feedback :)

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and 
> backup Replication has the use-case for "tail"'ing the WAL which we 
> should provide via our new API. B doesn't do anything fancy (IIRC). We 
> should make sure all consumers are generally going to be OK with the API we 
> create.
> The API may be "OK" (or OK in a part). We need to also consider other methods 
> which were "bolted" on such as {{AbstractFSWAL}} and 
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the 
> {{WALSplitter}} should also be looked at to use WAL-APIs only).
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.
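
As a strawman for that discussion, one minimal shape such an API could take
(purely illustrative; not the API in the attached patch):

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.CompletableFuture;

/** Strawman WAL surface: append, make durable, and tail for consumers. */
interface WriteAheadLogSketch extends Closeable {
  /** Appends an edit; the future completes with its sequence id once queued. */
  CompletableFuture<Long> append(byte[] edit) throws IOException;

  /** Blocks until everything up to {@code sequenceId} is durable. */
  void sync(long sequenceId) throws IOException;

  /** Returns a reader positioned after {@code sequenceId}, for tailing. */
  Reader tail(long sequenceId) throws IOException;

  interface Reader extends Closeable {
    /** Next edit, or null if none is available yet. */
    byte[] next() throws IOException;
  }
}
{code}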



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-03 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602526#comment-16602526
 ] 

Josh Elser commented on HBASE-20952:


{quote}Just need to see the API. Would like to have some provenance on what 
informed the choice made – what was studied, what implementations were 
considered, that sort of thing. Thanks.
{quote}
Yup, that's what's coming here.

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and 
> backup Replication has the use-case for "tail"'ing the WAL which we 
> should provide via our new API. B doesn't do anything fancy (IIRC). We 
> should make sure all consumers are generally going to be OK with the API we 
> create.
> The API may be "OK" (or OK in a part). We need to also consider other methods 
> which were "bolted" on such as {{AbstractFSWAL}} and 
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the 
> {{WALSplitter}} should also be looked at to use WAL-APIs only).
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-03 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602527#comment-16602527
 ] 

Josh Elser commented on HBASE-20952:


[~yuzhih...@gmail.com] wanna put this on reviewboard? I'm expecting significant 
feedback :)

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like; see RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and
> backup. Replication has the use-case for "tail"'ing the WAL, which we
> should provide via our new API. Backup doesn't do anything fancy (IIRC). We
> should make sure all consumers are generally going to be OK with the API we
> create.
> The API may be "OK" (or OK in part). We also need to consider other methods
> which were "bolted" on, such as {{AbstractFSWAL}} and
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the
> {{WALSplitter}}) should also be looked at to use WAL-APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21030) Correct javadoc for append operation

2018-09-03 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602499#comment-16602499
 ] 

Toshihiro Suzuki commented on HBASE-21030:
--

Oh, I thought Stack meant the javadoc for the API. Yes, it's not in the doc book. [~stack]

> Correct javadoc for append operation
> 
>
> Key: HBASE-21030
> URL: https://issues.apache.org/jira/browse/HBASE-21030
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Nihal Jain
>Assignee: Subrat Mishra
>Priority: Minor
>  Labels: beginner, beginners
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 1.4.8, 1.2.7, 2.1.1
>
> Attachments: HBASE-21030.master.001.patch
>
>
> The doc for {{append}} operation is incorrect. (see {{@param append}} in the 
> code snippet below or 
> [Table.java#L566|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L566])
> {code:java}
>   /**
>* Appends values to one or more columns within a single row.
>* 
>* This operation guaranteed atomicity to readers. Appends are done
>* under a single row lock, so write operations to a row are synchronized, 
> and
>* readers are guaranteed to see this operation fully completed.
>*
>* @param append object that specifies the columns and amounts to be used
>*  for the increment operations
>* @throws IOException e
>* @return values of columns after the append operation (maybe null)
>*/
> {code}
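
For reference, a sketch of what the corrected wording might look like (hypothetical phrasing, not the attached patch; the essential fix is that "increment" should read "append"):
{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Result;

// Sketch only: a stand-in interface used to show corrected javadoc wording.
interface AppendDocSketch {
  /**
   * Appends values to one or more columns within a single row.
   * <p>
   * This operation guarantees atomicity to readers. Appends are done under
   * a single row lock, so write operations to a row are synchronized, and
   * readers are guaranteed to see this operation fully completed.
   *
   * @param append object that specifies the columns and values to be used
   *   for the append operations
   * @throws IOException e
   * @return values of columns after the append operation (maybe null)
   */
  Result append(Append append) throws IOException;
}
{code}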



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-03 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602495#comment-16602495
 ] 

stack commented on HBASE-21144:
---

+1 Go for it.

This seems a bit of a long wait though...  Add a note that it's over-the-top for 
later examination on commit?

Waiter.waitFor(am.getConfiguration(), 1,
  () -> am.getRegionStates().getRegionStateNode(regionInfo) != null);
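
For readers unfamiliar with the idiom: {{Waiter.waitFor(conf, timeoutMs, predicate)}} polls the predicate until it holds or the timeout elapses. A self-contained sketch of the pattern (illustrative values, not from the patch; {{Waiter}} is a test-scoped utility in hbase-common):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.Waiter;

public class WaitForSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    final long start = System.currentTimeMillis();
    // Poll until the condition holds, giving up when the (made-up) 10s
    // bound elapses first.
    Waiter.waitFor(conf, 10_000,
        () -> System.currentTimeMillis() - start > 100);
  }
}
{code}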

> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to hang 
> forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-03 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602489#comment-16602489
 ] 

stack commented on HBASE-20952:
---

Just need to see the API. Would like to have some provenance on what informed 
the choice made -- what was studied, what implementations were considered, that 
sort of thing. Thanks.

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like; see RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and
> backup. Replication has the use-case for "tail"'ing the WAL, which we
> should provide via our new API. Backup doesn't do anything fancy (IIRC). We
> should make sure all consumers are generally going to be OK with the API we
> create.
> The API may be "OK" (or OK in part). We also need to consider other methods
> which were "bolted" on, such as {{AbstractFSWAL}} and
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the
> {{WALSplitter}}) should also be looked at to use WAL-APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-03 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602483#comment-16602483
 ] 

Sergey Soldatov commented on HBASE-20952:
-

bq. Will the old FB's hydrabase impl help here in determining the APIs needed 
here?

If we are talking about HBASE-12259, then no. Actually, most of the work for 
Hydrabase went into the consensus protocol implementation, with only a few 
attempts to apply it to the WAL system itself (those were eventually dropped, 
since the hbase-consensus module was not accepted). We don't want to add our 
own implementation of a quorum-based consensus protocol. We want to make the 
current WAL system flexible enough to build a new WAL implementation on either 
a 3rd-party consensus protocol implementation (Raft/Paxos/etc.) or an existing 
distributed log implementation (Apache Kafka, Apache BookKeeper, etc.). The 
interfaces should be simple, with a meaningful public contract, and the number 
of interfaces to implement should be reasonable as well. 
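
To make that concrete, here is a minimal sketch of the shape such a contract could take (hypothetical names and signatures, not the attached patch):
{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.Iterator;

// A deliberately small, hypothetical WAL contract: append entries, make
// them durable, and let replication-style consumers tail the log. A real
// API would also carry interface audience/stability annotations.
interface WriteAheadLogSketch extends Closeable {
  /** Appends an entry and returns its sequence id; not yet durable. */
  long append(byte[] entry) throws IOException;

  /** Blocks until all entries up to and including txid are durable. */
  void sync(long txid) throws IOException;

  /** Tails entries starting at fromTxid (replication/backup use-case). */
  Iterator<byte[]> tail(long fromTxid) throws IOException;
}
{code}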

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like; see RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and
> backup. Replication has the use-case for "tail"'ing the WAL, which we
> should provide via our new API. Backup doesn't do anything fancy (IIRC). We
> should make sure all consumers are generally going to be OK with the API we
> create.
> The API may be "OK" (or OK in part). We also need to consider other methods
> which were "bolted" on, such as {{AbstractFSWAL}} and
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the
> {{WALSplitter}}) should also be looked at to use WAL-APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20952) Re-visit the WAL API

2018-09-03 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20952:
---
Attachment: 20952.v1.txt

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like; see RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and
> backup. Replication has the use-case for "tail"'ing the WAL, which we
> should provide via our new API. Backup doesn't do anything fancy (IIRC). We
> should make sure all consumers are generally going to be OK with the API we
> create.
> The API may be "OK" (or OK in part). We also need to consider other methods
> which were "bolted" on, such as {{AbstractFSWAL}} and
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the
> {{WALSplitter}}) should also be looked at to use WAL-APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-03 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602414#comment-16602414
 ] 

Ted Yu commented on HBASE-20952:


The mega patch would be the aggregate diff for all the work in the wal-refactor 
repository. 

I can attach it here but wonder what format would be easier for the community 
to review.

Sergey has some work outside the wal-refactor repo.

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like; see RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and
> backup. Replication has the use-case for "tail"'ing the WAL, which we
> should provide via our new API. Backup doesn't do anything fancy (IIRC). We
> should make sure all consumers are generally going to be OK with the API we
> create.
> The API may be "OK" (or OK in part). We also need to consider other methods
> which were "bolted" on, such as {{AbstractFSWAL}} and
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the
> {{WALSplitter}}) should also be looked at to use WAL-APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-03 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602384#comment-16602384
 ] 

Josh Elser commented on HBASE-20952:


{quote}Will the old FB's hydrabase impl help here in determining the APIs 
needed here?
{quote}
Good question! I haven't looked closely enough to say for sure. I looked 
through the patch previously, but I don't recall if there was a clear notion of 
an API. I'll have to take a look.

I know that [~yuzhih...@gmail.com], [~an...@apache.org], and [~sergey.soldatov] 
have been iterating on this. I was hoping that we would have an initial patch 
posted this past Friday, but that obviously didn't happen :)

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like; see RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and
> backup. Replication has the use-case for "tail"'ing the WAL, which we
> should provide via our new API. Backup doesn't do anything fancy (IIRC). We
> should make sure all consumers are generally going to be OK with the API we
> create.
> The API may be "OK" (or OK in part). We also need to consider other methods
> which were "bolted" on, such as {{AbstractFSWAL}} and
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the
> {{WALSplitter}}) should also be looked at to use WAL-APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20766) Verify Replication Tool Has Typo "remove cluster"

2018-09-03 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602271#comment-16602271
 ] 

Sean Busbey commented on HBASE-20766:
-

I'm not putting this in Patch Available because I think Test Patch will get the 
wrong thing to check. I'll post a manual run before merging.

> Verify Replication Tool Has Typo "remove cluster"
> -
>
> Key: HBASE-20766
> URL: https://issues.apache.org/jira/browse/HBASE-20766
> Project: HBase
>  Issue Type: Bug
>Reporter: Clay B.
>Assignee: Ferran Fernandez Garrido
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-20766.master.001.patch
>
>
> The verify replication tool has a trivial typo "remove cluster" instead of 
> "remote cluster": 
> https://github.com/apache/hbase/blob/a6eeb26cc0b4d0af3fff50b5b931b6847df1f9d2/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java#L355



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20766) Verify Replication Tool Has Typo "remove cluster"

2018-09-03 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-20766:
---

Assignee: Ferran Fernandez Garrido

> Verify Replication Tool Has Typo "remove cluster"
> -
>
> Key: HBASE-20766
> URL: https://issues.apache.org/jira/browse/HBASE-20766
> Project: HBase
>  Issue Type: Bug
>Reporter: Clay B.
>Assignee: Ferran Fernandez Garrido
>Priority: Trivial
>  Labels: beginner
> Attachments: HBASE-20766.master.001.patch
>
>
> The verify replication tool has a trivial typo "remove cluster" instead of 
> "remote cluster": 
> https://github.com/apache/hbase/blob/a6eeb26cc0b4d0af3fff50b5b931b6847df1f9d2/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java#L355



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602239#comment-16602239
 ] 

Hadoop QA commented on HBASE-21144:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 1s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
9s{color} | {color:red} hbase-server: The patch generated 3 new + 273 unchanged 
- 2 fixed = 276 total (was 275) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 2s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 20s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}228m  5s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}262m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.tool.TestLoadIncrementalHFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21144 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938119/HBASE-21144.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e52ac72ffe6f 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dc79029966 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14291/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| unit | 

[jira] [Commented] (HBASE-21139) Concurrent invocations of MetricsTableAggregateSourceImpl.getOrCreateTableSource may return unregistered MetricsTableSource

2018-09-03 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602156#comment-16602156
 ] 

Ted Yu commented on HBASE-21139:


The lazy table metrics registration results in a penalty for the first flushes.
An excerpt from the log shows the delay (note the same timestamp, 08:18:23,234):
{code}
2018-09-02 08:18:23,232 DEBUG 
[rs(hw13463.attlocal.net,52760,1535901497280)-snapshot-pool10-thread-2] 
regionserver.MetricsTableSourceImpl(124): Creating new  
MetricsTableSourceImpl for table 'testtb-1535901500805'
2018-09-02 08:18:23,233 DEBUG 
[rs(hw13463.attlocal.net,52760,1535901497280)-snapshot-pool10-thread-2] 
regionserver.MetricsTableSourceImpl(137): registering metrics for testtb-   
1535901500805
2018-09-02 08:18:23,234 INFO  
[rs(hw13463.attlocal.net,52760,1535901497280)-snapshot-pool10-thread-1] 
regionserver.HRegion(2822): Finished flush of dataSize ~2.29 KB/2343,   
heapSize ~5.16 KB/5280, currentSize=0 B/0 for fa403f6a4fb8dbc1a1c389744fce2d58 
in 280ms, sequenceid=5, compaction requested=false
2018-09-02 08:18:23,234 DEBUG 
[rs(hw13463.attlocal.net,52758,1535901497238)-snapshot-pool11-thread-1] 
regionserver.MetricsTableAggregateSourceImpl(84): it took 6 ms to register  
testtb-1535901500805 
Thread[rs(hw13463.attlocal.net,52758,1535901497238)-snapshot-pool11-thread-1,5,FailOnTimeoutGroup]
2018-09-02 08:18:23,234 DEBUG 
[rs(hw13463.attlocal.net,52760,1535901497280)-snapshot-pool10-thread-1] 
regionserver.MetricsTableAggregateSourceImpl(84): it took 0 ms to register  
testtb-1535901500805 
Thread[rs(hw13463.attlocal.net,52760,1535901497280)-snapshot-pool10-thread-1,5,FailOnTimeoutGroup]
2018-09-02 08:18:23,234 DEBUG 
[rs(hw13463.attlocal.net,52762,1535901497314)-snapshot-pool9-thread-1] 
regionserver.MetricsTableAggregateSourceImpl(84): it took 6 ms to register   
testtb-1535901500805 
Thread[rs(hw13463.attlocal.net,52762,1535901497314)-snapshot-pool9-thread-1,5,FailOnTimeoutGroup]
2018-09-02 08:18:23,234 DEBUG 
[rs(hw13463.attlocal.net,52762,1535901497314)-snapshot-pool9-thread-2] 
regionserver.MetricsTableAggregateSourceImpl(84): it took 6 ms to register   
testtb-1535901500805 
Thread[rs(hw13463.attlocal.net,52762,1535901497314)-snapshot-pool9-thread-2,5,FailOnTimeoutGroup]
2018-09-02 08:18:23,234 DEBUG 
[rs(hw13463.attlocal.net,52758,1535901497238)-snapshot-pool11-thread-2] 
regionserver.MetricsTableAggregateSourceImpl(84): it took 5 ms to register  
testtb-1535901500805 
Thread[rs(hw13463.attlocal.net,52758,1535901497238)-snapshot-pool11-thread-2,5,FailOnTimeoutGroup]
2018-09-02 08:18:23,234 DEBUG 
[rs(hw13463.attlocal.net,52760,1535901497280)-snapshot-pool10-thread-2] 
regionserver.MetricsTableAggregateSourceImpl(84): it took 6 ms to register  
testtb-1535901500805 
Thread[rs(hw13463.attlocal.net,52760,1535901497280)-snapshot-pool10-thread-2,5,FailOnTimeoutGroup]
{code}
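
One way to avoid handing out an unregistered source would be to finish registration inside the {{computeIfAbsent}} mapping function, so the source is never published before it is registered. A sketch with hypothetical names (not the actual MetricsTableAggregateSourceImpl code):
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class TableSourceCacheSketch {
  interface TableSource {}

  private final ConcurrentMap<String, TableSource> sources =
      new ConcurrentHashMap<>();

  TableSource getOrCreateTableSource(String table) {
    // The mapping function runs at most once per key, and concurrent
    // callers block until it returns, so no thread can observe the source
    // before register() has completed.
    return sources.computeIfAbsent(table, t -> {
      TableSource source = new TableSource() {};
      register(t, source);
      return source;
    });
  }

  private void register(String table, TableSource source) {
    // registration with the metrics system would happen here
  }
}
{code}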

> Concurrent invocations of 
> MetricsTableAggregateSourceImpl.getOrCreateTableSource may return 
> unregistered MetricsTableSource
> ---
>
> Key: HBASE-21139
> URL: https://issues.apache.org/jira/browse/HBASE-21139
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Major
>
> From test output of TestRestoreFlushSnapshotFromClient :
> {code}
> 2018-09-01 21:09:38,174 WARN  [member: 
> 'hw13463.attlocal.net,49623,1535861370108' subprocedure-pool6-thread-1] 
> snapshot.  
> RegionServerSnapshotManager$SnapshotSubprocedurePool(348): Got Exception in 
> SnapshotSubprocedurePool
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.hadoop.hbase.regionserver.snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool.waitForOutstandingTasks(RegionServerSnapshotManager.java:324)
>   at 
> org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.flushSnapshot(FlushSnapshotSubprocedure.java:173)
>   at 
> org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.insideBarrier(FlushSnapshotSubprocedure.java:193)
>   at 
> org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:189)
>   at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:53)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
>   at 
> 

[jira] [Commented] (HBASE-21136) NPE in MetricsTableSourceImpl.updateFlushTime

2018-09-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602111#comment-16602111
 ] 

Hudson commented on HBASE-21136:


Results for branch master
[build #470 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/470/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/470//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/470//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/470//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> NPE in MetricsTableSourceImpl.updateFlushTime
> -
>
> Key: HBASE-21136
> URL: https://issues.apache.org/jira/browse/HBASE-21136
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Guanghao Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21136-v1.patch, HBASE-21136.patch
>
>
> See https://builds.apache.org/job/PreCommit-HBASE-Build/14260/testReport/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-03 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602101#comment-16602101
 ] 

ramkrishna.s.vasudevan commented on HBASE-20952:


Will the old FB's hydrabase impl help here in determining the APIs needed here?

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like; see RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and
> backup. Replication has the use-case for "tail"'ing the WAL, which we
> should provide via our new API. Backup doesn't do anything fancy (IIRC). We
> should make sure all consumers are generally going to be OK with the API we
> create.
> The API may be "OK" (or OK in part). We also need to consider other methods
> which were "bolted" on, such as {{AbstractFSWAL}} and
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the
> {{WALSplitter}}) should also be looked at to use WAL-APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21129) Clean up duplicate codes in #equals and #hashCode methods of Filter

2018-09-03 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602044#comment-16602044
 ] 

Reid Chan commented on HBASE-21129:
---

Unrelated failed test.

> Clean up duplicate codes in #equals and #hashCode methods of Filter
> ---
>
> Key: HBASE-21129
> URL: https://issues.apache.org/jira/browse/HBASE-21129
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21129.master.001.patch, 
> HBASE-21129.master.002.patch, HBASE-21129.master.003.patch, 
> HBASE-21129.master.004.patch, HBASE-21129.master.005.patch, 
> HBASE-21129.master.006.patch, HBASE-21129.master.007.patch, 
> HBASE-21129.master.008.patch
>
>
> It is a follow-up of HBASE-19008, aiming to clean up duplicate codes in 
> #equals and #hashCode methods. 
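
For illustration, the kind of cleanup this enables: implement {{equals}}/{{hashCode}} once in a common parent and have subclasses supply only the fields that matter. A sketch with hypothetical names, not the actual patch:
{code:java}
import java.util.Arrays;
import java.util.Objects;

// Hypothetical parent that centralizes the equals/hashCode boilerplate
// otherwise repeated in every Filter subclass.
abstract class FilterBaseSketch {
  /** Subclasses expose the fields that define filter equality. */
  protected abstract Object[] equalityFields();

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (obj == null || getClass() != obj.getClass()) {
      return false;
    }
    FilterBaseSketch other = (FilterBaseSketch) obj;
    return Arrays.equals(equalityFields(), other.equalityFields());
  }

  @Override
  public int hashCode() {
    // Objects.hash delegates to Arrays.hashCode, matching equals above.
    return Objects.hash(equalityFields());
  }
}
{code}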



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21132) return wrong result in rest multiget

2018-09-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602043#comment-16602043
 ] 

Hudson commented on HBASE-21132:


Results for branch branch-1
[build #441 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/441/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/441//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/441//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/441//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> return wrong result in rest multiget
> 
>
> Key: HBASE-21132
> URL: https://issues.apache.org/jira/browse/HBASE-21132
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21132-branch-1-addendum.patch, 
> HBASE-21132.master.001.patch, HBASE-21132.master.001.patch, 
> HBASE-21132.master.001.patch, HBASE-21132.master.002.patch
>
>
> There are two ways to specify columns in the multi-get feature.
> 1. Specify columns in PathParam, as described in HBASE-15870. Example:
> {code:sh}GET /t1/multiget/cf1:c1,cf2?row=r1{code}
> 2. Specify columns in QueryParam. Example:
> {code:sh}GET /t1/multiget?row=r1/cf1:c1,cf2&row=r2/cf2{code}
> However, when we specify columns in QueryParam, the result is wrong: the 
> rowkey contains the columns.
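
To make the two request shapes concrete, a client-side sketch (assumes a REST gateway at localhost:8080; illustrative only, not from the patch):
{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class MultiGetSketch {
  public static void main(String[] args) throws Exception {
    String[] uris = {
      // Columns in the path (HBASE-15870 style):
      "http://localhost:8080/t1/multiget/cf1:c1,cf2?row=r1",
      // Columns in the query parameters (the case this issue fixes):
      "http://localhost:8080/t1/multiget?row=r1/cf1:c1,cf2&row=r2/cf2"
    };
    for (String uri : uris) {
      HttpURLConnection conn =
          (HttpURLConnection) new URL(uri).openConnection();
      conn.setRequestProperty("Accept", "application/json");
      System.out.println(uri + " -> HTTP " + conn.getResponseCode());
      conn.disconnect();
    }
  }
}
{code}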



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21129) Clean up duplicate codes in #equals and #hashCode methods of Filter

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602012#comment-16602012
 ] 

Hadoop QA commented on HBASE-21129:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
21s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} The patch hbase-client passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} hbase-server: The patch generated 0 new + 35 
unchanged - 1 fixed = 35 total (was 36) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} The patch hbase-spark passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
47s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 24s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 25s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
15s{color} | {color:green} hbase-spark in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  3m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}195m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestBlockEvictionFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21129 |
| JIRA Patch URL | 

[jira] [Updated] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21144:
--
Component/s: test
 amv2

> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to hang 
> forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21144:
--
 Assignee: Duo Zhang
Fix Version/s: 2.0.3
   2.1.1
   2.2.0
   3.0.0
   Status: Patch Available  (was: Open)

> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to hang 
> forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21144:
--
Attachment: HBASE-21144.patch

> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Priority: Major
> Attachments: HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to hang 
> forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21001) ReplicationObserver fails to load in HBase 2.0.0

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16601964#comment-16601964
 ] 

Hadoop QA commented on HBASE-21001:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
53s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
55s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 23s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}258m 44s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}292m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.replication.TestReplicationSmallTests |
|   | hadoop.hbase.regionserver.TestRowTooBig |
|   | hadoop.hbase.tool.TestLoadIncrementalHFiles |
|   | hadoop.hbase.regionserver.TestRegionReplicaFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21001 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938087/HBASE-21001.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux f23f1ff1921e 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / dc79029966 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 

[jira] [Commented] (HBASE-20741) Split of a region with replicas creates all daughter regions and its replica in same server

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16601949#comment-16601949
 ] 

Hadoop QA commented on HBASE-20741:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 7s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
56s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}261m  5s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}295m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.client.replication.TestReplicationAdminWithClusters |
|   | hadoop.hbase.master.procedure.TestTruncateTableProcedure |
|   | hadoop.hbase.replication.TestReplicationSmallTests |
|   | hadoop.hbase.tool.TestLoadIncrementalHFiles |
|   | hadoop.hbase.regionserver.TestRegionReplicaFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938085/HBASE-20741_5.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b2b212969573 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dc79029966 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| unit | 

[jira] [Commented] (HBASE-21127) TableRecordReader need to handle cursor result too

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16601902#comment-16601902
 ] 

Hadoop QA commented on HBASE-21127:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
16s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
27s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 58s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
1s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21127 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938101/HBASE-21127.master.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3d2ee703f7b4 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dc79029966 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14290/testReport/ |
| Max. process+thread count | 2533 (vs. ulimit of 1) |
| modules | C: hbase-mapreduce U: hbase-mapreduce |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14290/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> TableRecordReader 

[jira] [Created] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-03 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-21144:
-

 Summary: AssignmentManager.waitForAssignment is not stable
 Key: HBASE-21144
 URL: https://issues.apache.org/jira/browse/HBASE-21144
 Project: HBase
  Issue Type: Bug
Reporter: Duo Zhang


https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/

All replicas of the meta table are on the same machine:
{noformat}
2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
asf904.gq1.ygridcore.net,47561,1535917740998
2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
asf904.gq1.ygridcore.net,55408,1535917768453
2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
asf904.gq1.ygridcore.net,55408,1535917768453
{noformat}

But after calling am.waitForAssignment, the region location is still null...
{noformat}
2018-09-02 19:49:32,414 INFO  [Time-limited test] 
client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
hbase:meta,,1_0001.534574363 on null
2018-09-02 19:49:32,844 INFO  [Time-limited test] 
client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
hbase:meta,,1_0002.1657623790 on null
{noformat}
So we will not balance the replicas, which causes TestMetaWithReplicas to hang 
forever...
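
A minimal sketch of the kind of polling a test could fall back on instead of 
trusting a single am.waitForAssignment call: keep re-reading the location until 
it is non-null or a deadline passes. The Supplier is a hypothetical stand-in 
for whatever meta lookup the test performs for the replica region; this is an 
illustration, not the fix itself.

{code:java}
import java.util.function.Supplier;
import org.apache.hadoop.hbase.HRegionLocation;

public final class WaitForLocation {
  /**
   * Poll until the lookup yields a non-null location or the timeout expires.
   * The Supplier is a hypothetical stand-in for the meta lookup the test
   * performs for the replica region.
   */
  static HRegionLocation waitForVisibleLocation(Supplier<HRegionLocation> lookup,
      long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      HRegionLocation loc = lookup.get();
      if (loc != null) {
        return loc;  // the assignment is now visible to the client
      }
      Thread.sleep(200);  // back off briefly before retrying
    }
    return null;  // still unassigned (or not yet visible) after the timeout
  }
}
{code}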



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21127) TableRecordReader need to handle cursor result too

2018-09-03 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-21127:
---
Attachment: HBASE-21127.master.002.patch

> TableRecordReader need to handle cursor result too
> --
>
> Key: HBASE-21127
> URL: https://issues.apache.org/jira/browse/HBASE-21127
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.0, 1.5.0, 2.0.1, 2.2.0, 1.4.7
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-21127.master.001.patch, 
> HBASE-21127.master.002.patch, HBASE-21127.master.002.patch
>
>
> TableRecordReaderImpl needs to handle cursor results too. If not, 
> nextKeyValue may return false and miss some data when it gets a cursor result.
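
A minimal sketch of the idea, assuming the reader wraps a ResultScanner: a 
cursor Result carries no cells, so it should be skipped rather than treated as 
a real row or as end-of-data. Result#isCursor() is the check the scan-cursor 
feature provides; CursorAwareReader and nextRealRow are illustrative names, 
not the attached patch.

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;

public final class CursorAwareReader {
  /**
   * Return the next real row, skipping cursor results. A cursor Result is a
   * progress marker with no cells, so treating it as end-of-data (or as a
   * row) would drop data -- which is the bug this issue describes.
   */
  static Result nextRealRow(ResultScanner scanner) throws IOException {
    Result r = scanner.next();
    while (r != null && r.isCursor()) {
      r = scanner.next();  // cursor marker: keep scanning
    }
    return r;  // null only when the scan is truly exhausted
  }
}
{code}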



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21142) ReopenTableRegionsProcedure sometimes hangs

2018-09-03 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16601846#comment-16601846
 ] 

Duo Zhang commented on HBASE-21142:
---

It seems to be related to the machine...

On H16, we will get a bunch of failing UTs

{noformat}
org.apache.hadoop.hbase.rsgroup.TestRSGroups.org.apache.hadoop.hbase.rsgroup.TestRSGroups
org.apache.hadoop.hbase.rsgroup.TestRSGroups.org.apache.hadoop.hbase.rsgroup.TestRSGroups
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas.org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas.org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas.org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas.org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
org.apache.hadoop.hbase.master.TestAssignmentManagerMetrics.testRITAssignmentManagerMetrics
org.apache.hadoop.hbase.master.procedure.TestProcedurePriority.test
org.apache.hadoop.hbase.replication.TestSyncReplicationMoreLogsInLocalGiveUpSplitting.testSplitLog
org.apache.hadoop.hbase.replication.TestSyncReplicationMoreLogsInLocalGiveUpSplitting.org.apache.hadoop.hbase.replication.TestSyncReplicationMoreLogsInLocalGiveUpSplitting
{noformat}

And on H4, the tests pass, or there is only one failed UT, which is
{noformat}
org.apache.hadoop.hbase.client.TestMetaWithReplicas.org.apache.hadoop.hbase.client.TestMetaWithReplicas
org.apache.hadoop.hbase.client.TestMetaWithReplicas.org.apache.hadoop.hbase.client.TestMetaWithReplicas
{noformat}

Maybe something is wrong with the machine?

> ReopenTableRegionsProcedure sometimes hangs
> ---
>
> Key: HBASE-21142
> URL: https://issues.apache.org/jira/browse/HBASE-21142
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Priority: Major
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/364/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.replication.TestSyncReplicationMoreLogsInLocalGiveUpSplitting-output.txt/*view*/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21129) Clean up duplicate codes in #equals and #hashCode methods of Filter

2018-09-03 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-21129:
--
Attachment: HBASE-21129.master.008.patch

> Clean up duplicate codes in #equals and #hashCode methods of Filter
> ---
>
> Key: HBASE-21129
> URL: https://issues.apache.org/jira/browse/HBASE-21129
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21129.master.001.patch, 
> HBASE-21129.master.002.patch, HBASE-21129.master.003.patch, 
> HBASE-21129.master.004.patch, HBASE-21129.master.005.patch, 
> HBASE-21129.master.006.patch, HBASE-21129.master.007.patch, 
> HBASE-21129.master.008.patch
>
>
> It is a follow-up of HBASE-19008, aiming to clean up duplicated code in the 
> #equals and #hashCode methods. 
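
A hedged sketch of one way such duplication can be collapsed, not necessarily 
what the attached patches do: compare the serialized form once in a shared base 
class so subclasses no longer repeat equals/hashCode. SerializedEqualityBase 
and its toByteArray are illustrative stand-ins, not HBase's actual Filter API.

{code:java}
import java.util.Arrays;

// Illustration only: a stand-in base class, not HBase's actual Filter.
abstract class SerializedEqualityBase {
  /** Every concrete subclass already knows how to serialize itself. */
  protected abstract byte[] toByteArray();

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (obj == null || getClass() != obj.getClass()) {
      return false;
    }
    // One comparison of the serialized form replaces per-subclass field checks.
    return Arrays.equals(toByteArray(), ((SerializedEqualityBase) obj).toByteArray());
  }

  @Override
  public int hashCode() {
    return Arrays.hashCode(toByteArray());
  }
}
{code}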



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21143) Update findbugs-maven-plugin to 3.0.4

2018-09-03 Thread Guangxu Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16601835#comment-16601835
 ] 

Guangxu Cheng commented on HBASE-21143:
---

simple patch.

> Update findbugs-maven-plugin to 3.0.4
> -
>
> Key: HBASE-21143
> URL: https://issues.apache.org/jira/browse/HBASE-21143
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Affects Versions: 3.0.0, 2.1.0, 2.2.0, 2.0.2
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21143.master.001.patch
>
>
> {code}
> Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs 
> (default) on project hbase: Execution default of goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs failed: Plugin 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0 or one of its dependencies 
> could not be resolved: Failed to collect dependencies at 
> org.codehaus.mojo:findbugs-maven-plugin:jar:3.0.0 -> 
> org.codehaus.groovy:groovy-all:jar:1.7.4: Failed to read artifact descriptor 
> for org.codehaus.groovy:groovy-all:jar:1.7.4: Could not transfer artifact 
> org.codehaus.groovy:groovy-all:pom:1.7.4 from/to mirror 
> (http://xxx..xxx/nexus/content/groups/public): Failed to transfer file: 
> http://xxx..xxx/nexus/content/groups/public/org/codehaus/groovy/groovy-all/1.7.4/groovy-all-1.7.4.pom.
>  Return code is: 418 , ReasonPhrase:Artifact is in Tencent Blacklist! Please 
> update to the safe version, more information: 
> http://xxx..xxx/?tab=blackList.
> {code}
> Recently, when compiling HBase on a new machine, I got the above error. 
> Since the machine could not connect to the external network, we went through 
> our internal Maven repository, but org.codehaus.groovy:groovy-all:jar:1.7.4 
> had been added to its blacklist and could not be downloaded. In detail, 
> org.codehaus.groovy:groovy-all:jar:1.7.4 is marked as vulnerable by 
> [CVE-2015-3253|https://www.cvedetails.com/cve/CVE-2015-3253], so we should 
> upgrade the version.
> {code:xml}
> <plugin>
>   <groupId>org.codehaus.mojo</groupId>
>   <artifactId>findbugs-maven-plugin</artifactId>
>   <version>3.0.0</version>
>   <configuration>
>     <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
>   </configuration>
> </plugin>
> {code}
> Looking at the commit history, findbugs-maven-plugin was upgraded to 3.0.4 
> in HBASE-18264, but one place was missed and still uses version 3.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21143) Update findbugs-maven-plugin to 3.0.4

2018-09-03 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-21143:
--
Attachment: HBASE-21143.master.001.patch

> Update findbugs-maven-plugin to 3.0.4
> -
>
> Key: HBASE-21143
> URL: https://issues.apache.org/jira/browse/HBASE-21143
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Affects Versions: 3.0.0, 2.1.0, 2.2.0, 2.0.2
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21143.master.001.patch
>
>
> {code}
> Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs 
> (default) on project hbase: Execution default of goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs failed: Plugin 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0 or one of its dependencies 
> could not be resolved: Failed to collect dependencies at 
> org.codehaus.mojo:findbugs-maven-plugin:jar:3.0.0 -> 
> org.codehaus.groovy:groovy-all:jar:1.7.4: Failed to read artifact descriptor 
> for org.codehaus.groovy:groovy-all:jar:1.7.4: Could not transfer artifact 
> org.codehaus.groovy:groovy-all:pom:1.7.4 from/to mirror 
> (http://xxx..xxx/nexus/content/groups/public): Failed to transfer file: 
> http://xxx..xxx/nexus/content/groups/public/org/codehaus/groovy/groovy-all/1.7.4/groovy-all-1.7.4.pom.
>  Return code is: 418 , ReasonPhrase:Artifact is in Tencent Blacklist! Please 
> update to the safe version, more information: 
> http://xxx..xxx/?tab=blackList.
> {code}
> Recently, when compiling HBase on a new machine, I got the above error. 
> Since the machine could not connect to the external network, we went through 
> our internal Maven repository, but org.codehaus.groovy:groovy-all:jar:1.7.4 
> had been added to its blacklist and could not be downloaded. In detail, 
> org.codehaus.groovy:groovy-all:jar:1.7.4 is marked as vulnerable by 
> [CVE-2015-3253|https://www.cvedetails.com/cve/CVE-2015-3253], so we should 
> upgrade the version.
> {code:xml}
> <plugin>
>   <groupId>org.codehaus.mojo</groupId>
>   <artifactId>findbugs-maven-plugin</artifactId>
>   <version>3.0.0</version>
>   <configuration>
>     <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
>   </configuration>
> </plugin>
> {code}
> Looking at the commit history, findbugs-maven-plugin was upgraded to 3.0.4 
> in HBASE-18264, but one place was missed and still uses version 3.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21129) Clean up duplicate codes in #equals and #hashCode methods of Filter

2018-09-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16601821#comment-16601821
 ] 

Hadoop QA commented on HBASE-21129:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
30s{color} | {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 30s{color} 
| {color:red} hbase-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} The patch hbase-client passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} hbase-server: The patch generated 0 new + 35 
unchanged - 1 fixed = 35 total (was 36) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch hbase-spark passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 58s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
5s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}113m 
43s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hbase-spark in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21129 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938076/HBASE-21129.master.007.patch
 |
| Optional Tests |  

[jira] [Created] (HBASE-21143) Update findbugs-maven-plugin to 3.0.4

2018-09-03 Thread Guangxu Cheng (JIRA)
Guangxu Cheng created HBASE-21143:
-

 Summary: Update findbugs-maven-plugin to 3.0.4
 Key: HBASE-21143
 URL: https://issues.apache.org/jira/browse/HBASE-21143
 Project: HBase
  Issue Type: Bug
  Components: pom
Affects Versions: 2.0.2, 2.1.0, 3.0.0, 2.2.0
Reporter: Guangxu Cheng
Assignee: Guangxu Cheng


{code}
Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs 
(default) on project hbase: Execution default of goal 
org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs failed: Plugin 
org.codehaus.mojo:findbugs-maven-plugin:3.0.0 or one of its dependencies could 
not be resolved: Failed to collect dependencies at 
org.codehaus.mojo:findbugs-maven-plugin:jar:3.0.0 -> 
org.codehaus.groovy:groovy-all:jar:1.7.4: Failed to read artifact descriptor 
for org.codehaus.groovy:groovy-all:jar:1.7.4: Could not transfer artifact 
org.codehaus.groovy:groovy-all:pom:1.7.4 from/to mirror 
(http://xxx..xxx/nexus/content/groups/public): Failed to transfer file: 
http://xxx..xxx/nexus/content/groups/public/org/codehaus/groovy/groovy-all/1.7.4/groovy-all-1.7.4.pom.
 Return code is: 418 , ReasonPhrase:Artifact is in Tencent Blacklist! Please 
update to the safe version, more information: 
http://xxx..xxx/?tab=blackList.
{code}
Recently, when compiling HBase on a new machine, I got the above error. Since 
the machine could not connect to the external network, we went through our 
internal Maven repository, but org.codehaus.groovy:groovy-all:jar:1.7.4 had 
been added to its blacklist and could not be downloaded. In detail, 
org.codehaus.groovy:groovy-all:jar:1.7.4 is marked as vulnerable by 
[CVE-2015-3253|https://www.cvedetails.com/cve/CVE-2015-3253], so we should 
upgrade the version.
{code:xml}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <version>3.0.0</version>
  <configuration>
    <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
  </configuration>
</plugin>
{code}
Looking at the commit history, findbugs-maven-plugin was upgraded to 3.0.4 in 
HBASE-18264, but one place was missed and still uses version 3.0.0.
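
A sketch of the stated fix, assuming the missed declaration only needs its 
version aligned with what HBASE-18264 did elsewhere (an illustration, not the 
attached patch):

{code:xml}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <!-- was 3.0.0; bumping to 3.0.4 matches HBASE-18264 and avoids the
       blacklisted groovy-all 1.7.4 dependency described above -->
  <version>3.0.4</version>
</plugin>
{code}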



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21142) ReopenTableRegionsProcedure sometimes hangs

2018-09-03 Thread Allan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16601789#comment-16601789
 ] 

Allan Yang commented on HBASE-21142:


Never encountered this before...

> ReopenTableRegionsProcedure sometimes hangs
> ---
>
> Key: HBASE-21142
> URL: https://issues.apache.org/jira/browse/HBASE-21142
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Priority: Major
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/364/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.replication.TestSyncReplicationMoreLogsInLocalGiveUpSplitting-output.txt/*view*/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)