[jira] [Commented] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-07-17 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546503#comment-16546503
 ] 

Yu Li commented on HBASE-20411:
---

Noticed when checking the 2.1.0RC1 compatibility report that in this JIRA we 
have changed the visibility of the {{MemStoreSize}} constructors from public to 
package-private. Since {{MemStoreSize}} is not marked as IS.Evolving, and in 
our [refguide matrix|http://hbase.apache.org/book.html#hbase.versioning] we 
have the content below:
 !compatibility-matrix.png! 
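
For context, a hypothetical illustration of the kind of source break in question 
(the constructor signature below is illustrative, not the exact {{MemStoreSize}} 
API): once a constructor goes from public to package-private, downstream code 
that constructed the class directly no longer compiles.
{code:java}
// Illustrative only -- the signature is hypothetical, not the exact MemStoreSize API.
public class MemStoreSize {
  // Was public before this JIRA; now package-private, so only classes in the
  // same package can call it.
  MemStoreSize(long dataSize, long heapSize) {
    // ...
  }
}

// In a downstream project (a different package), this used to compile:
//   MemStoreSize size = new MemStoreSize(0L, 0L);
// After the change it fails with "constructor MemStoreSize(...) is not visible".
{code}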

> Ameliorate MutableSegment synchronize
> -
>
> Key: HBASE-20411
> URL: https://issues.apache.org/jira/browse/HBASE-20411
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: huaxiang sun
>Priority: Major
> Fix For: 2.0.1
>
> Attachments: 2.load.patched.17704.lock.svg, 
> 2.load.patched.2.17704.lock.svg, 2.more.patch.12010.lock.svg, 
> 2.pe.write.32026.lock.svg, 2.pe.write.ameliorate.106553.lock.svg, 
> 41901.lock.svg, HBASE-20411-atomiclong-longadder.patch, 
> HBASE-20411.branch-2.0.001.patch, HBASE-20411.branch-2.0.002.patch, 
> HBASE-20411.branch-2.0.003.patch, HBASE-20411.branch-2.0.004.patch, 
> HBASE-20411.branch-2.0.005.patch, HBASE-20411.branch-2.0.006.patch, 
> HBASE-20411.branch-2.0.007.patch, HBASE-20411.branch-2.0.008.patch, 
> HBASE-20411.branch-2.0.009.patch, HBASE-20411.branch-2.0.010.patch, 
> HBASE-20411.branch-2.0.011.patch, HBASE-20411.branch-2.0.012.patch, 
> HBASE-20411.branch-2.0.013.patch, compatibility-matrix.png, 
> jmc6.write_time_locks.png
>
>
> This item is migrated from HBASE-20236 so it gets a dedicated issue.
> Let me upload evidence showing this synchronize is a stake in our write-time 
> perf. I'll migrate the patch I posted, with updates that came of comments 
> posted by [~mdrob] on the HBASE-20236 issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-07-17 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546503#comment-16546503
 ] 

Yu Li edited comment on HBASE-20411 at 7/17/18 12:11 PM:
-

Noticed when checking the 2.1.0RC1 compatibility report that in this JIRA we 
have changed the visibility of the {{MemStoreSize}} constructors from public to 
package-private. Since {{MemStoreSize}} is not marked as IS.Evolving, and in 
our [refguide matrix|http://hbase.apache.org/book.html#hbase.versioning] we 
have the content below, are we breaking compatibility here? Or could you help 
correct my understanding, boss [~stack]? Thanks.
 !compatibility-matrix.png! 


was (Author: carp84):
Noticed when checking the 2.1.0RC1 compatibility report that in this JIRA we 
have changed the visibility of the {{MemStoreSize}} constructors from public to 
package-private. Since {{MemStoreSize}} is not marked as IS.Evolving, and in 
our [refguide matrix|http://hbase.apache.org/book.html#hbase.versioning] we 
have the content below:
 !compatibility-matrix.png! 

> Ameliorate MutableSegment synchronize
> -
>
> Key: HBASE-20411
> URL: https://issues.apache.org/jira/browse/HBASE-20411
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: huaxiang sun
>Priority: Major
> Fix For: 2.0.1
>
> Attachments: 2.load.patched.17704.lock.svg, 
> 2.load.patched.2.17704.lock.svg, 2.more.patch.12010.lock.svg, 
> 2.pe.write.32026.lock.svg, 2.pe.write.ameliorate.106553.lock.svg, 
> 41901.lock.svg, HBASE-20411-atomiclong-longadder.patch, 
> HBASE-20411.branch-2.0.001.patch, HBASE-20411.branch-2.0.002.patch, 
> HBASE-20411.branch-2.0.003.patch, HBASE-20411.branch-2.0.004.patch, 
> HBASE-20411.branch-2.0.005.patch, HBASE-20411.branch-2.0.006.patch, 
> HBASE-20411.branch-2.0.007.patch, HBASE-20411.branch-2.0.008.patch, 
> HBASE-20411.branch-2.0.009.patch, HBASE-20411.branch-2.0.010.patch, 
> HBASE-20411.branch-2.0.011.patch, HBASE-20411.branch-2.0.012.patch, 
> HBASE-20411.branch-2.0.013.patch, compatibility-matrix.png, 
> jmc6.write_time_locks.png
>
>
> This item is migrated from HBASE-20236 so it gets a dedicated issue.
> Let me upload evidence showing this synchronize is a stake in our write-time 
> perf. I'll migrate the patch I posted, with updates that came of comments 
> posted by [~mdrob] on the HBASE-20236 issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-17 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20565:
-
Attachment: HBASE-20565.v1.patch

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.4
>Reporter: Jerry He
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20565.v1.patch, debug.diff, debug.log, 
> test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see an 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, ..., c9.  I have a ColumnRangeFilter for 
> the range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and 
> offset 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, 
> ColumnRangeFilter, ColumnPaginationFilter).
> We expect 5 columns to be returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.
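
For reference, a minimal client-side sketch of the scenario above (illustrative; 
table and column names follow the description, and the snippet assumes the 
standard HBase client filter API):
{code:java}
import java.util.Arrays;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnPaginationFilter;
import org.apache.hadoop.hbase.filter.ColumnRangeFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.util.Bytes;

// Range [c2, c9] inclusive, combined with pagination (limit 5, offset 0).
Scan scan = new Scan();
scan.setFilter(new FilterList(FilterList.Operator.MUST_PASS_ALL, Arrays.<Filter>asList(
    new ColumnRangeFilter(Bytes.toBytes("c2"), true, Bytes.toBytes("c9"), true),
    new ColumnPaginationFilter(5, 0))));
// Expected: 5 columns (c2..c6) for the row; per this report, 1.4+ returns only 4.
{code}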



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20855) PeerConfigTracker only support one listener will cause problem when there is a recovered replication queue

2018-07-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546631#comment-16546631
 ] 

Ted Yu commented on HBASE-20855:


Looks like the correct command should be:
{code}
/bin/sh -c 'pip install pylint'
{code}
[~mdrob]:
What do you think?

> PeerConfigTracker only support one listener will cause problem when there is 
> a recovered replication queue
> --
>
> Key: HBASE-20855
> URL: https://issues.apache.org/jira/browse/HBASE-20855
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-20855.branch-1.001.patch, 
> HBASE-20855.branch-1.002.patch, HBASE-20855.branch-1.003.patch, 
> HBASE-20855.branch-1.004.patch, HBASE-20855.branch-1.005.patch
>
>
> {code}
> public void init(Context context) throws IOException {
>   this.ctx = context;
>   if (this.ctx != null) {
>     ReplicationPeer peer = this.ctx.getReplicationPeer();
>     if (peer != null) {
>       peer.trackPeerConfigChanges(this);
>     } else {
>       LOG.warn("Not tracking replication peer config changes for Peer Id "
>           + this.ctx.getPeerId() + " because there's no such peer");
>     }
>   }
> }
> {code}
> As we know, the replication source sets itself as the listener on the 
> PeerConfigTracker in ReplicationPeer. When there are one or more recovered 
> queues, each queue generates a new replication source, but they all share the 
> same ReplicationPeer. Then, when setListener is called, the newly generated 
> source overwrites the older one. Thus only one replication source receives 
> the peer config change notification.
> {code}
> public synchronized void setListener(ReplicationPeerConfigListener listener) {
>   this.listener = listener;
> }
> {code}
>  
> To solve this, PeerConfigTracker needs to support multiple listeners, and a 
> listener should be removed when its replication endpoint terminates.
> I will upload a patch later with the fix and a UT.
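
A minimal sketch of the direction described above, assuming a hypothetical 
tracker shape (class and method names here are illustrative, not the actual 
HBASE-20855 patch): keep a thread-safe collection of listeners instead of a 
single field, and let an endpoint deregister itself when it terminates.
{code:java}
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfigListener;

// Illustrative sketch only -- not the actual PeerConfigTracker change.
class MultiListenerPeerConfigTracker {
  private final Set<ReplicationPeerConfigListener> listeners = new CopyOnWriteArraySet<>();

  void addListener(ReplicationPeerConfigListener listener) {
    listeners.add(listener);
  }

  // Called when a replication endpoint terminates, so a stale source stops getting updates.
  void removeListener(ReplicationPeerConfigListener listener) {
    listeners.remove(listener);
  }

  void notifyPeerConfigChanged(ReplicationPeerConfig newConfig) {
    for (ReplicationPeerConfigListener listener : listeners) {
      listener.peerConfigUpdated(newConfig);
    }
  }
}
{code}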



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20867) RS may get killed while master restarts

2018-07-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546649#comment-16546649
 ] 

Duo Zhang commented on HBASE-20867:
---

+1. Please fix the checkstyle issue when committing.

> RS may get killed while master restarts
> ---
>
> Key: HBASE-20867
> URL: https://issues.apache.org/jira/browse/HBASE-20867
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-20867.branch-2.0.001.patch, 
> HBASE-20867.branch-2.0.002.patch, HBASE-20867.branch-2.0.003.patch, 
> HBASE-20867.branch-2.0.004.patch, HBASE-20867.branch-2.0.005.patch
>
>
> If the master is dispatching an RPC call to an RS when aborting, a connection 
> exception may be thrown by the RPC layer (an IOException with a "Connection 
> closed" message in this case). The RSProcedureDispatcher will regard it as an 
> un-retryable exception and pass it to UnassignProcedure.remoteCallFailed, 
> which will expire the RS.
> Actually, the RS is perfectly healthy; only the master is restarting.
> I think we should handle those kinds of connection exceptions in 
> RSProcedureDispatcher and retry the RPC call.
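
A minimal sketch of the idea in the last sentence (illustrative; not the actual 
RSProcedureDispatcher code): classify connection-level failures from a 
restarting master as transient and retry them instead of expiring the RS.
{code:java}
import java.io.IOException;
import java.net.ConnectException;

// Illustrative sketch only -- helper name and placement are assumptions.
final class ConnectionFailureClassifier {
  private ConnectionFailureClassifier() {
  }

  /**
   * Returns true if the remote call failed only because the connection dropped
   * (e.g. the master is restarting), in which case the dispatcher should retry
   * rather than expire the region server.
   */
  static boolean isRetryableConnectionFailure(IOException e) {
    if (e instanceof ConnectException) {
      return true;
    }
    String msg = e.getMessage();
    return msg != null && msg.contains("Connection closed");
  }
}
{code}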



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20903) backport HBASE-20792 "info:servername and info:sn inconsistent for OPEN region" to branch-2.0

2018-07-17 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-20903:
---
Status: Patch Available  (was: Open)

> backport HBASE-20792 "info:servername and info:sn inconsistent for OPEN 
> region" to branch-2.0
> -
>
> Key: HBASE-20903
> URL: https://issues.apache.org/jira/browse/HBASE-20903
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 2.0.2
>
>
> As discussed in HBASE-20864, this is a very serious bug which can cause an RS 
> to be killed or data loss. It should be backported to branch-2.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-17 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20565:
-
Status: Patch Available  (was: Open)

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.4.4, 2.1.0
>Reporter: Jerry He
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 2.1.0, 1.5.0, 1.4.6, 2.0.2
>
> Attachments: HBASE-20565.v1.patch, debug.diff, debug.log, 
> test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see an 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, ..., c9.  I have a ColumnRangeFilter for 
> the range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and 
> offset 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, 
> ColumnRangeFilter, ColumnPaginationFilter).
> We expect 5 columns to be returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20853) Polish "Add defaults to Table Interface so Implementors don't have to"

2018-07-17 Thread Balazs Meszaros (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-20853:

Attachment: HBASE-20853.master.004.patch

> Polish "Add defaults to Table Interface so Implementors don't have to"
> --
>
> Key: HBASE-20853
> URL: https://issues.apache.org/jira/browse/HBASE-20853
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Major
>  Labels: beginner, beginners
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20853.master.001.patch, 
> HBASE-20853.master.002.patch, HBASE-20853.master.003.patch, 
> HBASE-20853.master.004.patch
>
>
> This issue is to address feedback that came in after commit on the parent 
> (FYI [~chia7712]). See tail of parent issue and amendment attached to parent 
> adding better defaults to the Table Interface.
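
For reference, a hedged sketch of what a "default on the Table interface" looks 
like (the interface and method below are illustrative, not necessarily part of 
the attached patch): implementors only provide the batch variant, and the 
single-item overload falls back to it.
{code:java}
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;

// Illustrative only -- a default method in the style this issue polishes.
interface TableSketch {
  Result[] get(List<Get> gets) throws IOException;

  // Implementors no longer have to override the single-Get overload.
  default Result get(Get get) throws IOException {
    return get(Collections.singletonList(get))[0];
  }
}
{code}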



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20411) Ameliorate MutableSegment synchronize

2018-07-17 Thread Yu Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-20411:
--
Attachment: compatibility-matrix.png

> Ameliorate MutableSegment synchronize
> -
>
> Key: HBASE-20411
> URL: https://issues.apache.org/jira/browse/HBASE-20411
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: huaxiang sun
>Priority: Major
> Fix For: 2.0.1
>
> Attachments: 2.load.patched.17704.lock.svg, 
> 2.load.patched.2.17704.lock.svg, 2.more.patch.12010.lock.svg, 
> 2.pe.write.32026.lock.svg, 2.pe.write.ameliorate.106553.lock.svg, 
> 41901.lock.svg, HBASE-20411-atomiclong-longadder.patch, 
> HBASE-20411.branch-2.0.001.patch, HBASE-20411.branch-2.0.002.patch, 
> HBASE-20411.branch-2.0.003.patch, HBASE-20411.branch-2.0.004.patch, 
> HBASE-20411.branch-2.0.005.patch, HBASE-20411.branch-2.0.006.patch, 
> HBASE-20411.branch-2.0.007.patch, HBASE-20411.branch-2.0.008.patch, 
> HBASE-20411.branch-2.0.009.patch, HBASE-20411.branch-2.0.010.patch, 
> HBASE-20411.branch-2.0.011.patch, HBASE-20411.branch-2.0.012.patch, 
> HBASE-20411.branch-2.0.013.patch, compatibility-matrix.png, 
> jmc6.write_time_locks.png
>
>
> This item is migrated from HBASE-20236 so it gets a dedicated issue.
> Let me upload evidence showing this synchronize is a stake in our write-time 
> perf. I'll migrate the patch I posted, with updates that came of comments 
> posted by [~mdrob] on the HBASE-20236 issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-17 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20565:
-
Affects Version/s: 2.1.0
   2.0.0

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.1.0, 1.4.4, 2.0.0
>Reporter: Jerry He
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20565.v1.patch, debug.diff, debug.log, 
> test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see an 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, ..., c9.  I have a ColumnRangeFilter for 
> the range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and 
> offset 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, 
> ColumnRangeFilter, ColumnPaginationFilter).
> We expect 5 columns to be returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-17 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20565:
-
Fix Version/s: 2.0.2
   1.4.6
   1.5.0
   2.1.0

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.1.0, 1.4.4, 2.0.0
>Reporter: Jerry He
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 2.1.0, 1.5.0, 1.4.6, 2.0.2
>
> Attachments: HBASE-20565.v1.patch, debug.diff, debug.log, 
> test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see an 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, ..., c9.  I have a ColumnRangeFilter for 
> the range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and 
> offset 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, 
> ColumnRangeFilter, ColumnPaginationFilter).
> We expect 5 columns to be returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546711#comment-16546711
 ] 

Hadoop QA commented on HBASE-20565:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
35s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
30s{color} | {color:red} hbase-client: The patch generated 1 new + 32 unchanged 
- 1 fixed = 33 total (was 33) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
33s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m  1s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
58s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 10s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.filter.TestFilterList |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931938/HBASE-20565.v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux ad7290c50136 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2997b6d071 |
| maven | version: 

[jira] [Created] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Mike Drob (JIRA)
Mike Drob created HBASE-20905:
-

 Summary: branch-1 docker build fails
 Key: HBASE-20905
 URL: https://issues.apache.org/jira/browse/HBASE-20905
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 1.5.0
Reporter: Jingyun Tian
Assignee: Mike Drob
 Fix For: 1.5.0


Docker build for precommit fails:
{quote}
19:08:29 Cleaning up...
19:08:29 Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pylint
19:08:29 Storing debug log for failure in /root/.pip/pip.log
19:08:29 The command '/bin/sh -c pip install pylint' returned a non-zero code: 1
19:08:29 Total Elapsed time: 0m 3s
19:08:29 ERROR: Docker failed to build image.
{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19893) restore_snapshot is broken in master branch when region splits

2018-07-17 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-19893:
-
Attachment: HBASE-19893.master.005.patch

> restore_snapshot is broken in master branch when region splits
> --
>
> Key: HBASE-19893
> URL: https://issues.apache.org/jira/browse/HBASE-19893
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Critical
> Attachments: 19893.master.004.patch, 19893.master.004.patch, 
> 19893.master.004.patch, HBASE-19893.master.001.patch, 
> HBASE-19893.master.002.patch, HBASE-19893.master.003.patch, 
> HBASE-19893.master.003.patch, HBASE-19893.master.004.patch, 
> HBASE-19893.master.005.patch, HBASE-19893.master.005.patch, 
> HBASE-19893.master.005.patch, 
> org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas-output.txt
>
>
> When I was investigating HBASE-19850, I found restore_snapshot didn't work in 
> master branch.
>  
> Steps to reproduce are as follows:
> 1. Create a table
> {code:java}
> create "test", "cf"
> {code}
> 2. Load data (2000 rows) to the table
> {code:java}
> (0...2000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> {code}
> 3. Split the table
> {code:java}
> split "test"
> {code}
> 4. Take a snapshot
> {code:java}
> snapshot "test", "snap"
> {code}
> 5. Load more data (2000 rows) to the table and split the table again
> {code:java}
> (2000...4000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> split "test"
> {code}
> 6. Restore the table from the snapshot 
> {code:java}
> disable "test"
> restore_snapshot "snap"
> enable "test"
> {code}
> 7. Scan the table
> {code:java}
> scan "test"
> {code}
> However, this scan returns only 244 rows (it should return 2000 rows) like 
> the following:
> {code:java}
> hbase(main):038:0> scan "test"
> ROW COLUMN+CELL
>  row78 column=cf:col, timestamp=1517298307049, value=val
> 
>   row999 column=cf:col, timestamp=1517298307608, value=val
> 244 row(s)
> Took 0.1500 seconds
> {code}
>  
> Also, the restored table should have 2 online regions but it has 3 online 
> regions.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20860) Merged region's RIT state may not be cleaned after master restart

2018-07-17 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-20860:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Merged region's RIT state may not be cleaned after master restart
> -
>
> Key: HBASE-20860
> URL: https://issues.apache.org/jira/browse/HBASE-20860
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1
>
> Attachments: HBASE-20860.branch-2.0.002.patch, 
> HBASE-20860.branch-2.0.003.patch, HBASE-20860.branch-2.0.004.patch, 
> HBASE-20860.branch-2.0.005.patch, HBASE-20860.branch-2.0.patch
>
>
> In MergeTableRegionsProcedure, we issue UnassignProcedures to offline the 
> regions to merge. But if we restart the master just after 
> MergeTableRegionsProcedure has finished these two UnassignProcedures and 
> before it can delete their meta entries, the new master will find these two 
> regions are CLOSED but no procedures are attached to them. They will be 
> regarded as RIT regions and nobody will clean the RIT state for them later.
> A quick way to resolve this stuck situation in a production env is to restart 
> the master again, since the meta entries are deleted in 
> MergeTableRegionsProcedure. Here, I offer a fix for this problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20853) Polish "Add defaults to Table Interface so Implementors don't have to"

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546816#comment-16546816
 ] 

Hadoop QA commented on HBASE-20853:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
24s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
28s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 33s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
1s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931943/HBASE-20853.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux cfdb60cf3151 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2997b6d071 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13660/testReport/ |
| Max. process+thread count | 291 (vs. ulimit of 1) |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13660/console |
| Powered by | 

[jira] [Updated] (HBASE-20903) backport HBASE-20792 "info:servername and info:sn inconsistent for OPEN region" to branch-2.0

2018-07-17 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-20903:
---
Attachment: HBASE-20903.branch-2.0.001.patch

> backport HBASE-20792 "info:servername and info:sn inconsistent for OPEN 
> region" to branch-2.0
> -
>
> Key: HBASE-20903
> URL: https://issues.apache.org/jira/browse/HBASE-20903
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 2.0.2
>
> Attachments: HBASE-20903.branch-2.0.001.patch
>
>
> As discussed in HBASE-20864, this is a very serious bug which can cause an RS 
> to be killed or data loss. It should be backported to branch-2.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20893) Data loss if splitting region while ServerCrashProcedure executing

2018-07-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546667#comment-16546667
 ] 

Duo Zhang commented on HBASE-20893:
---

There are lots of empty lines in the test, please remove them. And we should 
use 'LOG.info("Begin to put data");' instead of LOG.error?

And for the hasRecoveredEdit method, the comment says only the default replica 
can reach here, but you cannot prevent others from using this method for a 
non-default replica since it is public, right? So if we do not want to call 
getDefaultReplica here, then we'd better add an assert to confirm that no one 
uses it with a non-default replica.

Overall LGTM. +1 after fixing the above problems.
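
A minimal sketch of the suggested guard (names follow the existing 
RegionReplicaUtil helper; the assertion itself is an illustration, not the 
actual patch):
{code:java}
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionReplicaUtil;

// Illustrative sketch: assert at the top of a hasRecoveredEdit-style method
// that callers only pass the default replica.
final class ReplicaGuard {
  private ReplicaGuard() {
  }

  static void assertDefaultReplica(RegionInfo regionInfo) {
    assert RegionReplicaUtil.isDefaultReplica(regionInfo)
        : "expected the default replica, got replicaId=" + regionInfo.getReplicaId();
  }
}
{code}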

> Data loss if splitting region while ServerCrashProcedure executing
> --
>
> Key: HBASE-20893
> URL: https://issues.apache.org/jira/browse/HBASE-20893
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-20893.branch-2.0.001.patch, 
> HBASE-20893.branch-2.0.002.patch
>
>
> Similar case as HBASE-20878.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20855) PeerConfigTracker only support one listener will cause problem when there is a recovered replication queue

2018-07-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546727#comment-16546727
 ] 

Ted Yu commented on HBASE-20855:


I am not aware.

If you want to file an issue, please set the reporter to Jingyun.

> PeerConfigTracker only support one listener will cause problem when there is 
> a recovered replication queue
> --
>
> Key: HBASE-20855
> URL: https://issues.apache.org/jira/browse/HBASE-20855
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-20855.branch-1.001.patch, 
> HBASE-20855.branch-1.002.patch, HBASE-20855.branch-1.003.patch, 
> HBASE-20855.branch-1.004.patch, HBASE-20855.branch-1.005.patch
>
>
> {code}
> public void init(Context context) throws IOException {
>   this.ctx = context;
>   if (this.ctx != null) {
>     ReplicationPeer peer = this.ctx.getReplicationPeer();
>     if (peer != null) {
>       peer.trackPeerConfigChanges(this);
>     } else {
>       LOG.warn("Not tracking replication peer config changes for Peer Id "
>           + this.ctx.getPeerId() + " because there's no such peer");
>     }
>   }
> }
> {code}
> As we know, the replication source sets itself as the listener on the 
> PeerConfigTracker in ReplicationPeer. When there are one or more recovered 
> queues, each queue generates a new replication source, but they all share the 
> same ReplicationPeer. Then, when setListener is called, the newly generated 
> source overwrites the older one. Thus only one replication source receives 
> the peer config change notification.
> {code}
> public synchronized void setListener(ReplicationPeerConfigListener listener) {
>   this.listener = listener;
> }
> {code}
>  
> To solve this, PeerConfigTracker needs to support multiple listeners, and a 
> listener should be removed when its replication endpoint terminates.
> I will upload a patch later with the fix and a UT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20855) PeerConfigTracker only support one listener will cause problem when there is a recovered replication queue

2018-07-17 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546768#comment-16546768
 ] 

Mike Drob commented on HBASE-20855:
---

Sure, filed HBASE-20905 and have a patch up there, please review.

> PeerConfigTracker only support one listener will cause problem when there is 
> a recovered replication queue
> --
>
> Key: HBASE-20855
> URL: https://issues.apache.org/jira/browse/HBASE-20855
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-20855.branch-1.001.patch, 
> HBASE-20855.branch-1.002.patch, HBASE-20855.branch-1.003.patch, 
> HBASE-20855.branch-1.004.patch, HBASE-20855.branch-1.005.patch
>
>
> {code}
> public void init(Context context) throws IOException {
>   this.ctx = context;
>   if (this.ctx != null) {
>     ReplicationPeer peer = this.ctx.getReplicationPeer();
>     if (peer != null) {
>       peer.trackPeerConfigChanges(this);
>     } else {
>       LOG.warn("Not tracking replication peer config changes for Peer Id "
>           + this.ctx.getPeerId() + " because there's no such peer");
>     }
>   }
> }
> {code}
> As we know, the replication source sets itself as the listener on the 
> PeerConfigTracker in ReplicationPeer. When there are one or more recovered 
> queues, each queue generates a new replication source, but they all share the 
> same ReplicationPeer. Then, when setListener is called, the newly generated 
> source overwrites the older one. Thus only one replication source receives 
> the peer config change notification.
> {code}
> public synchronized void setListener(ReplicationPeerConfigListener listener) {
>   this.listener = listener;
> }
> {code}
>  
> To solve this, PeerConfigTracker needs to support multiple listeners, and a 
> listener should be removed when its replication endpoint terminates.
> I will upload a patch later with the fix and a UT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546767#comment-16546767
 ] 

Mike Drob commented on HBASE-20905:
---

pylint 2.0 requires Python 3, so let's pin to pylint 1.9.2, which appears to work.

> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...
> 19:08:29 Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pylint
> 19:08:29 Storing debug log for failure in /root/.pip/pip.log
> 19:08:29 The command '/bin/sh -c pip install pylint' returned a non-zero code: 1
> 19:08:29 Total Elapsed time: 0m 3s
> 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546770#comment-16546770
 ] 

Mike Drob commented on HBASE-20905:
---

branch-2 looks like it uses pylint 1.6.5 from the Debian/Ubuntu repos; we could 
go all the way back to that here if precommit isn't happy with my choice of 
1.9.2.

> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...
> 19:08:29 Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pylint
> 19:08:29 Storing debug log for failure in /root/.pip/pip.log
> 19:08:29 The command '/bin/sh -c pip install pylint' returned a non-zero code: 1
> 19:08:29 Total Elapsed time: 0m 3s
> 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20904) Prometheus /metrics http endpoint for monitoring integration

2018-07-17 Thread Hari Sekhon (JIRA)
Hari Sekhon created HBASE-20904:
---

 Summary: Prometheus /metrics http endpoint for monitoring 
integration
 Key: HBASE-20904
 URL: https://issues.apache.org/jira/browse/HBASE-20904
 Project: HBase
  Issue Type: New Feature
Reporter: Hari Sekhon


Feature Request to add Prometheus /metrics http endpoint for monitoring 
integration:

https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Cscrape_config%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20846) Restore procedure locks when master restarts

2018-07-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546646#comment-16546646
 ] 

Duo Zhang commented on HBASE-20846:
---

Just ignore TestProcedureReplayOrder. Now we restore locks when restarting the 
master, so we do not rely on the loading order any more. We should use locks to 
obtain the correct execution order.

> Restore procedure locks when master restarts
> 
>
> Key: HBASE-20846
> URL: https://issues.apache.org/jira/browse/HBASE-20846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Allan Yang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20846-v1.patch, HBASE-20846-v2.patch, 
> HBASE-20846-v3.patch, HBASE-20846.branch-2.0.002.patch, 
> HBASE-20846.branch-2.0.patch, HBASE-20846.patch
>
>
> Found this one when investigating why ModifyTableProcedure got stuck while 
> there was a MoveRegionProcedure going on after a master restart.
> Though this issue can be solved by HBASE-20752, I discovered something else.
> Before a MoveRegionProcedure can execute, it will hold the table's shared 
> lock. So, when an UnassignProcedure is spawned, it will not check the 
> table's shared lock since it is sure that its parent (MoveRegionProcedure) 
> has acquired the table's lock.
> {code:java}
> // If there is a parent procedure, it would have already taken the xlock, so
> // no need to take the shared lock here. Otherwise, take the shared lock.
> if (!procedure.hasParent()
>     && waitTableQueueSharedLock(procedure, table) == null) {
>   return true;
> }
> {code}
> But it is not the case when the master was restarted. The child procedure 
> (UnassignProcedure) will be executed first after the restart. Though it has 
> a parent (MoveRegionProcedure), the parent apparently didn't hold the 
> table's lock.
> So, since it began to execute without holding the table's shared lock, a 
> ModifyTableProcedure could acquire the table's exclusive lock and execute at 
> the same time, which is not possible if the master was not restarted.
> This would cause a hang before HBASE-20752. But since HBASE-20752 is fixed, 
> I wrote a simple UT to repro this case.
> I think we don't have to check the parent for the table's shared lock. It is 
> a shared lock, right? I think we can acquire it every time we need it.
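
A minimal sketch of the proposal in the last paragraph (illustrative, mirroring 
the snippet quoted above; not the committed HBASE-20846 fix): drop the 
hasParent() check so every procedure waits for the table's shared lock.
{code:java}
// Illustrative sketch only: always take the shared lock, parent or not,
// which is safe precisely because it is a shared (not exclusive) lock.
if (waitTableQueueSharedLock(procedure, table) == null) {
  return true;
}
{code}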



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20860) Merged region's RIT state may not be cleaned after master restart

2018-07-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546647#comment-16546647
 ] 

Duo Zhang commented on HBASE-20860:
---

Can we resolve this issue then?

> Merged region's RIT state may not be cleaned after master restart
> -
>
> Key: HBASE-20860
> URL: https://issues.apache.org/jira/browse/HBASE-20860
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1
>
> Attachments: HBASE-20860.branch-2.0.002.patch, 
> HBASE-20860.branch-2.0.003.patch, HBASE-20860.branch-2.0.004.patch, 
> HBASE-20860.branch-2.0.005.patch, HBASE-20860.branch-2.0.patch
>
>
> In MergeTableRegionsProcedure, we issue UnassignProcedures to offline the 
> regions to merge. But if we restart the master just after 
> MergeTableRegionsProcedure has finished these two UnassignProcedures and 
> before it can delete their meta entries, the new master will find these two 
> regions are CLOSED but no procedures are attached to them. They will be 
> regarded as RIT regions and nobody will clean the RIT state for them later.
> A quick way to resolve this stuck situation in a production env is to restart 
> the master again, since the meta entries are deleted in 
> MergeTableRegionsProcedure. Here, I offer a fix for this problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546709#comment-16546709
 ] 

Ted Yu commented on HBASE-20565:


Running the Filter tests with the patch, I got:
{code}
[ERROR]   TestFilterList.testReversedFilterListWithMockSeekHintFilter:855 
expected: but was:
[ERROR]   TestFilterList.testTheMaximalRule:747 expected: 
but was:
[ERROR] Errors:
[ERROR]   TestFilterList.testHintPassThru:528 » NullPointer
{code}
Please fix failing tests.

> ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect 
> result since 1.4
> -
>
> Key: HBASE-20565
> URL: https://issues.apache.org/jira/browse/HBASE-20565
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.1.0, 1.4.4, 2.0.0
>Reporter: Jerry He
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 2.1.0, 1.5.0, 1.4.6, 2.0.2
>
> Attachments: HBASE-20565.v1.patch, debug.diff, debug.log, 
> test-branch-1.4.patch
>
>
> When ColumnPaginationFilter is combined with ColumnRangeFilter, we may see an 
> incorrect result.
> Here is a simple example.
> One row with 10 columns c0, c1, c2, ..., c9.  I have a ColumnRangeFilter for 
> the range c2 to c9.  Then I have a ColumnPaginationFilter with limit 5 and 
> offset 0.  The FilterList is FilterList(Operator.MUST_PASS_ALL, 
> ColumnRangeFilter, ColumnPaginationFilter).
> We expect 5 columns to be returned.  But in HBase 1.4 and after, 4 columns 
> are returned.
> In 1.2.x, the correct 5 columns are returned.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20855) PeerConfigTracker only support one listener will cause problem when there is a recovered replication queue

2018-07-17 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546721#comment-16546721
 ] 

Mike Drob commented on HBASE-20855:
---

I'm taking a look. Is there already a separate issue for that?

> PeerConfigTracker only support one listener will cause problem when there is 
> a recovered replication queue
> --
>
> Key: HBASE-20855
> URL: https://issues.apache.org/jira/browse/HBASE-20855
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-20855.branch-1.001.patch, 
> HBASE-20855.branch-1.002.patch, HBASE-20855.branch-1.003.patch, 
> HBASE-20855.branch-1.004.patch, HBASE-20855.branch-1.005.patch
>
>
> {code}
> public void init(Context context) throws IOException {
>   this.ctx = context;
>   if (this.ctx != null) {
>     ReplicationPeer peer = this.ctx.getReplicationPeer();
>     if (peer != null) {
>       peer.trackPeerConfigChanges(this);
>     } else {
>       LOG.warn("Not tracking replication peer config changes for Peer Id "
>           + this.ctx.getPeerId() + " because there's no such peer");
>     }
>   }
> }
> {code}
> As we know, the replication source sets itself as the listener on the 
> PeerConfigTracker in ReplicationPeer. When there are one or more recovered 
> queues, each queue generates a new replication source, but they all share the 
> same ReplicationPeer. Then, when setListener is called, the newly generated 
> source overwrites the older one. Thus only one replication source receives 
> the peer config change notification.
> {code}
> public synchronized void setListener(ReplicationPeerConfigListener listener) {
>   this.listener = listener;
> }
> {code}
>  
> To solve this, PeerConfigTracker needs to support multiple listeners, and a 
> listener should be removed when its replication endpoint terminates.
> I will upload a patch later with the fix and a UT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Mike Drob (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20905:
--
Status: Patch Available  (was: Open)

> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...
> 19:08:29 Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pylint
> 19:08:29 Storing debug log for failure in /root/.pip/pip.log
> 19:08:29 The command '/bin/sh -c pip install pylint' returned a non-zero code: 1
> 19:08:29 Total Elapsed time: 0m 3s
> 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Mike Drob (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20905:
--
Attachment: HBASE-20905.branch-1.001.patch

> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...
> 19:08:29 Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pylint
> 19:08:29 Storing debug log for failure in /root/.pip/pip.log
> 19:08:29 The command '/bin/sh -c pip install pylint' returned a non-zero code: 1
> 19:08:29 Total Elapsed time: 0m 3s
> 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20867) RS may get killed while master restarts

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546817#comment-16546817
 ] 

Hadoop QA commented on HBASE-20867:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
28s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
20s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
48s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} hbase-client: The patch generated 0 new + 13 
unchanged - 2 fixed = 13 total (was 15) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} The patch hbase-server passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
58s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m  6s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}143m 
13s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-20867 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931935/HBASE-20867.branch-2.0.005.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 2eec5b2d9380 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HBASE-20867) RS may get killed while master restarts

2018-07-17 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-20867:
---
Attachment: HBASE-20867.branch-2.0.005.patch

> RS may get killed while master restarts
> ---
>
> Key: HBASE-20867
> URL: https://issues.apache.org/jira/browse/HBASE-20867
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-20867.branch-2.0.001.patch, 
> HBASE-20867.branch-2.0.002.patch, HBASE-20867.branch-2.0.003.patch, 
> HBASE-20867.branch-2.0.004.patch, HBASE-20867.branch-2.0.005.patch
>
>
> If the master is dispatching an RPC call to an RS while aborting, a connection 
> exception may be thrown by the RPC layer (an IOException with a "Connection 
> closed" message in this case). The RSProcedureDispatcher will regard it as an 
> un-retryable exception and pass it to UnassignProcedure.remoteCallFailed, 
> which will expire the RS.
> Actually, the RS is perfectly healthy; only the master is restarting.
> I think we should deal with these kinds of connection exceptions in 
> RSProcedureDispatcher and retry the RPC call.
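For illustration only, here is a minimal sketch of the kind of retry classification described above. It uses a hypothetical helper and hypothetical callbacks, not the actual RSProcedureDispatcher code:

{code:java}
import java.io.IOException;
import java.net.ConnectException;

// Sketch: decide whether a remote-call failure looks like a transient
// connection problem (e.g. the other side closed the connection while the
// master was restarting) and should be retried instead of expiring the RS.
// isRetryableConnectionError and onRemoteCallFailed are illustrative names,
// not HBase API.
final class RemoteCallFailureSketch {
  static boolean isRetryableConnectionError(IOException e) {
    if (e instanceof ConnectException) {
      return true;
    }
    String msg = e.getMessage();
    return msg != null && msg.contains("Connection closed");
  }

  static void onRemoteCallFailed(IOException e, Runnable retry, Runnable expireRs) {
    if (isRetryableConnectionError(e)) {
      retry.run();     // re-dispatch the RPC, possibly after a delay
    } else {
      expireRs.run();  // genuinely dead server: expire it
    }
  }
}
{code}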



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20870) Wrong HBase root dir in ITBLL's Search Tool

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546589#comment-16546589
 ] 

Hadoop QA commented on HBASE-20870:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
12s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 6s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 3s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 15s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}105m 
25s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-20870 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931914/HBASE-20870.branch-2.0.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux fc9c10b0693e 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.0 / 5594f0b9fd |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| 

[jira] [Commented] (HBASE-20853) Polish "Add defaults to Table Interface so Implementors don't have to"

2018-07-17 Thread Balazs Meszaros (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546613#comment-16546613
 ] 

Balazs Meszaros commented on HBASE-20853:
-

Thanks [~chia7712], I wrote safer code.

> Polish "Add defaults to Table Interface so Implementors don't have to"
> --
>
> Key: HBASE-20853
> URL: https://issues.apache.org/jira/browse/HBASE-20853
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Major
>  Labels: beginner, beginners
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20853.master.001.patch, 
> HBASE-20853.master.002.patch, HBASE-20853.master.003.patch, 
> HBASE-20853.master.004.patch
>
>
> This issue is to address feedback that came in after commit on the parent 
> (FYI [~chia7712]). See tail of parent issue and amendment attached to parent 
> adding better defaults to the Table Interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19893) restore_snapshot is broken in master branch when region splits

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546670#comment-16546670
 ] 

Hadoop QA commented on HBASE-19893:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}419m 
54s{color} | {color:red} Docker failed to build yetus/hbase:b002b0b. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-19893 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931892/HBASE-19893.master.005.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13650/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> restore_snapshot is broken in master branch when region splits
> --
>
> Key: HBASE-19893
> URL: https://issues.apache.org/jira/browse/HBASE-19893
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Critical
> Attachments: 19893.master.004.patch, 19893.master.004.patch, 
> 19893.master.004.patch, HBASE-19893.master.001.patch, 
> HBASE-19893.master.002.patch, HBASE-19893.master.003.patch, 
> HBASE-19893.master.003.patch, HBASE-19893.master.004.patch, 
> HBASE-19893.master.005.patch, HBASE-19893.master.005.patch, 
> HBASE-19893.master.005.patch, 
> org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas-output.txt
>
>
> When I was investigating HBASE-19850, I found restore_snapshot didn't work in 
> master branch.
>  
> Steps to reproduce are as follows:
> 1. Create a table
> {code:java}
> create "test", "cf"
> {code}
> 2. Load data (2000 rows) to the table
> {code:java}
> (0...2000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> {code}
> 3. Split the table
> {code:java}
> split "test"
> {code}
> 4. Take a snapshot
> {code:java}
> snapshot "test", "snap"
> {code}
> 5. Load more data (2000 rows) to the table and split the table again
> {code:java}
> (2000...4000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> split "test"
> {code}
> 6. Restore the table from the snapshot 
> {code:java}
> disable "test"
> restore_snapshot "snap"
> enable "test"
> {code}
> 7. Scan the table
> {code:java}
> scan "test"
> {code}
> However, this scan returns only 244 rows (it should return 2000 rows) like 
> the following:
> {code:java}
> hbase(main):038:0> scan "test"
> ROW COLUMN+CELL
>  row78 column=cf:col, timestamp=1517298307049, value=val
> 
>   row999 column=cf:col, timestamp=1517298307608, value=val
> 244 row(s)
> Took 0.1500 seconds
> {code}
>  
> Also, the restored table should have 2 online regions but it has 3 online 
> regions.
>  
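For completeness, a small client-side check of the restored table (a sketch assuming the standard HBase 2.x Java client API; "test" is the table from the steps above), which should report 2000 rows and 2 online regions:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;

// Sketch: count rows and online regions of "test" after restore_snapshot.
public class VerifyRestore {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("test");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(tn);
         ResultScanner scanner = table.getScanner(new Scan())) {
      long rows = 0;
      for (Result r : scanner) {
        rows++;
      }
      System.out.println("rows=" + rows
          + ", online regions=" + admin.getRegions(tn).size());
    }
  }
}
{code}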



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546769#comment-16546769
 ] 

Ted Yu commented on HBASE-20905:


lgtm

> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...19:08:29 Command python setup.py egg_info failed with 
> error code 1 in /tmp/pip_build_root/pylint*19:08:29* Storing debug log for 
> failure in /root/.pip/pip.log*19:08:29* The command '/bin/sh -c pip install 
> pylint' returned a non-zero code: 1*19:08:29* 19:08:29 Total Elapsed time: 0m 
> 3s*19:08:29* 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-17 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546584#comment-16546584
 ] 

Zheng Hu commented on HBASE-20565:
--

Uploaded patch v1 and pasted the discussion with [~anoop.hbase] ... 

> What if the order of filters be opposite way in FL?
A good question. I think we need to tell the user explicitly to place the 
count-related filters at the last position. In SQL syntax, we accept the statement 
"select * from table where xxx and yyy limit 1, 100" (the limit is at the end 
of the statement), while SQL such as "select * from table where xxx limit 1, 1000 
and yyy" will not be accepted.
I think it's reasonable to require the count-related filters to be put at the end 
of the sub-filters. 


On Fri, May 25, 2018 at 6:25 PM, Anoop John  wrote:
> if  a cell has been filtered out by filter-A,  then no need to
pass the cell to filter-B and filter-C,  only the included cell set of
filter-A should be passed to filter-B, and only the included cell set
of filter-A & filter-B should be passed to filter-C ...

U mean u propose such a change now?  Then the order of filters matters
right?  Say the count based filter is coming second and the other
(which can filter out some cells) come as 1st, it will work. What if
the order of filters be opposite way in FL?

-Anoop-

On Fri, May 25, 2018 at 12:29 PM, OpenInx  wrote:
> I have to admit that my previous solution was one-sided...
> Not only the ColumnPaginationFilter has the problem, other counter-related
> filters also has the problem too.
>
>> We have 2 filters in a FL. We pass cell 1 and 2. First filter select cell1
>> but been filtered out by F2.  Now we need to tell both filters that we
>> have excludes this cell.  This will be useful for filters which work on
>> counting  basis.  It can reduce the counter which it would have advanced.
>> Pls see the possibility.
>
> Assume that FilterList =  filter-A  AND ColumnCountGetFilter ,  if cell x
> has been filtered out by filter-A,  then what the expected return code do
> the ColumnCountGetFilter#filterKeyValue shoud return ?
> In theory, the count in ColumnCountGetFilter  should not increment when
> checking the cell x .  So what is the purpose of passing the cell  x to
> ColumnCountGetFilter#filterKeyValue ?
> To get the return code from ColumnCountGetFilter for max the forward step ?
>
> Now, I'm thinking that the implementation in branch-1.2  is more reasonable,
> Assume that filterList = filter-A   AND   filter-B  AND filter-C  AND ,
> if  a cell has been filtered out by filter-A,  then no need to
> pass the cell to filter-B and filter-C,  only the included cell set of
> filter-A should be passed to filter-B, and only the included cell set of
> filter-A & filter-B should be passed to filter-C 
>
> The max rule can still working,  but only the include* return code should be
> merged into a max return code.
>
> I think the semantic is more reasonable.
>
>
> On Thu, May 24, 2018 at 4:31 PM, Anoop John  wrote:
>>
>> The offset is the cell offset in  a row na.  This says we already fetched
>> till there. So ya of there is another filter also along with this pagination
>> filter, it must be hard for the pagination filter to decide the column
>> offset for the next request.  So ya ideally the column offset might work
>> there.
>> But the issue is we can not really generalize this. It depends on the way
>> the col offset and column value offset is been implemented in pagination
>> filter.
>>
>> I kind of thinking that we need a generic framework change now. If we pass
>> all cells to all filters ( which is correct also) then there should be a way
>> later with which we say all filters that we decided later that this cell is
>> not included in result.
>>
>> We have 2 filters in a FL. We pass cell 1 and 2. First filter select cell1
>> but been filtered out by F2.  Now we need to tell both filters that we have
>> excludes this cell.  This will be useful for filters which work on counting
>> basis.  It can reduce the counter which it would have advanced.  Pls see the
>> possibility.
>>
>> I think previously the issue was the order of filters in FL mattered as we
>> wont pass all cells to all filters.  Now that is not an issue. But the later
>> filters possibly filtering out cells still an issue.   WDYT?
>>
>> Anoop
>>
>>
>> On Wednesday, May 23, 2018, OpenInx  wrote:
>>>
>>> > That previously if we have A and
>>> > B in FilterList with AND and if A is not including a cell, we were not
>>> > passing that to B?   (In 1.2  I mean)  and in later versions we start
>>> > passing it?
>>>
>>> Yes,  you can see the code in branch-1.2 [1], and in master branch [2].
>>>
>>> > I think it is a bug ..
>>> Sure, it's a bug.
>>>
>>> > Say if we have another
>>> > filter which might filter out some in between cells, then also we need
>>> > to have 5 cells to be included.
>>> If so ,  the offset is meaningless now, only the limit is  

[jira] [Comment Edited] (HBASE-20565) ColumnRangeFilter combined with ColumnPaginationFilter can produce incorrect result since 1.4

2018-07-17 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546584#comment-16546584
 ] 

Zheng Hu edited comment on HBASE-20565 at 7/17/18 1:19 PM:
---

Uploaded patch v1. Let me explain the core idea: 

Assume that filterList = filter-A AND filter-B AND filter-C AND ...
If a cell has been filtered out by filter-A, there is no need to pass the cell to 
filter-B and filter-C: only the cell set included by filter-A should be passed to 
filter-B, and only the cell set included by both filter-A and filter-B should be 
passed to filter-C, and so on.

The max rule can still work, but only the include* return codes should be 
merged into a max return code. 

The problem is that the order of filters may result in different cells being 
returned, so we need to tell the user explicitly to place the count-related 
filters at the last position. In SQL syntax, we accept the statement:
{code}
 select * from table where xxx and yyy  limit 1, 100,
{code}
the limit is at the end of the statement,

SQL such as: 
{code}
select * from table where xxx limit 1, 1000 and yyy
{code}
will not be accepted. 
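To make the idea concrete, here is a minimal, self-contained sketch of the AND-list evaluation described above (hypothetical types, not the actual FilterList code): later sub-filters only see cells that every earlier sub-filter included, and the "max" merge is applied only across include-style return codes.

{code:java}
import java.util.List;

// Sketch only: short-circuit AND evaluation of a filter list.
// CellCheck and Code are hypothetical stand-ins, not HBase's Filter API.
final class AndListSketch {
  enum Code { INCLUDE, INCLUDE_AND_NEXT_COL, SKIP, NEXT_COL, NEXT_ROW }

  interface CellCheck {
    Code filterCell(Object cell);
  }

  static Code filterCell(List<CellCheck> subFilters, Object cell) {
    Code merged = Code.INCLUDE;
    for (CellCheck f : subFilters) {
      Code rc = f.filterCell(cell);
      if (rc != Code.INCLUDE && rc != Code.INCLUDE_AND_NEXT_COL) {
        // Cell excluded: stop here, so later (e.g. counting) filters never see it.
        return rc;
      }
      // Merge only include-style codes; keep the "max" one (here by enum order).
      if (rc.ordinal() > merged.ordinal()) {
        merged = rc;
      }
    }
    return merged;
  }
}
{code}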


was (Author: openinx):
Upload the patch.v1, and  pasted the discuss with [~anoop.hbase] ... 

> What if the order of filters be opposite way in FL?
A good question,   I think we need to tell the user explicitly to place the 
count-related filters at the last position.  In SQL syntax,  we accept the sql 
: select * from table where xxx and xxx  limit 1, 100, the limit is at the end 
of the statement,  sql such as: select * from table where xxx limit 1, 1000 and 
xx will not be accepted.  
I think it's meaningful to require the count-related filters put at the end of 
sub-filters. 


On Fri, May 25, 2018 at 6:25 PM, Anoop John  wrote:
> if  a cell has been filtered out by filter-A,  then no need to
pass the cell to filter-B and filter-C,  only the included cell set of
filter-A should be passed to filter-B, and only the included cell set
of filter-A & filter-B should be passed to filter-C ...

U mean u propose such a change now?  Then the order of filters matters
right?  Say the count based filter is coming second and the other
(which can filter out some cells) come as 1st, it will work. What if
the order of filters be opposite way in FL?

-Anoop-

On Fri, May 25, 2018 at 12:29 PM, OpenInx  wrote:
> I have to admit that my previous solution was one-sided...
> Not only the ColumnPaginationFilter has the problem, other counter-related
> filters also has the problem too.
>
>> We have 2 filters in a FL. We pass cell 1 and 2. First filter select cell1
>> but been filtered out by F2.  Now we need to tell both filters that we
>> have excludes this cell.  This will be useful for filters which work on
>> counting  basis.  It can reduce the counter which it would have advanced.
>> Pls see the possibility.
>
> Assume that FilterList =  filter-A  AND ColumnCountGetFilter ,  if cell x
> has been filtered out by filter-A,  then what the expected return code do
> the ColumnCountGetFilter#filterKeyValue shoud return ?
> In theory, the count in ColumnCountGetFilter  should not increment when
> checking the cell x .  So what is the purpose of passing the cell  x to
> ColumnCountGetFilter#filterKeyValue ?
> To get the return code from ColumnCountGetFilter for max the forward step ?
>
> Now, I'm thinking that the implementation in branch-1.2  is more reasonable,
> Assume that filterList = filter-A   AND   filter-B  AND filter-C  AND ,
> if  a cell has been filtered out by filter-A,  then no need to
> pass the cell to filter-B and filter-C,  only the included cell set of
> filter-A should be passed to filter-B, and only the included cell set of
> filter-A & filter-B should be passed to filter-C 
>
> The max rule can still working,  but only the include* return code should be
> merged into a max return code.
>
> I think the semantic is more reasonable.
>
>
> On Thu, May 24, 2018 at 4:31 PM, Anoop John  wrote:
>>
>> The offset is the cell offset in  a row na.  This says we already fetched
>> till there. So ya of there is another filter also along with this pagination
>> filter, it must be hard for the pagination filter to decide the column
>> offset for the next request.  So ya ideally the column offset might work
>> there.
>> But the issue is we can not really generalize this. It depends on the way
>> the col offset and column value offset is been implemented in pagination
>> filter.
>>
>> I kind of thinking that we need a generic framework change now. If we pass
>> all cells to all filters ( which is correct also) then there should be a way
>> later with which we say all filters that we decided later that this cell is
>> not included in result.
>>
>> We have 2 filters in a FL. We pass cell 1 and 2. First filter select cell1
>> but been filtered out by F2.  Now we need to tell both filters that we have
>> excludes this cell.  This will 

[jira] [Updated] (HBASE-20846) Restore procedure locks when master restarts

2018-07-17 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20846:
--
Attachment: HBASE-20846-v3.patch

> Restore procedure locks when master restarts
> 
>
> Key: HBASE-20846
> URL: https://issues.apache.org/jira/browse/HBASE-20846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Allan Yang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20846-v1.patch, HBASE-20846-v2.patch, 
> HBASE-20846-v3.patch, HBASE-20846.branch-2.0.002.patch, 
> HBASE-20846.branch-2.0.patch, HBASE-20846.patch
>
>
> Found this one when investigating a ModifyTableProcedure that got stuck while 
> there was a MoveRegionProcedure going on after a master restart.
> This issue can be solved by HBASE-20752, but I discovered something else.
> Before a MoveRegionProcedure can execute, it will hold the table's shared 
> lock. So, when an UnassignProcedure is spawned, it will not check the 
> table's shared lock, since it is sure that its parent (MoveRegionProcedure) has 
> acquired the table's lock.
> {code:java}
>   // If there is a parent procedure, it would have already taken the xlock, so no
>   // need to take the shared lock here. Otherwise, take the shared lock.
>   if (!procedure.hasParent()
>       && waitTableQueueSharedLock(procedure, table) == null) {
>     return true;
>   }
> {code}
> But that is not the case when the master is restarted. The child 
> procedure (UnassignProcedure) will be executed first after restart. Though it 
> has a parent (MoveRegionProcedure), the parent apparently doesn't hold the 
> table's lock.
> So, since it began to execute without holding the table's shared lock, a 
> ModifyTableProcedure can acquire the table's exclusive lock and execute at the 
> same time, which is not possible if the master was not restarted.
> This would cause a stuck procedure before HBASE-20752. Since HBASE-20752 is 
> fixed, I wrote a simple UT to repro this case.
> I think we don't have to check the parent for the table's shared lock. It is a 
> shared lock, right? I think we can acquire it every time we need it.
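As a rough illustration of the adjustment suggested above (a sketch only, reusing the names from the quoted snippet, not the actual patch), the hasParent() shortcut would simply be dropped so the shared lock is waited for unconditionally:

{code:java}
  // Sketch only: always wait for the table's shared lock, even for child
  // procedures, since a restarted master cannot rely on the parent still
  // holding it. Names come from the snippet above; this is not the committed fix.
  if (waitTableQueueSharedLock(procedure, table) == null) {
    return true;
  }
{code}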



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20903) backport HBASE-20792 "info:servername and info:sn inconsistent for OPEN region" to branch-2.0

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546763#comment-16546763
 ] 

Hadoop QA commented on HBASE-20903:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
50s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
13s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}106m 
37s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-20903 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931936/HBASE-20903.branch-2.0.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 8b39477278b9 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.0 / 5594f0b9fd |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13656/testReport/ |
| Max. process+thread count | 3571 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13656/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically 

[jira] [Commented] (HBASE-19893) restore_snapshot is broken in master branch when region splits

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546904#comment-16546904
 ] 

Hadoop QA commented on HBASE-19893:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 8s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 38s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}119m 
19s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-19893 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931948/HBASE-19893.master.005.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-143-generic #192-Ubuntu SMP Tue 
Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2997b6d |
| maven | version: Apache Maven 3.0.5 
(r01de14724cdef164cd33c7c8c2fe155faf9602da; 2013-02-19 13:51:28+) |
| Default Java | 1.8.0_172 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13659/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13659/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> restore_snapshot is broken in master branch when region splits
> --
>
> Key: HBASE-19893
> URL: https://issues.apache.org/jira/browse/HBASE-19893
> Project: 

[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546882#comment-16546882
 ] 

Hadoop QA commented on HBASE-20905:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/13663/console in case of 
problems.


> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...19:08:29 Command python setup.py egg_info failed with 
> error code 1 in /tmp/pip_build_root/pylint*19:08:29* Storing debug log for 
> failure in /root/.pip/pip.log*19:08:29* The command '/bin/sh -c pip install 
> pylint' returned a non-zero code: 1*19:08:29* 19:08:29 Total Elapsed time: 0m 
> 3s*19:08:29* 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546907#comment-16546907
 ] 

Hadoop QA commented on HBASE-20905:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
6s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:1f3957d |
| JIRA Issue | HBASE-20905 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931950/HBASE-20905.branch-1.001.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux ebb00f1f2d07 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-1 / 7211d13 |
| maven | version: Apache Maven 3.0.5 |
| shellcheck | v0.4.7 |
| Max. process+thread count | 37 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13663/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...19:08:29 Command python setup.py egg_info failed with 
> error code 1 in /tmp/pip_build_root/pylint*19:08:29* Storing debug log for 
> failure in /root/.pip/pip.log*19:08:29* The command '/bin/sh -c pip install 
> pylint' returned a non-zero code: 1*19:08:29* 19:08:29 Total Elapsed time: 0m 
> 3s*19:08:29* 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547022#comment-16547022
 ] 

Hudson commented on HBASE-20905:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1135 (See 
[https://builds.apache.org/job/HBase-1.2-IT/1135/])
HBASE-20905 pin pylint to 1.x in branch-1 (mdrob: rev 
34a9b272df9c7018d1080251f3ba1ccc77e9bbf6)
* (edit) dev-support/docker/Dockerfile


> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.6
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...19:08:29 Command python setup.py egg_info failed with 
> error code 1 in /tmp/pip_build_root/pylint*19:08:29* Storing debug log for 
> failure in /root/.pip/pip.log*19:08:29* The command '/bin/sh -c pip install 
> pylint' returned a non-zero code: 1*19:08:29* 19:08:29 Total Elapsed time: 0m 
> 3s*19:08:29* 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546986#comment-16546986
 ] 

Hudson commented on HBASE-20905:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #436 (See 
[https://builds.apache.org/job/HBase-1.3-IT/436/])
HBASE-20905 pin pylint to 1.x in branch-1 (mdrob: rev 
53dba694d8e35d250b849e1965a94b66e75dd177)
* (edit) dev-support/docker/Dockerfile


> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.6
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...19:08:29 Command python setup.py egg_info failed with 
> error code 1 in /tmp/pip_build_root/pylint*19:08:29* Storing debug log for 
> failure in /root/.pip/pip.log*19:08:29* The command '/bin/sh -c pip install 
> pylint' returned a non-zero code: 1*19:08:29* 19:08:29 Total Elapsed time: 0m 
> 3s*19:08:29* 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19893) restore_snapshot is broken in master branch when region splits

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546976#comment-16546976
 ] 

Hadoop QA commented on HBASE-19893:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
10s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}147m 27s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-19893 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931948/HBASE-19893.master.005.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e533c622722d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2997b6d071 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13662/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13662/testReport/ |
| Max. process+thread count | 4445 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| 

[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546864#comment-16546864
 ] 

Mike Drob commented on HBASE-20905:
---

Not sure why precommit didn't start this automatically, launched a job manually 
for it though.

> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...19:08:29 Command python setup.py egg_info failed with 
> error code 1 in /tmp/pip_build_root/pylint*19:08:29* Storing debug log for 
> failure in /root/.pip/pip.log*19:08:29* The command '/bin/sh -c pip install 
> pylint' returned a non-zero code: 1*19:08:29* 19:08:29 Total Elapsed time: 0m 
> 3s*19:08:29* 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20860) Merged region's RIT state may not be cleaned after master restart

2018-07-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546943#comment-16546943
 ] 

Hudson commented on HBASE-20860:


Results for branch branch-2
[build #993 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/993/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/993//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/993//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/993//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Merged region's RIT state may not be cleaned after master restart
> -
>
> Key: HBASE-20860
> URL: https://issues.apache.org/jira/browse/HBASE-20860
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1
>
> Attachments: HBASE-20860.branch-2.0.002.patch, 
> HBASE-20860.branch-2.0.003.patch, HBASE-20860.branch-2.0.004.patch, 
> HBASE-20860.branch-2.0.005.patch, HBASE-20860.branch-2.0.patch
>
>
> In MergeTableRegionsProcedure, we issue UnassignProcedures to offline the 
> regions to merge. But if we restart the master just after 
> MergeTableRegionsProcedure has finished these two UnassignProcedures and 
> before it can delete their meta entries, the new master will find these two 
> regions are CLOSED but no procedures are attached to them. They will be 
> regarded as RIT regions and nobody will clean the RIT state for them later.
> A quick way to resolve this stuck situation in a production env is 
> restarting the master again, since the meta entries are deleted by 
> MergeTableRegionsProcedure. Here, I offer a fix for this problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Mike Drob (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20905:
--
   Resolution: Fixed
Fix Version/s: 1.4.6
   1.3.3
   1.2.7
   Status: Resolved  (was: Patch Available)

> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.6
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...19:08:29 Command python setup.py egg_info failed with 
> error code 1 in /tmp/pip_build_root/pylint*19:08:29* Storing debug log for 
> failure in /root/.pip/pip.log*19:08:29* The command '/bin/sh -c pip install 
> pylint' returned a non-zero code: 1*19:08:29* 19:08:29 Total Elapsed time: 0m 
> 3s*19:08:29* 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20846) Restore procedure locks when master restarts

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547003#comment-16547003
 ] 

Hadoop QA commented on HBASE-20846:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 4s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch hbase-protocol-shaded passed checkstyle 
{color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} hbase-procedure: The patch generated 1 new + 28 
unchanged - 16 fixed = 29 total (was 44) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} hbase-server: The patch generated 0 new + 316 
unchanged - 7 fixed = 316 total (was 323) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 1s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m  4s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}192m  1s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 2s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}244m 

[jira] [Commented] (HBASE-20855) PeerConfigTracker only support one listener will cause problem when there is a recovered replication queue

2018-07-17 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546967#comment-16546967
 ] 

Mike Drob commented on HBASE-20855:
---

Docker build should be fixed; I relaunched a precommit run with your latest 
patch, [~tianjingyun].

> PeerConfigTracker only support one listener will cause problem when there is 
> a recovered replication queue
> --
>
> Key: HBASE-20855
> URL: https://issues.apache.org/jira/browse/HBASE-20855
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-20855.branch-1.001.patch, 
> HBASE-20855.branch-1.002.patch, HBASE-20855.branch-1.003.patch, 
> HBASE-20855.branch-1.004.patch, HBASE-20855.branch-1.005.patch
>
>
> {code}
> public void init(Context context) throws IOException {
>   this.ctx = context;
>   if (this.ctx != null) {
>     ReplicationPeer peer = this.ctx.getReplicationPeer();
>     if (peer != null) {
>       peer.trackPeerConfigChanges(this);
>     } else {
>       LOG.warn("Not tracking replication peer config changes for Peer Id "
>           + this.ctx.getPeerId() + " because there's no such peer");
>     }
>   }
> }
> {code}
> As we know, a replication source will register itself with the PeerConfigTracker in 
> ReplicationPeer. When there are one or more recovered queues, each queue will 
> generate a new replication source, but they all share the same ReplicationPeer. 
> Then, when each source calls setListener, the newly generated one overwrites the 
> older one. Thus only one listener will receive the peer config change 
> notification.
> {code}
> public synchronized void setListener(ReplicationPeerConfigListener listener){
>  this.listener = listener;
> }
> {code}
>  
> To solve this, PeerConfigTracker needs to support multiple listeners, and a 
> listener should be removed when the replication endpoint terminates.
> I will upload a patch later with the fix and a UT.
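
A minimal, hypothetical sketch of that direction, just to make the proposal concrete; the stand-in types, class name, and method names below are illustrative and are not the attached patch:

{code:java}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Stand-ins for the HBase types discussed above; names are illustrative only.
interface PeerConfig {}
interface PeerConfigListener { void peerConfigUpdated(PeerConfig newConfig); }

// Sketch of a tracker that can notify every registered replication source and
// lets a source deregister itself when its endpoint terminates.
class MultiListenerPeerConfigTracker {
  private final Set<PeerConfigListener> listeners =
      Collections.newSetFromMap(new ConcurrentHashMap<>());

  // Each replication source (including ones created for recovered queues)
  // registers here instead of overwriting a single listener field.
  void addListener(PeerConfigListener listener) {
    listeners.add(listener);
  }

  // Called when a replication endpoint terminates.
  void removeListener(PeerConfigListener listener) {
    listeners.remove(listener);
  }

  // A peer config change is delivered to every registered listener.
  void notifyListeners(PeerConfig newConfig) {
    for (PeerConfigListener listener : listeners) {
      listener.peerConfigUpdated(newConfig);
    }
  }
}
{code}

A CopyOnWriteArraySet would serve equally well; the point is only that registration adds to a collection instead of overwriting a single field.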



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20448) update ref guide to expressly use shaded clients for examples

2018-07-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547083#comment-16547083
 ] 

stack commented on HBASE-20448:
---

Yeah, a bit of doc would help here. I've not been paying attention, so I'm hacking 
it since I can't find "guidance" for running ITBLL, say, in the new regime (trying 
to test the 2.1.0RC). For reference, here is what I used to do:

{code}
export 
HADOOP_CLASSPATH="${HOME}/conf_hadoop:${HOME}/conf_hbase:`${HOME}/hbase/bin/hbase
 classpath`"
${HOME}/hadoop/bin/hadoop --config ${HOME}/conf_hadoop 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
serverKilling   -monkeyProps monkey.properties Generator 40 2500 g.$date
{code}


... but bin/hbase classpath just returns shaded clients now.

> update ref guide to expressly use shaded clients for examples
> -
>
> Key: HBASE-20448
> URL: https://issues.apache.org/jira/browse/HBASE-20448
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, documentation, mapreduce
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.1.1
>
>
> the whole mapreduce section, especially, should be using the shaded version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20906) Hbase shell using Cygwin gives the error Could not find or load main class org.jruby.Main

2018-07-17 Thread Glenn Kruszewski (JIRA)
Glenn Kruszewski created HBASE-20906:


 Summary: Hbase shell using Cygwin gives the error Could not find 
or load main class org.jruby.Main
 Key: HBASE-20906
 URL: https://issues.apache.org/jira/browse/HBASE-20906
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.1
 Environment: Windows 10
Hbase 2.0.1
Hadoop 2.7.6
Cygwin 2.9.0
Reporter: Glenn Kruszewski


Executing the hbase shell on Windows using Cygwin does not properly resolve the 
Java classpath. This is caused by cygpath calls converting to Windows paths 
before the jar files are located. Line 241 in the hbase script has cygpath 
convert CLASSPATH, HBASE_HOME, and HBASE_LOG_DIR; later in the file a for loop 
is used to locate the jar files, but since it uses the Windows path none of the 
jar files are found, giving the error 'Could not find or load main class 
org.jruby.Main.'

As a workaround, I've updated hbase-env.sh to export 
HBASE_CLASSPATH=/usr/local/hbase/lib/ruby/*:/usr/local/hbase/lib/*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20846) Restore procedure locks when master restarts

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547213#comment-16547213
 ] 

Hadoop QA commented on HBASE-20846:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
21s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch hbase-protocol-shaded passed checkstyle 
{color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} hbase-procedure: The patch generated 1 new + 28 
unchanged - 16 fixed = 29 total (was 44) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} hbase-server: The patch generated 0 new + 316 
unchanged - 7 fixed = 316 total (was 323) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 34s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
47s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}196m 11s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}256m 

[jira] [Updated] (HBASE-20906) Hbase shell using Cygwin gives the error Could not find or load main class org.jruby.Main

2018-07-17 Thread Glenn Kruszewski (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glenn Kruszewski updated HBASE-20906:
-
Description: 
Executing the hbase shell on Windows using Cygwin does not properly resolve the 
Java classpath. This is caused by cygpath calls converting to Windows paths 
before the jar files are located. Line 241 in the hbase script has cygpath 
convert CLASSPATH, HBASE_HOME, and HBASE_LOG_DIR; later in the file a for loop 
is used to locate the jar files, but since it uses the Windows path none of the 
jar files are found, giving the error 'Could not find or load main class 
org.jruby.Main.'

As a workaround, I've updated hbase-env.sh to export 
HBASE_CLASSPATH=/usr/local/hbase/lib/ruby/*:/usr/local/hbase/lib/***

  was:
Executing hbase shell in Windows using Cygwin does not properly resolve the 
java classpath. This is caused by having cygpath calls converting to windows 
paths prior to locating the jar files. Line 241 in the hbase script has cygpath 
converting the CLASSPATH, HBASE_HOME, and HBASE_LOG_DIR, later in the file a 
for loop is used to locate jar files, however this is using the windows path so 
none for the jar files will be found, giving the error 'Could not find or load 
main class org.jruby.Main.'

As a work around, I've updated the hbase-env.sh to export 
HBASE_CLASSPATH=/usr/local/hbase/lib/ruby/*:/usr/local/hbase/lib/*


> Hbase shell using Cygwin gives the error Could not find or load main class 
> org.jruby.Main
> -
>
> Key: HBASE-20906
> URL: https://issues.apache.org/jira/browse/HBASE-20906
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.1
> Environment: Windows 10
> Hbase 2.0.1
> Hadoop 2.7.6
> Cygwin 2.9.0
>Reporter: Glenn Kruszewski
>Priority: Minor
>
> Executing the hbase shell on Windows using Cygwin does not properly resolve the 
> Java classpath. This is caused by cygpath calls converting to Windows paths 
> before the jar files are located. Line 241 in the hbase script has cygpath 
> convert CLASSPATH, HBASE_HOME, and HBASE_LOG_DIR; later in the file a for loop 
> is used to locate the jar files, but since it uses the Windows path none of the 
> jar files are found, giving the error 'Could not find or load main class 
> org.jruby.Main.'
> As a workaround, I've updated hbase-env.sh to export 
> HBASE_CLASSPATH=/usr/local/hbase/lib/ruby/*:/usr/local/hbase/lib/***



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20448) update ref guide to expressly use shaded clients for examples

2018-07-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20448:
--
Priority: Critical  (was: Major)

> update ref guide to expressly use shaded clients for examples
> -
>
> Key: HBASE-20448
> URL: https://issues.apache.org/jira/browse/HBASE-20448
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, documentation, mapreduce
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.1.1
>
>
> the whole mapreduce section, especially, should be using the shaded version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20615) emphasize use of shaded client jars when they're present in an install

2018-07-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20615:
--
Release Note: 


HBase's built-in scripts now rely on the downstream facing shaded artifacts 
where possible. In particular, of interest to downstream users, the `hbase 
classpath` and `hbase mapredcp` commands now return the relevant shaded client 
artifact and only those third-party jars needed to make use of them (e.g. 
slf4j-api, commons-logging, htrace, etc).

Downstream users should note that by default the `hbase classpath` command will 
treat having `hadoop` on the shell's PATH as an implicit request to include the 
output of the `hadoop classpath` command in the returned classpath. This 
long-existing behavior can be opted out of by setting the environment variable 
`HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP` to the value "true". For example: 
`HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true" bin/hbase classpath`.

  was:


HBase's built in scripts now rely on the downstream facing shaded artifacts 
where possible. In particular interest to downstream users, the `hbase 
classpath` and `hbase mapredcp` commands now return the relevant shaded client 
artifact and only those third paty jars needed to make use of them (e.g. 
slf4j-api, commons-logging, htrace, etc).

Downstream users should note that by default the `hbase classpath` command will 
treat having `hadoop` on the shell's PATH as an implicit request to include the 
output of the `hadoop classpath` command in the returned classpath. This 
long-existing behavior can be opted out of by setting the environment variable 
`HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP` to the value "true". For example: 
`HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true" bin/hbase classpath`.


> emphasize use of shaded client jars when they're present in an install
> --
>
> Key: HBASE-20615
> URL: https://issues.apache.org/jira/browse/HBASE-20615
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Client, Usability
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20615.0.patch, HBASE-20615.1.patch, 
> HBASE-20615.2.patch
>
>
> Working through setting up an IT for our shaded artifacts in HBASE-20334 
> makes our lack of packaging seem like an oversight. While I could work around 
> by pulling the shaded clients out of whatever build process built the 
> convenience binary that we're trying to test, it seems v awkward.
> After reflecting on it more, it makes more sense to me for there to be a 
> common place in the install that folks running jobs against the cluster can 
> rely on. If they need to run without a full hbase install, that should still 
> work fine via e.g. grabbing from the maven repo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547057#comment-16547057
 ] 

Hudson commented on HBASE-20905:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1134 (See 
[https://builds.apache.org/job/HBase-1.2-IT/1134/])
HBASE-20905 pin pylint to 1.x in branch-1 (mdrob: rev 
34a9b272df9c7018d1080251f3ba1ccc77e9bbf6)
* (edit) dev-support/docker/Dockerfile


> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.6
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...
> 19:08:29 Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pylint
> 19:08:29 Storing debug log for failure in /root/.pip/pip.log
> 19:08:29 The command '/bin/sh -c pip install pylint' returned a non-zero code: 1
> 19:08:29 Total Elapsed time: 0m 3s
> 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20697) Can't cache All region locations of the specify table by calling table.getRegionLocator().getAllRegionLocations()

2018-07-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547166#comment-16547166
 ] 

stack commented on HBASE-20697:
---

bq. The fix is generic, getAllRegionLocations is not caching all regions' 
locations, instead, it only caches the first entry

Thanks [~huaxiang]. I was having trouble believing we were so broke.

+1 on backport to branch-1.x.
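
For context, here is a rough sketch of that direction (cache an entry per region rather than a single entry for the whole table). It mirrors the branch-1 snippet quoted below and is not the committed patch; the single-location RegionLocations constructor is an assumption.

{code:java}
// Sketch only: cache a RegionLocations per region so that the floorEntry(row)
// lookup in getCachedLocation (quoted below) can match any row, rather than
// caching one entry keyed by the first region's start key.
public List<HRegionLocation> getAllRegionLocations() throws IOException {
  TableName tableName = getName();
  NavigableMap<HRegionInfo, ServerName> locations =
      MetaScanner.allTableRegions(this.connection, tableName);
  ArrayList<HRegionLocation> regions = new ArrayList<>(locations.size());
  for (Entry<HRegionInfo, ServerName> entry : locations.entrySet()) {
    HRegionLocation location = new HRegionLocation(entry.getKey(), entry.getValue());
    regions.add(location);
    // One cache entry per region, each keyed by its own start key.
    connection.cacheLocation(tableName, new RegionLocations(location));
  }
  return regions;
}
{code}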

> Can't cache All region locations of the specify table by calling 
> table.getRegionLocator().getAllRegionLocations()
> -
>
> Key: HBASE-20697
> URL: https://issues.apache.org/jira/browse/HBASE-20697
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.1
>Reporter: zhaoyuan
>Assignee: zhaoyuan
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.6, 2.0.2, 2.2.0, 2.1.1
>
> Attachments: HBASE-20697.branch-1.2.001.patch, 
> HBASE-20697.branch-1.2.002.patch, HBASE-20697.branch-1.2.003.patch, 
> HBASE-20697.branch-1.2.004.patch, HBASE-20697.branch-1.addendum.patch, 
> HBASE-20697.master.001.patch, HBASE-20697.master.002.patch, 
> HBASE-20697.master.002.patch, HBASE-20697.master.003.patch
>
>
> When we upgrade and restart a new version of an application that reads and 
> writes to HBase, we get some operation timeouts. The timeouts are expected 
> because when the application restarts it will not hold any region location 
> cache and has to communicate with zk and the meta regionserver to get region 
> locations.
> We want to avoid these timeouts, so we do warmup work; as far as I am 
> concerned, the method table.getRegionLocator().getAllRegionLocations() will 
> fetch all region locations and cache them. However, it didn't work well. 
> There were still a lot of timeouts, which confused me. 
> I dug into the source code and found the following:
> {code:java}
> // code placeholder
> public List<HRegionLocation> getAllRegionLocations() throws IOException {
>   TableName tableName = getName();
>   NavigableMap<HRegionInfo, ServerName> locations =
>       MetaScanner.allTableRegions(this.connection, tableName);
>   ArrayList<HRegionLocation> regions = new ArrayList<>(locations.size());
>   for (Entry<HRegionInfo, ServerName> entry : locations.entrySet()) {
>     regions.add(new HRegionLocation(entry.getKey(), entry.getValue()));
>   }
>   if (regions.size() > 0) {
>     connection.cacheLocation(tableName, new RegionLocations(regions));
>   }
>   return regions;
> }
>
> // In MetaCache
> public void cacheLocation(final TableName tableName, final RegionLocations locations) {
>   byte [] startKey = locations.getRegionLocation().getRegionInfo().getStartKey();
>   ConcurrentMap<byte[], RegionLocations> tableLocations = getTableLocations(tableName);
>   RegionLocations oldLocation = tableLocations.putIfAbsent(startKey, locations);
>   boolean isNewCacheEntry = (oldLocation == null);
>   if (isNewCacheEntry) {
>     if (LOG.isTraceEnabled()) {
>       LOG.trace("Cached location: " + locations);
>     }
>     addToCachedServers(locations);
>     return;
>   }
> {code}
> It will collect all regions into one RegionLocations object and only cache 
> the first non-null region location; then when we put or get to hbase, we 
> call getCachedLocation():
> {code:java}
> // code placeholder
> public RegionLocations getCachedLocation(final TableName tableName, final byte [] row) {
>   ConcurrentNavigableMap<byte[], RegionLocations> tableLocations =
>       getTableLocations(tableName);
>   Entry<byte[], RegionLocations> e = tableLocations.floorEntry(row);
>   if (e == null) {
> if (metrics!= null) metrics.incrMetaCacheMiss();
> return null;
>   }
>   RegionLocations possibleRegion = e.getValue();
>   // make sure that the end key is greater than the row we're looking
>   // for, otherwise the row actually belongs in the next region, not
>   // this one. the exception case is when the endkey is
>   // HConstants.EMPTY_END_ROW, signifying that the region we're
>   // checking is actually the last region in the table.
>   byte[] endKey = 
> possibleRegion.getRegionLocation().getRegionInfo().getEndKey();
>   if (Bytes.equals(endKey, HConstants.EMPTY_END_ROW) ||
>   getRowComparator(tableName).compareRows(
>   endKey, 0, endKey.length, row, 0, row.length) > 0) {
> if (metrics != null) metrics.incrMetaCacheHit();
> return possibleRegion;
>   }
>   // Passed all the way through, so we got nothing - complete cache miss
>   if (metrics != null) metrics.incrMetaCacheMiss();
>   return null;
> }
> {code}
> It will choose the first location as possibleRegion, and possibly it will 
> mismatch.
> So did I forget something, or am I wrong somewhere? If this is indeed a bug, 
> I think it can be fixed without much difficulty.
> Hope committers and the PMC review this!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20856) PITA having to set WAL provider in two places

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547198#comment-16547198
 ] 

Tak Lon (Stephen) Wu commented on HBASE-20856:
--

Please ignore my previous comment. I figured it out after looking at the logic 
again; I will try to come up with the patch and attach it later this week.

> PITA having to set WAL provider in two places
> -
>
> Key: HBASE-20856
> URL: https://issues.apache.org/jira/browse/HBASE-20856
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability, wal
>Reporter: stack
>Priority: Minor
> Fix For: 2.0.2, 2.2.0, 2.1.1
>
>
> Courtesy of [~elserj], I learned that to change the WAL provider we need to set it 
> in two places: both hbase.wal.meta_provider and hbase.wal.provider. The operator 
> should only have to set it in one place; hbase.wal.meta_provider should pick up 
> the general setting unless it is explicitly set.
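
A minimal sketch of that fallback, assuming a plain Configuration lookup; the default provider string is only a placeholder:

{code:java}
import org.apache.hadoop.conf.Configuration;

class WalProviderFallback {
  // hbase.wal.meta_provider picks up whatever hbase.wal.provider resolves to,
  // and only differs when the operator sets it explicitly.
  static String metaWalProvider(Configuration conf) {
    String general = conf.get("hbase.wal.provider", "filesystem"); // placeholder default
    return conf.get("hbase.wal.meta_provider", general);
  }
}
{code}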



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-20401:
-
Attachment: HBASE-20401.branch-1.001.patch

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch, 
> HBASE-20401.master.001.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we need to wait for 
> a max of 60 seconds. Here, 60 seconds might be too long.
>  # getResult waits in periods of 500 milliseconds, but fs.delete has 
> completed and setFromClear is set without a notify() yet. One might want to 
> tune the 500 milliseconds down to 200 or less.
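
To illustrate what making these configurable could look like, a sketch with made-up property names (not necessarily those used in the attached patches):

{code:java}
import org.apache.hadoop.conf.Configuration;

class CleanerContextTimeouts {
  // Property names are illustrative placeholders.
  static final String MAX_WAIT_KEY = "hbase.cleaner.delete.max.wait.msec";
  static final String POLL_KEY = "hbase.cleaner.delete.poll.interval.msec";

  final long maxWaitMs;           // previously a hard-coded 60 seconds
  final long waitIfNotFinishedMs; // previously a hard-coded 500 milliseconds

  CleanerContextTimeouts(Configuration conf) {
    this.maxWaitMs = conf.getLong(MAX_WAIT_KEY, 60 * 1000L);
    this.waitIfNotFinishedMs = conf.getLong(POLL_KEY, 500L);
  }
}
{code}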



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-20401:
-
Attachment: (was: HBASE-20401.master.001.patch)

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we need to wait for 
> a max of 60 seconds. Here, 60 seconds might be too long.
>  # getResult waits in periods of 500 milliseconds, but fs.delete has 
> completed and setFromClear is set without a notify() yet. One might want to 
> tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20697) Can't cache All region locations of the specify table by calling table.getRegionLocator().getAllRegionLocations()

2018-07-17 Thread huaxiang sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547060#comment-16547060
 ] 

huaxiang sun commented on HBASE-20697:
--

HBASE-20697 is a fix for HBASE-15674, which was committed to 1.3.0, 1.0.4, 
1.1.5, 1.2.2, and 2.0.0. We need to commit this to 1.2 and 1.3 as well.

> Can't cache All region locations of the specify table by calling 
> table.getRegionLocator().getAllRegionLocations()
> -
>
> Key: HBASE-20697
> URL: https://issues.apache.org/jira/browse/HBASE-20697
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 2.0.1
>Reporter: zhaoyuan
>Assignee: zhaoyuan
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.6, 2.0.2, 2.2.0, 2.1.1
>
> Attachments: HBASE-20697.branch-1.2.001.patch, 
> HBASE-20697.branch-1.2.002.patch, HBASE-20697.branch-1.2.003.patch, 
> HBASE-20697.branch-1.2.004.patch, HBASE-20697.branch-1.addendum.patch, 
> HBASE-20697.master.001.patch, HBASE-20697.master.002.patch, 
> HBASE-20697.master.002.patch, HBASE-20697.master.003.patch
>
>
> When we upgrade and restart a new version of an application that reads and 
> writes to HBase, we get some operation timeouts. The timeouts are expected 
> because when the application restarts it will not hold any region location 
> cache and has to communicate with zk and the meta regionserver to get region 
> locations.
> We want to avoid these timeouts, so we do warmup work; as far as I am 
> concerned, the method table.getRegionLocator().getAllRegionLocations() will 
> fetch all region locations and cache them. However, it didn't work well. 
> There were still a lot of timeouts, which confused me. 
> I dug into the source code and found the following:
> {code:java}
> // code placeholder
> public List<HRegionLocation> getAllRegionLocations() throws IOException {
>   TableName tableName = getName();
>   NavigableMap<HRegionInfo, ServerName> locations =
>       MetaScanner.allTableRegions(this.connection, tableName);
>   ArrayList<HRegionLocation> regions = new ArrayList<>(locations.size());
>   for (Entry<HRegionInfo, ServerName> entry : locations.entrySet()) {
>     regions.add(new HRegionLocation(entry.getKey(), entry.getValue()));
>   }
>   if (regions.size() > 0) {
>     connection.cacheLocation(tableName, new RegionLocations(regions));
>   }
>   return regions;
> }
>
> // In MetaCache
> public void cacheLocation(final TableName tableName, final RegionLocations locations) {
>   byte [] startKey = locations.getRegionLocation().getRegionInfo().getStartKey();
>   ConcurrentMap<byte[], RegionLocations> tableLocations = getTableLocations(tableName);
>   RegionLocations oldLocation = tableLocations.putIfAbsent(startKey, locations);
>   boolean isNewCacheEntry = (oldLocation == null);
>   if (isNewCacheEntry) {
>     if (LOG.isTraceEnabled()) {
>       LOG.trace("Cached location: " + locations);
>     }
>     addToCachedServers(locations);
>     return;
>   }
> {code}
> It will collect all regions into one RegionLocations object and only cache 
> the first non-null region location; then when we put or get to hbase, we 
> call getCachedLocation():
> {code:java}
> // code placeholder
> public RegionLocations getCachedLocation(final TableName tableName, final byte [] row) {
>   ConcurrentNavigableMap<byte[], RegionLocations> tableLocations =
>       getTableLocations(tableName);
>   Entry<byte[], RegionLocations> e = tableLocations.floorEntry(row);
>   if (e == null) {
> if (metrics!= null) metrics.incrMetaCacheMiss();
> return null;
>   }
>   RegionLocations possibleRegion = e.getValue();
>   // make sure that the end key is greater than the row we're looking
>   // for, otherwise the row actually belongs in the next region, not
>   // this one. the exception case is when the endkey is
>   // HConstants.EMPTY_END_ROW, signifying that the region we're
>   // checking is actually the last region in the table.
>   byte[] endKey = 
> possibleRegion.getRegionLocation().getRegionInfo().getEndKey();
>   if (Bytes.equals(endKey, HConstants.EMPTY_END_ROW) ||
>   getRowComparator(tableName).compareRows(
>   endKey, 0, endKey.length, row, 0, row.length) > 0) {
> if (metrics != null) metrics.incrMetaCacheHit();
> return possibleRegion;
>   }
>   // Passed all the way through, so we got nothing - complete cache miss
>   if (metrics != null) metrics.incrMetaCacheMiss();
>   return null;
> }
> {code}
> It will choose the first location as possibleRegion, and possibly it will 
> mismatch.
> So did I forget something, or am I wrong somewhere? If this is indeed a bug, 
> I think it can be fixed without much difficulty.
> Hope committers and the PMC review this!
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20448) update ref guide to expressly use shaded clients for examples

2018-07-17 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547116#comment-16547116
 ] 

Mike Drob commented on HBASE-20448:
---

Try it with bin/hbase mapredcp, [~stack]?

> update ref guide to expressly use shaded clients for examples
> -
>
> Key: HBASE-20448
> URL: https://issues.apache.org/jira/browse/HBASE-20448
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, documentation, mapreduce
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.1.1
>
>
> the whole mapreduce section, especially, should be using the shaded version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20448) update ref guide to expressly use shaded clients for examples

2018-07-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547157#comment-16547157
 ] 

stack commented on HBASE-20448:
---

Thanks [~mdrob]. Tried it. Didn't work. ClassNotFoundExceptions for basic types.

This *hack* works:

{code}
export HADOOP_CLASSPATH="${HOME}/conf_hadoop:${HOME}/conf_hbase:`ls 
~/hbase/lib/{.,client-facing-thirdparty}/*.jar|tr [:space:] ':'`"
${HOME}/hadoop/bin/hadoop --config ${HOME}/conf_hadoop 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList -libjars 
/home/stack/hbase/lib/client-facing-thirdparty/htrace-core-3.1.0-incubating.jar 
--monkey serverKilling   -monkeyProps monkey.properties Generator 40 2500 
g.$date
{code}

Note the -libjars to add in htrace-core 3.1.0.

This is hbase-2.1.0RC1 over hadoop-2.8.4.

> update ref guide to expressly use shaded clients for examples
> -
>
> Key: HBASE-20448
> URL: https://issues.apache.org/jira/browse/HBASE-20448
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, documentation, mapreduce
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.1.1
>
>
> the whole mapreduce section, especially, should be using the shaded version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20883) HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec

2018-07-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547167#comment-16547167
 ] 

Andrew Purtell commented on HBASE-20883:


bq. Would it hurt to also expose that information in the UI as well given that 
Total Requests Per Sec are already there

I'm not telling anyone not to do it, just that it won't be useful for moderate 
to large clusters. (shrug)


> HMaster Read / Write Requests Per Sec across RegionServers, currently only 
> Total Requests Per Sec 
> --
>
> Key: HBASE-20883
> URL: https://issues.apache.org/jira/browse/HBASE-20883
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, master, metrics, monitoring, UI, Usability
>Affects Versions: 1.1.2
>Reporter: Hari Sekhon
>Priority: Major
>
> HMaster currently shows Requests Per Second per RegionServer under the HMaster 
> UI's /master-status page -> Region Servers -> Base Stats section.
> Please add Reads Per Second and Writes Per Second per RegionServer alongside 
> this in the HMaster UI, and also expose the Read/Write/Total requests per sec 
> information in the HMaster JMX API.
> This will make it easier to find read or write hotspotting on HBase, since a 
> combined total minimizes and masks differences between RegionServers. For 
> example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, 
> so write skew will be masked as it won't show a significant enough difference 
> in the much larger combined Total Requests Per Second stat.
> For now I've written a Python tool to calculate this info from RegionServers' 
> JMX read/write/total request counts, but since HMaster is collecting this info 
> anyway it shouldn't be a big change to improve it to also show Reads / Writes 
> Per Sec as well as Total.
> Find my tools for more granular Read/Write Requests Per Sec Per Regionserver 
> and also Per Region at my [PyTools github 
> repo|https://github.com/harisekhon/pytools] along with a selection of other 
> HBase tools I've used for performance debugging over the years.
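
For reference, the arithmetic behind such per-second figures is just a delta over the sampling interval; a toy sketch, with the counter names in the comment being assumptions about what the RegionServer JMX beans expose:

{code:java}
class RequestRates {
  // Turn two samples of a cumulative counter (e.g. a RegionServer's read or
  // write request count) into a per-second rate over the sampling interval.
  static long perSecond(long previousCount, long currentCount, long intervalSeconds) {
    return (currentCount - previousCount) / Math.max(1, intervalSeconds);
  }
}
{code}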



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19893) restore_snapshot is broken in master branch when region splits

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547177#comment-16547177
 ] 

Hadoop QA commented on HBASE-19893:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
28s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
23s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 21s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}166m 25s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}211m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-19893 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931948/HBASE-19893.master.005.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 4697ef39259b 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2997b6d071 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13667/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13667/testReport/ |
| Max. process+thread count | 4644 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13667/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HBASE-20853) Polish "Add defaults to Table Interface so Implementors don't have to"

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547037#comment-16547037
 ] 

Hadoop QA commented on HBASE-20853:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
58s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
43s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
10s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931943/HBASE-20853.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux ab43037d60e2 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2997b6d071 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13665/testReport/ |
| Max. process+thread count | 273 (vs. ulimit of 1) |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13665/console |
| Powered by | 

[jira] [Commented] (HBASE-20855) PeerConfigTracker only support one listener will cause problem when there is a recovered replication queue

2018-07-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547072#comment-16547072
 ] 

Hadoop QA commented on HBASE-20855:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
59s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
28s{color} | {color:red} hbase-client: The patch generated 1 new + 12 unchanged 
- 0 fixed = 13 total (was 12) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
38s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
20s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 25s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Updated] (HBASE-20906) Hbase shell using Cygwin gives the error Could not find or load main class org.jruby.Main

2018-07-17 Thread Glenn Kruszewski (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glenn Kruszewski updated HBASE-20906:
-
Description: 
Executing the hbase shell on Windows using Cygwin does not properly resolve the 
Java classpath. This is caused by cygpath calls converting to Windows paths 
before the jar files are located. Line 241 in the hbase script has cygpath 
convert CLASSPATH, HBASE_HOME, and HBASE_LOG_DIR; later in the file a for loop 
is used to locate the jar files, but since it uses the Windows path none of the 
jar files are found, giving the error 'Could not find or load main class 
org.jruby.Main.'

As a workaround, I've updated hbase-env.sh
{code:java}
export HBASE_CLASSPATH=/usr/local/hbase/lib/ruby/*:/usr/local/hbase/lib/*{code}
 

  was:
Executing hbase shell in Windows using Cygwin does not properly resolve the 
java classpath. This is caused by having cygpath calls converting to windows 
paths prior to locating the jar files. Line 241 in the hbase script has cygpath 
converting the CLASSPATH, HBASE_HOME, and HBASE_LOG_DIR, later in the file a 
for loop is used to locate jar files, however this is using the windows path so 
none for the jar files will be found, giving the error 'Could not find or load 
main class org.jruby.Main.'

As a work around, I've updated the hbase-env.sh to export 
HBASE_CLASSPATH=/usr/local/hbase/lib/ruby/*:/usr/local/hbase/lib/***


> Hbase shell using Cygwin gives the error Could not find or load main class 
> org.jruby.Main
> -
>
> Key: HBASE-20906
> URL: https://issues.apache.org/jira/browse/HBASE-20906
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.1
> Environment: Windows 10
> Hbase 2.0.1
> Hadoop 2.7.6
> Cygwin 2.9.0
>Reporter: Glenn Kruszewski
>Priority: Minor
>
> Executing the hbase shell on Windows using Cygwin does not properly resolve the 
> Java classpath. This is caused by cygpath calls converting to Windows paths 
> before the jar files are located. Line 241 in the hbase script has cygpath 
> convert CLASSPATH, HBASE_HOME, and HBASE_LOG_DIR; later in the file a for loop 
> is used to locate the jar files, but since it uses the Windows path none of the 
> jar files are found, giving the error 'Could not find or load main class 
> org.jruby.Main.'
> As a workaround, I've updated hbase-env.sh
> {code:java}
> export 
> HBASE_CLASSPATH=/usr/local/hbase/lib/ruby/*:/usr/local/hbase/lib/*{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20615) emphasize use of shaded client jars when they're present in an install

2018-07-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20615:
--
Hadoop Flags: Incompatible change,Reviewed

Marked as incompatible change because {code}bin/hbase classpath{code} output 
has completely changed.

> emphasize use of shaded client jars when they're present in an install
> --
>
> Key: HBASE-20615
> URL: https://issues.apache.org/jira/browse/HBASE-20615
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Client, Usability
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20615.0.patch, HBASE-20615.1.patch, 
> HBASE-20615.2.patch
>
>
> Working through setting up an IT for our shaded artifacts in HBASE-20334 
> makes our lack of packaging seem like an oversight. While I could work around 
> by pulling the shaded clients out of whatever build process built the 
> convenience binary that we're trying to test, it seems v awkward.
> After reflecting on it more, it makes more sense to me for there to be a 
> common place in the install that folks running jobs against the cluster can 
> rely on. If they need to run without a full hbase install, that should still 
> work fine via e.g. grabbing from the maven repo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-20401:
-
Attachment: HBASE-20401.master.001.patch

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.master.001.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set without notify() having been called yet; 
> one might want to tune the 500 milliseconds down to 200 or less.
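
For context, a hedged sketch of the wait pattern being tuned here; the field 
names (deletionDone, deletionSucceeded) and the parameterization are 
illustrative stand-ins, not the actual CleanerContext members:
{code:java}
// Sketch only: poll in small intervals up to a configurable deadline instead
// of the hard-coded 60 s MAX_WAIT and 500 ms waitIfNotFinished values.
synchronized boolean getResult(long waitIfNotFinishedMs, long maxWaitMs) {
  long remainingMs = maxWaitMs;
  while (!deletionDone && remainingMs > 0) {
    long waitMs = Math.min(waitIfNotFinishedMs, remainingMs);
    try {
      wait(waitMs); // the fs.delete thread calls notify() when it finishes
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
      break;
    }
    remainingMs -= waitMs;
  }
  return deletionDone && deletionSucceeded;
}
{code}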



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-20401:
-
Status: Patch Available  (was: Open)

I will add branch-1 and branch-2 patches later today

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0, 1.4.4, 2.0.0-beta-1, 3.0.0, 1.5.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.master.001.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set without notify() having been called yet; 
> one might want to tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-20401:
-
Attachment: HBASE-20401.master.001.patch

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch, 
> HBASE-20401.master.001.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set without notify() having been called yet; 
> one might want to tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547227#comment-16547227
 ] 

Ted Yu commented on HBASE-20401:


{code}
static final long DEFAULT_CLEANER_THREAD_MAX_WAIT_MSEC = 60 * 1000;
{code}
Better add OLD_WALS_ to the identifier.
Same with the next DEFAULT constant.
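
Something along these lines, purely as an illustration of the naming; the 
actual constant and configuration key names in the patch may differ:
{code:java}
// Illustrative only: prefix the identifiers so it is obvious they govern the
// old-WALs cleaner, and expose them through configuration keys.
static final String OLD_WALS_CLEANER_THREAD_TIMEOUT_MSEC =
    "hbase.oldwals.cleaner.thread.timeout.msec";
static final long DEFAULT_OLD_WALS_CLEANER_THREAD_TIMEOUT_MSEC = 60 * 1000L;

static final String OLD_WALS_CLEANER_THREAD_CHECK_INTERVAL_MSEC =
    "hbase.oldwals.cleaner.thread.check.interval.msec";
static final long DEFAULT_OLD_WALS_CLEANER_THREAD_CHECK_INTERVAL_MSEC = 500L;
{code}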

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch, 
> HBASE-20401.master.001.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set without notify() having been called yet; 
> one might want to tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-20401:
-
Attachment: HBASE-20401.master.002.patch

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch, 
> HBASE-20401.master.001.patch, HBASE-20401.master.002.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set without notify() having been called yet; 
> one might want to tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20853) Polish "Add defaults to Table Interface so Implementors don't have to"

2018-07-17 Thread Chia-Ping Tsai (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547279#comment-16547279
 ] 

Chia-Ping Tsai commented on HBASE-20853:


{code:java}
default void delete(Delete delete) throws IOException {
- throw new NotImplementedException("Add an implementation!");
+ delete(Collections.singletonList(delete));
}{code}
 

The input of delete(List) must be a modifiable list, since 
delete(List) will remove the succeeded ops from the input list. IIRC, 
singletonList returns an immutable list.
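
One way to keep the default while respecting that constraint would be to wrap 
the single Delete in a modifiable list before delegating; a sketch only, not 
the committed fix:
{code:java}
default void delete(Delete delete) throws IOException {
  // delete(List<Delete>) removes succeeded ops from its argument, so hand it
  // a modifiable copy rather than the immutable singletonList directly.
  delete(new ArrayList<>(Collections.singletonList(delete)));
}
{code}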

> Polish "Add defaults to Table Interface so Implementors don't have to"
> --
>
> Key: HBASE-20853
> URL: https://issues.apache.org/jira/browse/HBASE-20853
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Major
>  Labels: beginner, beginners
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20853.master.001.patch, 
> HBASE-20853.master.002.patch, HBASE-20853.master.003.patch, 
> HBASE-20853.master.004.patch
>
>
> This issue is to address feedback that came in after commit on the parent 
> (FYI [~chia7712]). See tail of parent issue and amendment attached to parent 
> adding better defaults to the Table Interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-20401:
-
Attachment: HBASE-20401.branch-2.001.patch

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch, 
> HBASE-20401.branch-1.002.patch, HBASE-20401.branch-2.001.patch, 
> HBASE-20401.master.001.patch, HBASE-20401.master.002.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set without notify() having been called yet; 
> one might want to tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-20401:
-
Attachment: HBASE-20401.branch-1.002.patch

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch, 
> HBASE-20401.branch-1.002.patch, HBASE-20401.branch-2.001.patch, 
> HBASE-20401.master.001.patch, HBASE-20401.master.002.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set without notify() having been called yet; 
> one might want to tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547239#comment-16547239
 ] 

Tak Lon (Stephen) Wu commented on HBASE-20401:
--

[~yuzhih...@gmail.com] thanks for reviewing this. I have attached patches for the 
master branch and the other branches, adding the OLD_WALS_ prefix to those constants.

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch, 
> HBASE-20401.branch-1.002.patch, HBASE-20401.branch-2.001.patch, 
> HBASE-20401.master.001.patch, HBASE-20401.master.002.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set but notify() has not yet been called; 
> one might want to tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18477) Umbrella JIRA for HBase Read Replica clusters

2018-07-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547254#comment-16547254
 ] 

Hudson commented on HBASE-18477:


Results for branch HBASE-18477
[build #267 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/267/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/267//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/267//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/267//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/267//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Umbrella JIRA for HBase Read Replica clusters
> -
>
> Key: HBASE-18477
> URL: https://issues.apache.org/jira/browse/HBASE-18477
> Project: HBase
>  Issue Type: New Feature
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Attachments: HBase Read-Replica Clusters Scope doc.docx, HBase 
> Read-Replica Clusters Scope doc.pdf, HBase Read-Replica Clusters Scope 
> doc_v2.docx, HBase Read-Replica Clusters Scope doc_v2.pdf
>
>
> Recently, changes (such as HBASE-17437) have made it possible for HBase to run 
> with a root directory external to the cluster (such as in Amazon S3). This means 
> that the data is stored outside of the cluster and can be accessible after 
> the cluster has been terminated. One use case that is often asked about is 
> pointing multiple clusters to one root directory (sharing the data) to have 
> read resiliency in the case of a cluster failure.
>  
> This JIRA is an umbrella JIRA to contain all the tasks necessary to create a 
> read-replica HBase cluster that is pointed at the same root directory.
>  
> This requires making the Read-Replica cluster Read-Only (no metadata 
> operations or data operations).
> Separating the hbase:meta table for each cluster (otherwise HBase gets 
> confused with multiple clusters trying to update the meta table with their IP 
> addresses)
> Adding refresh functionality for the meta table to ensure new metadata is 
> picked up on the read replica cluster.
> Adding refresh functionality for HFiles for a given table to ensure new data 
> is picked up on the read replica cluster.
>  
> This can be used with any existing cluster that is backed by an external 
> filesystem.
>  
> Please note that this feature is still quite manual (with the potential for 
> automation later).
>  
> More information on this particular feature can be found here: 
> https://aws.amazon.com/blogs/big-data/setting-up-read-replica-clusters-with-hbase-on-amazon-s3/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547309#comment-16547309
 ] 

Hudson commented on HBASE-20905:


Results for branch branch-1.3
[build #395 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/395/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/395//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/395//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/395//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.6
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...
> 19:08:29 Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pylint
> 19:08:29 Storing debug log for failure in /root/.pip/pip.log
> 19:08:29 The command '/bin/sh -c pip install pylint' returned a non-zero code: 1
> 19:08:29 Total Elapsed time: 0m 3s
> 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547347#comment-16547347
 ] 

Reid Chan commented on HBASE-20401:
---

There's also a MAX_WAIT in HFileCleaner; can you address it together in this work?

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch, 
> HBASE-20401.branch-1.002.patch, HBASE-20401.branch-2.001.patch, 
> HBASE-20401.master.001.patch, HBASE-20401.master.002.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set without notify() having been called yet; 
> one might want to tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547349#comment-16547349
 ] 

Reid Chan commented on HBASE-20401:
---

BTW, please provide the patch for the master branch first, and then, after a +1, 
provide patches for the other branches.

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch, 
> HBASE-20401.branch-1.002.patch, HBASE-20401.branch-2.001.patch, 
> HBASE-20401.master.001.patch, HBASE-20401.master.002.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set without notify() having been called yet; 
> one might want to tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20448) update ref guide to expressly use shaded clients for examples

2018-07-17 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547273#comment-16547273
 ] 

Mike Drob commented on HBASE-20448:
---

Can use --internal-classpath to get the old behavior, though we don't want to 
document that as the official solution. I'll spend some time poking at this 
later.

> update ref guide to expressly use shaded clients for examples
> -
>
> Key: HBASE-20448
> URL: https://issues.apache.org/jira/browse/HBASE-20448
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, documentation, mapreduce
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.1.1
>
>
> the whole mapreduce section, especially, should be using the shaded version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20846) Restore procedure locks when master restarts

2018-07-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547284#comment-16547284
 ] 

Duo Zhang commented on HBASE-20846:
---

Tried the failed UT several times locally; all passed. The error on jenkins is 
that the disable table procedure is executed before we finish the merge table 
procedure, and we get a region in MERGED state. But this never happened for me 
locally, since the merge table procedure holds the table's shared lock while 
the disable table procedure needs the exclusive lock. I also added logging 
around acquire/release of the locks locally and it looks fine: the disable 
table procedure is scheduled before we finish the merge table procedure, but it 
can only be executed after the merge table procedure releases the shared lock 
on the table.

Let's try again.

> Restore procedure locks when master restarts
> 
>
> Key: HBASE-20846
> URL: https://issues.apache.org/jira/browse/HBASE-20846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Allan Yang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20846-v1.patch, HBASE-20846-v2.patch, 
> HBASE-20846-v3.patch, HBASE-20846-v4.patch, HBASE-20846.branch-2.0.002.patch, 
> HBASE-20846.branch-2.0.patch, HBASE-20846.patch
>
>
> Found this one when investigating a ModifyTableProcedure that got stuck while 
> there was a MoveRegionProcedure going on after a master restart.
> Though this issue can be solved by HBASE-20752, I discovered something 
> else.
> Before a MoveRegionProcedure can execute, it will hold the table's shared 
> lock. So, when an UnassignProcedure is spawned, it will not check the 
> table's shared lock since it is sure that its parent (MoveRegionProcedure) has 
> acquired the table's lock.
> {code:java}
> // If there is a parent procedure, it would have already taken the xlock,
> // so no need to take the shared lock here. Otherwise, take the shared lock.
>   if (!procedure.hasParent()
>   && waitTableQueueSharedLock(procedure, table) == null) {
>   return true;
>   }
> {code}
> But that is not the case when the Master is restarted. The child 
> procedure (UnassignProcedure) will be executed first after the restart. Though 
> it has a parent (MoveRegionProcedure), the parent apparently did not hold the 
> table's lock.
> So, since it began to execute without holding the table's shared lock, a 
> ModifyTableProcedure could acquire the table's exclusive lock and execute at 
> the same time, which is not possible if the master was not restarted.
> This would cause a hang before HBASE-20752. But since HBASE-20752 has been 
> fixed, I wrote a simple UT to reproduce this case.
> I think we don't have to check the parent for the table's shared lock. It is a 
> shared lock, right? I think we can acquire it every time we need it.
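
A minimal sketch of the direction suggested in the last paragraph, reusing the 
waitTableQueueSharedLock helper shown above; the surrounding scheduler code is 
omitted and this is not the attached patch:
{code:java}
// Always take the table's shared lock, parent or not. A shared lock can be
// held by multiple holders at once, so re-acquiring it for the child
// procedure is safe and also covers the restarted-master case above.
if (waitTableQueueSharedLock(procedure, table) == null) {
  return true; // suspend until the shared lock becomes available
}
{code}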



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20846) Restore procedure locks when master restarts

2018-07-17 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20846:
--
Attachment: HBASE-20846-v4.patch

> Restore procedure locks when master restarts
> 
>
> Key: HBASE-20846
> URL: https://issues.apache.org/jira/browse/HBASE-20846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Allan Yang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20846-v1.patch, HBASE-20846-v2.patch, 
> HBASE-20846-v3.patch, HBASE-20846-v4.patch, HBASE-20846.branch-2.0.002.patch, 
> HBASE-20846.branch-2.0.patch, HBASE-20846.patch
>
>
> Found this one when investigating a ModifyTableProcedure that got stuck while 
> there was a MoveRegionProcedure going on after a master restart.
> Though this issue can be solved by HBASE-20752, I discovered something 
> else.
> Before a MoveRegionProcedure can execute, it will hold the table's shared 
> lock. So, when an UnassignProcedure is spawned, it will not check the 
> table's shared lock since it is sure that its parent (MoveRegionProcedure) has 
> acquired the table's lock.
> {code:java}
> // If there is a parent procedure, it would have already taken the xlock,
> // so no need to take the shared lock here. Otherwise, take the shared lock.
>   if (!procedure.hasParent()
>   && waitTableQueueSharedLock(procedure, table) == null) {
>   return true;
>   }
> {code}
> But that is not the case when the Master is restarted. The child 
> procedure (UnassignProcedure) will be executed first after the restart. Though 
> it has a parent (MoveRegionProcedure), the parent apparently did not hold the 
> table's lock.
> So, since it began to execute without holding the table's shared lock, a 
> ModifyTableProcedure could acquire the table's exclusive lock and execute at 
> the same time, which is not possible if the master was not restarted.
> This would cause a hang before HBASE-20752. But since HBASE-20752 has been 
> fixed, I wrote a simple UT to reproduce this case.
> I think we don't have to check the parent for the table's shared lock. It is a 
> shared lock, right? I think we can acquire it every time we need it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20905) branch-1 docker build fails

2018-07-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547305#comment-16547305
 ] 

Hudson commented on HBASE-20905:


Results for branch branch-1.2
[build #399 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/399/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/399//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/399//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/399//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> branch-1 docker build fails
> ---
>
> Key: HBASE-20905
> URL: https://issues.apache.org/jira/browse/HBASE-20905
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 1.5.0
>Reporter: Jingyun Tian
>Assignee: Mike Drob
>Priority: Major
> Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.6
>
> Attachments: HBASE-20905.branch-1.001.patch
>
>
> Docker build for precommit fails:
> {quote}
> 19:08:29 Cleaning up...
> 19:08:29 Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pylint
> 19:08:29 Storing debug log for failure in /root/.pip/pip.log
> 19:08:29 The command '/bin/sh -c pip install pylint' returned a non-zero code: 1
> 19:08:29 Total Elapsed time: 0m 3s
> 19:08:29 ERROR: Docker failed to build image.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20876) Improve docs style in HConstants

2018-07-17 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-20876:
--
   Resolution: Resolved
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Improve docs style in HConstants
> 
>
> Key: HBASE-20876
> URL: https://issues.apache.org/jira/browse/HBASE-20876
> Project: HBase
>  Issue Type: Improvement
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: beginner, beginners, newbie
> Fix For: 3.0.0
>
> Attachments: HBASE-20876.master.001.patch
>
>
> In {{HConstants}}, there's a docs snippet:
> {code}
>  /** Don't use it! This'll get you the wrong path in a secure cluster.
>   * Use FileSystem.getHomeDirectory() or
>   * "/user/" + UserGroupInformation.getCurrentUser().getShortUserName()  */
> {code}
> The style is ugly.
> Let's improve these docs with the following:
> {code}
> /**
>  * Description
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable

2018-07-17 Thread Tak Lon (Stephen) Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547239#comment-16547239
 ] 

Tak Lon (Stephen) Wu edited comment on HBASE-20401 at 7/18/18 12:10 AM:


[~yuzhih...@gmail.com] thanks for reviewing this. I have attached patches for the 
master branch and the other branches, adding the {{OLD_WALS_}} prefix to those 
constants. NOTE that branch-1 has a minor difference in the {{LOG.warn}} line. 


was (Author: taklwu):
[~yuzhih...@gmail.com] thanks for reviewing this, I have attached master branch 
and other branches with adding OLD_WALS_** those constants

> Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
> --
>
> Key: HBASE-20401
> URL: https://issues.apache.org/jira/browse/HBASE-20401
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20401.branch-1.001.patch, 
> HBASE-20401.branch-1.002.patch, HBASE-20401.branch-2.001.patch, 
> HBASE-20401.master.001.patch, HBASE-20401.master.002.patch
>
>
> When backporting HBASE-18309 in HBASE-20352, deleteFiles calls 
> CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for 
> notification (notify) from the fs.delete file thread. There are two 
> situations where one may need to tune MAX_WAIT in CleanerContext or 
> waitIfNotFinished when LogCleaner calls getResult:
>  # fs.delete never completes (strange but possible); then we wait for a 
> maximum of 60 seconds, and 60 seconds might be too long.
>  # getResult waits in 500 millisecond intervals, but fs.delete has already 
> completed and setFromClear is set without notify() having been called yet; 
> one might want to tune the 500 milliseconds down to 200 or less.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20855) PeerConfigTracker only support one listener will cause problem when there is a recovered replication queue

2018-07-17 Thread Jingyun Tian (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547289#comment-16547289
 ] 

Jingyun Tian commented on HBASE-20855:
--

[~mdrob] Thx for your help.

> PeerConfigTracker only support one listener will cause problem when there is 
> a recovered replication queue
> --
>
> Key: HBASE-20855
> URL: https://issues.apache.org/jira/browse/HBASE-20855
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-20855.branch-1.001.patch, 
> HBASE-20855.branch-1.002.patch, HBASE-20855.branch-1.003.patch, 
> HBASE-20855.branch-1.004.patch, HBASE-20855.branch-1.005.patch
>
>
> {code}
> public void init(Context context) throws IOException {
>  this.ctx = context;
>  if (this.ctx != null){
>  ReplicationPeer peer = this.ctx.getReplicationPeer();
>  if (peer != null){
>  peer.trackPeerConfigChanges(this);
>  } else {
>  LOG.warn("Not tracking replication peer config changes for Peer Id " + 
> this.ctx.getPeerId() +
>  " because there's no such peer");
>  }
>  }
> }
> {code}
> As we know, a replication source will register itself with the PeerConfigTracker 
> in ReplicationPeer. When there are one or more recovered queues, each queue will 
> generate a new replication source, but they all share the same ReplicationPeer. 
> Then, when each one calls setListener, the newly generated source overwrites the 
> older one. Thus only one listener will receive the peer config change 
> notification.
> {code}
> public synchronized void setListener(ReplicationPeerConfigListener listener){
>  this.listener = listener;
> }
> {code}
>  
> To solve this, PeerConfigTracker needs to support multiple listeners, and a 
> listener should be removed when its replication endpoint terminates.
> I will upload a patch later with the fix and a UT.
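
A hypothetical sketch of that direction; the class and listener names are 
stand-ins for the real ReplicationPeerConfigListener plumbing and this is not 
the attached patch:
{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class PeerConfigTrackerSketch {
  /** Stand-in for ReplicationPeerConfigListener. */
  interface ConfigListener {
    void peerConfigUpdated(Object newPeerConfig);
  }

  private final Set<ConfigListener> listeners = ConcurrentHashMap.newKeySet();

  void addListener(ConfigListener listener) {
    listeners.add(listener);
  }

  /** Called when a replication endpoint terminates. */
  void removeListener(ConfigListener listener) {
    listeners.remove(listener);
  }

  void peerConfigChanged(Object newPeerConfig) {
    // Notify every registered source, not just the last one to register.
    for (ConfigListener l : listeners) {
      l.peerConfigUpdated(newPeerConfig);
    }
  }
}
{code}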



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20907) Fix Intermittent failure on TestProcedurePriority

2018-07-17 Thread Yu Li (JIRA)
Yu Li created HBASE-20907:
-

 Summary: Fix Intermittent failure on TestProcedurePriority
 Key: HBASE-20907
 URL: https://issues.apache.org/jira/browse/HBASE-20907
 Project: HBase
  Issue Type: Test
Reporter: Yu Li
Assignee: Yu Li


From a local UT check against 2.1.0-RC1, HMaster failed to initialize before 
timing out. Checking the test log we can see the message below:
{noformat}
2018-07-17 20:06:37,142 DEBUG [Thread-4003] client.RpcRetryingCallerImpl(131): 
Call exception, tries=6, retries=6, started=4173 ms ago, cancelled=false, 
msg=java.io.IOException: Inject error
at 
org.apache.hadoop.hbase.master.procedure.TestProcedurePriority$MyCP.preGetOp(TestProcedurePriority.java:92)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$19.call(RegionCoprocessorHost.java:841)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$19.call(RegionCoprocessorHost.java:838)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preGet(RegionCoprocessorHost.java:838)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2520)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2460)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41998)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
, details=row 'hbase:namespace' on table 'hbase:meta' at 
region=hbase:meta,,1.1588230740, 
hostname=hdpdevm1.et2sqa.tbsite.net,59254,1531829189215, seqNum=-1, 
exception=java.io.IOException: java.io.IOException: Inject error
at 
org.apache.hadoop.hbase.master.procedure.TestProcedurePriority$MyCP.preGetOp(TestProcedurePriority.java:92)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$19.call(RegionCoprocessorHost.java:841)
...
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)
at 
org.apache.hadoop.hbase.MetaTableAccessor.getTableState(MetaTableAccessor.java:1078)
at 
org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:403)
at 
org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:94)
{noformat}

In the current test code we set {{FAIL}} to true without checking whether the 
namespace manager is already up, and if unlucky we will run into the above 
case and get a timeout.

The fix should be quite straightforward.
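
A rough sketch of that fix, assuming the usual HBaseTestingUtility instance 
named UTIL; the timeout value is arbitrary and this is not the committed patch:
{code:java}
// Wait until the master (and hence the namespace manager bootstrap) is fully
// initialized before starting to inject coprocessor failures.
UTIL.waitFor(30000,
    () -> UTIL.getMiniHBaseCluster().getMaster() != null
        && UTIL.getMiniHBaseCluster().getMaster().isInitialized());
FAIL = true; // only now start failing preGetOp calls
{code}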



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20907) Fix Intermittent failure on TestProcedurePriority

2018-07-17 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547336#comment-16547336
 ] 

Yu Li commented on HBASE-20907:
---

More information from UT output:
{noformat}
---
Test set: org.apache.hadoop.hbase.master.procedure.TestProcedurePriority
---
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 275.09 s <<< 
FAILURE! - in org.apache.hadoop.hbase.master.procedure.TestProcedurePriority
org.apache.hadoop.hbase.master.procedure.TestProcedurePriority  Time elapsed: 
275.09 s  <<< ERROR!
java.io.IOException: Shutting down
at 
org.apache.hadoop.hbase.master.procedure.TestProcedurePriority.setUp(TestProcedurePriority.java:110)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.hadoop.hbase.master.procedure.TestProcedurePriority.setUp(TestProcedurePriority.java:110)

Process Thread Dump: Thread dump because: Master not initialized after 20ms 
seconds
Thread 5882 (Thread-4003):
  State: TIMED_WAITING
  Blocked count: 159
  Waited count: 270
  Stack:
java.lang.Object.wait(Native Method)

org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:167)
org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)

org.apache.hadoop.hbase.MetaTableAccessor.getTableState(MetaTableAccessor.java:1078)

org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:403)

org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:94)

org.apache.hadoop.hbase.master.ClusterSchemaServiceImpl.doStart(ClusterSchemaServiceImpl.java:63)

org.apache.hbase.thirdparty.com.google.common.util.concurrent.AbstractService.startAsync(AbstractService.java:226)

org.apache.hadoop.hbase.master.HMaster.initClusterSchemaService(HMaster.java:1136)

org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:984)

org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2110)
org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:567)
org.apache.hadoop.hbase.master.HMaster$$Lambda$35/397595326.run(Unknown 
Source)
java.lang.Thread.run(Thread.java:745)
{noformat}

> Fix Intermittent failure on TestProcedurePriority
> -
>
> Key: HBASE-20907
> URL: https://issues.apache.org/jira/browse/HBASE-20907
> Project: HBase
>  Issue Type: Test
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>
> From a local UT check against 2.1.0-RC1, HMaster failed to initialize before 
> timing out. Checking the test log we can see the message below:
> {noformat}
> 2018-07-17 20:06:37,142 DEBUG [Thread-4003] 
> client.RpcRetryingCallerImpl(131): Call exception, tries=6, retries=6, 
> started=4173 ms ago, cancelled=false, msg=java.io.IOException: Inject error
> at 
> org.apache.hadoop.hbase.master.procedure.TestProcedurePriority$MyCP.preGetOp(TestProcedurePriority.java:92)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$19.call(RegionCoprocessorHost.java:841)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$19.call(RegionCoprocessorHost.java:838)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preGet(RegionCoprocessorHost.java:838)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2520)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2460)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41998)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> , details=row 'hbase:namespace' on table 'hbase:meta' at 
> region=hbase:meta,,1.1588230740, 
> hostname=hdpdevm1.et2sqa.tbsite.net,59254,1531829189215, seqNum=-1, 
> exception=java.io.IOException: java.io.IOException: Inject error
> at 
> org.apache.hadoop.hbase.master.procedure.TestProcedurePriority$MyCP.preGetOp(TestProcedurePriority.java:92)
>  

[jira] [Commented] (HBASE-20893) Data loss if splitting region while ServerCrashProcedure executing

2018-07-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547388#comment-16547388
 ] 

stack commented on HBASE-20893:
---

What's the data loss scenario here? The children open but somehow they miss 
recovered.edits that may have shown up in the parent because of an inopportune 
crash?

s/hasRecoveredEdit/hasRecoveredEdits/ (add 's')?

nits:

You can drop the WALSplitter prefix from WALSplitter.getSplitEditFilesSorted?

s/files.size() == 0/files.isEmpty()/

Or change...

if (files == null || files.size() == 0) {
  return false;
} else {
  return true;
}

to: return files != null && !files.isEmpty();

And yeah, does the test repro the condition?

Thanks [~allan163]
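
Putting the two nits together, the helper could boil down to something like 
this; a sketch under the assumption that WALSplitter.getSplitEditFilesSorted(fs, 
regionDir) is the lookup being wrapped, with imports omitted:
{code:java}
private static boolean hasRecoveredEdits(FileSystem fs, Path regionDir)
    throws IOException {
  // A non-empty result means there are recovered.edits still to replay.
  NavigableSet<Path> files = WALSplitter.getSplitEditFilesSorted(fs, regionDir);
  return files != null && !files.isEmpty();
}
{code}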

> Data loss if splitting region while ServerCrashProcedure executing
> --
>
> Key: HBASE-20893
> URL: https://issues.apache.org/jira/browse/HBASE-20893
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-20893.branch-2.0.001.patch, 
> HBASE-20893.branch-2.0.002.patch
>
>
> Similar case as HBASE-20878.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20895) NPE in RpcServer#readAndProcess

2018-07-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547362#comment-16547362
 ] 

stack commented on HBASE-20895:
---

Linking to HBASE-14050; similar?

> NPE in RpcServer#readAndProcess
> ---
>
> Key: HBASE-20895
> URL: https://issues.apache.org/jira/browse/HBASE-20895
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 1.3.2
>Reporter: Andrew Purtell
>Assignee: Monani Mihir
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.6
>
>
> {noformat}
> 2018-07-10 16:25:55,005 DEBUG [.sfdc.net,port=60020] ipc.RpcServer - 
> RpcServer.listener,port=60020: Caught exception while reading:
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1761)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:949)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:730)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:706)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This looks like it could be a use after close problem if there is concurrent 
> access to a Connection.
> In process() we might store a null back to the 'data' field.
> Meanwhile in readAndProcess() we have a case where we might be blocked on a 
> channel read and then after coming back from the read we go to use 'data' 
> after a null has been written back, leading to an NPE.
> {quote}count = channelRead(channel, data);
>  1761 ---> if (count >= 0 && *data.remaining()* == 0)
>  \{ process(); }{quote}
> Whether an NPE happens or not is going to depend on the timing of the store 
> back to 'data' in another thread, the use of 'data' in this thread, and whether 
> or not the JVM has optimized away a reload of 'data' (it's not declared 
> volatile).
> We should do a null check here just to be defensive. We should also look at 
> whether concurrent access to the Connection is happening and intended. The 
> above is just a theory. We should also look at other execution sequences that 
> could lead to 'data' being null at this location. At a glance I didn't find 
> one, but the store to 'data' happens behind conditionals, so it is possible. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64

2018-07-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547365#comment-16547365
 ] 

stack commented on HBASE-20884:
---

Out of interest, any differences between our old encoding and this new one?

+1 on change and approach.

> Replace usage of our Base64 implementation with java.util.Base64
> 
>
> Key: HBASE-20884
> URL: https://issues.apache.org/jira/browse/HBASE-20884
> Project: HBase
>  Issue Type: Task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.2.7, 1.3.3, 1.4.6, 2.0.2, 2.1.1
>
> Attachments: HBASE-20884.branch-1.001.patch, 
> HBASE-20884.branch-1.002.patch, HBASE-20884.master.001.patch
>
>
> We have a public domain implementation of Base64 that is copied into our code 
> base and infrequently receives updates. We should replace usage of that with 
> the new Java 8 java.util.Base64 where possible.
> For the migration, I propose a phased approach.
> * Deprecate on 1.x and 2.x to signal to users that this is going away.
> * Replace usages on branch-2 and master with j.u.Base64.
> * Delete our implementation of Base64 on master.
> Does this seem in line with our API compatibility requirements?
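
For reference, a small self-contained example of the JDK replacement being 
proposed; java.util.Base64 is standard since Java 8, and this is an 
illustration rather than a diff of the patch:
{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Example {
  public static void main(String[] args) {
    byte[] raw = "hbase".getBytes(StandardCharsets.UTF_8);
    // Encode/decode round trip equivalent to the common call sites of the
    // copied Base64 class.
    String encoded = Base64.getEncoder().encodeToString(raw);
    byte[] decoded = Base64.getDecoder().decode(encoded);
    System.out.println(encoded + " -> " + new String(decoded, StandardCharsets.UTF_8));
  }
}
{code}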



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20856) PITA having to set WAL provider in two places

2018-07-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547369#comment-16547369
 ] 

stack commented on HBASE-20856:
---

Thank you [~taklwu]. Let me assign this to you in the meantime (unassign if I am 
presuming too much). Thanks.

> PITA having to set WAL provider in two places
> -
>
> Key: HBASE-20856
> URL: https://issues.apache.org/jira/browse/HBASE-20856
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability, wal
>Reporter: stack
>Priority: Minor
> Fix For: 2.0.2, 2.2.0, 2.1.1
>
>
> Courtesy of [~elserj], I learned that to change the WAL we need to set two 
> places... both hbase.wal.meta_provider and hbase.wal.provider. An operator 
> should only have to set it in one place; hbase.wal.meta_provider should pick 
> up the general setting unless it is explicitly set.
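
A minimal sketch of the fallback behaviour being asked for, assuming a Hadoop 
Configuration named conf; the literal default value here is only a placeholder, 
not the project's actual default:
{code:java}
// hbase.wal.meta_provider falls back to whatever hbase.wal.provider says,
// so an operator only has to set the provider once.
String generalProvider = conf.get("hbase.wal.provider", "filesystem"); // placeholder default
String metaProvider = conf.get("hbase.wal.meta_provider", generalProvider);
{code}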



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20895) NPE in RpcServer#readAndProcess

2018-07-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547366#comment-16547366
 ] 

Andrew Purtell commented on HBASE-20895:


Good memory! Yeah, we could try removing the assignment of null back to the 
field like we did on the other one. 
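
For illustration, a defensive variant of the read path discussed above; the 
surrounding method and the exact return value are simplified, and this is not 
the applied fix:
{code:java}
// Take a single snapshot of 'data' so a concurrent null store in process()
// cannot slip in between the channel read and the remaining() check.
ByteBuffer dataBuffer = this.data;
if (dataBuffer == null) {
  return 0; // connection already cleaned up; nothing to read or process
}
int count = channelRead(channel, dataBuffer);
if (count >= 0 && dataBuffer.remaining() == 0) {
  process();
}
{code}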

> NPE in RpcServer#readAndProcess
> ---
>
> Key: HBASE-20895
> URL: https://issues.apache.org/jira/browse/HBASE-20895
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 1.3.2
>Reporter: Andrew Purtell
>Assignee: Monani Mihir
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.6
>
>
> {noformat}
> 2018-07-10 16:25:55,005 DEBUG [.sfdc.net,port=60020] ipc.RpcServer - 
> RpcServer.listener,port=60020: Caught exception while reading:
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1761)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:949)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:730)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:706)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This looks like it could be a use after close problem if there is concurrent 
> access to a Connection.
> In process() we might store a null back to the 'data' field.
> Meanwhile in readAndProcess() we have a case where we might be blocked on a 
> channel read and then after coming back from the read we go to use 'data' 
> after a null has been written back, leading to an NPE.
> {quote}count = channelRead(channel, data);
>  1761 ---> if (count >= 0 && *data.remaining()* == 0)
>  \{ process(); }{quote}
> Whether an NPE happens or not is going to depend on the timing of the store 
> back to 'data' in another thread, the use of 'data' in this thread, and whether 
> or not the JVM has optimized away a reload of 'data' (it's not declared 
> volatile).
> We should do a null check here just to be defensive. We should also look at 
> whether concurrent access to the Connection is happening and intended. The 
> above is just a theory. We should also look at other execution sequences that 
> could lead to 'data' being null at this location. At a glance I didn't find 
> one, but the store to 'data' happens behind conditionals, so it is possible. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20878) Data loss if merging regions while ServerCrashProcedure executing

2018-07-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547381#comment-16547381
 ] 

stack commented on HBASE-20878:
---

Patch LGTM. Good one. Nice comments in code on why this obtuse check.

Having to use WALSplitter.getSplitEditFilesSorted is ugly but I'm impressed you 
found this method... At least it hides a bunch of the recovered.edits mess.

Nits that are not important:

+ Change the NavigableSet files variable to a Collection and then do files == null 
|| files.isEmpty()... to avoid an import and an == 0 on a Collection; no biggie.
+ Throw an HBaseIOE rather than an IOE at the throw new IOException? We throw 
too much base IOE as it is.

When the test runs, is it repro'ing the condition?

Thanks.

> Data loss if merging regions while ServerCrashProcedure executing
> -
>
> Key: HBASE-20878
> URL: https://issues.apache.org/jira/browse/HBASE-20878
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20878.branch-2.0.001.patch, 
> HBASE-20878.branch-2.0.002.patch, HBASE-20878.branch-2.0.003.patch
>
>
> In MergeTableRegionsProcedure, we close the regions to merge using 
> UnassignProcedure. But if the RS these regions are on crashes, a 
> ServerCrashProcedure will execute at the same time. The UnassignProcedures will 
> be blocked until all logs are split. But since these regions are closed for 
> merging, the regions won't open again, so the recovered.edits in the region dir 
> won't be replayed; thus, data will be lost.
> I provided a test to reproduce this case. I strongly suspect the split region 
> procedure also has this kind of problem. I will check later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20893) Data loss if splitting region while ServerCrashProcedure executing

2018-07-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16547391#comment-16547391
 ] 

stack commented on HBASE-20893:
---

I'm thinking this issue has always been present. What do you lads think?

> Data loss if splitting region while ServerCrashProcedure executing
> --
>
> Key: HBASE-20893
> URL: https://issues.apache.org/jira/browse/HBASE-20893
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-20893.branch-2.0.001.patch, 
> HBASE-20893.branch-2.0.002.patch
>
>
> Similar case as HBASE-20878.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20856) PITA having to set WAL provider in two places

2018-07-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-20856:
-

Assignee: Tak Lon (Stephen) Wu

> PITA having to set WAL provider in two places
> -
>
> Key: HBASE-20856
> URL: https://issues.apache.org/jira/browse/HBASE-20856
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability, wal
>Reporter: stack
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
> Fix For: 2.0.2, 2.2.0, 2.1.1
>
>
> Courtesy of [~elserj], I learned that to change the WAL we need to set two 
> places... both hbase.wal.meta_provider and hbase.wal.provider. An operator 
> should only have to set it in one place; hbase.wal.meta_provider should pick 
> up the general setting unless it is explicitly set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


  1   2   >