[jira] [Updated] (HBASE-20783) NPE encountered when rolling update from master with an async peer to branch HBASE-19064

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20783:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HBASE-19064
   Status: Resolved  (was: Patch Available)

Pushed to branch HBASE-19064. Thanks [~openinx] for contributing.

> NPE encountered when rolling update from master with an async peer to branch 
> HBASE-19064
> 
>
> Key: HBASE-20783
> URL: https://issues.apache.org/jira/browse/HBASE-20783
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: HBASE-19064
>
> Attachments: HBASE-20783-HBASE-19064-v1.patch, 
> HBASE-20783-HBASE-19064-v1.patch
>
>
> {code}
> 2018-06-25 16:25:04,261 ERROR [Thread-14] master.HMaster: Failed to become 
> active master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.replication.SyncReplicationState.parseFrom(SyncReplicationState.java:72)
> at 
> org.apache.hadoop.hbase.replication.ZKReplicationPeerStorage.getSyncReplicationState(ZKReplicationPeerStorage.java:224)
> at 
> org.apache.hadoop.hbase.replication.ZKReplicationPeerStorage.getPeerSyncReplicationState(ZKReplicationPeerStorage.java:240)
> at 
> org.apache.hadoop.hbase.master.replication.ReplicationPeerManager.create(ReplicationPeerManager.java:479)
> at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:755)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:895)
> at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2126)
> at 
> org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:571)
> at java.lang.Thread.run(Thread.java:748)
> {code}
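For context, here is a minimal, self-contained sketch of the kind of defensive parsing that avoids this NPE. It is an assumption about the shape of the fix, not the attached HBASE-20783 patch, and the enum is a simplified stand-in for org.apache.hadoop.hbase.replication.SyncReplicationState: a peer created before the sync replication feature has no state stored in ZooKeeper, so the raw bytes come back null and should map to a default state instead of being dereferenced.

{code}
// Simplified stand-in, for illustration only (not the actual HBase class).
public class SyncStateParseSketch {

  enum SyncReplicationState { NONE, ACTIVE, DOWNGRADE_ACTIVE, STANDBY }

  // Defensive parse: null/empty ZK data (peer written by an old master) maps to NONE.
  static SyncReplicationState parseFrom(byte[] bytes) {
    if (bytes == null || bytes.length == 0) {
      return SyncReplicationState.NONE;
    }
    // The real code decodes a protobuf here; this sketch just reads an ordinal.
    return SyncReplicationState.values()[bytes[0]];
  }

  public static void main(String[] args) {
    System.out.println(parseFrom(null));             // NONE instead of a NullPointerException
    System.out.println(parseFrom(new byte[] { 1 })); // ACTIVE
  }
}
{code}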



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20751) Performance comparison of synchronous replication branch and master branch

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20751:
--
Component/s: Performance

>  Performance comparison of synchronous replication branch and master branch
> ---
>
> Key: HBASE-20751
> URL: https://issues.apache.org/jira/browse/HBASE-20751
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: HBASE-19064
>
> Attachments: HBASE-19064-branch-async-replication.png, 
> HBASE-19064-branch-sync-replication.png, 
> master-branch-async-replication-put1.png, result.png, ycsb-log.tar.gz
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20751) Performance comparison of synchronous replication branch and master branch

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-20751.
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HBASE-19064

>  Performance comparison of synchronous replication branch and master branch
> ---
>
> Key: HBASE-20751
> URL: https://issues.apache.org/jira/browse/HBASE-20751
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: HBASE-19064
>
> Attachments: HBASE-19064-branch-async-replication.png, 
> HBASE-19064-branch-sync-replication.png, 
> master-branch-async-replication-put1.png, result.png, ycsb-log.tar.gz
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20783) NPE encountered when rolling update from master with an async peer to branch HBASE-19064

2018-06-25 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523298#comment-16523298
 ] 

Duo Zhang commented on HBASE-20783:
---

+1. Let me commit.

> NPE encountered when rolling update from master with an async peer to branch 
> HBASE-19064
> 
>
> Key: HBASE-20783
> URL: https://issues.apache.org/jira/browse/HBASE-20783
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20783-HBASE-19064-v1.patch, 
> HBASE-20783-HBASE-19064-v1.patch
>
>
> {code}
> 2018-06-25 16:25:04,261 ERROR [Thread-14] master.HMaster: Failed to become 
> active master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.replication.SyncReplicationState.parseFrom(SyncReplicationState.java:72)
> at 
> org.apache.hadoop.hbase.replication.ZKReplicationPeerStorage.getSyncReplicationState(ZKReplicationPeerStorage.java:224)
> at 
> org.apache.hadoop.hbase.replication.ZKReplicationPeerStorage.getPeerSyncReplicationState(ZKReplicationPeerStorage.java:240)
> at 
> org.apache.hadoop.hbase.master.replication.ReplicationPeerManager.create(ReplicationPeerManager.java:479)
> at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:755)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:895)
> at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2126)
> at 
> org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:571)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20751) Performance comparison of synchronous replication branch and master branch

2018-06-25 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523274#comment-16523274
 ] 

Zheng Hu edited comment on HBASE-20751 at 6/26/18 6:38 AM:
---

Uploaded the testing results. We have three cases: 
1. master branch with async replication enabled. 
2. HBASE-19064 branch with async replication enabled. 
3. HBASE-19064 branch with sync replication enabled. 

For each case, I created a table with 100 regions and used YCSB (bin/ycsb 
run hbase10 -P workload -s -threads 120) with 120 threads to write to the source 
HBase cluster. 

{code}
hbase> n_splits = 100
hbase> create 'ycsb-test', {NAME=>'C',  REPLICATION_SCOPE=>'1'}, {SPLITS => 
(1..n_splits).map {|i| "user#{1000+i*(9999-1000)/n_splits}"}}
{code}

The ycsb workload: 
{code}
table=ycsb-test
columnfamily=C
recordcount=1
operationcount=1
workload=com.yahoo.ycsb.workloads.CoreWorkload
fieldlength=100
fieldcount=1
autoflush=false
clientbuffering=false   # Ensure that Put is used rather than BufferedMutator. 
BTW, we use the default durability=SYNC_WAL. 
 
readallfields=true
writeallfields=true
 
readproportion=0
updateproportion=0
scanproportion=0
insertproportion=1.0
 
requestdistribution=zipfian
{code}



was (Author: openinx):
Uploaded the testing results. We have three cases: 
1. master branch with async replication enabled. 
2. HBASE-19064 branch with async replication enabled. 
3. HBASE-19064 branch with sync replication enabled. 

For each case, I created a table with 100 regions and used YCSB (bin/ycsb 
run hbase10 -P workload -s -threads 120) with 120 threads to write to the source 
HBase cluster. 

{code}
hbase> n_splits = 100
hbase> create 'ycsb-test', {NAME=>'C',  REPLICATION_SCOPE=>'1'}, {SPLITS => 
(1..n_splits).map {|i| "user#{1000+i*(9999-1000)/n_splits}"}}
{code}

The ycsb workload: 
{code}
table=ycsb-test
columnfamily=C
recordcount=1
operationcount=1
workload=com.yahoo.ycsb.workloads.CoreWorkload
fieldlength=100
fieldcount=1
autoflush=false
clientbuffering=false   # Ensure that Put is used rather than BufferedMutator. 
BTW, we use the default durability=SYNC_WAL. 
 
readallfields=true
writeallfields=true
 
readproportion=0
updateproportion=0
scanproportion=0
insertproportion=1.0
 
requestdistribution=zipfian
{code}


>  Performance comparison of synchronous replication branch and master branch
> ---
>
> Key: HBASE-20751
> URL: https://issues.apache.org/jira/browse/HBASE-20751
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-19064-branch-async-replication.png, 
> HBASE-19064-branch-sync-replication.png, 
> master-branch-async-replication-put1.png, result.png, ycsb-log.tar.gz
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20751) Performance comparison of synchronous replication branch and master branch

2018-06-25 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523286#comment-16523286
 ] 

Zheng Hu commented on HBASE-20751:
--

There is a small peak on the avg and p99 curves because, with the regions 
balanced, the region servers flushed their memstores at the same time... 

bq. Seems no difference between master and HBASE-19064 if we do not use sync 
replication? 
Yes, the QPS and average latency are similar when comparing the master branch with 
async replication and HBASE-19064 with async replication. BTW, HBASE-19064 with 
sync replication dropped about 13%; we can optimize sync replication in 
phase 2. 

>  Performance comparison of synchronous replication branch and master branch
> ---
>
> Key: HBASE-20751
> URL: https://issues.apache.org/jira/browse/HBASE-20751
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-19064-branch-async-replication.png, 
> HBASE-19064-branch-sync-replication.png, 
> master-branch-async-replication-put1.png, result.png, ycsb-log.tar.gz
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20751) Performance comparison of synchronous replication branch and master branch

2018-06-25 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523274#comment-16523274
 ] 

Zheng Hu commented on HBASE-20751:
--

Uploaded the testing results. We have three cases: 
1. master branch with async replication enabled. 
2. HBASE-19064 branch with async replication enabled. 
3. HBASE-19064 branch with sync replication enabled. 

For each case, I created a table with 100 regions and used YCSB (bin/ycsb 
run hbase10 -P workload -s -threads 120) with 120 threads to write to the source 
HBase cluster. 

{code}
hbase> n_splits = 100
hbase> create 'ycsb-test', {NAME=>'C',  REPLICATION_SCOPE=>'1'}, {SPLITS => 
(1..n_splits).map {|i| "user#{1000+i*(9999-1000)/n_splits}"}}
{code}

The ycsb workload: 
{code}
table=ycsb-test
columnfamily=C
recordcount=1
operationcount=1
workload=com.yahoo.ycsb.workloads.CoreWorkload
fieldlength=100
fieldcount=1
autoflush=false
clientbuffering=false   # Ensure that Put is used rather than BufferedMutator. 
BTW, we use the default durability=SYNC_WAL. 
 
readallfields=true
writeallfields=true
 
readproportion=0
updateproportion=0
scanproportion=0
insertproportion=1.0
 
requestdistribution=zipfian
{code}


>  Performance comparison of synchronous replication branch and master branch
> ---
>
> Key: HBASE-20751
> URL: https://issues.apache.org/jira/browse/HBASE-20751
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-19064-branch-async-replication.png, 
> HBASE-19064-branch-sync-replication.png, 
> master-branch-async-replication-put1.png, result.png, ycsb-log.tar.gz
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20751) Performance comparison of synchronous replication branch and master branch

2018-06-25 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523272#comment-16523272
 ] 

Duo Zhang commented on HBASE-20751:
---

Seems no difference between master and HBASE-19064 if we do not use sync 
replication? Good.

>  Performance comparison of synchronous replication branch and master branch
> ---
>
> Key: HBASE-20751
> URL: https://issues.apache.org/jira/browse/HBASE-20751
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-19064-branch-async-replication.png, 
> HBASE-19064-branch-sync-replication.png, 
> master-branch-async-replication-put1.png, result.png, ycsb-log.tar.gz
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20751) Performance comparison of synchronous replication branch and master branch

2018-06-25 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20751:
-
Attachment: HBASE-19064-branch-async-replication.png
HBASE-19064-branch-sync-replication.png
master-branch-async-replication-put1.png

>  Performance comparison of synchronous replication branch and master branch
> ---
>
> Key: HBASE-20751
> URL: https://issues.apache.org/jira/browse/HBASE-20751
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-19064-branch-async-replication.png, 
> HBASE-19064-branch-sync-replication.png, 
> master-branch-async-replication-put1.png, result.png, ycsb-log.tar.gz
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20751) Performance comparison of synchronous replication branch and master branch

2018-06-25 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20751:
-
Attachment: result.png

>  Performance comparison of synchronous replication branch and master branch
> ---
>
> Key: HBASE-20751
> URL: https://issues.apache.org/jira/browse/HBASE-20751
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: result.png, ycsb-log.tar.gz
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-13017) Backport HBASE-12035 Keep table state in Meta to branch-1

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-13017.
---
   Resolution: Won't Fix
Fix Version/s: (was: 1.5.0)

Not active for a long time. Let's resolve it for now. Can reopen later if we 
still want to backport it to branch-1.

> Backport HBASE-12035 Keep table state in Meta to branch-1
> -
>
> Key: HBASE-13017
> URL: https://issues.apache.org/jira/browse/HBASE-13017
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 1.1.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
>Priority: Major
>  Labels: backport
> Attachments: HBASE-13017-branch-1.patch, 
> HBASE-13017-branch-1.v1.patch, HBASE-13017-branch-1.v1.patch, 
> HBASE-13017-branch-1.v2.patch, HBASE-13017-branch-1.v3.patch, 
> HBASE-13017-branch-1.v4.patch, HBASE-13017-branch-1.v5.patch, 
> HBASE-13017-branch-1.v6.patch
>
>
> Let's backport that feature to branch-1.0, adapting HBASE-12035. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20751) Performance comparison of synchronous replication branch and master branch

2018-06-25 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-20751:
-
Attachment: ycsb-log.tar.gz

>  Performance comparison of synchronous replication branch and master branch
> ---
>
> Key: HBASE-20751
> URL: https://issues.apache.org/jira/browse/HBASE-20751
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: ycsb-log.tar.gz
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-13147) Load actual META table descriptor, don't use statically defined one.

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-13147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-13147.
---
Resolution: Later

Partially solved by HBASE-17931, which means that when adding new families to the 
meta table, we first need to upgrade an RS to carry the meta region, and then it 
will have the new families.

Can reopen later if we have a better solution.

> Load actual META table descriptor, don't use statically defined one.
> 
>
> Key: HBASE-13147
> URL: https://issues.apache.org/jira/browse/HBASE-13147
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
>Priority: Major
> Attachments: HBASE-13147-branch-1.patch, 
> HBASE-13147-branch-1.v2.patch, HBASE-13147.patch, HBASE-13147.v2.patch, 
> HBASE-13147.v3.patch, HBASE-13147.v4.patch, HBASE-13147.v4.patch, 
> HBASE-13147.v5.patch, HBASE-13147.v6.patch, HBASE-13147.v7.patch
>
>
> In HBASE-13087 we stumbled on the fact that region servers don't see the actual 
> meta descriptor; they use their own, statically compiled one.
> We need to fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523261#comment-16523261
 ] 

Duo Zhang commented on HBASE-20788:
---

{quote}
Is this true? What if the 2.0 server carrying hbase:meta goes down before 
another hbase2 is up on the cluster?
{quote}

Usually it will be fine. Since we upgrade the master at the end, no one will 
write to the new table state family. But if you have tables which have 
REPLICATION_SCOPE set to 1, then for 2.1.0 there will be a problem. In 2.1.0, 
when opening a region with REPLICATION_SCOPE set to 1, we write a barrier 
into the new replication barrier family, for serial replication. So if the meta 
region is then assigned to an RS running the old version, it will fail to 
replay the WAL and the meta region can never come online. For 2.0 there is no problem.

But anyway, for the upgrade you will soon have several RSes with the new 
version... And it is known that we do not support downgrading, so...
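To make the failure mode concrete, here is a self-contained conceptual sketch; every name in it is a stand-in rather than the actual HBase class or the exact hbase:meta family name. The point is only that an old-version RS carries a statically compiled meta descriptor, so a meta WAL entry written by a 2.1.0 RS against a family the old descriptor does not contain cannot be replayed, and the meta region stays offline.

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Conceptual sketch only: not the real replay code, and the family names are assumptions.
class MetaReplaySketch {

  // Families the old-version RS believes hbase:meta has (statically compiled descriptor).
  static final Set<String> OLD_META_FAMILIES = new HashSet<>(Arrays.asList("info", "table"));

  static void replayMetaEdit(String family) {
    if (!OLD_META_FAMILIES.contains(family)) {
      // An old RS cannot apply an edit against a family it does not know about.
      throw new IllegalStateException("Cannot replay meta WAL entry for unknown family: " + family);
    }
    // ... apply the edit ...
  }

  public static void main(String[] args) {
    replayMetaEdit("info");        // fine on an old RS
    replayMetaEdit("rep_barrier"); // throws: this family only exists in the new descriptor
  }
}
{code}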

> Write up a doc about how to rolling upgrade from 1.x to 2.x
> ---
>
> Key: HBASE-20788
> URL: https://issues.apache.org/jira/browse/HBASE-20788
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.1.0
>
> Attachments: HBASE-20788.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19064) Synchronous replication for HBase

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523259#comment-16523259
 ] 

Hudson commented on HBASE-19064:


Results for branch HBASE-19064
[build #173 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/173/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/173//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/173//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/173//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Synchronous replication for HBase
> -
>
> Key: HBASE-19064
> URL: https://issues.apache.org/jira/browse/HBASE-19064
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> The guys from Alibaba made a presentation on HBaseCon Asia about the 
> synchronous replication for HBase. We(Xiaomi) think this is a very useful 
> feature for HBase so we want to bring it into the community version.
> This is a big feature so we plan to do it in a feature branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20785) NPE getting metrics in PE testing scans

2018-06-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523258#comment-16523258
 ] 

Hadoop QA commented on HBASE-20785:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
 7s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
5s{color} | {color:red} hbase-server: The patch generated 1 new + 30 unchanged 
- 0 fixed = 31 total (was 30) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
 9s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
5m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 50s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:4b0a87a |
| JIRA Issue | HBASE-20785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929138/HBASE-20785.branch-1.4.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 8ca67676bef5 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13

[jira] [Commented] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523240#comment-16523240
 ] 

stack commented on HBASE-20788:
---

Doc is great. Let me commit in the morning. I'll prefix it with the tag 
"Experimental", at least until it has been tried by others.

bq. It is OK that during the rolling upgrading there are region server crashes. 

Is this true? What if the 2.0 server carrying hbase:meta goes down before 
another hbase2 is up on the cluster?

That this works is super sweet. Good on you [~Apache9]

> Write up a doc about how to rolling upgrade from 1.x to 2.x
> ---
>
> Key: HBASE-20788
> URL: https://issues.apache.org/jira/browse/HBASE-20788
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.1.0
>
> Attachments: HBASE-20788.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20781) Save recalculating families in a WALEdit batch of Cells

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523237#comment-16523237
 ] 

stack commented on HBASE-20781:
---

.0002 fixes the broken unit tests and adds more explanation of the optimization in the comments.

> Save recalculating families in a WALEdit batch of Cells
> ---
>
> Key: HBASE-20781
> URL: https://issues.apache.org/jira/browse/HBASE-20781
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: 2.0621.2.12782.cpu.svg, 2.0621.256k.51351.cpu.svg, 
> 2.0623.families.121250.cpu.svg, HBASE-20781.branch-2.0.001.patch, 
> HBASE-20781.branch-2.0.002.patch, HBASE-20781.master.001.patch
>
>
> Doing a doMiniBatchMutate, we calculate the set of families that the WALEdit 
> Cells belong to up front, but later, after the RingBuffer, when we make an 
> FSWALEdit, we spin through all the Cells again to figure out the set of families 
> in a particularly painful way. Just pass the calculated family set in the 
> WALEdit.
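A minimal, self-contained sketch of the idea (illustrative names, not the actual HBASE-20781 patch): compute the family set once while assembling the mini-batch and hand it to the edit, so the code downstream of the RingBuffer reuses it instead of re-walking every Cell.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Illustrative only: a WALEdit-like holder that caches the family set computed up front.
class FamilySetSketch {

  static final class Edit {
    final List<String> cellFamilies;   // stand-in for the families of the edit's Cells
    private Set<String> families;      // computed once in the mini-batch path, then reused

    Edit(List<String> cellFamilies, Set<String> precomputedFamilies) {
      this.cellFamilies = cellFamilies;
      this.families = precomputedFamilies;
    }

    Set<String> getFamilies() {
      if (families == null) {          // old slow path: rescan all the Cells
        families = new TreeSet<>(cellFamilies);
      }
      return families;
    }
  }

  public static void main(String[] args) {
    List<String> cells = Arrays.asList("cf1", "cf1", "cf2");
    Set<String> upFront = new TreeSet<>(cells);   // calculated once, up front
    Edit edit = new Edit(cells, upFront);
    System.out.println(edit.getFamilies());       // [cf1, cf2], no second pass over the Cells
  }
}
{code}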



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20781) Save recalculating families in a WALEdit batch of Cells

2018-06-25 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20781:
--
Attachment: HBASE-20781.branch-2.0.002.patch

> Save recalculating families in a WALEdit batch of Cells
> ---
>
> Key: HBASE-20781
> URL: https://issues.apache.org/jira/browse/HBASE-20781
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: 2.0621.2.12782.cpu.svg, 2.0621.256k.51351.cpu.svg, 
> 2.0623.families.121250.cpu.svg, HBASE-20781.branch-2.0.001.patch, 
> HBASE-20781.branch-2.0.002.patch, HBASE-20781.master.001.patch
>
>
> Doing a doMiniBatchMutate, we calculate the set of families that the WALEdit 
> Cells belong to up front, but later, after the RingBuffer, when we make an 
> FSWALEdit, we spin through all the Cells again to figure out the set of families 
> in a particularly painful way. Just pass the calculated family set in the 
> WALEdit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20780) ServerRpcConnection logging cleanup

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523217#comment-16523217
 ] 

Hudson commented on HBASE-20780:


Results for branch branch-2.0
[build #474 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> ServerRpcConnection logging cleanup
> ---
>
> Key: HBASE-20780
> URL: https://issues.apache.org/jira/browse/HBASE-20780
> Project: HBase
>  Issue Type: Sub-task
>  Components: logging, Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: 2.0621.2.12782.lock.svg, 2.0621.2M.46340.lock.svg, 
> 2.0623.111354.lock.svg, HBASE-20780.branch-2.0.001.patch, 
> HBASE-20780.branch-2.0.001.patch
>
>
> The logging we do inside connection header parsing shows as the worst offender 
> in the perf locking profiles. It's odd, but easy to clean up. I doubt it 
> makes any difference in throughput but let's get it out of the way. Let me 
> load up a few samples of what it currently looks like.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523216#comment-16523216
 ] 

Hudson commented on HBASE-20403:


Results for branch branch-2.0
[build #474 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get messed up due to encryption. Possible 
> fixes could be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and do not 
> prefetch if so.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.
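A short sketch of the first option, assuming the caller already has the HFile's FileSystem and Path; the helper class and method here are made up for illustration, but FileStatus#isEncrypted() is the check named in the description.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical guard: skip the prefetch for files that live in an HDFS encryption zone.
final class PrefetchGuard {

  static boolean shouldPrefetch(FileSystem fs, Path hfile, boolean prefetchOnOpen)
      throws IOException {
    if (!prefetchOnOpen) {
      return false;
    }
    FileStatus status = fs.getFileStatus(hfile);
    return !status.isEncrypted();   // do not prefetch encrypted files
  }
}
{code}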



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20777) RpcConnection could still remain opened after we shutdown the NettyRpcServer

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523218#comment-16523218
 ] 

Hudson commented on HBASE-20777:


Results for branch branch-2.0
[build #474 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> RpcConnection could still remain opened after we shutdown the NettyRpcServer
> 
>
> Key: HBASE-20777
> URL: https://issues.apache.org/jira/browse/HBASE-20777
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: HBASE-20777-v1.patch, HBASE-20777.patch, 
> org.apache.hadoop.hbase.client.TestAsyncTableBatch-output.txt
>
>
> The log is very strange: we keep sending requests to a dead RS, and the result 
> is not connection refused but rpc timeout, and later it becomes 
> CallQueueTooBig...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523215#comment-16523215
 ] 

Hudson commented on HBASE-20770:


Results for branch branch-2.0
[build #474 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/474//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.2
>
> Attachments: HBASE-20770.branch-2.0.001.patch
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on a large cluster, the 
> Master log is a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging minutiae 
> rather than calling out what is actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleane

[jira] [Commented] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523194#comment-16523194
 ] 

Hudson commented on HBASE-20770:


Results for branch branch-2
[build #908 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.2
>
> Attachments: HBASE-20770.branch-2.0.001.patch
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on a large cluster, the 
> Master log is a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging minutiae 
> rather than calling out what is actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> 

[jira] [Commented] (HBASE-20780) ServerRpcConnection logging cleanup

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523195#comment-16523195
 ] 

Hudson commented on HBASE-20780:


Results for branch branch-2
[build #908 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ServerRpcConnection logging cleanup
> ---
>
> Key: HBASE-20780
> URL: https://issues.apache.org/jira/browse/HBASE-20780
> Project: HBase
>  Issue Type: Sub-task
>  Components: logging, Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: 2.0621.2.12782.lock.svg, 2.0621.2M.46340.lock.svg, 
> 2.0623.111354.lock.svg, HBASE-20780.branch-2.0.001.patch, 
> HBASE-20780.branch-2.0.001.patch
>
>
> The logging we do inside connection header parsing shows as the worst offender 
> in the perf locking profiles. It's odd, but easy to clean up. I doubt it 
> makes any difference in throughput but let's get it out of the way. Let me 
> load up a few samples of what it currently looks like.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523193#comment-16523193
 ] 

Hudson commented on HBASE-20403:


Results for branch branch-2
[build #908 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get messed up due to encryption. Possible 
> fixes could be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and do not 
> prefetch if so.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20777) RpcConnection could still remain opened after we shutdown the NettyRpcServer

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523196#comment-16523196
 ] 

Hudson commented on HBASE-20777:


Results for branch branch-2
[build #908 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/908//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> RpcConnection could still remain opened after we shutdown the NettyRpcServer
> 
>
> Key: HBASE-20777
> URL: https://issues.apache.org/jira/browse/HBASE-20777
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: HBASE-20777-v1.patch, HBASE-20777.patch, 
> org.apache.hadoop.hbase.client.TestAsyncTableBatch-output.txt
>
>
> The log is very strange: we keep sending requests to a dead RS, and the result 
> is not connection refused but rpc timeout, and later it becomes 
> CallQueueTooBig...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523161#comment-16523161
 ] 

Hadoop QA commented on HBASE-20788:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
46s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  6m  
6s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  4m 
50s{color} | {color:blue} patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20788 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929129/HBASE-20788.patch |
| Optional Tests |  asflicense  refguide  |
| uname | Linux 28fd2fe17466 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4ba6242a62 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13388/artifact/patchprocess/branch-site/book.html
 |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13388/artifact/patchprocess/patch-site/book.html
 |
| Max. process+thread count | 83 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13388/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Write up a doc about how to rolling upgrade from 1.x to 2.x
> ---
>
> Key: HBASE-20788
> URL: https://issues.apache.org/jira/browse/HBASE-20788
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.1.0
>
> Attachments: HBASE-20788.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20769) getSplits() has a out of bounds problem in TableSnapshotInputFormatImpl

2018-06-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523157#comment-16523157
 ] 

Hadoop QA commented on HBASE-20769:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
27s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
27s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 52s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
22s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929125/HBASE-20769.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux a030d521957a 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4ba6242a62 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13387/testReport/ |
| Max. process+thread count | 3879 (vs. ulimit of 1) |
| modules | C: hbase-mapreduce U: hbase-mapreduce |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13387/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> getSp

[jira] [Updated] (HBASE-19164) Avoid UUID.randomUUID in tests

2018-06-25 Thread Sahil Aggarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Aggarwal updated HBASE-19164:
---
Attachment: HBASE-19164.master.005.patch

> Avoid UUID.randomUUID in tests
> --
>
> Key: HBASE-19164
> URL: https://issues.apache.org/jira/browse/HBASE-19164
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Sahil Aggarwal
>Priority: Major
>  Labels: beginner
> Attachments: HBASE-19164.master.001.patch, 
> HBASE-19164.master.002.patch, HBASE-19164.master.003.patch, 
> HBASE-19164.master.004.patch, HBASE-19164.master.005.patch
>
>
> We have a lot of places in our test code where we use {{UUID.randomUUID}} to 
> generate table names or paths for uniqueness. Unfortunately, this uses up a 
> good chunk of system entropy, since Sun chose to have random UUIDs use the 
> NativePRNGBlocking implementation.
> We don't need to block on entropy for random bits to pick a random table name 
> in a test, so we can use something that doesn't strain the system too much - 
> secure random can be a source of problems on some VMs or containers.
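
Below is a minimal sketch of the kind of replacement the description suggests 
(the class and method names are illustrative, not part of the attached patches): 
derive unique test names from a counter plus {{ThreadLocalRandom}}, neither of 
which blocks on system entropy.

{code:java}
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

public final class TestNames {
  private static final AtomicLong COUNTER = new AtomicLong();

  /** Unique-enough name for a test table or path, without touching SecureRandom. */
  public static String uniqueName(String prefix) {
    long salt = ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
    return prefix + "-" + COUNTER.incrementAndGet() + "-" + Long.toHexString(salt);
  }
}
{code}

For example, {{TestNames.uniqueName("testtable")}} yields names that are unique 
within a test run without draining the entropy pool.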



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19164) Avoid UUID.randomUUID in tests

2018-06-25 Thread Sahil Aggarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523153#comment-16523153
 ] 

Sahil Aggarwal commented on HBASE-19164:


Rebased the patch.

> Avoid UUID.randomUUID in tests
> --
>
> Key: HBASE-19164
> URL: https://issues.apache.org/jira/browse/HBASE-19164
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Sahil Aggarwal
>Priority: Major
>  Labels: beginner
> Attachments: HBASE-19164.master.001.patch, 
> HBASE-19164.master.002.patch, HBASE-19164.master.003.patch, 
> HBASE-19164.master.004.patch
>
>
> We have a lot of places in our test code where we use {{UUID.randomUUID}} to 
> generate table names or paths for uniqueness. Unfortunately, this uses up a 
> good chunk of system entropy, since Sun chose to have random UUIDs use the 
> NativePRNGBlocking implementation.
> We don't need to block on entropy for random bits to pick a random table name 
> in a test, so we can use something that doesn't strain the system too much - 
> secure random can be a source of problems on some VMs or containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19164) Avoid UUID.randomUUID in tests

2018-06-25 Thread Sahil Aggarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Aggarwal updated HBASE-19164:
---
Attachment: HBASE-19164.master.004.patch

> Avoid UUID.randomUUID in tests
> --
>
> Key: HBASE-19164
> URL: https://issues.apache.org/jira/browse/HBASE-19164
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Sahil Aggarwal
>Priority: Major
>  Labels: beginner
> Attachments: HBASE-19164.master.001.patch, 
> HBASE-19164.master.002.patch, HBASE-19164.master.003.patch, 
> HBASE-19164.master.004.patch
>
>
> We have a lot of places in our test code where we use {{UUID.randomUUID}} to 
> generate table names or paths for uniqueness. Unfortunately, this uses up a 
> good chunk of system entropy, since Sun chose to have random UUIDs use the 
> NativePRNGBlocking implementation.
> We don't need to block on entropy for random bits to pick a random table name 
> in a test, so we can use something that doesn't strain the system too much - 
> secure random can be a source of problems on some VMs or containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20785) NPE getting metrics in PE testing scans

2018-06-25 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20785:
--
Attachment: HBASE-20785.branch-1.4.001.patch

> NPE getting metrics in PE testing scans
> ---
>
> Key: HBASE-20785
> URL: https://issues.apache.org/jira/browse/HBASE-20785
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 1.4.4
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20785.branch-1.4.001.patch, 
> HBASE-20785.branch-1.4.001.patch, HBASE-20785.branch-1.4.001.patch
>
>
> Comparing scans using PE. In branch-1 at least, I was getting an NPE when we 
> tried to use a null metrics instance. Seems transient around startup. 
> One-liner patch coming.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20786) Table create with thousands of regions takes too long

2018-06-25 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523149#comment-16523149
 ] 

Mike Drob commented on HBASE-20786:
---

An advantage of separate pools is that tasks are less likely to be starved out.

> Table create with thousands of regions takes too long
> -
>
> Key: HBASE-20786
> URL: https://issues.apache.org/jira/browse/HBASE-20786
> Project: HBase
>  Issue Type: Umbrella
>  Components: Performance
>Reporter: stack
>Priority: Major
>
> In internal testing, creating a table with 33k regions took 18 minutes. 
> Let me provide more info below. We have an executor with a default of ten 
> threads handling the creation of the regions in HDFS, which helps distribute 
> the load, but it's not enough. This cluster had >600 servers. Let me add 
> detail.
> We need to spend some time on speeding up create/assigns. Made this an 
> umbrella issue so we can pick off pieces of the problem as subtasks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20786) Table create with thousands of regions takes too long

2018-06-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523143#comment-16523143
 ] 

Andrew Purtell commented on HBASE-20786:


There are a lot of fixed-size executor pools in the master for various tasks. 
Can they be serviced by one common pool, maybe with a relatively small core 
pool but with a large upper bound? The same thought applies to RSes to a lesser 
extent. It's annoying to have to adjust them up or down due to environment 
particulars, although I suppose that is not common. (I needed to do it when 
deploying new test tables on a slow FS (S3).) It would help here, aside from 
other improvements.
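
A minimal sketch of the shared-pool idea above, using plain 
{{java.util.concurrent}} (this is not the master's actual executor wiring): a 
small core pool that can grow to a large upper bound under load and shrink back 
when idle.

{code:java}
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class SharedPool {
  public static ThreadPoolExecutor create() {
    // Small core, large ceiling; the SynchronousQueue makes the pool spawn new
    // threads instead of queueing, and idle threads are reclaimed after 60s.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        4, 256, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());
    pool.allowCoreThreadTimeOut(true);
    return pool;
  }
}
{code}

Note that with a SynchronousQueue the pool rejects work once all threads are 
busy, so a real shared pool would also need a sensible RejectedExecutionHandler 
(e.g. CallerRunsPolicy).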

> Table create with thousands of regions takes too long
> -
>
> Key: HBASE-20786
> URL: https://issues.apache.org/jira/browse/HBASE-20786
> Project: HBase
>  Issue Type: Umbrella
>  Components: Performance
>Reporter: stack
>Priority: Major
>
> In internal testing, creating a table with 33k regions took 18 minutes. 
> Let me provide more info below. We have an executor with a default of ten 
> threads handling the creation of the regions in HDFS, which helps distribute 
> the load, but it's not enough. This cluster had >600 servers. Let me add 
> detail.
> We need to spend some time on speeding up create/assigns. Made this an 
> umbrella issue so we can pick off pieces of the problem as subtasks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523139#comment-16523139
 ] 

Duo Zhang commented on HBASE-20788:
---

[~stack] FYI.

> Write up a doc about how to rolling upgrade from 1.x to 2.x
> ---
>
> Key: HBASE-20788
> URL: https://issues.apache.org/jira/browse/HBASE-20788
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.1.0
>
> Attachments: HBASE-20788.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20788:
--
Attachment: HBASE-20788.patch

> Write up a doc about how to rolling upgrade from 1.x to 2.x
> ---
>
> Key: HBASE-20788
> URL: https://issues.apache.org/jira/browse/HBASE-20788
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.1.0
>
> Attachments: HBASE-20788.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20788:
--
Status: Patch Available  (was: Open)

> Write up a doc about how to rolling upgrade from 1.x to 2.x
> ---
>
> Key: HBASE-20788
> URL: https://issues.apache.org/jira/browse/HBASE-20788
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.1.0
>
> Attachments: HBASE-20788.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20769) getSplits() has a out of bounds problem in TableSnapshotInputFormatImpl

2018-06-25 Thread Jingyun Tian (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523132#comment-16523132
 ] 

Jingyun Tian commented on HBASE-20769:
--

[~openinx] Patch updated. Pls check it out.

> getSplits() has a out of bounds problem in TableSnapshotInputFormatImpl
> ---
>
> Key: HBASE-20769
> URL: https://issues.apache.org/jira/browse/HBASE-20769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0, 2.0.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20769.master.001.patch, 
> HBASE-20769.master.002.patch, HBASE-20769.master.003.patch, 
> HBASE-20769.master.004.patch
>
>
> When numSplits > 1, getSplits may create a split whose start row is smaller 
> than the user-specified scan's start row, or whose stop row is larger than 
> the user-specified scan's stop row.
> {code}
> byte[][] sp = sa.split(hri.getStartKey(), hri.getEndKey(), numSplits, true);
> for (int i = 0; i < sp.length - 1; i++) {
>   if (PrivateCellUtil.overlappingKeys(scan.getStartRow(), scan.getStopRow(),
>       sp[i], sp[i + 1])) {
>     List<String> hosts =
>         calculateLocationsForInputSplit(conf, htd, hri, tableDir, localityEnabled);
>     Scan boundedScan = new Scan(scan);
>     boundedScan.setStartRow(sp[i]);
>     boundedScan.setStopRow(sp[i + 1]);
>     splits.add(new InputSplit(htd, hri, hosts, boundedScan, restoreDir));
>   }
> }
> {code}
> Since we split keys by the range of regions, when sp[i] < scan.getStartRow() 
> or sp[i + 1] > scan.getStopRow(), the created bounded scan may cover a range 
> beyond the user-defined scan.
> The fix should be simple:
> {code}
> boundedScan.setStartRow(
>     Bytes.compareTo(scan.getStartRow(), sp[i]) > 0 ? scan.getStartRow() : sp[i]);
> boundedScan.setStopRow(
>     Bytes.compareTo(scan.getStopRow(), sp[i + 1]) < 0 ? scan.getStopRow() : sp[i + 1]);
> {code}
> I will also try to add UTs to help discover this problem.
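
As a hedged illustration of the invariant the proposed fix enforces (not the 
actual unit test on this issue, and assuming both scans carry explicit, 
non-empty start and stop rows), a check like the following could be asserted 
for every generated bounded scan:

{code:java}
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public final class BoundedScanChecks {
  /** Every bounded scan must stay within the user scan's [startRow, stopRow) range. */
  public static void assertWithinUserRange(Scan userScan, Scan boundedScan) {
    assertTrue(Bytes.compareTo(boundedScan.getStartRow(), userScan.getStartRow()) >= 0);
    assertTrue(Bytes.compareTo(boundedScan.getStopRow(), userScan.getStopRow()) <= 0);
  }
}
{code}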



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20769) getSplits() has a out of bounds problem in TableSnapshotInputFormatImpl

2018-06-25 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-20769:
-
Attachment: HBASE-20769.master.004.patch

> getSplits() has a out of bounds problem in TableSnapshotInputFormatImpl
> ---
>
> Key: HBASE-20769
> URL: https://issues.apache.org/jira/browse/HBASE-20769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0, 2.0.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20769.master.001.patch, 
> HBASE-20769.master.002.patch, HBASE-20769.master.003.patch, 
> HBASE-20769.master.004.patch
>
>
> When numSplits > 1, getSplits may create a split whose start row is smaller 
> than the user-specified scan's start row, or whose stop row is larger than 
> the user-specified scan's stop row.
> {code}
> byte[][] sp = sa.split(hri.getStartKey(), hri.getEndKey(), numSplits, true);
> for (int i = 0; i < sp.length - 1; i++) {
>   if (PrivateCellUtil.overlappingKeys(scan.getStartRow(), scan.getStopRow(),
>       sp[i], sp[i + 1])) {
>     List<String> hosts =
>         calculateLocationsForInputSplit(conf, htd, hri, tableDir, localityEnabled);
>     Scan boundedScan = new Scan(scan);
>     boundedScan.setStartRow(sp[i]);
>     boundedScan.setStopRow(sp[i + 1]);
>     splits.add(new InputSplit(htd, hri, hosts, boundedScan, restoreDir));
>   }
> }
> {code}
> Since we split keys by the range of regions, when sp[i] < scan.getStartRow() 
> or sp[i + 1] > scan.getStopRow(), the created bounded scan may cover a range 
> beyond the user-defined scan.
> The fix should be simple:
> {code}
> boundedScan.setStartRow(
>     Bytes.compareTo(scan.getStartRow(), sp[i]) > 0 ? scan.getStartRow() : sp[i]);
> boundedScan.setStopRow(
>     Bytes.compareTo(scan.getStopRow(), sp[i + 1]) < 0 ? scan.getStopRow() : sp[i + 1]);
> {code}
> I will also try to add UTs to help discover this problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20781) Save recalculating families in a WALEdit batch of Cells

2018-06-25 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523093#comment-16523093
 ] 

Duo Zhang commented on HBASE-20781:
---

The approach is fine. +1. But it seems we broke some UTs?

> Save recalculating families in a WALEdit batch of Cells
> ---
>
> Key: HBASE-20781
> URL: https://issues.apache.org/jira/browse/HBASE-20781
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: 2.0621.2.12782.cpu.svg, 2.0621.256k.51351.cpu.svg, 
> 2.0623.families.121250.cpu.svg, HBASE-20781.branch-2.0.001.patch, 
> HBASE-20781.master.001.patch
>
>
> Doing a doMiniBatchMutate, we calculate the set of families that the WALEdit 
> Cells belong to up front, but downstream of the RingBuffer, when we make an 
> FSWALEdit, we spin through all the Cells again to figure out the set of 
> families in a particularly painful way. Just pass the calculated family set 
> in the WALEdit.
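
A minimal sketch of the idea, with a hypothetical hand-off method (the actual 
patch may wire this through WALEdit differently): compute the family set once 
from the batch's Cells and carry it along, instead of re-deriving it after the 
RingBuffer.

{code:java}
import java.util.NavigableSet;
import java.util.TreeSet;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.util.Bytes;

public final class WalEditFamilies {
  /** Compute the family set once, up front, for a batch of Cells. */
  public static NavigableSet<byte[]> familiesOf(Iterable<Cell> cells) {
    NavigableSet<byte[]> families = new TreeSet<>(Bytes.BYTES_COMPARATOR);
    for (Cell cell : cells) {
      families.add(CellUtil.cloneFamily(cell));
    }
    return families;
  }
  // Hypothetical hand-off, not the actual WALEdit API:
  // walEdit.setFamilies(familiesOf(walEdit.getCells()));
}
{code}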



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20781) Save recalculating families in a WALEdit batch of Cells

2018-06-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523092#comment-16523092
 ] 

Hadoop QA commented on HBASE-20781:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
55s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
13s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
12s{color} | {color:red} hbase-server: The patch generated 2 new + 222 
unchanged - 1 fixed = 224 total (was 223) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 46s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.wal.TestAsyncFSWAL |
|   | hadoop.hbase.wal.TestFSHLogProvider |
|   | hadoop.hbase.regionserver.wal.TestFSHLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-20781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929099/HBASE-20781.branch-2.0.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux fd737c98fdb4 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.0 / a8494626cf |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13385/artifact/patchprocess/diff-checkstyle-hbase-server

[jira] [Commented] (HBASE-20785) NPE getting metrics in PE testing scans

2018-06-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523078#comment-16523078
 ] 

Hadoop QA commented on HBASE-20785:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
18s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
11s{color} | {color:red} hbase-server: The patch generated 1 new + 30 unchanged 
- 0 fixed = 31 total (was 30) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
12s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 17s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 38s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:4b0a87a |
| JIRA Issue | HBASE-20785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929107/HBASE-20785.branch-1.4.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux f015778b0c5f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13

[jira] [Commented] (HBASE-19722) Meta query statistics metrics source

2018-06-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523038#comment-16523038
 ] 

Andrew Purtell commented on HBASE-19722:


The branch-1 patch fails this test

{noformat}
[INFO] Running org.apache.hadoop.hbase.coprocessor.TestMetaTableMetrics
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 28.562 
s <<< FAILURE! - in org.apache.hadoop.hbase.coprocessor.TestMetaTableMetrics
[ERROR] test(org.apache.hadoop.hbase.coprocessor.TestMetaTableMetrics)  Time 
elapsed: 14.202 s  <<< FAILURE!
java.lang.AssertionError
at 
org.apache.hadoop.hbase.coprocessor.TestMetaTableMetrics.test(TestMetaTableMetrics.java:202)
{noformat}


> Meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch, 
> HBASE-19722.master.014.patch, HBASE-19722.master.015.patch, 
> HBASE-19722.master.016.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.
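
A minimal, illustrative sketch of the bookkeeping such a coprocessor-backed 
metrics source could keep (pure JDK, not the attached patches): per-key request 
counters that can back the "top tables / rowkeys / clients" views.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.Collectors;

public final class MetaRequestStats {
  private final ConcurrentMap<String, LongAdder> counts = new ConcurrentHashMap<>();

  /** Record one meta request attributed to a key (table name, rowkey, or client host). */
  public void record(String key) {
    counts.computeIfAbsent(key, k -> new LongAdder()).increment();
  }

  /** Top-n keys by request count, in descending order, for exposing as metrics. */
  public Map<String, Long> top(int n) {
    return counts.entrySet().stream()
        .sorted((a, b) -> Long.compare(b.getValue().sum(), a.getValue().sum()))
        .limit(n)
        .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue().sum(),
            (a, b) -> a, LinkedHashMap::new));
  }
}
{code}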



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20777) RpcConnection could still remain opened after we shutdown the NettyRpcServer

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20777:
--
Component/s: rpc

> RpcConnection could still remain opened after we shutdown the NettyRpcServer
> 
>
> Key: HBASE-20777
> URL: https://issues.apache.org/jira/browse/HBASE-20777
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: HBASE-20777-v1.patch, HBASE-20777.patch, 
> org.apache.hadoop.hbase.client.TestAsyncTableBatch-output.txt
>
>
> The log is very strange: we keep sending requests to a dead RS, and the result 
> is not connection refused but rpc timeout, and later it becomes 
> CallQueueTooBig...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20777) RpcConnection could still remain opened after we shutdown the NettyRpcServer

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20777:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   2.0.2
   2.1.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-2.0+.

> RpcConnection could still remain opened after we shutdown the NettyRpcServer
> 
>
> Key: HBASE-20777
> URL: https://issues.apache.org/jira/browse/HBASE-20777
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: HBASE-20777-v1.patch, HBASE-20777.patch, 
> org.apache.hadoop.hbase.client.TestAsyncTableBatch-output.txt
>
>
> The log is very strange: we keep sending requests to a dead RS, and the result 
> is not connection refused but rpc timeout, and later it becomes 
> CallQueueTooBig...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20788:
--
Priority: Blocker  (was: Major)

> Write up a doc about how to rolling upgrade from 1.x to 2.x
> ---
>
> Key: HBASE-20788
> URL: https://issues.apache.org/jira/browse/HBASE-20788
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Priority: Blocker
> Fix For: 2.1.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20788:
--
Fix Version/s: 2.1.0

> Write up a doc about how to rolling upgrade from 1.x to 2.x
> ---
>
> Key: HBASE-20788
> URL: https://issues.apache.org/jira/browse/HBASE-20788
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Priority: Blocker
> Fix For: 2.1.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20788:
--
Component/s: documentation

> Write up a doc about how to rolling upgrade from 1.x to 2.x
> ---
>
> Key: HBASE-20788
> URL: https://issues.apache.org/jira/browse/HBASE-20788
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 2.1.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HBASE-20788:
-

Assignee: Duo Zhang

> Write up a doc about how to rolling upgrade from 1.x to 2.x
> ---
>
> Key: HBASE-20788
> URL: https://issues.apache.org/jira/browse/HBASE-20788
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.1.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20788) Write up a doc about how to rolling upgrade from 1.x to 2.x

2018-06-25 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-20788:
-

 Summary: Write up a doc about how to rolling upgrade from 1.x to 
2.x
 Key: HBASE-20788
 URL: https://issues.apache.org/jira/browse/HBASE-20788
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20777) RpcConnection could still remain opened after we shutdown the NettyRpcServer

2018-06-25 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523004#comment-16523004
 ] 

Duo Zhang commented on HBASE-20777:
---

TestAsyncTableBatch is fine now. Let me push this to all branches which have the 
netty rpc server.

But there is another problem

https://builds.apache.org/job/HBASE-Flaky-Tests/33682/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.replication.multiwal.TestReplicationKillMasterRSCompressedWithMultipleAsyncWAL-output.txt/*view*/

{noformat}
2018-06-25 16:36:04,306 DEBUG [master/asf911:0.Chore.1] 
client.ResultBoundedCompletionService(226): Replica 0 returns 
java.net.SocketTimeoutException: callTimeout=6, callDuration=68578: Call to 
asf911.gq1.ygridcore.net/67.195.81.155:55296 failed on connection exception: 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
 syscall:getsockopt(..) failed: Connection refused: 
asf911.gq1.ygridcore.net/67.195.81.155:55296 row '' on table 'hbase:meta' at 
region=hbase:meta,,1.1588230740, 
hostname=asf911.gq1.ygridcore.net,55296,1529944208029, seqNum=-1
java.net.SocketTimeoutException: callTimeout=6, callDuration=68578: Call to 
asf911.gq1.ygridcore.net/67.195.81.155:55296 failed on connection exception: 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
 syscall:getsockopt(..) failed: Connection refused: 
asf911.gq1.ygridcore.net/67.195.81.155:55296 row '' on table 'hbase:meta' at 
region=hbase:meta,,1.1588230740, 
hostname=asf911.gq1.ygridcore.net,55296,1529944208029, seqNum=-1
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:158)
at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Call to 
asf911.gq1.ygridcore.net/67.195.81.155:55296 failed on connection exception: 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
 syscall:getsockopt(..) failed: Connection refused: 
asf911.gq1.ygridcore.net/67.195.81.155:55296
at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:165)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:390)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
at 
org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:92)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:307)
at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1377)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:929)
at 
org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:179)
at 
org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$500(NettyRpcConnection.java:71)
at 
org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:267)
at 
org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:261)
at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(Default

[jira] [Commented] (HBASE-20631) B&R: Merge command enhancements

2018-06-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523002#comment-16523002
 ] 

Hadoop QA commented on HBASE-20631:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
40s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} hbase-backup: The patch generated 1 new + 3 unchanged 
- 0 fixed = 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 20s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
27s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20631 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929104/HBASE-20631-v3.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e772cee1a6f8 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4ba6242a62 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13384/artifact/patchprocess/diff-checkstyle-hbase-backup.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13384/testReport/ |
| Max. process+thread count | 4196 (vs. ulimit of 1) |
| modules | C: hbase-backup U: hbase-backup |
| Console output | 
https://builds

[jira] [Commented] (HBASE-20701) too much logging when balancer runs from BaseLoadBalancer

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522999#comment-16522999
 ] 

Hudson commented on HBASE-20701:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #424 (See 
[https://builds.apache.org/job/HBase-1.3-IT/424/])
HBASE-20701 too much logging when balancer runs from BaseLoadBalancer 
(apurtell: rev 3a5d00398df14a10dcb1a95df0c6c68a2bafd8e9)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java


> too much logging when balancer runs from BaseLoadBalancer
> -
>
> Key: HBASE-20701
> URL: https://issues.apache.org/jira/browse/HBASE-20701
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Monani Mihir
>Assignee: Monani Mihir
>Priority: Trivial
> Fix For: 1.5.0, 1.3.3, 1.4.6
>
> Attachments: HBASE-20701-branch-1.3.patch, 
> HBASE-20701-branch-1.3.patch, HBASE-20701-branch-1.3.patch, 
> HBASE-20701-branch-1.4.patch, HBASE-20701-branch-1.4.patch, 
> HBASE-20701.branch-1.001.patch
>
>
> When the balancer runs, it tries to find the least loaded server with better 
> locality for the current region. During this, we emit a debug-level log line 
> for each of those regions. That is too much logging at debug level; we should 
> move this to trace-level logging.
> {code:java}
> int getLeastLoadedTopServerForRegion(int region, int currentServer) {
>   ...
>   if (leastLoadedServerIndex != -1) {
>     LOG.debug("Pick the least loaded server "
>         + servers[leastLoadedServerIndex].getHostname()
>         + " with better locality for region " + regions[region]);
>   }
>   ...
> }
> {code}
> This was fixed in branch-2.0 as part of -HBASE-14614-  
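
A minimal sketch of the suggested change, reusing the {{LOG}}, {{servers}}, and 
{{regions}} references from the snippet quoted above (illustrative only, not the 
attached patches): demote the statement to trace and guard it so the message is 
not even built unless trace is enabled.

{code:java}
if (leastLoadedServerIndex != -1 && LOG.isTraceEnabled()) {
  LOG.trace("Pick the least loaded server "
      + servers[leastLoadedServerIndex].getHostname()
      + " with better locality for region " + regions[region]);
}
{code}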



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20701) too much logging when balancer runs from BaseLoadBalancer

2018-06-25 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20701:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> too much logging when balancer runs from BaseLoadBalancer
> -
>
> Key: HBASE-20701
> URL: https://issues.apache.org/jira/browse/HBASE-20701
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Monani Mihir
>Assignee: Monani Mihir
>Priority: Trivial
> Fix For: 1.5.0, 1.3.3, 1.4.6
>
> Attachments: HBASE-20701-branch-1.3.patch, 
> HBASE-20701-branch-1.3.patch, HBASE-20701-branch-1.3.patch, 
> HBASE-20701-branch-1.4.patch, HBASE-20701-branch-1.4.patch, 
> HBASE-20701.branch-1.001.patch
>
>
> When the balancer runs, it tries to find the least loaded server with better 
> locality for the current region. During this, we emit a debug-level log line 
> for each of those regions. That is too much logging at debug level; we should 
> move this to trace-level logging.
> {code:java}
> int getLeastLoadedTopServerForRegion(int region, int currentServer) {
>   ...
>   if (leastLoadedServerIndex != -1) {
>     LOG.debug("Pick the least loaded server "
>         + servers[leastLoadedServerIndex].getHostname()
>         + " with better locality for region " + regions[region]);
>   }
>   ...
> }
> {code}
> This was fixed in branch-2.0 as part of -HBASE-14614-  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20787) Rebase the HBASE-18477 onto the current master to continue dev

2018-06-25 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522979#comment-16522979
 ] 

Zach York commented on HBASE-20787:
---

I will also remove the various commits/reverts of the initial patch to simplify 
things.

> Rebase the HBASE-18477 onto the current master to continue dev
> --
>
> Key: HBASE-20787
> URL: https://issues.apache.org/jira/browse/HBASE-20787
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
>Priority: Minor
> Fix For: HBASE-18477
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20787) Rebase the HBASE-18477 onto the current master to continue dev

2018-06-25 Thread Zach York (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-20787:
--
Issue Type: Sub-task  (was: Task)
Parent: HBASE-18477

> Rebase the HBASE-18477 onto the current master to continue dev
> --
>
> Key: HBASE-20787
> URL: https://issues.apache.org/jira/browse/HBASE-20787
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: HBASE-18477
>Reporter: Zach York
>Assignee: Zach York
>Priority: Minor
> Fix For: HBASE-18477
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20787) Rebase the HBASE-18477 onto the current master to continue dev

2018-06-25 Thread Zach York (JIRA)
Zach York created HBASE-20787:
-

 Summary: Rebase the HBASE-18477 onto the current master to 
continue dev
 Key: HBASE-20787
 URL: https://issues.apache.org/jira/browse/HBASE-20787
 Project: HBase
  Issue Type: Task
Affects Versions: HBASE-18477
Reporter: Zach York
Assignee: Zach York
 Fix For: HBASE-18477






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20780) ServerRpcConnection logging cleanup

2018-06-25 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20780:
--
   Resolution: Fixed
Fix Version/s: 2.0.2
   2.1.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Pushed logging change to branch-2.0+

> ServerRpcConnection logging cleanup
> ---
>
> Key: HBASE-20780
> URL: https://issues.apache.org/jira/browse/HBASE-20780
> Project: HBase
>  Issue Type: Sub-task
>  Components: logging, Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: 2.0621.2.12782.lock.svg, 2.0621.2M.46340.lock.svg, 
> 2.0623.111354.lock.svg, HBASE-20780.branch-2.0.001.patch, 
> HBASE-20780.branch-2.0.001.patch
>
>
> The logging we do inside connection header parsing shows up as the worst 
> offender in the perf locking profiles. It's odd, but easy to clean up. I doubt 
> it makes any difference in throughput, but let's get it out of the way. Let me 
> load up a few samples of what it currently looks like.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20786) Table create with thousands of regions takes too long

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522942#comment-16522942
 ] 

stack commented on HBASE-20786:
---

oh... nvm. Details have log-rolled away. Will try and run a new create... will 
be back.

> Table create with thousands of regions takes too long
> -
>
> Key: HBASE-20786
> URL: https://issues.apache.org/jira/browse/HBASE-20786
> Project: HBase
>  Issue Type: Umbrella
>  Components: Performance
>Reporter: stack
>Priority: Major
>
> In internal testing, creating a table with 33k regions took 18 minutes. 
> Let me provide more info below. We have an executor with a default of ten 
> threads handling the creation of the regions in HDFS, which helps distribute 
> the load, but it's not enough. This cluster had >600 servers. Let me add 
> detail.
> We need to spend some time on speeding up create/assigns. Made this an 
> umbrella issue so we can pick off pieces of the problem as subtasks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20786) Table create with thousands of regions takes too long

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522941#comment-16522941
 ] 

stack commented on HBASE-20786:
---

bq. Does all the creation happen on master?

Yes.

bq. Can it be farmed out to region servers?

RSs are participating in region deploy. The creation of regions is bottlenecked 
on back and forth w/ the NN. The Master has an executor running so it can do a 
bunch of creates against the NN at a time. We could farm this out, but the RSs 
would just be asking the NN instead.

bq.  Is this a result of all splits needing to go through master becoming a 
bottleneck?

The Master is running the process so is likely the bottleneck.

But we've not spent time on this. Let's dig in on where we are bottlenecking. 
Let me add some more notes.
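As a rough illustration of the executor-against-the-NameNode pattern described 
above (a sketch only, using Hadoop's FileSystem API; the class and method names 
are invented, this is not the Master's actual code):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.RegionInfo;

public class RegionDirCreationSketch {
  // Fan the per-region directory creation out over a fixed-size pool so the
  // NameNode round trips overlap instead of running serially.
  static void createRegionDirs(FileSystem fs, Path tableDir, List<RegionInfo> regions,
      int threads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<?>> futures = new ArrayList<>();
      for (RegionInfo region : regions) {
        futures.add(pool.submit(() -> {
          // One NameNode round trip per region.
          fs.mkdirs(new Path(tableDir, region.getEncodedName()));
          return null;
        }));
      }
      for (Future<?> f : futures) {
        f.get(); // surface any failure
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}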

> Table create with thousands of regions takes too long
> -
>
> Key: HBASE-20786
> URL: https://issues.apache.org/jira/browse/HBASE-20786
> Project: HBase
>  Issue Type: Umbrella
>  Components: Performance
>Reporter: stack
>Priority: Major
>
> Internal testing has creation of a table with 33k regions taking 18 minutes. 
> Let me provide more info below. We have an executor with a default of ten 
> threads handling the creation of the regions in HDFS, which helps distribute 
> the load, but it's not enough. This cluster had >600 servers. Let me add detail.
> Need to spend some time on speeding up creates/assigns. Made this an umbrella 
> issue so we can pick off pieces of the problem as subtasks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Meta query statistics metrics source

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522940#comment-16522940
 ] 

stack commented on HBASE-19722:
---

[~xucang] Needs a nice release note on how to deploy and some output on what 
gets reported to get folks excited about this nice new facility.

> Meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch, 
> HBASE-19722.master.014.patch, HBASE-19722.master.015.patch, 
> HBASE-19722.master.016.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.
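As a rough sketch of the coprocessor approach the description mentions (this is 
not the attached patch; it assumes the HBase 2.x coprocessor API and only 
tallies meta rowkeys in a map rather than publishing a real metrics source):

{code:java}
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaQueryStatsSketch implements RegionCoprocessor, RegionObserver {

  private final Map<String, LongAdder> rowKeyCounts = new ConcurrentHashMap<>();

  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  @Override
  public void preGetOp(ObserverContext<RegionCoprocessorEnvironment> ctx, Get get,
      List<Cell> results) throws IOException {
    // Tally the requested meta row; a real metrics source would publish these
    // counters instead of keeping them in a map.
    String row = Bytes.toStringBinary(get.getRow());
    rowKeyCounts.computeIfAbsent(row, k -> new LongAdder()).increment();
  }
}
{code}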



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Meta query statistics metrics source

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522939#comment-16522939
 ] 

stack commented on HBASE-19722:
---

[~apurtell] in hbase-2.0 too please.

> Meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch, 
> HBASE-19722.master.014.patch, HBASE-19722.master.015.patch, 
> HBASE-19722.master.016.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20785) NPE getting metrics in PE testing scans

2018-06-25 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20785:
--
Attachment: HBASE-20785.branch-1.4.001.patch

> NPE getting metrics in PE testing scans
> ---
>
> Key: HBASE-20785
> URL: https://issues.apache.org/jira/browse/HBASE-20785
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 1.4.4
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20785.branch-1.4.001.patch, 
> HBASE-20785.branch-1.4.001.patch
>
>
> Comparing scans using PE. In branch-1 at least, I was getting an NPE when we 
> tried to use a null metrics instance. Seems transient around startup. 
> One-liner patch coming.
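As a hedge on what such a one-liner might look like (purely illustrative, with 
placeholder types rather than real HBase classes), a null guard around the 
metrics update:

{code:java}
// Placeholder types; illustrates the shape of the fix only.
interface ScanMetricsSink {
  void updateScanSize(long bytes);
}

class ScanMetricsGuardSketch {
  static void recordScanSize(ScanMetricsSink metrics, long bytes) {
    if (metrics != null) { // the metrics instance can be transiently null around startup
      metrics.updateScanSize(bytes);
    }
  }
}
{code}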



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20631) B&R: Merge command enhancements

2018-06-25 Thread Vladimir Rodionov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-20631:
--
Attachment: HBASE-20631-v3.patch

> B&R: Merge command enhancements 
> 
>
> Key: HBASE-20631
> URL: https://issues.apache.org/jira/browse/HBASE-20631
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-20631-v1.patch, HBASE-20631-v2.patch, 
> HBASE-20631-v3.patch
>
>
> Currently, merge supports only a list of backup ids, which users must provide. 
> Date-range merges seem more convenient for users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20781) Save recalculating families in a WALEdit batch of Cells

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522909#comment-16522909
 ] 

stack commented on HBASE-20781:
---

 [^HBASE-20781.branch-2.0.001.patch] accumulates families in the WALEdit even 
if a CP adds Cells that have families outside of those in the current 
transaction set. Also added some cleanup of WALEdit (it could do w/ more) and 
removed some duplication of WALEdit methods.

> Save recalculating families in a WALEdit batch of Cells
> ---
>
> Key: HBASE-20781
> URL: https://issues.apache.org/jira/browse/HBASE-20781
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: 2.0621.2.12782.cpu.svg, 2.0621.256k.51351.cpu.svg, 
> 2.0623.families.121250.cpu.svg, HBASE-20781.branch-2.0.001.patch, 
> HBASE-20781.master.001.patch
>
>
> Doing a doMiniBatchMutate, we calculate the set of families that the WALEdit 
> Cells belong to up front, but later, after the RingBuffer, when we make an 
> FSWALEdit, we spin through all the Cells again to figure out the set of 
> families in a particularly painful way. Just pass the calculated family set in 
> the WALEdit.
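A sketch of the idea only, not the patch itself (the class and field names are 
invented): track the family set as cells are added so nothing has to re-walk 
the cell list after the RingBuffer to recompute it.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilyTrackingEditSketch {
  private final List<Cell> cells = new ArrayList<>();
  private final Set<byte[]> families = new TreeSet<>(Bytes.BYTES_COMPARATOR);

  public void add(Cell cell) {
    // Accumulate the family at add time, including cells added by coprocessors,
    // so the set always matches the cell list.
    families.add(CellUtil.cloneFamily(cell));
    cells.add(cell);
  }

  public Set<byte[]> getFamilies() {
    return Collections.unmodifiableSet(families);
  }
}
{code}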



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20781) Save recalculating families in a WALEdit batch of Cells

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522907#comment-16522907
 ] 

stack commented on HBASE-20781:
---

So, FSWALEntry should be good but perhaps you are worried about WALEdit 
[~Apache9]. Yes, CPs can add edits for inclusion in a WALEdit and they *could* 
be edits that have a Column Family outside of the current transaction Set.

Let me accommodate this inside the hack by making it so we auto-accumulate 
families even in the custom add done for CPs.

Related, there is HBASE-19134, Make WALKey an Interface; expose Read-Only 
version to CPs, which is in 2.0.0. WALKey is read-only. CPs cannot #add 
directly to WALEdit which means we should be good with our internal handling of 
adds to WALEdit.

> Save recalculating families in a WALEdit batch of Cells
> ---
>
> Key: HBASE-20781
> URL: https://issues.apache.org/jira/browse/HBASE-20781
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: 2.0621.2.12782.cpu.svg, 2.0621.256k.51351.cpu.svg, 
> 2.0623.families.121250.cpu.svg, HBASE-20781.branch-2.0.001.patch, 
> HBASE-20781.master.001.patch
>
>
> Doing a doMiniBatchMutate, we calculate the set of families that the WALEdit 
> Cells belong to up front, but later, after the RingBuffer, when we make an 
> FSWALEdit, we spin through all the Cells again to figure out the set of 
> families in a particularly painful way. Just pass the calculated family set in 
> the WALEdit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20781) Save recalculating families in a WALEdit batch of Cells

2018-06-25 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20781:
--
Attachment: HBASE-20781.branch-2.0.001.patch

> Save recalculating families in a WALEdit batch of Cells
> ---
>
> Key: HBASE-20781
> URL: https://issues.apache.org/jira/browse/HBASE-20781
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: 2.0621.2.12782.cpu.svg, 2.0621.256k.51351.cpu.svg, 
> 2.0623.families.121250.cpu.svg, HBASE-20781.branch-2.0.001.patch, 
> HBASE-20781.master.001.patch
>
>
> Doing a doMiniBatchMutate, we calculate the set of families that the WALEdit 
> Cells belong to up front, but later, after the RingBuffer, when we make an 
> FSWALEdit, we spin through all the Cells again to figure out the set of 
> families in a particularly painful way. Just pass the calculated family set in 
> the WALEdit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20785) NPE getting metrics in PE testing scans

2018-06-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522848#comment-16522848
 ] 

Hadoop QA commented on HBASE-20785:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
59s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
41s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
15s{color} | {color:red} hbase-server: The patch generated 1 new + 30 unchanged 
- 0 fixed = 31 total (was 30) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
23s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 12s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.mapreduce.TestLoadIncrementalHFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:4b0a87a |
| JIRA Issue | HBASE-20785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929077/HBASE-20785.branch-1.4.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbasean

[jira] [Updated] (HBASE-19722) Meta query statistics metrics source

2018-06-25 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19722:
---
Summary: Meta query statistics metrics source  (was: Implement a meta query 
statistics metrics source)

> Meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch, 
> HBASE-19722.master.014.patch, HBASE-19722.master.015.patch, 
> HBASE-19722.master.016.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Implement a meta query statistics metrics source

2018-06-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522822#comment-16522822
 ] 

Andrew Purtell commented on HBASE-19722:


Ok, let's bring this one home. Committing shortly.

> Implement a meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-19722.branch-1.v001.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch, 
> HBASE-19722.master.014.patch, HBASE-19722.master.015.patch, 
> HBASE-19722.master.016.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname. 
> Can be implemented as a coprocessor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20786) Table create with thousands of regions takes too long

2018-06-25 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522820#comment-16522820
 ] 

Mike Drob commented on HBASE-20786:
---

Does all the creation happen on master? Can it be farmed out to region servers? 
Is this a result of all splits needing to go through master becoming a 
bottleneck?

> Table create with thousands of regions takes too long
> -
>
> Key: HBASE-20786
> URL: https://issues.apache.org/jira/browse/HBASE-20786
> Project: HBase
>  Issue Type: Umbrella
>  Components: Performance
>Reporter: stack
>Priority: Major
>
> Internal testing has creation of a table with 33k regions taking 18 minutes. 
> Let me provide more info below. We have an executor with a default of ten 
> threads handling the creation of the regions in HDFS, which helps distribute 
> the load, but it's not enough. This cluster had >600 servers. Let me add detail.
> Need to spend some time on speeding up creates/assigns. Made this an umbrella 
> issue so we can pick off pieces of the problem as subtasks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20781) Save recalculating families in a WALEdit batch of Cells

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522804#comment-16522804
 ] 

stack commented on HBASE-20781:
---

FSWALEntry is @InterfaceAudience.Private

FSWALEntry is not passed to CPs anywhere that I can see; it's an internal object.

Therefore we should be fine with this savings.

Let me upload a new patch w/ some extra doc and a fix for checkstyle.

> Save recalculating families in a WALEdit batch of Cells
> ---
>
> Key: HBASE-20781
> URL: https://issues.apache.org/jira/browse/HBASE-20781
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: 2.0621.2.12782.cpu.svg, 2.0621.256k.51351.cpu.svg, 
> 2.0623.families.121250.cpu.svg, HBASE-20781.master.001.patch
>
>
> Doing a doMiniBatchMutate, we calculate the set of families that the WALEdit 
> Cells belong to up front, but later, after the RingBuffer, when we make an 
> FSWALEdit, we spin through all the Cells again to figure out the set of 
> families in a particularly painful way. Just pass the calculated family set in 
> the WALEdit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20786) Table create with thousands of regions takes too long

2018-06-25 Thread stack (JIRA)
stack created HBASE-20786:
-

 Summary: Table create with thousands of regions takes too long
 Key: HBASE-20786
 URL: https://issues.apache.org/jira/browse/HBASE-20786
 Project: HBase
  Issue Type: Umbrella
  Components: Performance
Reporter: stack


Internal testing has creation of a table with 33k regions taking 18 minutes. Let 
me provide more info below. We have an executor with a default of ten threads 
handling the creation of the regions in HDFS, which helps distribute the load, 
but it's not enough. This cluster had >600 servers. Let me add detail.

Need to spend some time on speeding up creates/assigns. Made this an umbrella 
issue so we can pick off pieces of the problem as subtasks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20781) Save recalculating families in a WALEdit batch of Cells

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522787#comment-16522787
 ] 

stack commented on HBASE-20781:
---

bq. IIRC the extra code which is used to construct the families for FSWALEntry 
is introduced by per column family flush. I can remember it because I'm sure 
that I also wanted to eliminate it in the first place but finally gave up. 
Maybe the problem is in CP we can add or remove cells in WALEdit? Not sure, 
long time ago...

Thanks [~Apache9]

Are you thinking we should keep this extra iteration? It should be fine if CPs add 
edits for families already in the WALEdit. If they want to add edits for column 
families not in the current batch, they can update the WALEdit Set of families too? 
Let me add a note. A note is not much by way of protection, but I think it is better 
to have CPs suffer than to pay this CPU on each and every edit?

> Save recalculating families in a WALEdit batch of Cells
> ---
>
> Key: HBASE-20781
> URL: https://issues.apache.org/jira/browse/HBASE-20781
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: 2.0621.2.12782.cpu.svg, 2.0621.256k.51351.cpu.svg, 
> 2.0623.families.121250.cpu.svg, HBASE-20781.master.001.patch
>
>
> Doing a doMiniBatchMutate, we calculate the set of families that the WALEdit 
> Cells belong to up front, but later, after the RingBuffer, when we make an 
> FSWALEdit, we spin through all the Cells again to figure out the set of 
> families in a particularly painful way. Just pass the calculated family set in 
> the WALEdit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522743#comment-16522743
 ] 

Andrew Purtell commented on HBASE-20403:


I don't believe branch-1 is affected because of locking in hfileblock. 

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if file is 
> encrypted.
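A minimal sketch of the first mitigation listed above (not the committed fix, 
which addressed the reader locking); the class and method names are invented:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrefetchGuardSketch {
  // Skip prefetch-on-open for files sitting in an HDFS encryption zone.
  static boolean shouldPrefetchOnOpen(Configuration conf, FileSystem fs, Path hfilePath)
      throws IOException {
    FileStatus status = fs.getFileStatus(hfilePath);
    return conf.getBoolean("hbase.rs.prefetchblocksonopen", false) && !status.isEncrypted();
  }
}
{code}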



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522725#comment-16522725
 ] 

Todd Lipcon commented on HBASE-20403:
-

I would guess it's not affected because it has locking in the file reader path. 
The locking was removed by HBASE-17917 in 2.0.

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522721#comment-16522721
 ] 

Mike Drob commented on HBASE-20403:
---

Is branch-1 affected by this also? I'd imagine yes, since I don't think the scan 
code has been rewritten, but I also would have expected to see this before if 
so.

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522717#comment-16522717
 ] 

Andrew Purtell commented on HBASE-20403:


Thanks for the fix and the commit [~tlipcon]

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20728) Failure and recovery of all RSes in a RSgroup requires master restart for region assignments

2018-06-25 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522714#comment-16522714
 ] 

Sakthi commented on HBASE-20728:


Sure [~apurtell]

> Failure and recovery of all RSes in a RSgroup requires master restart for 
> region assignments
> 
>
> Key: HBASE-20728
> URL: https://issues.apache.org/jira/browse/HBASE-20728
> Project: HBase
>  Issue Type: Bug
>  Components: master, rsgroup
>Reporter: Biju Nair
>Assignee: Sakthi
>Priority: Minor
>
> If all the RSes in a RSgroup hosting user tables fail and recover, the master 
> still looks for the old RSes (with the old timestamp in the RS identifier) to 
> assign regions, i.e. regions are left in transition, making the tables in the 
> RSGroup unavailable. Users need to restart the {{master}} or manually assign 
> the regions to make the tables available. Steps to recreate the scenario in a 
> local cluster:
>  - Add the required properties to {{site.xml}} to enable {{rsgroup}} and start 
> hbase
>  - Bring up multiple region servers using {{local-regionservers.sh start}}
>  - Create a {{rsgroup}} and move a subset of {{regionservers}} to the group
>  - Create a table, move it to the group and put some data
>  - Stop the {{regionservers}} in the group and restart them
>  - From the {{master UI}}, we can see that the region for the table is in 
> transition and the RS name in the {{RIT}} message has the old timestamp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20728) Failure and recovery of all RSes in a RSgroup requires master restart for region assignments

2018-06-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522713#comment-16522713
 ] 

Andrew Purtell commented on HBASE-20728:


[~jatsakthi] Please go ahead. It looks ok to work on this.

> Failure and recovery of all RSes in a RSgroup requires master restart for 
> region assignments
> 
>
> Key: HBASE-20728
> URL: https://issues.apache.org/jira/browse/HBASE-20728
> Project: HBase
>  Issue Type: Bug
>  Components: master, rsgroup
>Reporter: Biju Nair
>Assignee: Sakthi
>Priority: Minor
>
> If all the RSes in a RSgroup hosting user tables fail and recover, the master 
> still looks for the old RSes (with the old timestamp in the RS identifier) to 
> assign regions, i.e. regions are left in transition, making the tables in the 
> RSGroup unavailable. Users need to restart the {{master}} or manually assign 
> the regions to make the tables available. Steps to recreate the scenario in a 
> local cluster:
>  - Add the required properties to {{site.xml}} to enable {{rsgroup}} and start 
> hbase
>  - Bring up multiple region servers using {{local-regionservers.sh start}}
>  - Create a {{rsgroup}} and move a subset of {{regionservers}} to the group
>  - Create a table, move it to the group and put some data
>  - Stop the {{regionservers}} in the group and restart them
>  - From the {{master UI}}, we can see that the region for the table is in 
> transition and the RS name in the {{RIT}} message has the old timestamp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Todd Lipcon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HBASE-20403:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.2
   2.1.0
   Status: Resolved  (was: Patch Available)

Committed to master, branch-2, branch-2.1, branch-2.0. Appears my commit access 
still works after 6 years! Thanks for the review.

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * if file is encrypted with FileStatus#isEncrypted() and do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20306) LoadTestTool does not print summary at end of run

2018-06-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522701#comment-16522701
 ] 

Andrew Purtell commented on HBASE-20306:


Any progress here?

> LoadTestTool does not print summary at end of run
> -
>
> Key: HBASE-20306
> URL: https://issues.apache.org/jira/browse/HBASE-20306
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Mike Drob
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: beginner
>
> ltt currently prints status as it goes, but doesn't give a nice summary of 
> what happened so users have to infer it from the last status line printed.
> Would be nice to print a real summary with statistics about what was run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20704) Sometimes some compacted storefiles are not archived on region close

2018-06-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522698#comment-16522698
 ] 

Andrew Purtell edited comment on HBASE-20704 at 6/25/18 7:13 PM:
-

bq. For 1.x in the small window the region hasn't be set a closed by the RS yet 
the client will get an NPE when the scan tries to access the fs stream. The 
client will retry.

My opinion is an NPE is always a bug. Ideally we do something more graceful. 
Even simply substituting an IOException with a suitably descriptive message would 
be better than having NullPointerException appear in a log somewhere. Guaranteed 
to draw attention/concern.


was (Author: apurtell):
bq. For 1.x in the small window the region hasn't be set a closed by the RS yet 
the client will get an NPE when the scan tries to access the fs stream. The 
client will retry.

My opinion is a NPE is always a bug. Ideally we do something more graceful than 
propagate it to the client for a retry. Even simply substituting an IOException 
(with suitably descriptive message) would be better.

> Sometimes some compacted storefiles are not archived on region close
> 
>
> Key: HBASE-20704
> URL: https://issues.apache.org/jira/browse/HBASE-20704
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0
>Reporter: Francis Liu
>Assignee: Francis Liu
>Priority: Critical
> Attachments: HBASE-20704.001.patch, HBASE-20704.002.patch
>
>
> During region close, compacted files which have not yet been archived by the 
> discharger are archived as part of the region closing process. It is 
> important that all of these files are archived to ensure data consistency: 
> a storefile containing delete tombstones can be archived while older 
> storefiles containing cells that were supposed to be deleted are left 
> unarchived, thereby undeleting those cells. 
> On region close a compacted storefile is skipped from archiving if it has 
> read references (i.e. open scanners). This behavior is correct for when the 
> discharger chore runs, but on region close consistency is of course more 
> important, so we should add a special case to ignore any references on the 
> storefile and go ahead and archive it. 
> The attached patch contains a unit test that reproduces the problem and the 
> proposed fix.
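A sketch of the special case described above (not the attached patch; the 
CompactedFile type and its methods are placeholders, not real HBase classes):

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class ArchiveOnCloseSketch {

  interface CompactedFile {
    boolean isReferencedInReads(); // still has open scanners
  }

  static List<CompactedFile> selectFilesToArchive(Collection<CompactedFile> compactedFiles,
      boolean regionClosing) {
    List<CompactedFile> toArchive = new ArrayList<>();
    for (CompactedFile file : compactedFiles) {
      // The discharger chore skips files with live readers; on region close we
      // must archive everything so deletes stay consistent.
      if (regionClosing || !file.isReferencedInReads()) {
        toArchive.add(file);
      }
    }
    return toArchive;
  }
}
{code}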



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-25 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20770:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch-2.0+. Thanks for review [~reidchan]

> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.2
>
> Attachments: HBASE-20770.branch-2.0.001.patch
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on a large cluster, the 
> Master log is a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging 
> minutiae rather than a call on what's actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/6043fce5761e4479819b15405183f193
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta/69e6bf4650124859b2bc7ddf134be642
> 2018-06-21 01:19:13,011 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c

[jira] [Commented] (HBASE-20704) Sometimes some compacted storefiles are not archived on region close

2018-06-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522698#comment-16522698
 ] 

Andrew Purtell commented on HBASE-20704:


bq. For 1.x in the small window the region hasn't be set a closed by the RS yet 
the client will get an NPE when the scan tries to access the fs stream. The 
client will retry.

My opinion is a NPE is always a bug. Ideally we do something more graceful than 
propagate it to the client for a retry. Even simply substituting an IOException 
(with suitably descriptive message) would be better.

> Sometimes some compacted storefiles are not archived on region close
> 
>
> Key: HBASE-20704
> URL: https://issues.apache.org/jira/browse/HBASE-20704
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0
>Reporter: Francis Liu
>Assignee: Francis Liu
>Priority: Critical
> Attachments: HBASE-20704.001.patch, HBASE-20704.002.patch
>
>
> During region close, compacted files which have not yet been archived by the 
> discharger are archived as part of the region closing process. It is 
> important that all of these files are archived to ensure data consistency: 
> a storefile containing delete tombstones can be archived while older 
> storefiles containing cells that were supposed to be deleted are left 
> unarchived, thereby undeleting those cells. 
> On region close a compacted storefile is skipped from archiving if it has 
> read references (i.e. open scanners). This behavior is correct for when the 
> discharger chore runs, but on region close consistency is of course more 
> important, so we should add a special case to ignore any references on the 
> storefile and go ahead and archive it. 
> The attached patch contains a unit test that reproduces the problem and the 
> proposed fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20785) NPE getting metrics in PE testing scans

2018-06-25 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20785:
--
Status: Patch Available  (was: Open)

> NPE getting metrics in PE testing scans
> ---
>
> Key: HBASE-20785
> URL: https://issues.apache.org/jira/browse/HBASE-20785
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 1.4.4
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20785.branch-1.4.001.patch
>
>
> Comparing scans using PE. In branch-1 at least, I was getting an NPE when we 
> tried to use a null metrics instance. Seems transient around startup. 
> One-liner patch coming.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20785) NPE getting metrics in PE testing scans

2018-06-25 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20785:
--
Attachment: HBASE-20785.branch-1.4.001.patch

> NPE getting metrics in PE testing scans
> ---
>
> Key: HBASE-20785
> URL: https://issues.apache.org/jira/browse/HBASE-20785
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 1.4.4
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-20785.branch-1.4.001.patch
>
>
> Comparing scans using PE. In branch-1 at least, I was getting an NPE when we 
> tried to use a null metrics instance. Seems transient around startup. 
> One-liner patch coming.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20701) too much logging when balancer runs from BaseLoadBalancer

2018-06-25 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20701:
---
Fix Version/s: 1.4.6
   1.3.3
   1.5.0

+1

> too much logging when balancer runs from BaseLoadBalancer
> -
>
> Key: HBASE-20701
> URL: https://issues.apache.org/jira/browse/HBASE-20701
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Monani Mihir
>Assignee: Monani Mihir
>Priority: Trivial
> Fix For: 1.5.0, 1.3.3, 1.4.6
>
> Attachments: HBASE-20701-branch-1.3.patch, 
> HBASE-20701-branch-1.3.patch, HBASE-20701-branch-1.3.patch, 
> HBASE-20701-branch-1.4.patch, HBASE-20701-branch-1.4.patch, 
> HBASE-20701.branch-1.001.patch
>
>
> When the balancer runs, it tries to find the least loaded server with better 
> locality for the current region. During this, we emit debug-level logging for 
> each of those regions. It creates too much logging at debug level; we 
> should move this to trace-level logging.
> {code:java}
> int getLeastLoadedTopServerForRegion(int region, int currentServer) {
>   ...
>   if (leastLoadedServerIndex != -1) {
>     LOG.debug("Pick the least loaded server " + servers[leastLoadedServerIndex].getHostname()
>         + " with better locality for region " + regions[region]);
>   }
>   ...
> }
> {code}
> This was fixed in branch-2.0 as part of -HBASE-14614-  
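
A minimal sketch of the intended change: the same message logged at trace level behind an isTraceEnabled() guard. The slf4j logger is used only to keep the example self-contained; branch-1 itself uses commons-logging's Log, which offers the equivalent isTraceEnabled()/trace() calls.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: same message as above, downgraded from debug to trace and
// guarded so the string is not even built unless trace logging is enabled.
public class BalancerTraceLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(BalancerTraceLoggingSketch.class);

  static void logLeastLoadedPick(String hostname, String regionName) {
    if (LOG.isTraceEnabled()) {
      LOG.trace("Pick the least loaded server " + hostname
          + " with better locality for region " + regionName);
    }
  }
}
{code}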



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20785) NPE getting metrics in PE testing scans

2018-06-25 Thread stack (JIRA)
stack created HBASE-20785:
-

 Summary: NPE getting metrics in PE testing scans
 Key: HBASE-20785
 URL: https://issues.apache.org/jira/browse/HBASE-20785
 Project: HBase
  Issue Type: Bug
  Components: Performance
Affects Versions: 1.4.4
Reporter: stack
Assignee: stack


Comparing scans using PE. In branch-1 at least, I was getting an NPE when we 
tried to use a null metrics instance. Seems transient around startup. One-liner 
patch coming.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522693#comment-16522693
 ] 

Mike Drob commented on HBASE-20403:
---

bq. Got a report back from an internal test cluster that was previously 
reproducing this issue. With this patch applied the issue seems to be resolved.
Sounds good, you still have commit access, right? ;)

+1

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> The log from a long-running test shows the following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get messed up due to encryption. Possible 
> fixes:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.
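
A minimal sketch of the first option, assuming the prefetch trigger can look up the file's FileStatus; the class and method names are illustrative, not the actual HFile reader code.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical guard: skip prefetch-on-open for encrypted files, where the
// size-on-disk assumptions behind prefetch do not hold.
public class PrefetchGuardSketch {
  static boolean shouldPrefetch(FileSystem fs, Path hfilePath) throws IOException {
    FileStatus status = fs.getFileStatus(hfilePath);
    return !status.isEncrypted();
  }
}
{code}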



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20666) Unsuccessful table creation leaves entry in rsgroup meta table

2018-06-25 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522691#comment-16522691
 ] 

Andrew Purtell commented on HBASE-20666:


[~nihaljain.cs] I think it's ok to proceed

> Unsuccessful table creation leaves entry in rsgroup meta table
> --
>
> Key: HBASE-20666
> URL: https://issues.apache.org/jira/browse/HBASE-20666
> Project: HBase
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Minor
>
> If a table creation fails in a cluster with the {{rsgroup}} feature enabled, 
> the table is still listed as part of the {{default}} rsgroup.
> To recreate the scenario:
> - Create a namespace (NS) with a limit on the number of regions
> - Create a table in the NS that satisfies the region limit by pre-splitting
> - Create another table in the NS, which will fail because the limit is exceeded
> - {{list_rsgroup}} will show the failed table as part of the {{default}} 
> rsgroup, and its data can be found in the {{hbase:rsgroup}} table
> It would be good to revert the entry when table creation fails, or to provide 
> a script to clean up the metadata.
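
A rough sketch of those recreation steps using the Java Admin API. The namespace quota key value and split points are assumptions for illustration, not values taken from the report.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative reproduction of the "leftover rsgroup entry" scenario.
public class RsGroupQuotaRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // 1. Namespace capped at 4 regions (quota key assumed for illustration).
      admin.createNamespace(NamespaceDescriptor.create("ns_quota")
          .addConfiguration("hbase.namespace.quota.maxregions", "4").build());

      // 2. First table consumes the quota via pre-splitting (3 split points -> 4 regions).
      HTableDescriptor ok = new HTableDescriptor(TableName.valueOf("ns_quota:t1"));
      ok.addFamily(new HColumnDescriptor("f"));
      admin.createTable(ok,
          new byte[][] {Bytes.toBytes("b"), Bytes.toBytes("c"), Bytes.toBytes("d")});

      // 3. Second table exceeds the quota and fails, yet (per the report) an entry
      //    for it can still show up in the default rsgroup / hbase:rsgroup table.
      HTableDescriptor bad = new HTableDescriptor(TableName.valueOf("ns_quota:t2"));
      bad.addFamily(new HColumnDescriptor("f"));
      admin.createTable(bad);
    }
  }
}
{code}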



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-25 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522682#comment-16522682
 ] 

Todd Lipcon commented on HBASE-20403:
-

Got a report back from an internal test cluster that was previously reproducing 
this issue. With this patch applied the issue seems to be resolved.

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> The log from a long-running test shows the following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get messed up due to encryption. Possible 
> fixes:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20763) Update guava >=24.1.1

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522674#comment-16522674
 ] 

stack commented on HBASE-20763:
---

Yeah. +1. Let's see if we can update other stuff... For 2.1.0?

pb is now 3.6 (we are at 3.3.4). Some of the changes look helpful: e.g.

3.6.0 "Added a UTF-8 decoder that uses Unsafe to directly decode a byte buffer."
3.4.0 "Optimized CodedInputStream to do less copies when parsing large bytes 
fields."

Netty is now 4.1.25 (we are 4.1.12).



> Update guava >=24.1.1
> -
>
> Key: HBASE-20763
> URL: https://issues.apache.org/jira/browse/HBASE-20763
> Project: HBase
>  Issue Type: Task
>  Components: thirdparty
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: thirdparty-2.2.0
>
> Attachments: HBASE-20763.001.patch
>
>
> We should update Guava in hbase-thirdparty to stop shipping the code cited as 
> vulnerable in CVE-2018-10237. We do not invoke this code ourselves and users 
> would have to try pretty hard to use it themselves, but we've seen stranger 
> things before ;)
> Let's just bump up the dependency and move on.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20553) Add dependency CVE checking to nightly tests

2018-06-25 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522665#comment-16522665
 ] 

Sakthi commented on HBASE-20553:


Sure. Makes sense.

> Add dependency CVE checking to nightly tests
> 
>
> Key: HBASE-20553
> URL: https://issues.apache.org/jira/browse/HBASE-20553
> Project: HBase
>  Issue Type: Umbrella
>  Components: dependencies
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
>
> We should proactively work to flag dependencies with known CVEs so that we 
> can then update them early in our development instead of near a release.
> YETUS-441 is working to add a plugin for this, we should grab a copy early to 
> make sure it works for us.
> Rough outline:
> 1. [install yetus locally|http://yetus.apache.org/downloads/]
> 2. [install the dependency-check 
> cli|https://www.owasp.org/index.php/OWASP_Dependency_Check] (homebrew 
> instructions on right hand margin)
> 3. Get a local copy of the OWASP datafile ({{dependency-check --updateonly 
> --data /some/local/path/to/dir}})
> 4. Run {{hbase_nightly_yetus.sh}} using matching environment variables from 
> the “yetus general check” (currently [line #126 in our nightly 
> Jenkinsfile|https://github.com/apache/hbase/blob/master/dev-support/Jenkinsfile#L126])
> 5. Grab the plugin definition and suppression file from YETUS-441
> 6. Put the plugin definition either in a directory under dev-support or 
> directly into hbase-personality.sh
> 7. Re-run {{hbase_nightly_yetus.sh}} to verify that the plugin results show 
> up. (Probably this will involve adding new pointers for “where is the 
> suppression file”, “where is the OWASP datafile” and pointing them somewhere 
> locally.)
> Once all of that is in place we’ll get the changes needed into a branch that 
> we can test out. Over in YETUS-441 I’ll need to add a jenkins job that’ll 
> handle periodically updating a copy of the datafile for the OWASP dependency 
> checker. Presuming I have that in place by the time we have a nightly branch 
> to check this out, then we’ll also need to update our nightly Jenkinsfile to 
> fetch the data file from that job.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19997) [rolling upgrade] 1.x => 2.x

2018-06-25 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522663#comment-16522663
 ] 

stack commented on HBASE-19997:
---

Wow! That is great news. The zkless assignment in the hbase1 master supports 
hbase2 RS? That's great (I'm surprised, but in the best way -- smile). So, an 
hbase1 Master can assign regions to an hbase2 RS? Yeah, a write-up... even if 
it was just a few lines so folks can try it, would be sweet. As others try it 
we can fill it out more.

This is good news.

> [rolling upgrade] 1.x => 2.x
> 
>
> Key: HBASE-19997
> URL: https://issues.apache.org/jira/browse/HBASE-19997
> Project: HBase
>  Issue Type: Umbrella
>Reporter: stack
>Priority: Blocker
> Fix For: 2.1.0
>
> Attachments: Screenshot from 2018-05-03 14-43-46.png
>
>
> An umbrella issue of issues needed so folks can do a rolling upgrade from 
> hbase-1.x to hbase-2.x.
> (Recent) Notables:
>  * hbase-1.x can't read hbase-2.x WALs -- hbase-1.x doesn't know the 
> AsyncProtobufLogWriter class used writing the WAL -- see 
> https://issues.apache.org/jira/browse/HBASE-19166?focusedCommentId=16362897&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16362897
>  for exception.
>  ** Might be ok... means WAL split fails on an hbase1 RS... must wait till an 
> hbase-2.x RS picks up the WAL for it to be split.
>  * hbase-1 can't open regions from tables created by hbase-2; it can't find 
> the Table descriptor. See 
> https://issues.apache.org/jira/browse/HBASE-19116?focusedCommentId=16363276&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16363276
>  ** This might be ok if the tables we are doing rolling upgrade over were 
> written with hbase-1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-18366) Fix flaky test hbase.master.procedure.TestServerCrashProcedure#testRecoveryAndDoubleExecutionOnRsWithMeta

2018-06-25 Thread Umesh Agashe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-18366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe resolved HBASE-18366.
--
Resolution: Not A Problem

Not flaky anymore. Fixed by other JIRAs.

> Fix flaky test 
> hbase.master.procedure.TestServerCrashProcedure#testRecoveryAndDoubleExecutionOnRsWithMeta
> -
>
> Key: HBASE-18366
> URL: https://issues.apache.org/jira/browse/HBASE-18366
> Project: HBase
>  Issue Type: Bug
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Attachments: hbase-18366.fix1.patch, hbase-18366.fix2.patch
>
>
> It worked for a few days after being enabled in HBASE-18278, but it started 
> failing after these commits:
> 6786b2b
> 68436c9
> 75d2eca
> 50bb045
> df93c13
> It works with one commit before: c5abb6c. Need to see what changed with those 
> commits.
> Currently it fails with TableNotFoundException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18201) add UT and docs for DataBlockEncodingTool

2018-06-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16522639#comment-16522639
 ] 

Hadoop QA commented on HBASE-18201:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  5m  
9s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
25s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  5m  
1s{color} | {color:blue} patch has no errors when building the reference guide. 
See footer for rendered docs, which you should manually inspect. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
30s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}184m  
4s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}257m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-18201 |
| JIRA Patch URL | 
https://issues

[jira] [Resolved] (HBASE-20635) Support to convert the shaded user permission proto to client user permission object

2018-06-25 Thread Rajeshbabu Chintaguntla (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved HBASE-20635.
-
Resolution: Fixed

bq. You understand the difference between hbase-protocol and 
hbase-protocol-shaded and that the shaded utils are for internal use only?
Yes understood [~stack]. Thanks.

> Support to convert the shaded user permission proto to client user permission 
> object
> 
>
> Key: HBASE-20635
> URL: https://issues.apache.org/jira/browse/HBASE-20635
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20635.patch, HBASE-20635_v2.patch, 
> PHOENIX-4528_5.x-HBase-2.0_v2.patch
>
>
> Currently AccessControlUtil has an API to convert the protobuf UserPermission 
> into a client user permission object, but we cannot do the same when we use 
> shaded protobufs.
> {noformat}
>   /**
>* Converts a user permission proto to a client user permission object.
>*
>* @param proto the protobuf UserPermission
>* @return the converted UserPermission
>*/
>   public static UserPermission 
> toUserPermission(AccessControlProtos.UserPermission proto) {
> return new UserPermission(proto.getUser().toByteArray(),
> toTablePermission(proto.getPermission()));
>   }
> {noformat}
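
A minimal sketch of the requested shaded-side counterpart, mirroring the unshaded snippet above. The class name is made up and the toTablePermission helper is left as a placeholder; this is an assumed shape, not the committed API.

{code:java}
import org.apache.hadoop.hbase.security.access.TablePermission;
import org.apache.hadoop.hbase.security.access.UserPermission;
import org.apache.hadoop.hbase.shaded.protobuf.generated.AccessControlProtos;

// Assumed shape only: same conversion as the unshaded helper quoted above,
// but accepting the shaded protobuf UserPermission type.
public final class ShadedUserPermissionSketch {

  public static UserPermission toUserPermission(AccessControlProtos.UserPermission proto) {
    return new UserPermission(proto.getUser().toByteArray(),
        toTablePermission(proto.getPermission()));
  }

  // Placeholder: the real helper would mirror AccessControlUtil#toTablePermission
  // for the shaded Permission message.
  private static TablePermission toTablePermission(AccessControlProtos.Permission proto) {
    throw new UnsupportedOperationException("mirror the unshaded toTablePermission here");
  }

  private ShadedUserPermissionSketch() {
  }
}
{code}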



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20762) precommit should archive generated LICENSE file

2018-06-25 Thread Mike Drob (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob resolved HBASE-20762.
---
Resolution: Not A Problem

> precommit should archive generated LICENSE file
> ---
>
> Key: HBASE-20762
> URL: https://issues.apache.org/jira/browse/HBASE-20762
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Mike Drob
>Priority: Major
>
> When a precommit run fails due to license issues, we get pointed to a file in 
> our maven logs:
> {noformat}
> /testptch/hbase/hbase-assembly/target/maven-shared-archive-resources/META-INF/LICENSE
> {noformat}
> But we don't have that file saved, so we don't know what the actual failure 
> was. So we should save that in our build artifacts. Or maybe we can print a 
> snippet from that file directly into the maven log. Both would be acceptable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

