[jira] [Reopened] (HBASE-22620) When a cluster open replication,regionserver will not clean up the walLog references on zk due to no wal entry need to be replicated

2019-06-27 Thread leizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang reopened HBASE-22620:
--

> When a cluster open replication,regionserver will not clean up the walLog 
> references on zk due to no wal entry need to be replicated
> 
>
> Key: HBASE-22620
> URL: https://issues.apache.org/jira/browse/HBASE-22620
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.2.4, 1.4.9
>Reporter: leizhang
>Priority: Major
> Fix For: 2.1.0
>
>
> When I enabled the replication feature on my HBase cluster (20 regionserver 
> nodes) and added a peer cluster, I created a table with 3 regions with 
> REPLICATION_SCOPE set to 1, hosted on 3 of the 20 regionservers. Because there 
> was no data (no entryBatch) to replicate, the remaining 17 nodes accumulated 
> lots of wal references under the zk node 
> "/hbase/replication/rs/\{regionserver}/\{peerId}/" that were never cleaned 
> up, which meant lots of wal files on hdfs were not cleaned up either. When I 
> checked my test cluster after about four months, it had accumulated about 
> 50,000 wal files in the oldWALs directory on hdfs. The source code shows that 
> only when there is data to replicate, and after some data has been shipped by 
> the source endpoint, is the useless-wal-file check executed to clean their 
> references on zk, after which the obsolete wal files on hdfs are cleaned up 
> normally. So I think we may need another way to trigger the useless-wal 
> cleaning job in a replicating cluster, perhaps in the scheduled replication 
> progress report task (just like ReplicationStatisticsTask.class)
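The cleanup-trigger idea above hinges on the source loop waking up even when no entries arrive. A minimal, self-contained sketch of the poll-with-timeout pattern (plain Java with illustrative names, not HBase's actual replication classes):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollVsTake {

    // Illustrative stand-in for a replication source loop: with take(), an
    // empty queue blocks the thread forever and any cleanup step after it
    // never runs; with poll(timeout), the loop wakes up periodically and can
    // release finished WAL references even when there is nothing to ship.
    static int drainWithCleanup(LinkedBlockingQueue<String> entryQueue,
                                int iterations) throws InterruptedException {
        int cleanupOpportunities = 0;
        for (int i = 0; i < iterations; i++) {
            String batch = entryQueue.poll(10, TimeUnit.MILLISECONDS);
            if (batch == null) {
                cleanupOpportunities++; // would clean old WAL refs on zk here
            }
        }
        return cleanupOpportunities;
    }

    public static void main(String[] args) throws InterruptedException {
        // Empty queue: every iteration still returns, so cleanup can run.
        System.out.println(drainWithCleanup(new LinkedBlockingQueue<>(), 3)); // prints 3
    }
}
```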



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (HBASE-22620) When a cluster open replication,regionserver will not clean up the walLog references on zk due to no wal entry need to be replicated

2019-06-27 Thread leizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HBASE-22620:
-
Comment: was deleted

(was: Thank you very much! I checked the source code of HBase 2.1.0 and found 
that

entryReader.take() has been replaced by entryReader.poll(getEntriesTimeout);

so the thread will no longer block and will execute the following logic, and 
the problem is solved!)

> When a cluster open replication,regionserver will not clean up the walLog 
> references on zk due to no wal entry need to be replicated
> 
>
> Key: HBASE-22620
> URL: https://issues.apache.org/jira/browse/HBASE-22620
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.2.4, 1.4.9
>Reporter: leizhang
>Priority: Major
> Fix For: 2.1.0
>
>
> When I enabled the replication feature on my HBase cluster (20 regionserver 
> nodes) and added a peer cluster, I created a table with 3 regions with 
> REPLICATION_SCOPE set to 1, hosted on 3 of the 20 regionservers. Because there 
> was no data (no entryBatch) to replicate, the remaining 17 nodes accumulated 
> lots of wal references under the zk node 
> "/hbase/replication/rs/\{regionserver}/\{peerId}/" that were never cleaned 
> up, which meant lots of wal files on hdfs were not cleaned up either. When I 
> checked my test cluster after about four months, it had accumulated about 
> 50,000 wal files in the oldWALs directory on hdfs. The source code shows that 
> only when there is data to replicate, and after some data has been shipped by 
> the source endpoint, is the useless-wal-file check executed to clean their 
> references on zk, after which the obsolete wal files on hdfs are cleaned up 
> normally. So I think we may need another way to trigger the useless-wal 
> cleaning job in a replicating cluster, perhaps in the scheduled replication 
> progress report task (just like ReplicationStatisticsTask.class)





[jira] [Commented] (HBASE-20368) Fix RIT stuck when a rsgroup has no online servers but AM's pendingAssginQueue is cleared

2019-06-27 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874683#comment-16874683
 ] 

stack commented on HBASE-20368:
---

Thanks [~arshiya9414] I just put up a patch for 2.1... lets see what hadoopqa 
says. If good, will commit. Thanks for following up here.

> Fix RIT stuck when a rsgroup has no online servers but AM's 
> pendingAssginQueue is cleared
> -
>
> Key: HBASE-20368
> URL: https://issues.apache.org/jira/browse/HBASE-20368
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-20368.branch-2.001.patch, 
> HBASE-20368.branch-2.002.patch, HBASE-20368.branch-2.003.patch, 
> HBASE-20368.branch-2.1.001.patch
>
>
> This error can be reproduced by shutting down all servers in an rsgroup and 
> starting them again soon afterwards. 
> The regions on this rsgroup will be reassigned, but there are no available 
> servers in this rsgroup.
> They will be added to the AM's pendingAssginQueue, which the AM will clear 
> regardless of the result of the assignment in this case.
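The gist of the fix is to keep, rather than clear, assignments that could not find a server. A rough sketch in plain Java (hypothetical names, not the actual AssignmentManager API):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Collections;
import java.util.Deque;
import java.util.Set;

public class PendingAssignSketch {

    // Process the pending queue once: regions with no live server in their
    // rsgroup are retained for retry instead of being dropped on the floor.
    static Deque<String> processOnce(Deque<String> pending, Set<String> liveServers) {
        Deque<String> retry = new ArrayDeque<>();
        while (!pending.isEmpty()) {
            String region = pending.poll();
            if (liveServers.isEmpty()) {
                retry.add(region); // keep it so RIT can resolve when servers return
            }
            // else: hand the region to a live server (omitted)
        }
        return retry;
    }

    public static void main(String[] args) {
        Deque<String> pending = new ArrayDeque<>(Arrays.asList("region-a", "region-b"));
        Deque<String> retry = processOnce(pending, Collections.<String>emptySet());
        System.out.println(retry.size()); // prints 2: nothing was silently cleared
    }
}
```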





[jira] [Updated] (HBASE-20368) Fix RIT stuck when a rsgroup has no online servers but AM's pendingAssginQueue is cleared

2019-06-27 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20368:
--
Attachment: HBASE-20368.branch-2.1.001.patch

> Fix RIT stuck when a rsgroup has no online servers but AM's 
> pendingAssginQueue is cleared
> -
>
> Key: HBASE-20368
> URL: https://issues.apache.org/jira/browse/HBASE-20368
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-20368.branch-2.001.patch, 
> HBASE-20368.branch-2.002.patch, HBASE-20368.branch-2.003.patch, 
> HBASE-20368.branch-2.1.001.patch
>
>
> This error can be reproduced by shutting down all servers in an rsgroup and 
> starting them again soon afterwards. 
> The regions on this rsgroup will be reassigned, but there are no available 
> servers in this rsgroup.
> They will be added to the AM's pendingAssginQueue, which the AM will clear 
> regardless of the result of the assignment in this case.





[jira] [Commented] (HBASE-20368) Fix RIT stuck when a rsgroup has no online servers but AM's pendingAssginQueue is cleared

2019-06-27 Thread Syeda Arshiya Tabreen (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874677#comment-16874677
 ] 

Syeda Arshiya Tabreen commented on HBASE-20368:
---

[~stack] We are using hbase-2.1 version. Thanks

> Fix RIT stuck when a rsgroup has no online servers but AM's 
> pendingAssginQueue is cleared
> -
>
> Key: HBASE-20368
> URL: https://issues.apache.org/jira/browse/HBASE-20368
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-20368.branch-2.001.patch, 
> HBASE-20368.branch-2.002.patch, HBASE-20368.branch-2.003.patch
>
>
> This error can be reproduced by shutting down all servers in an rsgroup and 
> starting them again soon afterwards. 
> The regions on this rsgroup will be reassigned, but there are no available 
> servers in this rsgroup.
> They will be added to the AM's pendingAssginQueue, which the AM will clear 
> regardless of the result of the assignment in this case.





[GitHub] [hbase] saintstack commented on issue #344: HBASE-22632 SplitTableRegionProcedure and MergeTableRegionsProcedure …

2019-06-27 Thread GitBox
saintstack commented on issue #344: HBASE-22632 SplitTableRegionProcedure and 
MergeTableRegionsProcedure …
URL: https://github.com/apache/hbase/pull/344#issuecomment-506594564
 
 
   Oh. Ignore. Missed that it's merged.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-19893) restore_snapshot is broken in master branch when region splits

2019-06-27 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874660#comment-16874660
 ] 

stack commented on HBASE-19893:
---

[~brfrn169] See [~jatsakthi] question above sir. Perhaps branch-2.2 and less do 
not have this issue (but fix version said 2.2.0 until I changed it just now to 
include master branch). Thanks.

> restore_snapshot is broken in master branch when region splits
> --
>
> Key: HBASE-19893
> URL: https://issues.apache.org/jira/browse/HBASE-19893
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Critical
> Fix For: 2.2.0
>
> Attachments: 19893.master.004.patch, 19893.master.004.patch, 
> 19893.master.004.patch, HBASE-19893.master.001.patch, 
> HBASE-19893.master.002.patch, HBASE-19893.master.003.patch, 
> HBASE-19893.master.003.patch, HBASE-19893.master.004.patch, 
> HBASE-19893.master.005.patch, HBASE-19893.master.005.patch, 
> HBASE-19893.master.005.patch, HBASE-19893.master.006.patch, 
> org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas-output.txt
>
>
> When I was investigating HBASE-19850, I found restore_snapshot didn't work in 
> master branch.
>  
> Steps to reproduce are as follows:
> 1. Create a table
> {code:java}
> create "test", "cf"
> {code}
> 2. Load data (2000 rows) to the table
> {code:java}
> (0...2000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> {code}
> 3. Split the table
> {code:java}
> split "test"
> {code}
> 4. Take a snapshot
> {code:java}
> snapshot "test", "snap"
> {code}
> 5. Load more data (2000 rows) to the table and split the table again
> {code:java}
> (2000...4000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> split "test"
> {code}
> 6. Restore the table from the snapshot 
> {code:java}
> disable "test"
> restore_snapshot "snap"
> enable "test"
> {code}
> 7. Scan the table
> {code:java}
> scan "test"
> {code}
> However, this scan returns only 244 rows (it should return 2000 rows) like 
> the following:
> {code:java}
> hbase(main):038:0> scan "test"
> ROW COLUMN+CELL
>  row78 column=cf:col, timestamp=1517298307049, value=val
> 
>   row999 column=cf:col, timestamp=1517298307608, value=val
> 244 row(s)
> Took 0.1500 seconds
> {code}
>  
> Also, the restored table should have 2 online regions but it has 3 online 
> regions.
>  





[jira] [Updated] (HBASE-19893) restore_snapshot is broken in master branch when region splits

2019-06-27 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19893:
--
Fix Version/s: 2.3.0
   3.0.0

> restore_snapshot is broken in master branch when region splits
> --
>
> Key: HBASE-19893
> URL: https://issues.apache.org/jira/browse/HBASE-19893
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Critical
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: 19893.master.004.patch, 19893.master.004.patch, 
> 19893.master.004.patch, HBASE-19893.master.001.patch, 
> HBASE-19893.master.002.patch, HBASE-19893.master.003.patch, 
> HBASE-19893.master.003.patch, HBASE-19893.master.004.patch, 
> HBASE-19893.master.005.patch, HBASE-19893.master.005.patch, 
> HBASE-19893.master.005.patch, HBASE-19893.master.006.patch, 
> org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas-output.txt
>
>
> When I was investigating HBASE-19850, I found restore_snapshot didn't work in 
> master branch.
>  
> Steps to reproduce are as follows:
> 1. Create a table
> {code:java}
> create "test", "cf"
> {code}
> 2. Load data (2000 rows) to the table
> {code:java}
> (0...2000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> {code}
> 3. Split the table
> {code:java}
> split "test"
> {code}
> 4. Take a snapshot
> {code:java}
> snapshot "test", "snap"
> {code}
> 5. Load more data (2000 rows) to the table and split the table again
> {code:java}
> (2000...4000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> split "test"
> {code}
> 6. Restore the table from the snapshot 
> {code:java}
> disable "test"
> restore_snapshot "snap"
> enable "test"
> {code}
> 7. Scan the table
> {code:java}
> scan "test"
> {code}
> However, this scan returns only 244 rows (it should return 2000 rows) like 
> the following:
> {code:java}
> hbase(main):038:0> scan "test"
> ROW COLUMN+CELL
>  row78 column=cf:col, timestamp=1517298307049, value=val
> 
>   row999 column=cf:col, timestamp=1517298307608, value=val
> 244 row(s)
> Took 0.1500 seconds
> {code}
>  
> Also, the restored table should have 2 online regions but it has 3 online 
> regions.
>  





[jira] [Commented] (HBASE-22640) Random init hstore lastFlushTime

2019-06-27 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874654#comment-16874654
 ] 

stack commented on HBASE-22640:
---

Patch looks good. Did you try it? Does it work? Thanks.

> Random init  hstore lastFlushTime
> -
>
> Key: HBASE-22640
> URL: https://issues.apache.org/jira/browse/HBASE-22640
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Major
> Fix For: 3.0.0, 2.2.1
>
> Attachments: HBASE-22640-master-v1.patch
>
>
> When a region is opened, the current time is used as each hstore's last flush 
> time. If too little data is written to trigger a memstore flush, then after 
> flushCheckInterval all the memstores become eligible to flush together, 
> bringing concentrated IO and compactions that cause high request latency. So 
> randomly initialize lastFlushTime
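The jitter idea can be sketched as follows, assuming a hypothetical helper that picks the initial value uniformly inside the last flushCheckInterval window, so stores opened at the same moment become flush-eligible at different times:

```java
import java.util.concurrent.ThreadLocalRandom;

public class FlushTimeJitter {

    // Hypothetical helper: instead of "lastFlushTime = now", pick a value in
    // (now - flushCheckIntervalMs, now] so that periodic flush checks for
    // stores opened together fire spread out over the interval.
    static long randomInitialFlushTime(long nowMs, long flushCheckIntervalMs) {
        return nowMs - ThreadLocalRandom.current().nextLong(flushCheckIntervalMs);
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long intervalMs = 3_600_000L; // e.g. a one-hour flush check interval
        long t = randomInitialFlushTime(now, intervalMs);
        if (t > now || t <= now - intervalMs) {
            throw new AssertionError("initial flush time outside jitter window");
        }
        System.out.println("jittered by " + (now - t) + " ms");
    }
}
```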





[jira] [Updated] (HBASE-22640) Random init hstore lastFlushTime

2019-06-27 Thread Bing Xiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Xiao updated HBASE-22640:
--
Status: Patch Available  (was: Open)

> Random init  hstore lastFlushTime
> -
>
> Key: HBASE-22640
> URL: https://issues.apache.org/jira/browse/HBASE-22640
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Major
> Fix For: 3.0.0, 2.2.1
>
> Attachments: HBASE-22640-master-v1.patch
>
>
> When a region is opened, the current time is used as each hstore's last flush 
> time. If too little data is written to trigger a memstore flush, then after 
> flushCheckInterval all the memstores become eligible to flush together, 
> bringing concentrated IO and compactions that cause high request latency. So 
> randomly initialize lastFlushTime





[GitHub] [hbase] jatsakthi commented on issue #316: HBASE-22595 Changed suppressions to full qualified class name

2019-06-27 Thread GitBox
jatsakthi commented on issue #316: HBASE-22595 Changed suppressions to full 
qualified class name
URL: https://github.com/apache/hbase/pull/316#issuecomment-506537726
 
 
   lgtm




[GitHub] [hbase] jatsakthi commented on a change in pull request #316: HBASE-22595 Changed suppressions to full qualified class name

2019-06-27 Thread GitBox
jatsakthi commented on a change in pull request #316: HBASE-22595 Changed 
suppressions to full qualified class name
URL: https://github.com/apache/hbase/pull/316#discussion_r298397974
 
 

 ##
 File path: 
hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
 ##
 @@ -36,13 +36,13 @@
   
   
   
-  
-  
-  
-  
-  
-  
+  
+  
+  
+  
+  
 
 Review comment:
   Unrelated to this jira, is the naming of "StartcodeAgnosticServerName" class 
correct? Like, is it intentionally not StartCodeAgnosticServerName?




[GitHub] [hbase] jatsakthi commented on issue #322: HBASE-22586 Javadoc Warnings related to @param tag

2019-06-27 Thread GitBox
jatsakthi commented on issue #322: HBASE-22586 Javadoc Warnings related to 
@param tag
URL: https://github.com/apache/hbase/pull/322#issuecomment-506535873
 
 
   @syedmurtazahassan any updates on this?




[GitHub] [hbase] jatsakthi commented on issue #345: HBASE-22638 : Checkstyle changes for Zookeeper Utility classes

2019-06-27 Thread GitBox
jatsakthi commented on issue #345: HBASE-22638 : Checkstyle changes for 
Zookeeper Utility classes
URL: https://github.com/apache/hbase/pull/345#issuecomment-506534008
 
 
   Any other classes in the zookeeper module where such changes can be done? If 
yes, it would be better to accommodate all of them under the same jira.




[jira] [Commented] (HBASE-19893) restore_snapshot is broken in master branch when region splits

2019-06-27 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874545#comment-16874545
 ] 

Sakthi commented on HBASE-19893:


Any reason why this wasn't backported to all 2.x branches? Unless there are any 
objections, I'll open a Jira for the backports.

> restore_snapshot is broken in master branch when region splits
> --
>
> Key: HBASE-19893
> URL: https://issues.apache.org/jira/browse/HBASE-19893
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Critical
> Fix For: 2.2.0
>
> Attachments: 19893.master.004.patch, 19893.master.004.patch, 
> 19893.master.004.patch, HBASE-19893.master.001.patch, 
> HBASE-19893.master.002.patch, HBASE-19893.master.003.patch, 
> HBASE-19893.master.003.patch, HBASE-19893.master.004.patch, 
> HBASE-19893.master.005.patch, HBASE-19893.master.005.patch, 
> HBASE-19893.master.005.patch, HBASE-19893.master.006.patch, 
> org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas-output.txt
>
>
> When I was investigating HBASE-19850, I found restore_snapshot didn't work in 
> master branch.
>  
> Steps to reproduce are as follows:
> 1. Create a table
> {code:java}
> create "test", "cf"
> {code}
> 2. Load data (2000 rows) to the table
> {code:java}
> (0...2000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> {code}
> 3. Split the table
> {code:java}
> split "test"
> {code}
> 4. Take a snapshot
> {code:java}
> snapshot "test", "snap"
> {code}
> 5. Load more data (2000 rows) to the table and split the table again
> {code:java}
> (2000...4000).each{|i| put "test", "row#{i}", "cf:col", "val"}
> split "test"
> {code}
> 6. Restore the table from the snapshot 
> {code:java}
> disable "test"
> restore_snapshot "snap"
> enable "test"
> {code}
> 7. Scan the table
> {code:java}
> scan "test"
> {code}
> However, this scan returns only 244 rows (it should return 2000 rows) like 
> the following:
> {code:java}
> hbase(main):038:0> scan "test"
> ROW COLUMN+CELL
>  row78 column=cf:col, timestamp=1517298307049, value=val
> 
>   row999 column=cf:col, timestamp=1517298307608, value=val
> 244 row(s)
> Took 0.1500 seconds
> {code}
>  
> Also, the restored table should have 2 online regions but it has 3 online 
> regions.
>  





[jira] [Commented] (HBASE-22632) SplitTableRegionProcedure and MergeTableRegionsProcedure should skip store files for unknown column families

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874529#comment-16874529
 ] 

Hudson commented on HBASE-22632:


Results for branch branch-2
[build #2031 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2031/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2031//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2031//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2031//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> SplitTableRegionProcedure and MergeTableRegionsProcedure should skip store 
> files for unknown column families
> 
>
> Key: HBASE-22632
> URL: https://issues.apache.org/jira/browse/HBASE-22632
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22632-UT.patch
>
>
> Hit this problem in our internal staging cluster. Not sure why, but probably 
> there was a partially successful 'alter table' call that removed a family. As 
> it was only partially successful, some stale store files of the removed 
> family were left under the region directory. In 
> SplitTableRegionProcedure and MergeTableRegionsProcedure we get all the 
> store files by listing the file system, so we also pick up the stale store 
> files for the family that should already have been removed, which then 
> causes an NPE when we try to access the ColumnFamilyDescriptor.
> Although store files for removed families are not the common case, FWIW I 
> think we can do something to make our procedures more robust...
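A defensive filter along these lines could drop the stale files up front; this is a sketch over plain path strings (a region/family/file layout is assumed), not the actual HRegionFileSystem API:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class SkipUnknownFamilies {

    // Keep only store files whose family directory is still present in the
    // table descriptor; stale files of a removed family are skipped (and
    // could be logged or archived) instead of triggering an NPE later.
    static List<String> filterKnown(List<String> storeFiles, Set<String> knownFamilies) {
        return storeFiles.stream()
            .filter(path -> knownFamilies.contains(path.split("/")[1]))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList("region1/cf/hfile-1", "region1/old_cf/hfile-2");
        Set<String> families = new HashSet<>(Arrays.asList("cf"));
        System.out.println(filterKnown(files, families)); // prints [region1/cf/hfile-1]
    }
}
```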





[jira] [Commented] (HBASE-22632) SplitTableRegionProcedure and MergeTableRegionsProcedure should skip store files for unknown column families

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874505#comment-16874505
 ] 

Hudson commented on HBASE-22632:


Results for branch master
[build #1180 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1180/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1180//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1180//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1180//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> SplitTableRegionProcedure and MergeTableRegionsProcedure should skip store 
> files for unknown column families
> 
>
> Key: HBASE-22632
> URL: https://issues.apache.org/jira/browse/HBASE-22632
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22632-UT.patch
>
>
> Hit this problem in our internal staging cluster. Not sure why, but probably 
> there was a partially successful 'alter table' call that removed a family. As 
> it was only partially successful, some stale store files of the removed 
> family were left under the region directory. In 
> SplitTableRegionProcedure and MergeTableRegionsProcedure we get all the 
> store files by listing the file system, so we also pick up the stale store 
> files for the family that should already have been removed, which then 
> causes an NPE when we try to access the ColumnFamilyDescriptor.
> Although store files for removed families are not the common case, FWIW I 
> think we can do something to make our procedures more robust...





[jira] [Commented] (HBASE-22632) SplitTableRegionProcedure and MergeTableRegionsProcedure should skip store files for unknown column families

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874500#comment-16874500
 ] 

Hudson commented on HBASE-22632:


Results for branch branch-2.1
[build #1307 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1307/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1307//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1307//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1307//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> SplitTableRegionProcedure and MergeTableRegionsProcedure should skip store 
> files for unknown column families
> 
>
> Key: HBASE-22632
> URL: https://issues.apache.org/jira/browse/HBASE-22632
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22632-UT.patch
>
>
> Hit this problem in our internal staging cluster. Not sure why, but probably 
> there was a partially successful 'alter table' call that removed a family. As 
> it was only partially successful, some stale store files of the removed 
> family were left under the region directory. In 
> SplitTableRegionProcedure and MergeTableRegionsProcedure we get all the 
> store files by listing the file system, so we also pick up the stale store 
> files for the family that should already have been removed, which then 
> causes an NPE when we try to access the ColumnFamilyDescriptor.
> Although store files for removed families are not the common case, FWIW I 
> think we can do something to make our procedures more robust...





[jira] [Commented] (HBASE-22632) SplitTableRegionProcedure and MergeTableRegionsProcedure should skip store files for unknown column families

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874491#comment-16874491
 ] 

Hudson commented on HBASE-22632:


Results for branch branch-2.2
[build #392 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/392/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/392//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/392//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/392//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> SplitTableRegionProcedure and MergeTableRegionsProcedure should skip store 
> files for unknown column families
> 
>
> Key: HBASE-22632
> URL: https://issues.apache.org/jira/browse/HBASE-22632
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22632-UT.patch
>
>
> Hit this problem in our internal staging cluster. Not sure why, but probably 
> there was a partially successful 'alter table' call that removed a family. As 
> it was only partially successful, some stale store files of the removed 
> family were left under the region directory. In 
> SplitTableRegionProcedure and MergeTableRegionsProcedure we get all the 
> store files by listing the file system, so we also pick up the stale store 
> files for the family that should already have been removed, which then 
> causes an NPE when we try to access the ColumnFamilyDescriptor.
> Although store files for removed families are not the common case, FWIW I 
> think we can do something to make our procedures more robust...





[jira] [Commented] (HBASE-22632) SplitTableRegionProcedure and MergeTableRegionsProcedure should skip store files for unknown column families

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874476#comment-16874476
 ] 

Hudson commented on HBASE-22632:


Results for branch branch-2.0
[build #1706 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1706/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1706//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1706//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1706//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> SplitTableRegionProcedure and MergeTableRegionsProcedure should skip store 
> files for unknown column families
> 
>
> Key: HBASE-22632
> URL: https://issues.apache.org/jira/browse/HBASE-22632
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22632-UT.patch
>
>
> Hit this problem in our internal staging cluster. Not sure why, but probably 
> there was a partially successful 'alter table' call that removed a family. As 
> the call was only partially successful, some stale store files of the removed 
> family were left under the region directory. In 
> SplitTableRegionProcedure and MergeTableRegionsProcedure we collect all the 
> store files by listing the file system, so we also pick up the stale store 
> files for the family that should already have been removed, which then causes 
> an NPE when we try to access the ColumnFamilyDescriptor.
> Although it is not common to have store files for removed families, FWIW, I 
> think we can do something to make our procedures more robust...





[GitHub] [hbase] anmolnar commented on issue #289: HBASE-13798 TestFromClientSide* don't close the Table (branch-2)

2019-06-27 Thread GitBox
anmolnar commented on issue #289: HBASE-13798 TestFromClientSide* don't close 
the Table (branch-2)
URL: https://github.com/apache/hbase/pull/289#issuecomment-506488412
 
 
   @busbey Yes. I'm still on holiday, but will come back to this next week.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of 
BufferedMutator  
URL: https://github.com/apache/hbase/pull/343#issuecomment-506482040
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 57 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ branch-2.1 Compile Tests _ |
   | +1 | mvninstall | 291 | branch-2.1 passed |
   | +1 | compile | 30 | branch-2.1 passed |
   | +1 | checkstyle | 40 | branch-2.1 passed |
   | +1 | shadedjars | 299 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 76 | branch-2.1 passed |
   | +1 | javadoc | 28 | branch-2.1 passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 282 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -1 | checkstyle | 40 | hbase-client: The patch generated 18 new + 60 
unchanged - 2 fixed = 78 total (was 62) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 289 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 1181 | Patch does not cause any errors with Hadoop 
2.7.7 2.8.5 or 3.0.3 3.1.2. |
   | -1 | findbugs | 89 | hbase-client generated 5 new + 0 unchanged - 0 fixed 
= 5 total (was 0) |
   | +1 | javadoc | 27 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 152 | hbase-client in the patch passed. |
   | +1 | asflicense | 12 | The patch does not generate ASF License warnings. |
   | | | 3281 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-client |
   |  |  
org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.sendMultiAction(Map, int, 
List, boolean) calls Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:[line 592] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.writeBufferPeriodicFlushTimer;
 locked 46% of time  Unsynchronized access at BufferedMutatorImpl.java:46% of 
time  Unsynchronized access at BufferedMutatorImpl.java:[line 239] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.doFlush(boolean) 
calls Thread.sleep() with a lock held  At BufferedMutatorImpl.java:lock held  
At BufferedMutatorImpl.java:[line 374] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush() does not 
release lock on all paths  At BufferedMutatorImpl.java:on all paths  At 
BufferedMutatorImpl.java:[line 305] |
   |  |  Dead store to f in 
org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At 
BufferedMutatorThreadPoolExecutor.java:org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At BufferedMutatorThreadPoolExecutor.java:[line 97] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/343 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux f948f8437b7a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | branch-2.1 / 60097a6467 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/8/artifact/out/diff-checkstyle-hbase-client.txt
 |
   | findbugs | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/8/artifact/out/new-findbugs-hbase-client.html
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/8/testReport/
 |
   | Max. process+thread count | 273 (vs. ulimit of 1) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/8/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
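The "calls Thread.sleep() with a lock held" warnings FindBugs raises above point at a general anti-pattern: sleeping while holding a lock blocks every other thread that needs that lock for the whole sleep. The usual remedy is to wait on a `Condition`, which atomically releases the lock while waiting. A generic sketch of that pattern (hypothetical names; this is not the actual BufferedMutatorImpl code):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: bounded admission with backoff. offer() waits for space using
// Condition.awaitNanos(), which releases the lock while waiting, instead of
// Thread.sleep(), which would hold it.
public class BackoffUnderLock {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition spaceAvailable = lock.newCondition();
    private int pending = 0;
    private final int limit = 2;

    /** Tries to admit one item, waiting up to timeoutMillis for space. */
    public boolean offer(long timeoutMillis) {
        lock.lock();
        try {
            long nanos = TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
            while (pending >= limit) {
                if (nanos <= 0L) {
                    return false; // timed out waiting for space
                }
                try {
                    // Lock is released here until signalled or timed out.
                    nanos = spaceAvailable.awaitNanos(nanos);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            pending++;
            return true;
        } finally {
            lock.unlock();
        }
    }

    /** Marks one item done and wakes a waiting offer(). */
    public void complete() {
        lock.lock();
        try {
            pending--;
            spaceAvailable.signal();
        } finally {
            lock.unlock();
        }
    }
}
```

This also addresses the "does not release lock on all paths" class of warning, since the unlock lives in a single `finally` block.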
   
   


[jira] [Commented] (HBASE-22627) Port HBASE-22617 (Recovered WAL directories not getting cleaned up) to branch-1

2019-06-27 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874424#comment-16874424
 ] 

Andrew Purtell commented on HBASE-22627:


It's private, so this is allowed.

There is no valid expectation of stability for Private-marked interfaces and classes.

> Port HBASE-22617 (Recovered WAL directories not getting cleaned up) to 
> branch-1
> ---
>
> Key: HBASE-22627
> URL: https://issues.apache.org/jira/browse/HBASE-22627
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.5.0, 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22627-branch-1.patch, HBASE-22627-branch-1.patch, 
> HBASE-22627-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20060) Add details of off heap memstore into book.

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874387#comment-16874387
 ] 

Hudson commented on HBASE-20060:


Results for branch branch-2
[build #2030 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2030/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2030//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2030//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2030//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2030//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Add details of off heap memstore into book.
> ---
>
> Key: HBASE-20060
> URL: https://issues.apache.org/jira/browse/HBASE-20060
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Anoop Sam John
>Assignee: Zheng Hu
>Priority: Critical
> Fix For: 3.0.0, 2.3.0
>
>






[GitHub] [hbase] Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of 
BufferedMutator  
URL: https://github.com/apache/hbase/pull/343#issuecomment-506438174
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ branch-2.1 Compile Tests _ |
   | +1 | mvninstall | 218 | branch-2.1 passed |
   | +1 | compile | 23 | branch-2.1 passed |
   | +1 | checkstyle | 29 | branch-2.1 passed |
   | +1 | shadedjars | 222 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 60 | branch-2.1 passed |
   | +1 | javadoc | 21 | branch-2.1 passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 222 | the patch passed |
   | +1 | compile | 22 | the patch passed |
   | +1 | javac | 22 | the patch passed |
   | -1 | checkstyle | 30 | hbase-client: The patch generated 18 new + 60 
unchanged - 2 fixed = 78 total (was 62) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 219 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 928 | Patch does not cause any errors with Hadoop 2.7.7 
2.8.5 or 3.0.3 3.1.2. |
   | -1 | findbugs | 73 | hbase-client generated 5 new + 0 unchanged - 0 fixed 
= 5 total (was 0) |
   | +1 | javadoc | 23 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 116 | hbase-client in the patch failed. |
   | +1 | asflicense | 9 | The patch does not generate ASF License warnings. |
   | | | 2565 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-client |
   |  |  
org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.sendMultiAction(Map, int, 
List, boolean) calls Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:[line 592] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.writeBufferPeriodicFlushTimer;
 locked 46% of time  Unsynchronized access at BufferedMutatorImpl.java:46% of 
time  Unsynchronized access at BufferedMutatorImpl.java:[line 236] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.doFlush(boolean) 
calls Thread.sleep() with a lock held  At BufferedMutatorImpl.java:lock held  
At BufferedMutatorImpl.java:[line 371] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush() does not 
release lock on all paths  At BufferedMutatorImpl.java:on all paths  At 
BufferedMutatorImpl.java:[line 302] |
   |  |  Dead store to f in 
org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At 
BufferedMutatorThreadPoolExecutor.java:org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At BufferedMutatorThreadPoolExecutor.java:[line 97] |
   | Failed junit tests | hadoop.hbase.client.TestBufferedMutator |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/343 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 7071126567ec 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | branch-2.1 / 60097a6467 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/7/artifact/out/diff-checkstyle-hbase-client.txt
 |
   | findbugs | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/7/artifact/out/new-findbugs-hbase-client.html
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/7/artifact/out/patch-unit-hbase-client.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/7/testReport/
 |
   | Max. process+thread count | 104 (vs. ulimit of 1) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/7/console |
   | Powered by | Apache Yetus 

[GitHub] [hbase] virajjasani commented on issue #333: [HBASE-22606] : BucketCache additional tests

2019-06-27 Thread GitBox
virajjasani commented on issue #333: [HBASE-22606] : BucketCache additional 
tests
URL: https://github.com/apache/hbase/pull/333#issuecomment-506436207
 
 
   @Apache9 @busbey Could you please take a look




[GitHub] [hbase] Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of 
BufferedMutator  
URL: https://github.com/apache/hbase/pull/343#issuecomment-506413897
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 91 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ branch-2.1 Compile Tests _ |
   | +1 | mvninstall | 228 | branch-2.1 passed |
   | +1 | compile | 24 | branch-2.1 passed |
   | +1 | checkstyle | 33 | branch-2.1 passed |
   | +1 | shadedjars | 238 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 75 | branch-2.1 passed |
   | +1 | javadoc | 23 | branch-2.1 passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 245 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | -1 | checkstyle | 48 | hbase-client: The patch generated 18 new + 60 
unchanged - 2 fixed = 78 total (was 62) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 318 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 997 | Patch does not cause any errors with Hadoop 2.7.7 
2.8.5 or 3.0.3 3.1.2. |
   | -1 | findbugs | 74 | hbase-client generated 5 new + 0 unchanged - 0 fixed 
= 5 total (was 0) |
   | +1 | javadoc | 23 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 113 | hbase-client in the patch failed. |
   | +1 | asflicense | 10 | The patch does not generate ASF License warnings. |
   | | | 2898 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-client |
   |  |  
org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.sendMultiAction(Map, int, 
List, boolean) calls Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:[line 592] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.writeBufferPeriodicFlushTimer;
 locked 46% of time  Unsynchronized access at BufferedMutatorImpl.java:46% of 
time  Unsynchronized access at BufferedMutatorImpl.java:[line 236] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.doFlush(boolean) 
calls Thread.sleep() with a lock held  At BufferedMutatorImpl.java:lock held  
At BufferedMutatorImpl.java:[line 371] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush() does not 
release lock on all paths  At BufferedMutatorImpl.java:on all paths  At 
BufferedMutatorImpl.java:[line 302] |
   |  |  Dead store to f in 
org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At 
BufferedMutatorThreadPoolExecutor.java:org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At BufferedMutatorThreadPoolExecutor.java:[line 97] |
   | Failed junit tests | hadoop.hbase.client.TestClientNoCluster |
   |   | hadoop.hbase.client.TestBufferedMutator |
   |   | hadoop.hbase.ipc.TestRpcClientDeprecatedNameMapping |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/343 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux fdb87df5a7a2 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | branch-2.1 / a172b480fe |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/6/artifact/out/diff-checkstyle-hbase-client.txt
 |
   | findbugs | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/6/artifact/out/new-findbugs-hbase-client.html
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/6/artifact/out/patch-unit-hbase-client.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/6/testReport/
 |
   | Max. process+thread count | 94 (vs. ulimit of 1) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 

[jira] [Commented] (HBASE-22403) Balance in RSGroup should consider throttling and a failure affects the whole

2019-06-27 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874259#comment-16874259
 ] 

HBase QA commented on HBASE-22403:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
47s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
45s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}259m 
15s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
44s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}333m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/585/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22403 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973063/HBASE-22403.master.004.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 290adf634929 4.4.0-145-generic #171-Ubuntu SMP Tue Mar 26 
12:43:40 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 0198868531 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| 

[GitHub] [hbase] Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of 
BufferedMutator  
URL: https://github.com/apache/hbase/pull/343#issuecomment-506391825
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 103 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ branch-2.1 Compile Tests _ |
   | +1 | mvninstall | 245 | branch-2.1 passed |
   | +1 | compile | 24 | branch-2.1 passed |
   | +1 | checkstyle | 33 | branch-2.1 passed |
   | +1 | shadedjars | 239 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 62 | branch-2.1 passed |
   | +1 | javadoc | 22 | branch-2.1 passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 84 | root in the patch failed. |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | -1 | checkstyle | 33 | hbase-client: The patch generated 18 new + 60 
unchanged - 2 fixed = 78 total (was 62) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedjars | 120 | patch has 11 errors when building our shaded 
downstream artifacts. |
   | -1 | hadoopcheck | 76 | The patch causes 11 errors with Hadoop v2.7.7. |
   | -1 | hadoopcheck | 156 | The patch causes 11 errors with Hadoop v2.8.5. |
   | -1 | hadoopcheck | 244 | The patch causes 11 errors with Hadoop v3.0.3. |
   | -1 | hadoopcheck | 327 | The patch causes 11 errors with Hadoop v3.1.2. |
   | -1 | findbugs | 70 | hbase-client generated 5 new + 0 unchanged - 0 fixed 
= 5 total (was 0) |
   | +1 | javadoc | 24 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hbase-client in the patch failed. |
   | +1 | asflicense | 11 | The patch does not generate ASF License warnings. |
   | | | 1515 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-client |
   |  |  
org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.sendMultiAction(Map, int, 
List, boolean) calls Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:[line 592] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.writeBufferPeriodicFlushTimer;
 locked 46% of time  Unsynchronized access at BufferedMutatorImpl.java:46% of 
time  Unsynchronized access at BufferedMutatorImpl.java:[line 236] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.doFlush(boolean) 
calls Thread.sleep() with a lock held  At BufferedMutatorImpl.java:lock held  
At BufferedMutatorImpl.java:[line 371] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush() does not 
release lock on all paths  At BufferedMutatorImpl.java:on all paths  At 
BufferedMutatorImpl.java:[line 302] |
   |  |  Dead store to f in 
org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At 
BufferedMutatorThreadPoolExecutor.java:org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At BufferedMutatorThreadPoolExecutor.java:[line 94] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/343 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 4a148600d077 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | branch-2.1 / a172b480fe |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | mvninstall | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/5/artifact/out/patch-mvninstall-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/5/artifact/out/diff-checkstyle-hbase-client.txt
 |
   | shadedjars | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/5/artifact/out/patch-shadedjars.txt
 |
   | hadoopcheck | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/5/artifact/out/patch-javac-2.7.7.txt
 |
   | hadoopcheck | 

[jira] [Commented] (HBASE-20368) Fix RIT stuck when a rsgroup has no online servers but AM's pendingAssginQueue is cleared

2019-06-27 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874233#comment-16874233
 ] 

stack commented on HBASE-20368:
---

[~arshiya9414] What version of HBase are you running, sir? Thanks.

> Fix RIT stuck when a rsgroup has no online servers but AM's 
> pendingAssginQueue is cleared
> -
>
> Key: HBASE-20368
> URL: https://issues.apache.org/jira/browse/HBASE-20368
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-20368.branch-2.001.patch, 
> HBASE-20368.branch-2.002.patch, HBASE-20368.branch-2.003.patch
>
>
> This error can be reproduced by shutting down all servers in an rsgroup and 
> starting them again soon afterwards. 
> The regions in this rsgroup will be reassigned, but there are no available 
> servers for the rsgroup.
> The regions are added to the AM's pendingAssignQueue, which the AM clears 
> regardless of the result of the assignment in this case.
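The failure mode described above — clearing the pending queue even when assignment could not succeed — can be avoided by re-queuing regions whose assignment failed so a later round retries them. A generic sketch of that fix direction (all names hypothetical; this is not the actual AssignmentManager code):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Set;

// Sketch: a pending-assignment queue that retains regions when no server in
// their group is online, instead of dropping them and leaving regions in
// transition forever.
public class PendingAssignQueue {
    private final Deque<String> pending = new ArrayDeque<>();

    public void add(String region) {
        pending.add(region);
    }

    /** Runs one assignment round; regions that cannot be placed are re-queued. */
    public List<String> assignRound(Set<String> onlineServersForGroup) {
        List<String> assigned = new ArrayList<>();
        int n = pending.size();
        for (int i = 0; i < n; i++) {
            String region = pending.poll();
            if (onlineServersForGroup.isEmpty()) {
                // No online server in this rsgroup yet: keep the region
                // queued so the next round retries, rather than clearing it.
                pending.add(region);
            } else {
                assigned.add(region);
            }
        }
        return assigned;
    }

    public int pendingCount() {
        return pending.size();
    }
}
```

Once the rsgroup's servers come back online, the retained regions are picked up by the next round instead of staying stuck in transition.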





[GitHub] [hbase] Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of 
BufferedMutator  
URL: https://github.com/apache/hbase/pull/343#issuecomment-506364710
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ branch-2.1 Compile Tests _ |
   | +1 | mvninstall | 237 | branch-2.1 passed |
   | +1 | compile | 23 | branch-2.1 passed |
   | +1 | checkstyle | 29 | branch-2.1 passed |
   | +1 | shadedjars | 223 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 61 | branch-2.1 passed |
   | +1 | javadoc | 22 | branch-2.1 passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 76 | root in the patch failed. |
   | -1 | compile | 22 | hbase-client in the patch failed. |
   | -1 | javac | 22 | hbase-client in the patch failed. |
   | -1 | checkstyle | 34 | hbase-client: The patch generated 161 new + 61 
unchanged - 1 fixed = 222 total (was 62) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedjars | 118 | patch has 14 errors when building our shaded 
downstream artifacts. |
   | -1 | hadoopcheck | 70 | The patch causes 14 errors with Hadoop v2.7.7. |
   | -1 | hadoopcheck | 140 | The patch causes 14 errors with Hadoop v2.8.5. |
   | -1 | hadoopcheck | 214 | The patch causes 14 errors with Hadoop v3.0.3. |
   | -1 | hadoopcheck | 285 | The patch causes 14 errors with Hadoop v3.1.2. |
   | -1 | findbugs | 18 | hbase-client in the patch failed. |
   | +1 | javadoc | 20 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 22 | hbase-client in the patch failed. |
   | +1 | asflicense | 9 | The patch does not generate ASF License warnings. |
   | | | 1288 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/343 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 7223070aa198 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | branch-2.1 / a172b480fe |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | mvninstall | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/patch-mvninstall-root.txt
 |
   | compile | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/patch-compile-hbase-client.txt
 |
   | javac | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/patch-compile-hbase-client.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/diff-checkstyle-hbase-client.txt
 |
   | shadedjars | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/patch-shadedjars.txt
 |
   | hadoopcheck | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/patch-javac-2.7.7.txt
 |
   | hadoopcheck | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/patch-javac-2.8.5.txt
 |
   | hadoopcheck | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/patch-javac-3.0.3.txt
 |
   | hadoopcheck | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/patch-javac-3.1.2.txt
 |
   | findbugs | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/patch-findbugs-hbase-client.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/artifact/out/patch-unit-hbase-client.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/testReport/
 |
   | Max. process+thread count | 93 (vs. ulimit of 1) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hbase] sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator   
URL: https://github.com/apache/hbase/pull/343#discussion_r298195692
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorThreadPoolExecutor.java
 ##
 @@ -0,0 +1,197 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.util.Threads;
+
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.concurrent.*;
+import java.util.concurrent.atomic.AtomicLong;
+
+@SuppressWarnings("WeakerAccess")
+public class BufferedMutatorThreadPoolExecutor extends ThreadPoolExecutor {
 
 Review comment:
   Exactly. This class can be removed (it is not required); it is just an 
illustration of how to get some statistics on flush().
   The best way to achieve that would be for the Future to contain all the 
needed information.
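   The "statistics on flush()" idea can be sketched without the full class. Below is a minimal, hypothetical `ThreadPoolExecutor` subclass (class and method names are illustrative, not HBase API) that gathers task counts and timings through the standard `beforeExecute`/`afterExecute` hooks:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class StatsThreadPoolExecutor extends ThreadPoolExecutor {
  private final AtomicLong completed = new AtomicLong();
  private final AtomicLong totalNanos = new AtomicLong();
  private final ThreadLocal<Long> started = new ThreadLocal<>();

  public StatsThreadPoolExecutor(int poolSize) {
    super(poolSize, poolSize, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
  }

  @Override protected void beforeExecute(Thread t, Runnable r) {
    started.set(System.nanoTime());               // remember when this task started
    super.beforeExecute(t, r);
  }

  @Override protected void afterExecute(Runnable r, Throwable t) {
    super.afterExecute(r, t);
    // afterExecute runs on the same worker thread as beforeExecute,
    // so the ThreadLocal timestamp belongs to this task.
    totalNanos.addAndGet(System.nanoTime() - started.get());
    completed.incrementAndGet();
  }

  public long completedTasks() { return completed.get(); }

  public long totalTimeNanos() { return totalNanos.get(); }

  /** Small demo: run no-op tasks and return how many were observed. */
  public static long demo(int tasks) {
    StatsThreadPoolExecutor ex = new StatsThreadPoolExecutor(2);
    for (int i = 0; i < tasks; i++) {
      ex.execute(() -> { });
    }
    ex.shutdown();
    try {
      ex.awaitTermination(10, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return ex.completedTasks();
  }
}
```

   Since `afterExecute` completes before a worker exits, every task is counted by the time `awaitTermination` returns.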


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator   
URL: https://github.com/apache/hbase/pull/343#discussion_r298189981
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
 ##
 @@ -188,14 +186,6 @@ public AbstractRpcClient(Configuration conf, String clusterId, SocketAddress localAddr
 
 this.connections = new PoolMap<>(getPoolType(conf), getPoolSize(conf));
 
-this.cleanupIdleConnectionTask = IDLE_CONN_SWEEPER.scheduleAtFixedRate(new 
Runnable() {
 
 Review comment:
   Just committed a version that restores that. To avoid having it done twice 
(in the case of Netty), I added hasIdleCleanupSupport() where needed.
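   The restored cleanup is essentially a `scheduleAtFixedRate` sweep over idle connections. A self-contained sketch of the pattern (the `Connection` type and map are illustrative stand-ins for the HBase classes, assuming a simple last-touched timestamp per connection; extracting `sweep()` keeps the logic testable without timers):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IdleSweeper {
  static final class Connection {
    volatile long lastTouchedNanos;
    volatile boolean closed;
    Connection(long t) { lastTouchedNanos = t; }
  }

  final Map<String, Connection> connections = new ConcurrentHashMap<>();
  private final ScheduledExecutorService sweeper =
      Executors.newSingleThreadScheduledExecutor();

  /** Close and remove every connection idle longer than maxIdleNanos; return count removed. */
  int sweep(long nowNanos, long maxIdleNanos) {
    int[] removed = {0};
    connections.values().removeIf(c -> {
      if (nowNanos - c.lastTouchedNanos > maxIdleNanos) {
        c.closed = true;      // real code would close the underlying socket here
        removed[0]++;
        return true;
      }
      return false;
    });
    return removed[0];
  }

  /** Schedule the sweep at a fixed rate, like the removed cleanupIdleConnectionTask. */
  void start(long maxIdleNanos, long periodMillis) {
    sweeper.scheduleAtFixedRate(
        () -> sweep(System.nanoTime(), maxIdleNanos),
        periodMillis, periodMillis, TimeUnit.MILLISECONDS);
  }
}
```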


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache9 merged pull request #344: HBASE-22632 SplitTableRegionProcedure and MergeTableRegionsProcedure …

2019-06-27 Thread GitBox
Apache9 merged pull request #344: HBASE-22632 SplitTableRegionProcedure and 
MergeTableRegionsProcedure …
URL: https://github.com/apache/hbase/pull/344
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874155#comment-16874155
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #160 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/160/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/160//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/160//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/160//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the HFile into an on-heap 
> byte[], then copy the on-heap byte[] into the off-heap bucket cache 
> asynchronously. In my 100% get performance test, I also observed frequent 
> young GCs; the largest memory footprint in the young gen should be the 
> on-heap block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of 
> into a byte[] to reduce young GC pressure. We did not implement this before 
> because the older HDFS client had no ByteBuffer reading interface, but 2.7+ 
> supports it now, so we can fix this.
> Will provide a patch and some perf comparisons for this.
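HDFS 2.7+ exposes this capability through its ByteBufferReadable interface. As a rough, self-contained illustration of the same idea using plain NIO rather than the HDFS API (names are illustrative), the positional read goes directly into a caller-supplied, possibly off-heap ByteBuffer instead of an intermediate on-heap byte[]:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectBlockRead {
  /**
   * Read up to buf.remaining() bytes at `offset` straight into the supplied
   * ByteBuffer (which may be direct/off-heap), avoiding the intermediate
   * on-heap byte[] that the quoted code allocates. Returns bytes read.
   */
  static int readInto(Path file, long offset, ByteBuffer buf) throws IOException {
    try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
      int total = 0;
      while (buf.hasRemaining()) {
        int n = ch.read(buf, offset + total);  // positional read, no shared seek state
        if (n < 0) break;                      // EOF
        total += n;
      }
      return total;
    }
  }

  /** Demo: write a small file, then read 7 bytes at offset 7 into a direct buffer. */
  static String demo() {
    try {
      Path f = Files.createTempFile("block", ".bin");
      Files.write(f, "header-payload".getBytes(StandardCharsets.UTF_8));
      ByteBuffer buf = ByteBuffer.allocateDirect(7);
      int n = readInto(f, 7, buf);
      buf.flip();
      byte[] out = new byte[n];
      buf.get(out);
      Files.delete(f);
      return new String(out, StandardCharsets.UTF_8);
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```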



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22627) Port HBASE-22617 (Recovered WAL directories not getting cleaned up) to branch-1

2019-06-27 Thread Pankaj Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874147#comment-16874147
 ] 

Pankaj Kumar commented on HBASE-22627:
--

IMO we shouldn't remove the HRegion.getRegionDir() APIs. I know HRegion is 
private (InterfaceAudience.Private), but removing them may be an incompatible 
change for applications. Kindly provide your opinion [~apurtell].

> Port HBASE-22617 (Recovered WAL directories not getting cleaned up) to 
> branch-1
> ---
>
> Key: HBASE-22627
> URL: https://issues.apache.org/jira/browse/HBASE-22627
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.5.0, 1.4.10, 1.3.5
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22627-branch-1.patch, HBASE-22627-branch-1.patch, 
> HBASE-22627-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22641) When the Region Server switches the WAL log, the new WAL file created successfully but namenode returns message fails. Then the client retry, but namenode return 'file h

2019-06-27 Thread chenwandong (JIRA)
chenwandong created HBASE-22641:
---

 Summary: When the region server rolls the WAL, the new WAL file is created 
successfully but the namenode's response is lost. The client then retries, the 
namenode returns a 'file already exists' exception, and the region server does 
not handle the exception and aborts itself.
 Key: HBASE-22641
 URL: https://issues.apache.org/jira/browse/HBASE-22641
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.3.4
Reporter: chenwandong
 Attachments: image-2019-06-27-21-12-29-757.png

!image-2019-06-27-21-12-29-757.png!

Problem Description
1. HBase's WAL file reaches its 128 MB size limit, so the region server rolls 
to a new WAL file and calls the HDFS client to create it.
2. The HDFS client sends a CREATE request to the HDFS namenode over the RPC 
channel.
3. The namenode checks and creates the file, successfully recording the new 
file's metadata.
4. Because of a transient network failure, the namenode's response never 
reaches the HDFS client.
5. Since the HDFS client receives no response, it waits a while, retries, and 
sends the CREATE request again.
6. The namenode detects that the file to be created already exists.
7. The namenode returns a "file already exists" exception (IOException) to the 
HDFS client.
8. After HBase receives the exception, it does not handle it and aborts the 
region server.
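One possible handling, sketched here with the JDK's FileAlreadyExistsException standing in for the HDFS exception (illustrative names and a `Creator` stand-in for the HDFS client call, not the actual HBase fix): treat "file already exists" on a retry as evidence that the earlier create succeeded but its ack was lost, rather than aborting:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;

public class RetryingCreate {
  interface Creator { void create(String path) throws IOException; }

  static boolean createWithRetry(Creator c, String path, int attempts) {
    for (int i = 0; i < attempts; i++) {
      try {
        c.create(path);
        return true;
      } catch (FileAlreadyExistsException e) {
        // On a retry, "already exists" most likely means our earlier attempt
        // succeeded at the namenode but the response was lost: treat it as
        // success instead of aborting.
        if (i > 0) return true;
        return false;               // genuine conflict on the first attempt
      } catch (IOException e) {
        // lost response / transient failure: fall through and retry
      }
    }
    return false;                   // out of retries
  }
}
```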



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20368) Fix RIT stuck when a rsgroup has no online servers but AM's pendingAssginQueue is cleared

2019-06-27 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874137#comment-16874137
 ] 

HBase QA commented on HBASE-20368:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HBASE-20368 does not apply to branch-2. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-20368 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918886/HBASE-20368.branch-2.003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/586/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> Fix RIT stuck when a rsgroup has no online servers but AM's 
> pendingAssginQueue is cleared
> -
>
> Key: HBASE-20368
> URL: https://issues.apache.org/jira/browse/HBASE-20368
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-20368.branch-2.001.patch, 
> HBASE-20368.branch-2.002.patch, HBASE-20368.branch-2.003.patch
>
>
> This error can be reproduced by shutting down all servers in a rsgroups and 
> starting them soon afterwards. 
> The regions on this rsgroup will be reassigned, but there is no available 
> servers of this rsgroup.
> They will be added to AM's pendingAssginQueue, which AM will clear regardless 
> of the result of assigning in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20368) Fix RIT stuck when a rsgroup has no online servers but AM's pendingAssginQueue is cleared

2019-06-27 Thread Syeda Arshiya Tabreen (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874130#comment-16874130
 ] 

Syeda Arshiya Tabreen commented on HBASE-20368:
---

Any progress here? We also hit this issue in our test environment. I tried the 
patch and it works fine.

ping [~Xiaolin Ha] [~mdrob]. 

> Fix RIT stuck when a rsgroup has no online servers but AM's 
> pendingAssginQueue is cleared
> -
>
> Key: HBASE-20368
> URL: https://issues.apache.org/jira/browse/HBASE-20368
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-20368.branch-2.001.patch, 
> HBASE-20368.branch-2.002.patch, HBASE-20368.branch-2.003.patch
>
>
> This error can be reproduced by shutting down all servers in a rsgroups and 
> starting them soon afterwards. 
> The regions on this rsgroup will be reassigned, but there is no available 
> servers of this rsgroup.
> They will be added to AM's pendingAssginQueue, which AM will clear regardless 
> of the result of assigning in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] openinx commented on a change in pull request #336: HBASE-22580 Add a table attribute to make user scan snapshot feature configurable for table

2019-06-27 Thread GitBox
openinx commented on a change in pull request #336: HBASE-22580 Add a table 
attribute to make user scan snapshot feature configurable for table
URL: https://github.com/apache/hbase/pull/336#discussion_r298150480
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
 ##
 @@ -151,6 +157,8 @@ public void postStartMaster(ObserverContext<MasterCoprocessorEnvironment> c) throws
 .setColumnFamily(ColumnFamilyDescriptorBuilder
 
.newBuilder(SnapshotScannerHDFSAclStorage.HDFS_ACL_FAMILY).build());
 admin.modifyTable(builder.build());
 
 Review comment:
   After modifyTable succeeds, shouldn't aclTableInitialized also be set to 
true? IMO, once the acl table contains the HDFS_ACL_FAMILY, we should 
initialize the flag to true.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] openinx commented on a change in pull request #336: HBASE-22580 Add a table attribute to make user scan snapshot feature configurable for table

2019-06-27 Thread GitBox
openinx commented on a change in pull request #336: HBASE-22580 Add a table 
attribute to make user scan snapshot feature configurable for table
URL: https://github.com/apache/hbase/pull/336#discussion_r298151393
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
 ##
 @@ -513,12 +518,33 @@ private boolean isHdfsAclSet(Table aclTable, String 
userName, String namespace,
 return isSet;
   }
 
-  private boolean checkInitialized() {
+  @VisibleForTesting
+  boolean checkInitialized(Supplier<String> operation) {
 
 Review comment:
   Just use the string as the argument? I don't think we need the Supplier 
wrapper here...
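   For context on the trade-off (an aside, not part of the patch): a `Supplier<String>` defers building the message string to the failure path, which a minimal sketch (names illustrative) makes concrete:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class LazyMessageDemo {
  static final AtomicInteger messageBuilds = new AtomicInteger();

  static String describeOperation() {
    messageBuilds.incrementAndGet();        // count how often the string is built
    return "modifyTable someTable";
  }

  /** Mirrors the shape of checkInitialized(Supplier<String>). */
  static boolean check(boolean initialized, Supplier<String> operation) {
    if (initialized) {
      return true;                          // hot path: the supplier is never invoked
    }
    System.out.println("Skip set HDFS acls when " + operation.get());
    return false;
  }
}
```

   With a plain String argument, the message would be concatenated on every call, even when nothing is logged.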


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] openinx commented on a change in pull request #336: HBASE-22580 Add a table attribute to make user scan snapshot feature configurable for table

2019-06-27 Thread GitBox
openinx commented on a change in pull request #336: HBASE-22580 Add a table 
attribute to make user scan snapshot feature configurable for table
URL: https://github.com/apache/hbase/pull/336#discussion_r298151667
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
 ##
 @@ -513,12 +518,33 @@ private boolean isHdfsAclSet(Table aclTable, String 
userName, String namespace,
 return isSet;
   }
 
-  private boolean checkInitialized() {
+  @VisibleForTesting
+  boolean checkInitialized(Supplier<String> operation) {
 if (initialized) {
-  return true;
-} else {
-  return false;
+  if (aclTableInitialized) {
+return true;
+  } else {
+LOG.warn("Skip set HDFS acls because acl table is not initialized when 
" + operation.get());
+  }
 }
+return false;
+  }
+
+  private boolean needSetTableHdfsAcl(TablePermission tablePermission) throws 
IOException {
+return needSetTableHdfsAcl(tablePermission.getTableName(), () -> "")
+&& hdfsAclHelper.checkTablePermissionHasNoCfOrCq(tablePermission);
+  }
+
+  private boolean needSetTableHdfsAcl(TableName tableName, Supplier<String> operation)
 
 Review comment:
   Can we also remove the Supplier wrapper here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] openinx commented on a change in pull request #336: HBASE-22580 Add a table attribute to make user scan snapshot feature configurable for table

2019-06-27 Thread GitBox
openinx commented on a change in pull request #336: HBASE-22580 Add a table 
attribute to make user scan snapshot feature configurable for table
URL: https://github.com/apache/hbase/pull/336#discussion_r298155797
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclHelper.java
 ##
 @@ -447,28 +458,80 @@ private void setTableAcl(TableName tableName, Set<String> users)
 .collect(Collectors.toList());
   }
 
+  /**
+   * Return users with global read permission
+   * @return users with global read permission
+   * @throws IOException if an error occurred
+   */
+  private Set<String> getUsersWithGlobalReadAction() throws IOException {
+return 
getUsersWithReadAction(PermissionStorage.getGlobalPermissions(conf));
+  }
+
   /**
* Return users with namespace read permission
* @param namespace the namespace
+   * @param includeGlobal true if include users with global read action
* @return users with namespace read permission
* @throws IOException if an error occurred
*/
-  private Set<String> getUsersWithNamespaceReadAction(String namespace) throws 
IOException {
-return PermissionStorage.getNamespacePermissions(conf, 
namespace).entries().stream()
-.filter(entry -> entry.getValue().getPermission().implies(READ))
-.map(entry -> entry.getKey()).collect(Collectors.toSet());
+  Set<String> getUsersWithNamespaceReadAction(String namespace, boolean 
includeGlobal)
+  throws IOException {
+Set<String> users =
+getUsersWithReadAction(PermissionStorage.getNamespacePermissions(conf, 
namespace));
+if (includeGlobal) {
+  users.addAll(getUsersWithGlobalReadAction());
+}
+return users;
   }
 
   /**
* Return users with table read permission
* @param tableName the table
+   * @param includeNamespace true if include users with namespace read action
+   * @param includeGlobal true if include users with global read action
* @return users with table read permission
* @throws IOException if an error occurred
*/
-  private Set<String> getUsersWithTableReadAction(TableName tableName) throws 
IOException {
-return PermissionStorage.getTablePermissions(conf, 
tableName).entries().stream()
-.filter(entry -> entry.getValue().getPermission().implies(READ))
-.map(entry -> entry.getKey()).collect(Collectors.toSet());
+  Set<String> getUsersWithTableReadAction(TableName tableName, boolean 
includeNamespace,
+  boolean includeGlobal) throws IOException {
+Set<String> users =
+getUsersWithReadAction(PermissionStorage.getTablePermissions(conf, 
tableName));
+if (includeNamespace) {
+  users
+  
.addAll(getUsersWithNamespaceReadAction(tableName.getNamespaceAsString(), 
includeGlobal));
+}
+return users;
+  }
+
+  private Set<String>
+  getUsersWithReadAction(ListMultimap<String, UserPermission> permissionMultimap) {
+return permissionMultimap.entries().stream()
+.filter(entry -> checkUserPermission(entry.getValue())).map(entry -> 
entry.getKey())
+.collect(Collectors.toSet());
+  }
+
+  private boolean checkUserPermission(UserPermission userPermission) {
+boolean result = containReadAction(userPermission);
+if (result && userPermission.getPermission() instanceof TablePermission) {
+  result = checkTablePermissionHasNoCfOrCq((TablePermission) 
userPermission.getPermission());
+}
+return result;
+  }
+
+  boolean containReadAction(UserPermission userPermission) {
+return userPermission.getPermission().implies(Permission.Action.READ);
+  }
+
+  boolean checkTablePermissionHasNoCfOrCq(TablePermission tablePermission) {
+return !tablePermission.hasFamily() && !tablePermission.hasQualifier();
+  }
+
+  boolean isTableUserScanSnapshotEnabled(TableDescriptor tableDescriptor) {
+String value = tableDescriptor.getValue(USER_SCAN_SNAPSHOT_ENABLE);
+if (value != null && value.equals("true")) {
 
 Review comment:
   How about just:
   ```
   return Boolean.valueOf(value);
   ```
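   As a side note on the suggested one-liner (not from the thread itself): `Boolean.valueOf`/`Boolean.parseBoolean` are null-safe but also case-insensitive, so the one-liner is close to, though not strictly identical with, the original `value != null && value.equals("true")` check:

```java
public class ParseBooleanDemo {
  // The original, strict check from the patch.
  static boolean strict(String v) {
    return v != null && v.equals("true");
  }

  // The suggested one-liner: null-safe, but also accepts "TRUE", "True", etc.
  static boolean oneLiner(String v) {
    return Boolean.parseBoolean(v);
  }
}
```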


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] openinx commented on a change in pull request #336: HBASE-22580 Add a table attribute to make user scan snapshot feature configurable for table

2019-06-27 Thread GitBox
openinx commented on a change in pull request #336: HBASE-22580 Add a table 
attribute to make user scan snapshot feature configurable for table
URL: https://github.com/apache/hbase/pull/336#discussion_r298162008
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclHelper.java
 ##
 @@ -447,28 +458,80 @@ private void setTableAcl(TableName tableName, Set<String> users)
 .collect(Collectors.toList());
   }
 
+  /**
+   * Return users with global read permission
+   * @return users with global read permission
+   * @throws IOException if an error occurred
+   */
+  private Set<String> getUsersWithGlobalReadAction() throws IOException {
+return 
getUsersWithReadAction(PermissionStorage.getGlobalPermissions(conf));
+  }
+
   /**
* Return users with namespace read permission
* @param namespace the namespace
+   * @param includeGlobal true if include users with global read action
* @return users with namespace read permission
* @throws IOException if an error occurred
*/
-  private Set<String> getUsersWithNamespaceReadAction(String namespace) throws 
IOException {
-return PermissionStorage.getNamespacePermissions(conf, 
namespace).entries().stream()
-.filter(entry -> entry.getValue().getPermission().implies(READ))
-.map(entry -> entry.getKey()).collect(Collectors.toSet());
+  Set<String> getUsersWithNamespaceReadAction(String namespace, boolean 
includeGlobal)
+  throws IOException {
+Set<String> users =
+getUsersWithReadAction(PermissionStorage.getNamespacePermissions(conf, 
namespace));
+if (includeGlobal) {
+  users.addAll(getUsersWithGlobalReadAction());
+}
+return users;
   }
 
   /**
* Return users with table read permission
* @param tableName the table
+   * @param includeNamespace true if include users with namespace read action
+   * @param includeGlobal true if include users with global read action
* @return users with table read permission
* @throws IOException if an error occurred
*/
-  private Set<String> getUsersWithTableReadAction(TableName tableName) throws 
IOException {
-return PermissionStorage.getTablePermissions(conf, 
tableName).entries().stream()
-.filter(entry -> entry.getValue().getPermission().implies(READ))
-.map(entry -> entry.getKey()).collect(Collectors.toSet());
+  Set<String> getUsersWithTableReadAction(TableName tableName, boolean 
includeNamespace,
+  boolean includeGlobal) throws IOException {
+Set<String> users =
+getUsersWithReadAction(PermissionStorage.getTablePermissions(conf, 
tableName));
+if (includeNamespace) {
+  users
+  
.addAll(getUsersWithNamespaceReadAction(tableName.getNamespaceAsString(), 
includeGlobal));
+}
+return users;
+  }
+
+  private Set<String>
+  getUsersWithReadAction(ListMultimap<String, UserPermission> permissionMultimap) {
+return permissionMultimap.entries().stream()
+.filter(entry -> checkUserPermission(entry.getValue())).map(entry -> 
entry.getKey())
+.collect(Collectors.toSet());
+  }
+
+  private boolean checkUserPermission(UserPermission userPermission) {
+boolean result = containReadAction(userPermission);
+if (result && userPermission.getPermission() instanceof TablePermission) {
+  result = checkTablePermissionHasNoCfOrCq((TablePermission) 
userPermission.getPermission());
+}
+return result;
+  }
+
+  boolean containReadAction(UserPermission userPermission) {
+return userPermission.getPermission().implies(Permission.Action.READ);
+  }
+
+  boolean checkTablePermissionHasNoCfOrCq(TablePermission tablePermission) {
+return !tablePermission.hasFamily() && !tablePermission.hasQualifier();
+  }
+
+  boolean isTableUserScanSnapshotEnabled(TableDescriptor tableDescriptor) {
+String value = tableDescriptor.getValue(USER_SCAN_SNAPSHOT_ENABLE);
 
 Review comment:
   The flag should mean whether we will sync the table's access permissions to 
the HDFS file ACLs, right? I think we should use a clearer name.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] openinx commented on a change in pull request #336: HBASE-22580 Add a table attribute to make user scan snapshot feature configurable for table

2019-06-27 Thread GitBox
openinx commented on a change in pull request #336: HBASE-22580 Add a table 
attribute to make user scan snapshot feature configurable for table
URL: https://github.com/apache/hbase/pull/336#discussion_r298160417
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclController.java
 ##
 @@ -263,14 +264,32 @@ public void postDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
 }
   }
 
+  @Override
+  public void postModifyTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
+  TableName tableName, TableDescriptor oldDescriptor, TableDescriptor 
currentDescriptor)
+  throws IOException {
+if (needSetTableHdfsAcl(currentDescriptor, () -> "modifyTable " + 
tableName)
+&& !hdfsAclHelper.isTableUserScanSnapshotEnabled(oldDescriptor)) {
+  hdfsAclHelper.createTableDirectories(tableName);
+  Set<String> tableUsers = 
hdfsAclHelper.getUsersWithTableReadAction(tableName, false, false);
+  Set<String> users =
+  
hdfsAclHelper.getUsersWithNamespaceReadAction(tableName.getNamespaceAsString(), 
true);
+  users.addAll(tableUsers);
+  hdfsAclHelper.addTableAcl(tableName, users);
+  
SnapshotScannerHDFSAclStorage.addUserTableHdfsAcl(ctx.getEnvironment().getConnection(),
+tableUsers, tableName);
+} else if (!aclTableInitialized
+&& Bytes.equals(PermissionStorage.ACL_GLOBAL_NAME, 
tableName.getName())) {
+  aclTableInitialized = true;
 
 Review comment:
   Oh, you update aclTableInitialized here when adding HDFS_ACL_FAMILY to hbase:acl? Why not just update the flag once modifyTable completes successfully? I think that would be easier to understand.




[GitHub] [hbase] openinx commented on a change in pull request #336: HBASE-22580 Add a table attribute to make user scan snapshot feature configurable for table

2019-06-27 Thread GitBox
openinx commented on a change in pull request #336: HBASE-22580 Add a table 
attribute to make user scan snapshot feature configurable for table
URL: https://github.com/apache/hbase/pull/336#discussion_r298154042
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SnapshotScannerHDFSAclHelper.java
 ##
 @@ -447,28 +458,80 @@ private void setTableAcl(TableName tableName, Set<String> users)
 .collect(Collectors.toList());
   }
 
+  /**
+   * Return users with global read permission
+   * @return users with global read permission
+   * @throws IOException if an error occurred
+   */
+  private Set<String> getUsersWithGlobalReadAction() throws IOException {
+    return getUsersWithReadAction(PermissionStorage.getGlobalPermissions(conf));
+  }
+
   /**
* Return users with namespace read permission
* @param namespace the namespace
+   * @param includeGlobal true if include users with global read action
* @return users with namespace read permission
* @throws IOException if an error occurred
*/
-  private Set<String> getUsersWithNamespaceReadAction(String namespace) throws IOException {
-    return PermissionStorage.getNamespacePermissions(conf, namespace).entries().stream()
-        .filter(entry -> entry.getValue().getPermission().implies(READ))
-        .map(entry -> entry.getKey()).collect(Collectors.toSet());
+  Set<String> getUsersWithNamespaceReadAction(String namespace, boolean includeGlobal)
+      throws IOException {
+    Set<String> users =
+        getUsersWithReadAction(PermissionStorage.getNamespacePermissions(conf, namespace));
+    if (includeGlobal) {
+      users.addAll(getUsersWithGlobalReadAction());
+    }
+    return users;
   }
 
   /**
* Return users with table read permission
* @param tableName the table
+   * @param includeNamespace true if include users with namespace read action
+   * @param includeGlobal true if include users with global read action
* @return users with table read permission
* @throws IOException if an error occurred
*/
-  private Set<String> getUsersWithTableReadAction(TableName tableName) throws IOException {
-    return PermissionStorage.getTablePermissions(conf, tableName).entries().stream()
-        .filter(entry -> entry.getValue().getPermission().implies(READ))
-        .map(entry -> entry.getKey()).collect(Collectors.toSet());
+  Set<String> getUsersWithTableReadAction(TableName tableName, boolean includeNamespace,
+      boolean includeGlobal) throws IOException {
+    Set<String> users =
+        getUsersWithReadAction(PermissionStorage.getTablePermissions(conf, tableName));
+    if (includeNamespace) {
+      users.addAll(getUsersWithNamespaceReadAction(tableName.getNamespaceAsString(), includeGlobal));
+    }
+    return users;
+  }
+
+  private Set<String> getUsersWithReadAction(
+      ListMultimap<String, UserPermission> permissionMultimap) {
+    return permissionMultimap.entries().stream()
+        .filter(entry -> checkUserPermission(entry.getValue()))
+        .map(entry -> entry.getKey())
+        .collect(Collectors.toSet());
+  }
+
+  private boolean checkUserPermission(UserPermission userPermission) {
+    boolean result = containReadAction(userPermission);
+    if (result && userPermission.getPermission() instanceof TablePermission) {
+      result = checkTablePermissionHasNoCfOrCq((TablePermission) userPermission.getPermission());
+    }
+    return result;
+  }
+
+  boolean containReadAction(UserPermission userPermission) {
+    return userPermission.getPermission().implies(Permission.Action.READ);
+  }
+
+  boolean checkTablePermissionHasNoCfOrCq(TablePermission tablePermission) {
 
 Review comment:
   checkTablePermissionHasNoCfOrCq -> isFamilyOrColumnPermission? Return it as `tablePermission.hasFamily() || tablePermission.hasQualifier()` and negate it at the call site.
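The suggested rename can be sketched with a minimal stand-in class (the `TablePermission` below is a hypothetical simplification for illustration, not the real HBase class); the predicate is stated positively and negated by the caller:

```java
// Hypothetical stand-in for a table permission, just to illustrate the
// suggested rename: isFamilyOrColumnPermission() is true when the permission
// is scoped to a column family or qualifier, and callers negate it.
public class AclSketch {
  static class TablePermission {
    private final byte[] family;
    private final byte[] qualifier;
    TablePermission(byte[] family, byte[] qualifier) {
      this.family = family;
      this.qualifier = qualifier;
    }
    boolean hasFamily() { return family != null; }
    boolean hasQualifier() { return qualifier != null; }
  }

  // Suggested shape: a positive predicate, negated at the call site.
  static boolean isFamilyOrColumnPermission(TablePermission p) {
    return p.hasFamily() || p.hasQualifier();
  }

  public static void main(String[] args) {
    TablePermission tableWide = new TablePermission(null, null);
    TablePermission familyScoped = new TablePermission(new byte[] {'f'}, null);
    System.out.println(!isFamilyOrColumnPermission(tableWide));   // true: table-wide
    System.out.println(isFamilyOrColumnPermission(familyScoped)); // true: family-scoped
  }
}
```

A positive predicate name avoids the double negation (`!...HasNoCfOrCq`) that the original method forces on readers.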




[jira] [Commented] (HBASE-22016) Rewrite the block reading methods by using hbase.nio.ByteBuff

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874095#comment-16874095
 ] 

Hudson commented on HBASE-22016:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Rewrite the block reading methods by using hbase.nio.ByteBuff
> -
>
> Key: HBASE-22016
> URL: https://issues.apache.org/jira/browse/HBASE-22016
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22016.HBASE-21879.v1.patch, 
> HBASE-22016.HBASE-21879.v2.patch
>
>
> We had some useful discussion in HBASE-22005, so I opened a new JIRA for the 
> ByteBuffer block reading.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21916) Abstract an ByteBuffAllocator to allocate/free ByteBuffer in ByteBufferPool

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874093#comment-16874093
 ] 

Hudson commented on HBASE-21916:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Abstract an ByteBuffAllocator to allocate/free ByteBuffer in ByteBufferPool
> ---
>
> Key: HBASE-21916
> URL: https://issues.apache.org/jira/browse/HBASE-21916
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21916.HBASE-21879.v1.patch, 
> HBASE-21916.HBASE-21879.v10.patch, HBASE-21916.HBASE-21879.v2.patch, 
> HBASE-21916.HBASE-21879.v3.patch, HBASE-21916.HBASE-21879.v3.patch, 
> HBASE-21916.HBASE-21879.v4.patch, HBASE-21916.HBASE-21879.v5.patch, 
> HBASE-21916.HBASE-21879.v6.patch, HBASE-21916.HBASE-21879.v7.patch, 
> HBASE-21916.HBASE-21879.v8.patch, HBASE-21916.HBASE-21879.v9.patch, 
> HBASE-21916.v1.patch, HBASE-21916.v2.patch, HBASE-21916.v3.patch, 
> HBASE-21916.v4.patch, HBASE-21916.v5.patch
>
>
> Now our read/write path allocates ByteBuffers from the ByteBufferPool, but we 
> need to consider minSizeForReservoirUse for better utilization, and those 
> allocate/free APIs are static methods, which are not so good to use. 
> For HBASE-21879, we need a universal ByteBuffer allocator to manage all the 
> ByteBuffers through the entire read path, so I created this issue. 
> Will upload a patch to abstract a ByteBufAllocator.
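The pooled-allocator idea in the description can be sketched as follows; the names and API here are illustrative assumptions, not the actual ByteBuffAllocator that the patch introduces:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy sketch of the allocator idea: hand out pooled direct buffers up to a
// budget, fall back to plain heap allocation otherwise. Names are
// illustrative only; the real HBase allocator has a different API.
public class AllocatorSketch {
  private final int bufSize;
  private final int maxPooled;
  private final ConcurrentLinkedQueue<ByteBuffer> pool = new ConcurrentLinkedQueue<>();
  private int created = 0;

  AllocatorSketch(int bufSize, int maxPooled) {
    this.bufSize = bufSize;
    this.maxPooled = maxPooled;
  }

  synchronized ByteBuffer allocate() {
    ByteBuffer b = pool.poll();
    if (b != null) {
      return b;                                  // recycled from the pool
    }
    if (created < maxPooled) {
      created++;
      return ByteBuffer.allocateDirect(bufSize); // within budget: off-heap
    }
    return ByteBuffer.allocate(bufSize);         // budget exhausted: heap
  }

  void free(ByteBuffer b) {
    if (b.isDirect()) {                          // only pooled buffers return
      b.clear();
      pool.offer(b);
    }
  }

  public static void main(String[] args) {
    AllocatorSketch a = new AllocatorSketch(64, 1);
    ByteBuffer first = a.allocate();
    System.out.println(first.isDirect());      // true: from the pool budget
    System.out.println(a.allocate().isDirect()); // false: budget exhausted
    a.free(first);
    System.out.println(a.allocate() == first); // true: recycled
  }
}
```

Centralizing allocate/free behind one object (rather than static methods) is what lets the entire read path share a single buffer-management policy.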





[jira] [Commented] (HBASE-21957) Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874100#comment-16874100
 ] 

Hudson commented on HBASE-21957:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Unify refCount of BucketEntry and refCount of hbase.nio.ByteBuff into one
> -
>
> Key: HBASE-21957
> URL: https://issues.apache.org/jira/browse/HBASE-21957
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-21957.HBASE-21879.v1.patch, 
> HBASE-21957.HBASE-21879.v10.patch, HBASE-21957.HBASE-21879.v11.patch, 
> HBASE-21957.HBASE-21879.v11.patch, HBASE-21957.HBASE-21879.v2.patch, 
> HBASE-21957.HBASE-21879.v3.patch, HBASE-21957.HBASE-21879.v4.patch, 
> HBASE-21957.HBASE-21879.v5.patch, HBASE-21957.HBASE-21879.v6.patch, 
> HBASE-21957.HBASE-21879.v8.patch, HBASE-21957.HBASE-21879.v9.patch, 
> HBASE-21957.HBASE-21879.v9.patch
>
>
> After HBASE-12295, we have blocks with MemoryType.SHARED or 
> MemoryType.EXCLUSIVE; a block in the offheap BucketCache is shared and 
> has a reference count to track its life cycle. If no RPC references the 
> shared block, the block can be evicted. 
> After HBASE-21916, we introduced a refcount for ByteBuff, so I 
> think we can unify the two into one. I tried to fix this when preparing the patch 
> for HBASE-21879, but it seems it can be a separate sub-task, and it won't affect the 
> main logic of HBASE-21879, so I created a separate one.
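The unified reference count described above can be sketched with a toy counter; `RefCnt` and the deallocator hook are illustrative names, not the HBase API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy sketch of the unified idea: one reference counter shared by the cache
// entry and the ByteBuff view, with a deallocator run when it hits zero.
public class RefCntSketch {
  static class RefCnt {
    private final AtomicInteger cnt = new AtomicInteger(1); // owner holds one ref
    private final Runnable deallocator;
    RefCnt(Runnable deallocator) { this.deallocator = deallocator; }
    void retain() { cnt.incrementAndGet(); }
    boolean release() {
      if (cnt.decrementAndGet() == 0) {
        deallocator.run();  // last reference gone: reclaim the buffer
        return true;
      }
      return false;
    }
  }

  public static void main(String[] args) {
    final boolean[] freed = {false};
    RefCnt rc = new RefCnt(() -> freed[0] = true); // cache holds the ref
    rc.retain();                                   // an RPC takes a ref
    rc.release();                                  // RPC shipped
    System.out.println(freed[0]);                  // false: cache still holds it
    rc.release();                                  // cache evicts the entry
    System.out.println(freed[0]);                  // true: buffer reclaimed
  }
}
```

With one counter, "evictable from the cache" and "releasable by the RPC path" become the same question: has the count reached zero?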





[jira] [Commented] (HBASE-22159) ByteBufferIOEngine should support write off-heap ByteBuff to the bufferArray

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874098#comment-16874098
 ] 

Hudson commented on HBASE-22159:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ByteBufferIOEngine should support write off-heap ByteBuff to the bufferArray
> 
>
> Key: HBASE-22159
> URL: https://issues.apache.org/jira/browse/HBASE-22159
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22159.HBASE-21879.v1.patch, 
> HBASE-22159.HBASE-21879.v2.patch, HBASE-22159.HBASE-21879.v3.patch, 
> HBASE-22159.HBASE-21879.v4.patch, HBASE-22159.HBASE-21879.v5.patch, 
> HBASE-22159.HBASE-21879.v6.patch, HBASE-22159.HBASE-21879.v7.patch
>
>
> In ByteBufferIOEngine, we have the assert: 
> {code}
>   @Override
>   public void write(ByteBuffer srcBuffer, long offset) throws IOException {
> assert srcBuffer.hasArray();
> bufferArray.putMultiple(offset, srcBuffer.remaining(), srcBuffer.array(),
> srcBuffer.arrayOffset());
>   }
>   @Override
>   public void write(ByteBuff srcBuffer, long offset) throws IOException {
> // When caching block into BucketCache there will be single buffer 
> backing for this HFileBlock.
> // This will work for now. But from the DFS itself if we get DBB then 
> this may not hold true.
> assert srcBuffer.hasArray();
> bufferArray.putMultiple(offset, srcBuffer.remaining(), srcBuffer.array(),
> srcBuffer.arrayOffset());
>   }
> {code}
> We should remove the assert and allow writing an off-heap ByteBuff to the bufferArray.
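A hedged sketch of the proposed direction, using plain `java.nio` buffers as stand-ins for the engine's internals (the real engine writes through ByteBufferArray#putMultiple; the chunked-copy path here is just one assumed way to handle a source with no backing array):

```java
import java.nio.ByteBuffer;

// Sketch only: 'dest' stands in for the engine's backing buffer. The point is
// that a direct (off-heap) source has no backing array, so the hasArray()
// assert must give way to a copy path that works for both kinds of buffer.
public class OffheapWriteSketch {
  static void write(ByteBuffer src, ByteBuffer dest, int offset) {
    ByteBuffer s = src.duplicate(); // don't disturb the caller's position
    dest.position(offset);
    if (s.hasArray()) {
      // Heap source: bulk-copy straight from the backing array.
      dest.put(s.array(), s.arrayOffset() + s.position(), s.remaining());
    } else {
      // Direct source: no backing array, copy via a temporary chunk.
      byte[] tmp = new byte[Math.min(4096, s.remaining())];
      while (s.hasRemaining()) {
        int n = Math.min(tmp.length, s.remaining());
        s.get(tmp, 0, n);
        dest.put(tmp, 0, n);
      }
    }
  }

  public static void main(String[] args) {
    ByteBuffer offheap = ByteBuffer.allocateDirect(8);
    offheap.put(new byte[] {1, 2, 3, 4, 5, 6, 7, 8}).flip();
    ByteBuffer backing = ByteBuffer.allocate(16);
    write(offheap, backing, 4);
    System.out.println(backing.get(4));  // 1: first source byte landed at offset 4
    System.out.println(backing.get(11)); // 8: last source byte
  }
}
```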





[jira] [Commented] (HBASE-22005) Use ByteBuff's refcnt to track the life cycle of data block

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874096#comment-16874096
 ] 

Hudson commented on HBASE-22005:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Use ByteBuff's refcnt to track the life cycle of data block
> ---
>
> Key: HBASE-22005
> URL: https://issues.apache.org/jira/browse/HBASE-22005
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22005.HBASE-21879.v1.patch, 
> HBASE-22005.HBASE-21879.v2.patch, HBASE-22005.HBASE-21879.v3.patch, 
> HBASE-22005.HBASE-21879.v4.patch, HBASE-22005.HBASE-21879.v5.patch, 
> HBASE-22005.HBASE-21879.v6.patch, HBASE-22005.HBASE-21879.v7.patch, 
> HBASE-22005.HBASE-21879.v8.patch, HBASE-22005.HBASE-21879.v9.patch, 
> HBASE-22005.HBASE-21879.v9.patch, cell-encoding.jpeg
>
>






[jira] [Commented] (HBASE-22122) Change to release mob hfile's block after rpc server shipped response to client

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874103#comment-16874103
 ] 

Hudson commented on HBASE-22122:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Change to release mob hfile's block  after rpc server shipped response to 
> client   
> ---
>
> Key: HBASE-22122
> URL: https://issues.apache.org/jira/browse/HBASE-22122
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22122.HBASE-21879.v01.patch, 
> HBASE-22122.HBASE-21879.v02.patch, HBASE-22122.HBASE-21879.v03.patch, 
> unit-test.patch
>
>
> In HBASE-22005, there's a known bug [1], and I just copied the cell's 
> byte[] from the block to on-heap directly in HBASE-22005, so that HBASE-22005 
> could move forward. 
> I marked this as a TODO subtask to eliminate the offheap-to-heap copying 
> here.
> 1. 
> https://issues.apache.org/jira/browse/HBASE-22005?focusedCommentId=16803734=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16803734





[jira] [Commented] (HBASE-20894) Move BucketCache from java serialization to protobuf

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874099#comment-16874099
 ] 

Hudson commented on HBASE-20894:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Move BucketCache from java serialization to protobuf
> 
>
> Key: HBASE-20894
> URL: https://issues.apache.org/jira/browse/HBASE-20894
> Project: HBase
>  Issue Type: Task
>  Components: BucketCache
>Affects Versions: 2.0.0
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: 
> 0001-Write-the-CacheableDeserializerIdManager-index-into-.patch, 
> HBASE-20894.WIP-2.patch, HBASE-20894.WIP.patch, HBASE-20894.master.001.patch, 
> HBASE-20894.master.002.patch, HBASE-20894.master.003.patch, 
> HBASE-20894.master.004.patch, HBASE-20894.master.005.patch, 
> HBASE-20894.master.006.patch
>
>
> We should use a better serialization format instead of Java Serialization for 
> the BucketCache entry persistence.
> Suggested by Chris McCown, who does not appear to have a JIRA account.





[jira] [Commented] (HBASE-22598) Deprecated the hbase.ipc.server.reservoir.initial.buffer.size & hbase.ipc.server.reservoir.initial.max for HBase2.x compatibility

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874115#comment-16874115
 ] 

Hudson commented on HBASE-22598:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Deprecated the hbase.ipc.server.reservoir.initial.buffer.size & 
> hbase.ipc.server.reservoir.initial.max for HBase2.x compatibility
> -
>
> Key: HBASE-22598
> URL: https://issues.apache.org/jira/browse/HBASE-22598
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
>
> In https://github.com/apache/hbase/pull/301/files, we have a doc that says: 
> bq. In HBase3.x, the configs hbase.ipc.server.reservoir.initial.buffer.size 
> and hbase.ipc.server.reservoir.initial.max are deprecated now; instead please 
> use hbase.server.allocator.buffer.size and 
> hbase.server.allocator.max.buffer.count.
> In the current branch HBASE-21879, the two configs 
> hbase.ipc.server.reservoir.initial.buffer.size and 
> hbase.ipc.server.reservoir.initial.max won't have any effect now, so we should 
> deprecate those two instead of removing them, for better HBase2.x 
> compatibility.





[jira] [Commented] (HBASE-22491) Separate the heap HFileBlock and offheap HFileBlock because the heap block won't need refCnt and save into prevBlocks list before shipping

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874113#comment-16874113
 ] 

Hudson commented on HBASE-22491:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Separate the heap HFileBlock and offheap HFileBlock because the heap block 
> won't need refCnt and save into prevBlocks list before shipping
> --
>
> Key: HBASE-22491
> URL: https://issues.apache.org/jira/browse/HBASE-22491
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22491.HBASE-21879.v01.patch, 
> HBASE-22491.HBASE-21879.v02.patch
>
>
> In here [1], [~anoop.hbase] has a comment: 
> bq. There is a concern here. Even if the block is on an exclusive heap memory 
> area, we will keep ref to that in this list. In a Phoenix Aggregation kind of 
> use case where many blocks might get fetched and not immediately shipped, we 
> are keeping the ref unwantedly here for longer time. This makes the GC not 
> able to reclaim the heap memory area for the blocks. This might be a hidden 
> bomb IMO. Its not good to remove the MemType. Lets create the block with 
> memory type as EXCLUSIVE when the block data is on heap. The block might be 
> coming from LRU cache or by fetching the block data from HDFS into heap 
> memory area. When the block comes from off heap BC or if it is backed by a BB 
> from the pool (While reading from HDFS, read into pooled BB) lets create the 
> block with mem type as SHARED. Every block can have the retain and release 
> method but let the EXCLUSIVE types do a noop here.
> We had a discussion about this, and need to address two things in this jira: 
> 1. separate HFileBlock into shared and non-shared variants; 
> 2. make the retain/release of a non-shared block a noop, i.e. don't do 
> reference count changes for heap blocks.
> 1. https://github.com/apache/hbase/pull/257/files





[jira] [Commented] (HBASE-22612) Address the final overview reviewing comments of HBASE-21879

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874117#comment-16874117
 ] 

Hudson commented on HBASE-22612:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Address the final overview reviewing comments of HBASE-21879
> 
>
> Key: HBASE-22612
> URL: https://issues.apache.org/jira/browse/HBASE-22612
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0
>
>
> I've created a big PR (https://github.com/apache/hbase/pull/320) for 
> HBASE-21879, and [~Apache9] left some minor comments. 
> Will address those comments here.
> If others have comments about that PR, I will also address them here.





[jira] [Commented] (HBASE-22504) Optimize the MultiByteBuff#get(ByteBuffer, offset, len)

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874112#comment-16874112
 ] 

Hudson commented on HBASE-22504:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Optimize the MultiByteBuff#get(ByteBuffer, offset, len)
> ---
>
> Key: HBASE-22504
> URL: https://issues.apache.org/jira/browse/HBASE-22504
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22504.HBASE-21879.v01.patch
>
>
> In HBASE-22483, we saw that the BucketCacheWriter thread was quite busy 
> [^BucketCacheWriter-is-busy.png], and the flame graph also indicated that 
> ByteBufferArray#internalTransfer cost ~6% CPU (see 
> [async-prof-pid-25042-cpu-1.svg|https://issues.apache.org/jira/secure/attachment/12970294/async-prof-pid-25042-cpu-1.svg]), 
> because we used hbase.ipc.server.allocator.buffer.size=64KB, so each 
> HFileBlock will be backed by a MultiByteBuff: one 64KB offheap ByteBuffer 
> and one small heap ByteBuffer.
> The path depends on MultiByteBuff#get(ByteBuffer, offset, len) now: 
> {code:java}
> RAMQueueEntry#writeToCache
> |--> ByteBufferIOEngine#write
> |--> ByteBufferArray#internalTransfer
> |--> ByteBufferArray$WRITER
> |--> MultiByteBuff#get(ByteBuffer, offset, len)
> {code}
> The MultiByteBuff#get implementation is simple and crude now; we can optimize 
> this implementation:
> {code:java}
>   @Override
>   public void get(ByteBuffer out, int sourceOffset,
>   int length) {
> checkRefCount();
>   // Not used from real read path actually. So not going with
>   // optimization
> for (int i = 0; i < length; ++i) {
>   out.put(this.get(sourceOffset + i));
> }
>   }
> {code}
>  
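One way to optimize the byte-by-byte loop above is to bulk-copy one segment at a time. The sketch below models the MultiByteBuff's internal buffers as a uniform-size `ByteBuffer[]`, which is a simplifying assumption (the real MultiByteBuff can mix segment sizes, e.g. one 64KB offheap buffer plus a small heap buffer):

```java
import java.nio.ByteBuffer;

// Sketch of the optimization direction: locate the segment containing
// sourceOffset, then issue one bulk put() per segment instead of one
// put() per byte. 'items' stands in for MultiByteBuff's internal buffers.
public class MultiBuffGetSketch {
  static void get(ByteBuffer[] items, int itemSize, ByteBuffer out,
      int sourceOffset, int length) {
    int idx = sourceOffset / itemSize; // segment holding the start offset
    int off = sourceOffset % itemSize; // offset inside that segment
    while (length > 0) {
      int n = Math.min(itemSize - off, length);
      ByteBuffer src = items[idx].duplicate(); // leave the segment untouched
      src.position(off).limit(off + n);
      out.put(src);                            // one bulk copy per segment
      length -= n;
      idx++;
      off = 0;
    }
  }

  public static void main(String[] args) {
    ByteBuffer a = ByteBuffer.wrap(new byte[] {0, 1, 2, 3});
    ByteBuffer b = ByteBuffer.wrap(new byte[] {4, 5, 6, 7});
    ByteBuffer out = ByteBuffer.allocate(4);
    get(new ByteBuffer[] {a, b}, 4, out, 2, 4); // read spans both segments
    System.out.println(java.util.Arrays.toString(out.array())); // [2, 3, 4, 5]
  }
}
```

The per-segment bulk `put` lets the JDK use intrinsified array/buffer copies instead of a bounds-checked loop per byte, which is where the CPU savings would come from.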





[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874118#comment-16874118
 ] 

Hudson commented on HBASE-21879:


Results for branch branch-2
[build #2029 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the HFile into an on-heap 
> byte[], then copy that byte[] to the off-heap bucket cache asynchronously. 
> In my 100% get performance test, I also observed some frequent young GCs; 
> the largest memory footprint in the young gen should be the on-heap block 
> byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of 
> a byte[] to reduce young GC. We did not implement this before because the 
> older HDFS client had no ByteBuffer reading interface, but 2.7+ supports it, 
> so we can fix this now.
> Will provide a patch and some perf comparison for this. 
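The idea can be illustrated with plain JDK I/O — `FileChannel` here rather than the HDFS client whose 2.7+ ByteBuffer read path this issue targets — the destination buffer, possibly off-heap, is filled directly, so no temporary on-heap byte[] is created:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the idea behind HBASE-21879 using plain JDK I/O (not the HDFS
// client): read a "block" straight into a caller-supplied ByteBuffer, which
// may be direct (off-heap), so no intermediate on-heap byte[] is allocated.
public class DirectBlockRead {
  static int readBlock(FileChannel ch, ByteBuffer dest, long offset) throws IOException {
    int total = 0;
    while (dest.hasRemaining()) {
      int n = ch.read(dest, offset + total);   // positional read, no channel state
      if (n < 0) {
        break;                                 // EOF
      }
      total += n;
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    Path f = Files.createTempFile("block", ".bin");
    Files.write(f, new byte[]{10, 20, 30, 40, 50});
    try (FileChannel ch = FileChannel.open(f, StandardOpenOption.READ)) {
      ByteBuffer dest = ByteBuffer.allocateDirect(3);   // off-heap destination
      int n = readBlock(ch, dest, 1);
      dest.flip();
      System.out.println(n + " " + dest.get(0) + " " + dest.get(2)); // 3 20 40
    }
  }
}
```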





[jira] [Commented] (HBASE-22127) Ensure that the block cached in the LRUBlockCache offheap is allocated from heap

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874097#comment-16874097
 ] 

Hudson commented on HBASE-22127:




> Ensure that the block cached in the LRUBlockCache offheap is allocated from 
> heap
> 
>
> Key: HBASE-22127
> URL: https://issues.apache.org/jira/browse/HBASE-22127
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22127.HBASE-21879.v1.patch, 
> HBASE-22127.HBASE-21879.v2.patch, HBASE-22127.HBASE-21879.v3.patch, 
> HBASE-22127.HBASE-21879.v4.patch, HBASE-22127.HBASE-21879.v5.patch, 
> HBASE-22127.HBASE-21879.v6.patch, HBASE-22127.HBASE-21879.v7.patch, 
> HBASE-22127.HBASE-21879.v8.patch
>
>
> In [1], [~anoop.hbase] pointed out a critical problem; I paste it here: 
> bq. So if we read from HDFS into a pooled BB and then give it to the LRU 
> cache for caching (ya, mostly cache-on-read might be true), we will cache 
> the block which is backed by this pooled DBB? Unless the block is evicted, 
> this BB won't go back to the pool. I think this is something we can not live 
> with!! For the LRU cache the sizing itself is based on what % of heap size 
> we can grow, but here in effect we are occupying off-heap space for the 
> cached blocks. All the sizing assumptions and calculations go out of control!
> It's indeed a big problem here, so we can only make the block reference a 
> heap area if we use the LRU cache (both the LruBlockCache and 
> CombinedBlockCache cases). Or we could also make the LRU cache off-heap?
> I think we can introduce a switch indicating whether the LRU block cache is 
> off-heap or not; if heap, then copy those bytes from the ByteBuff to heap.
> https://reviews.apache.org/r/70153/diff/6?file=2133545#file2133545line398
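The "copy to heap before caching in LRU" behavior discussed above can be sketched like this, simplified to a plain `ByteBuffer` rather than HBase's `ByteBuff`/`HFileBlock` types:

```java
import java.nio.ByteBuffer;

// Sketch of the heap-copy guard discussed above (simplified to a plain
// ByteBuffer): before handing a block to an on-heap LRU cache, make sure its
// payload lives on heap, so a pooled direct buffer is never pinned by the cache.
public class HeapifyBlock {
  static ByteBuffer toHeap(ByteBuffer block) {
    if (!block.isDirect()) {
      return block;                       // already heap-backed, cache as-is
    }
    ByteBuffer heap = ByteBuffer.allocate(block.remaining());
    heap.put(block.duplicate());          // bulk copy; source buffer untouched
    heap.flip();
    return heap;                          // pooled buffer can be released now
  }

  public static void main(String[] args) {
    ByteBuffer direct = ByteBuffer.allocateDirect(4);
    direct.put(new byte[]{1, 2, 3, 4});
    direct.flip();
    ByteBuffer cached = toHeap(direct);
    System.out.println(cached.isDirect() + " " + cached.get(2)); // false 3
  }
}
```

After the copy, the original pooled buffer can go back to the allocator, keeping the LRU cache's heap-based sizing assumptions valid.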





[jira] [Commented] (HBASE-22090) The HFileBlock#CacheableDeserializer should pass ByteBuffAllocator to the newly created HFileBlock

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874105#comment-16874105
 ] 

Hudson commented on HBASE-22090:




> The HFileBlock#CacheableDeserializer should pass ByteBuffAllocator to the 
> newly created HFileBlock
> --
>
> Key: HBASE-22090
> URL: https://issues.apache.org/jira/browse/HBASE-22090
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22090.HBASE-21879.v01.patch, 
> HBASE-22090.HBASE-21879.v02.patch, HBASE-22090.HBASE-21879.v03.patch
>
>
> In HBASE-22005, we have the following TODO in 
> HFileBlock#CacheableDeserializer:
> {code}
>   public static final class BlockDeserializer implements 
> CacheableDeserializer<Cacheable> {
> private BlockDeserializer() {
> }
> @Override
> public HFileBlock deserialize(ByteBuff buf, boolean reuse, MemoryType 
> memType)
> throws IOException {
>// 
>   // TODO make the newly created HFileBlock use the off-heap allocator, 
> Need change the
>   // deserializer or change the deserialize interface.
>   return new HFileBlock(newByteBuff, usesChecksum, memType, offset, 
> nextBlockOnDiskSize, null,
>   ByteBuffAllocator.HEAP);
> }
> {code}
> We should use the global ByteBuffAllocator here rather than the HEAP 
> allocator; as the TODO says, we need to adjust the deserializer interface. 
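A sketch of the interface adjustment the TODO asks for, with deliberately simplified types (the real signatures use `Cacheable`, `ByteBuff`, and `ByteBuffAllocator`): thread the allocator through `deserialize(...)` instead of hard-wiring a heap allocator inside it.

```java
// Sketch of the interface change (simplified, hypothetical types): the caller
// passes the allocator into deserialize(...), so the rebuilt block can draw
// from the global (possibly off-heap, pooled) allocator rather than being
// hard-wired to heap allocation.
public class DeserializerSketch {
  interface Allocator {
    byte[] allocate(int size);
  }

  // The old behavior: allocator fixed to HEAP inside the deserializer.
  static final Allocator HEAP = size -> new byte[size];

  // The proposed shape: allocator is a parameter.
  static byte[] deserialize(byte[] serialized, Allocator alloc) {
    byte[] block = alloc.allocate(serialized.length);   // caller decides the pool
    System.arraycopy(serialized, 0, block, 0, serialized.length);
    return block;
  }

  public static void main(String[] args) {
    final int[] calls = {0};
    Allocator counting = size -> { calls[0]++; return new byte[size]; };
    byte[] out = deserialize(new byte[]{1, 2, 3}, counting);
    System.out.println(out.length + " " + calls[0]); // 3 1
  }
}
```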





[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874102#comment-16874102
 ] 

Hudson commented on HBASE-21937:




> Make the Compression#decompress can accept ByteBuff as input 
> -
>
> Key: HBASE-21937
> URL: https://issues.apache.org/jira/browse/HBASE-21937
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-21937.HBASE-21879.v1.patch, 
> HBASE-21937.HBASE-21879.v2.patch, HBASE-21937.HBASE-21879.v3.patch
>
>
> When decompressing a compressed block, we are also allocating a 
> HeapByteBuffer for the unpacked block. We should allocate a ByteBuff from 
> the global ByteBuffAllocator instead. Skimming the code, the key point is 
> that we need a ByteBuff decompress interface, not the following: 
> {code}
> # Compression.java
>   public static void decompress(byte[] dest, int destOffset,
>   InputStream bufferedBoundedStream, int compressedSize,
>   int uncompressedSize, Compression.Algorithm compressAlgo)
>   throws IOException {
>   //...
> }
> {code}
> Not very high priority; let me make the uncompressed block off-heap first. 
> In HBASE-22005, I ignored these unit tests: 
> 1. TestLoadAndSwitchEncodeOnDisk; 
> 2. TestHFileBlock#testPreviousOffset; 
> Need to resolve this issue and make those UTs work fine. 
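The interface shape being asked for can be sketched with `java.util.zip` rather than HBase's `Compression` class: the destination is a `ByteBuffer` (possibly direct), not a `byte[]`. A small on-heap chunk bridges the stream to the buffer here; on JDK 11+ `Inflater.inflate(ByteBuffer)` could remove even that copy.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Sketch of a ByteBuffer-destination decompress method (plain java.util.zip,
// not HBase's Compression class). The destination may be a direct buffer.
public class BuffDecompress {
  static void decompress(ByteBuffer dest, InputStream compressed, int uncompressedSize)
      throws IOException {
    try (InflaterInputStream in = new InflaterInputStream(compressed)) {
      byte[] chunk = new byte[512];             // small heap bridge
      int remaining = uncompressedSize;
      while (remaining > 0) {
        int n = in.read(chunk, 0, Math.min(chunk.length, remaining));
        if (n < 0) {
          throw new IOException("premature end of compressed stream");
        }
        dest.put(chunk, 0, n);                  // dest may be off-heap
        remaining -= n;
      }
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] raw = "hello block".getBytes("UTF-8");
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (DeflaterOutputStream dos = new DeflaterOutputStream(bos)) {
      dos.write(raw);                           // compress a sample payload
    }
    ByteBuffer dest = ByteBuffer.allocateDirect(raw.length);
    decompress(dest, new ByteArrayInputStream(bos.toByteArray()), raw.length);
    dest.flip();
    byte[] round = new byte[dest.remaining()];
    dest.get(round);
    System.out.println(new String(round, "UTF-8")); // hello block
  }
}
```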





[jira] [Commented] (HBASE-22211) Remove the returnBlock method because we can just call HFileBlock#release directly

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874101#comment-16874101
 ] 

Hudson commented on HBASE-22211:




> Remove the returnBlock  method because we can just call HFileBlock#release 
> directly
> ---
>
> Key: HBASE-22211
> URL: https://issues.apache.org/jira/browse/HBASE-22211
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22211.HBASE-21879.v01.patch, 
> HBASE-22211.HBASE-21879.v02.patch
>
>
> Once HBASE-21957 gets resolved, we can remove returnBlock in this issue. 





[jira] [Commented] (HBASE-22435) Add a UT to address the HFileBlock#heapSize() in TestHeapSize

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874108#comment-16874108
 ] 

Hudson commented on HBASE-22435:




> Add a UT to address the HFileBlock#heapSize() in TestHeapSize
> -
>
> Key: HBASE-22435
> URL: https://issues.apache.org/jira/browse/HBASE-22435
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22435.HBASE-21879.v1.patch, 
> HBASE-22435.HBASE-21879.v2.patch
>
>
> In HBASE-22005, I added a ByteBuffAllocator reference to HFileBlock but did 
> not increase the corresponding heap size in HFileBlock#heapSize(). So I 
> guess we have no UT for HFileBlock#heapSize(). 
> Will add a UT for this and also fix the heapSize change issue. 
> Other classes also need a UT:
> 1. HFileBlockIndex;
> 2. HFileContext





[jira] [Commented] (HBASE-21917) Make the HFileBlock#validateChecksum can accept ByteBuff as an input.

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874094#comment-16874094
 ] 

Hudson commented on HBASE-21917:




> Make the HFileBlock#validateChecksum can accept ByteBuff as an input.
> -
>
> Key: HBASE-21917
> URL: https://issues.apache.org/jira/browse/HBASE-21917
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-21917.HBASE-21879.v5.patch, 
> HBASE-21917.HBASE-21879.v6.patch, HBASE-21917.addendum.HBASE-21879.v7.patch, 
> HBASE-21917.v1.patch, HBASE-21917.v2.patch, HBASE-21917.v3.patch, 
> HBASE-21917.v4.patch
>
>
> I've tried to make a patch for HBASE-21879; most of the work seems fine, but 
> the trouble is: 
> HFileBlock#validateChecksum can only accept a ByteBuffer as its input, while 
> after HBASE-21916 we will use our self-defined ByteBuff (which can be a 
> SingleByteBuff or a MultiByteBuff). 
> Now we need to create our own ByteBuff checksum validation method; it should 
> not be so hard, but a separate issue will be clearer.
>  
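For plain ByteBuffers the JDK already supports this: `java.util.zip.CRC32.update(ByteBuffer)` (JDK 8+) checksums heap or direct buffers without an intermediate byte[] copy. A sketch of the idea — not HBase's actual validateChecksum, which uses its own ChecksumType and ByteBuff classes:

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Sketch of checksum validation that accepts a ByteBuffer directly:
// CRC32.update(ByteBuffer) consumes the buffer without copying it to an
// on-heap byte[] first, and works for direct (off-heap) buffers too.
public class BuffChecksum {
  static long crc(ByteBuffer data) {
    CRC32 c = new CRC32();
    c.update(data.duplicate());   // duplicate: don't move the caller's position
    return c.getValue();
  }

  static boolean validate(ByteBuffer data, long expected) {
    return crc(data) == expected;
  }

  public static void main(String[] args) {
    ByteBuffer heap = ByteBuffer.wrap(new byte[]{1, 2, 3, 4});
    ByteBuffer direct = ByteBuffer.allocateDirect(4);
    direct.put(new byte[]{1, 2, 3, 4});
    direct.flip();
    // Same bytes -> same checksum, regardless of heap vs direct backing.
    System.out.println(validate(direct, crc(heap))); // true
  }
}
```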





[jira] [Commented] (HBASE-22412) Improve the metrics in ByteBuffAllocator

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874107#comment-16874107
 ] 

Hudson commented on HBASE-22412:




> Improve the metrics in ByteBuffAllocator
> 
>
> Key: HBASE-22412
> URL: https://issues.apache.org/jira/browse/HBASE-22412
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22412.HBASE-21879.v1.patch, 
> HBASE-22412.HBASE-21879.v2.patch, HBASE-22412.HBASE-21879.v3.patch, JMX.png, 
> web-UI.png
>
>
> Address the comment in HBASE-22387: 
> bq. The ByteBuffAllocator#getFreeBufferCount will be O(N) complexity, 
> because the buffers here are a ConcurrentLinkedQueue. It's worth filing an 
> issue for this.
> Also, I think we should use the allocated bytes instead of the allocation 
> count to evaluate the heap allocation percentage, so that we can decide 
> whether the ByteBuffer is too small and whether we will have higher GC 
> pressure. Assume this case: the buffer size is 64KB and each time we have a 
> block of 65KB; then each block causes one heap allocation (1KB) and one pool 
> allocation (64KB). If we only consider the allocation count, the heap 
> allocation ratio will be 1 / (1 + 1) = 50%, but if we consider the allocated 
> bytes, the ratio will be 1KB / 65KB ≈ 1.5%.
> If the heap allocation percentage is less than 
> hbase.ipc.server.reservoir.minimal.allocating.size / 
> hbase.ipc.server.allocator.buffer.size, then the allocator works fine; 
> otherwise it's overloaded. 
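The arithmetic in the example above, as a small self-contained calculation:

```java
// Numbers from the example above: a 65KB block served by a pool of 64KB
// buffers yields one 64KB pool allocation plus one 1KB heap allocation.
// Counting allocations says 50% heap; counting bytes says about 1.5%.
public class AllocRatio {
  static double byCount(long heapAllocs, long poolAllocs) {
    return 100.0 * heapAllocs / (heapAllocs + poolAllocs);
  }

  static double byBytes(long heapBytes, long poolBytes) {
    return 100.0 * heapBytes / (heapBytes + poolBytes);
  }

  public static void main(String[] args) {
    System.out.printf("by count: %.1f%%%n", byCount(1, 1));            // 50.0%
    System.out.printf("by bytes: %.1f%%%n", byBytes(1024, 64 * 1024)); // 1.5%
  }
}
```

The bytes-based metric is what actually tracks GC pressure, since the heap side of a split allocation is tiny compared to the pooled side.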





[jira] [Commented] (HBASE-22531) The HFileReaderImpl#shouldUseHeap return the incorrect true when disabled BlockCache

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874114#comment-16874114
 ] 

Hudson commented on HBASE-22531:




> The HFileReaderImpl#shouldUseHeap return the incorrect true when disabled 
> BlockCache 
> -
>
> Key: HBASE-22531
> URL: https://issues.apache.org/jira/browse/HBASE-22531
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22531.HBASE-21879.v1.patch, 
> async-prof-pid-13311-alloc-4.svg, async-prof-pid-8590-alloc-2.svg
>
>
> I'm running a benchmark with the block cache disabled for the HBASE-21879 
> branch. Just curious why there are still so many heap allocations in the 
> heap allocation flame graph [async-prof-pid-13311-alloc-4.svg | 
> https://issues.apache.org/jira/secure/attachment/12970648/async-prof-pid-13311-alloc-4.svg].
>  Actually, I've set the following config, which means all allocations should 
> be off-heap, while they're not: 
> {code}
> # Disable the block cache
> hfile.block.cache.size=0
> # Let all allocations come from the pooled allocator
> hbase.ipc.server.reservoir.minimal.allocating.size=0
> {code}
> Checking the code, I found the problem here: 
> {code}
>   private boolean shouldUseHeap(BlockType expectedBlockType) {
> if (cacheConf.getBlockCache() == null) {
>   return false;
> } else if (!cacheConf.isCombinedBlockCache()) {
>   // Block to cache in LruBlockCache must be an heap one. So just 
> allocate block memory from
>   // heap for saving an extra off-heap to heap copying.
>   return true;
> }
> return expectedBlockType != null && !expectedBlockType.isData();
>   }
> {code}
> Note that CacheConfig#getBlockCache returns an Optional<BlockCache>, 
> which is always non-null: 
> {code}
>   /**
>* Returns the block cache.
>*
>* @return the block cache, or null if caching is completely disabled
>*/
>   public Optional<BlockCache> getBlockCache() {
> return Optional.ofNullable(this.blockCache);
>   }
> {code}
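A minimal reproduction of the bug: an Optional returned by `Optional.ofNullable` is itself never null, so the `== null` check can never detect a disabled cache; `isPresent()` is what the condition needs to test. Method names below are simplified stand-ins for the real ones:

```java
import java.util.Optional;

// Minimal reproduction of the bug described above: Optional.ofNullable never
// returns null itself, so a `== null` check on the Optional can never fire.
// The disabled-cache case must be tested with isPresent().
public class OptionalNullCheck {
  static Optional<Object> getBlockCache(Object blockCache) {
    return Optional.ofNullable(blockCache);   // same shape as CacheConfig
  }

  static boolean shouldUseHeapBuggy(Object cache) {
    return !(getBlockCache(cache) == null);   // always true, even when disabled
  }

  static boolean shouldUseHeapFixed(Object cache) {
    return getBlockCache(cache).isPresent();  // false when cache is disabled
  }

  public static void main(String[] args) {
    System.out.println(shouldUseHeapBuggy(null));  // true  (wrong)
    System.out.println(shouldUseHeapFixed(null));  // false (right)
  }
}
```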





[jira] [Commented] (HBASE-22422) Retain an ByteBuff with refCnt=0 when getBlock from LRUCache

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874109#comment-16874109
 ] 

Hudson commented on HBASE-22422:




> Retain an ByteBuff with refCnt=0 when getBlock from LRUCache
> 
>
> Key: HBASE-22422
> URL: https://issues.apache.org/jira/browse/HBASE-22422
> Project: HBase
>  Issue Type: Sub-task
>  Components: BlockCache
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: 0001-debug2.patch, 0001-debug2.patch, 0001-debug2.patch, 
> 0001-debug3.patch, 0001-debug4.patch, 
> HBASE-22422-qps-after-fix-the-zero-retain-bug.png, 
> HBASE-22422.HBASE-21879.v01.patch, HBASE-22422.HBASE-21879.v02.patch, 
> LRUBlockCache-getBlock.png, debug.patch, 
> failed-to-check-positive-on-web-ui.png, image-2019-05-15-12-00-03-641.png
>
>
> After running the YCSB scan/get benchmark in our XiaoMi cluster, we found 
> the get QPS dropped from 25000/s to hundreds per second in a cluster with 
> five nodes. 
> After enabling the debug log on the YCSB client side, I found the following 
> stacktrace; see 
> https://issues.apache.org/jira/secure/attachment/12968745/image-2019-05-15-12-00-03-641.png.
>  
> Looking into the stacktrace, I can confirm that the zero-refCnt block is an 
> intermediate index block; see [2] http://hbase.apache.org/images/hfilev2.png
> Need a patch to fix this. 
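The failure mode can be shown with a toy Netty-style reference counter — simplified names, not HBase's actual ByteBuff code: once refCnt hits 0 the backing memory may already be recycled, so a later `retain()` must fail rather than resurrect the buffer, which is exactly the exception surfaced at the YCSB client.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy version of Netty-style reference counting (simplified, hypothetical):
// a block handed out of a cache with refCnt already 0 is the failure mode
// described in this issue.
public class RefCnt {
  private final AtomicInteger cnt = new AtomicInteger(1);

  void retain() {
    // Check-then-act kept simple here; real implementations CAS in a loop.
    if (cnt.get() == 0) {
      throw new IllegalStateException("refCnt: 0, can not retain");
    }
    cnt.incrementAndGet();
  }

  /** @return true when this was the last reference and memory may be recycled */
  boolean release() {
    return cnt.decrementAndGet() == 0;
  }

  int refCnt() {
    return cnt.get();
  }

  public static void main(String[] args) {
    RefCnt block = new RefCnt();
    block.retain();                        // a reader takes an extra reference
    block.release();
    boolean recyclable = block.release();  // last reference dropped
    System.out.println(recyclable + " " + block.refCnt()); // true 0
    try {
      block.retain();                      // too late: must throw
    } catch (IllegalStateException e) {
      System.out.println("caught: " + e.getMessage());
    }
  }
}
```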





[jira] [Commented] (HBASE-22463) Some paths in HFileScannerImpl did not consider block#release which will exhaust the ByteBuffAllocator

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874110#comment-16874110
 ] 

Hudson commented on HBASE-22463:




> Some paths in HFileScannerImpl did not consider block#release  which will 
> exhaust the ByteBuffAllocator 
> 
>
> Key: HBASE-22463
> URL: https://issues.apache.org/jira/browse/HBASE-22463
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22463.HBASE-21879.v1.patch, 
> HBASE-22463.HBASE-21879.v1.patch, HBASE-22463.HBASE-21879.v1.patch, 
> HBASE-22463.HBASE-21879.v2.patch, HBASE-22463.HBASE-21879.v3.patch, 
> HBASE-22463.HBASE-21879.v4.patch, allocation-after-applied-patch-v2.png, 
> allocation-after-running-12h-with-patch-v4.png, use-share-type-memory.png
>
>
> While debugging HBASE-22422, I observed that the 
> ByteBuffAllocator#usedBufCount was always increasing and all direct 
> ByteBuffers would be exhausted, which led to many heap allocations. The 
> comment here [1] is also related to this problem.
> Checking the code path, HFileScannerImpl is the biggest suspect, so I 
> created this issue to address it.
> 1. 
> https://issues.apache.org/jira/browse/HBASE-22387?focusedCommentId=16838446&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16838446
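The suspected leak pattern, in a self-contained sketch with hypothetical names: any code path that returns without releasing a pooled block permanently grows the allocator's used-buffer count, and releasing in `finally` covers every exit path.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the leak pattern behind this issue (hypothetical names): if an
// early return skips block.release(), the pooled buffer is never returned
// and the allocator's used-buffer count only grows.
public class ScannerRelease {
  static final AtomicInteger usedBufCount = new AtomicInteger();

  static class Block {
    Block() { usedBufCount.incrementAndGet(); }      // taken from the pool
    void release() { usedBufCount.decrementAndGet(); } // returned to the pool
  }

  // Leaky: the early-return path never releases the block.
  static boolean seekLeaky(boolean miss) {
    Block b = new Block();
    if (miss) {
      return false;                                  // leak!
    }
    b.release();
    return true;
  }

  // Fixed: release on every exit path.
  static boolean seekFixed(boolean miss) {
    Block b = new Block();
    try {
      return !miss;
    } finally {
      b.release();
    }
  }

  public static void main(String[] args) {
    for (int i = 0; i < 3; i++) seekLeaky(true);
    System.out.println(usedBufCount.get());   // 3: three buffers leaked
    for (int i = 0; i < 3; i++) seekFixed(true);
    System.out.println(usedBufCount.get());   // still 3: no new leaks
  }
}
```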





[jira] [Commented] (HBASE-22483) It's better to use 65KB as the default buffer size in ByteBuffAllocator

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874111#comment-16874111
 ] 

Hudson commented on HBASE-22483:




> It's better to use 65KB as the default buffer size in ByteBuffAllocator
> ---
>
> Key: HBASE-22483
> URL: https://issues.apache.org/jira/browse/HBASE-22483
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: 121240.stack, BucketCacheWriter-is-busy.png, 
> checksum-stacktrace.png, with-buffer-size-64KB.png, with-buffer-size-65KB.png
>
>
> There are some reasons why it's better to choose 65KB as the default buffer 
> size: 
> 1. Almost all data blocks have a block size of 64KB + delta, where the delta 
> is very small and depends on the size of the last KeyValue. If we use the 
> default hbase.ipc.server.allocator.buffer.size=64KB, then each block will be 
> allocated as a MultiByteBuff: one 64KB DirectByteBuffer plus a delta-byte 
> HeapByteBuffer, and the HeapByteBuffer will increase GC pressure. Ideally, 
> we should let the data block be allocated as a SingleByteBuff: it has a 
> simpler data structure, faster access speed, and less heap usage. 
> 2. In my benchmark, I found some checksum stack traces (see 
> [checksum-stacktrace.png 
> |https://issues.apache.org/jira/secure/attachment/12969905/checksum-stacktrace.png]).
>  Since the blocks are MultiByteBuffs, we have to calculate the checksum via 
> a temporary heap copy (see HBASE-21917), while with a SingleByteBuff we can 
> speed up the checksum by calling Hadoop's checksum in the native lib, which 
> is much faster.
> 3. It seems the BucketCacheWriters were always busy because of the higher 
> cost of copying from a MultiByteBuff to a DirectByteBuffer. For a 
> SingleByteBuff we can just use unsafe array copying, while for a 
> MultiByteBuff we have to copy byte by byte.
> Anyway, I will give a benchmark for this. 
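Point 1 is simple ceiling arithmetic, sketched below: a 64KB-plus-delta block always spans two 64KB pool buffers (hence a MultiByteBuff) but fits in a single 65KB buffer (a SingleByteBuff).

```java
// Arithmetic behind the 65KB default suggested above: a data block of about
// 64KB plus a small delta spans two 64KB pool buffers, but fits in one 65KB
// buffer.
public class BufferSizing {
  static int buffersNeeded(int blockSize, int bufferSize) {
    return (blockSize + bufferSize - 1) / bufferSize;   // ceiling division
  }

  public static void main(String[] args) {
    int block = 64 * 1024 + 200;                         // 64KB + small delta
    System.out.println(buffersNeeded(block, 64 * 1024)); // 2 -> MultiByteBuff
    System.out.println(buffersNeeded(block, 65 * 1024)); // 1 -> SingleByteBuff
  }
}
```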





[jira] [Commented] (HBASE-22547) Align the config keys and add document for offheap read in HBase Book.

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874116#comment-16874116
 ] 

Hudson commented on HBASE-22547:




> Align the config keys and add document for offheap read in HBase Book.
> --
>
> Key: HBASE-22547
> URL: https://issues.apache.org/jira/browse/HBASE-22547
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22547.HBASE-21879.v1.patch
>
>
> We found many useful tips about off-heap reading while developing & testing 
> HBASE-21879; will prepare a doc for this.
> Some of them are in a Google doc now: 
> https://docs.google.com/document/d/1xSy9axGxafoH-Qc17zbD2Bd--rWjjI00xTWQZ8ZwI_E/edit?usp=sharing





[jira] [Commented] (HBASE-21921) Notify users if the ByteBufAllocator is always allocating ByteBuffers from heap which means the increacing GC pressure

2019-06-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16874104#comment-16874104
 ] 

Hudson commented on HBASE-21921:




> Notify users if the ByteBufAllocator is always allocating ByteBuffers from 
> heap which means increasing GC pressure
> --
>
> Key: HBASE-21921
> URL: https://issues.apache.org/jira/browse/HBASE-21921
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Minor
> Attachments: HBASE-21921.HBASE-21879.v01.patch, 
> HBASE-21921.HBASE-21879.v02.patch, HBASE-21921.HBASE-21879.v03.patch, 
> jmx-metrics.png, web-ui.png
>
>
> As the javadoc of ByteBuffAllocator says: 
> {code}
> There's possible that the desired memory size is large than ByteBufferPool 
> has, we'll downgrade to allocate ByteBuffers from heap which meaning the GC 
> pressure may increase again. Of course, an better way is increasing the 
> ByteBufferPool size if we detected this case. 
> {code}
> So I think we need some messages to remind the user that a larger 
> ByteBufferPool size may be better if the allocator allocates ByteBuffers from 
> heap frequently. 
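The kind of check this issue asks for can be sketched in a few lines. This is a hypothetical illustration, not HBase's actual ByteBuffAllocator API: class and method names (`HeapFallbackMonitor`, `recordHeapAllocation`, the 10% threshold) are made up for the example. It tracks how often allocation falls back to heap and flags when the ratio suggests the pool is undersized:

```java
import java.util.concurrent.atomic.AtomicLong;

public class HeapFallbackMonitor {
    private final AtomicLong poolAllocations = new AtomicLong();
    private final AtomicLong heapAllocations = new AtomicLong();
    private final double warnRatio; // warn when heap fallback exceeds this fraction

    public HeapFallbackMonitor(double warnRatio) {
        this.warnRatio = warnRatio;
    }

    public void recordPoolAllocation() { poolAllocations.incrementAndGet(); }
    public void recordHeapAllocation() { heapAllocations.incrementAndGet(); }

    /** Fraction of allocations that fell back to heap; 0 if nothing recorded yet. */
    public double heapRatio() {
        long heap = heapAllocations.get();
        long total = heap + poolAllocations.get();
        return total == 0 ? 0.0 : (double) heap / total;
    }

    /** True when the heap-fallback ratio suggests a larger ByteBufferPool is needed. */
    public boolean shouldWarn() {
        return heapRatio() > warnRatio;
    }

    public static void main(String[] args) {
        HeapFallbackMonitor m = new HeapFallbackMonitor(0.10);
        for (int i = 0; i < 90; i++) m.recordPoolAllocation();
        for (int i = 0; i < 10; i++) m.recordHeapAllocation();
        System.out.println(m.heapRatio());   // 0.1
        System.out.println(m.shouldWarn());  // false: not strictly above the threshold
        m.recordHeapAllocation();
        System.out.println(m.shouldWarn());  // true
    }
}
```

In the actual patch the signal is surfaced as JMX metrics and on the web UI (see the attached screenshots); the counter-plus-ratio idea is the same.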





[jira] [Commented] (HBASE-22492) HBase server doesn't preserve SASL sequence number on the network

2019-06-27 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874090#comment-16874090
 ] 

Duo Zhang commented on HBASE-22492:
---

There is no big difference at the RPC level between branch-1 and branch-2, so I 
think branch-2 should also be affected...

Will take a look if I have time...

> HBase server doesn't preserve SASL sequence number on the network
> -
>
> Key: HBASE-22492
> URL: https://issues.apache.org/jira/browse/HBASE-22492
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.1.2
> Environment: HDP 2.6.5.108-1
>  
>Reporter: Sébastien BARNOUD
>Assignee: Sébastien BARNOUD
>Priority: Major
> Fix For: 1.5.0, 1.3.6, 1.4.11
>
> Attachments: HBASE-22492.001.branch-1.patch, 
> HBASE-22492.002.branch-1.patch, HBASE-22492.003.branch-1.patch
>
>
> When auth-conf is enabled on RPC, the server encrypts the response in setResponse() 
> using saslServer. The generated cryptogram includes a sequence number managed 
> by saslServer. But then, when the response is sent over the network, the 
> sequence number order is not preserved.
> The client receives replies in the wrong order, leading to a log message from 
> DigestMD5Base:
> {code:java}
> sasl:1481  - DIGEST41:Unmatched MACs
> {code}
> Then the message is discarded, leading the client to a timeout.
> I propose a fix here: 
> [https://github.com/sbarnoud/hbase-release/commit/ce9894ffe0e4039deecd1ed51fa135f64b311d41]
> It seems that every HBase 1.x release is affected.
> This part of the code has been fully rewritten in HBase 2.x, and I haven't done 
> the analysis on HBase 2.x, which may also be affected.
>  
> Here is an extract of the client log, with traces I added to help me understand:
> {code:java}
> …
> 2019-05-28 12:53:48,644 DEBUG [Default-IPC-NioEventLoopGroup-1-32] 
> NettyRpcDuplexHandler:80  - callId: 5846 /192.163.201.65:58870 -> 
> dtltstap004.fr.world.socgen/192.163.201.72:16020
> 2019-05-28 12:53:48,651 INFO  [Default-IPC-NioEventLoopGroup-1-18] 
> NioEventLoop:101  - SG: Channel ready to read 1315913615 unsafe 1493023957 
> /192.163.201.65:44236 -> dtltstap008.fr.world.socgen/192.163.201.109:16020
> 2019-05-28 12:53:48,651 INFO  [Default-IPC-NioEventLoopGroup-1-18] 
> SaslUnwrapHandler:78  - SG: after unwrap:46 -> 29 for /192.163.201.65:44236 
> -> dtltstap008.fr.world.socgen/192.163.201.109:16020 seqNum 150
> 2019-05-28 12:53:48,652 DEBUG [Default-IPC-NioEventLoopGroup-1-18] 
> NettyRpcDuplexHandler:192  - callId: 5801 received totalSize:25 Message:20 
> scannerSize:(null)/192.163.201.65:44236 -> 
> dtltstap008.fr.world.socgen/192.163.201.109:16020
> 2019-05-28 12:53:48,652 INFO  [Default-IPC-NioEventLoopGroup-1-18] sasl:1481  
> - DIGEST41:Unmatched MACs
> 2019-05-28 12:53:48,652 WARN  [Default-IPC-NioEventLoopGroup-1-18] 
> SaslUnwrapHandler:70  - Sasl error (probably invalid MAC) detected for 
> /192.163.201.65:44236 -> dtltstap008.fr.world.socgen/192.163.201.109:16020 
> saslClient @4ac31121 ctx @14fb001d msg @140313192718406 len 118 
> data:1c^G?^P?3??h?k??"??x?$^_??^D;^]7^Es??Em?c?w^R^BL?x??omG?z?I???45}???dE?^\^S>D?^/4f?^^??
>  ?^Ed?D?kM^@^A^@^@^@? readerIndex 118 writerIndex 118 seqNum 
> 152{code}
>  We can see that the client unwraps the SASL message with sequence number 152 
> before sequence number 151 and fails with the unmatched MAC.
>  
> I opened a case with Oracle because we should get an error (and not have the 
> message silently ignored). That's because the JDK doesn't check integrity in the 
> right way.
> [https://github.com/openjdk/jdk/blob/master/src/java.security.sasl/share/classes/com/sun/security/sasl/digest/DigestMD5Base.java]
> The current JDK checks the HMAC before the sequence number and hides the 
> real error (bad sequence number) because SASL is stateful. The JDK should 
> check the sequence number FIRST and THEN the HMAC.
> When (and if) the JDK is patched, then, in accordance with 
> [https://www.ietf.org/rfc/rfc2831.txt], 
> we will get an exception in that case instead of having the message ignored.
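The ordering failure described above can be modeled in a few lines. This is a simplified, hypothetical model, not the JDK's DigestMD5Base implementation: each SASL peer tracks a monotonically increasing receive sequence number, so unwrapping messages out of order must fail, and checking the sequence number before the MAC (as the comment argues the JDK should) yields the clear "bad sequence number" diagnostic instead of "unmatched MACs":

```java
public class SaslSeqCheck {
    private int expectedSeq = 0;

    /**
     * Returns a diagnostic for an incoming wrapped message. The boolean stands in
     * for the MAC verification result, which in real SASL depends on the peer's
     * current sequence number matching the one used by the sender.
     */
    public String unwrap(int msgSeq, boolean macMatchesForThisSeq) {
        if (msgSeq != expectedSeq) {
            // Checking the sequence number first gives the real error.
            return "bad sequence number: expected " + expectedSeq + " got " + msgSeq;
        }
        if (!macMatchesForThisSeq) {
            return "unmatched MAC";
        }
        expectedSeq++;
        return "ok";
    }

    public static void main(String[] args) {
        SaslSeqCheck peer = new SaslSeqCheck();
        System.out.println(peer.unwrap(0, true)); // ok
        // A later message arrives before its predecessor, as with 152 before
        // 151 in the log excerpt above:
        System.out.println(peer.unwrap(2, false)); // bad sequence number: expected 1 got 2
    }
}
```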





[GitHub] [hbase] sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator   
URL: https://github.com/apache/hbase/pull/343#discussion_r298157221
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
 ##
 @@ -491,13 +482,13 @@ public void close() {
   connToClose = connections.values();
   connections.clear();
 }
-cleanupIdleConnectionTask.cancel(true);
 for (T conn : connToClose) {
-  conn.shutdown();
+  // conn may be null in case of cancellation
+  if (conn != null) conn.shutdown();
 
 Review comment:
   Indeed, and I didn't manage to reproduce the NPE, which was probably a 
consequence of all my different attempts while fixing HBASE-22492 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (HBASE-22620) When a cluster open replication,regionserver will not clean up the walLog references on zk due to no wal entry need to be replicated

2019-06-27 Thread leizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang resolved HBASE-22620.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

> When a cluster open replication,regionserver will not clean up the walLog 
> references on zk due to no wal entry need to be replicated
> 
>
> Key: HBASE-22620
> URL: https://issues.apache.org/jira/browse/HBASE-22620
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.2.4, 1.4.9
>Reporter: leizhang
>Priority: Major
> Fix For: 2.1.0
>
>
> When I enabled the replication feature on my HBase cluster (20 regionserver 
> nodes) and added a peer cluster, for example, I created a table with 3 regions 
> with REPLICATION_SCOPE set to 1, opened on 3 regionservers out of 20. With 
> no data (entry batches) to replicate, the remaining 17 nodes accumulate lots of 
> WAL references on the zk node 
> "/hbase/replication/rs/\{regionserver}/\{peerId}/" that will not be cleaned 
> up, which causes lots of WAL files on HDFS to not be cleaned up either. When 
> I checked my test cluster after about four months, it had accumulated about 
> 50,000 WAL files in the oldWALs directory on HDFS. The source code shows that 
> only when there is data to be replicated, and after some data has been shipped 
> by the source endpoint, is the useless-WAL-file check executed, cleaning their 
> references on zk so that the useless HDFS WAL files are cleaned up normally. 
> So I wonder: do we need another method to trigger the useless-WAL cleaning job in 
> a replication cluster? Maybe in the replication progress report scheduled 
> task (just like ReplicationStatisticsTask.class)?
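The alternative the reporter suggests, triggering cleanup from a scheduled task rather than only after a successful shipment, can be sketched as below. This is a hypothetical illustration: `WalRefCleanerTask` and its constructor are made up for the example, with the actual ZooKeeper-reference cleanup abstracted into a `Runnable`, mirroring how a task like ReplicationStatisticsTask is driven by a scheduler:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WalRefCleanerTask {
    private final Runnable cleanup; // stand-in for "clean useless WAL refs on zk"

    WalRefCleanerTask(Runnable cleanup) {
        this.cleanup = cleanup;
    }

    /** Schedules the cleanup to run periodically, independent of replication traffic. */
    ScheduledExecutorService start(long periodMs) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(cleanup, periodMs, periodMs, TimeUnit.MILLISECONDS);
        return ses;
    }

    public static void main(String[] args) throws Exception {
        java.util.concurrent.atomic.AtomicInteger runs =
            new java.util.concurrent.atomic.AtomicInteger();
        WalRefCleanerTask task = new WalRefCleanerTask(runs::incrementAndGet);
        ScheduledExecutorService ses = task.start(20);
        Thread.sleep(200);
        ses.shutdownNow();
        System.out.println(runs.get() >= 1); // cleanup ran even with no data shipped
    }
}
```

The point of the design is that the cleanup no longer depends on a WAL entry being replicated first, which is exactly the condition the idle 17 regionservers never meet.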





[jira] [Commented] (HBASE-22620) When a cluster open replication,regionserver will not clean up the walLog references on zk due to no wal entry need to be replicated

2019-06-27 Thread leizhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874063#comment-16874063
 ] 

leizhang commented on HBASE-22620:
--

Thank you very much! I checked the source code of HBase 2.1.0 and found that

entryReader.take() has been replaced by entryReader.poll(getEntriesTimeout);

so the thread will no longer block, will execute the following logic, and the 
problem is solved!
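The difference the comment describes can be modeled with a plain `BlockingQueue`. This is an illustrative sketch, loosely mirroring the description rather than HBase's actual reader classes: with `take()` the thread blocks forever when no WAL entries arrive, so the cleanup logic after the read never runs; with `poll(timeout)` a `null` return periodically hands control back to the loop:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollVsTake {
    /**
     * Runs a bounded read loop. Each null poll result stands in for the chance
     * to run the "clean useless WAL references" step that take() never reaches
     * on an idle queue.
     */
    public static int drainWithPoll(BlockingQueue<String> entries, int rounds,
                                    long timeoutMs) throws InterruptedException {
        int cleanups = 0;
        for (int i = 0; i < rounds; i++) {
            String batch = entries.poll(timeoutMs, TimeUnit.MILLISECONDS);
            if (batch == null) {
                // With entries.take(), execution would block here forever on
                // an idle regionserver and this branch would never run.
                cleanups++;
            }
        }
        return cleanups;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> empty = new ArrayBlockingQueue<>(8);
        System.out.println(drainWithPoll(empty, 3, 10)); // 3
    }
}
```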

> When a cluster open replication,regionserver will not clean up the walLog 
> references on zk due to no wal entry need to be replicated
> 
>
> Key: HBASE-22620
> URL: https://issues.apache.org/jira/browse/HBASE-22620
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.2.4, 1.4.9
>Reporter: leizhang
>Priority: Major
>
> When I enabled the replication feature on my HBase cluster (20 regionserver 
> nodes) and added a peer cluster, for example, I created a table with 3 regions 
> with REPLICATION_SCOPE set to 1, opened on 3 regionservers out of 20. With 
> no data (entry batches) to replicate, the remaining 17 nodes accumulate lots of 
> WAL references on the zk node 
> "/hbase/replication/rs/\{regionserver}/\{peerId}/" that will not be cleaned 
> up, which causes lots of WAL files on HDFS to not be cleaned up either. When 
> I checked my test cluster after about four months, it had accumulated about 
> 50,000 WAL files in the oldWALs directory on HDFS. The source code shows that 
> only when there is data to be replicated, and after some data has been shipped 
> by the source endpoint, is the useless-WAL-file check executed, cleaning their 
> references on zk so that the useless HDFS WAL files are cleaned up normally. 
> So I wonder: do we need another method to trigger the useless-WAL cleaning job in 
> a replication cluster? Maybe in the replication progress report scheduled 
> task (just like ReplicationStatisticsTask.class)?





[jira] [Updated] (HBASE-22640) Random init hstore lastFlushTime

2019-06-27 Thread Bing Xiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Xiao updated HBASE-22640:
--
Attachment: HBASE-22640-master-v1.patch

> Random init  hstore lastFlushTime
> -
>
> Key: HBASE-22640
> URL: https://issues.apache.org/jira/browse/HBASE-22640
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Major
> Fix For: 3.0.0, 2.2.1
>
> Attachments: HBASE-22640-master-v1.patch
>
>
> On region open, the current time is used as each hstore's last flush time. If 
> not enough data is put to trigger a memstore flush, then after flushCheckInterval 
> all memstores will flush together, bringing concentrated IO and compactions that 
> cause high request latency. So randomly initialize lastFlushTime.
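The jitter idea can be sketched as follows. This is an illustrative sketch, not the attached patch: instead of initializing every store's lastFlushTime to "now" on region open (so they all cross flushCheckInterval at the same moment), subtract a random offset so the periodic flushes spread out over the interval. The class and method names are made up for the example:

```java
import java.util.concurrent.ThreadLocalRandom;

public class FlushTimeJitter {
    /**
     * Returns an initial lastFlushTime in the half-open range
     * (now - flushCheckInterval, now], so each store's first periodic flush
     * lands at a different point within one interval.
     */
    public static long jitteredLastFlushTime(long nowMs, long flushCheckIntervalMs) {
        long offset = ThreadLocalRandom.current().nextLong(flushCheckIntervalMs);
        return nowMs - offset;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long interval = 3_600_000L; // 1 hour, HBase's default flushCheckInterval
        long t = jitteredLastFlushTime(now, interval);
        System.out.println(t > now - interval && t <= now); // true
    }
}
```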





[GitHub] [hbase] Apache-HBase commented on issue #344: HBASE-22632 SplitTableRegionProcedure and MergeTableRegionsProcedure …

2019-06-27 Thread GitBox
Apache-HBase commented on issue #344: HBASE-22632 SplitTableRegionProcedure and 
MergeTableRegionsProcedure …
URL: https://github.com/apache/hbase/pull/344#issuecomment-506319217
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 72 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 248 | master passed |
   | +1 | compile | 53 | master passed |
   | +1 | checkstyle | 74 | master passed |
   | +1 | shadedjars | 269 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 213 | master passed |
   | +1 | javadoc | 33 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 236 | the patch passed |
   | +1 | compile | 52 | the patch passed |
   | +1 | javac | 52 | the patch passed |
   | +1 | checkstyle | 77 | hbase-server: The patch generated 0 new + 6 
unchanged - 1 fixed = 6 total (was 7) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 271 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 731 | Patch does not cause any errors with Hadoop 2.8.5 
2.9.2 or 3.1.2. |
   | +1 | findbugs | 228 | the patch passed |
   | +1 | javadoc | 32 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 13020 | hbase-server in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 15955 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-344/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/344 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux baff8c16fd33 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / 0198868531 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-344/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-344/2/testReport/
 |
   | Max. process+thread count | 4790 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-344/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-22640) Random init hstore lastFlushTime

2019-06-27 Thread Bing Xiao (JIRA)
Bing Xiao created HBASE-22640:
-

 Summary: Random init  hstore lastFlushTime
 Key: HBASE-22640
 URL: https://issues.apache.org/jira/browse/HBASE-22640
 Project: HBase
  Issue Type: Improvement
Reporter: Bing Xiao
Assignee: Bing Xiao
 Fix For: 3.0.0, 2.2.1


On region open, the current time is used as each hstore's last flush time. If not 
enough data is put to trigger a memstore flush, then after flushCheckInterval all 
memstores will flush together, bringing concentrated IO and compactions that cause 
high request latency. So randomly initialize lastFlushTime.





[jira] [Created] (HBASE-22639) Unexpected split when a big table has only one region on a regionServer

2019-06-27 Thread Zheng Wang (JIRA)
Zheng Wang created HBASE-22639:
--

 Summary: Unexpected split when a big table has only one region on 
a regionServer 
 Key: HBASE-22639
 URL: https://issues.apache.org/jira/browse/HBASE-22639
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Zheng Wang


I am using the default policy named SteppingSplitPolicy.
If some nodes are restarted, this may occur, because the policy does not actually 
check whether the table is big enough.
It produces some unexpectedly small regions.
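The behavior being reported can be sketched with the policy's stepped threshold. This is a simplified model of SteppingSplitPolicy-style logic, not HBase's actual class: when a regionserver hosts only one region of a table, the split threshold drops to roughly twice the memstore flush size, so after a restart a big table's lone region on a server can split at a few hundred MB instead of the configured max file size. The constants below are illustrative defaults:

```java
public class SteppingSplitSketch {
    static final long FLUSH_SIZE = 128L * 1024 * 1024;           // 128 MB default
    static final long MAX_FILE_SIZE = 10L * 1024 * 1024 * 1024;  // 10 GB default

    /** Split threshold as a function of how many regions of the table this RS hosts. */
    static long splitThreshold(int regionCountOnThisServer) {
        // The small first step is what triggers the unexpected splits described
        // above: it applies regardless of how big the table is overall.
        return regionCountOnThisServer == 1 ? 2 * FLUSH_SIZE : MAX_FILE_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(splitThreshold(1)); // 268435456  (256 MB)
        System.out.println(splitThreshold(3)); // 10737418240 (10 GB)
    }
}
```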





[GitHub] [hbase] Apache9 commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache9 commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator
URL: https://github.com/apache/hbase/pull/343#discussion_r298120513
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorThreadPoolExecutor.java
 ##
 @@ -0,0 +1,197 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.util.Threads;
+
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.concurrent.*;
+import java.util.concurrent.atomic.AtomicLong;
+
+@SuppressWarnings("WeakerAccess")
+public class BufferedMutatorThreadPoolExecutor extends ThreadPoolExecutor {
 
 Review comment:
   So without using this pool, can we still get the 10x throughput? Or do we 
still get better performance, just not 10x?




[GitHub] [hbase] Apache9 commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache9 commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator
URL: https://github.com/apache/hbase/pull/343#discussion_r298119918
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
 ##
 @@ -491,13 +482,13 @@ public void close() {
   connToClose = connections.values();
   connections.clear();
 }
-cleanupIdleConnectionTask.cancel(true);
 for (T conn : connToClose) {
-  conn.shutdown();
+  // conn may be null in case of cancellation
+  if (conn != null) conn.shutdown();
 
 Review comment:
   The code you pasted here does not set the value to null, I think? If you 
actually hit an NPE, then I think there must be something wrong somewhere else. 
Looking at the code for PoolMap: which pool type do you use?




[GitHub] [hbase] Apache9 commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache9 commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator
URL: https://github.com/apache/hbase/pull/343#discussion_r298118397
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
 ##
 @@ -188,14 +186,6 @@ public AbstractRpcClient(Configuration conf, String 
clusterId, SocketAddress loc
 
 this.connections = new PoolMap<>(getPoolType(conf), getPoolSize(conf));
 
-this.cleanupIdleConnectionTask = IDLE_CONN_SWEEPER.scheduleAtFixedRate(new 
Runnable() {
 
 Review comment:
   It's BlockingRpcClient.




[GitHub] [hbase] Apache-HBase commented on issue #345: HBASE-22638 : Checkstyle changes for Zookeeper Utility classes

2019-06-27 Thread GitBox
Apache-HBase commented on issue #345: HBASE-22638 : Checkstyle changes for 
Zookeeper Utility classes
URL: https://github.com/apache/hbase/pull/345#issuecomment-506296937
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 153 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 274 | master passed |
   | +1 | compile | 16 | master passed |
   | +1 | checkstyle | 11 | master passed |
   | +1 | shadedjars | 264 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 29 | master passed |
   | +1 | javadoc | 13 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 241 | the patch passed |
   | +1 | compile | 16 | the patch passed |
   | +1 | javac | 16 | the patch passed |
   | +1 | checkstyle | 10 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 267 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 751 | Patch does not cause any errors with Hadoop 2.8.5 
2.9.2 or 3.1.2. |
   | +1 | findbugs | 34 | the patch passed |
   | +1 | javadoc | 13 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 45 | hbase-zookeeper in the patch passed. |
   | +1 | asflicense | 9 | The patch does not generate ASF License warnings. |
   | | | 2445 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-345/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/345 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 7d19871d0b19 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / 0198868531 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-345/1/testReport/
 |
   | Max. process+thread count | 294 (vs. ulimit of 1) |
   | modules | C: hbase-zookeeper U: hbase-zookeeper |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-345/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of 
BufferedMutator  
URL: https://github.com/apache/hbase/pull/343#issuecomment-506294061
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 54 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ branch-2.1 Compile Tests _ |
   | +1 | mvninstall | 295 | branch-2.1 passed |
   | +1 | compile | 30 | branch-2.1 passed |
   | +1 | checkstyle | 41 | branch-2.1 passed |
   | +1 | shadedjars | 297 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 74 | branch-2.1 passed |
   | +1 | javadoc | 26 | branch-2.1 passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 106 | root in the patch failed. |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -1 | checkstyle | 44 | hbase-client: The patch generated 158 new + 51 
unchanged - 1 fixed = 209 total (was 52) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedjars | 161 | patch has 11 errors when building our shaded 
downstream artifacts. |
   | -1 | hadoopcheck | 95 | The patch causes 11 errors with Hadoop v2.7.7. |
   | -1 | hadoopcheck | 189 | The patch causes 11 errors with Hadoop v2.8.5. |
   | -1 | hadoopcheck | 290 | The patch causes 11 errors with Hadoop v3.0.3. |
   | -1 | hadoopcheck | 390 | The patch causes 11 errors with Hadoop v3.1.2. |
   | -1 | findbugs | 92 | hbase-client generated 5 new + 0 unchanged - 0 fixed 
= 5 total (was 0) |
   | +1 | javadoc | 26 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 32 | hbase-client in the patch failed. |
   | +1 | asflicense | 12 | The patch does not generate ASF License warnings. |
   | | | 1784 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-client |
   |  |  
org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.sendMultiAction(Map, int, 
List, boolean) calls Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:[line 592] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.writeBufferPeriodicFlushTimer;
 locked 46% of time  Unsynchronized access at BufferedMutatorImpl.java:46% of 
time  Unsynchronized access at BufferedMutatorImpl.java:[line 236] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.doFlush(boolean) 
calls Thread.sleep() with a lock held  At BufferedMutatorImpl.java:lock held  
At BufferedMutatorImpl.java:[line 371] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush() does not 
release lock on all paths  At BufferedMutatorImpl.java:on all paths  At 
BufferedMutatorImpl.java:[line 302] |
   |  |  Dead store to f in 
org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At 
BufferedMutatorThreadPoolExecutor.java:org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At BufferedMutatorThreadPoolExecutor.java:[line 94] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/343 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux e3789c53aae6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | branch-2.1 / a172b480fe |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | mvninstall | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/3/artifact/out/patch-mvninstall-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/3/artifact/out/diff-checkstyle-hbase-client.txt
 |
   | shadedjars | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/3/artifact/out/patch-shadedjars.txt
 |
   | hadoopcheck | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/3/artifact/out/patch-javac-2.7.7.txt
 |
   | hadoopcheck | 

[GitHub] [hbase] sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator   
URL: https://github.com/apache/hbase/pull/343#discussion_r298106442
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
 ##
 @@ -491,13 +482,13 @@ public void close() {
   connToClose = connections.values();
   connections.clear();
 }
-cleanupIdleConnectionTask.cancel(true);
 for (T conn : connToClose) {
-  conn.shutdown();
+  // conn may be null in case of cancellation
+  if (conn != null) conn.shutdown();
 
 Review comment:
   removeValue just removes the value (setting it to null)
   
   org/apache/hadoop/hbase/util/PoolMap.java
   
   ```
 public boolean removeValue(K key, V value) {
   Pool pool = pools.get(key);
   boolean res = false;
   if (pool != null) {
 res = pool.remove(value);
 if (res && pool.size() == 0) {
   pools.remove(key);
 }
   }
   return res;
 }
   ```
   
   Just to be precise: I found this and added the if because I got an NPE at 
runtime. Not just for fun ...
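The defensive check under discussion can be illustrated in isolation. This is a hypothetical reduction of the patch hunk above, with the connection type abstracted to a tiny interface: if the collection of connections can transiently contain null (as the commenter reports observing at runtime), iterating and calling `shutdown()` on each element throws an NPE without the guard:

```java
import java.util.ArrayList;
import java.util.List;

public class NullGuardDemo {
    interface Conn {
        void shutdown();
    }

    /** Shuts down every non-null connection and returns how many were closed. */
    static int shutdownAll(List<Conn> conns) {
        int closed = 0;
        for (Conn c : conns) {
            if (c != null) { // the guard added in the patch under review
                c.shutdown();
                closed++;
            }
        }
        return closed;
    }

    public static void main(String[] args) {
        List<Conn> conns = new ArrayList<>();
        conns.add(() -> {});
        conns.add(null); // simulates the transient null the reporter hit
        conns.add(() -> {});
        System.out.println(shutdownAll(conns)); // 2
    }
}
```

Whether such a null can legitimately appear in `PoolMap.values()` is exactly the open question in this review thread; the sketch only shows why the guard prevents the crash.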




[GitHub] [hbase] virajjasani opened a new pull request #345: HBASE-22638 : Checkstyle changes for Zookeeper Utility classes

2019-06-27 Thread GitBox
virajjasani opened a new pull request #345: HBASE-22638 : Checkstyle changes 
for Zookeeper Utility classes
URL: https://github.com/apache/hbase/pull/345
 
 
   - final arguments for Constructor with args
   - try with resources
   - removal of redundant null check




[GitHub] [hbase] sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator   
URL: https://github.com/apache/hbase/pull/343#discussion_r298100176
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorThreadPoolExecutor.java
 ##
 @@ -0,0 +1,197 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.util.Threads;
+
+import java.lang.reflect.Field;
+import java.util.Map;
+import java.util.concurrent.*;
+import java.util.concurrent.atomic.AtomicLong;
+
+@SuppressWarnings("WeakerAccess")
+public class BufferedMutatorThreadPoolExecutor extends ThreadPoolExecutor {
 
 Review comment:
   It's available for the application. I leave the default pool intact ...
   
   ```
    BufferedMutatorThreadPoolExecutor pool =
        BufferedMutatorThreadPoolExecutor.getPoolExecutor(hadoopConf);

    BufferedMutatorParams mutatorParams = new BufferedMutatorParams(tableName)
        .listener(listener)
        .pool(pool);
   ```




[GitHub] [hbase] Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache-HBase commented on issue #343: HBASE-22634 : Improve performance of 
BufferedMutator  
URL: https://github.com/apache/hbase/pull/343#issuecomment-506281087
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 161 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ branch-2.1 Compile Tests _ |
   | +1 | mvninstall | 306 | branch-2.1 passed |
   | +1 | compile | 31 | branch-2.1 passed |
   | +1 | checkstyle | 42 | branch-2.1 passed |
   | +1 | shadedjars | 304 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 78 | branch-2.1 passed |
   | +1 | javadoc | 27 | branch-2.1 passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 106 | root in the patch failed. |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -1 | checkstyle | 44 | hbase-client: The patch generated 158 new + 51 
unchanged - 1 fixed = 209 total (was 52) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedjars | 160 | patch has 11 errors when building our shaded 
downstream artifacts. |
   | -1 | hadoopcheck | 96 | The patch causes 11 errors with Hadoop v2.7.7. |
   | -1 | hadoopcheck | 198 | The patch causes 11 errors with Hadoop v2.8.5. |
   | -1 | hadoopcheck | 299 | The patch causes 11 errors with Hadoop v3.0.3. |
   | -1 | hadoopcheck | 401 | The patch causes 11 errors with Hadoop v3.1.2. |
   | -1 | findbugs | 92 | hbase-client generated 6 new + 0 unchanged - 0 fixed 
= 6 total (was 0) |
   | +1 | javadoc | 27 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 33 | hbase-client in the patch failed. |
   | +1 | asflicense | 12 | The patch does not generate ASF License warnings. |
   | | | 1928 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-client |
   |  |  
org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.sendMultiAction(Map, int, 
List, boolean) calls Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:Thread.sleep() with a lock held  At 
AsyncRequestFutureImpl.java:[line 592] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.writeBufferPeriodicFlushTimer;
 locked 50% of time  Unsynchronized access at BufferedMutatorImpl.java:50% of 
time  Unsynchronized access at BufferedMutatorImpl.java:[line 298] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.close() calls 
Thread.sleep() with a lock held  At BufferedMutatorImpl.java:lock held  At 
BufferedMutatorImpl.java:[line 251] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.doFlush(boolean) 
calls Thread.sleep() with a lock held  At BufferedMutatorImpl.java:lock held  
At BufferedMutatorImpl.java:[line 368] |
   |  |  org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush() does not 
release lock on all paths  At BufferedMutatorImpl.java:on all paths  At 
BufferedMutatorImpl.java:[line 299] |
   |  |  Dead store to f in 
org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At 
BufferedMutatorThreadPoolExecutor.java:org.apache.hadoop.hbase.client.BufferedMutatorThreadPoolExecutor.beforeExecute(Thread,
 Runnable)  At BufferedMutatorThreadPoolExecutor.java:[line 86] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/343 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 734c932d4aec 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | branch-2.1 / a172b480fe |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | mvninstall | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/2/artifact/out/patch-mvninstall-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/2/artifact/out/diff-checkstyle-hbase-client.txt
 |
   | shadedjars | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-343/2/artifact/out/patch-shadedjars.txt
 |
   | 

[GitHub] [hbase] sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator   
URL: https://github.com/apache/hbase/pull/343#discussion_r298099152
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
 ##
 @@ -188,14 +186,6 @@ public AbstractRpcClient(Configuration conf, String clusterId, SocketAddress loc
 
     this.connections = new PoolMap<>(getPoolType(conf), getPoolSize(conf));
 
-    this.cleanupIdleConnectionTask = IDLE_CONN_SWEEPER.scheduleAtFixedRate(new Runnable() {
 
 Review comment:
   I didn't find any at the time I was doing the patch.




[jira] [Commented] (HBASE-22403) Balance in RSGroup should consider throttling and a failure affects the whole

2019-06-27 Thread Xiaolin Ha (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16873964#comment-16873964
 ] 

Xiaolin Ha commented on HBASE-22403:


I have tried running the UTs in hbase-rsgroup several times, and all passed.

The other failed UTs are not relevant to this patch.

Reattaching to let HBase QA try again.

> Balance in RSGroup should consider throttling and a failure affects the whole
> -
>
> Key: HBASE-22403
> URL: https://issues.apache.org/jira/browse/HBASE-22403
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.2.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-22403.branch-1.001.patch, 
> HBASE-22403.branch-2.2.001.patch, HBASE-22403.branch-2.2.002.patch, 
> HBASE-22403.master.001.patch, HBASE-22403.master.002.patch, 
> HBASE-22403.master.003.patch, HBASE-22403.master.004.patch
>
>
> balanceRSGroup(groupName) executes region move plans concurrently, which will 
> affect the availability of the relevant tables. And a single failed plan will 
> cause the whole balance to abort.
> As mentioned in the master balance issues, HBASE-17178, HBASE-21260



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22403) Balance in RSGroup should consider throttling and a failure affects the whole

2019-06-27 Thread Xiaolin Ha (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-22403:
---
Attachment: HBASE-22403.master.004.patch

> Balance in RSGroup should consider throttling and a failure affects the whole
> -
>
> Key: HBASE-22403
> URL: https://issues.apache.org/jira/browse/HBASE-22403
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.2.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-22403.branch-1.001.patch, 
> HBASE-22403.branch-2.2.001.patch, HBASE-22403.branch-2.2.002.patch, 
> HBASE-22403.master.001.patch, HBASE-22403.master.002.patch, 
> HBASE-22403.master.003.patch, HBASE-22403.master.004.patch
>
>
> balanceRSGroup(groupName) executes region move plans concurrently, which will 
> affect the availability of the relevant tables. And a single failed plan will 
> cause the whole balance to abort.
> As mentioned in the master balance issues, HBASE-17178, HBASE-21260





[jira] [Created] (HBASE-22638) Checkstyle changes for hbase-zookeeper util classes

2019-06-27 Thread Viraj Jasani (JIRA)
Viraj Jasani created HBASE-22638:


 Summary: Checkstyle changes for hbase-zookeeper util classes
 Key: HBASE-22638
 URL: https://issues.apache.org/jira/browse/HBASE-22638
 Project: HBase
  Issue Type: Improvement
  Components: Zookeeper
Affects Versions: 3.0.0
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Checkstyle and cosmetic changes for the ZooKeeper util classes - ZKUtil, 
MiniZooKeeperCluster, etc.





[GitHub] [hbase] openinx commented on a change in pull request #341: HBASE-22582 The Compaction writer may access the lastCell whose memor…

2019-06-27 Thread GitBox
openinx commented on a change in pull request #341: HBASE-22582 The Compaction 
writer may access the lastCell whose memor…
URL: https://github.com/apache/hbase/pull/341#discussion_r298089607
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -311,13 +311,9 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
     // logging at DEBUG level
     if (LOG.isDebugEnabled()) {
       if ((now - lastMillis) >= COMPACTION_PROGRESS_LOG_INTERVAL) {
-        LOG.debug("Compaction progress: "
-            + compactionName
-            + " "
-            + progress
-            + String.format(", rate=%.2f kB/sec",
-                (bytesWrittenProgressForLog / 1024.0) / ((now - lastMillis) / 1000.0))
-            + ", throughputController is "
-            + throughputController);
+        double rate = (bytesWrittenProgressForLog / 1024.0) / ((now - lastMillis) / 1000.0);
 
 Review comment:
   Emm... maybe I need to format the rate first before putting it as an arg in LOG.debug, thanks.
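   As a small illustration (a standalone sketch with made-up sample values, not the actual compactor code), computing the rate once and formatting it up front keeps the two-decimal rendering the old String.format call produced:

   ```java
   import java.util.Locale;

   public class RateFormatSketch {
     public static void main(String[] args) {
       // Hypothetical sample values for illustration only.
       long bytesWrittenProgressForLog = 5_242_880; // 5 MiB written since last log
       long now = 12_000, lastMillis = 2_000;       // a 10-second window

       // Compute the rate in kB/sec, then format to two decimals before
       // handing the ready string to the logger.
       double rate = (bytesWrittenProgressForLog / 1024.0) / ((now - lastMillis) / 1000.0);
       String formatted = String.format(Locale.ROOT, "%.2f", rate);
       System.out.println("rate=" + formatted + " kB/sec");
     }
   }
   ```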




[GitHub] [hbase] openinx commented on a change in pull request #341: HBASE-22582 The Compaction writer may access the lastCell whose memor…

2019-06-27 Thread GitBox
openinx commented on a change in pull request #341: HBASE-22582 The Compaction 
writer may access the lastCell whose memor…
URL: https://github.com/apache/hbase/pull/341#discussion_r298090954
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -331,6 +327,7 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
       "Interrupted while control throughput of compacting " + compactionName);
     } finally {
       throughputController.finish(compactionName);
+      ((ShipperListener) writer).beforeShipped();
 
 Review comment:
   Yeah, I will provide a UT to address this bug, but I am working on other things now... will update this patch later.




[GitHub] [hbase] anoopsjohn commented on a change in pull request #341: HBASE-22582 The Compaction writer may access the lastCell whose memor…

2019-06-27 Thread GitBox
anoopsjohn commented on a change in pull request #341: HBASE-22582 The 
Compaction writer may access the lastCell whose memor…
URL: https://github.com/apache/hbase/pull/341#discussion_r298087245
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -311,13 +311,9 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
     // logging at DEBUG level
     if (LOG.isDebugEnabled()) {
       if ((now - lastMillis) >= COMPACTION_PROGRESS_LOG_INTERVAL) {
-        LOG.debug("Compaction progress: "
-            + compactionName
-            + " "
-            + progress
-            + String.format(", rate=%.2f kB/sec",
-                (bytesWrittenProgressForLog / 1024.0) / ((now - lastMillis) / 1000.0))
-            + ", throughputController is "
-            + throughputController);
+        double rate = (bytesWrittenProgressForLog / 1024.0) / ((now - lastMillis) / 1000.0);
 
 Review comment:
   What about the String formatting on the double value which we were doing? That is missing in the log now. One more place below, too.




[GitHub] [hbase] anoopsjohn commented on a change in pull request #341: HBASE-22582 The Compaction writer may access the lastCell whose memor…

2019-06-27 Thread GitBox
anoopsjohn commented on a change in pull request #341: HBASE-22582 The 
Compaction writer may access the lastCell whose memor…
URL: https://github.com/apache/hbase/pull/341#discussion_r298086958
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -331,6 +327,7 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
       "Interrupted while control throughput of compacting " + compactionName);
     } finally {
       throughputController.finish(compactionName);
+      ((ShipperListener) writer).beforeShipped();
 
 Review comment:
   Below in Compactor, beforeShipped() is called before finishing the throughputController. Either order is OK, but we can still maintain a single consistent order.




[GitHub] [hbase] Apache9 commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache9 commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator
URL: https://github.com/apache/hbase/pull/343#discussion_r298086444
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncRequestFutureImpl.java
 ##
 @@ -543,22 +572,36 @@ void sendMultiAction(Map actionsByServer,
         && numAttempt % HConstants.DEFAULT_HBASE_CLIENT_RETRIES_NUMBER != 0) {
       runnable.run();
     } else {
-      try {
-        pool.submit(runnable);
-      } catch (Throwable t) {
-        if (t instanceof RejectedExecutionException) {
-          // This should never happen. But as the pool is provided by the end user,
-          // let's secure this a little.
-          LOG.warn("id=" + asyncProcess.id + ", task rejected by pool. Unexpected." +
-              " Server=" + server.getServerName(), t);
-        } else {
-          // see #HBASE-14359 for more details
-          LOG.warn("Caught unexpected exception/error: ", t);
+      boolean completed = false;
+      int nbTry = 0;
+      while (!completed) {
+        try {
+          ++nbTry;
+          pool.submit(runnable);
+          completed = true;
+        } catch (Throwable t) {
+          if (t instanceof RejectedExecutionException) {
 
 Review comment:
   Oh, for BufferedMutator we are using SynchronousQueue...
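   A self-contained sketch (toy tasks, not the patch itself) of why a SynchronousQueue-backed pool rejects a submit while its only worker is busy — there is no queue slot to buffer the task — and how a retry loop like the one proposed eventually succeeds:

   ```java
   import java.util.concurrent.CountDownLatch;
   import java.util.concurrent.RejectedExecutionException;
   import java.util.concurrent.SynchronousQueue;
   import java.util.concurrent.ThreadPoolExecutor;
   import java.util.concurrent.TimeUnit;

   public class RejectedRetrySketch {
     public static void main(String[] args) throws InterruptedException {
       // One worker, no buffering: a submit is rejected unless a worker is
       // free to take the task hand-off immediately.
       ThreadPoolExecutor pool = new ThreadPoolExecutor(
           1, 1, 0L, TimeUnit.MILLISECONDS, new SynchronousQueue<>());

       CountDownLatch release = new CountDownLatch(1);
       pool.submit(() -> {
         try { release.await(); } catch (InterruptedException ignored) { }
       });

       int nbTry = 0;
       boolean completed = false;
       while (!completed) {
         try {
           ++nbTry;
           pool.submit(() -> { });
           completed = true;
         } catch (RejectedExecutionException e) {
           // Worker still busy: unblock it, back off briefly, then retry.
           release.countDown();
           Thread.sleep(10);
         }
       }
       pool.shutdown();
       System.out.println("retried=" + (nbTry > 1));
     }
   }
   ```

   The first retry attempt is always rejected here because the single worker is parked on the latch, which is what makes the retry loop observable.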




[GitHub] [hbase] sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
sbarnoud commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator   
URL: https://github.com/apache/hbase/pull/343#discussion_r298086544
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/SimpleRequestController.java
 ##
 @@ -328,6 +346,16 @@ public void waitForFreeSlot(long id, int periodToTrigger, Consumer trigger
     waitForMaximumCurrentTasks(maxTotalConcurrentTasks - 1, id, periodToTrigger, trigger);
   }
 
+  @Override
+  public void waitForFreeSlot(int numberOfTask, long id, int periodToTrigger,
+      Consumer trigger) throws InterruptedIOException {
+    waitForMaximumCurrentTasks(maxTotalConcurrentTasks - numberOfTask, id,
+        periodToTrigger, trigger);
+  }
+
+  @Override
+  public void waitForAllFreeSlot(long id) throws InterruptedIOException {
+    waitForMaximumCurrentTasks(0, id, 100, null);
 
 Review comment:
   I changed it to 0. The period is unused, as the trigger is null.




[GitHub] [hbase] Apache9 commented on a change in pull request #343: HBASE-22634 : Improve performance of BufferedMutator

2019-06-27 Thread GitBox
Apache9 commented on a change in pull request #343: HBASE-22634 : Improve 
performance of BufferedMutator
URL: https://github.com/apache/hbase/pull/343#discussion_r298084608
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
 ##
 @@ -491,13 +482,13 @@ public void close() {
   connToClose = connections.values();
   connections.clear();
 }
-cleanupIdleConnectionTask.cancel(true);
 for (T conn : connToClose) {
-  conn.shutdown();
+  // conn may be null in case of cancellation
+  if (conn != null) conn.shutdown();
 
 Review comment:
   That's fine, you can just share your ideas on how to get the 10x throughput. 
But for an open source project we have to make sure that all the parts work 
correctly: you may only use A and so not care about B, but others may use B...
   
   And on the removal, it will completely remove the entry from the map, 
instead of setting the value to null, I believe?




[jira] [Commented] (HBASE-22403) Balance in RSGroup should consider throttling and a failure affects the whole

2019-06-27 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16873935#comment-16873935
 ] 

HBase QA commented on HBASE-22403:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
28s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
34s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m 20s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}287m  0s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 36s{color} 
| {color:red} hbase-rsgroup in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}370m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.client.TestSnapshotFromClientWithRegionReplicas |
|   | 
hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint |
|   | hadoop.hbase.client.TestFromClientSide |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/582/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22403 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973019/HBASE-22403.master.003.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux cab2cbb84dbc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
