[jira] [Commented] (HBASE-22009) Improve RSGroupInfoManagerImpl#getDefaultServers()

2019-03-13 Thread Xiang Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792406#comment-16792406
 ] 

Xiang Li commented on HBASE-22009:
--

[~stack] Could you please review the patch when you are available? By the way, 
is there anyone in the community focusing on rsgroup, or someone I could go to 
for JIRA issues on rsgroup?

> Improve RSGroupInfoManagerImpl#getDefaultServers()
> --
>
> Key: HBASE-22009
> URL: https://issues.apache.org/jira/browse/HBASE-22009
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-22009.master.000.patch
>
>
> {code:title=RSGroupInfoManagerImpl.java|borderStyle=solid}
> private SortedSet<Address> getDefaultServers() throws IOException {
>   SortedSet<Address> defaultServers = Sets.newTreeSet();
>   for (ServerName serverName : getOnlineRS()) {
> Address server = Address.fromParts(serverName.getHostname(), 
> serverName.getPort());
> boolean found = false;
> for (RSGroupInfo rsgi : listRSGroups()) {
>   if (!RSGroupInfo.DEFAULT_GROUP.equals(rsgi.getName()) && 
> rsgi.containsServer(server)) {
> found = true;
> break;
>   }
> }
> if (!found) {
>   defaultServers.add(server);
> }
>   }
>   return defaultServers;
> }
> {code}
> That is two nested loops, and for each online server, listRSGroups() 
> allocates a new LinkedList and calls Map#values(), both of which are heavy 
> operations.
> The inner loop could be moved out, that is:
> # Build a set of the servers that belong to groups other than the default group.
> # Iterate over the online servers and check each one against that set. If it 
> is not in the set, it belongs to the default group.
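The suggested single-pass approach can be sketched with plain Java collections. This is a simplified stand-in, not the actual patch: server addresses are modeled as Strings and the group membership as a Map, where the real code uses Address, RSGroupInfo, and listRSGroups().

```java
import java.util.*;

public class DefaultServersSketch {
  // Sketch of the proposed improvement: collect the servers of all
  // non-default groups once, then make a single pass over the online
  // servers. Types are simplified stand-ins (String for Address,
  // Map<String, Set<String>> for the group -> servers mapping).
  static SortedSet<String> getDefaultServers(Collection<String> onlineServers,
                                             Map<String, Set<String>> groups) {
    // Step 1: one pass over the groups builds the set of "claimed" servers.
    Set<String> serversInOtherGroups = new HashSet<>();
    for (Map.Entry<String, Set<String>> e : groups.entrySet()) {
      if (!"default".equals(e.getKey())) {
        serversInOtherGroups.addAll(e.getValue());
      }
    }
    // Step 2: any online server not claimed by another group is default.
    SortedSet<String> defaultServers = new TreeSet<>();
    for (String server : onlineServers) {
      if (!serversInOtherGroups.contains(server)) {
        defaultServers.add(server);
      }
    }
    return defaultServers;
  }

  public static void main(String[] args) {
    Map<String, Set<String>> groups = new HashMap<>();
    groups.put("default", new HashSet<>(Arrays.asList("a:16020")));
    groups.put("g1", new HashSet<>(Arrays.asList("b:16020")));
    // prints [a:16020, c:16020]
    System.out.println(getDefaultServers(
        Arrays.asList("a:16020", "b:16020", "c:16020"), groups));
  }
}
```

This replaces the per-server listRSGroups() call (and its LinkedList allocation) with a single pass over the groups, turning O(servers x groups) work into O(servers + groups).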



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21563) HBase Get Encounters java.lang.IndexOutOfBoundsException

2019-03-13 Thread tigren (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792401#comment-16792401
 ] 

tigren commented on HBASE-21563:


Same error here on HBase 1.4.8. It occurs when running a reverse scan on a 
table with the attribute "DATA_BLOCK_ENCODING": "FAST_DIFF".

Recreating the table with "DATA_BLOCK_ENCODING": "NONE" resolves the error.
{quote}java.io.IOException:
    java.io.IOException: java.lang.IndexOutOfBoundsException
    at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:6153)
    at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:6123)
    at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:6086)
    at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:6091)
    at 
org.apache.hadoop.hbase.regionserver.ReversedRegionScannerImpl.<init>(ReversedRegionScannerImpl.java:48)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2839)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2821)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2803)
    at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2797)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2697)
    at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3012)
    at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2369)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
    
    
Caused by: java.lang.IndexOutOfBoundsException
    at java.nio.Buffer.checkBounds(Buffer.java:567)
    at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:149)
    at 
org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:463)
    at 
org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeNext(FastDiffDeltaEncoder.java:520)
    at 
org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.seekToKeyInBlock(BufferedDataBlockEncoder.java:738)
    at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1356)
    at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:657)
    at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:601)
    at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:302)
    at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:497)
    at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.backwardSeek(StoreFileScanner.java:552)
    at 
org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.seekScanners(ReversedStoreScanner.java:83)
    at 
org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:224)
    at 
org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.<init>(ReversedStoreScanner.java:53)
    at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2201)
    at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:6112)
    ... 14 more
{quote}
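For reference, an untested workaround sketch at the shell level. The table and column-family names ('t1', 'cf') are hypothetical, and this alters the existing table rather than recreating it as described above; a major compaction is needed so existing FAST_DIFF-encoded blocks are rewritten:

```
hbase(main):001:0> alter 't1', { NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE' }
hbase(main):002:0> major_compact 't1'
```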

> HBase Get Encounters java.lang.IndexOutOfBoundsException
> 
>
> Key: HBASE-21563
> URL: https://issues.apache.org/jira/browse/HBASE-21563
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.2.0
>Reporter: William Shen
>Assignee: Zheng Hu
>Priority: Major
> Attachments: 67a04bc049be4f58afecdcc0a3ba62ca.tar.gz
>
>
> We've recently encountered an issue retrieving data from our HBase cluster, 
> and have not had much luck troubleshooting it. We narrowed it down to a 
> single GET, which appears to be caused by FastDiffDeltaEncoder.java 
> running into java.lang.IndexOutOfBoundsException. 
> Perhaps there is a bug in a corner case of FastDiffDeltaEncoder? 
> We are running 1.2.0-cdh5.9.2, and the GET in question is:
> {noformat}
> hbase(main):004:0> get 'qa2.ADGROUPS', 
> "\x05\x80\x00\x00\x00\x00\x1F\x54\x9C\x80\x00\x00\x00\x00\x1C\x7D\x45\x00\x04\x80\x00\x00\x00\x00\x1D\x0F\x19\x80\x00\x00\x00\x00\x4A\x64\x6F\x80\x00\x00\x00\x01\xD9\xDB\xCE"
> COLUMNCELL
>   
>   

[jira] [Commented] (HBASE-22022) nightly fails rat check down in the dev-support/hbase_nightly_source-artifact.sh check

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792396#comment-16792396
 ] 

Hudson commented on HBASE-22022:


Results for branch branch-2.2
[build #108 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/108/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/108//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/108//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/108//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> nightly fails rat check down in the 
> dev-support/hbase_nightly_source-artifact.sh check
> --
>
> Key: HBASE-22022
> URL: https://issues.apache.org/jira/browse/HBASE-22022
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4
>
> Attachments: HBASE-22022.branch-2.1.001.patch
>
>
> Nightlies include a nice check that runs through the rc-making steps. See 
> dev-support/hbase_nightly_source-artifact.sh. Currently the nightly is 
> failing here, which causes the nightly runs to fail even though often enough 
> all tests pass. It looks like the cause is the rat check. Unfortunately, 
> running the nightly script locally, all comes up smelling sweet -- it's a 
> context thing.





[jira] [Commented] (HBASE-22044) ByteBufferUtils should not be IA.Public API

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792388#comment-16792388
 ] 

Hudson commented on HBASE-22044:


Results for branch branch-1
[build #720 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/720/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/720//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/720//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/720//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> ByteBufferUtils should not be IA.Public API
> ---
>
> Key: HBASE-22044
> URL: https://issues.apache.org/jira/browse/HBASE-22044
> Project: HBase
>  Issue Type: Task
>  Components: compatibility, util
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22044-branch-1.v0.patch, 
> HBASE-22044-branch-2.v0.patch, HBASE-22044-master.v0.patch
>
>
> Came up in 1.5.0RC2 checking: we broke the API on ByteBufferUtils during 
> HBASE-20716 by removing a method.
> The whole class looks like internal utility stuff. Not sure why it's 
> IA.Public; it has been since we started labeling the API.
> This ticket tracks the clean-up:
> 1) Make it IA.Private in master/3.0
> 2) Mark it deprecated on branch-2 and branch-1, with an explanation that 
> it'll be Private in 3.0
> 3) Add back the missing method on branches prior to master/3.0





[jira] [Updated] (HBASE-22049) getReopenStatus() didn't skip counting split parent region

2019-03-13 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-22049:
-
Status: Patch Available  (was: Open)

> getReopenStatus() didn't skip counting split parent region
> --
>
> Key: HBASE-22049
> URL: https://issues.apache.org/jira/browse/HBASE-22049
> Project: HBase
>  Issue Type: Bug
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-22049.master.001.patch
>
>
> After we modify some attributes of a table, HBaseAdmin calls 
> getAlterStatus() to check whether every region's attributes have been 
> updated. It skips opened regions and split regions, as the following code 
> shows.
> {code}
> for (RegionState regionState: states) {
>   if (!regionState.isOpened() && !regionState.isSplit()) {
> ritCount++;
>   }
> }
> {code}
> But since the split procedure now unassigns the split parent region, its 
> state is CLOSED, and the check will hang until it times out.





[jira] [Commented] (HBASE-22048) Incorrect email links on website

2019-03-13 Thread Rishabh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792384#comment-16792384
 ] 

Rishabh Jain commented on HBASE-22048:
--

[~psomogyi] I want to take this up. Should I add "mailto:" as a prefix to each 
email address?

> Incorrect email links on website
> 
>
> Key: HBASE-22048
> URL: https://issues.apache.org/jira/browse/HBASE-22048
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Peter Somogyi
>Priority: Minor
>  Labels: beginner
>
> Project members' email addresses have incorrect links.
> [https://hbase.apache.org/team.html]
> Instead of [apach...@apache.org|mailto:apach...@apache.org] it points to 
> [https://hbase.apache.org/apach...@apache.org]
> This change might be related to ASF parent pom upgrade which changed the 
> maven-project-info-reports-plugin version.





[jira] [Commented] (HBASE-22022) nightly fails rat check down in the dev-support/hbase_nightly_source-artifact.sh check

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792383#comment-16792383
 ] 

Hudson commented on HBASE-22022:


Results for branch master
[build #859 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/859/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/859//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/859//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/859//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> nightly fails rat check down in the 
> dev-support/hbase_nightly_source-artifact.sh check
> --
>
> Key: HBASE-22022
> URL: https://issues.apache.org/jira/browse/HBASE-22022
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4
>
> Attachments: HBASE-22022.branch-2.1.001.patch
>
>
> Nightlies include a nice check that runs through the rc-making steps. See 
> dev-support/hbase_nightly_source-artifact.sh. Currently the nightly is 
> failing here, which causes the nightly runs to fail even though often enough 
> all tests pass. It looks like the cause is the rat check. Unfortunately, 
> running the nightly script locally, all comes up smelling sweet -- it's a 
> context thing.





[jira] [Commented] (HBASE-22044) ByteBufferUtils should not be IA.Public API

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792382#comment-16792382
 ] 

Hudson commented on HBASE-22044:


Results for branch master
[build #859 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/859/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/859//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/859//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/859//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ByteBufferUtils should not be IA.Public API
> ---
>
> Key: HBASE-22044
> URL: https://issues.apache.org/jira/browse/HBASE-22044
> Project: HBase
>  Issue Type: Task
>  Components: compatibility, util
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22044-branch-1.v0.patch, 
> HBASE-22044-branch-2.v0.patch, HBASE-22044-master.v0.patch
>
>
> Came up in 1.5.0RC2 checking: we broke the API on ByteBufferUtils during 
> HBASE-20716 by removing a method.
> The whole class looks like internal utility stuff. Not sure why it's 
> IA.Public; it has been since we started labeling the API.
> This ticket tracks the clean-up:
> 1) Make it IA.Private in master/3.0
> 2) Mark it deprecated on branch-2 and branch-1, with an explanation that 
> it'll be Private in 3.0
> 3) Add back the missing method on branches prior to master/3.0





[jira] [Commented] (HBASE-22038) fix building failures

2019-03-13 Thread Junhong Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792373#comment-16792373
 ] 

Junhong Xu commented on HBASE-22038:


{quote}Dockerfile changes look good to me. What's up with removing all of the 
proto files? Is the expectation that we pull them directly from 
hbase-protocol-shaded instead?{quote}
Yeah. As you see, the proto files were being copied from hbase-protocol 
instead of the right hbase-protocol-shaded, and the build cannot succeed even 
after the URLs are corrected: the error is that the protobuf files are 
inconsistent and duplicated for some messages. After discussing with 
[~zghaobac], it seems that hbase-protocol-shaded is the latest, so I changed 
the copy source dir from hbase-protocol to hbase-protocol-shaded. That way the 
client side and the server side always stay consistent, as with the Java 
client.
  As you suggest, I agree that the native client should be moved to another 
repo. In that situation, we should also keep them consistent in some other 
way, but that's another story. 

> fix building failures
> -
>
> Key: HBASE-22038
> URL: https://issues.apache.org/jira/browse/HBASE-22038
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Junhong Xu
>Assignee: Junhong Xu
>Priority: Minor
> Fix For: HBASE-14850
>
> Attachments: HBASE-22038.HBASE-14850.v01.patch, 
> HBASE-22038.HBASE-14850.v02.patch
>
>
> When building the HBase C++ client with the Dockerfile, it fails because 
> some URL resources are not found. But this patch only solves the problem 
> temporarily: if some dependent libraries are removed someday, the failure 
> will appear again. In the long run, we may need a base Docker image, 
> maintained by us, that contains all of these dependencies. 





[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792344#comment-16792344
 ] 

stack commented on HBASE-22029:
---

Ran a debug build... below is the prefix of the classpath when the hbase-it 
compile fails with the above exception. jersey-core comes before any other 
Response containers. Trying a patch that adds excludes of jersey-core in the 
hadoop2 profile. Interestingly, the hadoop3 profile already has jersey-core 
excludes, added in this commit:

{code}
 tree d197ccb1c3c073df109960a67636b3ed7bc3accf
 parent 672c440b9f30670d26b6266529973c61ffe2b5d5
 author Mike Drob  Thu Dec 14 09:19:34 2017 -0600
 committer Mike Drob  Fri Dec 15 13:20:54 2017 -0600

 HBASE-18838 Fix hadoop3 check-shaded-invariants
{code}

{code}
01:39 [DEBUG] -d /opt/hbase-rm/output/hbase/hbase-it/target/test-classes 
-classpath 
/opt/hbase-rm/output/hbase/hbase-it/target/test-classes:/opt/hbase-rm/output/hbase/hbase-it/target/

classes:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/directory/server/apacheds-core/2.0.0-M15/apacheds-core-2.0.0-M15.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/com/google/protobuf/
   
protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/opt/hbase-rm/output/hbase/hbase-zookeeper/target/classes:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/hadoop/hadoop-yarn-server-
  
nodemanager/2.7.7/hadoop-yarn-server-nodemanager-2.7.7.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/com/sun/jersey/jersey-json/1.9/jersey-json-1.9.jar:/opt/hbase-rm/output/hbase/hbase-
  
client/target/classes:/opt/hbase-rm/output/hbase-repo-P1hBf/org/jamon/jamon-runtime/2.4.1/jamon-runtime-2.4.1.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/eclipse/jetty/jetty-security/
  
9.3.19.v20170502/jetty-security-9.3.19.v20170502.jar:/opt/hbase-rm/output/hbase/hbase-rsgroup/target/classes:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/directory/api/api-ldap-
  
model/1.0.0-M20/api-ldap-model-1.0.0-M20.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/commons/commons-crypto/1.0.0/commons-crypto-1.0.0.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/
 
org/apache/curator/curator-client/4.0.0/curator-client-4.0.0.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/directory/api/api-i18n/1.0.0-M20/api-i18n-1.0.0-M20.jar:/opt/hbase-rm/
   
output/hbase-repo-P1hBf/org/apache/directory/server/apacheds-interceptors-authn/2.0.0-M15/apacheds-interceptors-authn-2.0.0-M15.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/
  
directory/server/apacheds-jdbm-partition/2.0.0-M15/apacheds-jdbm-partition-2.0.0-M15.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/hadoop/hadoop-hdfs/2.7.7/hadoop-hdfs-2.7.7.jar:/
 
opt/hbase-rm/output/hbase-repo-P1hBf/org/eclipse/jetty/jetty-servlet/9.3.19.v20170502/jetty-servlet-9.3.19.v20170502.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/codehaus/jackson/
   
jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/com/sun/jersey/jersey-core/1.9/jersey-core-1.9.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/bouncycastle/
 
bcprov-jdk15/140/bcprov-jdk15-140.jar:/opt/hbase-rm/output/hbase/hbase-rsgroup/target/test-classes:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/hadoop/hadoop-yarn-server-tests/2.7.7/
 
hadoop-yarn-server-tests-2.7.7-tests.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/../lib/tools.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/hadoop/hadoop-hdfs/2.7.7/hadoop-hdfs-2.7.
 
7-tests.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/commons-net/commons-net/3.1/commons-net-3.1.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/glassfish/hk2/hk2-locator/2.5.0-b32/hk2-
   
locator-2.5.0-b32.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/directory/api/api-ldap-client-api/1.0.0-M20/api-ldap-client-api-1.0.0-M20.jar:/opt/hbase-rm/output/hbase/hbase-
 
common/target/classes:/opt/hbase-rm/output/hbase-repo-P1hBf/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/tukaani/xz/1.0/xz-1.0.jar:/opt/
   
hbase-rm/output/hbase-repo-P1hBf/org/glassfish/jersey/media/jersey-media-jaxb/2.25.1/jersey-media-jaxb-2.25.1.jar:/opt/hbase-rm/output/hbase/hbase-mapreduce/target/classes:/opt/hbase-rm/
 
output/hbase-repo-P1hBf/org/apache/directory/api/api-ldap-extras-util/1.0.0-M20/api-ldap-extras-util-1.0.0-M20.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/directory/api/api-
 
ldap-extras-aci/1.0.0-M20/api-ldap-extras-aci-1.0.0-M20.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/glassfish/hk2/osgi-resource-locator/1.0.1/osgi-resource-locator-1.0.1.jar:/opt/
  
hbase-rm/output/hbase/hbase-common/target/test-classes:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/directory/jdbm/apacheds-jdbm1/2.0.0-M2/apacheds-jdbm1-2.0.0-M2.jar:/opt/hbase-rm/
  
output/hbase/hbase-zookeeper/target/test-classes:/opt/hbase-rm/output/hbase-repo-P1hBf/org/apache/avro/avro/1.7.7/avro-1.7.7.jar:/opt/hbase-rm/output/hbase-repo-P1hBf/org/mortbay/

[jira] [Commented] (HBASE-21964) unset Quota by Throttle Type

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792329#comment-16792329
 ] 

Hadoop QA commented on HBASE-21964:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
44s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m  
6s{color} | {color:red} The patch generated 15 new + 71 unchanged - 18 fixed = 
86 total (was 89) {color} |
| {color:orange}-0{color} | {color:orange} ruby-lint {color} | {color:orange}  
0m  2s{color} | {color:orange} The patch generated 25 new + 133 unchanged - 6 
fixed = 158 total (was 139) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
33s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
5s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}138m 
44s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
10s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 3s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}198m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21964 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962417/HBASE-21964.master.004.patch
 |
| Optional Tests |  dupname  asfl

[jira] [Updated] (HBASE-22049) getReopenStatus() didn't skip counting split parent region

2019-03-13 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-22049:
-
Attachment: HBASE-22049.master.001.patch

> getReopenStatus() didn't skip counting split parent region
> --
>
> Key: HBASE-22049
> URL: https://issues.apache.org/jira/browse/HBASE-22049
> Project: HBase
>  Issue Type: Bug
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-22049.master.001.patch
>
>
> After we modify some attributes of a table, HBaseAdmin calls 
> getAlterStatus() to check whether every region's attributes have been 
> updated. It skips opened regions and split regions, as the following code 
> shows.
> {code}
> for (RegionState regionState: states) {
>   if (!regionState.isOpened() && !regionState.isSplit()) {
> ritCount++;
>   }
> }
> {code}
> But since the split procedure now unassigns the split parent region, its 
> state is CLOSED, and the check will hang until it times out.





[jira] [Commented] (HBASE-22049) getReopenStatus() didn't skip counting split parent region

2019-03-13 Thread Jingyun Tian (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792300#comment-16792300
 ] 

Jingyun Tian commented on HBASE-22049:
--

The problem still exists, because the current procedure doesn't update 
state=split to META. Once the Master restarts, this check will still fail 
until the CatalogJanitor cleans the parent region.
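The failure mode can be sketched with simplified stand-in types (the real RegionState lives in org.apache.hadoop.hbase.master; the splitParent flag here is a hypothetical marker for "this region is a split parent"):

```java
import java.util.*;

public class ReopenStatusSketch {
  // Simplified stand-in for org.apache.hadoop.hbase.master.RegionState.
  enum State { OPEN, CLOSED, SPLIT }

  static class RegionState {
    final State state;
    final boolean splitParent; // hypothetical flag: region is a split parent
    RegionState(State state, boolean splitParent) {
      this.state = state;
      this.splitParent = splitParent;
    }
  }

  // A split parent ends up CLOSED, so the plain isOpened()/isSplit() test
  // counts it as a region in transition forever. Skipping split parents
  // explicitly avoids the hang, since they will never reopen.
  static int countRegionsInTransition(List<RegionState> states) {
    int ritCount = 0;
    for (RegionState rs : states) {
      if (rs.splitParent) {
        continue; // split parents will never reopen; don't count them
      }
      if (rs.state != State.OPEN && rs.state != State.SPLIT) {
        ritCount++;
      }
    }
    return ritCount;
  }

  public static void main(String[] args) {
    List<RegionState> states = Arrays.asList(
        new RegionState(State.OPEN, false),
        new RegionState(State.CLOSED, true),   // split parent stuck in CLOSED
        new RegionState(State.CLOSED, false)); // genuinely reopening
    System.out.println(countRegionsInTransition(states)); // 1
  }
}
```

Without the splitParent skip, the CLOSED split parent would be counted and the alter-status check would never reach zero, matching the timeout described above.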

> getReopenStatus() didn't skip counting split parent region
> --
>
> Key: HBASE-22049
> URL: https://issues.apache.org/jira/browse/HBASE-22049
> Project: HBase
>  Issue Type: Bug
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
>
> After we modify some attributes of a table, HBaseAdmin calls 
> getAlterStatus() to check whether every region's attributes have been 
> updated. It skips opened regions and split regions, as the following code 
> shows.
> {code}
> for (RegionState regionState: states) {
>   if (!regionState.isOpened() && !regionState.isSplit()) {
> ritCount++;
>   }
> }
> {code}
> But since the split procedure now unassigns the split parent region, its 
> state is CLOSED, and the check will hang until it times out.





[jira] [Commented] (HBASE-22031) Provide RSGroupInfo with a new constructor using shallow copy

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792275#comment-16792275
 ] 

Hadoop QA commented on HBASE-22031:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
27s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 42s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-22031 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962369/HBASE-22031.master.002.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 8e63c8b7d6f7 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision 

[jira] [Updated] (HBASE-22031) Provide RSGroupInfo with a new constructor using shallow copy

2019-03-13 Thread Xiang Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-22031:
-
Status: Patch Available  (was: Open)

> Provide RSGroupInfo with a new constructor using shallow copy
> -
>
> Key: HBASE-22031
> URL: https://issues.apache.org/jira/browse/HBASE-22031
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-22031.master.000.patch, 
> HBASE-22031.master.001.patch, HBASE-22031.master.002.patch
>
>
> In org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor
> performs deep copies of both the servers and tables passed in.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
> RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
>   this.name = name;
>   this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
>   this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
> }
> {code}
> The TreeSet copy constructor is heavy, so I think it is better to add a new
> constructor that performs a shallow copy; it could be used at least in
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
> private synchronized void refresh(boolean forceOnline) throws IOException {
>   ...
>   groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
> getDefaultServers(), orphanTables));
>   ...
> {code}
> There is no need to allocate new TreeSets to deep copy the output of
> getDefaultServers() and orphanTables, both of which are allocated in the nearby
> context and not updated in the code that follows, so it is safe to make a
> shallow copy here.
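A plain-Java sketch of the proposal (names and element types are illustrative stand-ins, not the actual RSGroupInfo signatures): keep the deep-copy path for general callers, and add a shallow variant that adopts the caller's sets when they were freshly allocated and are never mutated afterwards.

```java
import java.util.SortedSet;
import java.util.TreeSet;

// Illustrative stand-in for RSGroupInfo; element types simplified to String.
public class GroupInfoSketch {
  final String name;
  final SortedSet<String> servers;
  final SortedSet<String> tables;

  private GroupInfoSketch(String name, SortedSet<String> servers,
      SortedSet<String> tables) {
    this.name = name;
    this.servers = servers;
    this.tables = tables;
  }

  // Existing behavior: deep copy the inputs into fresh TreeSets,
  // safe even if the caller mutates its sets later.
  static GroupInfoSketch deepCopy(String name, SortedSet<String> servers,
      SortedSet<String> tables) {
    return new GroupInfoSketch(name,
        servers == null ? new TreeSet<>() : new TreeSet<>(servers),
        tables == null ? new TreeSet<>() : new TreeSet<>(tables));
  }

  // Proposed shallow variant: adopt the caller's sets directly. Only safe
  // when the sets are allocated locally and not touched again by the caller.
  static GroupInfoSketch shallow(String name, SortedSet<String> servers,
      SortedSet<String> tables) {
    return new GroupInfoSketch(name,
        servers == null ? new TreeSet<>() : servers,
        tables == null ? new TreeSet<>() : tables);
  }

  public static void main(String[] args) {
    SortedSet<String> servers = new TreeSet<>();
    servers.add("host1:16020");
    System.out.println(deepCopy("default", servers, null).servers != servers); // true: fresh set
    System.out.println(shallow("default", servers, null).servers == servers);  // true: adopted
  }
}
```

The refresh() call site above fits the shallow case: getDefaultServers() and orphanTables are built locally and never updated afterwards, so adopting them directly skips two O(n log n) TreeSet rebuilds.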





[jira] [Updated] (HBASE-21964) unset Quota by Throttle Type

2019-03-13 Thread yaojingyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yaojingyi updated HBASE-21964:
--
Attachment: HBASE-21964.master.004.patch

> unset Quota by Throttle Type
> 
>
> Key: HBASE-21964
> URL: https://issues.apache.org/jira/browse/HBASE-21964
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 1.4.8
>Reporter: yaojingyi
>Assignee: yaojingyi
>Priority: Major
> Attachments: HBASE-21964.branch-1.4.001.patch, 
> HBASE-21964.master.001.patch, HBASE-21964.master.002.patch, 
> HBASE-21964.master.003.patch, HBASE-21964.master.004.patch, 
> unthrottleByType.patch
>
>
>  
> {code:java}
> //first set_quota to  USER=> 'u1'
> set_quota TYPE => THROTTLE, USER => 'u1', THROTTLE_TYPE => WRITE, LIMIT => 
> '1000req/sec'
> //then 
> hbase(main):004:0> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER 
> => 'u1', LIMIT => NONE
> ERROR: Unexpected arguments: {"THROTTLE_TYPE"=>"WRITE"}
> // or try "THROTTLE_TYPE"=>"WRITE_NUMBER"
> hbase(main):012:0* set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_NUMBER, 
> USER => 'u1', LIMIT => NONE
> NameError: uninitialized constant WRITE_NUMBER
> {code}
>  





[jira] [Comment Edited] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792203#comment-16792203
 ] 

stack edited comment on HBASE-21935 at 3/14/19 12:44 AM:
-

On release build time, the site target makes 13 reports. For the javadoc target, we
make four separate javadoc reports -- a test api and an api each for dev and
user -- w/ each taking 30-40 minutes to prepare (compiles)... this is two hours
plus. It then goes on to make the javadoc reports -- each takes about
15 minutes to make, so that's another hour. Then there is another phase -- attach
-- not done yet, that takes yet more time (TODO: comment out generating the
javadoc report in site and see the time diff). To get to the javadoc reports, it's
35 minutes of cloning and downloading dependencies and building.




was (Author: stack):
On release build time, site target makes 13 reports. For javadoc target, we 
make four separate javadoc reports -- a test api and an api each for dev and 
user -- w/ each taking 30-40minutes to prepare (compiles)... this is two hours 
plus.   It then goes on to make the javadoc reports. -- each takes about 
15minutes to make so thats another hour. To get to javadoc reports, its 35 
minutes of cloning and downloading dependencies and building.

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.0.007.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).





[jira] [Commented] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792203#comment-16792203
 ] 

stack commented on HBASE-21935:
---

On release build time, the site target makes 13 reports. For the javadoc target, we
make four separate javadoc reports -- a test api and an api each for dev and
user -- w/ each taking 30-40 minutes to prepare (compiles)... this is two hours
plus. It then goes on to make the javadoc reports -- each takes about
15 minutes to make, so that's another hour. To get to the javadoc reports, it's 35
minutes of cloning and downloading dependencies and building.

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.0.007.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).





[jira] [Commented] (HBASE-22044) ByteBufferUtils should not be IA.Public API

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792181#comment-16792181
 ] 

Hudson commented on HBASE-22044:


Results for branch branch-2
[build #1750 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1750/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1750//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1750//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1750//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ByteBufferUtils should not be IA.Public API
> ---
>
> Key: HBASE-22044
> URL: https://issues.apache.org/jira/browse/HBASE-22044
> Project: HBase
>  Issue Type: Task
>  Components: compatibility, util
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22044-branch-1.v0.patch, 
> HBASE-22044-branch-2.v0.patch, HBASE-22044-master.v0.patch
>
>
> Came up in 1.5.0RC2 checking: we broke the API on ByteBufferUtils during
> HBASE-20716 by removing a method.
> The whole class looks like internal utility stuff. Not sure why it's
> IA.Public; it has been since we started labeling the API.
> This ticket tracks the cleanup:
> 1) Make it IA.Private in master/3.0
> 2) Mark it deprecated with an explanation that it'll be Private in 3.0 on 
> branch-2 and branch-1
> 3) Add back in the missing method for branches prior to master/3.0





[jira] [Commented] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792178#comment-16792178
 ] 

stack commented on HBASE-21935:
---

Stock make_rc.sh on GCP fails here for me (on two occasions) with "IOException:
Stream closed" doing... hbase-shaded-check-invariants


> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.0.007.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).





[jira] [Commented] (HBASE-21926) Profiler servlet

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792152#comment-16792152
 ] 

Hadoop QA commented on HBASE-21926:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 16m  
4s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 8s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 41s{color} 
| {color:red} root generated 1 new + 1039 unchanged - 1 fixed = 1040 total (was 
1040) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m  
4s{color} | {color:red} root: The patch generated 2 new + 38 unchanged - 0 
fixed = 40 total (was 38) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  5m 
19s{color} | {color:blue} patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 5s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hbase-http generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
8s{color} | {color:green} t

[jira] [Resolved] (HBASE-22035) Backup /Incremental backup in HBase version 1.3.1

2019-03-13 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved HBASE-22035.

Resolution: Fixed

Suggest you start with [u...@hbase.apache.org|mailto:u...@hbase.apache.org] if 
you need help in creating such a backport. It will be a major undertaking. 
Please feel free to reopen this if/when you have some code that works. Thanks.

> Backup /Incremental backup in HBase version 1.3.1
> -
>
> Key: HBASE-22035
> URL: https://issues.apache.org/jira/browse/HBASE-22035
> Project: HBase
>  Issue Type: Wish
>  Components: backup&restore
>Affects Versions: 1.3.1
> Environment: AWS EMR 5.10.0
>Reporter: Sandipan
>Priority: Major
>
> Hi All,
> I am looking to enable HBase backup and incremental backup in HBase
> version 1.3.1. I tried applying the patches from HBASE-11085 and HBASE-19000 as
> per the link below.
>  
> https://issues.apache.org/jira/browse/HBASE-11085
>  
> Version 1 ([https://reviews.apache.org/r/21492/])
>  * [^HBASE-11085-trunk-v1.patch]: incremental update/restore code
>  * [^HBASE-11085-trunk-v1-contains-HBASE-10900-trunk-v4.patch]: contain both 
> [^HBASE-11085-trunk-v1.patch] and [^HBASE-10900-trunk-v4.patch]
>  
> but I could see there are still some classes missing, like HLog, HLogUtil, etc.
> Can someone help with how to enable backup in HBase version 1.3.1?





[jira] [Resolved] (HBASE-22035) Backup /Incremental backup in HBase version 1.3.1

2019-03-13 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved HBASE-22035.

Resolution: Incomplete

> Backup /Incremental backup in HBase version 1.3.1
> -
>
> Key: HBASE-22035
> URL: https://issues.apache.org/jira/browse/HBASE-22035
> Project: HBase
>  Issue Type: Wish
>  Components: backup&restore
>Affects Versions: 1.3.1
> Environment: AWS EMR 5.10.0
>Reporter: Sandipan
>Priority: Major
>
> Hi All,
> I am looking to enable HBase backup and incremental backup in HBase
> version 1.3.1. I tried applying the patches from HBASE-11085 and HBASE-19000 as
> per the link below.
>  
> https://issues.apache.org/jira/browse/HBASE-11085
>  
> Version 1 ([https://reviews.apache.org/r/21492/])
>  * [^HBASE-11085-trunk-v1.patch]: incremental update/restore code
>  * [^HBASE-11085-trunk-v1-contains-HBASE-10900-trunk-v4.patch]: contain both 
> [^HBASE-11085-trunk-v1.patch] and [^HBASE-10900-trunk-v4.patch]
>  
> but I could see there are still some classes missing, like HLog, HLogUtil, etc.
> Can someone help with how to enable backup in HBase version 1.3.1?





[jira] [Reopened] (HBASE-22035) Backup /Incremental backup in HBase version 1.3.1

2019-03-13 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reopened HBASE-22035:


> Backup /Incremental backup in HBase version 1.3.1
> -
>
> Key: HBASE-22035
> URL: https://issues.apache.org/jira/browse/HBASE-22035
> Project: HBase
>  Issue Type: Wish
>  Components: backup&restore
>Affects Versions: 1.3.1
> Environment: AWS EMR 5.10.0
>Reporter: Sandipan
>Priority: Major
>
> Hi All,
> I am looking to enable HBase backup and incremental backup in HBase
> version 1.3.1. I tried applying the patches from HBASE-11085 and HBASE-19000 as
> per the link below.
>  
> https://issues.apache.org/jira/browse/HBASE-11085
>  
> Version 1 ([https://reviews.apache.org/r/21492/])
>  * [^HBASE-11085-trunk-v1.patch]: incremental update/restore code
>  * [^HBASE-11085-trunk-v1-contains-HBASE-10900-trunk-v4.patch]: contain both 
> [^HBASE-11085-trunk-v1.patch] and [^HBASE-10900-trunk-v4.patch]
>  
> but I could see there are still some classes missing, like HLog, HLogUtil, etc.
> Can someone help with how to enable backup in HBase version 1.3.1?





[jira] [Commented] (HBASE-21736) Remove the server from online servers before scheduling SCP for it in hbck

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792083#comment-16792083
 ] 

Hudson commented on HBASE-21736:


Results for branch branch-2
[build #1749 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1749/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1749//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1749//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1749//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove the server from online servers before scheduling SCP for it in hbck
> --
>
> Key: HBASE-21736
> URL: https://issues.apache.org/jira/browse/HBASE-21736
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.5
>
> Attachments: HBASE-21736.patch
>
>
> The API is designed for scheduling SCPs for dead servers after we lose all
> the proc WALs, but in TestHbck we use hbck to schedule an SCP for a live
> server. In order to pass the test, we need to move it from online servers to
> dead servers; otherwise we may retain the assignment on the 'dead' server and
> cause the UT to hang. And it does no harm if the server is not online when we
> call this method, so it is fine.





[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792076#comment-16792076
 ] 

stack commented on HBASE-15560:
---

Ok.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 1.6.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O(n)
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, though the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby track the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, on which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).
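The admission mechanics quoted above (counting sketch, periodic halving of counters, candidate-vs-victim frequency comparison) can be sketched in a few lines of Java. Everything below is illustrative only -- the class and method names are invented, the counters are plain ints rather than packed 4-bit cells, and the production implementation is Caffeine's, not this:

```java
/**
 * Illustrative-only frequency sketch and admission filter in the spirit of
 * W-TinyLFU. Names are invented; the real version lives in Caffeine.
 */
public class TinyLfuSketch {
  private static final int ROWS = 4;      // count-min style: min over 4 hash rows
  private static final int MAX = 15;      // counters saturate like 4-bit cells
  private final int[][] table;
  private final int mask;                 // table width is a power of two
  private final int sampleSize;           // halve all counters after this many increments
  private int additions;

  public TinyLfuSketch(int width, int sampleSize) {
    int w = Integer.highestOneBit(Math.max(2, width - 1)) * 2; // round up to power of two
    this.table = new int[ROWS][w];
    this.mask = w - 1;
    this.sampleSize = sampleSize;
  }

  private int index(int row, Object key) {
    int h = key.hashCode() * 0x9E3779B9 + row * 0x85EBCA6B;    // cheap per-row mixing
    h ^= h >>> 16;
    return h & mask;
  }

  /** Record one access; periodically halve every counter to age out old history. */
  public void increment(Object key) {
    for (int r = 0; r < ROWS; r++) {
      int i = index(r, key);
      if (table[r][i] < MAX) {
        table[r][i]++;
      }
    }
    if (++additions >= sampleSize) {
      additions /= 2;
      for (int[] row : table) {
        for (int i = 0; i <= mask; i++) {
          row[i] >>>= 1;                  // the aging step: halve everything
        }
      }
    }
  }

  /** Estimated frequency = minimum across rows (never underestimates before aging). */
  public int frequency(Object key) {
    int min = Integer.MAX_VALUE;
    for (int r = 0; r < ROWS; r++) {
      min = Math.min(min, table[r][index(r, key)]);
    }
    return min;
  }

  /** Admission: keep the candidate only if it is seen strictly more often than the victim. */
  public boolean admit(Object candidate, Object victim) {
    return frequency(candidate) > frequency(victim);
  }
}
```

In Caffeine the sketch additionally uses packed 4-bit counters and a "doorkeeper" bloom filter; the toy above keeps only the shape of the idea.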



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-03-13 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792071#comment-16792071
 ] 

Andrew Purtell commented on HBASE-15560:


It is perhaps not ideal, but if this goes in as an option, a potential adopter 
can do the proofing we don't have the community bandwidth to do now. There is 
at least that possibility, which is better than dropping this. I understand the 
impulse to reduce the configuration space rather than increase it but, looking 
at the code, that ship sailed ages ago. On these grounds, let me carry this 
forward, ensuring it is optional, with a rebase. Maybe there will be objections 
at the next round of review, which would end this, and that could be fine. Or 
not, and we at least have the option. 

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 1.6.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch





[jira] [Resolved] (HBASE-22022) nightly fails rat check down in the dev-support/hbase_nightly_source-artifact.sh check

2019-03-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-22022.
---
   Resolution: Fixed
 Assignee: stack
 Hadoop Flags: Reviewed
Fix Version/s: 2.1.4
   2.3.0
   2.2.0
   3.0.0

Pushed to branch-2.2+.

Thanks for the review, [~busbey]


> nightly fails rat check down in the 
> dev-support/hbase_nightly_source-artifact.sh check
> --
>
> Key: HBASE-22022
> URL: https://issues.apache.org/jira/browse/HBASE-22022
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4
>
> Attachments: HBASE-22022.branch-2.1.001.patch
>
>
> Nightlies include a nice check that runs through the rc-making steps. See 
> dev-support/hbase_nightly_source-artifact.sh. Currently the nightly is 
> failing here, which causes the nightly runs to fail even though often enough 
> all tests pass. It looks like the cause is the rat check. Unfortunately, when 
> running the nightly script locally, all comes up smelling sweet -- it's a context thing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792028#comment-16792028
 ] 

stack commented on HBASE-15560:
---

On it being an option, there is my comment above and others from two years ago 
(if you scroll back)... -0 on commit as an option.



> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 1.6.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch





[jira] [Commented] (HBASE-22022) nightly fails rat check down in the dev-support/hbase_nightly_source-artifact.sh check

2019-03-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792024#comment-16792024
 ] 

Sean Busbey commented on HBASE-22022:
-

+1

Yeah, fine for now. If we were gonna dig into why these are here and just now 
failed, we'd be better off doing a full review of the rat stuff.

> nightly fails rat check down in the 
> dev-support/hbase_nightly_source-artifact.sh check
> --
>
> Key: HBASE-22022
> URL: https://issues.apache.org/jira/browse/HBASE-22022
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
> Attachments: HBASE-22022.branch-2.1.001.patch





[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16792003#comment-16792003
 ] 

stack commented on HBASE-22029:
---

Using...
{code}
for f in `find . -name '*.jar'`; do echo $f && jar tvf $f | grep -i javax.ws.rs.core.Response; done > /tmp/find.txt
{code}

... against build generated repo, here are the jars with Response in them:

{code}
 ./com/sun/jersey/jersey-core/1.17.1/jersey-core-1.17.1.jar
 ./com/sun/jersey/jersey-core/1.9/jersey-core-1.9.jar
 ./javax/ws/rs/jsr311-api/1.1.1/jsr311-api-1.1.1.jar
 ./javax/ws/rs/javax.ws.rs-api/2.0.1/javax.ws.rs-api-2.0.1.jar
{code}

mvn dependency/analysis doesn't account for how jsr311 makes it into the repo. 
The jersey-core jars carry an abstract Response. They are usually excluded but 
can come in under hadoop2 via hadoop-common. Let me try a build excluding 
jersey-core in more places and run with -X to see if I can catch the 
classpath... or why jsr311 is downloaded.
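For diagnosing this kind of classpath shadowing, one general-purpose trick (plain JDK reflection, nothing HBase-specific; the helper name is invented) is to ask a loaded class which jar it actually came from via its ProtectionDomain -- e.g. pass javax.ws.rs.core.Response to see whether jsr311-api, javax.ws.rs-api, or a jersey-core jar won the race:

```java
import java.security.CodeSource;

/** Prints where a class was loaded from; helps spot a shadowing jar at runtime. */
public class WhichJar {
  /** Returns the jar/dir URL the class was loaded from, or "bootstrap" for JDK classes. */
  public static String locate(Class<?> clazz) {
    CodeSource src = clazz.getProtectionDomain().getCodeSource();
    return src == null ? "bootstrap" : src.getLocation().toString();
  }

  public static void main(String[] args) throws Exception {
    // In the hbase-it failure above, one would pass javax.ws.rs.core.Response here.
    String name = args.length > 0 ? args[0] : "java.lang.String";
    System.out.println(name + " <- " + locate(Class.forName(name)));
  }
}
```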

> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing a RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding explicit dependency.





[jira] [Updated] (HBASE-22044) ByteBufferUtils should not be IA.Public API

2019-03-13 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22044:

   Resolution: Fixed
Fix Version/s: 2.3.0
   1.5.0
   3.0.0
   Status: Resolved  (was: Patch Available)

thanks for the quick review!

> ByteBufferUtils should not be IA.Public API
> ---
>
> Key: HBASE-22044
> URL: https://issues.apache.org/jira/browse/HBASE-22044
> Project: HBase
>  Issue Type: Task
>  Components: compatibility, util
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22044-branch-1.v0.patch, 
> HBASE-22044-branch-2.v0.patch, HBASE-22044-master.v0.patch
>
>
> Came up in 1.5.0RC2 checking: we broke the API on ByteBufferUtils during 
> HBASE-20716 by removing a method.
> The whole class looks like internal utility stuff. Not sure why it's 
> IA.Public; it has been since we started labeling the API.
> This ticket tracks the cleanup:
> 1) Make it IA.Private in master/3.0
> 2) Mark it deprecated with an explanation that it'll be Private in 3.0 on 
> branch-2 and branch-1
> 3) Add back in the missing method for branches prior to master/3.0





[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-03-13 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791980#comment-16791980
 ] 

Andrew Purtell commented on HBASE-15560:


Ok, but that isn't going to happen, as we have no volunteer to do the testing, 
so can we put this in as an option? Otherwise it will get dropped on the floor, 
which is not a good outcome IMHO.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 1.6.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch





[jira] [Commented] (HBASE-21883) Enhancements to Major Compaction tool from HBASE-19528

2019-03-13 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791979#comment-16791979
 ] 

Andrew Purtell commented on HBASE-21883:


Once a master patch is ready, please put it up on reviews.apache.org to 
facilitate a wider review.

> Enhancements to Major Compaction tool from HBASE-19528
> --
>
> Key: HBASE-21883
> URL: https://issues.apache.org/jira/browse/HBASE-21883
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Compaction, tooling
>Affects Versions: 1.5.0
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Minor
> Fix For: 1.5.0
>
> Attachments: HBASE-21883.branch-1.001.patch, 
> HBASE-21883.branch-1.002.patch
>
>
> I would like to add new compaction tools based on [~churromorales]'s tool at 
> HBASE-19528.
> We internally have tools that pick and compact regions based on multiple 
> criteria. Since Rahul already has a version in the community, we would like to 
> build on top of it instead of pushing yet another tool.
> With this jira, I would like to add a tool which looks at regions beyond TTL 
> and compacts them in a rsgroup. We have time series data and those regions 
> will become dead after a while, so we compact those regions to save disk 
> space. We also merge those empty regions to reduce load, but that tool comes 
> later.
> Will prep a patch for 2.x once 1.5 gets in.





[jira] [Commented] (HBASE-21883) Enhancements to Major Compaction tool from HBASE-19528

2019-03-13 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791975#comment-16791975
 ] 

Andrew Purtell commented on HBASE-21883:


[~thiruvel] I can't commit this to branch-1 without first committing to master 
and branch-2. The branch-1 patch looks good to me, so if you were planning to 
go ahead and make a patch for master please do.

> Enhancements to Major Compaction tool from HBASE-19528
> --
>
> Key: HBASE-21883
> URL: https://issues.apache.org/jira/browse/HBASE-21883
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Compaction, tooling
>Affects Versions: 1.5.0
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Minor
> Fix For: 1.5.0
>
> Attachments: HBASE-21883.branch-1.001.patch, 
> HBASE-21883.branch-1.002.patch





[jira] [Commented] (HBASE-21736) Remove the server from online servers before scheduling SCP for it in hbck

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791970#comment-16791970
 ] 

Hudson commented on HBASE-21736:


Results for branch branch-2.2
[build #106 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/106/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/106//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/106//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/106//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove the server from online servers before scheduling SCP for it in hbck
> --
>
> Key: HBASE-21736
> URL: https://issues.apache.org/jira/browse/HBASE-21736
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.5
>
> Attachments: HBASE-21736.patch
>
>
> The API is designed for scheduling SCPs for dead servers after we lose all 
> the proc wals, but in TestHbck we use hbck to schedule an SCP for a live 
> server. In order to pass the test, we need to move it from online servers to 
> dead servers, otherwise we may retain the assignment on the 'dead' server and 
> cause the UT to hang. And it does no harm if the server is not online when we 
> call this method, so it is fine.
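The bookkeeping described above can be modeled as a tiny sketch. The class and field names below are invented stand-ins for HBase's master-side server tracking, not its real API:

```java
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

/** Toy model of scheduling a ServerCrashProcedure (SCP) from hbck. */
public class ScpScheduler {
  final Set<String> onlineServers = new HashSet<>();
  final Set<String> deadServers = new HashSet<>();
  final Set<String> scheduledScps = new LinkedHashSet<>();

  /**
   * Before queuing the SCP, move the target out of the online set; otherwise
   * the assignment logic may keep regions assigned to the "dead" server and
   * recovery never completes. The move is a no-op if the server is already
   * offline, so calling this for a genuinely dead server is harmless.
   */
  public void scheduleRecovery(String server) {
    if (onlineServers.remove(server)) {
      deadServers.add(server);
    }
    scheduledScps.add(server);
  }
}
```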





[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791961#comment-16791961
 ] 

stack commented on HBASE-15560:
---

Pardon if not clear. Yes, I wanted this to be perf tested first to ensure no 
regression at least. I thought I could get to it but didn't get the time. I 
wanted to avoid adding this as an option. I wanted this to just be our new 
default. We drown in options currently. If this were optional, my fear would be 
that it would go into a hole, never to be heard from again -- meantime bulking 
up the codebase and possibly becoming a burden the next time a dev comes by 
this part of the codebase.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 1.6.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch





[jira] [Commented] (HBASE-21926) Profiler servlet

2019-03-13 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791948#comment-16791948
 ] 

Andrew Purtell commented on HBASE-21926:


Updated patches fix the reported checkstyle and findbugs issues and add the 
above usage text as adoc. Posted the master patch to RB as 
https://reviews.apache.org/r/70207/

> Profiler servlet
> 
>
> Key: HBASE-21926
> URL: https://issues.apache.org/jira/browse/HBASE-21926
> Project: HBase
>  Issue Type: New Feature
>  Components: master, Operability, regionserver
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: 1.png, 2.png, 3.png, 4.png, HBASE-21926-branch-1.patch, 
> HBASE-21926-branch-1.patch, HBASE-21926.patch, HBASE-21926.patch
>
>
> HIVE-20202 describes how Hive added a web endpoint for online, in-production 
> profiling based on async-profiler. The endpoint was added as a servlet to 
> httpserver and supports retrieval of flamegraphs compiled from the profiler 
> trace. Async profiler 
> ([https://github.com/jvm-profiling-tools/async-profiler] ) can also profile 
> heap allocations, lock contention, and HW performance counters in addition to 
> CPU.
> The profiling overhead is pretty low and is safe to run in production. The 
> async-profiler project measured and describes CPU and memory overheads on 
> these issues: 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/14] and 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/131] 
> We have an httpserver based servlet stack so we can use HIVE-20202 as an 
> implementation template for a similar feature for HBase daemons. Ideally we 
> achieve these requirements:
>  * Retrieve flamegraph SVG generated from latest profile trace.
>  * Online enable and disable of profiling activity. (async-profiler does not 
> do instrumentation based profiling so this should not cause the code gen 
> related perf problems of that other approach and can be safely toggled on and 
> off while under production load.)
>  * CPU profiling.
>  * ALLOCATION profiling.
>  
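As a hedged sketch of how such a servlet might drive async-profiler: the helper class below is hypothetical, but the profiler.sh flags it assembles (-e for the event type, -d for duration in seconds, -f for the output file, trailing pid) are the tool's documented CLI:

```java
import java.util.Arrays;
import java.util.List;

/** Hypothetical helper assembling an async-profiler invocation for a profiler servlet. */
public class ProfilerCommand {
  /**
   * Builds e.g. {profiler.sh, -e, cpu, -d, 30, -f, /tmp/prof.svg, 1234}.
   * The servlet would run this command, wait, then serve the SVG flamegraph.
   * -e selects the event (cpu, alloc, lock, ...), -d the sampling duration,
   * -f the output path; the last argument is the target JVM's pid.
   */
  public static List<String> build(String profilerSh, String event, int durationSecs,
                                   String outputFile, long pid) {
    return Arrays.asList(profilerSh, "-e", event,
        "-d", Integer.toString(durationSecs),
        "-f", outputFile, Long.toString(pid));
  }
}
```

Toggling profiling on and off then amounts to launching or skipping this command per request; no agent stays attached, which is what keeps the production overhead low.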





[jira] [Updated] (HBASE-21926) Profiler servlet

2019-03-13 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21926:
---
Attachment: HBASE-21926.patch
HBASE-21926-branch-1.patch

> Profiler servlet
> 
>
> Key: HBASE-21926
> URL: https://issues.apache.org/jira/browse/HBASE-21926
> Project: HBase
>  Issue Type: New Feature
>  Components: master, Operability, regionserver
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: 1.png, 2.png, 3.png, 4.png, HBASE-21926-branch-1.patch, 
> HBASE-21926-branch-1.patch, HBASE-21926.patch, HBASE-21926.patch
>
>
> HIVE-20202 describes how Hive added a web endpoint for online in production 
> profiling based on async-profiler. The endpoint was added as a servlet to 
> httpserver and supports retrieval of flamegraphs compiled from the profiler 
> trace. Async profiler 
> ([https://github.com/jvm-profiling-tools/async-profiler] ) can also profile 
> heap allocations, lock contention, and HW performance counters in addition to 
> CPU.
> The profiling overhead is pretty low and is safe to run in production. The 
> async-profiler project measured and describes CPU and memory overheads on 
> these issues: 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/14] and 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/131] 
> We have an httpserver based servlet stack so we can use HIVE-20202 as an 
> implementation template for a similar feature for HBase daemons. Ideally we 
> achieve these requirements:
>  * Retrieve flamegraph SVG generated from latest profile trace.
>  * Online enable and disable of profiling activity. (async-profiler does not 
> do instrumentation based profiling so this should not cause the code gen 
> related perf problems of that other approach and can be safely toggled on and 
> off while under production load.)
>  * CPU profiling.
>  * ALLOCATION profiling.
>  
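
A profiler servlet of the kind described above essentially maps a few request parameters (event type, duration, output file) onto async-profiler's command-line flags. A minimal illustrative sketch of that parameter mapping follows; the parameter names and output path are assumptions for illustration, not the committed servlet API:

```java
// Hypothetical sketch: mapping profiler-servlet query parameters to an
// async-profiler argument string. Names here are illustrative assumptions.
import java.util.Locale;

public class ProfilerArgs {
    // Build the argument string handed to async-profiler:
    // -e <event> selects what to sample (cpu, alloc, lock, ...),
    // -d <seconds> bounds the collection window,
    // -f <file> is where the flamegraph SVG is written.
    public static String buildArgs(String event, int durationSeconds, String outputFile) {
        return String.format(Locale.ROOT, "-e %s -d %d -f %s",
                event, durationSeconds, outputFile);
    }

    public static void main(String[] args) {
        // e.g. a request like GET /prof?event=alloc&duration=30 might yield:
        String cmd = buildArgs("alloc", 30, "/tmp/prof-output.svg");
        System.out.println(cmd); // -e alloc -d 30 -f /tmp/prof-output.svg
    }
}
```

Because profiling can be toggled on and off, the servlet only needs to spawn async-profiler with these arguments for the requested window and then serve back the generated SVG.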



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-03-13 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791941#comment-16791941
 ] 

Andrew Purtell commented on HBASE-15560:


Cancelling the patch is not an expression of disinterest. What do we need to do 
to move this forward?

Did we become stuck because Stack wanted something? What is that exactly? 
Having trouble figuring that out with a quick skim of this issue, but I think 
it was the idea that TinyLFU should be the new default, but only after perf testing 
done by some unspecified person. Getting that volunteer effort seems unlikely. 
Can we check this in as a new option? I would be happy to do that to bring this 
over the finish line.

[~stack] [~ben.manes]

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 1.6.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O( n ) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, on which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).
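
The admission scheme described above (compare the candidate's sketched frequency against the SLRU victim's, and age by periodically halving the counters) can be sketched roughly as follows. This is an illustrative toy, not Caffeine's or HBase's implementation; the row count, sketch width, and aging threshold are arbitrary assumptions:

```java
// Toy sketch of W-TinyLFU's admission idea: frequencies live in a small
// counting sketch, decay by halving, and a new arrival is admitted only if
// it is estimated to be more frequent than the eviction victim.
public class TinyLfuSketch {
    private final int[][] counters;   // 4 hash rows of counters (count-min style)
    private int sampleSize;           // accesses since the last aging pass
    private final int agingThreshold;

    public TinyLfuSketch(int width, int agingThreshold) {
        this.counters = new int[4][width];
        this.agingThreshold = agingThreshold;
    }

    private int index(int row, Object key) {
        int h = key.hashCode() * (31 + row);
        return Math.floorMod(h ^ (h >>> 16), counters[row].length);
    }

    // Record one access; periodically halve every counter so that old
    // popularity decays ("aging" the frequencies).
    public void increment(Object key) {
        for (int row = 0; row < 4; row++) {
            counters[row][index(row, key)]++;
        }
        if (++sampleSize >= agingThreshold) {
            sampleSize = 0;
            for (int[] rowArr : counters) {
                for (int i = 0; i < rowArr.length; i++) {
                    rowArr[i] >>>= 1;
                }
            }
        }
    }

    // Estimated frequency = minimum over rows (count-min sketch estimate).
    public int frequency(Object key) {
        int min = Integer.MAX_VALUE;
        for (int row = 0; row < 4; row++) {
            min = Math.min(min, counters[row][index(row, key)]);
        }
        return min;
    }

    // Admission decision: keep whichever of candidate/victim is more frequent.
    public boolean admit(Object candidate, Object victim) {
        return frequency(candidate) > frequency(victim);
    }
}
```

Everything here is O(1) per access, and the sketch retains a (decayed) history far larger than the cache's current working set, which is what gives the policy its resilience to scans and bursts.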



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22044) ByteBufferUtils should not be IA.Public API

2019-03-13 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791940#comment-16791940
 ] 

Andrew Purtell commented on HBASE-22044:


Sounds good to me

> ByteBufferUtils should not be IA.Public API
> ---
>
> Key: HBASE-22044
> URL: https://issues.apache.org/jira/browse/HBASE-22044
> Project: HBase
>  Issue Type: Task
>  Components: compatibility, util
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-22044-branch-1.v0.patch, 
> HBASE-22044-branch-2.v0.patch, HBASE-22044-master.v0.patch
>
>
> Came up during 1.5.0RC2 checking: we broke the ByteBufferUtils API in 
> HBASE-20716 by removing a method.
> The whole class looks like internal utility stuff. Not sure why it's 
> IA.Public; has been since we started labeling the API.
> This ticket tracks the cleanup:
> 1) Make it IA.Private in master/3.0
> 2) Mark it deprecated with an explanation that it'll be Private in 3.0 on 
> branch-2 and branch-1
> 3) Add back in the missing method for branches prior to master/3.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22031) Provide RSGroupInfo with a new constructor using shallow copy

2019-03-13 Thread Xiang Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-22031:
-
Attachment: HBASE-22031.master.002.patch

> Provide RSGroupInfo with a new constructor using shallow copy
> -
>
> Key: HBASE-22031
> URL: https://issues.apache.org/jira/browse/HBASE-22031
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-22031.master.000.patch, 
> HBASE-22031.master.001.patch, HBASE-22031.master.002.patch
>
>
> In org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor 
> performs deep copies of both the servers and tables passed in.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
> RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
>   this.name = name;
>   this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
>   this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
> }
> {code}
> The TreeSet copy constructor is heavy, so it would be better to add a new 
> constructor that performs a shallow copy; it could be used at least in
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
> private synchronized void refresh(boolean forceOnline) throws IOException {
>   ...
>   groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
> getDefaultServers(), orphanTables));
>   ...
> {code}
> There is no need to allocate a new TreeSet to deep-copy the output of 
> getDefaultServers() or orphanTables: both are allocated in the surrounding 
> context and never modified afterwards, so a shallow copy is safe here.
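
The proposal above amounts to adding a constructor that adopts the caller's sets instead of copying them, on the premise that the caller never mutates them afterwards. A rough sketch under that assumption; `GroupInfoSketch`, `adopt`, and the String element type are illustrative stand-ins for the real RSGroupInfo/Address/TableName types, not the actual patch:

```java
// Illustrative sketch: a deep-copying constructor alongside a shallow-copy
// "adopt" factory that trusts the caller's freshly built, never-mutated sets.
import java.util.SortedSet;
import java.util.TreeSet;

public class GroupInfoSketch {
    public final String name;
    public final SortedSet<String> servers;  // Address in HBase; String here to stay self-contained
    public final SortedSet<String> tables;   // TableName in HBase

    private GroupInfoSketch(String name, SortedSet<String> servers,
                            SortedSet<String> tables, boolean deepCopy) {
        this.name = name;
        if (deepCopy) {
            // Defensive deep copy: O(n) allocation and re-insertion per set.
            this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
            this.tables = (tables == null) ? new TreeSet<>() : new TreeSet<>(tables);
        } else {
            // Shallow copy: just keep the caller's references.
            this.servers = (servers == null) ? new TreeSet<>() : servers;
            this.tables = (tables == null) ? new TreeSet<>() : tables;
        }
    }

    // Existing behavior: defensive deep copy.
    public GroupInfoSketch(String name, SortedSet<String> servers, SortedSet<String> tables) {
        this(name, servers, tables, true);
    }

    // Proposed fast path: adopt the caller's sets without copying.
    public static GroupInfoSketch adopt(String name, SortedSet<String> servers,
                                        SortedSet<String> tables) {
        return new GroupInfoSketch(name, servers, tables, false);
    }

    public static void main(String[] args) {
        SortedSet<String> servers = new TreeSet<>();
        servers.add("rs1.example.com:16020");  // hypothetical server address
        GroupInfoSketch g = adopt("default", servers, null);
        System.out.println(g.servers == servers); // prints true: no copy was made
    }
}
```

The trade-off is the usual one for defensive copies: the shallow path is only safe when ownership of the sets is genuinely handed over, as in the refresh() call site quoted above.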



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21736) Remove the server from online servers before scheduling SCP for it in hbck

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791935#comment-16791935
 ] 

Hudson commented on HBASE-21736:


Results for branch branch-2.1
[build #955 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/955/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/955//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/955//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/955//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove the server from online servers before scheduling SCP for it in hbck
> --
>
> Key: HBASE-21736
> URL: https://issues.apache.org/jira/browse/HBASE-21736
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.5
>
> Attachments: HBASE-21736.patch
>
>
> The API is designed for scheduling SCPs for dead servers after we lose all 
> the proc wals, but in TestHbck we use hbck to schedule a SCP for a live 
> server. In order to pass the test, we need to move it from online servers to 
> dead servers; otherwise we may retain the assignment on the 'dead' server and 
> cause the UT to hang. It does no harm if the server is not online when we call 
> this method, so it is fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791893#comment-16791893
 ] 

stack edited comment on HBASE-21935 at 3/13/19 5:36 PM:


Stock make_rc.sh doesn't work against tip of branch-2.0. It fails with:
{code}
09:52:00 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.6:single (default-cli) on 
project hbase-assembly: Failed to create assembly: Error adding file 'or
g.apache.hbase:hbase-common:jar:tests:2.0.6-SNAPSHOT' to archive: 
/Users/stack/checkouts/hbase.git/hbase-common/target/test-classes isn't a file. 
-> [Help 1]
{code}
... I have to add a 'clean' inline with the site build to get over this hump.

Adding the 'clean' and re-running. Prediction is that I'll then hit HBASE-22029 
issue's exception...

This is the 'clean' addition I'm talking of in the above:
{code}
@@ -78,7 +78,7 @@ function build_bin {
   MAVEN_OPTS="${mvnopts}" ${mvn} clean install -DskipTests \
 -Papache-release -Prelease \
 -Dmaven.repo.local=${output_dir}/repository
-  MAVEN_OPTS="${mvnopts}" ${mvn} install -DskipTests \
+  MAVEN_OPTS="${mvnopts}" ${mvn} clean install -DskipTests \
 -Dcheckstyle.skip=true site assembly:single \
 -Papache-release -Prelease \
 -Dmaven.repo.local=${output_dir}/repository
{code}


was (Author: stack):
Stock make_rc.sh doesn't work against tip of branch-2.0. It fails with:
{code}
09:52:00 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.6:single (default-cli) on 
project hbase-assembly: Failed to create assembly: Error adding file 'or
g.apache.hbase:hbase-common:jar:tests:2.0.6-SNAPSHOT' to archive: 
/Users/stack/checkouts/hbase.git/hbase-common/target/test-classes isn't a file. 
-> [Help 1]
{code}
... I have to add a 'clean' inline with the site build to get over this hump.

Adding the 'clean' and re-running. Prediction is that I'll then hit this 
issue's exception...

This is the 'clean' addition I'm talking of in the above:
{code}
@@ -78,7 +78,7 @@ function build_bin {
   MAVEN_OPTS="${mvnopts}" ${mvn} clean install -DskipTests \
 -Papache-release -Prelease \
 -Dmaven.repo.local=${output_dir}/repository
-  MAVEN_OPTS="${mvnopts}" ${mvn} install -DskipTests \
+  MAVEN_OPTS="${mvnopts}" ${mvn} clean install -DskipTests \
 -Dcheckstyle.skip=true site assembly:single \
 -Papache-release -Prelease \
 -Dmaven.repo.local=${output_dir}/repository
{code}

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.0.007.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22031) Provide RSGroupInfo with a new constructor using shallow copy

2019-03-13 Thread Xiang Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-22031:
-
Status: Open  (was: Patch Available)

> Provide RSGroupInfo with a new constructor using shallow copy
> -
>
> Key: HBASE-22031
> URL: https://issues.apache.org/jira/browse/HBASE-22031
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-22031.master.000.patch, 
> HBASE-22031.master.001.patch
>
>
> In org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor 
> performs deep copies of both the servers and tables passed in.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
> RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
>   this.name = name;
>   this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
>   this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
> }
> {code}
> The TreeSet copy constructor is heavy, so it would be better to add a new 
> constructor that performs a shallow copy; it could be used at least in
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
> private synchronized void refresh(boolean forceOnline) throws IOException {
>   ...
>   groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
> getDefaultServers(), orphanTables));
>   ...
> {code}
> There is no need to allocate a new TreeSet to deep-copy the output of 
> getDefaultServers() or orphanTables: both are allocated in the surrounding 
> context and never modified afterwards, so a shallow copy is safe here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22031) Provide RSGroupInfo with a new constructor using shallow copy

2019-03-13 Thread Xiang Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791903#comment-16791903
 ] 

Xiang Li commented on HBASE-22031:
--

Upload patch v002 to fix the issues reported by check-style. Triggering Hadoop 
QA

> Provide RSGroupInfo with a new constructor using shallow copy
> -
>
> Key: HBASE-22031
> URL: https://issues.apache.org/jira/browse/HBASE-22031
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-22031.master.000.patch, 
> HBASE-22031.master.001.patch, HBASE-22031.master.002.patch
>
>
> In org.apache.hadoop.hbase.rsgroup.RSGroupInfo, the following constructor 
> performs deep copies of both the servers and tables passed in.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.java|borderStyle=solid}
> RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
>   this.name = name;
>   this.servers = (servers == null) ? new TreeSet<>() : new TreeSet<>(servers);
>   this.tables  = (tables  == null) ? new TreeSet<>() : new TreeSet<>(tables);
> }
> {code}
> The TreeSet copy constructor is heavy, so it would be better to add a new 
> constructor that performs a shallow copy; it could be used at least in
> {code:title=hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java|borderStyle=solid}
> private synchronized void refresh(boolean forceOnline) throws IOException {
>   ...
>   groupList.add(new RSGroupInfo(RSGroupInfo.DEFAULT_GROUP, 
> getDefaultServers(), orphanTables));
>   ...
> {code}
> There is no need to allocate a new TreeSet to deep-copy the output of 
> getDefaultServers() or orphanTables: both are allocated in the surrounding 
> context and never modified afterwards, so a shallow copy is safe here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791893#comment-16791893
 ] 

stack commented on HBASE-21935:
---

Stock make_rc.sh doesn't work against tip of branch-2.0. It fails with:
{code}
09:52:00 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.6:single (default-cli) on 
project hbase-assembly: Failed to create assembly: Error adding file 'or
g.apache.hbase:hbase-common:jar:tests:2.0.6-SNAPSHOT' to archive: 
/Users/stack/checkouts/hbase.git/hbase-common/target/test-classes isn't a file. 
-> [Help 1]
{code}
... I have to add a 'clean' inline with the site build to get over this hump.

Adding the 'clean' and re-running. Prediction is that I'll then hit this 
issue's exception...

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.0.007.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791893#comment-16791893
 ] 

stack edited comment on HBASE-21935 at 3/13/19 5:09 PM:


Stock make_rc.sh doesn't work against tip of branch-2.0. It fails with:
{code}
09:52:00 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.6:single (default-cli) on 
project hbase-assembly: Failed to create assembly: Error adding file 'or
g.apache.hbase:hbase-common:jar:tests:2.0.6-SNAPSHOT' to archive: 
/Users/stack/checkouts/hbase.git/hbase-common/target/test-classes isn't a file. 
-> [Help 1]
{code}
... I have to add a 'clean' inline with the site build to get over this hump.

Adding the 'clean' and re-running. Prediction is that I'll then hit this 
issue's exception...

This is the 'clean' addition I'm talking of in the above:
{code}
@@ -78,7 +78,7 @@ function build_bin {
   MAVEN_OPTS="${mvnopts}" ${mvn} clean install -DskipTests \
 -Papache-release -Prelease \
 -Dmaven.repo.local=${output_dir}/repository
-  MAVEN_OPTS="${mvnopts}" ${mvn} install -DskipTests \
+  MAVEN_OPTS="${mvnopts}" ${mvn} clean install -DskipTests \
 -Dcheckstyle.skip=true site assembly:single \
 -Papache-release -Prelease \
 -Dmaven.repo.local=${output_dir}/repository
{code}


was (Author: stack):
Stock make_rc.sh doesn't work against tip of branch-2.0. It fails with:
{code}
09:52:00 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.6:single (default-cli) on 
project hbase-assembly: Failed to create assembly: Error adding file 'or
g.apache.hbase:hbase-common:jar:tests:2.0.6-SNAPSHOT' to archive: 
/Users/stack/checkouts/hbase.git/hbase-common/target/test-classes isn't a file. 
-> [Help 1]
{code}
... I have to add a 'clean' inline with the site build to get over this hump.

Adding the 'clean' and re-running. Prediction is that I'll then hit this 
issue's exception...

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.0.007.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791867#comment-16791867
 ] 

Hadoop QA commented on HBASE-22050:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
12s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 7s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}131m 
57s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-22050 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962339/0001-fix-HBASE-22050.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux afc9fd9e39b7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8fffaa7778 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16379/testReport/ |
| Max. process+thread count | 4912 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Buil

[jira] [Commented] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791847#comment-16791847
 ] 

stack commented on HBASE-21935:
---

[~busbey] current runs don't have timestamps. Will do some later today. Gone 
back to make_rc.sh to see if it works still... though the script here is 
copy/paste from make_rc. 

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.0.007.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791808#comment-16791808
 ] 

Sean Busbey commented on HBASE-21935:
-

if you have a maven log with timestamps handy, I'd like to take a look. will 
make my own eventually, but I'm sure you're aware of how painful the turnaround 
time is.

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.0.007.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).





[jira] [Commented] (HBASE-21926) Profiler servlet

2019-03-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791811#comment-16791811
 ] 

Sean Busbey commented on HBASE-21926:
-

Can we put that RN text into a ref guide section and then link to that in the 
RN? That'll make it easier for folks to find when they're reading about 
subsequently released versions.

> Profiler servlet
> 
>
> Key: HBASE-21926
> URL: https://issues.apache.org/jira/browse/HBASE-21926
> Project: HBase
>  Issue Type: New Feature
>  Components: master, Operability, regionserver
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: 1.png, 2.png, 3.png, 4.png, HBASE-21926-branch-1.patch, 
> HBASE-21926.patch
>
>
> HIVE-20202 describes how Hive added a web endpoint for online in production 
> profiling based on async-profiler. The endpoint was added as a servlet to 
> httpserver and supports retrieval of flamegraphs compiled from the profiler 
> trace. Async profiler 
> ([https://github.com/jvm-profiling-tools/async-profiler] ) can also profile 
> heap allocations, lock contention, and HW performance counters in addition to 
> CPU.
> The profiling overhead is pretty low and is safe to run in production. The 
> async-profiler project measured and describes CPU and memory overheads on 
> these issues: 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/14] and 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/131] 
> We have an httpserver based servlet stack so we can use HIVE-20202 as an 
> implementation template for a similar feature for HBase daemons. Ideally we 
> achieve these requirements:
>  * Retrieve flamegraph SVG generated from latest profile trace.
>  * Online enable and disable of profiling activity. (async-profiler does not 
> do instrumentation based profiling so this should not cause the code gen 
> related perf problems of that other approach and can be safely toggled on and 
> off while under production load.)
>  * CPU profiling.
>  * ALLOCATION profiling.
>  





[jira] [Commented] (HBASE-22025) RAT check fails in nightlies; fails on (old) test data files.

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791772#comment-16791772
 ] 

stack commented on HBASE-22025:
---

Thank you [~psomogyi]

> RAT check fails in nightlies; fails on (old) test data files.
> -
>
> Key: HBASE-22025
> URL: https://issues.apache.org/jira/browse/HBASE-22025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.5, 2.3.0, 2.1.4, 2.2.1
>
> Attachments: HBASE-22025.branch-2.1.001.patch
>
>
> The nightly runs where we check RM steps fails in branch-2.1 because the rat 
> test complains about old test data files not having licenses. See HBASE-22022 
> for how we turned up this issue. This JIRA adds exclusions for these files 
> that cause failure.





[jira] [Commented] (HBASE-21935) Replace make_rc.sh with customized spark/dev/create-release

2019-03-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791771#comment-16791771
 ] 

stack commented on HBASE-21935:
---

Ok.  HBASE-21667 did not help with this issue as I had deluded myself into 
thinking it would. We still fail here. 

> Replace make_rc.sh with customized spark/dev/create-release
> ---
>
> Key: HBASE-21935
> URL: https://issues.apache.org/jira/browse/HBASE-21935
> Project: HBase
>  Issue Type: Task
>  Components: rm
>Reporter: stack
>Assignee: stack
>Priority: Minor
>  Labels: rm
> Attachments: HBASE-21935.branch-2.0.001.patch, 
> HBASE-21935.branch-2.0.002.patch, HBASE-21935.branch-2.0.003.patch, 
> HBASE-21935.branch-2.0.004.patch, HBASE-21935.branch-2.0.005.patch, 
> HBASE-21935.branch-2.0.006.patch, HBASE-21935.branch-2.0.007.patch, 
> HBASE-21935.branch-2.1.001.patch, HBASE-21935.branch-2.1.002.patch, 
> HBASE-21935.branch-2.1.003.patch, HBASE-21935.branch-2.1.004.patch, 
> HBASE-21935.branch-2.1.005.patch, HBASE-21935.branch-2.1.006.patch, 
> HBASE-21935.branch-2.1.007.patch
>
>
> The spark/dev/create-release is more comprehensive than our hokey make_rc.sh 
> script. It codifies the bulk of the RM process from tagging, version-setting, 
> building, signing, and pushing. It does it in a container so environment is 
> same each time. It has a bunch of spark-specifics as is -- naturally -- but 
> should be possible to pull it around to suit hbase RM'ing. It'd save a bunch 
> of time and would allow us to get to a place where RM'ing is canned, 
> evolvable, and consistent.
> I've been hacking on the tooling before the filing of this JIRA and was 
> polluting branch-2.0 w/ tagging and reverts. Let me make a branch named for 
> this JIRA to play with (There is a dry-run flag but it too needs work...).





[jira] [Commented] (HBASE-22012) Space Quota: Policy state is getting changed from disable to Observance after sometime automatically.

2019-03-13 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791769#comment-16791769
 ] 

Josh Elser commented on HBASE-22012:


{quote}When the table is getting Disabled, all the regions will be closed
{quote}
If the table is disabled, why are you concerned about the space quota 
violation/acceptance state?

You seem to be stating how the system works. It's not clear to me what you 
think should be happening instead.

> Space Quota: Policy state is getting changed from disable to Observance after 
> sometime automatically.
> -
>
> Key: HBASE-22012
> URL: https://issues.apache.org/jira/browse/HBASE-22012
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Priority: Minor
>
> Space Quota: Policy state is getting changed from Disable to Observance after 
> some time automatically.
> Steps:
> 1: Create a table with space quota policy as Disable
> 2: Put some data so that table state is in space quota violation
> 3: So observe that table state is in violation
> 4: Now wait for some time
> 5: Observe that after some time the table state changes to Observance, 
> however the table is still disabled





[jira] [Commented] (HBASE-22002) Remove the deprecated methods in Admin interface

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791753#comment-16791753
 ] 

Hadoop QA commented on HBASE-22002:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 93 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 13m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
21s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
49s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 46s{color} 
| {color:red} hbase-client generated 10 new + 76 unchanged - 15 fixed = 86 
total (was 91) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} The patch passed checkstyle in hbase-common {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} hbase-client: The patch generated 0 new + 213 
unchanged - 56 fixed = 213 total (was 269) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} hbase-server: The patch generated 0 new + 707 
unchanged - 51 fixed = 707 total (was 758) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} The patch passed checkstyle in hbase-mapreduce 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} The patch passed checkstyle in hbase-thrift {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch passed checkstyle in hbase-shell {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} The patch passed checkstyle in hbase-endpoint 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} hbase-backup: The patch generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} hbase-it: The patch generated 0 new + 100 unchanged 
- 1 fixed = 100 total (was 101) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} hbase-rest: The patch generated 0 new + 33 unchanged 
- 1 fixed = 33 total (was 34) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} The patch passed checkstyle in hbase-client-project 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch passed checkstyle in 
hbase-shaded-client-project {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
26s{color} | {color:red} The pa

[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791723#comment-16791723
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #25 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/25/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/25//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/25//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/25//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the HFile into an on-heap 
> byte[], then copy the on-heap byte[] to the off-heap bucket cache 
> asynchronously. In my 100% Get performance test, I also observed frequent 
> young GC; the largest memory footprint in the young gen should be the on-heap 
> block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of 
> into a byte[] to reduce young GC. We did not implement this before because 
> there was no ByteBuffer reading interface in the older HDFS client, but 2.7+ 
> supports it, so we can fix this now, I think.
> Will provide a patch and some perf comparison for this.
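The allocation difference described above can be sketched with plain NIO (this is not the actual HFileBlock code; the class and method names below are illustrative): a positional read fills a direct (off-heap) ByteBuffer without ever staging the bytes in an on-heap byte[], which is what keeps the block data out of the young generation.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectReadSketch {

    // Read 'len' bytes starting at 'offset' into a direct (off-heap) ByteBuffer,
    // with no intermediate on-heap byte[]; returns the filled buffer, flipped.
    static ByteBuffer readBlock(Path file, long offset, int len) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(len);
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            while (buf.hasRemaining()) {
                // Positional read: fills the buffer directly, no staging array.
                if (ch.read(buf, offset + buf.position()) < 0) {
                    throw new IOException("EOF before block fully read");
                }
            }
        }
        buf.flip();
        return buf;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("block", ".bin");
        Files.write(p, new byte[]{1, 2, 3, 4, 5, 6, 7, 8});
        ByteBuffer b = readBlock(p, 2, 4); // bytes at offsets 2..5
        StringBuilder sb = new StringBuilder();
        while (b.hasRemaining()) {
            sb.append(b.get()).append(b.hasRemaining() ? "," : "");
        }
        System.out.println(sb); // prints 3,4,5,6
        Files.delete(p);
    }
}
```

In the HDFS case the analogous positional/ByteBuffer read interface on the client (available in 2.7+) plays the role of `FileChannel.read` here.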





[jira] [Commented] (HBASE-22040) Add mergeRegionsAsync with a List of region names method in AsyncAdmin

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791714#comment-16791714
 ] 

Hadoop QA commented on HBASE-22040:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 10m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 2s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} hbase-client: The patch generated 0 new + 193 
unchanged - 4 fixed = 193 total (was 197) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} hbase-server: The patch generated 0 new + 50 
unchanged - 2 fixed = 50 total (was 52) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
17s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}136m 
30s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}193m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-22040 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962307/HBASE-22040-v2.patch |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux ad863ca2d1e0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/ho

[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791709#comment-16791709
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #134 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/134/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/134//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/134//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/134//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.
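The "no delay and thread pool" point above can be illustrated with a small sketch (not the actual RSProcedureDispatcher code; names here are illustrative, and it assumes JDK 9+ for `delayedExecutor` and `failedFuture`): a retry with a delay is chained onto the `CompletableFuture` itself instead of being parked on a dedicated scheduler pool.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class RetrySketch {

    // Retry an async operation with a fixed delay, using delayedExecutor
    // instead of keeping a dedicated delay/scheduler thread pool alive.
    static <T> CompletableFuture<T> withRetry(Supplier<CompletableFuture<T>> op,
                                              int retriesLeft, long delayMs) {
        CompletableFuture<T> result = new CompletableFuture<>();
        op.get().whenComplete((v, err) -> {
            if (err == null) {
                result.complete(v);
            } else if (retriesLeft <= 0) {
                result.completeExceptionally(err);
            } else {
                // Schedule the next attempt after delayMs without a ScheduledExecutorService.
                Executor delayed = CompletableFuture.delayedExecutor(delayMs, TimeUnit.MILLISECONDS);
                delayed.execute(() ->
                    withRetry(op, retriesLeft - 1, delayMs).whenComplete((v2, e2) -> {
                        if (e2 == null) result.complete(v2);
                        else result.completeExceptionally(e2);
                    }));
            }
        });
        return result;
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger attempts = new AtomicInteger();
        // Fails twice, then succeeds on the third attempt.
        Supplier<CompletableFuture<String>> flaky = () ->
            attempts.incrementAndGet() < 3
                ? CompletableFuture.failedFuture(new RuntimeException("transient"))
                : CompletableFuture.completedFuture("ok");
        System.out.println(withRetry(flaky, 5, 10).get() + " after " + attempts.get() + " attempts");
    }
}
```

The latency win comes from completing `result` as soon as the underlying call finishes, rather than waiting out a fixed poll interval on a pool thread.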





[jira] [Updated] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-22050:

Component/s: regionserver

> NPE happens while RS shutdown, due to atomic violation
> --
>
> Key: HBASE-22050
> URL: https://issues.apache.org/jira/browse/HBASE-22050
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: lujie
>Assignee: lujie
>Priority: Major
>
> While the RS shuts down, RS#abort is called due to
> {code:java}
> handler.AssignRegionHandler: Fatal error occured while opening region 
> hbase:meta,,1.1588230740, aborting...
> {code}
> And in abort:
> {code:java}
> 2428.if (rssStub != null && this.serverName != null) {
> 2429   ReportRSFatalErrorRequest.Builder builder =
> 2430.  ReportRSFatalErrorRequest.newBuilder();
> 2431.  builder.setServer(ProtobufUtil.toServerName(this.serverName));
> 2432   builder.setErrorMessage(msg);
> 2433   rssStub.reportRSFatalError(null, builder.build());
> 2434 }
> {code}
> Lines 2428-2434 are assumed to be atomic, but if execution is between lines 
> 2429 and 2433 while RS#run concurrently does:
> {code:java}
> 1149 // Make sure the proxy is down.
> 1150 if (this.rssStub != null) {
> 1151this.rssStub = null;
> 1152 }
> {code}
> So rssStub == null and an NPE happens:
> {code:java}
> 2019-03-14 04:49:53,016 WARN [RS_CLOSE_META-regionserver/hadoop12:16020-0] 
> regionserver.HRegionServer: Unable to report fatal error to master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2433)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.handleException(AssignRegionHandler.java:154)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:106)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> I think we should avoid the NPE.
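The standard fix for this check-then-act race on a volatile field is to copy it into a local variable once and use only the local. A minimal self-contained sketch (not the actual HRegionServer code; the class and field names below are a simplified stand-in for rssStub):

```java
public class StubRaceSketch {
    // Stand-in for HRegionServer.rssStub: a proxy another thread may null out.
    volatile Runnable stub;

    StubRaceSketch(Runnable r) { this.stub = r; }

    // Buggy pattern: the field can become null between the check and the call,
    // which is exactly the window that produces the NPE described above.
    void reportUnsafe() {
        if (stub != null) {
            stub.run(); // NPE if another thread did 'stub = null' after the check
        }
    }

    // Fix: read the volatile field once into a local variable; the local copy
    // cannot be changed by other threads, so check-then-act becomes safe.
    boolean reportSafe() {
        Runnable local = stub;
        if (local != null) {
            local.run();
            return true;
        }
        return false; // stub already cleared; skip the report instead of crashing
    }

    public static void main(String[] args) {
        StubRaceSketch rs = new StubRaceSketch(() -> System.out.println("reported"));
        rs.reportSafe();  // prints "reported"
        rs.stub = null;   // simulates RS#run clearing the proxy during shutdown
        rs.reportSafe();  // no NPE, report skipped
        System.out.println("done");
    }
}
```

Note the local copy only removes the NPE; the report may still be silently dropped if the proxy was already cleared, which is acceptable during shutdown.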





[jira] [Updated] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-22050:
--
Attachment: 0001-fix-HBASE-22050.patch
Status: Patch Available  (was: Open)

Attaching a simple fix.

> NPE happens while RS shutdown, due to atomic violation
> --
>
> Key: HBASE-22050
> URL: https://issues.apache.org/jira/browse/HBASE-22050
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Attachments: 0001-fix-HBASE-22050.patch
>
>
> While the RS shuts down, RS#abort is called due to
> {code:java}
> handler.AssignRegionHandler: Fatal error occured while opening region 
> hbase:meta,,1.1588230740, aborting...
> {code}
> And in abort:
> {code:java}
> 2428.if (rssStub != null && this.serverName != null) {
> 2429   ReportRSFatalErrorRequest.Builder builder =
> 2430.  ReportRSFatalErrorRequest.newBuilder();
> 2431.  builder.setServer(ProtobufUtil.toServerName(this.serverName));
> 2432   builder.setErrorMessage(msg);
> 2433   rssStub.reportRSFatalError(null, builder.build());
> 2434 }
> {code}
> Lines 2428-2434 are assumed to be atomic, but if execution is between lines 
> 2429 and 2433 while RS#run concurrently does:
> {code:java}
> 1149 // Make sure the proxy is down.
> 1150 if (this.rssStub != null) {
> 1151this.rssStub = null;
> 1152 }
> {code}
> So rssStub == null and an NPE happens:
> {code:java}
> 2019-03-14 04:49:53,016 WARN [RS_CLOSE_META-regionserver/hadoop12:16020-0] 
> regionserver.HRegionServer: Unable to report fatal error to master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2433)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.handleException(AssignRegionHandler.java:154)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:106)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> I think we should avoid the NPE.





[jira] [Commented] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread lujie (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791711#comment-16791711
 ] 

lujie commented on HBASE-22050:
---

The master branch (3.0.0); I am not sure whether branch-1 and branch-2 are affected.

> NPE happens while RS shutdown, due to atomic violation
> --
>
> Key: HBASE-22050
> URL: https://issues.apache.org/jira/browse/HBASE-22050
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: lujie
>Assignee: lujie
>Priority: Major
>
> While the RS shuts down, RS#abort is called due to
> {code:java}
> handler.AssignRegionHandler: Fatal error occured while opening region 
> hbase:meta,,1.1588230740, aborting...
> {code}
> And in abort:
> {code:java}
> 2428.if (rssStub != null && this.serverName != null) {
> 2429   ReportRSFatalErrorRequest.Builder builder =
> 2430.  ReportRSFatalErrorRequest.newBuilder();
> 2431.  builder.setServer(ProtobufUtil.toServerName(this.serverName));
> 2432   builder.setErrorMessage(msg);
> 2433   rssStub.reportRSFatalError(null, builder.build());
> 2434 }
> {code}
> Lines 2428-2434 are assumed to be atomic, but if execution is between lines 
> 2429 and 2433 while RS#run concurrently does:
> {code:java}
> 1149 // Make sure the proxy is down.
> 1150 if (this.rssStub != null) {
> 1151this.rssStub = null;
> 1152 }
> {code}
> So rssStub == null and an NPE happens:
> {code:java}
> 2019-03-14 04:49:53,016 WARN [RS_CLOSE_META-regionserver/hadoop12:16020-0] 
> regionserver.HRegionServer: Unable to report fatal error to master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2433)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.handleException(AssignRegionHandler.java:154)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:106)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> I think we should avoid the NPE.





[jira] [Updated] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-22050:
--
Affects Version/s: 3.0.0

> NPE happens while RS shutdown, due to atomic violation
> --
>
> Key: HBASE-22050
> URL: https://issues.apache.org/jira/browse/HBASE-22050
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0
>Reporter: lujie
>Assignee: lujie
>Priority: Major
>
> While the RS shuts down, RS#abort is called due to
> {code:java}
> handler.AssignRegionHandler: Fatal error occured while opening region 
> hbase:meta,,1.1588230740, aborting...
> {code}
> And in abort:
> {code:java}
> 2428.if (rssStub != null && this.serverName != null) {
> 2429   ReportRSFatalErrorRequest.Builder builder =
> 2430.  ReportRSFatalErrorRequest.newBuilder();
> 2431.  builder.setServer(ProtobufUtil.toServerName(this.serverName));
> 2432   builder.setErrorMessage(msg);
> 2433   rssStub.reportRSFatalError(null, builder.build());
> 2434 }
> {code}
> Lines 2428-2434 are assumed to be atomic, but if execution is between lines 
> 2429 and 2433 while RS#run concurrently does:
> {code:java}
> 1149 // Make sure the proxy is down.
> 1150 if (this.rssStub != null) {
> 1151this.rssStub = null;
> 1152 }
> {code}
> So rssStub == null and an NPE happens:
> {code:java}
> 2019-03-14 04:49:53,016 WARN [RS_CLOSE_META-regionserver/hadoop12:16020-0] 
> regionserver.HRegionServer: Unable to report fatal error to master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2433)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.handleException(AssignRegionHandler.java:154)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:106)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> I think we should avoid the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22038) fix building failures

2019-03-13 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791693#comment-16791693
 ] 

Josh Elser commented on HBASE-22038:


Dockerfile changes look good to me. What's up with removing all of the proto 
files? Is the expectation that we pull them directly from hbase-protocol-shaded 
instead?

> fix building failures
> -
>
> Key: HBASE-22038
> URL: https://issues.apache.org/jira/browse/HBASE-22038
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Junhong Xu
>Assignee: Junhong Xu
>Priority: Minor
> Fix For: HBASE-14850
>
> Attachments: HBASE-22038.HBASE-14850.v01.patch, 
> HBASE-22038.HBASE-14850.v02.patch
>
>
> When building the HBase C++ client with the Dockerfile, the build fails 
> because the URL resources are not found. But this patch only solves the 
> problem temporarily, because the failure will reappear whenever some dependent 
> libraries are removed someday. In the long run, we may need a base Docker 
> image, maintained by us, that contains all of these dependencies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791698#comment-16791698
 ] 

Sean Busbey commented on HBASE-22050:
-

good catch! what version(s) is this in?

> NPE happens while RS shutdown, due to atomic violation
> --
>
> Key: HBASE-22050
> URL: https://issues.apache.org/jira/browse/HBASE-22050
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: lujie
>Assignee: lujie
>Priority: Major
>
> While the RS shuts down, RS#abort is called due to
> {code:java}
> handler.AssignRegionHandler: Fatal error occured while opening region 
> hbase:meta,,1.1588230740, aborting...
> {code}
> And in abort:
> {code:java}
> 2428 if (rssStub != null && this.serverName != null) {
> 2429   ReportRSFatalErrorRequest.Builder builder =
> 2430     ReportRSFatalErrorRequest.newBuilder();
> 2431   builder.setServer(ProtobufUtil.toServerName(this.serverName));
> 2432   builder.setErrorMessage(msg);
> 2433   rssStub.reportRSFatalError(null, builder.build());
> 2434 }
> {code}
> Lines 2428-2434 are assumed to be atomic, but if execution is between lines 
> 2429 and 2433 while RS#run runs concurrently:
> {code:java}
> 1149 // Make sure the proxy is down.
> 1150 if (this.rssStub != null) {
> 1151   this.rssStub = null;
> 1152 }
> {code}
> So rssStub == null and an NPE happens:
> {code:java}
> 2019-03-14 04:49:53,016 WARN [RS_CLOSE_META-regionserver/hadoop12:16020-0] 
> regionserver.HRegionServer: Unable to report fatal error to master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2433)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.handleException(AssignRegionHandler.java:154)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:106)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> I think we should avoid the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-22050:
--
Attachment: 0001-fix-HBASE-22050.patch
Status: Patch Available  (was: Open)

> NPE happens while RS shutdown, due to atomic violation
> --
>
> Key: HBASE-22050
> URL: https://issues.apache.org/jira/browse/HBASE-22050
> Project: HBase
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Major
>
> While the RS shuts down, RS#abort is called due to
> {code:java}
> handler.AssignRegionHandler: Fatal error occured while opening region 
> hbase:meta,,1.1588230740, aborting...
> {code}
> And in abort:
> {code:java}
> 2428 if (rssStub != null && this.serverName != null) {
> 2429   ReportRSFatalErrorRequest.Builder builder =
> 2430     ReportRSFatalErrorRequest.newBuilder();
> 2431   builder.setServer(ProtobufUtil.toServerName(this.serverName));
> 2432   builder.setErrorMessage(msg);
> 2433   rssStub.reportRSFatalError(null, builder.build());
> 2434 }
> {code}
> Lines 2428-2434 are assumed to be atomic, but if execution is between lines 
> 2429 and 2433 while RS#run runs concurrently:
> {code:java}
> 1149 // Make sure the proxy is down.
> 1150 if (this.rssStub != null) {
> 1151   this.rssStub = null;
> 1152 }
> {code}
> So rssStub == null and an NPE happens:
> {code:java}
> 2019-03-14 04:49:53,016 WARN [RS_CLOSE_META-regionserver/hadoop12:16020-0] 
> regionserver.HRegionServer: Unable to report fatal error to master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2433)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.handleException(AssignRegionHandler.java:154)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:106)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> I think we should avoid the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-22050:
--
Attachment: (was: 0001-fix-HBASE-22050.patch)

> NPE happens while RS shutdown, due to atomic violation
> --
>
> Key: HBASE-22050
> URL: https://issues.apache.org/jira/browse/HBASE-22050
> Project: HBase
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Major
>
> While the RS shuts down, RS#abort is called due to
> {code:java}
> handler.AssignRegionHandler: Fatal error occured while opening region 
> hbase:meta,,1.1588230740, aborting...
> {code}
> And in abort:
> {code:java}
> 2428 if (rssStub != null && this.serverName != null) {
> 2429   ReportRSFatalErrorRequest.Builder builder =
> 2430     ReportRSFatalErrorRequest.newBuilder();
> 2431   builder.setServer(ProtobufUtil.toServerName(this.serverName));
> 2432   builder.setErrorMessage(msg);
> 2433   rssStub.reportRSFatalError(null, builder.build());
> 2434 }
> {code}
> Lines 2428-2434 are assumed to be atomic, but if execution is between lines 
> 2429 and 2433 while RS#run runs concurrently:
> {code:java}
> 1149 // Make sure the proxy is down.
> 1150 if (this.rssStub != null) {
> 1151   this.rssStub = null;
> 1152 }
> {code}
> So rssStub == null and an NPE happens:
> {code:java}
> 2019-03-14 04:49:53,016 WARN [RS_CLOSE_META-regionserver/hadoop12:16020-0] 
> regionserver.HRegionServer: Unable to report fatal error to master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2433)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.handleException(AssignRegionHandler.java:154)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:106)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> I think we should avoid the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-22050:
--
Status: Open  (was: Patch Available)

> NPE happens while RS shutdown, due to atomic violation
> --
>
> Key: HBASE-22050
> URL: https://issues.apache.org/jira/browse/HBASE-22050
> Project: HBase
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Major
>
> While the RS shuts down, RS#abort is called due to
> {code:java}
> handler.AssignRegionHandler: Fatal error occured while opening region 
> hbase:meta,,1.1588230740, aborting...
> {code}
> And in abort:
> {code:java}
> 2428 if (rssStub != null && this.serverName != null) {
> 2429   ReportRSFatalErrorRequest.Builder builder =
> 2430     ReportRSFatalErrorRequest.newBuilder();
> 2431   builder.setServer(ProtobufUtil.toServerName(this.serverName));
> 2432   builder.setErrorMessage(msg);
> 2433   rssStub.reportRSFatalError(null, builder.build());
> 2434 }
> {code}
> Lines 2428-2434 are assumed to be atomic, but if execution is between lines 
> 2429 and 2433 while RS#run runs concurrently:
> {code:java}
> 1149 // Make sure the proxy is down.
> 1150 if (this.rssStub != null) {
> 1151   this.rssStub = null;
> 1152 }
> {code}
> So rssStub == null and an NPE happens:
> {code:java}
> 2019-03-14 04:49:53,016 WARN [RS_CLOSE_META-regionserver/hadoop12:16020-0] 
> regionserver.HRegionServer: Unable to report fatal error to master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2433)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.handleException(AssignRegionHandler.java:154)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:106)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> I think we should avoid the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie reassigned HBASE-22050:
-

Assignee: lujie

> NPE happens while RS shutdown, due to atomic violation
> --
>
> Key: HBASE-22050
> URL: https://issues.apache.org/jira/browse/HBASE-22050
> Project: HBase
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Major
>
> While the RS shuts down, RS#abort is called due to
> {code:java}
> handler.AssignRegionHandler: Fatal error occured while opening region 
> hbase:meta,,1.1588230740, aborting...
> {code}
> And in abort:
> {code:java}
> 2428 if (rssStub != null && this.serverName != null) {
> 2429   ReportRSFatalErrorRequest.Builder builder =
> 2430     ReportRSFatalErrorRequest.newBuilder();
> 2431   builder.setServer(ProtobufUtil.toServerName(this.serverName));
> 2432   builder.setErrorMessage(msg);
> 2433   rssStub.reportRSFatalError(null, builder.build());
> 2434 }
> {code}
> Lines 2428-2434 are assumed to be atomic, but if execution is between lines 
> 2429 and 2433 while RS#run runs concurrently:
> {code:java}
> 1149 // Make sure the proxy is down.
> 1150 if (this.rssStub != null) {
> 1151   this.rssStub = null;
> 1152 }
> {code}
> So rssStub == null and an NPE happens:
> {code:java}
> 2019-03-14 04:49:53,016 WARN [RS_CLOSE_META-regionserver/hadoop12:16020-0] 
> regionserver.HRegionServer: Unable to report fatal error to master
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2433)
> at 
> org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.handleException(AssignRegionHandler.java:154)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:106)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> I think we should avoid the NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22050) NPE happens while RS shutdown, due to atomic violation

2019-03-13 Thread lujie (JIRA)
lujie created HBASE-22050:
-

 Summary: NPE happens while RS shutdown, due to atomic violation
 Key: HBASE-22050
 URL: https://issues.apache.org/jira/browse/HBASE-22050
 Project: HBase
  Issue Type: Bug
Reporter: lujie


While the RS shuts down, RS#abort is called due to
{code:java}
handler.AssignRegionHandler: Fatal error occured while opening region 
hbase:meta,,1.1588230740, aborting...
{code}
And in abort:
{code:java}
2428 if (rssStub != null && this.serverName != null) {
2429   ReportRSFatalErrorRequest.Builder builder =
2430     ReportRSFatalErrorRequest.newBuilder();
2431   builder.setServer(ProtobufUtil.toServerName(this.serverName));
2432   builder.setErrorMessage(msg);
2433   rssStub.reportRSFatalError(null, builder.build());
2434 }
{code}
Lines 2428-2434 are assumed to be atomic, but if execution is between lines 2429 
and 2433 while RS#run runs concurrently:
{code:java}
1149 // Make sure the proxy is down.
1150 if (this.rssStub != null) {
1151   this.rssStub = null;
1152 }
{code}
So rssStub == null and an NPE happens:
{code:java}
2019-03-14 04:49:53,016 WARN [RS_CLOSE_META-regionserver/hadoop12:16020-0] 
regionserver.HRegionServer: Unable to report fatal error to master
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2433)
at 
org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.handleException(AssignRegionHandler.java:154)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:106)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

{code}
I think we should avoid the NPE.
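A common way to close this kind of check-then-act race (a sketch of the general technique, not the actual HBASE-22050 patch; the class and method names below are hypothetical) is to read the shared field once into a local variable and use only that snapshot:

```java
// Hypothetical model of the rssStub race in HRegionServer#abort.
// The real field is a protobuf service stub; a plain Object suffices here.
public class RssStubRace {
    volatile Object rssStub = new Object(); // nulled out by RS#run on shutdown

    // Buggy pattern: checks the field, then reads it again later; a concurrent
    // writer can null it between the two reads (the NPE in the report).
    boolean abortUnsafe() {
        if (rssStub != null) {
            // ... another thread may run "this.rssStub = null" right here ...
            return rssStub != null; // second read may observe null
        }
        return false;
    }

    // Fixed pattern: one volatile read into a local; the local cannot change.
    boolean abortSafe() {
        Object stub = rssStub; // single read, then use only 'stub'
        if (stub != null) {
            // build the ReportRSFatalErrorRequest and call
            // stub.reportRSFatalError(...) using the snapshot only
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        RssStubRace rs = new RssStubRace();
        System.out.println(rs.abortSafe()); // true while the stub is set
        rs.rssStub = null;                  // simulate RS#run tearing it down
        System.out.println(rs.abortSafe()); // false, and no NPE
    }
}
```

The same snapshot idiom would apply to any other field in the 2428-2434 block that another thread can null concurrently: read it exactly once per decision.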



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22029) RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791621#comment-16791621
 ] 

Hudson commented on HBASE-22029:


Results for branch branch-2.0
[build #1435 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> RESTApiClusterManager.java:[250,48] cannot find symbol in hbase-it
> --
>
> Key: HBASE-22029
> URL: https://issues.apache.org/jira/browse/HBASE-22029
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
>
> I get this doing a RM build. Can't repro elsewhere.
> Picking up an old jaxrs? See 
> https://stackoverflow.com/questions/34679773/extract-string-from-javax-response
> Let me try adding explicit dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22025) RAT check fails in nightlies; fails on (old) test data files.

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791623#comment-16791623
 ] 

Hudson commented on HBASE-22025:


Results for branch branch-2.0
[build #1435 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> RAT check fails in nightlies; fails on (old) test data files.
> -
>
> Key: HBASE-22025
> URL: https://issues.apache.org/jira/browse/HBASE-22025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.5, 2.3.0, 2.1.4, 2.2.1
>
> Attachments: HBASE-22025.branch-2.1.001.patch
>
>
> The nightly runs where we check RM steps fails in branch-2.1 because the rat 
> test complains about old test data files not having licenses. See HBASE-22022 
> for how we turned up this issue. This JIRA adds exclusions for these files 
> that cause failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21667) Move to latest ASF Parent POM

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791622#comment-16791622
 ] 

Hudson commented on HBASE-21667:


Results for branch branch-2.0
[build #1435 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1435//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Move to latest ASF Parent POM
> -
>
> Key: HBASE-21667
> URL: https://issues.apache.org/jira/browse/HBASE-21667
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 3.0.0, 2.2.0, 2.0.5, 2.3.0, 2.1.4
>
> Attachments: HBASE-21667.branch-2.0.001.patch, 
> HBASE-21667.branch-2.0.002.patch, HBASE-21667.branch-2.0.003.patch, 
> HBASE-21667.branch-2.0.004.patch, HBASE-21667.branch-2.001.patch, 
> HBASE-21667.branch-2.003.patch, HBASE-21667.branch-2.2.003.patch, 
> HBASE-21667.master.001.patch, HBASE-21667.master.002.patch, 
> HBASE-21667.master.003.patch
>
>
> Currently HBase depends on version 18 which was released on 2016-05-18. 
> Version 21 was released in August 2018.
> Relevant dependency upgrades
>  
> ||Name||Currently used version||New version||Notes||
> |surefire.version|2.21.0|2.22.0| |
> |maven-compiler-plugin|3.6.1|3.7| |
> |maven-dependency-plugin|3.0.1|3.1.1| |
> |maven-jar-plugin|3.0.0|3.0.2| |
> |maven-javadoc-plugin|3.0.0|3.0.1| |
> |maven-resources-plugin|2.7|3.1.0| |
> |maven-site-plugin|3.4|3.7.1|Currently not relying on ASF version. See: 
> HBASE-18333|
> |maven-source-plugin|3.0.0|3.0.1| |
> |maven-shade-plugin|3.0.0|3.1.1|Newly added to ASF pom|
> |maven-clean-plugin|3.0.0|3.1.0| |
> |maven-project-info-reports-plugin |2.9|3.0.0| |
> Version 21 added net.nicoulaj.maven.plugins:checksum-maven-plugin which 
> introduced SHA512 checksum instead of SHA1. Should verify if we can rely on 
> that for releases or breaks our current processes.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22002) Remove the deprecated methods in Admin interface

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791593#comment-16791593
 ] 

Hadoop QA commented on HBASE-22002:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 93 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
58s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} hbase-client generated 10 new + 76 unchanged - 15 fixed = 86 
total (was 91) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} The patch passed checkstyle in hbase-common {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} hbase-client: The patch generated 0 new + 213 
unchanged - 56 fixed = 213 total (was 269) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} hbase-server: The patch generated 0 new + 707 
unchanged - 51 fixed = 707 total (was 758) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} The patch passed checkstyle in hbase-mapreduce 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} The patch passed checkstyle in hbase-thrift {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch passed checkstyle in hbase-shell {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch passed checkstyle in hbase-endpoint 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} hbase-backup: The patch generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} hbase-it: The patch generated 0 new + 100 unchanged 
- 1 fixed = 100 total (was 101) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} hbase-rest: The patch generated 0 new + 33 unchanged 
- 1 fixed = 33 total (was 34) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch passed checkstyle in hbase-client-project 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} The patch passed checkstyle in 
hbase-shaded-client-project {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
16s{color} | {color:red} The pa

[jira] [Commented] (HBASE-22049) getReopenStatus() didn't skip counting split parent region

2019-03-13 Thread Jingyun Tian (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791561#comment-16791561
 ] 

Jingyun Tian commented on HBASE-22049:
--

HBASE-21795: I found this patch and think we are facing the same problem, but I 
do not think that patch can solve it. Let me check the code.

> getReopenStatus() didn't skip counting split parent region
> --
>
> Key: HBASE-22049
> URL: https://issues.apache.org/jira/browse/HBASE-22049
> Project: HBase
>  Issue Type: Bug
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
>
> After we modify some attributes of a table, HBaseAdmin calls getAlterStatus 
> to check whether all regions' attributes have been updated. It skips opened 
> regions and split regions, as the following code shows.
> {code}
> for (RegionState regionState : states) {
>   if (!regionState.isOpened() && !regionState.isSplit()) {
>     ritCount++;
>   }
> }
> {code}
> But now that the split procedure unassigns the split parent region, its state 
> is CLOSED, so the check will hang until it times out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22049) getReopenStatus() didn't skip counting split parent region

2019-03-13 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791548#comment-16791548
 ] 

Duo Zhang commented on HBASE-22049:
---

I think this is used to fix another hang problem... Could you please try 'git 
blame' to see in which issue we added this code?

> getReopenStatus() didn't skip counting split parent region
> --
>
> Key: HBASE-22049
> URL: https://issues.apache.org/jira/browse/HBASE-22049
> Project: HBase
>  Issue Type: Bug
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
>
> After we modify some attributes of a table, HBaseAdmin calls getAlterStatus 
> to check whether all regions' attributes have been updated. It skips opened 
> regions and split regions, as the following code shows.
> {code}
> for (RegionState regionState : states) {
>   if (!regionState.isOpened() && !regionState.isSplit()) {
>     ritCount++;
>   }
> }
> {code}
> But now that the split procedure unassigns the split parent region, its state 
> is CLOSED, so the check will hang until it times out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22039) Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin

2019-03-13 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791558#comment-16791558
 ] 

Zheng Hu commented on HBASE-22039:
--

+1 if hadoop QA says OK.

> Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin
> ---
>
> Key: HBASE-22039
> URL: https://issues.apache.org/jira/browse/HBASE-22039
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22039-v1.patch, HBASE-22039-v1.patch, 
> HBASE-22039.patch
>
>
> For now we always pass true to HMaster; perhaps the decision was that users 
> who want asynchronous behavior simply should not call get on the returned 
> Future.
> But the problem is that the return value is not void, it is a boolean giving 
> the previous state of the flag. Sometimes users do not need to wait for the 
> previous transitions or split/merge to complete, yet they still want the 
> previous value of the flag.
> So we still need to provide the synchronous parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22040) Add mergeRegionsAsync with a List of region names method in AsyncAdmin

2019-03-13 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22040:
--
Attachment: HBASE-22040-v2.patch

> Add mergeRegionsAsync with a List of region names method in AsyncAdmin
> --
>
> Key: HBASE-22040
> URL: https://issues.apache.org/jira/browse/HBASE-22040
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22040-v1.patch, HBASE-22040-v2.patch, 
> HBASE-22040.patch
>
>
> Although we only support merging two regions so far, the RPC protocol does 
> support passing more than two regions to the master.
> So I think we should provide the methods, but we need to add comments saying 
> that for now only merging two regions is supported, so callers should not 
> pass more than two regions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22049) getReopenStatus() didn't skip counting split parent region

2019-03-13 Thread Jingyun Tian (JIRA)
Jingyun Tian created HBASE-22049:


 Summary: getReopenStatus() didn't skip counting split parent region
 Key: HBASE-22049
 URL: https://issues.apache.org/jira/browse/HBASE-22049
 Project: HBase
  Issue Type: Bug
Reporter: Jingyun Tian
Assignee: Jingyun Tian


After we modify some attributes of a table, HBaseAdmin calls getAlterStatus to 
check whether all regions' attributes have been updated. It skips opened 
regions and split regions, as the following code shows.
{code}
for (RegionState regionState: states) {
  if (!regionState.isOpened() && !regionState.isSplit()) {
    ritCount++;
  }
}
{code}

But since the split procedure now unassigns the split parent region, its state 
is CLOSED rather than SPLIT, so the check will hang there until timeout.
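As a hedged illustration (using a simplified stand-in for RegionState, not the real HBase class), the loop could additionally skip CLOSED split parents, which will never be reopened:

```java
import java.util.Arrays;
import java.util.List;

public class ReopenStatusSketch {
  // Simplified stand-in for org.apache.hadoop.hbase.master.RegionState.
  static final class RegionState {
    enum State { OPEN, CLOSED, SPLIT }
    final State state;
    final boolean splitParent; // in HBase this would come from the RegionInfo
    RegionState(State state, boolean splitParent) {
      this.state = state;
      this.splitParent = splitParent;
    }
    boolean isOpened() { return state == State.OPEN; }
    boolean isSplit() { return state == State.SPLIT; }
  }

  // Count regions still in transition, skipping opened regions, split
  // regions, and CLOSED split parents (which will never reopen).
  static int ritCount(List<RegionState> states) {
    int rit = 0;
    for (RegionState rs : states) {
      if (!rs.isOpened() && !rs.isSplit() && !rs.splitParent) {
        rit++;
      }
    }
    return rit;
  }

  public static void main(String[] args) {
    List<RegionState> states = Arrays.asList(
        new RegionState(RegionState.State.OPEN, false),    // done
        new RegionState(RegionState.State.CLOSED, true),   // split parent: skip
        new RegionState(RegionState.State.CLOSED, false)); // still in transition
    System.out.println(ritCount(states)); // prints 1
  }
}
```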





[jira] [Updated] (HBASE-22039) Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin

2019-03-13 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22039:
--
Attachment: HBASE-22039-v1.patch

> Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin
> ---
>
> Key: HBASE-22039
> URL: https://issues.apache.org/jira/browse/HBASE-22039
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22039-v1.patch, HBASE-22039-v1.patch, 
> HBASE-22039.patch
>
>
> For now we always pass true to HMaster; maybe the reasoning was that users 
> who want asynchronous behavior simply do not need to call get on the 
> returned Future.
> But the problem is that the return value is not void, it is a boolean: the 
> previous state of the flag. Sometimes users do not want to wait for the 
> pending transitions or splits/merges to complete, but they still want the 
> previous value of the flag.
> So we still need to provide the synchronous parameter.
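To illustrate the idea, here is a minimal, self-contained sketch (hypothetical names, not the actual AsyncAdmin API) of a switch method that always returns the previous flag value immediately, with a parameter controlling whether to additionally wait for in-flight work:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public class SwitchSketch {
  private final AtomicBoolean splitEnabled = new AtomicBoolean(true);

  // Returns a future completing with the PREVIOUS value of the flag.
  // When drainingWait is false, the future completes as soon as the flag
  // is flipped; when true, a real implementation would additionally wait
  // for outstanding splits/merges to finish (omitted in this sketch).
  CompletableFuture<Boolean> splitSwitch(boolean enabled, boolean drainingWait) {
    boolean previous = splitEnabled.getAndSet(enabled);
    // real impl: if (drainingWait) chain a wait on outstanding procedures here
    return CompletableFuture.completedFuture(previous);
  }

  public static void main(String[] args) {
    SwitchSketch admin = new SwitchSketch();
    System.out.println(admin.splitSwitch(false, false).join()); // prints true
    System.out.println(admin.splitSwitch(true, false).join());  // prints false
  }
}
```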





[jira] [Updated] (HBASE-16832) Reduce the default number of versions in Meta table for branch-1

2019-03-13 Thread binlijin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-16832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-16832:
-
Affects Version/s: (was: 1.3.0)

> Reduce the default number of versions in Meta table for branch-1
> 
>
> Key: HBASE-16832
> URL: https://issues.apache.org/jira/browse/HBASE-16832
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
>Priority: Major
> Fix For: 1.4.0
>
> Attachments: 16832.branch-1.patch, HBASE-16832.branch-1.patch, 
> rpc-handler.png, rpc_processingCallTime.png, rpc_processingCallTimeV2.png, 
> rpc_qps.png, rpc_queueLength.png, rpc_queueSize.png, rpc_scan_latency.png, 
> rpc_scan_latencyV2.png, rpc_totalcalltime.png, rpc_totalcalltimeV2.png
>
>
> I find the DEFAULT_HBASE_META_VERSIONS is still 10 in branch-1, and in master 
> version DEFAULT_HBASE_META_VERSIONS is 3.





[jira] [Commented] (HBASE-22039) Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791522#comment-16791522
 ] 

Hadoop QA commented on HBASE-22039:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  2m 10s{color} 
| {color:red} HBASE-22039 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.8.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22039 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16376/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin
> ---
>
> Key: HBASE-22039
> URL: https://issues.apache.org/jira/browse/HBASE-22039
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22039-v1.patch, HBASE-22039.patch
>
>
> For now we always pass true to HMaster; maybe the reasoning was that users 
> who want asynchronous behavior simply do not need to call get on the 
> returned Future.
> But the problem is that the return value is not void, it is a boolean: the 
> previous state of the flag. Sometimes users do not want to wait for the 
> pending transitions or splits/merges to complete, but they still want the 
> previous value of the flag.
> So we still need to provide the synchronous parameter.





[jira] [Commented] (HBASE-22039) Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin

2019-03-13 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791516#comment-16791516
 ] 

Duo Zhang commented on HBASE-22039:
---

Changed the parameter name, and also add more comments to describe the behavior.

> Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin
> ---
>
> Key: HBASE-22039
> URL: https://issues.apache.org/jira/browse/HBASE-22039
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22039-v1.patch, HBASE-22039.patch
>
>
> For now we always pass true to HMaster; maybe the reasoning was that users 
> who want asynchronous behavior simply do not need to call get on the 
> returned Future.
> But the problem is that the return value is not void, it is a boolean: the 
> previous state of the flag. Sometimes users do not want to wait for the 
> pending transitions or splits/merges to complete, but they still want the 
> previous value of the flag.
> So we still need to provide the synchronous parameter.





[jira] [Updated] (HBASE-22039) Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin

2019-03-13 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22039:
--
Attachment: HBASE-22039-v1.patch

> Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin
> ---
>
> Key: HBASE-22039
> URL: https://issues.apache.org/jira/browse/HBASE-22039
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22039-v1.patch, HBASE-22039.patch
>
>
> For now we always pass true to HMaster; maybe the reasoning was that users 
> who want asynchronous behavior simply do not need to call get on the 
> returned Future.
> But the problem is that the return value is not void, it is a boolean: the 
> previous state of the flag. Sometimes users do not want to wait for the 
> pending transitions or splits/merges to complete, but they still want the 
> previous value of the flag.
> So we still need to provide the synchronous parameter.





[jira] [Updated] (HBASE-16832) Reduce the default number of versions in Meta table for branch-1

2019-03-13 Thread binlijin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-16832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-16832:
-
Affects Version/s: 1.3.0

> Reduce the default number of versions in Meta table for branch-1
> 
>
> Key: HBASE-16832
> URL: https://issues.apache.org/jira/browse/HBASE-16832
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.3.0
>Reporter: binlijin
>Assignee: binlijin
>Priority: Major
> Fix For: 1.4.0
>
> Attachments: 16832.branch-1.patch, HBASE-16832.branch-1.patch, 
> rpc-handler.png, rpc_processingCallTime.png, rpc_processingCallTimeV2.png, 
> rpc_qps.png, rpc_queueLength.png, rpc_queueSize.png, rpc_scan_latency.png, 
> rpc_scan_latencyV2.png, rpc_totalcalltime.png, rpc_totalcalltimeV2.png
>
>
> I find the DEFAULT_HBASE_META_VERSIONS is still 10 in branch-1, and in master 
> version DEFAULT_HBASE_META_VERSIONS is 3.





[jira] [Commented] (HBASE-22040) Add mergeRegionsAsync with a List of region names method in AsyncAdmin

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791488#comment-16791488
 ] 

Hadoop QA commented on HBASE-22040:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 4s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
15s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 15s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} hbase-client: The patch generated 0 new + 193 
unchanged - 4 fixed = 193 total (was 197) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} hbase-server: The patch generated 0 new + 50 
unchanged - 2 fixed = 50 total (was 52) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
58s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 36s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
17s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}336m  9s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}381m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestSnapshotTemporaryDirectory |
|   | hadoop.hbase.client.TestAsyncTableAdminApi |
|   | hadoop.hbase.master.TestAssignmentManagerMetrics |
|   | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.namespace.TestNamespaceAuditor |
|   | hadoop.hbase.client.TestFromClientSide3 |
|   | hadoop.hbase.master.procedure.TestSCPWithReplicasWithoutZKCoordinated |
|   | hadoop.hbase.regionserver.TestRegionReplicaFailover |
|   | hadoop.h

[jira] [Updated] (HBASE-22002) Remove the deprecated methods in Admin interface

2019-03-13 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22002:
--
Attachment: HBASE-22002-v5.patch

> Remove the deprecated methods in Admin interface
> 
>
> Key: HBASE-22002
> URL: https://issues.apache.org/jira/browse/HBASE-22002
> Project: HBase
>  Issue Type: Task
>  Components: Admin, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-22002-test.patch, HBASE-22002-test.patch, 
> HBASE-22002-v1.patch, HBASE-22002-v2.patch, HBASE-22002-v3.patch, 
> HBASE-22002-v4.patch, HBASE-22002-v5.patch, HBASE-22002.patch
>
>
> For API cleanup, and will make the work in HBASE-21718 a little easier.





[jira] [Created] (HBASE-22048) Incorrect email links on website

2019-03-13 Thread Peter Somogyi (JIRA)
Peter Somogyi created HBASE-22048:
-

 Summary: Incorrect email links on website
 Key: HBASE-22048
 URL: https://issues.apache.org/jira/browse/HBASE-22048
 Project: HBase
  Issue Type: Bug
  Components: website
Reporter: Peter Somogyi


Project members' email addresses have incorrect links.

[https://hbase.apache.org/team.html]

Instead of [apach...@apache.org|mailto:apach...@apache.org] it points to 
[https://hbase.apache.org/apach...@apache.org]

This change might be related to the ASF parent pom upgrade, which changed the 
maven-project-info-reports-plugin version.





[jira] [Commented] (HBASE-22038) fix building failures

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791463#comment-16791463
 ] 

Hadoop QA commented on HBASE-22038:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} hadolint {color} | {color:blue}  0m  
1s{color} | {color:blue} hadolint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HBASE-14850 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
25s{color} | {color:green} HBASE-14850 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
4m 45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}309m 15s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}325m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.procedure.TestSCPWithReplicasWithoutZKCoordinated |
|   | hadoop.hbase.client.TestRestoreSnapshotFromClientClone |
|   | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
|   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
|   | hadoop.hbase.master.procedure.TestSCPWithReplicas |
|   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
|   | hadoop.hbase.quotas.TestClusterScopeQuotaThrottle |
|   | hadoop.hbase.client.TestFromClientSide3 |
|   | hadoop.hbase.client.TestAdmin1 |
|   | hadoop.hbase.client.TestFromClientSide |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-22038 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962255/HBASE-22038.HBASE-14850.v02.patch
 |
| Optional Tests |  dupname  asflicense  shellcheck  shelldocs  hadolint  cc  
unit  hbaseprotoc  |
| uname | Linux 5fe5b3678eaf 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | HBASE-14850 / c97053ab6f |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| shellcheck | v0.4.4 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16369/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16369/testReport/ |
| Max. process+thread count | 5253 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16369/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> fix building failures
> -
>
> Key: HBASE-22038
> URL: https://issues.apache.org/jira/browse/HBASE-22038
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Junhong Xu
>Assignee: Junhong Xu
>Priority: Minor
> Fix For: HBASE-14850
>
> Attachments: HBASE-22038.HBASE-14850.v01.patch, 
> HBASE-22038.HBASE-14850.v02.patch
>
>
> When building the hbase c++ client with Dockerfile, it fails, bec

[jira] [Resolved] (HBASE-22025) RAT check fails in nightlies; fails on (old) test data files.

2019-03-13 Thread Peter Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi resolved HBASE-22025.
---
   Resolution: Fixed
Fix Version/s: 2.0.5

Pushed to branch-2.0. Re-resolving.

> RAT check fails in nightlies; fails on (old) test data files.
> -
>
> Key: HBASE-22025
> URL: https://issues.apache.org/jira/browse/HBASE-22025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.5, 2.3.0, 2.1.4, 2.2.1
>
> Attachments: HBASE-22025.branch-2.1.001.patch
>
>
> The nightly runs where we check RM steps fails in branch-2.1 because the rat 
> test complains about old test data files not having licenses. See HBASE-22022 
> for how we turned up this issue. This JIRA adds exclusions for these files 
> that cause failure.





[jira] [Commented] (HBASE-22005) Use ByteBuff's refcnt to track the life cycle of data block

2019-03-13 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791442#comment-16791442
 ] 

Zheng Hu commented on HBASE-22005:
--

Seems a memory leak happens with my patch, because it easily throws:
{code}
java.lang.IllegalArgumentException: ReferenceCount: 0 (expected: > 0)
{code}
Let me have a check.
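For illustration, a minimal stand-in for a reference-counted buffer (not the real ByteBuff/Netty API) shows how one release too many produces exactly this kind of exception:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCntSketch {
  // Simplified reference-counted resource: created with refCnt 1; each
  // retain() bumps it, each release() drops it; dropping below 0 means a
  // use-after-free / double-release bug.
  static final class RefCounted {
    private final AtomicInteger refCnt = new AtomicInteger(1);

    void retain() {
      if (refCnt.getAndIncrement() <= 0) {
        throw new IllegalArgumentException("ReferenceCount: 0 (expected: > 0)");
      }
    }

    void release() {
      if (refCnt.decrementAndGet() < 0) {
        throw new IllegalArgumentException("ReferenceCount: 0 (expected: > 0)");
      }
    }
  }

  public static void main(String[] args) {
    RefCounted buf = new RefCounted();
    buf.retain();   // refCnt 2
    buf.release();  // refCnt 1
    buf.release();  // refCnt 0: the buffer is effectively freed
    try {
      buf.release(); // one release too many -> the exception seen above
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```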

> Use ByteBuff's refcnt to track the life cycle of data block
> ---
>
> Key: HBASE-22005
> URL: https://issues.apache.org/jira/browse/HBASE-22005
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22005.HBASE-21879.v1.patch, 
> HBASE-22005.HBASE-21879.v2.patch, HBASE-22005.HBASE-21879.v3.patch
>
>






[jira] [Commented] (HBASE-21667) Move to latest ASF Parent POM

2019-03-13 Thread Peter Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791427#comment-16791427
 ] 

Peter Somogyi commented on HBASE-21667:
---

[~stack], the 2.0 patch will also introduce the RAT failure: 
[https://github.com/apache/hbase/commit/1a81fa22b691836c7dfcfbc64fdb17ed17ea896e#diff-600376dffeb79835ede4a0b285078036L828]

Let me fix this in HBASE-22025 to be consistent across branches.

> Move to latest ASF Parent POM
> -
>
> Key: HBASE-21667
> URL: https://issues.apache.org/jira/browse/HBASE-21667
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 3.0.0, 2.2.0, 2.0.5, 2.3.0, 2.1.4
>
> Attachments: HBASE-21667.branch-2.0.001.patch, 
> HBASE-21667.branch-2.0.002.patch, HBASE-21667.branch-2.0.003.patch, 
> HBASE-21667.branch-2.0.004.patch, HBASE-21667.branch-2.001.patch, 
> HBASE-21667.branch-2.003.patch, HBASE-21667.branch-2.2.003.patch, 
> HBASE-21667.master.001.patch, HBASE-21667.master.002.patch, 
> HBASE-21667.master.003.patch
>
>
> Currently HBase depends on version 18 which was released on 2016-05-18. 
> Version 21 was released in August 2018.
> Relevant dependency upgrades
>  
> ||Name||Currently used version||New version||Notes||
> |surefire.version|2.21.0|2.22.0| |
> |maven-compiler-plugin|3.6.1|3.7| |
> |maven-dependency-plugin|3.0.1|3.1.1| |
> |maven-jar-plugin|3.0.0|3.0.2| |
> |maven-javadoc-plugin|3.0.0|3.0.1| |
> |maven-resources-plugin|2.7|3.1.0| |
> |maven-site-plugin|3.4|3.7.1|Currently not relying on ASF version. See: 
> HBASE-18333|
> |maven-source-plugin|3.0.0|3.0.1| |
> |maven-shade-plugin|3.0.0|3.1.1|Newly added to ASF pom|
> |maven-clean-plugin|3.0.0|3.1.0| |
> |maven-project-info-reports-plugin |2.9|3.0.0| |
> Version 21 added net.nicoulaj.maven.plugins:checksum-maven-plugin which 
> introduced SHA512 checksum instead of SHA1. Should verify if we can rely on 
> that for releases or breaks our current processes.
>  





[jira] [Commented] (HBASE-14850) C++ client implementation

2019-03-13 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791430#comment-16791430
 ] 

Guanghao Zhang commented on HBASE-14850:


{quote}Instead of dealing with rebases, would it be better to just move the 
current branch content's over to hbase-connectors and work there instead?
{quote}
Yes. hbase-native-client is independent. I started a DISCUSS thread on the 
hbase-dev mailing list. Will move it out after we reach a consensus.

> C++ client implementation
> -
>
> Key: HBASE-14850
> URL: https://issues.apache.org/jira/browse/HBASE-14850
> Project: HBase
>  Issue Type: Task
>Reporter: Elliott Clark
>Priority: Major
>
> It's happening.





[jira] [Reopened] (HBASE-22025) RAT check fails in nightlies; fails on (old) test data files.

2019-03-13 Thread Peter Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi reopened HBASE-22025:
---

HBASE-21667 was backported to branch-2.0. Reopening to add this fix there.

> RAT check fails in nightlies; fails on (old) test data files.
> -
>
> Key: HBASE-22025
> URL: https://issues.apache.org/jira/browse/HBASE-22025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.4, 2.2.1
>
> Attachments: HBASE-22025.branch-2.1.001.patch
>
>
> The nightly runs where we check RM steps fails in branch-2.1 because the rat 
> test complains about old test data files not having licenses. See HBASE-22022 
> for how we turned up this issue. This JIRA adds exclusions for these files 
> that cause failure.





[jira] [Commented] (HBASE-21926) Profiler servlet

2019-03-13 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791415#comment-16791415
 ] 

Zheng Hu commented on HBASE-21926:
--

[~apurtell], great work! Mind putting this patch on Review Board so that we can 
review it?

> Profiler servlet
> 
>
> Key: HBASE-21926
> URL: https://issues.apache.org/jira/browse/HBASE-21926
> Project: HBase
>  Issue Type: New Feature
>  Components: master, Operability, regionserver
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: 1.png, 2.png, 3.png, 4.png, HBASE-21926-branch-1.patch, 
> HBASE-21926.patch
>
>
> HIVE-20202 describes how Hive added a web endpoint for online in production 
> profiling based on async-profiler. The endpoint was added as a servlet to 
> httpserver and supports retrieval of flamegraphs compiled from the profiler 
> trace. Async profiler 
> ([https://github.com/jvm-profiling-tools/async-profiler] ) can also profile 
> heap allocations, lock contention, and HW performance counters in addition to 
> CPU.
> The profiling overhead is pretty low and is safe to run in production. The 
> async-profiler project measured and describes CPU and memory overheads on 
> these issues: 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/14] and 
> [https://github.com/jvm-profiling-tools/async-profiler/issues/131] 
> We have an httpserver based servlet stack so we can use HIVE-20202 as an 
> implementation template for a similar feature for HBase daemons. Ideally we 
> achieve these requirements:
>  * Retrieve flamegraph SVG generated from latest profile trace.
>  * Online enable and disable of profiling activity. (async-profiler does not 
> do instrumentation based profiling so this should not cause the code gen 
> related perf problems of that other approach and can be safely toggled on and 
> off while under production load.)
>  * CPU profiling.
>  * ALLOCATION profiling.
>  





[jira] [Commented] (HBASE-22045) Mutable range histogram reports incorrect outliers

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791416#comment-16791416
 ] 

Hudson commented on HBASE-22045:


Results for branch branch-2
[build #1748 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1748/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1748//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1748//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1748//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Mutable range histogram reports incorrect outliers
> --
>
> Key: HBASE-22045
> URL: https://issues.apache.org/jira/browse/HBASE-22045
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.5.0, 1.3.3, 2.0.0, 1.4.9, 2.1.3, 2.2.1
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 1.3.4, 2.3.0, 2.1.5, 2.2.1
>
> Attachments: HBASE-22045.master.001.patch
>
>
> During the snapshot, MutableRangeHistogram calculates the outliers (e.g. 
> mutate_TimeRange_60-inf) and adds the counter with an incorrect 
> calculation, using the overall count of events rather than the number of 
> events in the snapshot.
> {code:java}
> long val = histogram.getCount();
> if (val - cumNum > 0) {
>   metricsRecordBuilder.addCounter(
>   Interns.info(name + "_" + rangeType + "_" + ranges[ranges.length - 
> 1] + "-inf", desc),
>   val - cumNum);
> }{code}
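The difference can be illustrated with a self-contained sketch (simplified arithmetic only, not the real MutableRangeHistogram code): the fix is to compute the outlier bucket from the snapshot's event count rather than the histogram's lifetime count:

```java
public class OutlierSketch {
  // Buggy version: uses the histogram's lifetime count as "val", so every
  // event recorded before this snapshot inflates the "-inf" outlier bucket.
  static long buggyOutlier(long lifetimeCount, long cumNumInSnapshot) {
    return lifetimeCount - cumNumInSnapshot;
  }

  // Fixed version: uses the number of events in the current snapshot, so
  // only events that genuinely exceeded the last range boundary are counted.
  static long fixedOutlier(long snapshotCount, long cumNumInSnapshot) {
    return snapshotCount - cumNumInSnapshot;
  }

  public static void main(String[] args) {
    // 1000 events ever recorded; this snapshot saw 100 events, of which 90
    // fell into the defined ranges and 10 are true outliers.
    System.out.println(buggyOutlier(1000, 90)); // prints 910 (wrong)
    System.out.println(fixedOutlier(100, 90));  // prints 10 (correct)
  }
}
```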





[jira] [Commented] (HBASE-22045) Mutable range histogram reports incorrect outliers

2019-03-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791403#comment-16791403
 ] 

Hudson commented on HBASE-22045:


Results for branch branch-2.1
[build #954 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/954/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/954//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/954//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/954//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Mutable range histogram reports incorrect outliers
> --
>
> Key: HBASE-22045
> URL: https://issues.apache.org/jira/browse/HBASE-22045
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.5.0, 1.3.3, 2.0.0, 1.4.9, 2.1.3, 2.2.1
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.10, 1.3.4, 2.3.0, 2.1.5, 2.2.1
>
> Attachments: HBASE-22045.master.001.patch
>
>
> MutableRangeHistogram, during the snapshot, calculates the outlier bucket (e.g. 
> mutate_TimeRange_60-inf) and adds the counter with an incorrect value, 
> computed from the overall (lifetime) count of events rather than the number of 
> events in the snapshot.
> {code:java}
> long val = histogram.getCount();
> if (val - cumNum > 0) {
>   metricsRecordBuilder.addCounter(
>   Interns.info(name + "_" + rangeType + "_" + ranges[ranges.length - 
> 1] + "-inf", desc),
>   val - cumNum);
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22046) add precommit check for hbase native client

2019-03-13 Thread Junhong Xu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junhong Xu updated HBASE-22046:
---
Attachment: HBASE-22046.HBASE-14850.v03.patch

> add precommit check for hbase native client
> ---
>
> Key: HBASE-22046
> URL: https://issues.apache.org/jira/browse/HBASE-22046
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Junhong Xu
>Assignee: Junhong Xu
>Priority: Minor
> Attachments: HBASE-22046.HBASE-14850.v01.patch, 
> HBASE-22046.HBASE-14850.v02.patch, HBASE-22046.HBASE-14850.v03.patch
>
>
> The UT and other checks for hbase cpp client haven't been added into the 
> Hadoop QA pipeline 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Fwd: Hbase-17978

2019-03-13 Thread Uma
-- Forwarded message -
From: Uma 
Date: Mon 11 Mar, 2019, 3:34 PM
Subject: Hbase-17978
To: 


Hi All,
 I observed that when a quota policy is enabled
that disallows compaction, the user is still able to issue the compaction
command and no error is returned to the user. However, the compaction does
not actually happen for that table; only the message below is printed in
the debug log:

as an active space quota violation policy disallows compactions.

Is this the correct behaviour?
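One way to make this visible to the user would be to fail the request instead of silently skipping it. A minimal sketch of that pattern, using illustrative names (`SpacePolicy`, `checkCompactionAllowed`; these are not HBase's actual API, though the policy values mirror HBase's SpaceViolationPolicy):

```java
public class QuotaCheckSketch {
    // Illustrative stand-in for a space-quota violation policy.
    enum SpacePolicy { NONE, NO_INSERTS, NO_WRITES, NO_WRITES_COMPACTIONS, DISABLE }

    /**
     * Throws instead of silently no-op'ing when the active policy
     * disallows compactions, so the caller sees an error.
     */
    static void checkCompactionAllowed(SpacePolicy policy) {
        if (policy == SpacePolicy.NO_WRITES_COMPACTIONS || policy == SpacePolicy.DISABLE) {
            throw new IllegalStateException(
                "Compaction disallowed by active space quota violation policy: " + policy);
        }
    }

    public static void main(String[] args) {
        checkCompactionAllowed(SpacePolicy.NONE); // allowed, returns normally
        try {
            checkCompactionAllowed(SpacePolicy.NO_WRITES_COMPACTIONS);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Whether surfacing an error (rather than a debug-level log line) is the intended contract is exactly the question above.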


[jira] [Commented] (HBASE-22046) add precommit check for hbase native client

2019-03-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791390#comment-16791390
 ] 

Hadoop QA commented on HBASE-22046:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/16374/console in case of 
problems.


> add precommit check for hbase native client
> ---
>
> Key: HBASE-22046
> URL: https://issues.apache.org/jira/browse/HBASE-22046
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Junhong Xu
>Assignee: Junhong Xu
>Priority: Minor
> Attachments: HBASE-22046.HBASE-14850.v01.patch, 
> HBASE-22046.HBASE-14850.v02.patch, HBASE-22046.HBASE-14850.v03.patch
>
>
> The UT and other checks for hbase cpp client haven't been added into the 
> Hadoop QA pipeline 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

