[jira] [Commented] (HBASE-20752) Make sure the regions are truly reopened after ReopenTableRegionsProcedure

2018-06-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520025#comment-16520025
 ] 

stack commented on HBASE-20752:
---

+1

I took a look at it up on rb. It is a bunch of good stuff. I think it's worth 
the risk backporting to 2.0.2. It is mostly bug fixing and making the reopen 
resilient. +1.

> Make sure the regions are truly reopened after ReopenTableRegionsProcedure
> --
>
> Key: HBASE-20752
> URL: https://issues.apache.org/jira/browse/HBASE-20752
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20752-v1.patch, HBASE-20752.patch
>
>
> The MRP may fail for some reason, so we need to add some checks in 
> ReopenTableRegionsProcedure: if we fail to reopen a region, we should 
> schedule a new MRP for it again.
> A future improvement may be to introduce a new type of procedure to reopen a 
> region, where we just need a simple call to the RS and do not need to 
> unassign it and assign it again.
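As a rough illustration of the check described above (not the actual ReopenTableRegionsProcedure code; the class and method names here are hypothetical), the retry pass boils down to collecting the regions that were not truly reopened so a fresh MRP can be scheduled for each:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: after a reopen pass, find the regions that are
// still not reopened so a new MRP (MoveRegionProcedure) can be scheduled
// for each of them.
public class ReopenRetrySketch {
    // Map of region name -> whether it was truly reopened.
    static List<String> regionsNeedingRetry(Map<String, Boolean> reopened) {
        List<String> retry = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : reopened.entrySet()) {
            if (!e.getValue()) {
                retry.add(e.getKey()); // would schedule a new MRP here
            }
        }
        return retry;
    }

    public static void main(String[] args) {
        Map<String, Boolean> state = new LinkedHashMap<>();
        state.put("region-a", true);
        state.put("region-b", false);
        System.out.println(regionsNeedingRetry(state)); // prints [region-b]
    }
}
```

The real procedure loops until this set is empty, which is what makes the reopen resilient to a single MRP failing.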



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520020#comment-16520020
 ] 

stack commented on HBASE-20770:
---

I was running it at scale, yes. Hundreds of nodes and 160k regions, under 
load. Perhaps logs were being removed and added, but it seemed odd getting the 
same number after a bunch of listings.

> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.2
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on a large cluster, 
> the Master log is a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging 
> minutiae rather than calling out what's actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/6043fce5761e4479819b15405183f193
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta/69e6bf4650124859b2bc7ddf134be642
> 2018-06-21 01:19:13,011 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> 

[jira] [Commented] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-21 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520017#comment-16520017
 ] 

Reid Chan commented on HBASE-20770:
---

Possibly some new files were added? A {{Removing}} line without a following 
error, warning, or exception means the file should have been cleaned, like the 
one below.
{code}
2018-06-21 01:19:12,761 DEBUG 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
{code}
Were you running IntegrationTestBigLinkedList? I could try it locally.
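The heuristic above (a {{Removing}} line with no later wait/delete warning for the same path suggests the delete went through) could be scripted roughly as follows. This is an illustrative log-scanning sketch, not part of HBase; the message prefixes it matches are taken from the excerpt in this issue:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Scan cleaner log lines: collect every path named in a "for deleting <path>,"
// warning, then report the "Removing <path>" entries with no such warning,
// i.e. the files that were presumably cleaned successfully.
public class CleanerLogScanSketch {
    static List<String> likelyCleaned(List<String> logLines) {
        Set<String> waited = new HashSet<>();
        for (String line : logLines) {
            int idx = line.indexOf("for deleting ");
            if (idx >= 0) {
                waited.add(line.substring(idx + "for deleting ".length())
                               .replace(",", "").trim());
            }
        }
        List<String> cleaned = new ArrayList<>();
        for (String line : logLines) {
            int idx = line.indexOf("Removing ");
            if (idx >= 0) {
                String path = line.substring(idx + "Removing ".length()).trim();
                if (!waited.contains(path)) cleaned.add(path);
            }
        }
        return cleaned;
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
            "DEBUG HFileCleaner: Removing hdfs://ns1/a",
            "WARN HFileCleaner: Wait more than 6 ms for deleting hdfs://ns1/b,",
            "DEBUG HFileCleaner: Removing hdfs://ns1/b");
        System.out.println(likelyCleaned(lines)); // prints [hdfs://ns1/a]
    }
}
```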

> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.2
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on a large cluster, 
> the Master log is a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging 
> minutiae rather than calling out what's actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/6043fce5761e4479819b15405183f193
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> 

[jira] [Commented] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16520005#comment-16520005
 ] 

stack commented on HBASE-20770:
---

In this case it did not seem to be cleaning any logs. It was running, timing 
out, and the overall count was not changing.

> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.2
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on a large cluster, 
> the Master log is a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging 
> minutiae rather than calling out what's actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/6043fce5761e4479819b15405183f193
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta/69e6bf4650124859b2bc7ddf134be642
> 2018-06-21 01:19:13,011 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/1a46700fbc434574a005c0b55879d5ed
> 2018-06-21 

[jira] [Updated] (HBASE-20740) StochasticLoadBalancer should consider CoprocessorService request factor when computing cost

2018-06-21 Thread chenxu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu updated HBASE-20740:
---
Release Note: 
Introduce CPRequestCostFunction for StochasticLoadBalancer, which considers 
the CoprocessorService request count when computing region load cost. The 
multiplier can be set via hbase.master.balancer.stochastic.cpRequestCost; the 
default value is 5.
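Conceptually, each cost function's normalized cost is weighted by its configured multiplier when the balancer combines them. A minimal sketch of that weighting (illustrative only, not the StochasticLoadBalancer implementation) with the new cpRequestCost multiplier at its default of 5:

```java
// Illustrative sketch of how multipliers weight normalized per-function
// costs (each in [0, 1]) into one total cost, as the stochastic balancer
// does conceptually. Not the actual HBase implementation.
public class WeightedCostSketch {
    static double totalCost(double[] costs, double[] multipliers) {
        double weighted = 0.0, sum = 0.0;
        for (int i = 0; i < costs.length; i++) {
            weighted += costs[i] * multipliers[i];
            sum += multipliers[i];
        }
        return sum == 0.0 ? 0.0 : weighted / sum;
    }

    public static void main(String[] args) {
        // e.g. a read-request cost of 0.2 and a CoprocessorService request
        // cost of 0.8, both with multiplier 5 (the cpRequestCost default)
        double total = totalCost(new double[]{0.2, 0.8}, new double[]{5, 5});
        System.out.println(total); // prints 0.5
    }
}
```

A multiplier of 0 effectively disables a cost function, which is the usual way to opt out of one of these factors.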

> StochasticLoadBalancer should consider CoprocessorService request factor when 
> computing cost
> 
>
> Key: HBASE-20740
> URL: https://issues.apache.org/jira/browse/HBASE-20740
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
> Attachments: 20740-master-v2.patch, HBASE-20740-master-v1.patch, 
> HBASE-20740-master-v3.patch
>
>
> When computing region load cost, ReadRequest, WriteRequest, MemStoreSize, 
> and StoreFileSize are considered, but CoprocessorService requests are 
> ignored. In our KYLIN cluster, there are only CoprocessorService requests, 
> and the cluster is sometimes unbalanced.





[jira] [Commented] (HBASE-18569) Add prefetch support for async region locator

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519986#comment-16519986
 ] 

Hadoop QA commented on HBASE-18569:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 33s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}133m 
10s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-18569 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928695/HBASE-18569-v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux da1fbec4bfb6 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / bc9f9ae080 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 

[jira] [Issue Comment Deleted] (HBASE-20740) StochasticLoadBalancer should consider CoprocessorService request factor when computing cost

2018-06-21 Thread chenxu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu updated HBASE-20740:
---
Comment: was deleted

(was: {quote} Please fill out release note. 
{quote}
Has the patch been committed? I can't edit the *Release Note*)

> StochasticLoadBalancer should consider CoprocessorService request factor when 
> computing cost
> 
>
> Key: HBASE-20740
> URL: https://issues.apache.org/jira/browse/HBASE-20740
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
> Attachments: 20740-master-v2.patch, HBASE-20740-master-v1.patch, 
> HBASE-20740-master-v3.patch
>
>
> When computing region load cost, ReadRequest, WriteRequest, MemStoreSize, 
> and StoreFileSize are considered, but CoprocessorService requests are 
> ignored. In our KYLIN cluster, there are only CoprocessorService requests, 
> and the cluster is sometimes unbalanced.





[jira] [Commented] (HBASE-20740) StochasticLoadBalancer should consider CoprocessorService request factor when computing cost

2018-06-21 Thread chenxu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519984#comment-16519984
 ] 

chenxu commented on HBASE-20740:


{quote} Please fill out release note. 
{quote}
Has the patch been committed? I can't edit the *Release Note*

> StochasticLoadBalancer should consider CoprocessorService request factor when 
> computing cost
> 
>
> Key: HBASE-20740
> URL: https://issues.apache.org/jira/browse/HBASE-20740
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
> Attachments: 20740-master-v2.patch, HBASE-20740-master-v1.patch, 
> HBASE-20740-master-v3.patch
>
>
> When computing region load cost, ReadRequest, WriteRequest, MemStoreSize, 
> and StoreFileSize are considered, but CoprocessorService requests are 
> ignored. In our KYLIN cluster, there are only CoprocessorService requests, 
> and the cluster is sometimes unbalanced.





[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519982#comment-16519982
 ] 

Hadoop QA commented on HBASE-20403:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
36s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 24s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}132m 
48s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20403 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928693/hbase-20403.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 8b6974dfd586 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / bc9f9ae080 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13338/testReport/ |
| Max. process+thread count | 4328 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13338/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Prefetch sometimes doesn't work with 

[jira] [Commented] (HBASE-20775) TestMultiParallel is flakey

2018-06-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519972#comment-16519972
 ] 

Duo Zhang commented on HBASE-20775:
---

It seems the problem is that there is no retry if we fail to locate meta. I 
guess the exception in relocateRegion is thrown out directly, so I moved the 
relocateRegion call to the top, inside the try block.

If precommit is OK, I will push it to the master branch first to see if it 
helps.
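Structurally, the fix amounts to having the relocate call inside the try block so a failure feeds the retry loop instead of escaping to the caller. A minimal sketch with hypothetical names (this is not the actual client locator code):

```java
// Illustrative sketch of the fix described above: the relocate call sits
// inside the try block, so a transient failure is caught and retried rather
// than thrown straight out to the caller. Names are hypothetical.
public class RelocateRetrySketch {
    interface Locator { String relocate(); }

    // maxAttempts is assumed >= 1.
    static String locateWithRetry(Locator locator, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                // Before the fix, this call was effectively outside the try:
                // any exception escaped immediately and no retry happened.
                return locator.relocate();
            } catch (RuntimeException e) {
                last = e; // remember the failure and retry
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        String loc = locateWithRetry(() -> {
            if (calls[0]++ == 0) throw new RuntimeException("transient locate failure");
            return "meta-region-server";
        }, 3);
        System.out.println(loc); // prints meta-region-server
    }
}
```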

> TestMultiParallel is flakey
> ---
>
> Key: HBASE-20775
> URL: https://issues.apache.org/jira/browse/HBASE-20775
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20775.patch, 
> org.apache.hadoop.hbase.client.TestMultiParallel-output.txt
>
>
> Seems the problem is related to HBASE-20708, where we may give states other 
> than OPEN to the meta region.
> Let me dig more.





[jira] [Updated] (HBASE-20775) TestMultiParallel is flakey

2018-06-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20775:
--
Attachment: org.apache.hadoop.hbase.client.TestMultiParallel-output.txt

> TestMultiParallel is flakey
> ---
>
> Key: HBASE-20775
> URL: https://issues.apache.org/jira/browse/HBASE-20775
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20775.patch, 
> org.apache.hadoop.hbase.client.TestMultiParallel-output.txt
>
>
> Seems the problem is related to HBASE-20708, where we may give states other 
> than OPEN to the meta region.
> Let me dig more.





[jira] [Updated] (HBASE-20775) TestMultiParallel is flakey

2018-06-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20775:
--
Attachment: HBASE-20775.patch

> TestMultiParallel is flakey
> ---
>
> Key: HBASE-20775
> URL: https://issues.apache.org/jira/browse/HBASE-20775
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20775.patch
>
>
> Seems the problem is related to HBASE-20708, where we will give states other 
> than OPEN to meta region.
> Let me dig more.





[jira] [Updated] (HBASE-20775) TestMultiParallel is flakey

2018-06-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20775:
--
Assignee: Duo Zhang
  Status: Patch Available  (was: Open)

> TestMultiParallel is flakey
> ---
>
> Key: HBASE-20775
> URL: https://issues.apache.org/jira/browse/HBASE-20775
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20775.patch
>
>
> Seems the problem is related to HBASE-20708, where we will give states other 
> than OPEN to meta region.
> Let me dig more.





[jira] [Updated] (HBASE-20740) StochasticLoadBalancer should consider CoprocessorService request factor when computing cost

2018-06-21 Thread chenxu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu updated HBASE-20740:
---
Attachment: HBASE-20740-master-v3.patch

> StochasticLoadBalancer should consider CoprocessorService request factor when 
> computing cost
> 
>
> Key: HBASE-20740
> URL: https://issues.apache.org/jira/browse/HBASE-20740
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
> Attachments: 20740-master-v2.patch, HBASE-20740-master-v1.patch, 
> HBASE-20740-master-v3.patch
>
>
> When computing region load cost, ReadRequest, WriteRequest, MemStoreSize and 
> StoreFileSize are considered, but CoprocessorService requests are ignored. In 
> our KYLIN cluster, there are only CoprocessorService requests, and the 
> cluster is sometimes unbalanced.
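To illustrate the gap being described, here is a toy, normalized request-imbalance cost (my own sketch, not HBase's actual CostFunction API): if only read/write requests feed the cost, a coprocessor-only workload looks perfectly balanced to the balancer even when all of its requests land on one server.

```java
import java.util.Arrays;

public class RequestCostSketch {
    // Toy imbalance cost over per-server request counts: 0.0 when load is
    // perfectly even, 1.0 when all requests land on a single server.
    static double cost(long[] requestsPerServer) {
        long total = Arrays.stream(requestsPerServer).sum();
        if (total == 0) return 0.0;
        double mean = (double) total / requestsPerServer.length;
        double dev = 0.0;
        for (long r : requestsPerServer) {
            dev += Math.abs(r - mean);
        }
        double maxDev = 2.0 * (total - mean); // everything on one server
        return maxDev == 0 ? 0.0 : dev / maxDev;
    }

    public static void main(String[] args) {
        // Read/write requests are even, so a balancer looking only at them
        // sees cost 0.0 -- while the coprocessor requests, invisible to it,
        // are maximally skewed (cost 1.0).
        System.out.println(cost(new long[] {100, 100, 100})); // 0.0
        System.out.println(cost(new long[] {300, 0, 0}));     // 1.0
    }
}
```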





[jira] [Updated] (HBASE-20767) Always close hbaseAdmin along with connection in HBTU

2018-06-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20767:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch-2 and branch-2.0.

Thanks [~allan163] for reviewing.

> Always close hbaseAdmin along with connection in HBTU
> -
>
> Key: HBASE-20767
> URL: https://issues.apache.org/jira/browse/HBASE-20767
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: HBASE-20767.patch
>
>
> The patch for HBASE-20727 breaks TestMaster as it calls the 
> HBTU.restartHBaseCluster, where we close the connection but do not close the 
> hbaseAdmin. And then the next time when we create a table through the 
> hbaseAdmin, we will get a DoNotRetryIOException since the connection it uses 
> has already been closed.
> {noformat}
> org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while 
> trying to get master
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1156)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1213)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1202)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1408)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1384)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1442)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1342)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1318)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1288)
>   at 
> org.apache.hadoop.hbase.master.TestMaster.testMasterOpsWhileSplitting(TestMaster.java:101)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
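A miniature model of the fix (hypothetical classes, not the real HBTU/Admin API): tie the cached admin's lifetime to the connection, so closing the connection also drops the admin, and the next getAdmin() builds a fresh pair instead of reusing an admin bound to a dead connection.

```java
import java.io.Closeable;
import java.io.IOException;

public class TestingUtilSketch {
    static class Connection implements Closeable {
        boolean closed;
        @Override public void close() { closed = true; }
    }

    static class Admin implements Closeable {
        final Connection conn;
        Admin(Connection conn) { this.conn = conn; }
        void createTable(String name) throws IOException {
            // mirrors the DoNotRetryIOException in the stack trace above
            if (conn.closed) throw new IOException("Connection was closed");
        }
        @Override public void close() { }
    }

    private Connection connection;
    private Admin admin;

    Admin getAdmin() {
        if (connection == null) connection = new Connection();
        if (admin == null) admin = new Admin(connection);
        return admin;
    }

    // The fix: close (and drop) the cached admin together with the
    // connection, instead of leaving it pointing at a closed connection.
    void closeConnection() {
        if (admin != null) { admin.close(); admin = null; }
        if (connection != null) { connection.close(); connection = null; }
    }
}
```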



--

[jira] [Commented] (HBASE-20767) Always close hbaseAdmin along with connection in HBTU

2018-06-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519918#comment-16519918
 ] 

Duo Zhang commented on HBASE-20767:
---

Seems to have worked. Let me commit to branch-2 and branch-2.0.

> Always close hbaseAdmin along with connection in HBTU
> -
>
> Key: HBASE-20767
> URL: https://issues.apache.org/jira/browse/HBASE-20767
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: HBASE-20767.patch
>
>
> The patch for HBASE-20727 breaks TestMaster as it calls the 
> HBTU.restartHBaseCluster, where we close the connection but do not close the 
> hbaseAdmin. And then the next time when we create a table through the 
> hbaseAdmin, we will get a DoNotRetryIOException since the connection it uses 
> has already been closed.
> {noformat}
> org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while 
> trying to get master
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1156)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1213)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1202)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1408)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1384)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1442)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1342)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1318)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1288)
>   at 
> org.apache.hadoop.hbase.master.TestMaster.testMasterOpsWhileSplitting(TestMaster.java:101)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}





[jira] [Commented] (HBASE-20697) Can't cache All region locations of the specify table by calling table.getRegionLocator().getAllRegionLocations()

2018-06-21 Thread zhaoyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519915#comment-16519915
 ] 

zhaoyuan commented on HBASE-20697:
--

I think in the RegionLocations constructor, the property numNonNullElements has 
not been calculated correctly.

Please check the details in the comment above, thanks~
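For reference, the invariant being discussed looks roughly like this (an illustrative stand-in, not the real RegionLocations class): the constructor must count only the non-null entries, rather than assume the array length.

```java
public class NonNullCountSketch {
    // Illustrative stand-in for RegionLocations: numNonNullElements should
    // reflect only the slots that actually hold a location.
    static class Locations {
        final String[] locations;
        final int numNonNullElements;

        Locations(String... locations) {
            this.locations = locations;
            int count = 0;
            for (String l : locations) {
                if (l != null) {
                    count++; // count real entries only
                }
            }
            this.numNonNullElements = count;
        }
    }
}
```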

> Can't cache All region locations of the specify table by calling 
> table.getRegionLocator().getAllRegionLocations()
> -
>
> Key: HBASE-20697
> URL: https://issues.apache.org/jira/browse/HBASE-20697
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
>Priority: Minor
> Fix For: 1.2.7, 1.3.3
>
> Attachments: HBASE-20697.branch-1.2.001.patch, 
> HBASE-20697.branch-1.2.002.patch, HBASE-20697.branch-1.2.003.patch, 
> HBASE-20697.branch-1.2.004.patch, HBASE-20697.master.001.patch, 
> HBASE-20697.master.002.patch
>
>
> When we upgrade and restart a new version of the application which will read 
> and write to HBase, we will get some operation timeouts. The timeouts are 
> expected because when the application restarts, it will not hold any region 
> locations cache and has to communicate with zk and the meta regionserver to 
> get region locations.
> We want to avoid these timeouts, so we do warmup work, and as far as I am 
> concerned, the method table.getRegionLocator().getAllRegionLocations() should 
> fetch all region locations and cache them. However, it didn't work well. 
> There are still a lot of timeouts, so it confused me. 
> I dug into the source code and found something below
> {code:java}
> // code placeholder
> public List<HRegionLocation> getAllRegionLocations() throws IOException {
>   TableName tableName = getName();
>   NavigableMap<HRegionInfo, ServerName> locations =
>       MetaScanner.allTableRegions(this.connection, tableName);
>   ArrayList<HRegionLocation> regions = new ArrayList<>(locations.size());
>   for (Entry<HRegionInfo, ServerName> entry : locations.entrySet()) {
>     regions.add(new HRegionLocation(entry.getKey(), entry.getValue()));
>   }
>   if (regions.size() > 0) {
>     connection.cacheLocation(tableName, new RegionLocations(regions));
>   }
>   return regions;
> }
> In MetaCache
> public void cacheLocation(final TableName tableName,
>     final RegionLocations locations) {
>   byte[] startKey =
>       locations.getRegionLocation().getRegionInfo().getStartKey();
>   ConcurrentMap<byte[], RegionLocations> tableLocations =
>       getTableLocations(tableName);
>   RegionLocations oldLocation =
>       tableLocations.putIfAbsent(startKey, locations);
>   boolean isNewCacheEntry = (oldLocation == null);
>   if (isNewCacheEntry) {
>     if (LOG.isTraceEnabled()) {
>       LOG.trace("Cached location: " + locations);
>     }
>     addToCachedServers(locations);
>     return;
>   }
> {code}
> It will collect all regions into one RegionLocations object and only cache 
> the first non-null region location, and then when we put or get to hbase, we 
> do getCachedLocation()
> {code:java}
> // code placeholder
> public RegionLocations getCachedLocation(final TableName tableName,
>     final byte[] row) {
>   ConcurrentNavigableMap<byte[], RegionLocations> tableLocations =
>       getTableLocations(tableName);
>   Entry<byte[], RegionLocations> e = tableLocations.floorEntry(row);
>   if (e == null) {
>     if (metrics != null) metrics.incrMetaCacheMiss();
>     return null;
>   }
>   RegionLocations possibleRegion = e.getValue();
>   // make sure that the end key is greater than the row we're looking
>   // for, otherwise the row actually belongs in the next region, not
>   // this one. the exception case is when the endkey is
>   // HConstants.EMPTY_END_ROW, signifying that the region we're
>   // checking is actually the last region in the table.
>   byte[] endKey =
>       possibleRegion.getRegionLocation().getRegionInfo().getEndKey();
>   if (Bytes.equals(endKey, HConstants.EMPTY_END_ROW) ||
>       getRowComparator(tableName).compareRows(
>           endKey, 0, endKey.length, row, 0, row.length) > 0) {
>     if (metrics != null) metrics.incrMetaCacheHit();
>     return possibleRegion;
>   }
>   // Passed all the way through, so we got nothing - complete cache miss
>   if (metrics != null) metrics.incrMetaCacheMiss();
>   return null;
> }
> {code}
> It will choose the first location as possibleRegion, and it may mismatch.
> So did I forget something, or am I wrong somewhere? If this is indeed a bug, 
> I think it is not very hard to fix.
> Hope committers and the PMC review this!
>  
>  
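The mismatch described above can be modeled in a few lines (my own simplification using String keys; the real MetaCache uses byte[] keys and RegionLocations): if the warmup caches everything under only the first region's start key, floorEntry finds a region that does not cover the row and the lookup degrades to a cache miss.

```java
import java.util.Map;
import java.util.TreeMap;

public class RegionCacheSketch {
    static class Region {
        final String startKey, endKey, server;
        Region(String startKey, String endKey, String server) {
            this.startKey = startKey; this.endKey = endKey; this.server = server;
        }
        // Empty endKey plays the role of HConstants.EMPTY_END_ROW.
        boolean contains(String row) {
            return row.compareTo(startKey) >= 0
                && (endKey.isEmpty() || row.compareTo(endKey) < 0);
        }
    }

    // Warmup as described: a single cache entry keyed by the first
    // region's start key.
    static TreeMap<String, Region> warmupBuggy(Region[] regions) {
        TreeMap<String, Region> cache = new TreeMap<>();
        cache.put(regions[0].startKey, regions[0]);
        return cache;
    }

    // Warmup that caches every region under its own start key.
    static TreeMap<String, Region> warmupFixed(Region[] regions) {
        TreeMap<String, Region> cache = new TreeMap<>();
        for (Region r : regions) {
            cache.put(r.startKey, r);
        }
        return cache;
    }

    // Same shape as getCachedLocation: floor lookup, then an end-key check.
    static Region lookup(TreeMap<String, Region> cache, String row) {
        Map.Entry<String, Region> e = cache.floorEntry(row);
        if (e == null || !e.getValue().contains(row)) {
            return null; // cache miss -> back to meta, i.e. the timeouts
        }
        return e.getValue();
    }
}
```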





[jira] [Comment Edited] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-21 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519910#comment-16519910
 ] 

Reid Chan edited comment on HBASE-20770 at 6/22/18 2:19 AM:


bq. Was thinking of updating the 60 seconds too... to ten minutes or something.
Sir, there's an issue about making the waiting time configurable: 
https://issues.apache.org/jira/browse/HBASE-20401
But 10mins or so is too long; it will block the current chore and delay the 
next chore, which leads to oldWALs or the archive piling up. Just leave it to 
the next chore to clean; 1min by default should be fine.


was (Author: reidchan):
bq. Was thinking of updating the 60 seconds too... to ten minutes or something.
Sir, there's an issue about make the waiting time configurable, 
https://issues.apache.org/jira/browse/HBASE-20401
But 10mins or else is too long, it will stuck current chore and delay next 
chore, which leads to oldWALs or archive piled up. Just leave it to next chore 
to clean, 1min by default should be fine.

> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.2
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on a large cluster, 
> the Master log is a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging 
> minutiae rather than calling out what's actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> 

[jira] [Commented] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-21 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519910#comment-16519910
 ] 

Reid Chan commented on HBASE-20770:
---

bq. Was thinking of updating the 60 seconds too... to ten minutes or something.
Sir, there's an issue about making the waiting time configurable: 
https://issues.apache.org/jira/browse/HBASE-20401
But 10mins or so is too long; it will block the current chore and delay the 
next chore, which leads to oldWALs or the archive piling up. Just leave it to 
the next chore to clean; 1min by default should be fine.
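The trade-off being argued (a short per-run wait that leaves leftovers for the next chore, versus a long wait that stalls the current chore) can be sketched like this (a toy model, not the actual HFileCleaner implementation):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class BoundedCleanerSketch {
    // Each chore run gets a time budget; files it cannot delete in time
    // stay queued for the next run instead of blocking this one.
    static int cleanWithBudget(Queue<String> toDelete, long budgetMs, long costPerFileMs) {
        long spent = 0;
        int deleted = 0;
        while (!toDelete.isEmpty() && spent + costPerFileMs <= budgetMs) {
            toDelete.poll();          // "delete" one file
            spent += costPerFileMs;
            deleted++;
        }
        return deleted;               // leftovers wait for the next chore
    }

    public static void main(String[] args) {
        Queue<String> oldWals = new ArrayDeque<>();
        for (int i = 0; i < 10; i++) {
            oldWals.add("oldWALs/wal-" + i);
        }
        // Budget of 5ms at 1ms per file: 5 deleted now, 5 left for later.
        System.out.println(cleanWithBudget(oldWals, 5, 1)); // 5
        System.out.println(oldWals.size());                 // 5
    }
}
```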

> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.2
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on a large cluster, 
> the Master log is a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging 
> minutiae rather than calling out what's actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/6043fce5761e4479819b15405183f193
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> 

[jira] [Commented] (HBASE-20532) Use try-with-resources in BackupSystemTable

2018-06-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519894#comment-16519894
 ] 

Hudson commented on HBASE-20532:


Results for branch master
[build #372 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/372/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/372//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/372//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/372//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Use try-with-resources in BackupSystemTable
> ---
>
> Key: HBASE-20532
> URL: https://issues.apache.org/jira/browse/HBASE-20532
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andy Lin
>Assignee: Andy Lin
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-20532.v0.patch, HBASE-20532.v1.patch, 
> HBASE-20532.v2.patch
>
>
> Use try-with-resources in BackupSystemTable for describeBackupSet and 
> listBackupSets.





[jira] [Commented] (HBASE-20767) Always close hbaseAdmin along with connection in HBTU

2018-06-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519893#comment-16519893
 ] 

Hudson commented on HBASE-20767:


Results for branch master
[build #372 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/372/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/372//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/372//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/372//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Always close hbaseAdmin along with connection in HBTU
> -
>
> Key: HBASE-20767
> URL: https://issues.apache.org/jira/browse/HBASE-20767
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: HBASE-20767.patch
>
>
> The patch for HBASE-20727 breaks TestMaster as it calls the 
> HBTU.restartHBaseCluster, where we close the connection but do not close the 
> hbaseAdmin. And then the next time when we create a table through the 
> hbaseAdmin, we will get a DoNotRetryIOException since the connection it uses 
> has already been closed.
> {noformat}
> org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while 
> trying to get master
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1156)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1213)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1202)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1408)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1384)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1442)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1342)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1318)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1288)
>   at 
> org.apache.hadoop.hbase.master.TestMaster.testMasterOpsWhileSplitting(TestMaster.java:101)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at 

[jira] [Created] (HBASE-20775) TestMultiParallel is flakey

2018-06-21 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-20775:
-

 Summary: TestMultiParallel is flakey
 Key: HBASE-20775
 URL: https://issues.apache.org/jira/browse/HBASE-20775
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Reporter: Duo Zhang
 Fix For: 3.0.0, 2.1.0


Seems the problem is related to HBASE-20708, where we will give states other 
than OPEN to the meta region.

Let me dig more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20752) Make sure the regions are truly reopened after ReopenTableRegionsProcedure

2018-06-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519879#comment-16519879
 ] 

Duo Zhang commented on HBASE-20752:
---

Any concerns sir? [~stack] Plan to cut branch-2.1 after this is in.

Thanks.

> Make sure the regions are truly reopened after ReopenTableRegionsProcedure
> --
>
> Key: HBASE-20752
> URL: https://issues.apache.org/jira/browse/HBASE-20752
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20752-v1.patch, HBASE-20752.patch
>
>
> The MRP may fail for some reason, so we need to add checks in 
> ReopenTableRegionsProcedure: if we fail to reopen a region, we should 
> schedule a new MRP for it again.
> A future improvement may be to introduce a new type of procedure to reopen 
> a region, where we just need a simple call to the RS and do not need to 
> unassign it and assign it again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18569) Add prefetch support for async region locator

2018-06-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18569:
--
Attachment: HBASE-18569-v1.patch

> Add prefetch support for async region locator
> -
>
> Key: HBASE-18569
> URL: https://issues.apache.org/jira/browse/HBASE-18569
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-18569-v1.patch, HBASE-18569.patch
>
>
> It will be useful for large tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-21 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519874#comment-16519874
 ] 

Todd Lipcon commented on HBASE-20403:
-

OK. New revision fixes the checkstyle. If someone out there knows how to 
reproduce the originally-reported issue and can check that this patch fixes it, 
that would be great confirmation that there isn't another issue lurking.

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> Log from a long-running test has the following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
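The IllegalArgumentException at the top of the quoted stack trace originates in java.nio.Buffer#limit, which rejects any limit larger than the buffer's capacity. A self-contained demonstration of that contract (plain JDK, no HBase involved) matching the failure mode where the size computed from the encrypted stream exceeds the allocated buffer:

```java
import java.nio.ByteBuffer;

public class LimitDemo {
    // Returns true if setting the given limit on a buffer of the given
    // capacity throws IllegalArgumentException, mirroring the failure in
    // the stack trace: a size-on-disk computed from the encrypted stream
    // that exceeds the buffer that was allocated for the block.
    static boolean limitRejected(int capacity, int limit) {
        try {
            ByteBuffer.allocate(capacity).limit(limit);
            return false;
        } catch (IllegalArgumentException e) {
            return true;
        }
    }
}
```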


[jira] [Updated] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-21 Thread Todd Lipcon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HBASE-20403:

Attachment: hbase-20403.patch

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: hbase-20403.patch, hbase-20403.patch
>
>
> Log from a long-running test has the following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20774) FSHDFSUtils#isSameHdfs doesn't handle S3 filesystems correctly.

2018-06-21 Thread Austin Heyne (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Heyne updated HBASE-20774:
-
Description: 
FSHDFSUtils#isSameHdfs retrieves the Canonical Service Name from Hadoop to 
determine if source and destination are on the same filesystem. 
NativeS3FileSystem, S3FileSystem and presumably S3NativeFileSystem (com.amazon) 
always return null in getCanonicalServiceName() which incorrectly causes 
isSameHdfs to return false even when they could be the same. 

Error encountered while trying to perform bulk load from S3 to HBase on S3 
backed by the same bucket. This is causing bulk loads from S3 to copy all the 
data to the works and back up to S3.

  was:
FSHDFSUtils#isSameHdfs retrieves the Canonical Service Name from Hadoop to 
determine if source and destination are on the same filesystem. 
NativeS3FileSystem, S3FileSystem and presumably S3NativeFileSystem (com.amazon) 
always return null in getCanonicalServiceName() which incorrectly causes 
isSameHdfs to return false even when they could be the same. 

Error encountered while trying to perform bulk load from S3 to HBase on S3 
backed by the same bucket. 


> FSHDFSUtils#isSameHdfs doesn't handle S3 filesystems correctly.
> ---
>
> Key: HBASE-20774
> URL: https://issues.apache.org/jira/browse/HBASE-20774
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration
>Reporter: Austin Heyne
>Priority: Major
>  Labels: S3, S3Native, s3
>
> FSHDFSUtils#isSameHdfs retrieves the Canonical Service Name from Hadoop to 
> determine if source and destination are on the same filesystem. 
> NativeS3FileSystem, S3FileSystem and presumably S3NativeFileSystem 
> (com.amazon) always return null in getCanonicalServiceName() which 
> incorrectly causes isSameHdfs to return false even when they could be the 
> same. 
> Error encountered while trying to perform bulk load from S3 to HBase on S3 
> backed by the same bucket. This is causing bulk loads from S3 to copy all the 
> data to the works and back up to S3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20774) FSHDFSUtils#isSameHdfs doesn't handle S3 filesystems correctly.

2018-06-21 Thread Austin Heyne (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Heyne updated HBASE-20774:
-
Description: 
FSHDFSUtils#isSameHdfs retrieves the Canonical Service Name from Hadoop to 
determine if source and destination are on the same filesystem. 
NativeS3FileSystem, S3FileSystem and presumably S3NativeFileSystem (com.amazon) 
always return null in getCanonicalServiceName() which incorrectly causes 
isSameHdfs to return false even when they could be the same. 

Error encountered while trying to perform bulk load from S3 to HBase on S3 
backed by the same bucket. This is causing bulk loads from S3 to copy all the 
data to the workers and back up to S3.

  was:
FSHDFSUtils#isSameHdfs retrieves the Canonical Service Name from Hadoop to 
determine if source and destination are on the same filesystem. 
NativeS3FileSystem, S3FileSystem and presumably S3NativeFileSystem (com.amazon) 
always return null in getCanonicalServiceName() which incorrectly causes 
isSameHdfs to return false even when they could be the same. 

Error encountered while trying to perform bulk load from S3 to HBase on S3 
backed by the same bucket. This is causing bulk loads from S3 to copy all the 
data to the works and back up to S3.


> FSHDFSUtils#isSameHdfs doesn't handle S3 filesystems correctly.
> ---
>
> Key: HBASE-20774
> URL: https://issues.apache.org/jira/browse/HBASE-20774
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration
>Reporter: Austin Heyne
>Priority: Major
>  Labels: S3, S3Native, s3
>
> FSHDFSUtils#isSameHdfs retrieves the Canonical Service Name from Hadoop to 
> determine if source and destination are on the same filesystem. 
> NativeS3FileSystem, S3FileSystem and presumably S3NativeFileSystem 
> (com.amazon) always return null in getCanonicalServiceName() which 
> incorrectly causes isSameHdfs to return false even when they could be the 
> same. 
> Error encountered while trying to perform bulk load from S3 to HBase on S3 
> backed by the same bucket. This is causing bulk loads from S3 to copy all the 
> data to the workers and back up to S3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20774) FSHDFSUtils#isSameHdfs doesn't handle S3 filesystems correctly.

2018-06-21 Thread Austin Heyne (JIRA)
Austin Heyne created HBASE-20774:


 Summary: FSHDFSUtils#isSameHdfs doesn't handle S3 filesystems 
correctly.
 Key: HBASE-20774
 URL: https://issues.apache.org/jira/browse/HBASE-20774
 Project: HBase
  Issue Type: Bug
  Components: Filesystem Integration
Reporter: Austin Heyne


FSHDFSUtils#isSameHdfs retrieves the Canonical Service Name from Hadoop to 
determine if source and destination are on the same filesystem. 
NativeS3FileSystem, S3FileSystem and presumably S3NativeFileSystem (com.amazon) 
always return null in getCanonicalServiceName() which incorrectly causes 
isSameHdfs to return false even when they could be the same. 

Error encountered while trying to perform bulk load from S3 to HBase on S3 
backed by the same bucket. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
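One direction for a fix, sketched here only as an illustration: when getCanonicalServiceName() returns null (as the S3 filesystems do), fall back to comparing URI scheme and authority rather than concluding the filesystems differ. The helper and its names below are hypothetical, not the actual FSHDFSUtils code:

```java
import java.net.URI;
import java.util.Objects;

public class SameFsCheck {
    // srcService/dstService stand in for FileSystem#getCanonicalServiceName(),
    // which may be null for S3-backed filesystems; the URIs are the
    // filesystem URIs of the source and destination.
    static boolean isSameFs(String srcService, String srcUri,
                            String dstService, String dstUri) {
        if (srcService != null && dstService != null) {
            return srcService.equals(dstService);
        }
        // Null service name: compare scheme and authority (e.g. the S3
        // bucket) instead of treating the two filesystems as different.
        URI src = URI.create(srcUri);
        URI dst = URI.create(dstUri);
        return Objects.equals(src.getScheme(), dst.getScheme())
            && Objects.equals(src.getAuthority(), dst.getAuthority());
    }
}
```

With a check like this, a bulk load from s3n://bucket/staging into a table whose root is also s3n://bucket would be treated as same-filesystem instead of forcing a copy through the workers.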


[jira] [Updated] (HBASE-20773) Broken link in hbase-personality.sh

2018-06-21 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-20773:
---
Description: 
In the following lines in hbase-personality.sh:

You'll need a local installation of
[Apache Yetus' precommit 
checker](http://yetus.apache.org/documentation/0.1.0/#yetus-precommit)
to use this personality.


The Yetus precommit checker link is broken and returns the following when 
attempted:

Not Found
The requested URL /documentation/0.1.0/ was not found on this server.
-

  was:
In the following lines:

You'll need a local installation of
[Apache Yetus' precommit 
checker](http://yetus.apache.org/documentation/0.1.0/#yetus-precommit)
to use this personality.


The Yetus precommit checker is broken and returns the following when attempted:

Not Found
The requested URL /documentation/0.1.0/ was not found on this server.
-


> Broken link in hbase-personality.sh
> ---
>
> Key: HBASE-20773
> URL: https://issues.apache.org/jira/browse/HBASE-20773
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Priority: Minor
>  Labels: beginner, beginners
>
> In the following lines in hbase-personality.sh:
> 
> You'll need a local installation of
> [Apache Yetus' precommit 
> checker](http://yetus.apache.org/documentation/0.1.0/#yetus-precommit)
> to use this personality.
> 
> The Yetus precommit checker link is broken and returns the following when 
> attempted:
> 
> Not Found
> The requested URL /documentation/0.1.0/ was not found on this server.
> -



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20773) Broken link in hbase-personality.sh

2018-06-21 Thread Sakthi (JIRA)
Sakthi created HBASE-20773:
--

 Summary: Broken link in hbase-personality.sh
 Key: HBASE-20773
 URL: https://issues.apache.org/jira/browse/HBASE-20773
 Project: HBase
  Issue Type: Bug
Reporter: Sakthi


In the following lines:

# You'll need a local installation of
# [Apache Yetus' precommit 
checker](http://yetus.apache.org/documentation/0.1.0/#yetus-precommit)
# to use this personality.


The Yetus precommit checker is broken and returns the following when attempted:

Not Found
The requested URL /documentation/0.1.0/ was not found on this server.
-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20773) Broken link in hbase-personality.sh

2018-06-21 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-20773:
---
Description: 
In the following lines:

You'll need a local installation of
[Apache Yetus' precommit 
checker](http://yetus.apache.org/documentation/0.1.0/#yetus-precommit)
to use this personality.


The Yetus precommit checker is broken and returns the following when attempted:

Not Found
The requested URL /documentation/0.1.0/ was not found on this server.
-

  was:
In the following lines:

# You'll need a local installation of
# [Apache Yetus' precommit 
checker](http://yetus.apache.org/documentation/0.1.0/#yetus-precommit)
# to use this personality.


The Yetus precommit checker is broken and returns the following when attempted:

Not Found
The requested URL /documentation/0.1.0/ was not found on this server.
-


> Broken link in hbase-personality.sh
> ---
>
> Key: HBASE-20773
> URL: https://issues.apache.org/jira/browse/HBASE-20773
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Priority: Minor
>  Labels: beginner, beginners
>
> In the following lines:
> 
> You'll need a local installation of
> [Apache Yetus' precommit 
> checker](http://yetus.apache.org/documentation/0.1.0/#yetus-precommit)
> to use this personality.
> 
> The Yetus precommit checker is broken and returns the following when 
> attempted:
> 
> Not Found
> The requested URL /documentation/0.1.0/ was not found on this server.
> -



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519766#comment-16519766
 ] 

stack commented on HBASE-20770:
---

Fix these too:

2018-06-21 13:18:20,076 WARN 
org.apache.hadoop.hbase.master.assignment.AssignProcedure: Skipping update of 
open seqnum with 0 because current seqnum=221061

I think they are useless if the seqnum is zero.

And these lines get really big if a server was carrying hundreds of regions... 
make them smaller: ProcedureExecutor: Initialized subprocedures=[{pid=3449, 
ppid=1607, state=RUNNABLE:REGION_TRANSITION_QUEUE...

> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.2
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on a large cluster, 
> the Master log is a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging 
> minutiae rather than calling out what's actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/6043fce5761e4479819b15405183f193
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> 

[jira] [Commented] (HBASE-20740) StochasticLoadBalancer should consider CoprocessorService request factor when computing cost

2018-06-21 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519759#comment-16519759
 ] 

Ted Yu commented on HBASE-20740:


Please fill out release note.

> StochasticLoadBalancer should consider CoprocessorService request factor when 
> computing cost
> 
>
> Key: HBASE-20740
> URL: https://issues.apache.org/jira/browse/HBASE-20740
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
> Attachments: 20740-master-v2.patch, HBASE-20740-master-v1.patch
>
>
> When computing region load cost, ReadRequest, WriteRequest, MemStoreSize and 
> StoreFileSize are considered, but CoprocessorService requests are ignored. In 
> our KYLIN cluster, there are only CoprocessorService requests, and the 
> cluster is sometimes unbalanced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
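The report above can be made concrete with a sketch of how an extra cost factor typically plugs in: normalize the per-server coprocessor request distribution into an imbalance score in [0,1] and weight it alongside the existing read/write factors. Everything here (class, method, weight) is an illustrative assumption, not the actual StochasticLoadBalancer code:

```java
public class CoprocessorCostSketch {
    // Imbalance in [0,1]: 0 when per-server request counts are even,
    // 1 when a single server handles all of them. The weight plays the
    // same role as the multipliers on the existing cost functions.
    static double weightedCost(long[] perServerRequests, double weight) {
        long total = 0;
        long max = 0;
        for (long r : perServerRequests) {
            total += r;
            max = Math.max(max, r);
        }
        double even = (double) total / perServerRequests.length;
        double denom = total - even;
        if (denom <= 0) {
            return 0.0; // no requests, or a single server: nothing to balance
        }
        return weight * ((max - even) / denom);
    }
}
```

A balancer would then add this term to its total cost before comparing candidate region moves.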


[jira] [Updated] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-06-21 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-20642:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: This changes client-side nonce generation to use the same 
nonce for re-submissions of client RPC DDL operations.
  Status: Resolved  (was: Patch Available)

Thanks, Stack

> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: HBASE-20642.001.patch, HBASE-20642.002.patch, 
> HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover is failing 
> when adding a column family while the master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-06-21 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519750#comment-16519750
 ] 

Andrew Purtell commented on HBASE-20403:


It should be ok for prefetch to be slower. It is meant as a performance 
optimization, for prepopulating the block cache. At preload time we don't know 
which blocks, if any, are going to be immediately required. 

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: hbase-20403.patch
>
>
> Log from a long-running test has the following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check whether the file is encrypted with FileStatus#isEncrypted() and, if 
> so, do not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20740) StochasticLoadBalancer should consider CoprocessorService request factor when computing cost

2018-06-21 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519749#comment-16519749
 ] 

Ted Yu commented on HBASE-20740:


Test failure doesn't seem to be related.

Can you address checkstyle warning ?

> StochasticLoadBalancer should consider CoprocessorService request factor when 
> computing cost
> 
>
> Key: HBASE-20740
> URL: https://issues.apache.org/jira/browse/HBASE-20740
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
> Attachments: 20740-master-v2.patch, HBASE-20740-master-v1.patch
>
>
> When computing region load cost, ReadRequest, WriteRequest, MemStoreSize and 
> StoreFileSize are considered, but CoprocessorService requests are ignored. In 
> our KYLIN cluster, there are only CoprocessorService requests, and the 
> cluster is sometimes unbalanced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20740) StochasticLoadBalancer should consider CoprocessorService request factor when computing cost

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519745#comment-16519745
 ] 

Hadoop QA commented on HBASE-20740:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
18s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch hbase-protocol-shaded passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch hbase-hadoop-compat passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} The patch hbase-hadoop2-compat passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch hbase-protocol passed checkstyle {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} hbase-client: The patch generated 1 new + 2 unchanged 
- 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} hbase-server: The patch generated 0 new + 416 
unchanged - 14 fixed = 416 total (was 430) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} hbase-rest: The patch generated 0 new + 14 unchanged 
- 1 fixed = 14 total (was 15) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
12s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
12m 12s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
3m  1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests 

[jira] [Commented] (HBASE-19064) Synchronous replication for HBase

2018-06-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519732#comment-16519732
 ] 

Hudson commented on HBASE-19064:


Results for branch HBASE-19064
[build #168 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/168/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/168//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/168//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/168//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Synchronous replication for HBase
> -
>
> Key: HBASE-19064
> URL: https://issues.apache.org/jira/browse/HBASE-19064
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> The guys from Alibaba made a presentation on HBaseCon Asia about the 
> synchronous replication for HBase. We (Xiaomi) think this is a very useful 
> feature for HBase so we want to bring it into the community version.
> This is a big feature so we plan to do it in a feature branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20772) Controlled shutdown fills Master log with the disturbing message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX transition to CLOSED

2018-06-21 Thread stack (JIRA)
stack created HBASE-20772:
-

 Summary: Controlled shutdown fills Master log with the disturbing 
message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, 
region=XXXX transition to CLOSED
 Key: HBASE-20772
 URL: https://issues.apache.org/jira/browse/HBASE-20772
 Project: HBase
  Issue Type: Bug
  Components: logging
Reporter: stack
 Fix For: 2.0.2


I just saw this and was disturbed but this is a controlled shutdown. Change 
message so it doesn't freak out the poor operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20761) FSReaderImpl#readBlockDataInternal can fail to switch to HDFS checksums in some edge cases

2018-06-21 Thread Esteban Gutierrez (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519721#comment-16519721
 ] 

Esteban Gutierrez commented on HBASE-20761:
---

I think what [~mdrob] is working on HBASE-20674 is also relevant here for 
clarity. When SCRs are enabled, {{HFileSystem}} configures 
{{dfs.client.read.shortcircuit.skip.checksum}} to {{true}} and should skip HDFS 
checksums when enabled. That means that for edge cases like this one the only 
way to recover reading the header of a corrupt block is not just to set 
{{hbase.regionserver.checksum.verify}} to false to fall back to HDFS checksums 
but also to set {{dfs.client.read.shortcircuit.skip.checksum}} to false.
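To make that fallback concrete, the two settings above would be combined roughly as below in hbase-site.xml. This is only a sketch of the recovery workaround described in the comment, not a recommended steady-state configuration:

```xml
<!-- Sketch of the fallback described above: disable HBase-level checksum
     verification AND stop skipping HDFS checksums for short-circuit reads,
     so a corrupt replica can be detected and reported by HDFS. -->
<property>
  <name>hbase.regionserver.checksum.verify</name>
  <value>false</value>
</property>
<property>
  <name>dfs.client.read.shortcircuit.skip.checksum</name>
  <value>false</value>
</property>
```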

> FSReaderImpl#readBlockDataInternal can fail to switch to HDFS checksums in 
> some edge cases
> --
>
> Key: HBASE-20761
> URL: https://issues.apache.org/jira/browse/HBASE-20761
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Major
>
> One of our users reported this problem on HBase 1.2 before and after 
> HBASE-11625:
> {code}
> Caused by: java.io.IOException: On-disk size without header provided is 
> 131131, but block header contains 0. Block offset: 2073954793, data starts 
> with: \x00\x00\x00\x00\x00\x00\x00\x0\
> 0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:526)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:92)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1699)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1542)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:445)
> at 
> org.apache.hadoop.hbase.util.CompoundBloomFilter.contains(CompoundBloomFilter.java:100)
> {code}
> The problem occurs when we read a block without HDFS checksums enabled 
> and, due to some data corruption, we end up with an empty headerBuf while trying to 
> read the block before the HDFS checksum failover code. This will cause 
> further attempts to read the block to fail since we will still retry the 
> corrupt replica instead of reporting the corrupt replica and trying a 
> different one. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20769) getSplits() has a out of bounds problem in TableSnapshotInputFormatImpl

2018-06-21 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519720#comment-16519720
 ] 

Ted Yu commented on HBASE-20769:


When I ran the new test without fix:
{code}
testWithMockedMapReduceWithSplitsPerRegion(org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat)
  Time elapsed: 21.222 sec  <<< FAILURE!
java.lang.AssertionError: yya >= yyy?
at 
org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat.verifyWithMockedMapReduce(TestTableSnapshotInputFormat.java:357)
at 
org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat.testWithMockedMapReduceWithSplitsPerRegion(TestTableSnapshotInputFormat.java:269)
{code}
{code}
  Assert.assertTrue(Bytes.toStringBinary(startRow) + " <= "+ 
Bytes.toStringBinary(scan.getStartRow()) + "?", Bytes.compareTo(startRow, 
scan.getStartRow()) <= 0);
  Assert.assertTrue(Bytes.toStringBinary(stopRow) + " >= "+ 
Bytes.toStringBinary(scan.getStopRow()) + "?", Bytes.compareTo(stopRow, 
scan.getStopRow()) >= 0);
{code}
First, the '?' doesn't belong in an assertion message - if the test fails, the 
output should be definitive.
Second, please wrap the long lines.
{code}
+public TableSnapshotInputFormatImpl.InputSplit getDelegate() {
{code}
The above can be package private.

> getSplits() has a out of bounds problem in TableSnapshotInputFormatImpl
> ---
>
> Key: HBASE-20769
> URL: https://issues.apache.org/jira/browse/HBASE-20769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0, 2.0.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20769.master.001.patch
>
>
> When numSplits > 1, getSplits may create a split whose start row is smaller 
> than the user-specified scan's start row, or whose stop row is larger than the 
> user-specified scan's stop row.
> {code}
> byte[][] sp = sa.split(hri.getStartKey(), hri.getEndKey(), numSplits, 
> true);
> for (int i = 0; i < sp.length - 1; i++) {
>   if (PrivateCellUtil.overlappingKeys(scan.getStartRow(), 
> scan.getStopRow(), sp[i],
>   sp[i + 1])) {
> List hosts =
> calculateLocationsForInputSplit(conf, htd, hri, tableDir, 
> localityEnabled);
> Scan boundedScan = new Scan(scan);
> boundedScan.setStartRow(sp[i]);
> boundedScan.setStopRow(sp[i + 1]);
> splits.add(new InputSplit(htd, hri, hosts, boundedScan, 
> restoreDir));
>   }
> }
> {code}
> Since we split keys by the range of regions, when sp[i] < scan.getStartRow() 
> or sp[i + 1] > scan.getStopRow(), the created bounded scan may cover a range 
> beyond the user-defined scan.
> fix should be simple:
> {code}
> boundedScan.setStartRow(
>  Bytes.compareTo(scan.getStartRow(), sp[i]) > 0 ? scan.getStartRow() : sp[i]);
>  boundedScan.setStopRow(
>  Bytes.compareTo(scan.getStopRow(), sp[i + 1]) < 0 ? scan.getStopRow() : sp[i 
> + 1]);
> {code}
> I will also try to add UTs to help discover this problem



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-21 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519691#comment-16519691
 ] 

Sean Busbey commented on HBASE-20691:
-

{code}
486 if (storagePolicy.equals(HConstants.DEFAULT_WAL_STORAGE_POLICY)) {
487   if (LOG.isTraceEnabled()) {
488 LOG.trace("default policy of " + storagePolicy + " requested, 
exiting early.");
489   }
490   return;
491 }
{code}

This check should be against DEFER_TO_HDFS_STORAGE_POLICY instead.
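A minimal sketch of the intended guard. The constant values and method shape are illustrative assumptions, not the actual CommonFSUtils/HConstants code; the point is that the early exit should trigger on the "defer to HDFS" sentinel, not the default WAL policy:

```java
// Illustrative sketch only: names mirror the review comment above, not the
// exact HBase source.
public class StoragePolicyGuard {
  // Hypothetical stand-ins for the HConstants values under discussion.
  static final String DEFAULT_WAL_STORAGE_POLICY = "HOT";
  static final String DEFER_TO_HDFS_STORAGE_POLICY = "NONE";

  /**
   * Returns true when the configured policy means "leave it to HDFS",
   * i.e. the HDFS setStoragePolicy API should not be invoked at all.
   */
  static boolean shouldExitEarly(String storagePolicy) {
    // The bug in the patch was comparing against DEFAULT_WAL_STORAGE_POLICY here.
    return DEFER_TO_HDFS_STORAGE_POLICY.equals(storagePolicy);
  }

  public static void main(String[] args) {
    System.out.println(shouldExitEarly("NONE")); // defer sentinel: skip the HDFS call
    System.out.println(shouldExitEarly("HOT"));  // a real policy: proceed to HDFS
  }
}
```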

> Storage policy should allow deferring to HDFS
> -
>
> Key: HBASE-20691
> URL: https://issues.apache.org/jira/browse/HBASE-20691
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, wal
>Affects Versions: 1.5.0, 2.0.0
>Reporter: Sean Busbey
>Assignee: Yu Li
>Priority: Minor
> Fix For: 2.1.0, 1.5.0
>
> Attachments: HBASE-20691.patch, HBASE-20691.v2.patch, 
> HBASE-20691.v3.patch, HBASE-20691.v4.patch
>
>
> In HBase 1.1 - 1.4 we can defer storage policy decisions to HDFS by using 
> "NONE" as the storage policy in hbase configs.
> As described on this [dev@hbase thread "WAL storage policies and interactions 
> with Hadoop admin 
> tools."|https://lists.apache.org/thread.html/d220726fab4bb4c9e117ecc8f44246402dd97bfc986a57eb2237@%3Cdev.hbase.apache.org%3E]
>  we no longer have that option in 2.0.0 and 1.5.0 (as the branch is now). 
> Additionally, we can't set the policy to HOT in the event that HDFS has 
> changed the policy for a parent directory of our WALs.
> We should put back that ability. Presuming this is done by re-adopting the 
> "NONE" placeholder variable, we need to ensure that value doesn't get passed 
> to HDFS APIs. Since it isn't a valid storage policy attempting to use it will 
> cause a bunch of logging churn (which will be a regression of the problem 
> HBASE-18118 sought to fix).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-21 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519687#comment-16519687
 ] 

Sean Busbey commented on HBASE-20691:
-

{quote}
Ah I see, the test case here simply tries to prove the HDFS API won't be called 
if we try to set the storage policy to default, and vice versa. Please check 
the new patch; I think it will be much clearer. Please note that the 
IOException thrown will be caught and logged as a warning like below (I guess 
you missed the UT result I pasted above, sir, so allow me to repeat):

{code}
2018-06-08 22:59:39,063 WARN  [Time-limited test] util.CommonFSUtils(572): 
Unable to set storagePolicy=HOT for path=non-exist. DEBUG log level might have 
more details.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.invokeSetStoragePolicy(CommonFSUtils.java:563)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:524)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:484)
at 
org.apache.hadoop.hbase.util.TestFSUtils.verifyNoHDFSApiInvocationForDefaultPolicy(TestFSUtils.java:356)
at 
org.apache.hadoop.hbase.util.TestFSUtils.testSetStoragePolicyDefault(TestFSUtils.java:341)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
Caused by: java.io.IOException: The setStoragePolicy method is invoked 
unexpectedly
at 
org.apache.hadoop.hbase.util.TestFSUtils$AlwaysFailSetStoragePolicyFileSystem.setStoragePolicy(TestFSUtils.java:364)
... 30 more
{code}
{quote}

Oh I see. We need the unit test to fail if the call goes through. Could we 
refactor CommonFSUtils to have a package-private method that allows 
IOExceptions out, have the public access method wrap the new method to do the 
catch/logging, and then have the test use the one that throws?

If the unit test can't fail it will have very limited utility; the vast 
majority of folks aren't going to examine log output.
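The refactor being suggested can be sketched as follows. Names and signatures are illustrative assumptions, not the actual CommonFSUtils code: a package-private variant lets the IOException escape so a unit test can fail definitively, while the public entry point keeps today's catch-and-log behavior.

```java
import java.io.IOException;

public class FsUtilsSketch {
  /** Hypothetical stand-in for the filesystem call CommonFSUtils invokes reflectively. */
  interface PolicySetter {
    void set(String policy) throws IOException;
  }

  // Package-private: propagates the IOException so tests can assert on it.
  static void setStoragePolicyOrThrow(PolicySetter fs, String policy) throws IOException {
    fs.set(policy);
  }

  // Public: wraps the throwing variant, preserving the existing warn-and-continue contract.
  public static void setStoragePolicy(PolicySetter fs, String policy) {
    try {
      setStoragePolicyOrThrow(fs, policy);
    } catch (IOException e) {
      System.err.println("Unable to set storagePolicy=" + policy + ": " + e.getMessage());
    }
  }

  public static void main(String[] args) {
    PolicySetter failing = p -> {
      throw new IOException("setStoragePolicy invoked unexpectedly");
    };
    // Public path: failure is logged, never propagated to callers.
    setStoragePolicy(failing, "HOT");
    // Test-facing path: failure propagates, so an assertion can catch it.
    boolean threw = false;
    try {
      setStoragePolicyOrThrow(failing, "HOT");
    } catch (IOException e) {
      threw = true;
    }
    System.out.println(threw); // true
  }
}
```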

> Storage policy should allow deferring to HDFS
> -
>
> Key: HBASE-20691
> URL: https://issues.apache.org/jira/browse/HBASE-20691
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, wal
>Affects Versions: 1.5.0, 2.0.0
>Reporter: Sean Busbey
>Assignee: Yu Li
>Priority: Minor
> Fix For: 2.1.0, 1.5.0
>
> Attachments: HBASE-20691.patch, HBASE-20691.v2.patch, 
> HBASE-20691.v3.patch, HBASE-20691.v4.patch
>
>
> In HBase 1.1 - 1.4 we can defer storage policy decisions to HDFS by using 
> "NONE" as the storage policy in hbase configs.
> As described on this [dev@hbase thread "WAL storage policies and interactions 
> with Hadoop admin 
> tools."|https://lists.apache.org/thread.html/d220726fab4bb4c9e117ecc8f44246402dd97bfc986a57eb2237@%3Cdev.hbase.apache.org%3E]
>  we no longer have that option in 2.0.0 and 1.5.0 (as the branch is now). 
> Additionally, we can't set the policy to HOT in the event that HDFS has 
> changed the policy for a parent directory of our WALs.
> We should put back that ability. Presuming this is done by re-adopting the 
> "NONE" placeholder variable, we need to ensure that value doesn't get passed 
> to HDFS APIs. Since it isn't a valid storage policy attempting to use it will 
> cause a bunch of logging churn (which will be a regression of the problem 
> HBASE-18118 sought to fix).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-21 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519624#comment-16519624
 ] 

stack commented on HBASE-20770:
---

I will. Was thinking of updating the 60 seconds too... to ten minutes or 
something. Thanks.

> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.2
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on large cluster, 
> Master log is in a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging minutiae 
> rather than calling out what's actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/6043fce5761e4479819b15405183f193
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta/69e6bf4650124859b2bc7ddf134be642
> 2018-06-21 01:19:13,011 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/1a46700fbc434574a005c0b55879d5ed
> 2018-06-21 01:19:13,011 DEBUG 
> 

[jira] [Assigned] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-06-21 Thread Zach York (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York reassigned HBASE-20734:
-

Assignee: Zach York

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than hbase rootdir w.r.t. recovered edits, since the recovered edits directory is 
> currently under rootdir.
> Such setup may not result in fast recovery when there is region server 
> failover.
> This issue is to find proper (hopefully backward compatible) way in 
> colocating recovered edits directory with hbase.wal.dir .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-21 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519603#comment-16519603
 ] 

Reid Chan edited comment on HBASE-20770 at 6/21/18 5:24 PM:


What about trace instead of debug: 
 {{LOG.debug("Cleaning under {}", dir)}} (line 489 in CleanerChore.class),
 {{LOG.debug("Removing {}", task.filePath)}} (line 238 in HFileCleaner.class).


was (Author: reidchan):
What about make these trace: 
{{LOG.debug("Cleaning under {}", dir)}} (line 489 in CleanerChore.class),
{{LOG.debug("Removing {}", task.filePath)}} (line 238 in HFileCleaner.class)

> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.2
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on large cluster, 
> Master log is in a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging minutiae 
> rather than calling out what's actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/6043fce5761e4479819b15405183f193
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> 

[jira] [Commented] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-21 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519603#comment-16519603
 ] 

Reid Chan commented on HBASE-20770:
---

What about make these trace: 
{{LOG.debug("Cleaning under {}", dir)}} (line 489 in CleanerChore.class),
{{LOG.debug("Removing {}", task.filePath)}} (line 238 in HFileCleaner.class)

> WAL cleaner logs way too much; gets clogged when lots of work to do
> ---
>
> Key: HBASE-20770
> URL: https://issues.apache.org/jira/browse/HBASE-20770
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.2
>
>
> Been here before (HBASE-7214 and HBASE-19652). Testing on large cluster, 
> Master log is in a continuous spew of logging output filling disks. It is 
> stuck making no progress, but that is hard to tell because it is logging minutiae 
> rather than calling out what's actually wrong.
> Log is full of this:
> {code}
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
> 2018-06-21 01:19:12,761 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
> 2018-06-21 01:19:12,823 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
>  exit...
> 2018-06-21 01:19:12,823 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
> 2018-06-21 01:19:12,824 WARN 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
> for deleting 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
>  exit...
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
> 2018-06-21 01:19:12,824 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
> 2018-06-21 01:19:12,827 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
> 2018-06-21 01:19:12,844 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
> 2018-06-21 01:19:12,848 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
> 2018-06-21 01:19:12,849 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/6043fce5761e4479819b15405183f193
> 2018-06-21 01:19:12,927 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta/69e6bf4650124859b2bc7ddf134be642
> 2018-06-21 01:19:13,011 DEBUG 
> org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
> 

[jira] [Commented] (HBASE-20710) extra cloneFamily() in Mutation.add(Cell)

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519595#comment-16519595
 ] 

Hadoop QA commented on HBASE-20710:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  3s{color} 
| {color:red} HBASE-20710 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.7.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20710 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928647/HBASE-20710-master-v002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13336/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> extra cloneFamily() in Mutation.add(Cell)
> -
>
> Key: HBASE-20710
> URL: https://issues.apache.org/jira/browse/HBASE-20710
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.1
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.2
>
> Attachments: HBASE-20710-master-v001.patch, 
> HBASE-20710-master-v002.patch, HBASE-20710-master-v002.patch, 
> alloc-put-fix.svg, alloc-put-orig.svg
>
>
> The CPU profiling shows that during PE randomWrite testing, about 1 percent 
> of the time is spent in cloneFamily(). Reviewing the code found that when a cell is a 
> DBB-backed ByteBuffKeyValueCell (which is the default with Netty RPC), 
> cell.getFamilyArray() will call cloneFamily(), and there is again a 
> cloneFamily() in the following line of the code. Since this is on the critical 
> write path, this needs to be optimized.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java#L791
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java#L795
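The fix described above amounts to cloning the family bytes once and reusing the copy. The following is a minimal, self-contained sketch of that pattern; the buffer-backed cell and its clone counter are stand-ins for ByteBuffKeyValueCell, not the real HBase API:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class FamilyCloneDemo {
    static int cloneCount = 0;

    // Simulates getFamilyArray() on a ByteBuffer-backed cell, which must
    // copy the family bytes out of the direct buffer on every call.
    static byte[] cloneFamily(ByteBuffer buf) {
        cloneCount++;
        byte[] copy = new byte[buf.remaining()];
        buf.duplicate().get(copy);
        return copy;
    }

    // Before the fix: two independent clones of the same family bytes.
    static byte[] addCellNaive(ByteBuffer fam) {
        byte[] check = cloneFamily(fam);   // clone made for validation
        byte[] store = cloneFamily(fam);   // second clone kept in the mutation
        return store;
    }

    // After the fix: clone once and reuse the array for both purposes.
    static byte[] addCellFixed(ByteBuffer fam) {
        byte[] family = cloneFamily(fam);  // single clone, reused below
        return family;
    }
}
```

On a hot write path, dropping the redundant copy halves the per-cell allocation for the family bytes.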





[jira] [Commented] (HBASE-20732) Shutdown scan pool when master is stopped.

2018-06-21 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519594#comment-16519594
 ] 

Reid Chan commented on HBASE-20732:
---

{quote}
alwaysDelete.isFileDeletable(); Wanted 1 time: -> at 
org.apache.hadoop.hbase.master.cleaner.TestCleanerChore.testNoExceptionFromDirectoryWithRacyChildren(TestCleanerChore.java:348)
 But was 2 times. Undesired invocation: -> at 
org.apache.hbase.thirdparty.com.google.common.collect.Iterators$4.computeNext(Iterators.java:624)
 at 
org.apache.hadoop.hbase.master.cleaner.TestCleanerChore.testNoExceptionFromDirectoryWithRacyChildren(TestCleanerChore.java:348)
{quote}

Trying to figure out why.

> Shutdown scan pool when master is stopped.
> --
>
> Key: HBASE-20732
> URL: https://issues.apache.org/jira/browse/HBASE-20732
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-20732.master.001.patch, 
> HBASE-20732.master.002.patch, HBASE-20732.master.003.patch, 
> HBASE-20732.master.004.patch
>
>
> If master is stopped, {{DirScanPool}} is kept open. This was found by 
> [~chia7712] while reviewing HBASE-20352.





[jira] [Updated] (HBASE-20710) extra cloneFamily() in Mutation.add(Cell)

2018-06-21 Thread huaxiang sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-20710:
-
Attachment: HBASE-20710-master-v002.patch

> extra cloneFamily() in Mutation.add(Cell)
> -
>
> Key: HBASE-20710
> URL: https://issues.apache.org/jira/browse/HBASE-20710
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.1
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.2
>
> Attachments: HBASE-20710-master-v001.patch, 
> HBASE-20710-master-v002.patch, HBASE-20710-master-v002.patch, 
> alloc-put-fix.svg, alloc-put-orig.svg
>
>
> The CPU profiling shows that during PE randomWrite testing, about 1 percent 
> of the time is spent in cloneFamily(). Reviewing the code found that when a cell is a 
> DBB-backed ByteBuffKeyValueCell (which is the default with Netty RPC), 
> cell.getFamilyArray() will call cloneFamily(), and there is again a 
> cloneFamily() in the following line of the code. Since this is on the critical 
> write path, this needs to be optimized.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java#L791
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java#L795





[jira] [Commented] (HBASE-20710) extra cloneFamily() in Mutation.add(Cell)

2018-06-21 Thread huaxiang sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519592#comment-16519592
 ] 

huaxiang sun commented on HBASE-20710:
--

Reattaching the patch as no test run was triggered against it.

> extra cloneFamily() in Mutation.add(Cell)
> -
>
> Key: HBASE-20710
> URL: https://issues.apache.org/jira/browse/HBASE-20710
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.1
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.2
>
> Attachments: HBASE-20710-master-v001.patch, 
> HBASE-20710-master-v002.patch, HBASE-20710-master-v002.patch, 
> alloc-put-fix.svg, alloc-put-orig.svg
>
>
> The CPU profiling shows that during PE randomWrite testing, about 1 percent 
> of the time is spent in cloneFamily(). Reviewing the code found that when a cell is a 
> DBB-backed ByteBuffKeyValueCell (which is the default with Netty RPC), 
> cell.getFamilyArray() will call cloneFamily(), and there is again a 
> cloneFamily() in the following line of the code. Since this is on the critical 
> write path, this needs to be optimized.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java#L791
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java#L795





[jira] [Commented] (HBASE-18569) Add prefetch support for async region locator

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519587#comment-16519587
 ] 

Hadoop QA commented on HBASE-18569:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
57s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 0s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 50s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 12s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestAsyncTableGetMultiThreaded |
|   | hadoop.hbase.client.TestAsyncClusterAdminApi |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-18569 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928635/HBASE-18569.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 6a5ac006de93 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 0d784efc37 |
| maven | 

[jira] [Assigned] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-06-21 Thread Peter Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi reassigned HBASE-20649:
-

Assignee: (was: Peter Somogyi)

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
> -
>
> Key: HBASE-20649
> URL: https://issues.apache.org/jira/browse/HBASE-20649
> Project: HBase
>  Issue Type: New Feature
>Reporter: Peter Somogyi
>Priority: Minor
> Attachments: HBASE-20649.master.001.patch, 
> HBASE-20649.master.002.patch, HBASE-20649.master.003.patch
>
>
> HBASE-20592 adds a tool to check that column families on the cluster do not have 
> PREFIX_TREE encoding.
> Since it is possible that the DataBlockEncoding was already changed but the HFiles 
> have not been rewritten yet, we need a tool that can verify the content of the 
> hfiles in the cluster.
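The requested check reduces to walking store files and flagging any whose data-block encoding is PREFIX_TREE. A minimal sketch, where the file-to-encoding map is a stand-in for reading real HFile trailers:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PrefixTreeCheckDemo {
    // Returns the files whose recorded data-block encoding is PREFIX_TREE.
    static List<String> findPrefixTreeFiles(Map<String, String> fileToEncoding) {
        List<String> offenders = new ArrayList<>();
        for (Map.Entry<String, String> e : fileToEncoding.entrySet()) {
            if ("PREFIX_TREE".equals(e.getValue())) {
                offenders.add(e.getKey());
            }
        }
        return offenders;
    }
}
```

A real tool would read each HFile's FileInfo/trailer to obtain the encoding instead of taking it as input.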





[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-06-21 Thread Peter Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519570#comment-16519570
 ] 

Peter Somogyi commented on HBASE-20649:
---

Unfortunately I don't have time to continue working on this one. I'm 
unassigning myself.

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
> -
>
> Key: HBASE-20649
> URL: https://issues.apache.org/jira/browse/HBASE-20649
> Project: HBase
>  Issue Type: New Feature
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Attachments: HBASE-20649.master.001.patch, 
> HBASE-20649.master.002.patch, HBASE-20649.master.003.patch
>
>
> HBASE-20592 adds a tool to check that column families on the cluster do not have 
> PREFIX_TREE encoding.
> Since it is possible that the DataBlockEncoding was already changed but the HFiles 
> have not been rewritten yet, we need a tool that can verify the content of the 
> hfiles in the cluster.





[jira] [Updated] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-06-21 Thread Peter Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-20649:
--
Status: In Progress  (was: Patch Available)

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
> -
>
> Key: HBASE-20649
> URL: https://issues.apache.org/jira/browse/HBASE-20649
> Project: HBase
>  Issue Type: New Feature
>Reporter: Peter Somogyi
>Priority: Minor
> Attachments: HBASE-20649.master.001.patch, 
> HBASE-20649.master.002.patch, HBASE-20649.master.003.patch
>
>
> HBASE-20592 adds a tool to check that column families on the cluster do not have 
> PREFIX_TREE encoding.
> Since it is possible that the DataBlockEncoding was already changed but the HFiles 
> have not been rewritten yet, we need a tool that can verify the content of the 
> hfiles in the cluster.





[jira] [Created] (HBASE-20771) PUT operation fail with "No server address listed in hbase:meta for region xxxxx"

2018-06-21 Thread Pankaj Kumar (JIRA)
Pankaj Kumar created HBASE-20771:


 Summary: PUT operation fail with "No server address listed in 
hbase:meta for region x"
 Key: HBASE-20771
 URL: https://issues.apache.org/jira/browse/HBASE-20771
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.5.0
Reporter: Pankaj Kumar
Assignee: Pankaj Kumar


1) Create a table with 1 region

2) AM send RPC to RS to open the region,
{code}
 // invoke assignment (async)
 ArrayList<HRegionInfo> userRegionSet = new ArrayList<HRegionInfo>(regions);
 for (Map.Entry<ServerName, List<HRegionInfo>> plan: bulkPlan.entrySet()) {
 if (!assign(plan.getKey(), plan.getValue())) {
 for (HRegionInfo region: plan.getValue()) {
 if (!regionStates.isRegionOnline(region)) {
 invokeAssign(region);
 if (!region.getTable().isSystemTable()) {
 userRegionSet.add(region);
 }
 }
 }
 }
 }
{code}
In the above code, if assignment fails (due to some problem) then the AM will submit the 
assign request to the thread pool and wait for some duration (here it will wait 
for 20 sec, since the region count is 1).

3) After 20 sec, CreateTableProcedure will set the table state to ENABLED in ZK and 
finish the procedure. 
{code}
 // Mark the table as Enabling
 assignmentManager.getTableStateManager().setTableState(tableName,
 ZooKeeperProtos.Table.State.ENABLING);

// Trigger immediate assignment of the regions in round-robin fashion
 ModifyRegionUtils.assignRegions(assignmentManager, regions);

// Enable table
 assignmentManager.getTableStateManager()
 .setTableState(tableName, ZooKeeperProtos.Table.State.ENABLED);
{code}

4) At the client side, CreateTableFuture.waitProcedureResult(...) is waiting 
for the procedure to finish,
{code}
 // If the procedure is no longer running, we should have a result
 if (response != null && response.getState() != 
GetProcedureResultResponse.State.RUNNING) {
 procResultFound = response.getState() != 
GetProcedureResultResponse.State.NOT_FOUND;
 return convertResult(response);
 }
{code}

Here we wait for the operation result only when the procedure result is not found, but in 
this scenario that is wrong because the region assignment didn't complete:
{code}
 // if we don't have a proc result, try the compatibility wait
 if (!procResultFound) {
 result = waitOperationResult(deadlineTs);
 }
{code}

Since HBaseAdmin didn't wait for the operation result (a successful region 
assignment), the client PUT operation will fail until the region is 
successfully opened, because the "info:server" entry won't be there in hbase:meta.
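The compatibility wait the report argues for can be sketched as a poll-until-deadline loop; this is a minimal illustration, with the "region online" check abstracted as a Callable rather than a real hbase:meta lookup:

```java
import java.util.concurrent.Callable;

public class OperationWaitDemo {
    // Polls until the region's server location is visible (regionOnline
    // returns true) or the deadline passes; pauseMs is the retry interval.
    static boolean waitOperationResult(Callable<Boolean> regionOnline,
                                       long deadlineTs, long pauseMs) throws Exception {
        while (System.currentTimeMillis() < deadlineTs) {
            if (regionOnline.call()) {
                return true;  // "info:server" is now present in hbase:meta
            }
            Thread.sleep(pauseMs);
        }
        return false;  // caller should surface a timeout rather than a raw failure
    }
}
```

In the real client, the check would scan hbase:meta for the region's server column instead of invoking a supplied callback.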





[jira] [Updated] (HBASE-20740) StochasticLoadBalancer should consider CoprocessorService request factor when computing cost

2018-06-21 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20740:
---
Attachment: 20740-master-v2.patch

> StochasticLoadBalancer should consider CoprocessorService request factor when 
> computing cost
> 
>
> Key: HBASE-20740
> URL: https://issues.apache.org/jira/browse/HBASE-20740
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
> Attachments: 20740-master-v2.patch, HBASE-20740-master-v1.patch
>
>
> When computing region load cost, ReadRequest, WriteRequest, MemStoreSize and 
> StoreFileSize are considered, but CoprocessorService requests are ignored. In 
> our KYLIN cluster, there are only CoprocessorService requests, and the 
> cluster is sometimes unbalanced.
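The proposal amounts to adding one more weighted term to the region-load cost. A minimal sketch of such a cost function; the weights and parameter names here are illustrative only, not the StochasticLoadBalancer's actual fields:

```java
public class RegionCostDemo {
    // Weighted sum over per-region load factors; coprocessorRequests is the
    // newly proposed term, previously ignored by the balancer.
    static double cost(long readRequests, long writeRequests,
                       long coprocessorRequests, long memstoreMB, long storefileMB) {
        double readWeight = 5.0, writeWeight = 5.0, coprocWeight = 5.0;
        double memWeight = 5.0, fileWeight = 5.0;
        return readRequests * readWeight
            + writeRequests * writeWeight
            + coprocessorRequests * coprocWeight   // the added factor
            + memstoreMB * memWeight
            + storefileMB * fileWeight;
    }
}
```

With a zero weight on the coprocessor term, a cluster serving only CoprocessorService traffic looks uniformly cheap, which is exactly the imbalance described above.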





[jira] [Updated] (HBASE-20748) HBaseContext bulkLoad: being able to use custom versions

2018-06-21 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20748:
---
Status: Open  (was: Patch Available)

> HBaseContext bulkLoad: being able to use custom versions
> 
>
> Key: HBASE-20748
> URL: https://issues.apache.org/jira/browse/HBASE-20748
> Project: HBase
>  Issue Type: Improvement
>  Components: spark
>Reporter: Charles PORROT
>Assignee: Charles PORROT
>Priority: Major
>  Labels: HBaseContext, bulkload, spark, versions
> Attachments: bulkLoadCustomVersions.scala
>
>
> The _bulkLoad_ methods of _class org.apache.hadoop.hbase.spark.HBaseContext_ 
> use the system's current time as the version of the cells to bulk-load. This 
> makes this method, and its twin _bulkLoadThinRows_, useless if you need to 
> use your own versioning system:
> {code:java}
> //Here is where we finally iterate through the data in this partition of the 
> //RDD that has been sorted and partitioned
> val wl = writeValueToHFile(
>   keyFamilyQualifier.rowKey, 
>   keyFamilyQualifier.family, 
>   keyFamilyQualifier.qualifier, 
>   cellValue, 
>   nowTimeStamp, 
>   fs, 
>   conn, 
>   localTableName, 
>   conf, 
>   familyHFileWriteOptionsMapInternal, 
>   hfileCompression, 
>   writerMap, 
>   stagingDir
> ){code}
>  
> Thus, I propose a third _bulkLoad_ method, based on the original method. 
> Instead of using an _Iterator(KeyFamilyQualifier, Array[Byte])_ as the basis 
> for the writes, this new method would use an _Iterator(KeyFamilyQualifier, 
> Array[Byte], Long)_, with the _Long_ being the version.
>  
> Definition of _bulkLoad_:
> {code:java}
> def bulkLoad[T](
> rdd:RDD[T], 
> tableName: TableName, 
> flatMap: (T) => Iterator[(KeyFamilyQualifier, Array[Byte])], 
> stagingDir:String, 
> familyHFileWriteOptionsMap: util.Map[Array[Byte], FamilyHFileWriteOptions] = 
> new util.HashMap[Array[Byte], FamilyHFileWriteOptions],
> compactionExclude: Boolean = false, 
> maxSize:Long = HConstants.DEFAULT_MAX_FILE_SIZE):{code}
> Definition of a _bulkLoadWithCustomVersions_ method:
> {code:java}
> def bulkLoadCustomVersions[T](rdd:RDD[T],
>   tableName: TableName,
>   flatMap: (T) => Iterator[(KeyFamilyQualifier, Array[Byte], 
> Long)],
>   stagingDir:String,
>   familyHFileWriteOptionsMap:
>   util.Map[Array[Byte], FamilyHFileWriteOptions] =
>   new util.HashMap[Array[Byte], FamilyHFileWriteOptions],
>   compactionExclude: Boolean = false,
>   maxSize:Long = HConstants.DEFAULT_MAX_FILE_SIZE):{code}
> In case of an illogical version (for instance, a negative version), the method 
> would fall back to the current timestamp.
> {code:java}
> val wl = writeValueToHFile(keyFamilyQualifier.rowKey,
>   keyFamilyQualifier.family,
>   keyFamilyQualifier.qualifier,
>   cellValue,
>   if (version > 0) version else nowTimeStamp,
>   fs,
>   conn,
>   localTableName,
>   conf,
>   familyHFileWriteOptionsMapInternal,
>   hfileCompression,
>   writerMap,
>   stagingDir){code}
> See the attached file for the file with the full proposed method.
>  
> +Edit:+
> The same could be done with bulkLoadThinRows: instead of a:
> {code:java}
> Iterator[Pair[ByteArrayWrapper, FamiliesQualifiersValues]]{code}
> We expect an:
> {code:java}
>  Iterator[Triple[ByteArrayWrapper, FamiliesQualifiersValues, Long]]{code}
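The version-fallback rule in the proposal is a one-liner; sketched here in Java with illustrative names (the proposed HBaseContext method itself is Scala):

```java
public class VersionPickDemo {
    // Use the caller-supplied version when it is positive; otherwise fall
    // back to the current timestamp, as the proposal describes.
    static long pickTimestamp(long version, long nowTimeStamp) {
        return version > 0 ? version : nowTimeStamp;
    }
}
```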





[jira] [Updated] (HBASE-20532) Use try-with-resources in BackupSystemTable

2018-06-21 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20532:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the patch, Andy.

Modified the subject line of your patch to match the title of the JIRA.

> Use try-with-resources in BackupSystemTable
> ---
>
> Key: HBASE-20532
> URL: https://issues.apache.org/jira/browse/HBASE-20532
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andy Lin
>Assignee: Andy Lin
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-20532.v0.patch, HBASE-20532.v1.patch, 
> HBASE-20532.v2.patch
>
>
> Use try-with-resources in BackupSystemTable for describeBackupSet and 
> listBackupSets.
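The pattern the patch applies is standard try-with-resources: the table handle is closed automatically even when the body throws. A self-contained sketch, where SimpleTable is a stand-in for an HBase Table:

```java
public class TryWithResourcesDemo {
    static boolean closed = false;

    // Stand-in for an HBase Table handle obtained from a connection.
    static class SimpleTable implements AutoCloseable {
        String get(String row) { return "value-for-" + row; }
        @Override public void close() { closed = true; }
    }

    // Before: the table was obtained and closed manually in a finally block;
    // with try-with-resources, close() is guaranteed on every exit path.
    static String describeBackupSet(String name) {
        try (SimpleTable table = new SimpleTable()) {
            return table.get(name);
        } // close() runs here, whether or not an exception was thrown
    }
}
```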





[jira] [Updated] (HBASE-20532) Use try-with-resources in BackupSystemTable

2018-06-21 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20532:
---
Summary: Use try-with-resources in BackupSystemTable  (was: Use try 
-with-resources in BackupSystemTable)

> Use try-with-resources in BackupSystemTable
> ---
>
> Key: HBASE-20532
> URL: https://issues.apache.org/jira/browse/HBASE-20532
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andy Lin
>Assignee: Andy Lin
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-20532.v0.patch, HBASE-20532.v1.patch, 
> HBASE-20532.v2.patch
>
>
> Use try-with-resources in BackupSystemTable for describeBackupSet and 
> listBackupSets.





[jira] [Comment Edited] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-06-21 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519544#comment-16519544
 ] 

Ted Yu edited comment on HBASE-20734 at 6/21/18 4:07 PM:
-

In WALSplitter#finishSplitLogFile :
{code}
Path rootdir = FSUtils.getWALRootDir(conf);
{code}
Probably renaming rootdir to walRootDir would make the code clearer (since 
rootdir means a different directory w.r.t. {{FSUtils.getRootDir}})


was (Author: yuzhih...@gmail.com):
In WALSplitter#finishSplitLogFile :

Path rootdir = FSUtils.getWALRootDir(conf);

Probably renaming rootdir as walRootDir would make the code clearer (since 
rootdir means different directory w.r.t. {{FSUtils.getWALRootDir}})

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Priority: Major
> Fix For: 3.0.0
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than hbase rootdir w.r.t. recovered edits since recovered edits directory is 
> currently under rootdir.
> Such setup may not result in fast recovery when there is region server 
> failover.
> This issue is to find proper (hopefully backward compatible) way in 
> colocating recovered edits directory with hbase.wal.dir .





[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-06-21 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519544#comment-16519544
 ] 

Ted Yu commented on HBASE-20734:


In WALSplitter#finishSplitLogFile :

Path rootdir = FSUtils.getWALRootDir(conf);

Probably renaming rootdir as walRootDir would make the code clearer (since 
rootdir means different directory w.r.t. {{FSUtils.getWALRootDir}})

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Priority: Major
> Fix For: 3.0.0
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than hbase rootdir w.r.t. recovered edits since recovered edits directory is 
> currently under rootdir.
> Such setup may not result in fast recovery when there is region server 
> failover.
> This issue is to find proper (hopefully backward compatible) way in 
> colocating recovered edits directory with hbase.wal.dir .





[jira] [Updated] (HBASE-18569) Add prefetch support for async region locator

2018-06-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18569:
--
Attachment: HBASE-18569.patch

> Add prefetch support for async region locator
> -
>
> Key: HBASE-18569
> URL: https://issues.apache.org/jira/browse/HBASE-18569
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-18569.patch
>
>
> It will be useful for large tables.





[jira] [Updated] (HBASE-18569) Add prefetch support for async region locator

2018-06-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18569:
--
Component/s: Client
 asyncclient

> Add prefetch support for async region locator
> -
>
> Key: HBASE-18569
> URL: https://issues.apache.org/jira/browse/HBASE-18569
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-18569.patch
>
>
> It will be useful for large tables.





[jira] [Updated] (HBASE-18569) Add prefetch support for async region locator

2018-06-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18569:
--
 Assignee: Duo Zhang
Fix Version/s: 2.1.0
   3.0.0
   Status: Patch Available  (was: Open)

> Add prefetch support for async region locator
> -
>
> Key: HBASE-18569
> URL: https://issues.apache.org/jira/browse/HBASE-18569
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-18569.patch
>
>
> It will be useful for large tables.





[jira] [Commented] (HBASE-20767) Always close hbaseAdmin along with connection in HBTU

2018-06-21 Thread Allan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519414#comment-16519414
 ] 

Allan Yang commented on HBASE-20767:


+1 if passes all UT

> Always close hbaseAdmin along with connection in HBTU
> -
>
> Key: HBASE-20767
> URL: https://issues.apache.org/jira/browse/HBASE-20767
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: HBASE-20767.patch
>
>
> The patch for HBASE-20727 breaks TestMaster as it calls the 
> HBTU.restartHBaseCluster, where we close the connection but do not close the 
> hbaseAdmin. And then the next time when we create a table through the 
> hbaseAdmin, we will get a DoNotRetryIOException since the connection it uses 
> has already been closed.
> {noformat}
> org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while 
> trying to get master
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1156)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1213)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1202)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1408)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1384)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1442)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1342)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1318)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1288)
>   at 
> org.apache.hadoop.hbase.master.TestMaster.testMasterOpsWhileSplitting(TestMaster.java:101)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
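The fix pattern the issue describes — tearing down the cached admin whenever its connection is torn down — can be sketched without HBase dependencies. All class names below are stand-ins for illustration, not the real HBTU/Connection/Admin API:

```java
import java.io.Closeable;

class TestingUtilSketch {
    // Stand-ins for HBase's Connection and Admin; not the real API.
    static class Connection implements Closeable {
        boolean closed;
        @Override public void close() { closed = true; }
    }
    static class Admin implements Closeable {
        final Connection conn;
        Admin(Connection conn) { this.conn = conn; }
        @Override public void close() { }
        boolean usable() { return !conn.closed; }
    }

    private Connection connection;
    private Admin admin;

    // Lazily create an admin bound to the current connection.
    Admin getAdmin() {
        if (connection == null) { connection = new Connection(); }
        if (admin == null) { admin = new Admin(connection); }
        return admin;
    }

    // The fix: close and null the admin together with the connection, so a
    // later getAdmin() rebuilds against a fresh connection instead of handing
    // back an admin whose underlying connection is already closed.
    void closeConnection() {
        if (admin != null) { admin.close(); admin = null; }
        if (connection != null) { connection.close(); connection = null; }
    }
}
```

Without the `admin = null` step in `closeConnection()`, the second `getAdmin()` call would return the stale handle and reproduce the failure above.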



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20727) Persist FlushedSequenceId to speed up WAL split after cluster restart

2018-06-21 Thread Allan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519413#comment-16519413
 ] 

Allan Yang commented on HBASE-20727:


{quote}
And I think the problem is HBTU itself. It will set the connection to null when 
restarting hbase cluster, but the hbase admin is still there. Let me open a new 
issue to address it.
{quote}
Yes, it is HBTU's problem, I see you have opened HBASE-20767. Let's discuss 
there

> Persist FlushedSequenceId to speed up WAL split after cluster restart
> -
>
> Key: HBASE-20727
> URL: https://issues.apache.org/jira/browse/HBASE-20727
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20727.002.patch, HBASE-20727.003.patch, 
> HBASE-20727.004.patch, HBASE-20727.005.patch, HBASE-20727.patch
>
>
> We use flushedSequenceIdByRegion and storeFlushedSequenceIdsByRegion in 
> ServerManager to record the latest flushed seqids of regions and stores. So 
> during log split, we can use seqids stored in those maps to filter out the 
> edits which do not need to be replayed. But, those maps are not persisted. 
> After a cluster restart or master restart, the info about flushed seqids is all 
> lost. Here I offer a way to persist that info to HDFS, so that even if the 
> master restarts, we can still use it to filter WAL edits and speed up replay.
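The filtering that the persisted map enables can be sketched as follows. This is plain Java with hypothetical names; the real bookkeeping lives in ServerManager and the WAL split path:

```java
import java.util.Map;

class FlushedSeqIdFilter {
    /**
     * Decide whether a WAL edit must be replayed during log split: an edit is
     * skipped when its sequence id is at or below the last flushed sequence id
     * recorded for its region, because its data is already in an HFile.
     */
    static boolean shouldReplay(Map<String, Long> flushedSeqIdByRegion,
                                String regionName, long editSeqId) {
        // No recorded flush for this region: be safe and replay the edit.
        long flushed = flushedSeqIdByRegion.getOrDefault(regionName, Long.MIN_VALUE);
        return editSeqId > flushed;
    }
}
```

Persisting the map means the `getOrDefault` fallback is hit far less often after a restart, which is where the replay speedup comes from.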



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20770) WAL cleaner logs way too much; gets clogged when lots of work to do

2018-06-21 Thread stack (JIRA)
stack created HBASE-20770:
-

 Summary: WAL cleaner logs way too much; gets clogged when lots of 
work to do
 Key: HBASE-20770
 URL: https://issues.apache.org/jira/browse/HBASE-20770
 Project: HBase
  Issue Type: Bug
  Components: logging
Reporter: stack
 Fix For: 2.0.2


Been here before (HBASE-7214 and HBASE-19652). Testing on a large cluster, the 
Master log is a continuous spew of logging output filling disks. It is stuck 
making no progress, but that is hard to tell because it is logging minutiae 
rather than calling out what is actually wrong.

Log is full of this:
{code}
2018-06-21 01:19:12,761 DEBUG 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/bd49572de3914b66985fff5ea2ca7403
2018-06-21 01:19:12,761 DEBUG 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/meta/fad01294c6ca421db209e89b5b97d364
2018-06-21 01:19:12,823 WARN 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
for deleting 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/d3f759d0495257fc1d33ae780b634455/tiny/b72bac4036444dcf9265c7b5664fd403,
 exit...
2018-06-21 01:19:12,823 DEBUG 
org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9
2018-06-21 01:19:12,824 WARN 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
for deleting 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/2425053ad86823081b368e00bc471e56/tiny/6ea3cb1174434aecbc448e322e2a062c,
 exit...
2018-06-21 01:19:12,824 DEBUG 
org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/big
2018-06-21 01:19:12,824 DEBUG 
org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/e98cdb817bb3af5fa26e2b885a0b2ec6/tiny
2018-06-21 01:19:12,827 DEBUG 
org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta
2018-06-21 01:19:12,844 DEBUG 
org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0
2018-06-21 01:19:12,844 DEBUG 
org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c
2018-06-21 01:19:12,844 DEBUG 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/90f21dec28d140cda48d37eeb44d37e8
2018-06-21 01:19:12,844 DEBUG 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/665bfa38c86a28d641ce08f8fea0a7f9/meta/8a4cf6410d5a4201963bc1415945f877
2018-06-21 01:19:12,848 DEBUG 
org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta
2018-06-21 01:19:12,849 DEBUG 
org.apache.hadoop.hbase.master.cleaner.CleanerChore: Cleaning under 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta
2018-06-21 01:19:12,927 DEBUG 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/6043fce5761e4479819b15405183f193
2018-06-21 01:19:12,927 DEBUG 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta/69e6bf4650124859b2bc7ddf134be642
2018-06-21 01:19:13,011 DEBUG 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/17f85c98389104b19358f6751da577d0/meta/1a46700fbc434574a005c0b55879d5ed
2018-06-21 01:19:13,011 DEBUG 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Removing 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/c98e276423813aaa74d848983c47d93c/meta/44beca2adfb544999dd82e9cf8151be1
2018-06-21 01:19:13,048 WARN 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner: Wait more than 6 ms 
for deleting 
hdfs://ns1/hbase/archive/data/default/IntegrationTestBigLinkedList/1f60d5dcf666b1e47de3445c369b6411/tiny/29b934a9c8f74ec1a5806f3fabeee094,
 exit...
{code}
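One common remedy for this kind of spew — a sketch of the general technique only, not the patch that eventually landed — is to suppress the per-file DEBUG lines and emit a periodic aggregate summary instead:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

class ThrottledCleanerLog {
    private final long periodNanos;
    private final AtomicLong deleted = new AtomicLong();
    private volatile long lastEmit = System.nanoTime();

    ThrottledCleanerLog(long period, TimeUnit unit) {
        this.periodNanos = unit.toNanos(period);
    }

    /**
     * Count a deletion; return a summary line at most once per period,
     * and null otherwise (the caller suppresses the per-file DEBUG line).
     */
    String onDelete(String path) {
        long n = deleted.incrementAndGet();
        long now = System.nanoTime();
        if (now - lastEmit >= periodNanos) {
            lastEmit = now;
            return "cleaner progress: " + n + " files removed so far";
        }
        return null;
    }
}
```

With a period of, say, one minute, the Master would log one progress line instead of hundreds of thousands of per-file paths, and a stall becomes visible as a counter that stops moving.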


[jira] [Commented] (HBASE-20642) IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException

2018-06-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519350#comment-16519350
 ] 

Hudson commented on HBASE-20642:


Results for branch branch-2.0
[build #453 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/453/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/453//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/453//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/453//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> IntegrationTestDDLMasterFailover throws 'InvalidFamilyOperationException 
> -
>
> Key: HBASE-20642
> URL: https://issues.apache.org/jira/browse/HBASE-20642
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: HBASE-20642.001.patch, HBASE-20642.002.patch, 
> HBASE-20642.patch
>
>
> [~romil.choksi] reported that IntegrationTestDDLMasterFailover is failing 
> while adding column family during the time master is restarting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20532) Use try -with-resources in BackupSystemTable

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519342#comment-16519342
 ] 

Hadoop QA commented on HBASE-20532:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
32s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
38s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 23s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
42s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20532 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928621/HBASE-20532.v2.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 795bda3574b3 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 72784c2d83 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1/testReport/ |
| Max. process+thread count | 4140 (vs. ulimit of 1) |
| modules | C: hbase-backup U: hbase-backup |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1/console |
| Powered by | Apache 

[jira] [Commented] (HBASE-20767) Always close hbaseAdmin along with connection in HBTU

2018-06-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519338#comment-16519338
 ] 

Duo Zhang commented on HBASE-20767:
---

Pushed to master to see if it works.

> Always close hbaseAdmin along with connection in HBTU
> -
>
> Key: HBASE-20767
> URL: https://issues.apache.org/jira/browse/HBASE-20767
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: HBASE-20767.patch
>
>
> The patch for HBASE-20727 breaks TestMaster, as it calls 
> HBTU.restartHBaseCluster, where we close the connection but do not close the 
> hbaseAdmin. Then, the next time we create a table through the 
> hbaseAdmin, we get a DoNotRetryIOException since the connection it uses 
> has already been closed.
> {noformat}
> org.apache.hadoop.hbase.DoNotRetryIOException: Connection was closed while 
> trying to get master
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionImplementation.java:1156)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getKeepAliveMasterService(ConnectionImplementation.java:1213)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getMaster(ConnectionImplementation.java:1202)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:57)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3084)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3076)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:647)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:620)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1408)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1384)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1442)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1342)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1318)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1288)
>   at 
> org.apache.hadoop.hbase.master.TestMaster.testMasterOpsWhileSplitting(TestMaster.java:101)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20732) Shutdown scan pool when master is stopped.

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519286#comment-16519286
 ] 

Hadoop QA commented on HBASE-20732:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 4s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 5s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m  1s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 22s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.cleaner.TestCleanerChore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20732 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928616/HBASE-20732.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux a8143817cf16 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 72784c2d83 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13332/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13332/testReport/ |
| Max. process+thread count | 702 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Updated] (HBASE-20532) Use try -with-resources in BackupSystemTable

2018-06-21 Thread Andy Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Lin updated HBASE-20532:
-
Attachment: HBASE-20532.v2.patch

> Use try -with-resources in BackupSystemTable
> 
>
> Key: HBASE-20532
> URL: https://issues.apache.org/jira/browse/HBASE-20532
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andy Lin
>Assignee: Andy Lin
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-20532.v0.patch, HBASE-20532.v1.patch, 
> HBASE-20532.v2.patch
>
>
> Use try-with-resources in BackupSystemTable for describeBackupSet and 
> listBackupSets.
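As a rough illustration of the refactor (the types below are dummies, not the real BackupSystemTable/Table API), try-with-resources guarantees the table handle is closed even when the scan throws:

```java
import java.util.ArrayList;
import java.util.List;

class TryWithResourcesDemo {
    // Records which resources were closed, so the behavior is observable.
    static final List<String> closed = new ArrayList<>();

    // Dummy stand-in for an HBase Table handle.
    static class Table implements AutoCloseable {
        final String name;
        Table(String name) { this.name = name; }
        List<String> scan() { return List.of("set1", "set2"); }
        @Override public void close() { closed.add(name); }
    }

    static List<String> listBackupSets() {
        // The resource is closed automatically when the block exits,
        // whether normally or via an exception -- no finally needed.
        try (Table table = new Table("backup:system")) {
            return table.scan();
        }
    }
}
```

Compared with a manual close in a finally block, this removes the leak that occurs when the close itself is skipped on an early return or exception path.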



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20532) Use try -with-resources in BackupSystemTable

2018-06-21 Thread Andy Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519282#comment-16519282
 ] 

Andy Lin commented on HBASE-20532:
--

Ok, I will fix it.

> Use try -with-resources in BackupSystemTable
> 
>
> Key: HBASE-20532
> URL: https://issues.apache.org/jira/browse/HBASE-20532
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andy Lin
>Assignee: Andy Lin
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-20532.v0.patch, HBASE-20532.v1.patch
>
>
> Use try-with-resources in BackupSystemTable for describeBackupSet and 
> listBackupSets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20532) Use try -with-resources in BackupSystemTable

2018-06-21 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519274#comment-16519274
 ] 

Ted Yu commented on HBASE-20532:


lgtm

Please fix checkstyle warnings


> Use try -with-resources in BackupSystemTable
> 
>
> Key: HBASE-20532
> URL: https://issues.apache.org/jira/browse/HBASE-20532
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andy Lin
>Assignee: Andy Lin
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-20532.v0.patch, HBASE-20532.v1.patch
>
>
> Use try-with-resources in BackupSystemTable for describeBackupSet and 
> listBackupSets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20769) getSplits() has a out of bounds problem in TableSnapshotInputFormatImpl

2018-06-21 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-20769:
-
Attachment: HBASE-20769.master.001.patch

> getSplits() has a out of bounds problem in TableSnapshotInputFormatImpl
> ---
>
> Key: HBASE-20769
> URL: https://issues.apache.org/jira/browse/HBASE-20769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0, 2.0.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20769.master.001.patch
>
>
> When numSplits > 1, getSplits may create a split whose start row is smaller 
> than the user-specified scan's start row, or whose stop row is larger than 
> the user-specified scan's stop row.
> {code}
> byte[][] sp = sa.split(hri.getStartKey(), hri.getEndKey(), numSplits, 
> true);
> for (int i = 0; i < sp.length - 1; i++) {
>   if (PrivateCellUtil.overlappingKeys(scan.getStartRow(), 
> scan.getStopRow(), sp[i],
>   sp[i + 1])) {
> List<String> hosts =
> calculateLocationsForInputSplit(conf, htd, hri, tableDir, 
> localityEnabled);
> Scan boundedScan = new Scan(scan);
> boundedScan.setStartRow(sp[i]);
> boundedScan.setStopRow(sp[i + 1]);
> splits.add(new InputSplit(htd, hri, hosts, boundedScan, 
> restoreDir));
>   }
> }
> {code}
> Since we split keys by the range of regions, when sp[i] < scan.getStartRow() 
> or sp[i + 1] > scan.getStopRow(), the created bounded scan may cover a range 
> beyond the user-defined scan.
> The fix should be simple:
> {code}
> boundedScan.setStartRow(
>  Bytes.compareTo(scan.getStartRow(), sp[i]) > 0 ? scan.getStartRow() : sp[i]);
>  boundedScan.setStopRow(
>  Bytes.compareTo(scan.getStopRow(), sp[i + 1]) < 0 ? scan.getStopRow() : sp[i 
> + 1]);
> {code}
> I will also try to add UTs to help discover this problem.
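The proposed clamp can be checked in isolation with a small stand-alone sketch. `Arrays.compareUnsigned` stands in for HBase's `Bytes.compareTo` here, and the empty-byte-array sentinel HBase uses for open-ended scans is deliberately ignored:

```java
import java.util.Arrays;

class SplitClamp {
    // Lexicographic compare of unsigned bytes, like HBase's Bytes.compareTo.
    static int compare(byte[] a, byte[] b) {
        return Arrays.compareUnsigned(a, b);
    }

    /**
     * Clamp a region split range [spLo, spHi) to the user scan's
     * [start, stop): take the larger of the two lower bounds and the
     * smaller of the two upper bounds.
     */
    static byte[][] clamp(byte[] start, byte[] stop, byte[] spLo, byte[] spHi) {
        byte[] lo = compare(start, spLo) > 0 ? start : spLo;
        byte[] hi = compare(stop, spHi) < 0 ? stop : spHi;
        return new byte[][] { lo, hi };
    }
}
```

With a user scan of [20, 80) and a split of [10, 50), the clamped bounded scan is [20, 50): neither the too-small start row nor any range past the user's stop row survives.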



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20732) Shutdown scan pool when master is stopped.

2018-06-21 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519241#comment-16519241
 ] 

Reid Chan edited comment on HBASE-20732 at 6/21/18 11:32 AM:
-

Simplified {{TestCleanChore}}.
 ps, two methods,
 {{TestCleanChore#testCleanerDoesNotDeleteDirectoryWithLateAddedFiles}} and 
{{TestCleanChore#testNoExceptionFromDirectoryWithRacyChildren}}, are almost the 
same, but the two results differed on the assertion below in a local test:
{code:java}
Mockito.verify(spy, Mockito.times(1)).isFileDeletable(Mockito.any());{code}
The former was 2, the latter was 1.

Waiting QA.


was (Author: reidchan):
Simplified {{TestCleanChore}}.
ps, two methods,
{{TestCleanChore#testCleanerDoesNotDeleteDirectoryWithLateAddedFiles}} and 
{{TestCleanChore#testNoExceptionFromDirectoryWithRacyChildren}} almost the 
same, but results was different on below assertion on local test,
{code}Mockito.verify(spy, 
Mockito.times(1)).isFileDeletable(Mockito.any());{code}
The former was 2, the latter was 1.

Waiting QA.

> Shutdown scan pool when master is stopped.
> --
>
> Key: HBASE-20732
> URL: https://issues.apache.org/jira/browse/HBASE-20732
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-20732.master.001.patch, 
> HBASE-20732.master.002.patch, HBASE-20732.master.003.patch, 
> HBASE-20732.master.004.patch
>
>
> If master is stopped, {{DirScanPool}} is kept open. This is found by 
> [~chia7712] when reviewing HBASE-20352.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20732) Shutdown scan pool when master is stopped.

2018-06-21 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519241#comment-16519241
 ] 

Reid Chan edited comment on HBASE-20732 at 6/21/18 11:29 AM:
-

Simplified {{TestCleanChore}}.
p.s. The two methods,
{{TestCleanChore#testCleanerDoesNotDeleteDirectoryWithLateAddedFiles}} and 
{{TestCleanChore#testNoExceptionFromDirectoryWithRacyChildren}} are almost the 
same, but the results differed on the assertion below in a local test:
{code}Mockito.verify(spy, 
Mockito.times(1)).isFileDeletable(Mockito.any());{code}
The former was 2, the latter was 1.

Waiting QA.


was (Author: reidchan):
Simplified {{TestCleanChore}}.
ps, two methods,
{{TestCleanChore#testCleanerDoesNotDeleteDirectoryWithLateAddedFiles}} and 
{{TestCleanChore#testNoExceptionFromDirectoryWithRacyChildren}} almost the 
same, but results was different on below assertion on local test,
{code}Mockito.verify(spy, 
Mockito.times(1)).isFileDeletable(Mockito.any());{code}
The former was 2, the latter was 1.

Waiting QA.

> Shutdown scan pool when master is stopped.
> --
>
> Key: HBASE-20732
> URL: https://issues.apache.org/jira/browse/HBASE-20732
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-20732.master.001.patch, 
> HBASE-20732.master.002.patch, HBASE-20732.master.003.patch, 
> HBASE-20732.master.004.patch
>
>
> If master is stopped, {{DirScanPool}} is kept open. This is found by 
> [~chia7712] when reviewing HBASE-20352.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20732) Shutdown scan pool when master is stopped.

2018-06-21 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519241#comment-16519241
 ] 

Reid Chan commented on HBASE-20732:
---

Simplified {{TestCleanChore}}.
p.s. The two methods,
{{TestCleanChore#testCleanerDoesNotDeleteDirectoryWithLateAddedFiles}} and 
{{TestCleanChore#testNoExceptionFromDirectoryWithRacyChildren}} are almost the 
same, but the results differed on the assertion below in a local test:
{code}Mockito.verify(spy, 
Mockito.times(1)).isFileDeletable(Mockito.any());{code}
The former was 2, the latter was 1.

Waiting QA.

> Shutdown scan pool when master is stopped.
> --
>
> Key: HBASE-20732
> URL: https://issues.apache.org/jira/browse/HBASE-20732
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-20732.master.001.patch, 
> HBASE-20732.master.002.patch, HBASE-20732.master.003.patch, 
> HBASE-20732.master.004.patch
>
>
> If master is stopped, {{DirScanPool}} is kept open. This is found by 
> [~chia7712] when reviewing HBASE-20352.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519238#comment-16519238
 ] 

Hadoop QA commented on HBASE-20691:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
58s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m  5s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
39s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
43s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}153m  
7s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 0s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20691 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928588/HBASE-20691.v4.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3cb25f386069 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 
12:16:42 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 72784c2d83 |
| maven | 

[jira] [Updated] (HBASE-20732) Shutdown scan pool when master is stopped.

2018-06-21 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-20732:
--
Attachment: HBASE-20732.master.004.patch

> Shutdown scan pool when master is stopped.
> --
>
> Key: HBASE-20732
> URL: https://issues.apache.org/jira/browse/HBASE-20732
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-20732.master.001.patch, 
> HBASE-20732.master.002.patch, HBASE-20732.master.003.patch, 
> HBASE-20732.master.004.patch
>
>
> If master is stopped, {{DirScanPool}} is kept open. This is found by 
> [~chia7712] when reviewing HBASE-20352.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20532) Use try -with-resources in BackupSystemTable

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519224#comment-16519224
 ] 

Hadoop QA commented on HBASE-20532:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
29s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hbase-backup: The patch generated 2 new + 1 unchanged 
- 0 fixed = 3 total (was 1) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
11s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
49s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20532 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928601/HBASE-20532.v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux bf2c6e26e39c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 72784c2d83 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13331/artifact/patchprocess/diff-checkstyle-hbase-backup.txt
 |
| whitespace | 

[jira] [Commented] (HBASE-20767) Always close hbaseAdmin along with connection in HBTU

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519219#comment-16519219
 ] 

Hadoop QA commented on HBASE-20767:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 6s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} hbase-server: The patch generated 0 new + 276 
unchanged - 2 fixed = 276 total (was 278) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
53s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 12s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}117m 
22s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20767 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928587/HBASE-20767.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux bd9de8181c77 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 72784c2d83 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13330/testReport/ |
| Max. process+thread count | 4482 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13330/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message 

[jira] [Created] (HBASE-20769) getSplits() has a out of bounds problem in TableSnapshotInputFormatImpl

2018-06-21 Thread Jingyun Tian (JIRA)
Jingyun Tian created HBASE-20769:


 Summary: getSplits() has a out of bounds problem in 
TableSnapshotInputFormatImpl
 Key: HBASE-20769
 URL: https://issues.apache.org/jira/browse/HBASE-20769
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.4.0, 1.3.0
Reporter: Jingyun Tian
Assignee: Jingyun Tian
 Fix For: 2.0.0


When numSplits > 1, getSplits may create a split whose start row is smaller 
than the user-specified scan's start row, or whose stop row is larger than the 
user-specified scan's stop row.

{code}

byte[][] sp = sa.split(hri.getStartKey(), hri.getEndKey(), numSplits, 
true);
for (int i = 0; i < sp.length - 1; i++) {
  if (PrivateCellUtil.overlappingKeys(scan.getStartRow(), 
scan.getStopRow(), sp[i],
  sp[i + 1])) {
List<String> hosts =
calculateLocationsForInputSplit(conf, htd, hri, tableDir, 
localityEnabled);

Scan boundedScan = new Scan(scan);
boundedScan.setStartRow(sp[i]);
boundedScan.setStopRow(sp[i + 1]);

splits.add(new InputSplit(htd, hri, hosts, boundedScan, 
restoreDir));
  }
}

{code}

Since we split keys by the range of regions, when sp[i] < scan.getStartRow() or 
sp[i + 1] > scan.getStopRow(), the created bounded scan may cover a range 
beyond the user-defined scan.

The fix should be simple:

{code}

boundedScan.setStartRow(
 Bytes.compareTo(scan.getStartRow(), sp[i]) > 0 ? scan.getStartRow() : sp[i]);
 boundedScan.setStopRow(
 Bytes.compareTo(scan.getStopRow(), sp[i + 1]) < 0 ? scan.getStopRow() : sp[i + 
1]);

{code}

I will also try to add UTs to help catch this problem.
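The proposed clamping can be sketched stand-alone. This is a minimal model of the fix, not the actual patch: java.util.Arrays.compareUnsigned stands in for HBase's Bytes.compareTo (both order byte[] lexicographically as unsigned bytes), and HBase's empty-byte[] "unbounded" start/stop markers are deliberately not handled here.

```java
import java.util.Arrays;

// Sketch of the proposed fix: clamp each split's boundaries to the
// user-supplied scan range so the bounded scan never extends past it.
public class ClampSplit {
    // Returns {startRow, stopRow} for the bounded scan over split [spLo, spHi).
    static byte[][] clamp(byte[] scanStart, byte[] scanStop, byte[] spLo, byte[] spHi) {
        // Start row: the larger of the scan's start row and the split's low key.
        byte[] start = Arrays.compareUnsigned(scanStart, spLo) > 0 ? scanStart : spLo;
        // Stop row: the smaller of the scan's stop row and the split's high key.
        byte[] stop = Arrays.compareUnsigned(scanStop, spHi) < 0 ? scanStop : spHi;
        return new byte[][] { start, stop };
    }

    public static void main(String[] args) {
        // User scan [0x20, 0x60) against a split [0x10, 0x40):
        byte[][] r = clamp(new byte[] {0x20}, new byte[] {0x60},
                           new byte[] {0x10}, new byte[] {0x40});
        // The bounded scan is clamped to [0x20, 0x40).
        System.out.println(r[0][0] == 0x20 && r[1][0] == 0x40); // prints true
    }
}
```

With the unclamped code, the same split would have produced a scan starting at 0x10, i.e. before the user's start row.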



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20697) Can't cache All region locations of the specify table by calling table.getRegionLocator().getAllRegionLocations()

2018-06-21 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519217#comment-16519217
 ] 

Guanghao Zhang commented on HBASE-20697:


What is the reason for the RegionLocations change?

> Can't cache All region locations of the specify table by calling 
> table.getRegionLocator().getAllRegionLocations()
> -
>
> Key: HBASE-20697
> URL: https://issues.apache.org/jira/browse/HBASE-20697
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
>Priority: Minor
> Fix For: 1.2.7, 1.3.3
>
> Attachments: HBASE-20697.branch-1.2.001.patch, 
> HBASE-20697.branch-1.2.002.patch, HBASE-20697.branch-1.2.003.patch, 
> HBASE-20697.branch-1.2.004.patch, HBASE-20697.master.001.patch, 
> HBASE-20697.master.002.patch
>
>
> When we upgrade and restart a new version of an application that reads and 
> writes to HBase, we get some operation timeouts. The timeouts are expected 
> because when the application restarts, it holds no region location cache and 
> must communicate with ZK and the meta regionserver to get region locations.
> We want to avoid these timeouts, so we do warmup work; as far as I can tell, 
> the method table.getRegionLocator().getAllRegionLocations() should fetch all 
> region locations and cache them. However, it didn't work well. There were 
> still a lot of timeouts, which confused me.
> I dug into the source code and found the following:
> {code:java}
> // code placeholder
> public List<HRegionLocation> getAllRegionLocations() throws IOException {
>   TableName tableName = getName();
>   NavigableMap<HRegionInfo, ServerName> locations =
>   MetaScanner.allTableRegions(this.connection, tableName);
>   ArrayList<HRegionLocation> regions = new ArrayList<>(locations.size());
>   for (Entry<HRegionInfo, ServerName> entry : locations.entrySet()) {
> regions.add(new HRegionLocation(entry.getKey(), entry.getValue()));
>   }
>   if (regions.size() > 0) {
> connection.cacheLocation(tableName, new RegionLocations(regions));
>   }
>   return regions;
> }
> In MetaCache
> public void cacheLocation(final TableName tableName, final RegionLocations 
> locations) {
>   byte [] startKey = 
> locations.getRegionLocation().getRegionInfo().getStartKey();
>   ConcurrentMap<byte[], RegionLocations> tableLocations = 
> getTableLocations(tableName);
>   RegionLocations oldLocation = tableLocations.putIfAbsent(startKey, 
> locations);
>   boolean isNewCacheEntry = (oldLocation == null);
>   if (isNewCacheEntry) {
> if (LOG.isTraceEnabled()) {
>   LOG.trace("Cached location: " + locations);
> }
> addToCachedServers(locations);
> return;
>   }
> {code}
> It will collect all regions into one RegionLocations object and cache only 
> the first non-null region location; then when we put or get to HBase, we 
> call getCachedLocation():
> {code:java}
> // code placeholder
> public RegionLocations getCachedLocation(final TableName tableName, final 
> byte [] row) {
>   ConcurrentNavigableMap<byte[], RegionLocations> tableLocations =
> getTableLocations(tableName);
>   Entry<byte[], RegionLocations> e = tableLocations.floorEntry(row);
>   if (e == null) {
> if (metrics!= null) metrics.incrMetaCacheMiss();
> return null;
>   }
>   RegionLocations possibleRegion = e.getValue();
>   // make sure that the end key is greater than the row we're looking
>   // for, otherwise the row actually belongs in the next region, not
>   // this one. the exception case is when the endkey is
>   // HConstants.EMPTY_END_ROW, signifying that the region we're
>   // checking is actually the last region in the table.
>   byte[] endKey = 
> possibleRegion.getRegionLocation().getRegionInfo().getEndKey();
>   if (Bytes.equals(endKey, HConstants.EMPTY_END_ROW) ||
>   getRowComparator(tableName).compareRows(
>   endKey, 0, endKey.length, row, 0, row.length) > 0) {
> if (metrics != null) metrics.incrMetaCacheHit();
> return possibleRegion;
>   }
>   // Passed all the way through, so we got nothing - complete cache miss
>   if (metrics != null) metrics.incrMetaCacheMiss();
>   return null;
> }
> {code}
> It will choose the first location as possibleRegion, and it may mismatch.
> So did I miss something, or am I wrong somewhere? If this is indeed a bug, I 
> think it is not very hard to fix.
> I hope committers and the PMC will review this!
>  
>  
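The lookup failure described above can be modeled in a few lines. This is a toy sketch, not HBase code: region start keys are simplified to Strings (the real MetaCache keys are byte[] start keys), and the two TreeMaps model caching each region under its own start key versus filing every region under only the first region's start key, as the quoted getAllRegionLocations() does.

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy model of the MetaCache floorEntry lookup from the issue description.
public class MetaCacheModel {
    // Returns the cached value whose start key is the greatest key <= row,
    // mirroring tableLocations.floorEntry(row) in getCachedLocation().
    static String locate(NavigableMap<String, String> cache, String row) {
        Map.Entry<String, String> e = cache.floorEntry(row);
        return e == null ? null : e.getValue();
    }

    // Correct shape: each region filed under its own start key.
    static NavigableMap<String, String> perRegionCache() {
        TreeMap<String, String> m = new TreeMap<>();
        m.put("", "region1");   // region ["", "b")
        m.put("b", "region2");  // region ["b", "m")
        m.put("m", "region3");  // region ["m", "")
        return m;
    }

    // Buggy shape: all regions cached under the first region's start key only.
    static NavigableMap<String, String> bulkCache() {
        TreeMap<String, String> m = new TreeMap<>();
        m.put("", "region1");
        return m;
    }

    public static void main(String[] args) {
        // Row "k" belongs to region2 (["b", "m")).
        System.out.println(locate(perRegionCache(), "k")); // prints region2
        // The bulk cache can only ever return region1; the subsequent end-key
        // check then fails for row "k", so the client counts a cache miss.
        System.out.println(locate(bulkCache(), "k")); // prints region1
    }
}
```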



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18569) Add prefetch support for async region locator

2018-06-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519202#comment-16519202
 ] 

Duo Zhang commented on HBASE-18569:
---

OK, the prefetch code was purged from the sync client in HBASE-10018. And since 
we now use a reverse scan to get the region location, I think we can add it 
back. For random read/write it would help a lot.

For a typical MR job this will not help, as we always scan forward, not 
backward, but since the lookup could still finish in one RPC call, there is no 
downside either.

> Add prefetch support for async region locator
> -
>
> Key: HBASE-18569
> URL: https://issues.apache.org/jira/browse/HBASE-18569
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Priority: Major
>
> It will be useful for large tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20532) Use try -with-resources in BackupSystemTable

2018-06-21 Thread Andy Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Lin updated HBASE-20532:
-
Attachment: HBASE-20532.v1.patch

> Use try -with-resources in BackupSystemTable
> 
>
> Key: HBASE-20532
> URL: https://issues.apache.org/jira/browse/HBASE-20532
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andy Lin
>Assignee: Andy Lin
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-20532.v0.patch, HBASE-20532.v1.patch
>
>
> Use try -with-resources in BackupSystemTable for describeBackupSet and 
> listBackupSets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20768) should we bypass the RegionLocationFinder#scheduleFullRefresh execution when master has not initialized

2018-06-21 Thread chenxu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519120#comment-16519120
 ] 

chenxu commented on HBASE-20768:


ping [~yuzhih...@gmail.com] [~stack] 

> should we bypass the RegionLocationFinder#scheduleFullRefresh execution when 
> master has not initialized
> ---
>
> Key: HBASE-20768
> URL: https://issues.apache.org/jira/browse/HBASE-20768
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: chenxu
>Priority: Major
> Attachments: HBASE-20768-master-v1.patch
>
>
> Our KYLIN cluster has about 300K regions; when the master starts up, it takes 
> a long time to run RegionLocationFinder#scheduleFullRefresh.
> During this period, CUBE builds cannot run, because the master has not 
> initialized.
> Although we can ignore the locality factor via HBASE-18478, I think it would 
> be better to bypass RegionLocationFinder#scheduleFullRefresh until the master 
> has initialized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20768) should we bypass the RegionLocationFinder#scheduleFullRefresh execution when master has not initialized

2018-06-21 Thread chenxu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu updated HBASE-20768:
---
Attachment: HBASE-20768-master-v1.patch

> should we bypass the RegionLocationFinder#scheduleFullRefresh execution when 
> master has not initialized
> ---
>
> Key: HBASE-20768
> URL: https://issues.apache.org/jira/browse/HBASE-20768
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: chenxu
>Priority: Major
> Attachments: HBASE-20768-master-v1.patch
>
>
> Our KYLIN cluster has about 300K regions; when the master starts up, it takes 
> a long time to run RegionLocationFinder#scheduleFullRefresh.
> During this period, CUBE builds cannot run, because the master has not 
> initialized.
> Although we can ignore the locality factor via HBASE-18478, I think it would 
> be better to bypass RegionLocationFinder#scheduleFullRefresh until the master 
> has initialized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20768) should we bypass the RegionLocationFinder#scheduleFullRefresh execution when master has not initialized

2018-06-21 Thread chenxu (JIRA)
chenxu created HBASE-20768:
--

 Summary: should we bypass the 
RegionLocationFinder#scheduleFullRefresh execution when master has not 
initialized
 Key: HBASE-20768
 URL: https://issues.apache.org/jira/browse/HBASE-20768
 Project: HBase
  Issue Type: Improvement
  Components: Balancer
Reporter: chenxu


Our KYLIN cluster has about 300K regions; when the master starts up, it takes a 
long time to run RegionLocationFinder#scheduleFullRefresh.
During this period, CUBE builds cannot run, because the master has not 
initialized.
Although we can ignore the locality factor via HBASE-18478, I think it would be 
better to bypass RegionLocationFinder#scheduleFullRefresh until the master has 
initialized.
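The proposed bypass amounts to a readiness guard. The sketch below is a toy model under stated assumptions: the names RefreshGuard, masterInitialized, and scheduleFullRefresh() are illustrative stand-ins; the real change would check the master's initialization flag inside RegionLocationFinder before kicking off the expensive per-region HDFS block-location scan.

```java
// Toy model of guarding a periodic full refresh behind master readiness.
public class RefreshGuard {
    boolean masterInitialized; // stand-in for the master's isInitialized() flag
    int refreshes;             // counts refreshes that actually ran

    // Returns true only when a full refresh was actually scheduled.
    boolean scheduleFullRefresh() {
        if (!masterInitialized) {
            // Bypass: defer the expensive locality refresh until startup completes.
            return false;
        }
        refreshes++;
        return true;
    }

    public static void main(String[] args) {
        RefreshGuard g = new RefreshGuard();
        System.out.println(g.scheduleFullRefresh()); // prints false: still initializing
        g.masterInitialized = true;
        System.out.println(g.scheduleFullRefresh()); // prints true: refresh runs now
    }
}
```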



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20701) too much logging when balancer runs from BaseLoadBalancer

2018-06-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519104#comment-16519104
 ] 

Hadoop QA commented on HBASE-20701:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
37s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
35s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 33s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 12s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRegionServerAbort |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:1f3957d |
| JIRA Issue | HBASE-20701 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928580/HBASE-20701.branch-1.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  

[jira] [Commented] (HBASE-20631) B: Merge command enhancements

2018-06-21 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519088#comment-16519088
 ] 

Ted Yu commented on HBASE-20631:


Please fix checkstyle warnings.

> B: Merge command enhancements 
> 
>
> Key: HBASE-20631
> URL: https://issues.apache.org/jira/browse/HBASE-20631
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-20631-v1.patch, HBASE-20631-v2.patch
>
>
> Currently, merge supports only a list of backup ids, which users must provide. 
> Date-range merges would be more convenient for users. 
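A date range could be resolved to the backup-id list the merge command already accepts. The helper below is a hypothetical sketch (method name and the id-to-timestamp metadata map are assumptions, not the actual backup API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: turn a [startTs, endTs] range into the list of backup
// ids that the existing merge command accepts. Names are illustrative only.
public class DateRangeMergeSketch {

    // idToTs stands in for backup metadata: backup id -> completion time (ms).
    public static List<String> backupIdsInRange(Map<String, Long> idToTs,
                                                long startTs, long endTs) {
        List<String> ids = new ArrayList<>();
        for (Map.Entry<String, Long> e : idToTs.entrySet()) {
            long ts = e.getValue();
            // Keep only backups completed inside the requested range.
            if (ts >= startTs && ts <= endTs) {
                ids.add(e.getKey());
            }
        }
        return ids;
    }

    public static void main(String[] args) {
        Map<String, Long> meta = new TreeMap<>();
        meta.put("backup_001", 100L);
        meta.put("backup_002", 200L);
        meta.put("backup_003", 300L);
        // Only backups completed within [150, 350] would be merged.
        System.out.println(backupIdsInRange(meta, 150L, 350L));
    }
}
```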



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20697) Can't cache All region locations of the specify table by calling table.getRegionLocator().getAllRegionLocations()

2018-06-21 Thread zhaoyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519062#comment-16519062
 ] 

zhaoyuan commented on HBASE-20697:
--

ping [~zghaobac] [~busbey]

Please take a look at the patch for master and help review it, thanks!

> Can't cache All region locations of the specify table by calling 
> table.getRegionLocator().getAllRegionLocations()
> -
>
> Key: HBASE-20697
> URL: https://issues.apache.org/jira/browse/HBASE-20697
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
>Priority: Minor
> Fix For: 1.2.7, 1.3.3
>
> Attachments: HBASE-20697.branch-1.2.001.patch, 
> HBASE-20697.branch-1.2.002.patch, HBASE-20697.branch-1.2.003.patch, 
> HBASE-20697.branch-1.2.004.patch, HBASE-20697.master.001.patch, 
> HBASE-20697.master.002.patch
>
>
> When we upgrade and restart a new version of an application that reads from 
> and writes to HBase, we get some operation timeouts. The timeouts are 
> expected, because when the application restarts it holds no region location 
> cache and must communicate with ZooKeeper and the meta region server to get 
> region locations.
> We want to avoid these timeouts, so we do warm-up work; as I understand it, 
> the method table.getRegionLocator().getAllRegionLocations() should fetch all 
> region locations and cache them. However, it didn't work well. There were 
> still a lot of timeouts, which confused me. 
> I dug into the source code and found the following:
> {code:java}
> public List<HRegionLocation> getAllRegionLocations() throws IOException {
>   TableName tableName = getName();
>   NavigableMap<HRegionInfo, ServerName> locations =
>       MetaScanner.allTableRegions(this.connection, tableName);
>   ArrayList<HRegionLocation> regions = new ArrayList<>(locations.size());
>   for (Entry<HRegionInfo, ServerName> entry : locations.entrySet()) {
>     regions.add(new HRegionLocation(entry.getKey(), entry.getValue()));
>   }
>   if (regions.size() > 0) {
>     connection.cacheLocation(tableName, new RegionLocations(regions));
>   }
>   return regions;
> }
>
> // In MetaCache
> public void cacheLocation(final TableName tableName, final RegionLocations locations) {
>   byte[] startKey = locations.getRegionLocation().getRegionInfo().getStartKey();
>   ConcurrentMap<byte[], RegionLocations> tableLocations = getTableLocations(tableName);
>   RegionLocations oldLocation = tableLocations.putIfAbsent(startKey, locations);
>   boolean isNewCacheEntry = (oldLocation == null);
>   if (isNewCacheEntry) {
>     if (LOG.isTraceEnabled()) {
>       LOG.trace("Cached location: " + locations);
>     }
>     addToCachedServers(locations);
>     return;
>   }
> {code}
> It collects all the regions into one RegionLocations object and caches it 
> only under the first non-null region location's start key. Then, when we put 
> to or get from HBase, we call getCachedLocation():
> {code:java}
> public RegionLocations getCachedLocation(final TableName tableName, final byte[] row) {
>   ConcurrentNavigableMap<byte[], RegionLocations> tableLocations =
>       getTableLocations(tableName);
>   Entry<byte[], RegionLocations> e = tableLocations.floorEntry(row);
>   if (e == null) {
>     if (metrics != null) metrics.incrMetaCacheMiss();
>     return null;
>   }
>   RegionLocations possibleRegion = e.getValue();
>   // make sure that the end key is greater than the row we're looking
>   // for, otherwise the row actually belongs in the next region, not
>   // this one. the exception case is when the endkey is
>   // HConstants.EMPTY_END_ROW, signifying that the region we're
>   // checking is actually the last region in the table.
>   byte[] endKey = possibleRegion.getRegionLocation().getRegionInfo().getEndKey();
>   if (Bytes.equals(endKey, HConstants.EMPTY_END_ROW) ||
>       getRowComparator(tableName).compareRows(
>           endKey, 0, endKey.length, row, 0, row.length) > 0) {
>     if (metrics != null) metrics.incrMetaCacheHit();
>     return possibleRegion;
>   }
>   // Passed all the way through, so we got nothing - complete cache miss
>   if (metrics != null) metrics.incrMetaCacheMiss();
>   return null;
> }
> {code}
> {code}
> It will choose that first cached location as possibleRegion, which can easily 
> mismatch the row's actual region.
> So did I forget something, or am I wrong somewhere? If this is indeed a bug, 
> I think it is not very hard to fix.
> Hope committers and PMC members will review this!
>  
>  
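The floorEntry mismatch described above can be reproduced with a plain TreeMap. The sketch below is illustrative only (simplified to String keys, not real HBase code) and shows why caching each region under its own start key fixes the lookup:

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch of MetaCache's floorEntry-based lookup, simplified to
// String keys. Not actual HBase code.
public class MetaCacheSketch {

    // Mirrors getCachedLocation(): find the region whose start key is the
    // greatest key <= row.
    public static String lookup(TreeMap<String, String> cache, String row) {
        Map.Entry<String, String> e = cache.floorEntry(row);
        return e == null ? null : e.getValue();
    }

    // Buggy shape: getAllRegionLocations() stores everything in one entry
    // keyed by the FIRST region's start key only.
    public static TreeMap<String, String> buggyCache() {
        TreeMap<String, String> cache = new TreeMap<>();
        cache.put("", "region-A"); // regions B and C are never individually keyed
        return cache;
    }

    // Fixed shape: each region location cached under its own start key.
    public static TreeMap<String, String> fixedCache() {
        TreeMap<String, String> cache = new TreeMap<>();
        cache.put("", "region-A");  // [  , b)
        cache.put("b", "region-B"); // [b , m)
        cache.put("m", "region-C"); // [m ,  )
        return cache;
    }

    public static void main(String[] args) {
        // Row "k" belongs to region-B, but the buggy cache resolves it to
        // region-A, whose end-key check then forces a cache miss.
        System.out.println(lookup(buggyCache(), "k")); // region-A (wrong)
        System.out.println(lookup(fixedCache(), "k")); // region-B (correct)
    }
}
```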



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19064) Synchronous replication for HBase

2018-06-21 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519041#comment-16519041
 ] 

Duo Zhang commented on HBASE-19064:
---

Most sub-tasks have been resolved; the only remaining one is for performance 
testing, which I think is not a blocker. Let me start a merge vote to get some feedback.

> Synchronous replication for HBase
> -
>
> Key: HBASE-19064
> URL: https://issues.apache.org/jira/browse/HBASE-19064
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> The guys from Alibaba made a presentation on HBaseCon Asia about the 
> synchronous replication for HBase. We(Xiaomi) think this is a very useful 
> feature for HBase so we want to bring it into the community version.
> This is a big feature so we plan to do it in a feature branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20393) Operational documents for synchronous replication.

2018-06-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20393:
--
Component/s: Replication
 documentation

> Operational documents for synchronous replication. 
> ---
>
> Key: HBASE-20393
> URL: https://issues.apache.org/jira/browse/HBASE-20393
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, Replication
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: HBASE-19064
>
> Attachments: HBASE-20393-HBASE-19064-v4.patch, 
> HBASE-20393.HBASE-19064.v1.patch, HBASE-20393.HBASE-19064.v2.patch, 
> HBASE-20393.HBASE-19064.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20393) Operational documents for synchronous replication.

2018-06-21 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20393:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HBASE-19064
   Status: Resolved  (was: Patch Available)

Pushed to branch HBASE-19064. Thanks [~openinx] for contributing.

> Operational documents for synchronous replication. 
> ---
>
> Key: HBASE-20393
> URL: https://issues.apache.org/jira/browse/HBASE-20393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: HBASE-19064
>
> Attachments: HBASE-20393-HBASE-19064-v4.patch, 
> HBASE-20393.HBASE-19064.v1.patch, HBASE-20393.HBASE-19064.v2.patch, 
> HBASE-20393.HBASE-19064.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-21 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519029#comment-16519029
 ] 

Yu Li edited comment on HBASE-20691 at 6/21/18 7:46 AM:


bq. Right now the code in CommonFSUtils just checks against the default passed 
into the call to setStoragePolicy, not against any constant. That's incorrect
Ok, got your point; it has followed this pattern since being introduced by 
HBASE-12848. Let me remove the {{setStoragePolicy}} method with the 
*defaultPolicy* parameter and check directly against the constant in the 
three-parameter {{setStoragePolicy}} method.

bq. I don't mean in the region server, I mean just here in this test.
Ah, I see. The test case here simply tries to prove that the HDFS API won't be 
called if we try to set the storage policy to the default, and vice versa. 
Please check the new patch; I think it is much clearer. Please note that the 
IOException thrown will be caught and *logged as a warning* like below (I guess 
you overlooked the UT result I pasted above, sir, so allow me to repeat):
{noformat}
2018-06-08 22:59:39,063 WARN  [Time-limited test] util.CommonFSUtils(572): 
Unable to set storagePolicy=HOT for path=non-exist. DEBUG log level might have 
more details.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.invokeSetStoragePolicy(CommonFSUtils.java:563)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:524)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:484)
at 
org.apache.hadoop.hbase.util.TestFSUtils.verifyNoHDFSApiInvocationForDefaultPolicy(TestFSUtils.java:356)
at 
org.apache.hadoop.hbase.util.TestFSUtils.testSetStoragePolicyDefault(TestFSUtils.java:341)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
Caused by: java.io.IOException: The setStoragePolicy method is invoked 
unexpectedly
at 
org.apache.hadoop.hbase.util.TestFSUtils$AlwaysFailSetStoragePolicyFileSystem.setStoragePolicy(TestFSUtils.java:364)
... 30 more
{noformat}


was (Author: carp84):
bq. Right now the code in CommonFSUtils just checks against the default passed 
into the call to setStoragePolicy, not against any constant. That's incorrect
Ok, got your point; it has followed this pattern since being introduced by 
HBASE-12848. Let me remove the {{setStoragePolicy}} method with the 
*defaultPolicy* parameter and check directly against the constant in the 
three-parameter {{setStoragePolicy}} method.

bq. I don't mean in the region server, I mean just here in this test.
Ah, I see. The test case here simply tries to prove that the HDFS API won't be 
called if we try to set the storage policy to the default, and vice versa. 
Please check the new patch; I think it is much clearer. Please note that the 
IOException thrown will be caught and logged as a warning like below (I guess 
you overlooked the UT result I pasted above, sir, so allow me to repeat):
{noformat}
2018-06-08 22:59:39,063 WARN  [Time-limited test] util.CommonFSUtils(572): 
Unable to set storagePolicy=HOT for path=non-exist. DEBUG log level might have 
more details.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.invokeSetStoragePolicy(CommonFSUtils.java:563)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:524)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:484)
at 
org.apache.hadoop.hbase.util.TestFSUtils.verifyNoHDFSApiInvocationForDefaultPolicy(TestFSUtils.java:356)
at 
org.apache.hadoop.hbase.util.TestFSUtils.testSetStoragePolicyDefault(TestFSUtils.java:341)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
Caused by: java.io.IOException: The setStoragePolicy method is invoked 
unexpectedly
at 
org.apache.hadoop.hbase.util.TestFSUtils$AlwaysFailSetStoragePolicyFileSystem.setStoragePolicy(TestFSUtils.java:364)
... 30 more
{noformat}

> Storage policy should allow deferring to HDFS
> -
>
> Key: HBASE-20691
> URL: 

[jira] [Comment Edited] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-21 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519029#comment-16519029
 ] 

Yu Li edited comment on HBASE-20691 at 6/21/18 7:45 AM:


bq. Right now the code in CommonFSUtils just checks against the default passed 
into the call to setStoragePolicy, not against any constant. That's incorrect
Ok, got your point; it has followed this pattern since being introduced by 
HBASE-12848. Let me remove the {{setStoragePolicy}} method with the 
*defaultPolicy* parameter and check directly against the constant in the 
three-parameter {{setStoragePolicy}} method.

bq. I don't mean in the region server, I mean just here in this test.
Ah, I see. The test case here simply tries to prove that the HDFS API won't be 
called if we try to set the storage policy to the default, and vice versa. 
Please check the new patch; I think it is much clearer. Please note that the 
IOException thrown will be caught and logged as a warning like below (I guess 
you overlooked the UT result I pasted above, sir, so allow me to repeat):
{noformat}
2018-06-08 22:59:39,063 WARN  [Time-limited test] util.CommonFSUtils(572): 
Unable to set storagePolicy=HOT for path=non-exist. DEBUG log level might have 
more details.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.invokeSetStoragePolicy(CommonFSUtils.java:563)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:524)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:484)
at 
org.apache.hadoop.hbase.util.TestFSUtils.verifyNoHDFSApiInvocationForDefaultPolicy(TestFSUtils.java:356)
at 
org.apache.hadoop.hbase.util.TestFSUtils.testSetStoragePolicyDefault(TestFSUtils.java:341)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
Caused by: java.io.IOException: The setStoragePolicy method is invoked 
unexpectedly
at 
org.apache.hadoop.hbase.util.TestFSUtils$AlwaysFailSetStoragePolicyFileSystem.setStoragePolicy(TestFSUtils.java:364)
... 30 more
{noformat}


was (Author: carp84):
bq. Right now the code in CommonFSUtils just checks against the default passed 
into the call to setStoragePolicy, not against any constant. That's incorrect
Ok, got your point; it has followed this pattern since being introduced by 
HBASE-12848. Let me remove the {{setStoragePolicy}} method with the 
*defaultPolicy* parameter and check directly against the constant in the 
three-parameter {{setStoragePolicy}} method.

bq. I don't mean in the region server, I mean just here in this test.
Ah, I see. The test case here simply tries to prove that the HDFS API won't be 
called if we try to set the storage policy to the default, and vice versa. 
Please check the new patch; I think it will be much clearer.

> Storage policy should allow deferring to HDFS
> -
>
> Key: HBASE-20691
> URL: https://issues.apache.org/jira/browse/HBASE-20691
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, wal
>Affects Versions: 1.5.0, 2.0.0
>Reporter: Sean Busbey
>Assignee: Yu Li
>Priority: Minor
> Fix For: 2.1.0, 1.5.0
>
> Attachments: HBASE-20691.patch, HBASE-20691.v2.patch, 
> HBASE-20691.v3.patch, HBASE-20691.v4.patch
>
>
> In HBase 1.1 - 1.4 we can defer storage policy decisions to HDFS by using 
> "NONE" as the storage policy in hbase configs.
> As described on this [dev@hbase thread "WAL storage policies and interactions 
> with Hadoop admin 
> tools."|https://lists.apache.org/thread.html/d220726fab4bb4c9e117ecc8f44246402dd97bfc986a57eb2237@%3Cdev.hbase.apache.org%3E]
>  we no longer have that option in 2.0.0 and 1.5.0 (as the branch is now). 
> Additionally, we can't set the policy to HOT in the event that HDFS has 
> changed the policy for a parent directory of our WALs.
> We should put back that ability. Presuming this is done by re-adopting the 
> "NONE" placeholder variable, we need to ensure that value doesn't get passed 
> to HDFS APIs. Since it isn't a valid storage policy, attempting to use it will 
> cause a bunch of logging churn (which would be a regression of the problem 
> HBASE-18118 sought to fix).
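The guard the description calls for can be sketched in a few lines. This is a minimal sketch under stated assumptions (the constant and method names are illustrative, not the actual CommonFSUtils code):

```java
// Hypothetical sketch: treat "NONE" as "defer to HDFS" and never pass it to
// the HDFS setStoragePolicy API. Names here are assumptions for illustration.
public class StoragePolicyGuardSketch {

    // Placeholder meaning "leave the policy decision to HDFS".
    public static final String DEFER_TO_HDFS = "NONE";

    // Returns true only for real policy names. The placeholder (and null or
    // blank values) short-circuits, so an invalid policy never reaches the
    // NameNode and causes no warning-log churn (the problem HBASE-18118 fixed).
    public static boolean shouldInvokeHdfsApi(String policy) {
        return policy != null
            && !policy.trim().isEmpty()
            && !policy.trim().equalsIgnoreCase(DEFER_TO_HDFS);
    }

    public static void main(String[] args) {
        System.out.println(shouldInvokeHdfsApi("NONE")); // defer: skip HDFS call
        System.out.println(shouldInvokeHdfsApi("HOT"));  // real policy: call HDFS
    }
}
```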



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-21 Thread Yu Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-20691:
--
Attachment: HBASE-20691.v4.patch

> Storage policy should allow deferring to HDFS
> -
>
> Key: HBASE-20691
> URL: https://issues.apache.org/jira/browse/HBASE-20691
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, wal
>Affects Versions: 1.5.0, 2.0.0
>Reporter: Sean Busbey
>Assignee: Yu Li
>Priority: Minor
> Fix For: 2.1.0, 1.5.0
>
> Attachments: HBASE-20691.patch, HBASE-20691.v2.patch, 
> HBASE-20691.v3.patch, HBASE-20691.v4.patch
>
>
> In HBase 1.1 - 1.4 we can defer storage policy decisions to HDFS by using 
> "NONE" as the storage policy in hbase configs.
> As described on this [dev@hbase thread "WAL storage policies and interactions 
> with Hadoop admin 
> tools."|https://lists.apache.org/thread.html/d220726fab4bb4c9e117ecc8f44246402dd97bfc986a57eb2237@%3Cdev.hbase.apache.org%3E]
>  we no longer have that option in 2.0.0 and 1.5.0 (as the branch is now). 
> Additionally, we can't set the policy to HOT in the event that HDFS has 
> changed the policy for a parent directory of our WALs.
> We should put back that ability. Presuming this is done by re-adopting the 
> "NONE" placeholder variable, we need to ensure that value doesn't get passed 
> to HDFS APIs. Since it isn't a valid storage policy, attempting to use it will 
> cause a bunch of logging churn (which would be a regression of the problem 
> HBASE-18118 sought to fix).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

