[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372521#comment-16372521
 ] 

Hadoop QA commented on HBASE-19767:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  9m 
 3s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} hbase-server: The patch generated 0 new + 178 
unchanged - 1 fixed = 178 total (was 179) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
25s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 46s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}111m 
26s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19767 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911495/hbase-19767.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux dfbc2257c55f 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2440f807bf |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11613/testReport/ |
| Max. process+thread count | 5129 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11613/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Master web UI shows negative values for Remaining KVs
> 

[jira] [Updated] (HBASE-19863) java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter is used

2018-02-21 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-19863:

Attachment: HBASE-19863.v4-master.patch

> java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter 
> is used
> -
>
> Key: HBASE-19863
> URL: https://issues.apache.org/jira/browse/HBASE-19863
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.1
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Attachments: HBASE-19863-branch-2.patch, HBASE-19863-branch1.patch, 
> HBASE-19863-test.patch, HBASE-19863.v2-branch-2.patch, 
> HBASE-19863.v3-branch-2.patch, HBASE-19863.v4-branch-2.patch, 
> HBASE-19863.v4-master.patch
>
>
> Under some circumstances scan with SingleColumnValueFilter may fail with an 
> exception
> {noformat} 
> java.lang.IllegalStateException: isDelete failed: deleteBuffer=C3, 
> qualifier=C2, timestamp=1516433595543, comparison result: 1 
> at 
> org.apache.hadoop.hbase.regionserver.ScanDeleteTracker.isDeleted(ScanDeleteTracker.java:149)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:386)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:545)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5876)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6027)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5814)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2552)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32385)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> {noformat}
> Conditions:
> table T with a single column family 0 that uses a ROWCOL bloom filter 
> (important) and column qualifiers C1,C2,C3,C4,C5. 
> When we fill the table, for every row we put a delete marker for C3.
> The table has a single region with two HStores:
> A: start row: 0, stop row: 99 
> B: start row: 10, stop row: 99
> B has newer versions of rows 10-99. Store files have several blocks each 
> (important). 
> Store A is the result of a major compaction, so it doesn't have any deleted 
> cells (important).
> So, we are running a scan like:
> {noformat}
> scan 'T', { COLUMNS => ['0:C3','0:C5'], FILTER => "SingleColumnValueFilter 
> ('0','C5',=,'binary:whatever')"}
> {noformat}  
> How the scan performs:
> First, we iterate over A for rows 0 and 1 without any problems. 
> Next, we start to iterate over A for row 10, so we read the first cell and set 
> the hfile scanner to A:
> 10:0/C1/0/Put/x, but find that we have a newer version of the cell in B: 
> 10:0/C1/1/Put/x, 
> so we make B our current store scanner. Since we are looking for the 
> particular columns 
> C3 and C5, we perform the optimization StoreScanner.seekOrSkipToNextColumn, 
> which 
> would run reseek for all store scanners.
> For store A the following magic would happen in requestSeek:
>   1. The bloom filter check (passesGeneralBloomFilter) would set haveToSeek to 
> false because row 10 doesn't have the C3 qualifier in store A.  
>   2. Since we don't have to seek, we just create a fake row 
> 10:0/C3/OLDEST_TIMESTAMP/Maximum, an optimization that is quite important for 
> us and is commented with:
> {noformat}
>  // Multi-column Bloom filter optimization.
> // Create a fake key/value, so that this scanner only bubbles up to the 
> top
> // of the KeyValueHeap in StoreScanner after we scanned this row/column in
> // all other store files. The query matcher will then just skip this fake
> // key/value and the store scanner will progress to the next column. This
> // is obviously not a "real real" seek, but unlike the fake KV earlier in
> // this method, we want this to be propagated to ScanQueryMatcher.
> {noformat}
> 
> For store B we would set it to the fake 10:0/C3/createFirstOnRowColTS()/Maximum 
> to skip C3 entirely. 
> After that we start searching for qualifier C5 using seekOrSkipToNextColumn, 
> which first runs trySkipToNextColumn:
> {noformat}
>   protected boolean trySkipToNextColumn(Cell cell) throws IOException {
> Cell nextCell = null;
> do {
>   Cell 
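A minimal, self-contained sketch of the fake-key optimization described above
(all names below are simplified stand-ins, not HBase's actual requestSeek
internals):

{code}
// Sketch only: simplified stand-ins for HBase internals, illustrating the
// multi-column bloom filter optimization described above. When the ROWCOL
// bloom filter rules a (row, qualifier) out of a store file, the scanner is
// handed a fake key like 10:0/C3/OLDEST_TIMESTAMP/Maximum instead of doing a
// real seek; the old timestamp makes it sort after every real (newer) cell
// for that column, so this store only bubbles up in the KeyValueHeap once the
// column has been scanned in all other store files.
final class FakeSeekSketch {
  static final long OLDEST_TIMESTAMP = Long.MIN_VALUE;

  /** Pretend ROWCOL bloom filter: "might this (row, qualifier) exist here?" */
  interface RowColBloom {
    boolean mightContain(String row, String qualifier);
  }

  /** Minimal stand-in for a KeyValue key. */
  static final class FakeKey {
    final String row;
    final String qualifier;
    final long timestamp;

    FakeKey(String row, String qualifier, long timestamp) {
      this.row = row;
      this.qualifier = qualifier;
      this.timestamp = timestamp;
    }
  }

  /**
   * Returns a fake key when the bloom filter rules the column out; a null
   * return here means "perform the real seek".
   */
  static FakeKey requestSeek(RowColBloom bloom, String row, String qualifier) {
    if (!bloom.mightContain(row, qualifier)) {
      return new FakeKey(row, qualifier, OLDEST_TIMESTAMP);
    }
    return null;
  }
}
{code}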

[jira] [Updated] (HBASE-19863) java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter is used

2018-02-21 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-19863:

Attachment: HBASE-19863.v4-branch-2.patch

> java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter 
> is used
> -
>
> Key: HBASE-19863
> URL: https://issues.apache.org/jira/browse/HBASE-19863
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.1
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Attachments: HBASE-19863-branch-2.patch, HBASE-19863-branch1.patch, 
> HBASE-19863-test.patch, HBASE-19863.v2-branch-2.patch, 
> HBASE-19863.v3-branch-2.patch, HBASE-19863.v4-branch-2.patch
>
>
> Under some circumstances scan with SingleColumnValueFilter may fail with an 
> exception
> {noformat} 
> java.lang.IllegalStateException: isDelete failed: deleteBuffer=C3, 
> qualifier=C2, timestamp=1516433595543, comparison result: 1 
> at 
> org.apache.hadoop.hbase.regionserver.ScanDeleteTracker.isDeleted(ScanDeleteTracker.java:149)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:386)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:545)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5876)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6027)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5814)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2552)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32385)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> {noformat}
> Conditions:
> table T with a single column family 0 that uses a ROWCOL bloom filter 
> (important) and column qualifiers C1,C2,C3,C4,C5. 
> When we fill the table, for every row we put a delete marker for C3.
> The table has a single region with two HStores:
> A: start row: 0, stop row: 99 
> B: start row: 10, stop row: 99
> B has newer versions of rows 10-99. Store files have several blocks each 
> (important). 
> Store A is the result of a major compaction, so it doesn't have any deleted 
> cells (important).
> So, we are running a scan like:
> {noformat}
> scan 'T', { COLUMNS => ['0:C3','0:C5'], FILTER => "SingleColumnValueFilter 
> ('0','C5',=,'binary:whatever')"}
> {noformat}  
> How the scan performs:
> First, we iterate over A for rows 0 and 1 without any problems. 
> Next, we start to iterate over A for row 10, so we read the first cell and set 
> the hfile scanner to A:
> 10:0/C1/0/Put/x, but find that we have a newer version of the cell in B: 
> 10:0/C1/1/Put/x, 
> so we make B our current store scanner. Since we are looking for the 
> particular columns 
> C3 and C5, we perform the optimization StoreScanner.seekOrSkipToNextColumn, 
> which 
> would run reseek for all store scanners.
> For store A the following magic would happen in requestSeek:
>   1. The bloom filter check (passesGeneralBloomFilter) would set haveToSeek to 
> false because row 10 doesn't have the C3 qualifier in store A.  
>   2. Since we don't have to seek, we just create a fake row 
> 10:0/C3/OLDEST_TIMESTAMP/Maximum, an optimization that is quite important for 
> us and is commented with:
> {noformat}
>  // Multi-column Bloom filter optimization.
> // Create a fake key/value, so that this scanner only bubbles up to the 
> top
> // of the KeyValueHeap in StoreScanner after we scanned this row/column in
> // all other store files. The query matcher will then just skip this fake
> // key/value and the store scanner will progress to the next column. This
> // is obviously not a "real real" seek, but unlike the fake KV earlier in
> // this method, we want this to be propagated to ScanQueryMatcher.
> {noformat}
> 
> For store B we would set it to the fake 10:0/C3/createFirstOnRowColTS()/Maximum 
> to skip C3 entirely. 
> After that we start searching for qualifier C5 using seekOrSkipToNextColumn, 
> which first runs trySkipToNextColumn:
> {noformat}
>   protected boolean trySkipToNextColumn(Cell cell) throws IOException {
> Cell nextCell = null;
> do {
>   Cell nextIndexedKey = 

[jira] [Commented] (HBASE-20044) TestClientClusterStatus is flakey

2018-02-21 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372497#comment-16372497
 ] 

Duo Zhang commented on HBASE-20044:
---

{noformat}
commit 3a3994223c5d634bdd7ef01ef7f31ff860849575
Author: Michael Stack 
Date:   Wed Feb 21 14:52:10 2018 -0800

HBASE-2004 TestClientClusterStatus is flakey
{noformat}

Should be HBASE-20044?

> TestClientClusterStatus is flakey
> -
>
> Key: HBASE-20044
> URL: https://issues.apache.org/jira/browse/HBASE-20044
> Project: HBase
>  Issue Type: Sub-task
>  Components: flakey
>Reporter: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20044.branch-2.001.patch
>
>
> It killed a nightly. Failed in flakey suite. The compare is too sensitive to 
> the slightest variance. Here are two failures... one because the previous test 
> had not finished putting back a Region that had been offlined, and the other 
> because the count of requests was off slightly. Let me make the compare 
> coarser. 
> {code}
> Test set: org.apache.hadoop.hbase.TestClientClusterStatus
> ---
> Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.858 s <<< 
> FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
> testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time elapsed: 
> 0.236 s  <<< FAILURE!
> java.lang.AssertionError: 
> expected: Number of backup masters: 2
>   asf903.gq1.ygridcore.net,34661,1519246530655
>   asf903.gq1.ygridcore.net,34734,1519246533133
> Number of live region servers: 4
>   asf903.gq1.ygridcore.net,37432,1519246533632
>   asf903.gq1.ygridcore.net,42964,1519246533554
>   asf903.gq1.ygridcore.net,43699,1519246533376
>   asf903.gq1.ygridcore.net,56911,1519246533711
> Number of dead region servers: 1
>   asf903.gq1.ygridcore.net,57278,1519246533770
> Average load: 0.25
> Number of requests: 17
> Number of regions: 1
> Number of regions in transition: 0> but was: asf903.gq1.ygridcore.net,45687,1519246533030
> Number of backup masters: 2
>   asf903.gq1.ygridcore.net,34661,1519246530655
>   asf903.gq1.ygridcore.net,34734,1519246533133
> Number of live region servers: 4
>   asf903.gq1.ygridcore.net,37432,1519246533632
>   asf903.gq1.ygridcore.net,42964,1519246533554
>   asf903.gq1.ygridcore.net,43699,1519246533376
>   asf903.gq1.ygridcore.net,56911,1519246533711
> Number of dead region servers: 1
>   asf903.gq1.ygridcore.net,57278,1519246533770
> Average load: 0.5
> Number of requests: 17
> Number of regions: 2
> Number of regions in transition: 0>
>   at 
> org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
> {code}
> and 
> {code}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.416 
> s <<< FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
> [ERROR] testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time 
> elapsed: 0.065 s  <<< FAILURE!
> java.lang.AssertionError: 
> expected: Number of backup masters: 2
>   9845c79afe69,35076,1519227086213
>   9845c79afe69,45963,1519227086363
> Number of live region servers: 4
>   9845c79afe69,34709,1519227086571
>   9845c79afe69,34961,1519227086645
>   9845c79afe69,35891,1519227086720
>   9845c79afe69,36139,1519227086486
> Number of dead region servers: 1
>   9845c79afe69,41992,1519227086820
> Average load: 0.5
> Number of requests: 17
> Number of regions: 2
> Number of regions in transition: 0> but was: 9845c79afe69,46509,1519227084385
> Number of backup masters: 2
>   9845c79afe69,35076,1519227086213
>   9845c79afe69,45963,1519227086363
> Number of live region servers: 4
>   9845c79afe69,34709,1519227086571
>   9845c79afe69,34961,1519227086645
>   9845c79afe69,35891,1519227086720
>   9845c79afe69,36139,1519227086486
> Number of dead region servers: 1
>   9845c79afe69,41992,1519227086820
> Average load: 0.5
> Number of requests: 19
> Number of regions: 2
> Number of regions in transition: 0>
>   at 
> org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
> {code}
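
A coarser compare along the lines described might assert only on cluster
membership rather than on the full status, which embeds request counts and
region counts that drift between calls. A hedged sketch, assuming the
2.0-era ClusterStatus getters; this is not the attached patch:

{code}
import java.util.HashSet;
import org.apache.hadoop.hbase.ClusterStatus;
import org.junit.Assert;

public final class CoarseStatusCompare {
  private CoarseStatusCompare() {}

  // Compare only server membership; deliberately ignore request counts,
  // average load and region counts, which vary between successive calls.
  public static void assertRoughlyEqual(ClusterStatus expected, ClusterStatus actual) {
    Assert.assertEquals(new HashSet<>(expected.getServers()),
        new HashSet<>(actual.getServers()));
    Assert.assertEquals(new HashSet<>(expected.getBackupMasters()),
        new HashSet<>(actual.getBackupMasters()));
    Assert.assertEquals(new HashSet<>(expected.getDeadServerNames()),
        new HashSet<>(actual.getDeadServerNames()));
  }
}
{code}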



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20045) When running compaction, cache recent blocks.

2018-02-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372487#comment-16372487
 ] 

Anoop Sam John commented on HBASE-20045:


When the cache-on-write config is ON, we will cache all the blocks from a newly 
written file. The file can be a newly flushed one or a new compacted result 
file. (Am I correct?) So what you suggest is to selectively cache the blocks of 
these newly compacted result files, so that only some recent data gets cached: 
if a block contains only very old data (like older than 24 hrs), don't cache it 
on write, but if there is some newer data in the block, cache it. Am I reading 
it correctly, JMS?
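
If that reading is right, the selective policy could be a simple age check at
block-write time. A hypothetical sketch (none of these names exist in HBase;
it only illustrates the idea):

{code}
// Hypothetical age-based cache-on-write predicate, as discussed above.
// All names here are invented for illustration.
final class RecentBlockCachePolicy {
  private final long maxAgeMs; // e.g. 24h, from a table-level parameter

  RecentBlockCachePolicy(long maxAgeMs) {
    this.maxAgeMs = maxAgeMs;
  }

  /**
   * Cache a newly written (compacted) block only if it contains at least one
   * cell newer than the cutoff; blocks holding only very old data are skipped.
   */
  boolean shouldCacheOnWrite(long newestCellTimestampInBlock, long nowMs) {
    return nowMs - newestCellTimestampInBlock <= maxAgeMs;
  }
}
{code}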

> When running compaction, cache recent blocks.
> -
>
> Key: HBASE-20045
> URL: https://issues.apache.org/jira/browse/HBASE-20045
> Project: HBase
>  Issue Type: New Feature
>  Components: BlockCache, Compaction
>Affects Versions: 2.0.0-beta-1
>Reporter: Jean-Marc Spaggiari
>Priority: Major
>
> HBase already allows caching blocks on flush. This is very useful for 
> usecases where most queries are against recent data. However, as soon as 
> there is a compaction, those blocks are evicted. It would be interesting to 
> have a table-level parameter to say "When compacting, cache blocks less than 
> 24 hours old". That way, when running a compaction, all blocks where some data 
> is less than 24h old will be automatically cached. 
>  
> Very useful for table design where there is a TS in the key but a long history 
> (Like a year of sensor data).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372461#comment-16372461
 ] 

Hudson commented on HBASE-19767:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4628 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4628/])
Revert "HBASE-19767 Fix for Master web UI shows negative values for (stack: rev 
2440f807bf7d077def819c616d4afa97a4e2539e)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionProgress.java


> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20046) Reconsider the implementation for serial replication

2018-02-21 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372451#comment-16372451
 ] 

Duo Zhang commented on HBASE-20046:
---

And I'm afraid there will be something wrong with the region replica feature... 
For rep_meta we use the encoded region name as the row key, but for the other 
meta families we use the region name of the default replica as the row key...

> Reconsider the implementation for serial replication
> 
>
> Key: HBASE-20046
> URL: https://issues.apache.org/jira/browse/HBASE-20046
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
>
> When implementing HBASE-9465 we added two new families to the meta table: one 
> is rep_meta and the other is rep_position. In general I think rep_meta is OK 
> to put into the meta table since it records the open and close sequence ids 
> for a region, but rep_position, I think, should be put into another system 
> table instead of the meta table.
> This should be done before we finally release 2.0.0 or 1.5.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20048) Revert serial replication feature from branch-2 and branch-1

2018-02-21 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-20048:
-

 Summary: Revert serial replication feature from branch-2 and 
branch-1
 Key: HBASE-20048
 URL: https://issues.apache.org/jira/browse/HBASE-20048
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Reporter: Duo Zhang
 Fix For: 1.5.0, 2.0.0-beta-2






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20046) Reconsider the implementation for serial replication

2018-02-21 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20046:
--
 Priority: Major  (was: Blocker)
Fix Version/s: (was: 1.5.0)
   (was: 2.0.0)
   2.1.0
   3.0.0
   Issue Type: New Feature  (was: Bug)

> Reconsider the implementation for serial replication
> 
>
> Key: HBASE-20046
> URL: https://issues.apache.org/jira/browse/HBASE-20046
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
>
> When implementing HBASE-9465 we added two new families to the meta table: one 
> is rep_meta and the other is rep_position. In general I think rep_meta is OK 
> to put into the meta table since it records the open and close sequence ids 
> for a region, but rep_position, I think, should be put into another system 
> table instead of the meta table.
> This should be done before we finally release 2.0.0 or 1.5.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20046) Reconsider the implementation for serial replication

2018-02-21 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20046:
--
Summary: Reconsider the implementation for serial replication  (was: 
Reconsider the meta schema change in HBASE-9465)

> Reconsider the implementation for serial replication
> 
>
> Key: HBASE-20046
> URL: https://issues.apache.org/jira/browse/HBASE-20046
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0
>
>
> When implementing HBASE-9465 we added two new families to the meta table: one 
> is rep_meta and the other is rep_position. In general I think rep_meta is OK 
> to put into the meta table since it records the open and close sequence ids 
> for a region, but rep_position, I think, should be put into another system 
> table instead of the meta table.
> This should be done before we finally release 2.0.0 or 1.5.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20046) Reconsider the meta schema change in HBASE-9465

2018-02-21 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372439#comment-16372439
 ] 

Duo Zhang commented on HBASE-20046:
---

Just do not want to mess up meta, since it does not need to be stored in meta I 
think. Also, in HBASE-19397 we have introduced a storage layer for replication, 
and I think rep_position should also go this way. Right now, when updating the 
position, we need to update two places: one is the position for the log file on 
zk, and the other is the sequence id for the region, which is in meta. We need 
to deal with the inconsistency between these two places... So I think we should 
store these two types of positions in one place, so we can make sure that they 
are always consistent: if on zk, use multi; if in a system table, make it not 
splittable and use atomic multi-row mutations...

So maybe we can revert the feature from branch-2 and branch-1, and after we cut 
branch-2.0 I will merge HBASE-19397-branch-2 back to branch-2, reimplement it 
based on the new replication interfaces, and target the feature to 2.1. For 
branch-1, [~yangzhe1991] has already moved on, so I'm afraid no one can take 
care of the code; we'll just give up supporting it on branch-1...

Thanks.
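
To make the consistency concern concrete, the two updates would collapse into
one atomic call if both positions lived in a single storage. A hypothetical
sketch (invented names, not HBase's replication storage API):

{code}
// Hypothetical sketch of the "one place, one atomic update" idea above.
// Today the log-file position (on zk) and the region sequence id (in meta)
// are updated separately and can disagree; a single storage with an atomic
// multi-update removes that window. All names below are invented.
interface ReplicationPositionStorage {
  /**
   * Apply both position updates atomically: either via a zk multi, or via a
   * multi-row mutation on a non-splittable system table.
   */
  void updateAtomically(String peerId, String walName, long walPosition,
      byte[] encodedRegionName, long lastReplicatedSeqId);
}
{code}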

> Reconsider the meta schema change in HBASE-9465
> ---
>
> Key: HBASE-20046
> URL: https://issues.apache.org/jira/browse/HBASE-20046
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0, 1.5.0
>
>
> When implementing HBASE-9465 we added two new families to the meta table: one 
> is rep_meta and the other is rep_position. In general I think rep_meta is OK 
> to put into the meta table since it records the open and close sequence ids 
> for a region, but rep_position, I think, should be put into another system 
> table instead of the meta table.
> This should be done before we finally release 2.0.0 or 1.5.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20047) AuthenticationTokenIdentifier should provide a toString

2018-02-21 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-20047:
---

 Summary: AuthenticationTokenIdentifier should provide a toString
 Key: HBASE-20047
 URL: https://issues.apache.org/jira/browse/HBASE-20047
 Project: HBase
  Issue Type: Improvement
  Components: Usability
Reporter: Sean Busbey


It'd be easier to debug things like MapReduce and Spark jobs if our 
AuthenticationTokenIdentifier provided a toString method.

For comparison, here's an example of a MapReduce job that has both an HDFS 
delegation token and our delegation token:

{code}

18/02/21 20:40:06 INFO mapreduce.JobSubmitter: Kind: HBASE_AUTH_TOKEN, Service: 
92a63bd8-9e00-4c04-ab61-da8e606068e1, Ident: 
(org.apache.hadoop.hbase.security.token.AuthenticationTokenIdentifier@17)
18/02/21 20:40:06 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, 
Service: 172.31.118.118:8020, Ident: (token for some_user: 
HDFS_DELEGATION_TOKEN owner=some_u...@example.com, renewer=yarn, realUser=, 
issueDate=1519274405003, maxDate=1519879205003, sequenceNumber=23, 
masterKeyId=9)

{code}

Stuff in TokenIdentifier is supposed to be public, so we should be fine to dump 
everything, similar to Hadoop's AbstractDelegationTokenIdentifier.
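
For illustration, such a toString might dump the identifier's fields the way 
Hadoop's AbstractDelegationTokenIdentifier does. A sketch only; the field names 
below are assumed from the identifier's wire format, not copied from an actual 
patch:

{code}
// Hypothetical sketch of HBASE-20047: a toString for AuthenticationTokenIdentifier.
// The fields (username, keyId, issueDate, expirationDate, sequenceNumber) are
// assumptions for illustration; shown here in a standalone class for clarity.
public class AuthenticationTokenIdentifierToStringSketch {
  private String username;
  private int keyId;
  private long issueDate;
  private long expirationDate;
  private long sequenceNumber;

  @Override
  public String toString() {
    return "(username=" + username + ", keyId=" + keyId
        + ", issueDate=" + issueDate + ", expirationDate=" + expirationDate
        + ", sequenceNumber=" + sequenceNumber + ")";
  }
}
{code}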



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-13823) Procedure V2: unnecessaery operations on AssignmentManager#recoverTableInDisablingState() and recoverTableInEnablingState()

2018-02-21 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-13823:
-
Summary: Procedure V2: unnecessaery operations on 
AssignmentManager#recoverTableInDisablingState() and 
recoverTableInEnablingState()  (was: Procedure V2: unnecessaery operaions on 
AssignmentManager#recoverTableInDisablingState() and 
recoverTableInEnablingState())

> Procedure V2: unnecessaery operations on 
> AssignmentManager#recoverTableInDisablingState() and 
> recoverTableInEnablingState()
> ---
>
> Key: HBASE-13823
> URL: https://issues.apache.org/jira/browse/HBASE-13823
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-13823-v0.patch, HBASE-13823-v1.patch, 
> HBASE-13823-v2.patch, HBASE-13823-v3.patch, HBASE-13823-v4.patch, 
> HBASE-13823.v5-master.patch
>
>
> AssignmentManager#recoverTableInDisablingState() and 
> AssignmentManager#recoverTableInEnablingState() try to complete unfinished 
> enable/disable table operations. In the past this was necessary, as a master 
> failure could leave a table in a bad state. With HBASE-13211, enable/disable 
> operations are auto-recovered by the Procedure-V2 logic, so those recovery 
> operations are not necessary: we can either remove them or not replay 
> enable/disable operations in the procedure queue.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20045) When running compaction, cache recent blocks.

2018-02-21 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372421#comment-16372421
 ] 

Mike Drob commented on HBASE-20045:
---

What to do when the newly compacted blocks are larger? You're combining cached 
data with non-cached data and creating a new file that is larger than the old 
entry. Am I missing fundamentals of the block cache that make this possible?

> When running compaction, cache recent blocks.
> -
>
> Key: HBASE-20045
> URL: https://issues.apache.org/jira/browse/HBASE-20045
> Project: HBase
>  Issue Type: New Feature
>  Components: BlockCache, Compaction
>Affects Versions: 2.0.0-beta-1
>Reporter: Jean-Marc Spaggiari
>Priority: Major
>
> HBase already allows caching blocks on flush. This is very useful for 
> usecases where most queries are against recent data. However, as soon as 
> there is a compaction, those blocks are evicted. It would be interesting to 
> have a table-level parameter to say "When compacting, cache blocks less than 
> 24 hours old". That way, when running a compaction, all blocks where some data 
> is less than 24h old will be automatically cached. 
>  
> Very useful for table design where there is a TS in the key but a long history 
> (Like a year of sensor data).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20046) Reconsider the meta schema change in HBASE-9465

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372411#comment-16372411
 ] 

stack commented on HBASE-20046:
---

Ouch. We good at getting hbase:meta up. Not so good making sure other tables 
are up. Why you think the new table? Write rates? Being able to split it? 
Thanks.

> Reconsider the meta schema change in HBASE-9465
> ---
>
> Key: HBASE-20046
> URL: https://issues.apache.org/jira/browse/HBASE-20046
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0, 1.5.0
>
>
> When implementing HBASE-9465 we added two new families to the meta table: one 
> is rep_meta and the other is rep_position. In general I think rep_meta is OK 
> to put into the meta table since it records the open and close sequence ids 
> for a region, but rep_position, I think, should be put into another system 
> table instead of the meta table.
> This should be done before we finally release 2.0.0 or 1.5.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20045) When running compaction, cache recent blocks.

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372406#comment-16372406
 ] 

stack commented on HBASE-20045:
---

What you thinking [~jmspaggi] ? We'd have to cache the block from the compacted 
file. We'd map the evicted block locations to new blocks in the compacted file 
and then preemptively cache these? They'd be coming in on first hit anyways but 
you'd like to save on that bit of latency reading in the block from HDFS?

> When running compaction, cache recent blocks.
> -
>
> Key: HBASE-20045
> URL: https://issues.apache.org/jira/browse/HBASE-20045
> Project: HBase
>  Issue Type: New Feature
>  Components: BlockCache, Compaction
>Affects Versions: 2.0.0-beta-1
>Reporter: Jean-Marc Spaggiari
>Priority: Major
>
> HBase already allows caching blocks on flush. This is very useful for 
> usecases where most queries are against recent data. However, as soon as 
> there is a compaction, those blocks are evicted. It would be interesting to 
> have a table-level parameter to say "When compacting, cache blocks less than 
> 24 hours old". That way, when running a compaction, all blocks where some data 
> is less than 24h old will be automatically cached. 
>  
> Very useful for table design where there is a TS in the key but a long history 
> (Like a year of sensor data).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372403#comment-16372403
 ] 

stack commented on HBASE-19767:
---

INFRA doesn't like your patches, [~uagashe]. Does the random fail on them.

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19767:
--
Attachment: hbase-19767.master.001.patch

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20041) cannot start mini mapreduce cluster for ITs

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372400#comment-16372400
 ] 

stack commented on HBASE-20041:
---

+1 Thanks [~mdrob]

> cannot start mini mapreduce cluster for ITs
> ---
>
> Key: HBASE-20041
> URL: https://issues.apache.org/jira/browse/HBASE-20041
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20041.patch
>
>
> We killed a lot of the jersey yarn dependencies, so now we can't start the 
> hadoop3 mini MR cluster. This makes ITs sad.
> Need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372393#comment-16372393
 ] 

Hadoop QA commented on HBASE-19767:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
32s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} hbase-server: The patch generated 0 new + 178 
unchanged - 1 fixed = 178 total (was 179) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 27s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 52s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRegionOpen |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19767 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911479/hbase-19767.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 6c3fe71f3cd9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2440f807bf |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11612/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11612/testReport/ |
| Max. process+thread count | 5222 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11612/console |

[jira] [Commented] (HBASE-17825) Backup: further optimizations

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372384#comment-16372384
 ] 

Hadoop QA commented on HBASE-17825:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
23s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hbase-mapreduce: The patch generated 15 new + 20 
unchanged - 2 fixed = 35 total (was 22) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} hbase-backup: The patch generated 5 new + 0 unchanged 
- 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 6s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m  
1s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  8m  
7s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 
28s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m  5s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 44s{color} 
| {color:red} hbase-mapreduce in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m  
9s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRegionOpen |
|   | hadoop.hbase.mapreduce.TestWALPlayer |
\\
\\
|| Subsystem || 

[jira] [Updated] (HBASE-20045) When running compaction, cache recent blocks.

2018-02-21 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20045:

Issue Type: New Feature  (was: Bug)

> When running compaction, cache recent blocks.
> -
>
> Key: HBASE-20045
> URL: https://issues.apache.org/jira/browse/HBASE-20045
> Project: HBase
>  Issue Type: New Feature
>  Components: BlockCache, Compaction
>Affects Versions: 2.0.0-beta-1
>Reporter: Jean-Marc Spaggiari
>Priority: Major
>
> HBase already allows caching blocks on flush. This is very useful for 
> usecases where most queries are against recent data. However, as soon as 
> there is a compaction, those blocks are evicted. It would be interesting to 
> have a table-level parameter to say "When compacting, cache blocks less than 
> 24 hours old". That way, when running a compaction, all blocks where some data 
> is less than 24h old will be automatically cached. 
>  
> Very useful for table design where there is a TS in the key but a long history 
> (Like a year of sensor data).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20045) When running compaction, cache recent blocks.

2018-02-21 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20045:

Component/s: Compaction
 BlockCache

> When running compaction, cache recent blocks.
> -
>
> Key: HBASE-20045
> URL: https://issues.apache.org/jira/browse/HBASE-20045
> Project: HBase
>  Issue Type: New Feature
>  Components: BlockCache, Compaction
>Affects Versions: 2.0.0-beta-1
>Reporter: Jean-Marc Spaggiari
>Priority: Major
>
> HBase already allows to cache blocks on flush. This is very useful for 
> usecases where most queries are against recent data. However, as soon as 
> their is a compaction, those blocks are evicted. It will be interesting to 
> have a table level parameter to say "When compacting, cache blocks less than 
> 24 hours old". That way, when running compaction, all blocks where some data 
> are less than 24h hold, will be automatically cached. 
>  
> Very useful for table design where there is TS in the key but a long history 
> (Like a year of sensor data).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (HBASE-19761) Fix Checkstyle errors in hbase-zookeeper

2018-02-21 Thread maoling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maoling updated HBASE-19761:

Comment: was deleted

(was: I give this issue a [github pull 
request|https://github.com/apache/hbase/pull/72])

> Fix Checkstyle errors in hbase-zookeeper
> 
>
> Key: HBASE-19761
> URL: https://issues.apache.org/jira/browse/HBASE-19761
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
>
> Fix the remaining Checkstyle errors in the *hbase-zookeeper* module and 
> enable Checkstyle to fail on violations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20041) cannot start mini mapreduce cluster for ITs

2018-02-21 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372364#comment-16372364
 ] 

Mike Drob commented on HBASE-20041:
---

bq. No one depends on hadoop-yarn-server-nodemanager, etc., running against h3 
but hbase-rest?
hbase-rest is the only place where we need to completely kill the 
com.sun.jersey deps; everywhere else can keep the yarn jersey, it doesn't hurt 
anything.

bq. This all to support MR PE against REST? If we killed the latter facility 
would that help?
I haven't tried it yet, but I don't think it will work. All of this gymnastics 
is to get disparate things working - haven't even imagined the intersection yet.

bq. You think the purge of the above from dependency management causes 
HBASE-20043? I've not tried it. Did it work before this change?
I spot checked some older commits and it didn't work there either. Haven't had 
time to run bisect.



> cannot start mini mapreduce cluster for ITs
> ---
>
> Key: HBASE-20041
> URL: https://issues.apache.org/jira/browse/HBASE-20041
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20041.patch
>
>
> We killed a lot of the jersey yarn dependencies, so now we can't start the 
> hadoop3 mini MR cluster. This makes ITs sad.
> Need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20046) Reconsider the meta schema change in HBASE-9465

2018-02-21 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-20046:
-

 Summary: Reconsider the meta schema change in HBASE-9465
 Key: HBASE-20046
 URL: https://issues.apache.org/jira/browse/HBASE-20046
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 2.0.0, 1.5.0


When implementing HBASE-9465 we added two new families to the meta table: one 
is rep_meta and the other is rep_position. In general I think rep_meta is OK 
to put into the meta table since it records the open and close sequence ids 
for a region, but rep_position, I think, should be put into another system 
table instead of the meta table.

This should be done before we finally release 2.0.0 or 1.5.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19364) Truncate_preserve fails with table when replica region > 1

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372357#comment-16372357
 ] 

Hadoop QA commented on HBASE-19364:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-1 passed with JDK v1.8.0_162 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} branch-1 passed with JDK v1.7.0_171 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
53s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} hbase-client in branch-1 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} branch-1 passed with JDK v1.8.0_162 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} branch-1 passed with JDK v1.7.0_171 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_162 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_171 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
28s{color} | {color:red} hbase-client: The patch generated 5 new + 83 unchanged 
- 1 fixed = 88 total (was 84) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
16s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  3m  
4s{color} | {color:red} The patch causes 44 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  3m 
52s{color} | {color:red} The patch causes 44 errors with Hadoop v2.5.2. {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_162 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.7.0_171 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m  1s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} 

[jira] [Commented] (HBASE-20041) cannot start mini mapreduce cluster for ITs

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372355#comment-16372355
 ] 

Hadoop QA commented on HBASE-20041:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 15m 
42s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
42s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}175m 
34s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}229m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20041 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911444/HBASE-20041.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  shadedjars  hadoopcheck  
xml  compile  |
| uname | Linux 66ad60822e5b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 3a3994223c |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11607/testReport/ |
| Max. process+thread count | 5226 (vs. ulimit of 1) |
| modules | C: hbase-rest . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11607/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> cannot start mini mapreduce cluster for ITs
> ---
>
> Key: HBASE-20041
> URL: https://issues.apache.org/jira/browse/HBASE-20041
> Project: HBase
> 

[jira] [Commented] (HBASE-2004) Client javadoc suggestion: add code examples about obtaining historical values

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372347#comment-16372347
 ] 

Hudson commented on HBASE-2004:
---

FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4627 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4627/])
HBASE-20042 TestRegionServerAbort flakey (stack: rev 
3f82098d4b7ae595aa6702d3fb7cc2fac682691b)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerAbort.java
HBASE-2004 TestClientClusterStatus is flakey (stack: rev 
3a3994223c5d634bdd7ef01ef7f31ff860849575)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestClientClusterStatus.java
HBASE-20042 TestRegionServerAbort flakey; ADDENDUM, RETRY (stack: rev 
13223c217ca6cb84a96f3c70b8c38ec19eca729f)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerAbort.java


> Client javadoc suggestion:  add code examples about obtaining historical 
> values
> ---
>
> Key: HBASE-2004
> URL: https://issues.apache.org/jira/browse/HBASE-2004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 0.20.1
>Reporter: Doug Meil
>Priority: Minor
>
> The implicit support HBase provides for versioning of values is very 
> powerful, but it's not all that obvious for application programmers to use it 
> to obtain the historical values.
> I would like to suggest adding some comments and sample code to the Result 
> class (org.apache.hadoop.hbase.client.Result) Javadoc.  I know this seems 
> sort of obvious to people who regularly use HBase, but I think that for new 
> folks having code examples available in Javadoc is helpful because it's "one 
> stop shopping" for documentation (i.e., as opposed to looking at an external 
> writeup).  Arguably, this type of example could also go in the HTable class 
> javadoc.
> e.g.,
> HTable table = new HTable(config, "mytable");
> Scan scan = new Scan();   // no arguments means scan all rows
> scan.setMaxVersions(5);   // setting this to 1 returns only the current version
> ResultScanner rs = table.getScanner(scan);
> for (Iterator<Result> i = rs.iterator(); i.hasNext(); ) {
>   Result r = i.next();
>   // obtains the current value of 'family:column'
>   byte[] b = r.getValue(Bytes.toBytes("family"), Bytes.toBytes("column"));
>   // r.raw() returns both current and historical values
>   KeyValue[] kv = r.raw();
>   for (int j = 0; j < kv.length; j++) {
>     byte[] bv = kv[j].getValue();
>     byte[] bc = kv[j].getColumn();  // returns 'family:column'
>   }
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19391) Calling HRegion#initializeRegionInternals from a region replica can still re-create a region directory

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372349#comment-16372349
 ] 

Hudson commented on HBASE-19391:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4627 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4627/])
HBASE-19391 Calling HRegion#initializeRegionInternals from a region 
(huaxiangsun: rev a0900857c7d58e03a39794e96224bf6213307ce7)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java


> Calling HRegion#initializeRegionInternals from a region replica can still 
> re-create a region directory
> --
>
> Key: HBASE-19391
> URL: https://issues.apache.org/jira/browse/HBASE-19391
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4, 1.4.2
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Major
> Fix For: 1.5.0, 2.0.0-beta-2
>
> Attachments: HBASE-19391.master.v0.patch
>
>
> This is a follow up from HBASE-18024. There is still a chance that attempting 
> to open a region that is not the default region replica can re-create a region 
> directory already GC'd by the CatalogJanitor, causing inconsistencies with hbck.
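
To make the failure mode concrete, here is a minimal sketch of the kind of 
guard this calls for in HRegionFileSystem; the variable names are illustrative, 
not taken from the patch:

{code}
// Illustrative guard: only the default replica owns the region directory on
// disk; a secondary replica must never (re)create a directory that the
// CatalogJanitor has already GC'd.
if (regionInfo.getReplicaId() == RegionInfo.DEFAULT_REPLICA_ID) {
  fs.mkdirs(regionDir);   // the primary replica creates the dir as needed
} else if (!fs.exists(regionDir)) {
  throw new IOException("Missing region dir for secondary replica " + regionInfo);
}
{code}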



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372348#comment-16372348
 ] 

Hudson commented on HBASE-19767:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4627 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4627/])
HBASE-19767 Fix for Master web UI shows negative values for Remaining (stack: 
rev 61b55166bf7fe9edc4e8105f217463ed6e693d17)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionProgress.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch, 
> hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.
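
The arithmetic behind the UI value is a difference of two counters; a minimal 
sketch of the clamp that keeps it non-negative, assuming the CompactionProgress 
counter names from the commit listed above:

{code}
// Sketch: remaining = total - compacted goes negative whenever the total
// estimate lags behind the actual compacted count; clamp it at zero.
public long getRemainingKVs() {
  long remaining = totalCompactingKVs - currentCompactedKVs;
  return Math.max(0, remaining);
}
{code}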



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20038) TestLockProcedure.testTimeout is flakey

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372350#comment-16372350
 ] 

Hudson commented on HBASE-20038:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4627 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4627/])
HBASE-20038 TestLockProcedure.testTimeout is flakey (stack: rev 
b328807d25ec0d2537371dd51d8ad79c841c3cec)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/locking/TestLockProcedure.java


> TestLockProcedure.testTimeout is flakey
> ---
>
> Key: HBASE-20038
> URL: https://issues.apache.org/jira/browse/HBASE-20038
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20038.patch, HBASE-20038.patch, HBASE-20038.patch
>
>
> The test is simple so it is easy to find out the problem.
> {noformat}
> 2018-02-21 04:53:32,230 INFO  [Time-limited test] hbase.ResourceChecker(148): 
> before: master.locking.TestLockProcedure#testTimeout Thread=218, 
> OpenFileDescriptor=853, MaxFileDescriptor=6, SystemLoadAverage=5075, 
> ProcessCount=312, AvailableMemoryMB=5373
> 2018-02-21 04:53:32,234 WARN  [Time-limited test] 
> procedure2.ProcedureTestingUtility(146): Set Kill before store update to: 
> false
> 2018-02-21 04:53:32,278 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(866): Stored pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:32,285 DEBUG [Time-limited test] locking.LockProcedure(195): 
> Heartbeat pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:32,286 DEBUG [PEWorker-1] locking.LockProcedure(312): LOCKED 
> pid=14, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, 
> namespace=namespace, type=EXCLUSIVE
> 2018-02-21 04:53:32,303 DEBUG [Time-limited test] 
> locking.TestLockProcedure(204): Proc id 14 acquired lock.
> 2018-02-21 04:53:32,298 INFO  [PEWorker-1] 
> procedure2.TimeoutExecutorThread(82): ADDED pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE; timeout=2000, timestamp=1519188814298
> 2018-02-21 04:53:33,303 DEBUG [Time-limited test] locking.LockProcedure(195): 
> Heartbeat pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:33,304 DEBUG [Time-limited test] 
> locking.TestLockProcedure(225): Proc id 14 : LOCKED.
> 2018-02-21 04:53:34,299 DEBUG [ProcExecTimeout] locking.LockProcedure(207): 
> Timeout failure ProcedureEvent for pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> 2018-02-21 04:53:34,299 DEBUG [ProcExecTimeout] locking.LockProcedure(210): 
> Calling wake on ProcedureEvent for pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> 2018-02-21 04:53:34,299 INFO  [PEWorker-1] 
> procedure2.TimeoutExecutorThread(82): ADDED pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE; timeout=2000, timestamp=1519188816299
> 2018-02-21 04:53:34,306 DEBUG [Time-limited test] locking.LockProcedure(195): 
> Heartbeat pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:34,306 DEBUG [Time-limited test] 
> locking.TestLockProcedure(225): Proc id 14 : LOCKED.
> 2018-02-21 04:53:34,766 WARN  [HBase-Metrics2-1] impl.MetricsConfig(125): 
> Cannot locate configuration: tried 
> hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
> 2018-02-21 04:53:36,299 DEBUG [ProcExecTimeout] locking.LockProcedure(207): 
> Timeout failure ProcedureEvent for pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> 2018-02-21 04:53:36,299 DEBUG [ProcExecTimeout] locking.LockProcedure(210): 
> Calling wake on ProcedureEvent for pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, 

[jira] [Commented] (HBASE-20042) TestRegionServerAbort flakey

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372346#comment-16372346
 ] 

Hudson commented on HBASE-20042:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4627 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4627/])
HBASE-20042 TestRegionServerAbort flakey (stack: rev 
3f82098d4b7ae595aa6702d3fb7cc2fac682691b)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerAbort.java
HBASE-20042 TestRegionServerAbort flakey; ADDENDUM, RETRY (stack: rev 
13223c217ca6cb84a96f3c70b8c38ec19eca729f)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerAbort.java


> TestRegionServerAbort flakey
> 
>
> Key: HBASE-20042
> URL: https://issues.apache.org/jira/browse/HBASE-20042
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
> Attachments: 
> 0001-HBASE-20042-TestRegionServerAbort-flakey-ADDENDUM-RE.patch, 
> HBASE-20042.branch-2.001.patch
>
>
> Failed with a hang and an index-out-of-bounds in the last 30 runs. The timeout 
> has no logs. The IndexOutOfBounds seems basic... Looking at the logs, all seems 
> to be working... eventually... as it should.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372318#comment-16372318
 ] 

stack commented on HBASE-19767:
---

Retry. I'd applied this patch prematurely so had to revert.

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch, 
> hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19767:
--
Attachment: hbase-19767.master.001.patch

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch, 
> hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20044) TestClientClusterStatus is flakey

2018-02-21 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372314#comment-16372314
 ] 

Chia-Ping Tsai commented on HBASE-20044:


+1

> TestClientClusterStatus is flakey
> -
>
> Key: HBASE-20044
> URL: https://issues.apache.org/jira/browse/HBASE-20044
> Project: HBase
>  Issue Type: Sub-task
>  Components: flakey
>Reporter: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20044.branch-2.001.patch
>
>
> It killed a nightly and failed in the flakey suite. The compare is too 
> sensitive to the slightest variance. Here are two failures... one because the 
> previous test had not finished putting back a Region that had been offlined, 
> and the other because the count of requests was off slightly. Let me make the 
> compare coarser.
> {code}
> Test set: org.apache.hadoop.hbase.TestClientClusterStatus
> ---
> Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.858 s <<< 
> FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
> testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time elapsed: 
> 0.236 s  <<< FAILURE!
> java.lang.AssertionError: 
> expected: Number of backup masters: 2
>   asf903.gq1.ygridcore.net,34661,1519246530655
>   asf903.gq1.ygridcore.net,34734,1519246533133
> Number of live region servers: 4
>   asf903.gq1.ygridcore.net,37432,1519246533632
>   asf903.gq1.ygridcore.net,42964,1519246533554
>   asf903.gq1.ygridcore.net,43699,1519246533376
>   asf903.gq1.ygridcore.net,56911,1519246533711
> Number of dead region servers: 1
>   asf903.gq1.ygridcore.net,57278,1519246533770
> Average load: 0.25
> Number of requests: 17
> Number of regions: 1
> Number of regions in transition: 0> but was: asf903.gq1.ygridcore.net,45687,1519246533030
> Number of backup masters: 2
>   asf903.gq1.ygridcore.net,34661,1519246530655
>   asf903.gq1.ygridcore.net,34734,1519246533133
> Number of live region servers: 4
>   asf903.gq1.ygridcore.net,37432,1519246533632
>   asf903.gq1.ygridcore.net,42964,1519246533554
>   asf903.gq1.ygridcore.net,43699,1519246533376
>   asf903.gq1.ygridcore.net,56911,1519246533711
> Number of dead region servers: 1
>   asf903.gq1.ygridcore.net,57278,1519246533770
> Average load: 0.5
> Number of requests: 17
> Number of regions: 2
> Number of regions in transition: 0>
>   at 
> org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
> {code}
> and 
> {code}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.416 
> s <<< FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
> [ERROR] testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time 
> elapsed: 0.065 s  <<< FAILURE!
> java.lang.AssertionError: 
> expected: Number of backup masters: 2
>   9845c79afe69,35076,1519227086213
>   9845c79afe69,45963,1519227086363
> Number of live region servers: 4
>   9845c79afe69,34709,1519227086571
>   9845c79afe69,34961,1519227086645
>   9845c79afe69,35891,1519227086720
>   9845c79afe69,36139,1519227086486
> Number of dead region servers: 1
>   9845c79afe69,41992,1519227086820
> Average load: 0.5
> Number of requests: 17
> Number of regions: 2
> Number of regions in transition: 0> but was: 9845c79afe69,46509,1519227084385
> Number of backup masters: 2
>   9845c79afe69,35076,1519227086213
>   9845c79afe69,45963,1519227086363
> Number of live region servers: 4
>   9845c79afe69,34709,1519227086571
>   9845c79afe69,34961,1519227086645
>   9845c79afe69,35891,1519227086720
>   9845c79afe69,36139,1519227086486
> Number of dead region servers: 1
>   9845c79afe69,41992,1519227086820
> Average load: 0.5
> Number of requests: 19
> Number of regions: 2
> Number of regions in transition: 0>
>   at 
> org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
> {code}
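
As a concrete reading of "make the compare coarser": instead of asserting on 
the full rendered ClusterStatus (which embeds the average load and request 
counts that drift between tests), compare only the stable fields. A sketch, 
assuming the 2.0-era Admin#getClusterStatus(EnumSet) API; the field choices are 
illustrative:

{code}
// Coarser compare: assert on fields that do not change while neighbouring
// tests are still settling, rather than on the whole rendered status.
ClusterStatus origin = ADMIN.getClusterStatus();
ClusterStatus defaults = ADMIN.getClusterStatus(EnumSet.allOf(Option.class));
assertEquals(origin.getHBaseVersion(), defaults.getHBaseVersion());
assertEquals(origin.getClusterId(), defaults.getClusterId());
assertEquals(origin.getServersSize(), defaults.getServersSize());
assertEquals(origin.getBackupMastersSize(), defaults.getBackupMastersSize());
assertEquals(origin.getDeadServerNames().size(),
    defaults.getDeadServerNames().size());
{code}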



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20044) TestClientClusterStatus is flakey

2018-02-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-20044:
---
Fix Version/s: 2.0.0

> TestClientClusterStatus is flakey
> -
>
> Key: HBASE-20044
> URL: https://issues.apache.org/jira/browse/HBASE-20044
> Project: HBase
>  Issue Type: Sub-task
>  Components: flakey
>Reporter: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20044.branch-2.001.patch
>
>
> It killed a nightly and failed in the flakey suite. The compare is too 
> sensitive to the slightest variance. Here are two failures... one because the 
> previous test had not finished putting back a Region that had been offlined, 
> and the other because the count of requests was off slightly. Let me make the 
> compare coarser.
> {code}
> Test set: org.apache.hadoop.hbase.TestClientClusterStatus
> ---
> Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.858 s <<< 
> FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
> testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time elapsed: 
> 0.236 s  <<< FAILURE!
> java.lang.AssertionError: 
> expected: Number of backup masters: 2
>   asf903.gq1.ygridcore.net,34661,1519246530655
>   asf903.gq1.ygridcore.net,34734,1519246533133
> Number of live region servers: 4
>   asf903.gq1.ygridcore.net,37432,1519246533632
>   asf903.gq1.ygridcore.net,42964,1519246533554
>   asf903.gq1.ygridcore.net,43699,1519246533376
>   asf903.gq1.ygridcore.net,56911,1519246533711
> Number of dead region servers: 1
>   asf903.gq1.ygridcore.net,57278,1519246533770
> Average load: 0.25
> Number of requests: 17
> Number of regions: 1
> Number of regions in transition: 0> but was: asf903.gq1.ygridcore.net,45687,1519246533030
> Number of backup masters: 2
>   asf903.gq1.ygridcore.net,34661,1519246530655
>   asf903.gq1.ygridcore.net,34734,1519246533133
> Number of live region servers: 4
>   asf903.gq1.ygridcore.net,37432,1519246533632
>   asf903.gq1.ygridcore.net,42964,1519246533554
>   asf903.gq1.ygridcore.net,43699,1519246533376
>   asf903.gq1.ygridcore.net,56911,1519246533711
> Number of dead region servers: 1
>   asf903.gq1.ygridcore.net,57278,1519246533770
> Average load: 0.5
> Number of requests: 17
> Number of regions: 2
> Number of regions in transition: 0>
>   at 
> org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
> {code}
> and 
> {code}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.416 
> s <<< FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
> [ERROR] testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time 
> elapsed: 0.065 s  <<< FAILURE!
> java.lang.AssertionError: 
> expected: Number of backup masters: 2
>   9845c79afe69,35076,1519227086213
>   9845c79afe69,45963,1519227086363
> Number of live region servers: 4
>   9845c79afe69,34709,1519227086571
>   9845c79afe69,34961,1519227086645
>   9845c79afe69,35891,1519227086720
>   9845c79afe69,36139,1519227086486
> Number of dead region servers: 1
>   9845c79afe69,41992,1519227086820
> Average load: 0.5
> Number of requests: 17
> Number of regions: 2
> Number of regions in transition: 0> but was: 9845c79afe69,46509,1519227084385
> Number of backup masters: 2
>   9845c79afe69,35076,1519227086213
>   9845c79afe69,45963,1519227086363
> Number of live region servers: 4
>   9845c79afe69,34709,1519227086571
>   9845c79afe69,34961,1519227086645
>   9845c79afe69,35891,1519227086720
>   9845c79afe69,36139,1519227086486
> Number of dead region servers: 1
>   9845c79afe69,41992,1519227086820
> Average load: 0.5
> Number of requests: 19
> Number of regions: 2
> Number of regions in transition: 0>
>   at 
> org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372310#comment-16372310
 ] 

Hadoop QA commented on HBASE-19767:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HBASE-19767 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.7.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-19767 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911475/hbase-19767.master.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11611/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20016) TestCatalogJanitorInMemoryStates#testInMemoryForReplicaParentCleanup is flaky

2018-02-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-20016:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~yuzhih...@gmail.com] for the reviews.

> TestCatalogJanitorInMemoryStates#testInMemoryForReplicaParentCleanup is flaky
> -
>
> Key: HBASE-20016
> URL: https://issues.apache.org/jira/browse/HBASE-20016
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 1.5.0, 1.4.3
>
> Attachments: HBASE-20016.branch-1.v0.patch.patch, 
> HBASE-20016.branch-1.v1.patch.patch
>
>
> It is a time-based test. RegionStates#isRegionOnline will return false if the 
> target region is in transition. The list of region assignments may not be 
> updated yet.
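
One way to take the time dependence out, sketched under the assumption that the 
test can reach the master's AssignmentManager (hri stands for the replica's 
HRegionInfo):

{code}
// Poll with a deadline instead of sleeping a fixed interval: the assignment
// list is updated eventually, we just don't know when on a loaded machine.
long deadline = System.currentTimeMillis() + 60_000;
while (!master.getAssignmentManager().getRegionStates().isRegionOnline(hri)) {
  if (System.currentTimeMillis() > deadline) {
    fail("Region " + hri + " still not online after 60s");
  }
  Thread.sleep(100);
}
{code}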



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19767:
--
Attachment: hbase-19767.master.001.patch

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch, hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20038) TestLockProcedure.testTimeout is flakey

2018-02-21 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372304#comment-16372304
 ] 

Duo Zhang commented on HBASE-20038:
---

The TIMEOUT is 2 seconds. Since the ASF build machines are usually slow, I 
prefer using a longer sleep time for safety...
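
For illustration, the kind of margin being suggested, with constant names 
invented here rather than taken from the patch:

{code}
// Sketch: derive the sleep from the lock timeout instead of hard-coding a
// value close to it, so a slow ASF executor still gets past the expiry.
static final int LOCK_TIMEOUT_MS = 2000;                // the test's lock timeout
static final int EXPIRY_SLEEP_MS = 4 * LOCK_TIMEOUT_MS; // generous safety margin

void awaitLockExpiry() throws InterruptedException {
  Thread.sleep(EXPIRY_SLEEP_MS);  // the lock must have expired by now
}
{code}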

> TestLockProcedure.testTimeout is flakey
> ---
>
> Key: HBASE-20038
> URL: https://issues.apache.org/jira/browse/HBASE-20038
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20038.patch, HBASE-20038.patch, HBASE-20038.patch
>
>
> The test is simple so it is easy to find out the problem.
> {noformat}
> 2018-02-21 04:53:32,230 INFO  [Time-limited test] hbase.ResourceChecker(148): 
> before: master.locking.TestLockProcedure#testTimeout Thread=218, 
> OpenFileDescriptor=853, MaxFileDescriptor=6, SystemLoadAverage=5075, 
> ProcessCount=312, AvailableMemoryMB=5373
> 2018-02-21 04:53:32,234 WARN  [Time-limited test] 
> procedure2.ProcedureTestingUtility(146): Set Kill before store update to: 
> false
> 2018-02-21 04:53:32,278 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(866): Stored pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:32,285 DEBUG [Time-limited test] locking.LockProcedure(195): 
> Heartbeat pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:32,286 DEBUG [PEWorker-1] locking.LockProcedure(312): LOCKED 
> pid=14, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, 
> namespace=namespace, type=EXCLUSIVE
> 2018-02-21 04:53:32,303 DEBUG [Time-limited test] 
> locking.TestLockProcedure(204): Proc id 14 acquired lock.
> 2018-02-21 04:53:32,298 INFO  [PEWorker-1] 
> procedure2.TimeoutExecutorThread(82): ADDED pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE; timeout=2000, timestamp=1519188814298
> 2018-02-21 04:53:33,303 DEBUG [Time-limited test] locking.LockProcedure(195): 
> Heartbeat pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:33,304 DEBUG [Time-limited test] 
> locking.TestLockProcedure(225): Proc id 14 : LOCKED.
> 2018-02-21 04:53:34,299 DEBUG [ProcExecTimeout] locking.LockProcedure(207): 
> Timeout failure ProcedureEvent for pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> 2018-02-21 04:53:34,299 DEBUG [ProcExecTimeout] locking.LockProcedure(210): 
> Calling wake on ProcedureEvent for pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> 2018-02-21 04:53:34,299 INFO  [PEWorker-1] 
> procedure2.TimeoutExecutorThread(82): ADDED pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE; timeout=2000, timestamp=1519188816299
> 2018-02-21 04:53:34,306 DEBUG [Time-limited test] locking.LockProcedure(195): 
> Heartbeat pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:34,306 DEBUG [Time-limited test] 
> locking.TestLockProcedure(225): Proc id 14 : LOCKED.
> 2018-02-21 04:53:34,766 WARN  [HBase-Metrics2-1] impl.MetricsConfig(125): 
> Cannot locate configuration: tried 
> hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
> 2018-02-21 04:53:36,299 DEBUG [ProcExecTimeout] locking.LockProcedure(207): 
> Timeout failure ProcedureEvent for pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> 2018-02-21 04:53:36,299 DEBUG [ProcExecTimeout] locking.LockProcedure(210): 
> Calling wake on ProcedureEvent for pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> 2018-02-21 04:53:36,299 INFO  [PEWorker-1] 
> 

[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372302#comment-16372302
 ] 

Hadoop QA commented on HBASE-19767:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
53s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} hbase-server: The patch generated 0 new + 178 
unchanged - 1 fixed = 178 total (was 179) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
44s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 28s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}132m 45s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestAsyncTableBatch |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19767 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911438/hbase-19767.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux d922eab9ea09 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 3f82098d4b |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11606/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11606/testReport/ |
| Max. process+thread count | 5102 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11606/console 

[jira] [Commented] (HBASE-20027) Add test TestClusterPortAssignment

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372299#comment-16372299
 ] 

Hudson commented on HBASE-20027:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4626 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4626/])
HBASE-20027 Add test TestClusterPortAssignment (apurtell: rev 
173a5bf1f1ff2f60ea9ef92bdef7a3026597ec0d)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestClusterPortAssignment.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java


> Add test TestClusterPortAssignment
> --
>
> Key: HBASE-20027
> URL: https://issues.apache.org/jira/browse/HBASE-20027
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.4.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 2.0.0, 1.5.0, 1.4.2
>
> Attachments: HBASE-20027-branch-1.patch, HBASE-20027.patch
>
>
> Port assignments for master ports in site configuration appear to be ignored.
> We are not catching this in tests because there appears to be no positive 
> test for port assignment and the only fixed information we require is the 
> zookeeper quorum and client port. 
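
A sketch of the missing positive test: the configuration keys are the real 
HConstants names, while the port numbers and cluster variable are placeholders:

{code}
// Configure fixed ports up front, then assert that the started cluster
// actually bound them instead of silently falling back to random ports.
Configuration conf = HBaseConfiguration.create();
conf.setInt(HConstants.MASTER_PORT, 16000);
conf.setInt(HConstants.MASTER_INFO_PORT, 16010);
conf.setInt(HConstants.REGIONSERVER_PORT, 16020);
// ... start a MiniHBaseCluster with conf ...
assertEquals(16000, cluster.getMaster().getServerName().getPort());
{code}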



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20031) Unable to run integration test using mvn due to missing HBaseClassTestRule

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372296#comment-16372296
 ] 

Hudson commented on HBASE-20031:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4626 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4626/])
HBASE-20031 Unable to run integration test using mvn due to missing (tedyu: rev 
401227ba6aeb3c7767e88d41a7fa2990b3717648)
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseClassTestRule.java
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseClassTestRuleChecker.java
* (edit) src/main/asciidoc/_chapters/developer.adoc


> Unable to run integration test using mvn due to missing HBaseClassTestRule
> --
>
> Key: HBASE-20031
> URL: https://issues.apache.org/jira/browse/HBASE-20031
> Project: HBase
>  Issue Type: Test
>Affects Versions: 2.0.0-beta-1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 20031.v1.txt, 20031.v2.txt, 20031.v3.txt
>
>
> In branch-1, the following command works:
> {code}
> mvn test -Dtest=org.apache.hadoop.hbase.IntegrationTestIngest
> {code}
> For hbase2, we have the following error:
> {code}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.249 
> s <<< FAILURE! - in org.apache.hadoop.hbase.IntegrationTestIngest
> [ERROR] org.apache.hadoop.hbase.IntegrationTestIngest  Time elapsed: 0.01 s  
> <<< FAILURE!
> java.lang.AssertionError: No HBaseClassTestRule ClassRule for 
> org.apache.hadoop.hbase.IntegrationTestIngest
> {code}
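
For context, the assertion above is raised by HBaseClassTestRuleChecker when a 
test class lacks the standard class rule; this is the shape it looks for:

{code}
// Every hbase2 test class carries this rule so the category/timeout
// machinery can find it; without it the run fails as shown above.
@ClassRule
public static final HBaseClassTestRule CLASS_RULE =
    HBaseClassTestRule.forClass(IntegrationTestIngest.class);
{code}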



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19166) AsyncProtobufLogWriter persists ProtobufLogWriter as class name for backward compatibility

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372298#comment-16372298
 ] 

Hudson commented on HBASE-19166:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4626 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4626/])
HBASE-19166 AsyncProtobufLogWriter persists ProtobufLogWriter as class (tedyu: 
rev bf5f034463d357f31e2c7d02c6477c2fcd93d7f4)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncProtobufLogWriter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureAsyncProtobufLogWriter.java


> AsyncProtobufLogWriter persists ProtobufLogWriter as class name for backward 
> compatibility
> --
>
> Key: HBASE-19166
> URL: https://issues.apache.org/jira/browse/HBASE-19166
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 2.0.0-beta-2
>
> Attachments: 19166-async-log-writer.v1.txt, 
> 19166-async-log-writer.v2.txt
>
>
> For an hlog generated by 2.x, log splitting from hbase1 would result in:
> {code}
> 1134720 2018-02-13 10:43:57,590 WARN  [RS_LOG_REPLAY_OPS-ve0530:16020-0] 
> regionserver.SplitLogWorker: log splitting of 
> WALs/ve0534.halxg.cloudera.com,16020,1518546984742-splitting/ve0534.halxg.cloudera.com%2C16020%2C1518546984742.meta.1518546993545.meta
>  failed, returning error
> 1134721 java.io.IOException: Got unknown writer class: AsyncProtobufLogWriter
> 1134722   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:220)
> 1134723   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initReader(ProtobufLogReader.java:169)
> 1134724   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:66)
> 1134725   at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:164)
> 1134726   at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:303)
> 1134727   at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
> 1134728   at 
> org.apache.hadoop.hbase.wal.WALSplitter.getReader(WALSplitter.java:853)
> 1134729   at 
> org.apache.hadoop.hbase.wal.WALSplitter.getReader(WALSplitter.java:777)
> 1134730   at 
> org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:298)
> 1134731   at 
> org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:236)
> {code}
> AsyncProtobufLogWriter didn't change the WAL format and hence can use 
> ProtobufLogWriter as the persisted class name, so that we avoid the above 
> during rolling upgrade.
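
A minimal sketch of that idea, assuming the writer hierarchy exposes a hook for 
the class name written into the WAL header (the method name is illustrative):

{code}
// In AsyncProtobufLogWriter: the bytes on disk are identical, so advertise
// the legacy class name and 1.x readers keep working during rolling upgrade.
@Override
protected String getWriterClassName() {
  return "ProtobufLogWriter";
}
{code}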



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20039) move testhbasetestingutility mr tests to hbase-mapreduce

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372297#comment-16372297
 ] 

Hudson commented on HBASE-20039:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4626 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4626/])
HBASE-20039 MR tests out to hbase-mapreduce mobile (mdrob: rev 
5d994a24fc570ee5156df0d1356cb7b12305fb78)
* (add) 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHBaseMRTestingUtility.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java


> move testhbasetestingutility mr tests to hbase-mapreduce
> 
>
> Key: HBASE-20039
> URL: https://issues.apache.org/jira/browse/HBASE-20039
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20039.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19953) Avoid calling post* hook when procedure fails

2018-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372295#comment-16372295
 ] 

Hudson commented on HBASE-19953:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4626 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4626/])
HBASE-19953 Ensure post DDL hooks are only called after successful (elserj: rev 
d9b8dcc1d300ae114febc22dbc71866088387111)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/AbstractStateMachineNamespaceProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteNamespaceProcedure.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ModifyNamespaceProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterSchema.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterSchemaServiceImpl.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/CreateNamespaceProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ProcedurePrepareLatch.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestMasterObserverPostCalls.java


> Avoid calling post* hook when procedure fails
> -
>
> Key: HBASE-19953
> URL: https://issues.apache.org/jira/browse/HBASE-19953
> Project: HBase
>  Issue Type: Bug
>  Components: master, proc-v2
>Reporter: Ramesh Mani
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19952.001.branch-2.patch, 
> HBASE-19953.002.branch-2.patch, HBASE-19953.003.branch-2.patch
>
>
> Ramesh pointed out a case where I think we're mishandling some post\* 
> MasterObserver hooks. Specifically, I'm looking at the deleteNamespace.
> We synchronously execute the DeleteNamespace procedure. When the user 
> provides a namespace that isn't empty, the procedure does a rollback (which 
> is just a no-op), but this doesn't propagate an exception up to the 
> NonceProcedureRunnable in {{HMaster#deleteNamespace}}. It took Ramesh 
> pointing it out for me to see that the code executes a bit differently than 
> we actually expect.
> I think we need to double-check our post hooks and make sure we aren't 
> invoking them when the procedure actually failed. cc/ [~Apache9], [~stack].
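
The ordering being argued for, sketched with hypothetical helper names 
(submitAndWait, procedureFailed, toIOException) since the exact plumbing lives 
in HMaster and the procedure latch:

{code}
// Check the procedure outcome first; only a clean completion reaches the
// coprocessor post hook, while a failure propagates to the client instead.
long procId = submitAndWait(new DeleteNamespaceProcedure(env, namespaceName));
if (procedureFailed(procId)) {
  throw toIOException(procId);   // postDeleteNamespace is never invoked
}
cpHost.postDeleteNamespace(namespaceName);
{code}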



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20038) TestLockProcedure.testTimeout is flakey

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372291#comment-16372291
 ] 

stack commented on HBASE-20038:
---

Pushed to master and branch-2. This just failed in a nightly. [~Apache9], see 
[~busbey]'s comment above, sir. Leaving open for a while to see if this fixes 
things, and in case an addendum is needed.

> TestLockProcedure.testTimeout is flakey
> ---
>
> Key: HBASE-20038
> URL: https://issues.apache.org/jira/browse/HBASE-20038
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20038.patch, HBASE-20038.patch, HBASE-20038.patch
>
>
> The test is simple so it is easy to find out the problem.
> {noformat}
> 2018-02-21 04:53:32,230 INFO  [Time-limited test] hbase.ResourceChecker(148): 
> before: master.locking.TestLockProcedure#testTimeout Thread=218, 
> OpenFileDescriptor=853, MaxFileDescriptor=6, SystemLoadAverage=5075, 
> ProcessCount=312, AvailableMemoryMB=5373
> 2018-02-21 04:53:32,234 WARN  [Time-limited test] 
> procedure2.ProcedureTestingUtility(146): Set Kill before store update to: 
> false
> 2018-02-21 04:53:32,278 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(866): Stored pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:32,285 DEBUG [Time-limited test] locking.LockProcedure(195): 
> Heartbeat pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:32,286 DEBUG [PEWorker-1] locking.LockProcedure(312): LOCKED 
> pid=14, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, 
> namespace=namespace, type=EXCLUSIVE
> 2018-02-21 04:53:32,303 DEBUG [Time-limited test] 
> locking.TestLockProcedure(204): Proc id 14 acquired lock.
> 2018-02-21 04:53:32,298 INFO  [PEWorker-1] 
> procedure2.TimeoutExecutorThread(82): ADDED pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE; timeout=2000, timestamp=1519188814298
> 2018-02-21 04:53:33,303 DEBUG [Time-limited test] locking.LockProcedure(195): 
> Heartbeat pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:33,304 DEBUG [Time-limited test] 
> locking.TestLockProcedure(225): Proc id 14 : LOCKED.
> 2018-02-21 04:53:34,299 DEBUG [ProcExecTimeout] locking.LockProcedure(207): 
> Timeout failure ProcedureEvent for pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> 2018-02-21 04:53:34,299 DEBUG [ProcExecTimeout] locking.LockProcedure(210): 
> Calling wake on ProcedureEvent for pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> 2018-02-21 04:53:34,299 INFO  [PEWorker-1] 
> procedure2.TimeoutExecutorThread(82): ADDED pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE; timeout=2000, timestamp=1519188816299
> 2018-02-21 04:53:34,306 DEBUG [Time-limited test] locking.LockProcedure(195): 
> Heartbeat pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE
> 2018-02-21 04:53:34,306 DEBUG [Time-limited test] 
> locking.TestLockProcedure(225): Proc id 14 : LOCKED.
> 2018-02-21 04:53:34,766 WARN  [HBase-Metrics2-1] impl.MetricsConfig(125): 
> Cannot locate configuration: tried 
> hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
> 2018-02-21 04:53:36,299 DEBUG [ProcExecTimeout] locking.LockProcedure(207): 
> Timeout failure ProcedureEvent for pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=WAITING_TIMEOUT; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE]
> 2018-02-21 04:53:36,299 DEBUG [ProcExecTimeout] locking.LockProcedure(210): 
> Calling wake on ProcedureEvent for pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> type=EXCLUSIVE, ready=false, [pid=14, state=RUNNABLE; 
> org.apache.hadoop.hbase.master.locking.LockProcedure, namespace=namespace, 
> 

[jira] [Created] (HBASE-20045) When running compaction, cache recent blocks.

2018-02-21 Thread Jean-Marc Spaggiari (JIRA)
Jean-Marc Spaggiari created HBASE-20045:
---

 Summary: When running compaction, cache recent blocks.
 Key: HBASE-20045
 URL: https://issues.apache.org/jira/browse/HBASE-20045
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0-beta-1
Reporter: Jean-Marc Spaggiari


HBase already allows caching blocks on flush. This is very useful for usecases 
where most queries are against recent data. However, as soon as there is a 
compaction, those blocks are evicted. It would be interesting to have a 
table-level parameter to say "When compacting, cache blocks less than 24 hours 
old". That way, when running a compaction, all blocks containing data less 
than 24 hours old would be cached automatically.

 

Very useful for table designs where there is a timestamp in the key but a long 
history (like a year of sensor data).
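A minimal sketch of how such a knob could be set, assuming a hypothetical 
attribute name (this issue only proposes the idea; nothing below is an 
existing HBase parameter):

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// "COMPACTION_CACHE_BLOCKS_MAX_AGE" is invented here for illustration.
TableDescriptor td = TableDescriptorBuilder
    .newBuilder(TableName.valueOf("sensors"))
    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
    // Cache any block written by a compaction that contains a cell
    // younger than 24 hours (value in milliseconds).
    .setValue("COMPACTION_CACHE_BLOCKS_MAX_AGE",
        String.valueOf(24L * 60 * 60 * 1000))
    .build();
admin.createTable(td); // admin: an org.apache.hadoop.hbase.client.Admin in scope
{code}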



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-17825) Backup: further optimizations

2018-02-21 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-17825:
--
Attachment: HBASE-17825-v3.patch

> Backup: further optimizations
> -
>
> Key: HBASE-17825
> URL: https://issues.apache.org/jira/browse/HBASE-17825
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Critical
>  Labels: backup
> Fix For: 3.0.0
>
> Attachments: HBASE-17825-v1.patch, HBASE-17825-v2.patch, 
> HBASE-17825-v3.patch
>
>
> Some phases of backup and restore can be optimized:
> # WALPlayer support for multiple tables
> # Run DistCp once per all tables during backup/restore
> The eventual goal:
> # 2 M/R jobs per backup/restore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19553) Old replica regions should be cleared from AM memory after primary region split or merge

2018-02-21 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19553:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed v4 to branch-1, resolving it. This does not apply to branch-2+ with 
AMv2. Thanks [~pankaj2461] for the patch.

> Old replica regions should be cleared from AM memory after primary region 
> split or merge
> 
>
> Key: HBASE-19553
> URL: https://issues.apache.org/jira/browse/HBASE-19553
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: huaxiang sun
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 1.5.0
>
> Attachments: HBASE-19553-branch-1-v2.patch, 
> HBASE-19553-branch-1-v3.patch, HBASE-19553-branch-1-v4.patch, 
> HBASE-19553-branch-1-v4.patch, HBASE-19553-branch-1.patch
>
>
> Similar to HBASE-18025, the replica parent's info is not removed from the 
> master. I think it can be removed after the replica region is split or 
> merged; I will check the logic and apply a patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19364) Truncate_preserve fails with table when replica region > 1

2018-02-21 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372272#comment-16372272
 ] 

huaxiang sun commented on HBASE-19364:
--

Reattaching the patch to re-trigger the checkstyle and findbugs checks.

> Truncate_preserve fails with table when replica region > 1
> --
>
> Key: HBASE-19364
> URL: https://issues.apache.org/jira/browse/HBASE-19364
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-19364-branch-1.patch, HBASE-19364-branch-1.patch
>
>
> Root cause is the same as HBASE-17319; here we need to exclude secondary regions 
> while reading meta.
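A rough sketch of the filter the fix implies, assuming a list of HRegionInfo 
read from meta (illustrative only; the actual change is in the attached 
branch-1 patches):

{code:java}
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.client.RegionReplicaUtil;

// Keep only primary (default replica) regions when re-creating the table;
// secondary replicas have no rows of their own to preserve.
for (HRegionInfo hri : regionsFromMeta) { // regionsFromMeta: assumed input
  if (!RegionReplicaUtil.isDefaultReplica(hri)) {
    continue; // skip secondary replica entries
  }
  // ... process the primary region ...
}
{code}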



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19364) Truncate_preserve fails with table when replica region > 1

2018-02-21 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19364:
-
Attachment: HBASE-19364-branch-1.patch

> Truncate_preserve fails with table when replica region > 1
> --
>
> Key: HBASE-19364
> URL: https://issues.apache.org/jira/browse/HBASE-19364
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-19364-branch-1.patch, HBASE-19364-branch-1.patch
>
>
> Root cause is the same as HBASE-17319; here we need to exclude secondary regions 
> while reading meta.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-19391) Calling HRegion#initializeRegionInternals from a region replica can still re-create a region directory

2018-02-21 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun resolved HBASE-19391.
--
Resolution: Fixed

Resolving, as the fix was pushed to branch-2.

> Calling HRegion#initializeRegionInternals from a region replica can still 
> re-create a region directory
> --
>
> Key: HBASE-19391
> URL: https://issues.apache.org/jira/browse/HBASE-19391
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4, 1.4.2
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Major
> Fix For: 1.5.0, 2.0.0-beta-2
>
> Attachments: HBASE-19391.master.v0.patch
>
>
> This is a follow up from HBASE-18024. There is still a chance that attempting 
> to open a region that is not the default region replica can re-create a 
> region directory already GC'd by the CatalogJanitor, causing inconsistencies 
> with hbck.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20035) .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and RuntimeExceptions

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372263#comment-16372263
 ] 

Hadoop QA commented on HBASE-20035:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 4s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}108m  
3s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-20035 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911436/HBASE-20035.001.branch-2.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 7d48724cb4f4 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / baec532aa2 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11605/testReport/ |
| Max. process+thread count | 5504 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11605/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and 
> RuntimeExceptions
> 

[jira] [Updated] (HBASE-19391) Calling HRegion#initializeRegionInternals from a region replica can still re-create a region directory

2018-02-21 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19391:
-
Fix Version/s: 2.0.0-beta-2
   1.5.0

> Calling HRegion#initializeRegionInternals from a region replica can still 
> re-create a region directory
> --
>
> Key: HBASE-19391
> URL: https://issues.apache.org/jira/browse/HBASE-19391
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4, 1.4.2
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Major
> Fix For: 1.5.0, 2.0.0-beta-2
>
> Attachments: HBASE-19391.master.v0.patch
>
>
> This is a follow up from HBASE-18024. There is still a chance that attempting 
> to open a region that is not the default region replica can re-create a 
> region directory already GC'd by the CatalogJanitor, causing inconsistencies 
> with hbck.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19391) Calling HRegion#initializeRegionInternals from a region replica can still re-create a region directory

2018-02-21 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19391:
-
Affects Version/s: 2.0.0-alpha-4
   1.4.2

> Calling HRegion#initializeRegionInternals from a region replica can still 
> re-create a region directory
> --
>
> Key: HBASE-19391
> URL: https://issues.apache.org/jira/browse/HBASE-19391
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4, 1.4.2
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Major
> Attachments: HBASE-19391.master.v0.patch
>
>
> This is a follow up from HBASE-18024. There is still a chance that attempting 
> to open a region that is not the default region replica can re-create a 
> region directory already GC'd by the CatalogJanitor, causing inconsistencies 
> with hbck.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HBASE-19391) Calling HRegion#initializeRegionInternals from a region replica can still re-create a region directory

2018-02-21 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun reopened HBASE-19391:
--

Reopening to push the fix to branch-2.

> Calling HRegion#initializeRegionInternals from a region replica can still 
> re-create a region directory
> --
>
> Key: HBASE-19391
> URL: https://issues.apache.org/jira/browse/HBASE-19391
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4, 1.4.2
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Major
> Attachments: HBASE-19391.master.v0.patch
>
>
> This is a follow up from HBASE-18024. There is still a chance that attempting 
> to open a region that is not the default region replica can re-create a 
> region directory already GC'd by the CatalogJanitor, causing inconsistencies 
> with hbck.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18133) Low-latency space quota size reports

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372256#comment-16372256
 ] 

Hadoop QA commented on HBASE-18133:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} hbase-hadoop2-compat: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
13s{color} | {color:red} hbase-server: The patch generated 17 new + 580 
unchanged - 3 fixed = 597 total (was 583) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
33s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m 27s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  4s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestBlocksScanned |
|   | hadoop.hbase.coprocessor.TestCoprocessorInterface |
|   | hadoop.hbase.quotas.TestRegionSizeStoreImpl |
|   | hadoop.hbase.quotas.TestRegionSizeImpl |
|   | hadoop.hbase.filter.TestColumnPrefixFilter |
|   | hadoop.hbase.filter.TestFilterFromRegionSide |
|   | hadoop.hbase.regionserver.TestScanner |
|   | hadoop.hbase.filter.TestDependentColumnFilter |
|   | hadoop.hbase.regionserver.TestResettingCounters 

[jira] [Updated] (HBASE-19391) Calling HRegion#initializeRegionInternals from a region replica can still re-create a region directory

2018-02-21 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19391:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed the patch to master and branch-1. Thanks [~esteban] for the patch.

> Calling HRegion#initializeRegionInternals from a region replica can still 
> re-create a region directory
> --
>
> Key: HBASE-19391
> URL: https://issues.apache.org/jira/browse/HBASE-19391
> Project: HBase
>  Issue Type: Bug
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>Priority: Major
> Attachments: HBASE-19391.master.v0.patch
>
>
> This is a follow up from HBASE-18024. There is still a chance that attempting 
> to open a region that is not the default region replica can re-create a 
> region directory already GC'd by the CatalogJanitor, causing inconsistencies 
> with hbck.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20038) TestLockProcedure.testTimeout is flakey

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372220#comment-16372220
 ] 

Hadoop QA commented on HBASE-20038:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
46s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
16s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
21m 15s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}115m 
48s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20038 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911423/HBASE-20038.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux dad347901de9 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 5d994a24fc |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11604/testReport/ |
| Max. process+thread count | 5346 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11604/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> TestLockProcedure.testTimeout is flakey
> ---
>
> Key: HBASE-20038
> URL: 

[jira] [Commented] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region

2018-02-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372215#comment-16372215
 ] 

Andrew Purtell commented on HBASE-20001:


This is going to miss 1.4.2 but certainly should go into 1.4.3 next month.

> cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
> -
>
> Key: HBASE-20001
> URL: https://issues.apache.org/jira/browse/HBASE-20001
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.1.7
>Reporter: Francis Liu
>Assignee: Thiruvel Thirumoolan
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20001.branch-1.4.001.patch, 
> HBASE-20001.branch-1.4.002.patch
>
>
> In RegionStates.cleanIfNoMetaEntry():
> {{if (MetaTableAccessor.getRegion(server.getConnection(), hri.getEncodedNameAsBytes()) == null) {}}
> {{  regionOffline(hri);}}
> {{  FSUtils.deleteRegionDir(server.getConfiguration(), hri);}}
> {{}}}
> But the API expects the full region name:
> {{public static Pair<HRegionInfo, ServerName> getRegion(Connection connection, byte[] regionName)}}
> So we might end up cleaning good regions.
>  
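For clarity, a sketch of the corrected lookup, passing the full region name 
instead of the encoded name (see the attached patches for the actual change):

{code:java}
// getRegion() expects the full region name, so pass hri.getRegionName()
// rather than hri.getEncodedNameAsBytes().
if (MetaTableAccessor.getRegion(server.getConnection(),
    hri.getRegionName()) == null) {
  regionOffline(hri);
  FSUtils.deleteRegionDir(server.getConfiguration(), hri);
}
{code}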



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20001) cleanIfNoMetaEntry() uses encoded instead of region name to lookup region

2018-02-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372215#comment-16372215
 ] 

Andrew Purtell edited comment on HBASE-20001 at 2/21/18 11:35 PM:
--

This is going to miss 1.4.2 but certainly should go into 1.4.3 next month.


was (Author: apurtell):
This is going to miss 1.4.2 but can certainly should go in to 1.4.3 next month.

> cleanIfNoMetaEntry() uses encoded instead of region name to lookup region
> -
>
> Key: HBASE-20001
> URL: https://issues.apache.org/jira/browse/HBASE-20001
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.1.7
>Reporter: Francis Liu
>Assignee: Thiruvel Thirumoolan
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20001.branch-1.4.001.patch, 
> HBASE-20001.branch-1.4.002.patch
>
>
> In RegionStates.cleanIfNoMetaEntry():
> {{if (MetaTableAccessor.getRegion(server.getConnection(), hri.getEncodedNameAsBytes()) == null) {}}
> {{  regionOffline(hri);}}
> {{  FSUtils.deleteRegionDir(server.getConfiguration(), hri);}}
> {{}}}
> But the API expects the full region name:
> {{public static Pair<HRegionInfo, ServerName> getRegion(Connection connection, byte[] regionName)}}
> So we might end up cleaning good regions.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20035) .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and RuntimeExceptions

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372216#comment-16372216
 ] 

stack commented on HBASE-20035:
---

Do we have a bloat problem in hbase2? [~elserj]

> .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and 
> RuntimeExceptions
> -
>
> Key: HBASE-20035
> URL: https://issues.apache.org/jira/browse/HBASE-20035
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Josh Elser
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20035.001.branch-2.patch
>
>
> It failed the nightly.
> Says this...
> Error Message
> Waiting timed out after [30,000] msec
> Stacktrace
> java.lang.AssertionError: Waiting timed out after [30,000] msec
>   at 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(TestQuotaStatusRPCs.java:267)
> ... but looking in log I see following:
> The odd thing is the test is run three times, and it failed all three times 
> for the same reason.
> [ERROR] Failures: 
> [ERROR] 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs)
> [ERROR]   Run 1: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 2: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 3: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> If you go to the build artifacts you can download the full -output.txt log. 
> I see stuff like this, which might be ok:
> {code}
> 2018-02-21 01:29:59,546 INFO  
> [StoreCloserThread-testQuotaStatusFromMaster4,0,1519176558800.1dbd00f38915cd276410065f85140b26.-1]
>  regionserver.HStore(930): Closed f1
> 2018-02-21 01:29:59,551 ERROR [master/ad51e354307e:0.Chore.2] 
> hbase.ScheduledChore(189): Caught error
> java.lang.RuntimeException: java.util.concurrent.RejectedExecutionException: 
> Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
>   at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.next(QuotaRetriever.java:106)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever$Iter.(QuotaRetriever.java:125)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.iterator(QuotaRetriever.java:117)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.fetchAllTablesWithQuotasDefined(QuotaObserverChore.java:458)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore._chore(QuotaObserverChore.java:148)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.chore(QuotaObserverChore.java:136)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
>   at 
> 

[jira] [Commented] (HBASE-20012) Backport filesystem quotas (HBASE-16961) to branch-1

2018-02-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372214#comment-16372214
 ] 

Josh Elser commented on HBASE-20012:


[~apurtell], just an FYI in case you don't see it: this backport reminded me 
about HBASE-18133 and HBASE-18135. Just put a rebased patch for 18133, and will 
do the same for 18135 soon. They're both implementation improvements (not 
user-facing), so they'll probably miss 2.0.x, but will go into a 2.1.x 
eventually.

> Backport filesystem quotas (HBASE-16961) to branch-1
> 
>
> Key: HBASE-20012
> URL: https://issues.apache.org/jira/browse/HBASE-20012
> Project: HBase
>  Issue Type: New Feature
>Reporter: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
>
> Filesystem quotas (HBASE-16961) is an experimental feature committed to 
> branch-2 and up. We are thinking about chargeback and share-back models at 
> work and this begins to look compelling. I wish this meant we'd give HBase 2 
> a spin, but that's unfortunately not realistic. It is very likely we 
> will want to make use of this before we are up on HBase 2. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18133) Low-latency space quota size reports

2018-02-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372212#comment-16372212
 ] 

Josh Elser commented on HBASE-18133:


Rebased this old patch and cleaned up all of the deprecated API usages (e.g. 
HRegionInfo, HTableDescriptor).

> Low-latency space quota size reports
> 
>
> Key: HBASE-18133
> URL: https://issues.apache.org/jira/browse/HBASE-18133
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18133.001.patch, HBASE-18133.002.patch, 
> HBASE-18133.003.patch
>
>
> Presently space quota enforcement relies on RegionServers sending reports to 
> the master about each Region that they host. This is done by periodically 
> reading the cached size of each HFile in each Region (which was ultimately 
> computed from HDFS).
> This means that the Master is unaware of Region size growth until the next 
> time this chore in a RegionServer fires, which is a fair amount of latency 
> (a few minutes, by default). Operations like flushes, compactions, 
> and bulk-loads are delayed even though the RegionServer is running those 
> operations locally.
> Instead, we can create an API which these operations could invoke that would 
> automatically update the size of the Region being operated on. For example, a 
> successful flush can report that the size of a Region increased by the size 
> of the flush. A compaction can subtract the size of the input files of the 
> compaction and add in the size of the resulting file.
> This de-couples the computation of a Region's size from sending the Region 
> sizes to the Master, allowing us to send reports more frequently, increasing 
> the responsiveness of the cluster to size changes.
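A minimal sketch of the shape such an API could take (the names below are 
assumptions based on this description and on the test names in the QA run, 
not a definitive interface):

{code:java}
import org.apache.hadoop.hbase.client.RegionInfo;

// Hypothetical per-RegionServer store of Region sizes, updated in-line by
// flushes, compactions, and bulk loads rather than by a periodic chore.
public interface RegionSizeStore {
  /** Set the absolute size of a Region, e.g. when it is opened. */
  void put(RegionInfo region, long sizeInBytes);

  /** Apply a delta: +flushedBytes after a flush, or
   *  (outputBytes - inputBytes) after a compaction. */
  void incrementRegionSize(RegionInfo region, long deltaInBytes);
}
{code}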



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20042) TestRegionServerAbort flakey

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372213#comment-16372213
 ] 

stack commented on HBASE-20042:
---

First patch did not work. It just failed on the flakies dashboard: 
https://builds.apache.org/job/HBASE-Flaky-Tests-branch2.0/2481/ It avoided the 
null region, but then the subsequent check that the region had aborted failed 
because we'd run a ServerCrashProcedure in the meantime... I pushed the 
ADDENDUM that reorders statements so we have a Region before we kill the 
RegionServer. Pushed it to master and branch-2.
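Roughly, the reorder amounts to something like the following (a sketch against 
the mini-cluster test utilities, not the literal addendum; {{cluster}} and 
{{tableName}} are assumed to be in scope):

{code:java}
// Grab a handle on the Region *before* aborting its RegionServer, so the
// later assertions are not racing a ServerCrashProcedure for the handle.
HRegion region = cluster.getRegions(tableName).get(0);
cluster.getRegionServer(0).abort("Aborting for test");
// ... assertions about the abort can now safely use `region` ...
{code}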

> TestRegionServerAbort flakey
> 
>
> Key: HBASE-20042
> URL: https://issues.apache.org/jira/browse/HBASE-20042
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
> Attachments: 
> 0001-HBASE-20042-TestRegionServerAbort-flakey-ADDENDUM-RE.patch, 
> HBASE-20042.branch-2.001.patch
>
>
> Failed with a hang and an index-out-of-bounds in the last 30 runs. The 
> timeout run has no logs. The index-out-of-bounds seems basic... Looking at 
> the logs, all seems to be working... eventually... as it should.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18133) Low-latency space quota size reports

2018-02-21 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18133:
---
Attachment: HBASE-18133.003.patch

> Low-latency space quota size reports
> 
>
> Key: HBASE-18133
> URL: https://issues.apache.org/jira/browse/HBASE-18133
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18133.001.patch, HBASE-18133.002.patch, 
> HBASE-18133.003.patch
>
>
> Presently space quota enforcement relies on RegionServers sending reports to 
> the master about each Region that they host. This is done by periodically 
> reading the cached size of each HFile in each Region (which was ultimately 
> computed from HDFS).
> This means that the Master is unaware of Region size growth until the next 
> time this chore in a RegionServer fires, which is a fair amount of latency 
> (a few minutes, by default). Operations like flushes, compactions, 
> and bulk-loads are delayed even though the RegionServer is running those 
> operations locally.
> Instead, we can create an API which these operations could invoke that would 
> automatically update the size of the Region being operated on. For example, a 
> successful flush can report that the size of a Region increased by the size 
> of the flush. A compaction can subtract the size of the input files of the 
> compaction and add in the size of the resulting file.
> This de-couples the computation of a Region's size from sending the Region 
> sizes to the Master, allowing us to send reports more frequently, increasing 
> the responsiveness of the cluster to size changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20042) TestRegionServerAbort flakey

2018-02-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20042:
--
Attachment: 0001-HBASE-20042-TestRegionServerAbort-flakey-ADDENDUM-RE.patch

> TestRegionServerAbort flakey
> 
>
> Key: HBASE-20042
> URL: https://issues.apache.org/jira/browse/HBASE-20042
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
> Attachments: 
> 0001-HBASE-20042-TestRegionServerAbort-flakey-ADDENDUM-RE.patch, 
> HBASE-20042.branch-2.001.patch
>
>
> Failed with a hang and an index-out-of-bounds in the last 30 runs. The 
> timeout run has no logs. The index-out-of-bounds seems basic... Looking at 
> the logs, all seems to be working... eventually... as it should.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18223) Track the effort to improve/bug fix read replica feature

2018-02-21 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372204#comment-16372204
 ] 

huaxiang sun commented on HBASE-18223:
--

HBASE-19934 is a dup of HBASE-19281; the patch for HBASE-19934 got committed 
first.

> Track the effort to improve/bug fix read replica feature
> 
>
> Key: HBASE-18223
> URL: https://issues.apache.org/jira/browse/HBASE-18223
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Major
>
> During hbasecon 2017, a group of people met and agreed to collaborate on the 
> effort to improve and bug-fix the read replica feature so users can enable it 
> in their clusters. This jira is created to track jiras known to be related to 
> the read replica feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19281) Snapshot creation failed after splitting table (replica region > 1)

2018-02-21 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-19281:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

the fix was committed through HBASE-19934, resolving it.

> Snapshot creation failed after splitting table (replica region > 1)
> ---
>
> Key: HBASE-19281
> URL: https://issues.apache.org/jira/browse/HBASE-19281
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 1.3.1
>Reporter: Chandra Sekhar
>Assignee: Pankaj Kumar
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-19281-branch-1.patch
>
>
> Snapshot creation failed with the below error when tried on a table with 
> multiple region replicas:
> {noformat}
> hbase(main):025:0> snapshot 't1','t1_snap'
> 2017-11-16 18:04:27,930 DEBUG [main] client.HBaseAdmin: Waiting a max of 
> 30 ms for snapshot '{ ss=t1_snap table=t1 type=FLUSH }'' to complete. 
> (max 42857 ms per retry)
> 2017-11-16 18:04:27,930 DEBUG [main] client.HBaseAdmin: (#1) Sleeping: 100ms 
> while waiting for snapshot completion.
> 2017-11-16 18:04:28,030 DEBUG [main] client.HBaseAdmin: Getting current 
> status of snapshot from master...
> 2017-11-16 18:04:28,035 DEBUG [main] client.HBaseAdmin: (#2) Sleeping: 200ms 
> while waiting for snapshot completion.
> 2017-11-16 18:04:28,236 DEBUG [main] client.HBaseAdmin: Getting current 
> status of snapshot from master...
> 2017-11-16 18:04:28,238 DEBUG [main] client.HBaseAdmin: (#3) Sleeping: 300ms 
> while waiting for snapshot completion.
> 2017-11-16 18:04:28,538 DEBUG [main] client.HBaseAdmin: Getting current 
> status of snapshot from master...
> ERROR: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=t1_snap table=t1 type=FLUSH } had an error.  Procedure t1_snap { 
> waiting=[] done=[] }
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:354)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.isSnapshotDone(MasterRpcServices.java:1091)
> at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2418)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:191)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via 
> Failed taking snapshot { ss=t1_snap table=t1 type=FLUSH } due to 
> exception:Manifest region info {ENCODED => 3158abebd655fca73cd87b6e84584197, 
> NAME => 't1,,1510826577196_0002.3158abebd655fca73cd87b6e84584197.', STARTKEY 
> => '', ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 2}doesn't 
> match expected region:{ENCODED => 73aa1a133d3344a67afa46ee135e389a, NAME => 
> 't1,,1510826577196.73aa1a133d3344a67afa46ee135e389a.', STARTKEY => '', ENDKEY 
> => '', OFFLINE => true, SPLIT => 
> true}:org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Manifest 
> region info {ENCODED => 3158abebd655fca73cd87b6e84584197, NAME => 
> 't1,,1510826577196_0002.3158abebd655fca73cd87b6e84584197.', STARTKEY => '', 
> ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 2}doesn't match 
> expected region:{ENCODED => 73aa1a133d3344a67afa46ee135e389a, NAME => 
> 't1,,1510826577196.73aa1a133d3344a67afa46ee135e389a.', STARTKEY => '', ENDKEY 
> => '', OFFLINE => true, SPLIT => true}
> at 
> org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)
> at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:315)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:344)
> ... 6 more
> Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: 
> Manifest region info {ENCODED => 3158abebd655fca73cd87b6e84584197, NAME => 
> 't1,,1510826577196_0002.3158abebd655fca73cd87b6e84584197.', STARTKEY => '', 
> ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 2}doesn't match 
> expected region:{ENCODED => 73aa1a133d3344a67afa46ee135e389a, NAME => 
> 't1,,1510826577196.73aa1a133d3344a67afa46ee135e389a.', STARTKEY => '', ENDKEY 
> => '', OFFLINE => true, SPLIT => true}
> at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifyRegionInfo(MasterSnapshotVerifier.java:220)
> at 
> 

[jira] [Commented] (HBASE-19281) Snapshot creation failed after splitting table (replica region > 1)

2018-02-21 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372199#comment-16372199
 ] 

huaxiang sun commented on HBASE-19281:
--

The fix was committed through HBASE-19934; resolving it as a dup.

> Snapshot creation failed after splitting table (replica region > 1)
> ---
>
> Key: HBASE-19281
> URL: https://issues.apache.org/jira/browse/HBASE-19281
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 1.3.1
>Reporter: Chandra Sekhar
>Assignee: Pankaj Kumar
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-19281-branch-1.patch
>
>
> Snapshot creation failed with the below error when tried on a table with 
> multiple region replicas:
> {noformat}
> hbase(main):025:0> snapshot 't1','t1_snap'
> 2017-11-16 18:04:27,930 DEBUG [main] client.HBaseAdmin: Waiting a max of 
> 30 ms for snapshot '{ ss=t1_snap table=t1 type=FLUSH }'' to complete. 
> (max 42857 ms per retry)
> 2017-11-16 18:04:27,930 DEBUG [main] client.HBaseAdmin: (#1) Sleeping: 100ms 
> while waiting for snapshot completion.
> 2017-11-16 18:04:28,030 DEBUG [main] client.HBaseAdmin: Getting current 
> status of snapshot from master...
> 2017-11-16 18:04:28,035 DEBUG [main] client.HBaseAdmin: (#2) Sleeping: 200ms 
> while waiting for snapshot completion.
> 2017-11-16 18:04:28,236 DEBUG [main] client.HBaseAdmin: Getting current 
> status of snapshot from master...
> 2017-11-16 18:04:28,238 DEBUG [main] client.HBaseAdmin: (#3) Sleeping: 300ms 
> while waiting for snapshot completion.
> 2017-11-16 18:04:28,538 DEBUG [main] client.HBaseAdmin: Getting current 
> status of snapshot from master...
> ERROR: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=t1_snap table=t1 type=FLUSH } had an error.  Procedure t1_snap { 
> waiting=[] done=[] }
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:354)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.isSnapshotDone(MasterRpcServices.java:1091)
> at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2418)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:191)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via 
> Failed taking snapshot { ss=t1_snap table=t1 type=FLUSH } due to 
> exception:Manifest region info {ENCODED => 3158abebd655fca73cd87b6e84584197, 
> NAME => 't1,,1510826577196_0002.3158abebd655fca73cd87b6e84584197.', STARTKEY 
> => '', ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 2}doesn't 
> match expected region:{ENCODED => 73aa1a133d3344a67afa46ee135e389a, NAME => 
> 't1,,1510826577196.73aa1a133d3344a67afa46ee135e389a.', STARTKEY => '', ENDKEY 
> => '', OFFLINE => true, SPLIT => 
> true}:org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Manifest 
> region info {ENCODED => 3158abebd655fca73cd87b6e84584197, NAME => 
> 't1,,1510826577196_0002.3158abebd655fca73cd87b6e84584197.', STARTKEY => '', 
> ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 2}doesn't match 
> expected region:{ENCODED => 73aa1a133d3344a67afa46ee135e389a, NAME => 
> 't1,,1510826577196.73aa1a133d3344a67afa46ee135e389a.', STARTKEY => '', ENDKEY 
> => '', OFFLINE => true, SPLIT => true}
> at 
> org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)
> at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:315)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:344)
> ... 6 more
> Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: 
> Manifest region info {ENCODED => 3158abebd655fca73cd87b6e84584197, NAME => 
> 't1,,1510826577196_0002.3158abebd655fca73cd87b6e84584197.', STARTKEY => '', 
> ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 2}doesn't match 
> expected region:{ENCODED => 73aa1a133d3344a67afa46ee135e389a, NAME => 
> 't1,,1510826577196.73aa1a133d3344a67afa46ee135e389a.', STARTKEY => '', ENDKEY 
> => '', OFFLINE => true, SPLIT => true}
> at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifyRegionInfo(MasterSnapshotVerifier.java:220)
> at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifyRegions(MasterSnapshotVerifier.java:198)
> 

[jira] [Commented] (HBASE-19934) HBaseSnapshotException when read replicas is enabled and online snapshot is taken after region splitting

2018-02-21 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372195#comment-16372195
 ] 

huaxiang sun commented on HBASE-19934:
--

Sorry, late to the game. I was about to commit HBASE-19281 and found the fix 
was already committed here.

> HBaseSnapshotException when read replicas is enabled and online snapshot is 
> taken after region splitting
> 
>
> Key: HBASE-19934
> URL: https://issues.apache.org/jira/browse/HBASE-19934
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 2.0.0-beta-2, 1.4.2
>
> Attachments: HBASE-19934-UT.patch, HBASE-19934-branch-1.patch, 
> HBASE-19934-v2.patch, HBASE-19934-v3.patch, HBASE-19934-v3.patch, 
> HBASE-19934.branch-1.001.patch, HBASE-19934.patch, HBASE-19934.patch, 
> HBASE-19934.patch, HBASE-19934.patch
>
>
> Investigating HBASE-19893, I'm encountering another issue.
> Steps to reproduce are as follows:
> 1. Create a table
> {code:java}
> create "test", "cf", {REGION_REPLICATION => 2}{code}
> 2. Load data to the table
> {code:java}
> (0...2000).each{|i| put "test", "row#{i}", "cf:col", "val"}{code}
> 3. Split the table
> {code:java}
> split "test"{code}
> 4. Take a snapshot for the table
> {code:java}
> snapshot "test", "snap"{code}
> And I encountered the following error:
> {code:java}
> hbase(main):004:0> snapshot "test", "snap"
> ERROR: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=snap table=test type=FLUSH } had an error. Procedure snap { waiting=[] 
> done=[] }
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:379)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.isSnapshotDone(MasterRpcServices.java:1144)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via 
> Failed taking snapshot { ss=snap table=test type=FLUSH } due to 
> exception:Manifest region info {ENCODED => b910488a686644a7c1c85246d0d123d5, 
> NAME => 'test,,1517808523837_0001.b910488a686644a7c1c85246d0d123d5.', 
> STARTKEY => '', ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 
> 1}doesn't match expected region:{ENCODED => ef8665859c0b19927b7dc127ec10120a, 
> NAME => 'test,,1517808523837.ef8665859c0b19927b7dc127ec10120a.', STARTKEY => 
> '', ENDKEY => '', OFFLINE => true, SPLIT => 
> true}:org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Manifest 
> region info {ENCODED => b910488a686644a7c1c85246d0d123d5, NAME => 
> 'test,,1517808523837_0001.b910488a686644a7c1c85246d0d123d5.', STARTKEY => '', 
> ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 1}doesn't match 
> expected region:{ENCODED => ef8665859c0b19927b7dc127ec10120a, NAME => 
> 'test,,1517808523837.ef8665859c0b19927b7dc127ec10120a.', STARTKEY => '', 
> ENDKEY => '', OFFLINE => true, SPLIT => true}
> at 
> org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:82)
> at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:306)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:368)
> ... 6 more
> Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: 
> Manifest region info {ENCODED => b910488a686644a7c1c85246d0d123d5, NAME => 
> 'test,,1517808523837_0001.b910488a686644a7c1c85246d0d123d5.', STARTKEY => '', 
> ENDKEY => '', OFFLINE => true, SPLIT => true, REPLICA_ID => 1}doesn't match 
> expected region:{ENCODED => ef8665859c0b19927b7dc127ec10120a, NAME => 
> 'test,,1517808523837.ef8665859c0b19927b7dc127ec10120a.', STARTKEY => '', 
> ENDKEY => '', OFFLINE => true, SPLIT => true}
> at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifyRegionInfo(MasterSnapshotVerifier.java:223)
> at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifyRegions(MasterSnapshotVerifier.java:201)
> at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:119)
> at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:202)
> at 

[jira] [Updated] (HBASE-15740) Replication source.shippedKBs metric is undercounting because it is in KB

2018-02-21 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15740:
-
Release Note: Removed Replication source.shippedKBs metric in favor of 
source.shippedBytes  (was: Deprecated Replication source.shippedKBs metric in 
favor of source.shippedBytes)

> Replication source.shippedKBs metric is undercounting because it is in KB
> -
>
> Key: HBASE-15740
> URL: https://issues.apache.org/jira/browse/HBASE-15740
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Major
> Fix For: 2.0.0, 1.3.0, 0.98.20
>
> Attachments: HBASE-15740-0.98.patch, hbase-15740_v1.patch, 
> hbase-15740_v2.patch
>
>
> In a cluster where there is replication going on, I've noticed that this is 
> always 0:
> {code}
> "source.shippedKBs" : 0,
> {code}
> Looking at the source reveals why:
> {code}
>   metrics.shipBatch(currentNbOperations, currentSize / 1024, 
> currentNbHFiles);
> {code}
> It is always undercounting because we discard the remaining bytes after the 
> KB boundary. This is especially a problem when we are always shipping small 
> batches <1KB.
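> A minimal sketch of the truncation (hypothetical batch sizes; per the release 
> note above, the fix replaced the metric with source.shippedBytes):
> {code:java}
> // Illustration of the undercount with hypothetical batch sizes, in bytes.
> public class ShippedKBsUndercount {
>   public static void main(String[] args) {
>     long[] batchSizes = {800, 900, 1500, 2047};
>     long shippedKBs = 0;   // what the old source.shippedKBs accumulated
>     long shippedBytes = 0; // what source.shippedBytes accumulates
>     for (long currentSize : batchSizes) {
>       shippedKBs += currentSize / 1024; // integer division: <1KB batches add 0
>       shippedBytes += currentSize;
>     }
>     // Prints shippedKBs=2, shippedBytes=5247: the KB metric lost ~60% here.
>     System.out.println("shippedKBs=" + shippedKBs + ", shippedBytes=" + shippedBytes);
>   }
> }
> {code}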



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372184#comment-16372184
 ] 

stack commented on HBASE-19767:
---

Patch looks good. Waiting on hadoopqa.

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20035) .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and RuntimeExceptions

2018-02-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372181#comment-16372181
 ] 

Josh Elser commented on HBASE-20035:


bq. Does the comment mean the tableSize should be 15? Maybe clear up on commit.

Essentially I'm trying to explain the difference between writing 1KB of "Cells" 
and the resultant HFile space for the same data. Will clarify it.
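
For a rough sense of where the gap comes from: each Cell carries key overhead 
(row, family, qualifier, timestamp, type, lengths) on top of its value, and the 
HFile adds fixed overhead for block headers, the index, the bloom filter, and 
the trailer. A back-of-the-envelope sketch (all overhead numbers below are 
illustrative assumptions, not HBase constants):

{code:java}
// Back-of-the-envelope only: overhead figures are assumptions, not measured constants.
public class HFileSizeEstimate {
  public static void main(String[] args) {
    int numCells = 10;
    int valueBytes = 100;            // 10 x 100B values = ~1KB of "data size"
    int perCellKeyOverhead = 40;     // row key, family, qualifier, timestamp, type, lengths
    int fixedHFileOverhead = 13000;  // block headers, index, bloom, trailer (assumed)

    long logical = (long) numCells * valueBytes;
    long onDisk = fixedHFileOverhead + (long) numCells * (valueBytes + perCellKeyOverhead);
    // ~1KB logical vs ~14.4KB on disk, in the ballpark of the "about 15KB" comment.
    System.out.println("logical=" + logical + "B, onDiskEstimate=" + onDisk + "B");
  }
}
{code}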

> .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and 
> RuntimeExceptions
> -
>
> Key: HBASE-20035
> URL: https://issues.apache.org/jira/browse/HBASE-20035
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Josh Elser
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20035.001.branch-2.patch
>
>
> It failed the nightly.
> Says this...
> Error Message
> Waiting timed out after [30,000] msec
> Stacktrace
> java.lang.AssertionError: Waiting timed out after [30,000] msec
>   at 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(TestQuotaStatusRPCs.java:267)
> ... but looking in log I see following:
> Odd thing is the test is run three times and it failed all three times for 
> same reason.
> [ERROR] Failures: 
> [ERROR] 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs)
> [ERROR]   Run 1: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 2: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 3: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> If you go to build artifacts you can download full -output.txt log. I see 
> stuff like this which might be ok
> {code}
> 2018-02-21 01:29:59,546 INFO  
> [StoreCloserThread-testQuotaStatusFromMaster4,0,1519176558800.1dbd00f38915cd276410065f85140b26.-1]
>  regionserver.HStore(930): Closed f1
> 2018-02-21 01:29:59,551 ERROR [master/ad51e354307e:0.Chore.2] 
> hbase.ScheduledChore(189): Caught error
> java.lang.RuntimeException: java.util.concurrent.RejectedExecutionException: 
> Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
>   at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.next(QuotaRetriever.java:106)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever$Iter.(QuotaRetriever.java:125)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.iterator(QuotaRetriever.java:117)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.fetchAllTablesWithQuotasDefined(QuotaObserverChore.java:458)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore._chore(QuotaObserverChore.java:148)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.chore(QuotaObserverChore.java:136)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
>   at 
> 

[jira] [Commented] (HBASE-20035) .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and RuntimeExceptions

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372177#comment-16372177
 ] 

stack commented on HBASE-20035:
---

+1 Try it.

I don't get this bit sir:

{code:java}
// As of 2.0.0-beta-2, this 1KB of data size actually results in about 15KB on disk
final long tableSize = 1024L * 1; // 1KB
{code}

Does the comment mean the tableSize should be 15? Maybe clear up on commit.

> .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and 
> RuntimeExceptions
> -
>
> Key: HBASE-20035
> URL: https://issues.apache.org/jira/browse/HBASE-20035
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Josh Elser
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20035.001.branch-2.patch
>
>
> It failed the nightly.
> Says this...
> Error Message
> Waiting timed out after [30,000] msec
> Stacktrace
> java.lang.AssertionError: Waiting timed out after [30,000] msec
>   at 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(TestQuotaStatusRPCs.java:267)
> ... but looking in log I see following:
> Odd thing is the test is run three times and it failed all three times for 
> same reason.
> [ERROR] Failures: 
> [ERROR] 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs)
> [ERROR]   Run 1: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 2: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 3: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> If you go to build artifacts you can download full -output.txt log. I see 
> stuff like this which might be ok
> {code}
> 2018-02-21 01:29:59,546 INFO  
> [StoreCloserThread-testQuotaStatusFromMaster4,0,1519176558800.1dbd00f38915cd276410065f85140b26.-1]
>  regionserver.HStore(930): Closed f1
> 2018-02-21 01:29:59,551 ERROR [master/ad51e354307e:0.Chore.2] 
> hbase.ScheduledChore(189): Caught error
> java.lang.RuntimeException: java.util.concurrent.RejectedExecutionException: 
> Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
>   at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.next(QuotaRetriever.java:106)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever$Iter.(QuotaRetriever.java:125)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.iterator(QuotaRetriever.java:117)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.fetchAllTablesWithQuotasDefined(QuotaObserverChore.java:458)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore._chore(QuotaObserverChore.java:148)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.chore(QuotaObserverChore.java:136)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> 

[jira] [Assigned] (HBASE-20043) ITBLL fails against hadoop3

2018-02-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-20043:
-

Assignee: stack

> ITBLL fails against hadoop3
> ---
>
> Key: HBASE-20043
> URL: https://issues.apache.org/jira/browse/HBASE-20043
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Reporter: Mike Drob
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0-beta-2
>
>
> This has been failing for a while. I haven't tried to bisect, but it was 
> failing before my changes for HBASE-19991 at least.
> {code}
> mvn clean verify -pl hbase-it -Dhadoop.profile=3.0 
> -Dit.test=IntegrationTestBigLinkedList -Dtest=none -am
> {code}
> {code}
> 2018-02-21 16:43:13,265 ERROR 
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=60450] 
> ipc.RpcServer(464): Unexpected throwable object 
> java.lang.AssertionError: 
> hri=IntegrationTestBigLinkedList,\x8E8\xE3\x8E8\xE3\x8E5,1519252895022.236bbedde32e4549691c108a1a7005a8.,
>  source=, destination=mdrob-mbp.hsd1.tx.comcast.net,60456,1519252856027
>   at org.apache.hadoop.hbase.master.HMaster.move(HMaster.java:1691)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.moveRegion(MasterRpcServices.java:1348)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> 2018-02-21 16:43:13,276 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=60450] 
> ipc.CallRunner(141): callId: 49 service: MasterService methodName: MoveRegion 
> size: 106 connection: 192.168.1.134:60743 deadline: 1519253053263
> java.io.IOException: 
> hri=IntegrationTestBigLinkedList,\x8E8\xE3\x8E8\xE3\x8E5,1519252895022.236bbedde32e4549691c108a1a7005a8.,
>  source=, destination=mdrob-mbp.hsd1.tx.comcast.net,60456,1519252856027
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:465)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.AssertionError: 
> hri=IntegrationTestBigLinkedList,\x8E8\xE3\x8E8\xE3\x8E5,1519252895022.236bbedde32e4549691c108a1a7005a8.,
>  source=, destination=mdrob-mbp.hsd1.tx.comcast.net,60456,1519252856027
>   at org.apache.hadoop.hbase.master.HMaster.move(HMaster.java:1691)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.moveRegion(MasterRpcServices.java:1348)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
>   ... 3 more
> {code}
> The assertion that it trips is below; note the empty source= in the exception 
> message above, which suggests regionState.getServerName() returned null and so 
> rp.getSource() is null:
> {code}
> // Now we can do the move
> RegionPlan rp = new RegionPlan(hri, regionState.getServerName(), dest);
> assert rp.getDestination() != null: rp.toString() + " " + dest;
> assert rp.getSource() != null: rp.toString();
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20041) cannot start mini mapreduce cluster for ITs

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372168#comment-16372168
 ] 

stack commented on HBASE-20041:
---

No one but hbase-rest depends on hadoop-yarn-server-nodemanager, etc., when 
running against h3?

Is this all to support MR PE against REST? If we killed the latter facility, 
would that help?

You think the purge of the above from dependency management causes HBASE-20043? 
I've not tried it. Did it work before this change?

Thanks [~mdrob]

> cannot start mini mapreduce cluster for ITs
> ---
>
> Key: HBASE-20041
> URL: https://issues.apache.org/jira/browse/HBASE-20041
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20041.patch
>
>
> We killed a lot of the jersey yarn dependencies, so now we can't start the 
> hadoop3 mini MR cluster. This makes ITs sad.
> Need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20041) cannot start mini mapreduce cluster for ITs

2018-02-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20041:
--
Fix Version/s: 2.0.0-beta-2

> cannot start mini mapreduce cluster for ITs
> ---
>
> Key: HBASE-20041
> URL: https://issues.apache.org/jira/browse/HBASE-20041
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20041.patch
>
>
> We killed a lot of the jersey yarn dependencies, so now we can't start the 
> hadoop3 mini MR cluster. This makes ITs sad.
> Need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20041) cannot start mini mapreduce cluster for ITs

2018-02-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20041:
--
Status: Patch Available  (was: Open)

> cannot start mini mapreduce cluster for ITs
> ---
>
> Key: HBASE-20041
> URL: https://issues.apache.org/jira/browse/HBASE-20041
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Attachments: HBASE-20041.patch
>
>
> We killed a lot of the jersey yarn dependencies, so now we can't start the 
> hadoop3 mini MR cluster. This makes ITs sad.
> Need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20041) cannot start mini mapreduce cluster for ITs

2018-02-21 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20041:
--
Attachment: HBASE-20041.patch

> cannot start mini mapreduce cluster for ITs
> ---
>
> Key: HBASE-20041
> URL: https://issues.apache.org/jira/browse/HBASE-20041
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Attachments: HBASE-20041.patch
>
>
> We killed a lot of the jersey yarn dependencies, so now we can't start the 
> hadoop3 mini MR cluster. This makes ITs sad.
> Need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20044) TestClientClusterStatus is flakey

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372151#comment-16372151
 ] 

stack commented on HBASE-20044:
---

.001 is what I pushed to master and branch-2. Keeping open to see if this helps.

> TestClientClusterStatus is flakey
> -
>
> Key: HBASE-20044
> URL: https://issues.apache.org/jira/browse/HBASE-20044
> Project: HBase
>  Issue Type: Sub-task
>  Components: flakey
>Reporter: stack
>Priority: Major
> Attachments: HBASE-20044.branch-2.001.patch
>
>
> It killed a nightly. Failed in the flakey suite. The compare is too sensitive 
> to the slightest variance. Here are two failures... one because the previous 
> test had not finished putting back a Region that had been offlined, and the 
> other because the count of requests was off slightly. Let me make the compare 
> coarser.
> {code}
> Test set: org.apache.hadoop.hbase.TestClientClusterStatus
> ---
> Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.858 s <<< 
> FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
> testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time elapsed: 
> 0.236 s  <<< FAILURE!
> java.lang.AssertionError: 
> expected: Number of backup masters: 2
>   asf903.gq1.ygridcore.net,34661,1519246530655
>   asf903.gq1.ygridcore.net,34734,1519246533133
> Number of live region servers: 4
>   asf903.gq1.ygridcore.net,37432,1519246533632
>   asf903.gq1.ygridcore.net,42964,1519246533554
>   asf903.gq1.ygridcore.net,43699,1519246533376
>   asf903.gq1.ygridcore.net,56911,1519246533711
> Number of dead region servers: 1
>   asf903.gq1.ygridcore.net,57278,1519246533770
> Average load: 0.25
> Number of requests: 17
> Number of regions: 1
> Number of regions in transition: 0> but was: asf903.gq1.ygridcore.net,45687,1519246533030
> Number of backup masters: 2
>   asf903.gq1.ygridcore.net,34661,1519246530655
>   asf903.gq1.ygridcore.net,34734,1519246533133
> Number of live region servers: 4
>   asf903.gq1.ygridcore.net,37432,1519246533632
>   asf903.gq1.ygridcore.net,42964,1519246533554
>   asf903.gq1.ygridcore.net,43699,1519246533376
>   asf903.gq1.ygridcore.net,56911,1519246533711
> Number of dead region servers: 1
>   asf903.gq1.ygridcore.net,57278,1519246533770
> Average load: 0.5
> Number of requests: 17
> Number of regions: 2
> Number of regions in transition: 0>
>   at 
> org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
> {code}
> and 
> {code}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.416 
> s <<< FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
> [ERROR] testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time 
> elapsed: 0.065 s  <<< FAILURE!
> java.lang.AssertionError: 
> expected: Number of backup masters: 2
>   9845c79afe69,35076,1519227086213
>   9845c79afe69,45963,1519227086363
> Number of live region servers: 4
>   9845c79afe69,34709,1519227086571
>   9845c79afe69,34961,1519227086645
>   9845c79afe69,35891,1519227086720
>   9845c79afe69,36139,1519227086486
> Number of dead region servers: 1
>   9845c79afe69,41992,1519227086820
> Average load: 0.5
> Number of requests: 17
> Number of regions: 2
> Number of regions in transition: 0> but was: 9845c79afe69,46509,1519227084385
> Number of backup masters: 2
>   9845c79afe69,35076,1519227086213
>   9845c79afe69,45963,1519227086363
> Number of live region servers: 4
>   9845c79afe69,34709,1519227086571
>   9845c79afe69,34961,1519227086645
>   9845c79afe69,35891,1519227086720
>   9845c79afe69,36139,1519227086486
> Number of dead region servers: 1
>   9845c79afe69,41992,1519227086820
> Average load: 0.5
> Number of requests: 19
> Number of regions: 2
> Number of regions in transition: 0>
>   at 
> org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20044) TestClientClusterStatus is flakey

2018-02-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20044:
--
Attachment: HBASE-20044.branch-2.001.patch

> TestClientClusterStatus is flakey
> -
>
> Key: HBASE-20044
> URL: https://issues.apache.org/jira/browse/HBASE-20044
> Project: HBase
>  Issue Type: Sub-task
>  Components: flakey
>Reporter: stack
>Priority: Major
> Attachments: HBASE-20044.branch-2.001.patch
>
>
> It killed a nightly. Failed in the flakey suite. The compare is too sensitive 
> to the slightest variance. Here are two failures... one because the previous 
> test had not finished putting back a Region that had been offlined, and the 
> other because the count of requests was off slightly. Let me make the compare 
> coarser.
> {code}
> Test set: org.apache.hadoop.hbase.TestClientClusterStatus
> ---
> Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.858 s <<< 
> FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
> testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time elapsed: 
> 0.236 s  <<< FAILURE!
> java.lang.AssertionError: 
> expected: Number of backup masters: 2
>   asf903.gq1.ygridcore.net,34661,1519246530655
>   asf903.gq1.ygridcore.net,34734,1519246533133
> Number of live region servers: 4
>   asf903.gq1.ygridcore.net,37432,1519246533632
>   asf903.gq1.ygridcore.net,42964,1519246533554
>   asf903.gq1.ygridcore.net,43699,1519246533376
>   asf903.gq1.ygridcore.net,56911,1519246533711
> Number of dead region servers: 1
>   asf903.gq1.ygridcore.net,57278,1519246533770
> Average load: 0.25
> Number of requests: 17
> Number of regions: 1
> Number of regions in transition: 0> but was: asf903.gq1.ygridcore.net,45687,1519246533030
> Number of backup masters: 2
>   asf903.gq1.ygridcore.net,34661,1519246530655
>   asf903.gq1.ygridcore.net,34734,1519246533133
> Number of live region servers: 4
>   asf903.gq1.ygridcore.net,37432,1519246533632
>   asf903.gq1.ygridcore.net,42964,1519246533554
>   asf903.gq1.ygridcore.net,43699,1519246533376
>   asf903.gq1.ygridcore.net,56911,1519246533711
> Number of dead region servers: 1
>   asf903.gq1.ygridcore.net,57278,1519246533770
> Average load: 0.5
> Number of requests: 17
> Number of regions: 2
> Number of regions in transition: 0>
>   at 
> org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
> {code}
> and 
> {code}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.416 
> s <<< FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
> [ERROR] testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time 
> elapsed: 0.065 s  <<< FAILURE!
> java.lang.AssertionError: 
> expected: Number of backup masters: 2
>   9845c79afe69,35076,1519227086213
>   9845c79afe69,45963,1519227086363
> Number of live region servers: 4
>   9845c79afe69,34709,1519227086571
>   9845c79afe69,34961,1519227086645
>   9845c79afe69,35891,1519227086720
>   9845c79afe69,36139,1519227086486
> Number of dead region servers: 1
>   9845c79afe69,41992,1519227086820
> Average load: 0.5
> Number of requests: 17
> Number of regions: 2
> Number of regions in transition: 0> but was: 9845c79afe69,46509,1519227084385
> Number of backup masters: 2
>   9845c79afe69,35076,1519227086213
>   9845c79afe69,45963,1519227086363
> Number of live region servers: 4
>   9845c79afe69,34709,1519227086571
>   9845c79afe69,34961,1519227086645
>   9845c79afe69,35891,1519227086720
>   9845c79afe69,36139,1519227086486
> Number of dead region servers: 1
>   9845c79afe69,41992,1519227086820
> Average load: 0.5
> Number of requests: 19
> Number of regions: 2
> Number of regions in transition: 0>
>   at 
> org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20044) TestClientClusterStatus is flakey

2018-02-21 Thread stack (JIRA)
stack created HBASE-20044:
-

 Summary: TestClientClusterStatus is flakey
 Key: HBASE-20044
 URL: https://issues.apache.org/jira/browse/HBASE-20044
 Project: HBase
  Issue Type: Sub-task
  Components: flakey
Reporter: stack


It killed a nightly. Failed in the flakey suite. The compare is too sensitive to 
the slightest variance. Here are two failures... one because the previous test 
had not finished putting back a Region that had been offlined, and the other 
because the count of requests was off slightly. Let me make the compare 
coarser; a sketch of one coarser approach follows the failure logs below.


{code}
Test set: org.apache.hadoop.hbase.TestClientClusterStatus
---
Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.858 s <<< 
FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time elapsed: 0.236 
s  <<< FAILURE!
java.lang.AssertionError: 
expected: but was:
at 
org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
{code}


and 


{code}
[ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.416 s 
<<< FAILURE! - in org.apache.hadoop.hbase.TestClientClusterStatus
[ERROR] testNone(org.apache.hadoop.hbase.TestClientClusterStatus)  Time 
elapsed: 0.065 s  <<< FAILURE!
java.lang.AssertionError: 
expected: but was:
at 
org.apache.hadoop.hbase.TestClientClusterStatus.testNone(TestClientClusterStatus.java:107)
{code}
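
One coarser compare, for illustration (accessor names are assumed; this is not 
necessarily what the eventual patch does):

{code:java}
// Sketch only: compare the stable pieces of ClusterStatus and skip the volatile
// counters (request count, region count, average load) that race with the
// previous test's cleanup.
assertEquals(expected.getBackupMasters(), actual.getBackupMasters());
assertEquals(expected.getDeadServerNames(), actual.getDeadServerNames());
assertEquals(expected.getServers(), actual.getServers());
{code}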



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20043) ITBLL fails against hadoop3

2018-02-21 Thread Mike Drob (JIRA)
Mike Drob created HBASE-20043:
-

 Summary: ITBLL fails against hadoop3
 Key: HBASE-20043
 URL: https://issues.apache.org/jira/browse/HBASE-20043
 Project: HBase
  Issue Type: Bug
  Components: integration tests
Reporter: Mike Drob
 Fix For: 2.0.0-beta-2


This has been failing for a while. I haven't tried to bisect, but it was 
failing before my changes for HBASE-19991 at least.

{code}
mvn clean verify -pl hbase-it -Dhadoop.profile=3.0 
-Dit.test=IntegrationTestBigLinkedList -Dtest=none -am
{code}

{code}
2018-02-21 16:43:13,265 ERROR 
[RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=60450] ipc.RpcServer(464): 
Unexpected throwable object 
java.lang.AssertionError: 
hri=IntegrationTestBigLinkedList,\x8E8\xE3\x8E8\xE3\x8E5,1519252895022.236bbedde32e4549691c108a1a7005a8.,
 source=, destination=mdrob-mbp.hsd1.tx.comcast.net,60456,1519252856027
at org.apache.hadoop.hbase.master.HMaster.move(HMaster.java:1691)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.moveRegion(MasterRpcServices.java:1348)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
2018-02-21 16:43:13,276 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=60450] ipc.CallRunner(141): 
callId: 49 service: MasterService methodName: MoveRegion size: 106 connection: 
192.168.1.134:60743 deadline: 1519253053263
java.io.IOException: 
hri=IntegrationTestBigLinkedList,\x8E8\xE3\x8E8\xE3\x8E5,1519252895022.236bbedde32e4549691c108a1a7005a8.,
 source=, destination=mdrob-mbp.hsd1.tx.comcast.net,60456,1519252856027
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:465)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: java.lang.AssertionError: 
hri=IntegrationTestBigLinkedList,\x8E8\xE3\x8E8\xE3\x8E5,1519252895022.236bbedde32e4549691c108a1a7005a8.,
 source=, destination=mdrob-mbp.hsd1.tx.comcast.net,60456,1519252856027
at org.apache.hadoop.hbase.master.HMaster.move(HMaster.java:1691)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.moveRegion(MasterRpcServices.java:1348)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
... 3 more
{code}

The assertion that it trips is below; note the empty source= in the exception 
message above, which suggests regionState.getServerName() returned null and so 
rp.getSource() is null:

{code}
// Now we can do the move
RegionPlan rp = new RegionPlan(hri, regionState.getServerName(), dest);
assert rp.getDestination() != null: rp.toString() + " " + dest;
assert rp.getSource() != null: rp.toString();
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372130#comment-16372130
 ] 

Umesh Agashe commented on HBASE-19767:
--

Here are my findings:
 # There are no fixed steps to reproduce the problem and it shows up 
intermittently (if that's not the case, please update the Jira with steps for 
repro).
 # I verified all the calculations from the UI code to the backend and they look 
okay to me. No int overflows; 'long' is used all along. The RS aggregates all 
compaction progress numbers across all regions.
 # totalCompactingKVs is not accurate but overestimated. Considering this, I 
am not quite sure how totalCompactingKVs can be less than currentCompactedKVs.
 # I tried combinations of operations with different store files through a UT to 
get negative remaining KVs but didn't succeed.
 # One possibility is an error while writing the trailer, because of which 
totalCompactingKVs is stored as 0 (speculation).

Based on this, I have uploaded a patch that prints a warning when 
totalCompactingKVs is less than currentCompactedKVs; this will help with 
further debugging. When the total is less than current, current is returned, 
which should take care of the incorrect percentage and negative remaining KVs 
in the UI. A sketch of the idea follows.
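
A minimal sketch of that clamping, assuming a CompactionProgress-style holder 
(field, logger, and method names are illustrative, not necessarily those in 
hbase-19767.master.001.patch):

{code:java}
// Sketch only: names are illustrative, not necessarily those in the attached patch.
public class CompactionProgressSketch {
  private static final org.slf4j.Logger LOG =
      org.slf4j.LoggerFactory.getLogger(CompactionProgressSketch.class);

  private long totalCompactingKVs;
  private long currentCompactedKVs;

  public long getTotalCompactingKVs() {
    if (totalCompactingKVs < currentCompactedKVs) {
      LOG.warn("totalCompactingKVs={} less than currentCompactedKVs={}",
          totalCompactingKVs, currentCompactedKVs);
      // Clamp so the UI never computes a negative "Remaining KVs".
      return currentCompactedKVs;
    }
    return totalCompactingKVs;
  }
}
{code}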

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-19767:
-
Status: Patch Available  (was: In Progress)

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19075) Task tabs on master UI cause page scroll

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372128#comment-16372128
 ] 

Hadoop QA commented on HBASE-19075:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}114m 
31s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19075 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911420/HBASE-19075.master.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux bedc5082628c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 401227ba6a |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11603/testReport/ |
| Max. process+thread count | 4962 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11603/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Task tabs on master UI cause page scroll
> 
>
> Key: HBASE-19075
> URL: https://issues.apache.org/jira/browse/HBASE-19075
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Mike Drob
>Assignee: Sahil Aggarwal
>Priority: Major
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-19075.master.001.patch, 
> HBASE-19075.master.002.patch
>
>
> On the master info page, clicking the tabs under Tasks causes the page to 
> scroll back to the top of the page.
> {noformat}
> Tasks
> Show All Monitored Tasks Show non-RPC Tasks Show All RPC Handler Tasks Show 
> Active RPC Calls Show Client Operations View as JSON
> {noformat}
> ^^ Any of those
> The other tab-like links on the page keep the scroll in the same location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-19767 started by Umesh Agashe.

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19767) Master web UI shows negative values for Remaining KVs

2018-02-21 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-19767:
-
Attachment: hbase-19767.master.001.patch

> Master web UI shows negative values for Remaining KVs
> -
>
> Key: HBASE-19767
> URL: https://issues.apache.org/jira/browse/HBASE-19767
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: Screen Shot 2018-01-12 at 12.18.41 PM.png, 
> hbase-19767.master.001.patch
>
>
> In the Master Web UI, under the compaction tab, the Remaining KVs sometimes 
> shows negative values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20035) .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and RuntimeExceptions

2018-02-21 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-20035:
---
Status: Patch Available  (was: Open)

> .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and 
> RuntimeExceptions
> -
>
> Key: HBASE-20035
> URL: https://issues.apache.org/jira/browse/HBASE-20035
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Josh Elser
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20035.001.branch-2.patch
>
>
> It failed the nightly.
> Says this...
> Error Message
> Waiting timed out after [30,000] msec
> Stacktrace
> java.lang.AssertionError: Waiting timed out after [30,000] msec
>   at 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(TestQuotaStatusRPCs.java:267)
> ... but looking in log I see following:
> Odd thing is the test is run three times and it failed all three times for 
> same reason.
> [ERROR] Failures: 
> [ERROR] 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs)
> [ERROR]   Run 1: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 2: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 3: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> If you go to build artifacts you can download full -output.txt log. I see 
> stuff like this which might be ok
> {code}
> 2018-02-21 01:29:59,546 INFO  
> [StoreCloserThread-testQuotaStatusFromMaster4,0,1519176558800.1dbd00f38915cd276410065f85140b26.-1]
>  regionserver.HStore(930): Closed f1
> 2018-02-21 01:29:59,551 ERROR [master/ad51e354307e:0.Chore.2] 
> hbase.ScheduledChore(189): Caught error
> java.lang.RuntimeException: java.util.concurrent.RejectedExecutionException: 
> Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
>   at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.next(QuotaRetriever.java:106)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever$Iter.(QuotaRetriever.java:125)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.iterator(QuotaRetriever.java:117)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.fetchAllTablesWithQuotasDefined(QuotaObserverChore.java:458)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore._chore(QuotaObserverChore.java:148)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.chore(QuotaObserverChore.java:136)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
>   at 
> 

[jira] [Updated] (HBASE-20035) .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and RuntimeExceptions

2018-02-21 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-20035:
---
Fix Version/s: 2.0.0-beta-2

> .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and 
> RuntimeExceptions
> -
>
> Key: HBASE-20035
> URL: https://issues.apache.org/jira/browse/HBASE-20035
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Josh Elser
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20035.001.branch-2.patch
>
>
> It failed the nightly.
> Says this...
> Error Message
> Waiting timed out after [30,000] msec
> Stacktrace
> java.lang.AssertionError: Waiting timed out after [30,000] msec
>   at 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(TestQuotaStatusRPCs.java:267)
> ... but looking in log I see following:
> Odd thing is the test is run three times and it failed all three times for 
> same reason.
> [ERROR] Failures: 
> [ERROR] 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs)
> [ERROR]   Run 1: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 2: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 3: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> If you go to build artifacts you can download full -output.txt log. I see 
> stuff like this which might be ok
> {code}
> 2018-02-21 01:29:59,546 INFO  
> [StoreCloserThread-testQuotaStatusFromMaster4,0,1519176558800.1dbd00f38915cd276410065f85140b26.-1]
>  regionserver.HStore(930): Closed f1
> 2018-02-21 01:29:59,551 ERROR [master/ad51e354307e:0.Chore.2] 
> hbase.ScheduledChore(189): Caught error
> java.lang.RuntimeException: java.util.concurrent.RejectedExecutionException: 
> Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
>   at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.next(QuotaRetriever.java:106)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever$Iter.(QuotaRetriever.java:125)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.iterator(QuotaRetriever.java:117)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.fetchAllTablesWithQuotasDefined(QuotaObserverChore.java:458)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore._chore(QuotaObserverChore.java:148)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.chore(QuotaObserverChore.java:136)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
>   at 
> 

[jira] [Updated] (HBASE-20035) .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and RuntimeExceptions

2018-02-21 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-20035:
---
Attachment: HBASE-20035.001.branch-2.patch

> .TestQuotaStatusRPCs.testQuotaStatusFromMaster failed with NPEs and 
> RuntimeExceptions
> -
>
> Key: HBASE-20035
> URL: https://issues.apache.org/jira/browse/HBASE-20035
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Josh Elser
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20035.001.branch-2.patch
>
>
> It failed the nightly.
> Says this...
> Error Message
> Waiting timed out after [30,000] msec
> Stacktrace
> java.lang.AssertionError: Waiting timed out after [30,000] msec
>   at 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(TestQuotaStatusRPCs.java:267)
> ... but looking in log I see following:
> Odd thing is the test is run three times and it failed all three times for 
> same reason.
> [ERROR] Failures: 
> [ERROR] 
> org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs.testQuotaStatusFromMaster(org.apache.hadoop.hbase.quotas.TestQuotaStatusRPCs)
> [ERROR]   Run 1: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 2: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> [ERROR]   Run 3: TestQuotaStatusRPCs.testQuotaStatusFromMaster:267 Waiting 
> timed out after [30,000] msec
> If you go to build artifacts you can download full -output.txt log. I see 
> stuff like this which might be ok
> {code}
> 2018-02-21 01:29:59,546 INFO  
> [StoreCloserThread-testQuotaStatusFromMaster4,0,1519176558800.1dbd00f38915cd276410065f85140b26.-1]
>  regionserver.HStore(930): Closed f1
> 2018-02-21 01:29:59,551 ERROR [master/ad51e354307e:0.Chore.2] 
> hbase.ScheduledChore(189): Caught error
> java.lang.RuntimeException: java.util.concurrent.RejectedExecutionException: 
> Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
>   at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.next(QuotaRetriever.java:106)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever$Iter.(QuotaRetriever.java:125)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaRetriever.iterator(QuotaRetriever.java:117)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.fetchAllTablesWithQuotasDefined(QuotaObserverChore.java:458)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore._chore(QuotaObserverChore.java:148)
>   at 
> org.apache.hadoop.hbase.quotas.QuotaObserverChore.chore(QuotaObserverChore.java:136)
>   at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture@79ec2ef9
>  rejected from java.util.concurrent.ThreadPoolExecutor@5198a326[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 142]
>   at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
>   at 
> 

[jira] [Commented] (HBASE-17104) Improve cryptic error message "Memstore size is" on region close

2018-02-21 Thread Sahil Aggarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371898#comment-16371898
 ] 

Sahil Aggarwal commented on HBASE-17104:


Sorry for the delay here.

Earlier we were just logging it as ERROR and then continuing on to execute the 
post-close coprocessor hooks and close the region metrics; I thought that 
should still continue. Thoughts?

> Improve cryptic error message "Memstore size is" on region close
> 
>
> Key: HBASE-17104
> URL: https://issues.apache.org/jira/browse/HBASE-17104
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Matteo Bertozzi
>Assignee: Sahil Aggarwal
>Priority: Trivial
>  Labels: beginner, noob
> Fix For: 2.0.0
>
> Attachments: HBASE-17104.master.001 (1) (1).patch, 
> HBASE-17104.master.001 (1).patch, HBASE-17104.master.001.patch
>
>
> while grepping my RS log for ERROR I found a cryptic
> {noformat}
> ERROR [RS_CLOSE_REGION-u1604vm:35021-1] regionserver.HRegion(1601): Memstore 
> size is 33744
> {noformat}
> from the code it looks like we want to notify the user that on close the RS 
> was not able to flush and there was still data in the memstore. 
> https://github.com/apache/hbase/blob/c3685760f004450667920144f926383eb307de53/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L1601
> {code}
> if (!canFlush) {
>   this.decrMemstoreSize(new MemstoreSize(memstoreDataSize.get(), 
> getMemstoreHeapOverhead()));
> } else if (memstoreDataSize.get() != 0) {
>   LOG.error("Memstore size is " + memstoreDataSize.get());
> }
> {code}
> this should probably not even be an error but a warn or even an info, unless 
> we have puts that specifically asked not to be written to the WAL; otherwise 
> the data in the memstore should be safe in the WALs. 
> In any case it would be nice to have a message describing what is going on and 
> why we are notifying about the memstore size.
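As a sketch of what a less cryptic message could look like (names taken from the quoted snippet; the exact wording here is illustrative, not the committed patch):

{code}
} else if (memstoreDataSize.get() != 0) {
  // WARN rather than ERROR: unless edits were written with
  // Durability.SKIP_WAL, they should be replayable from the WAL on reopen.
  LOG.warn("Closing region with " + memstoreDataSize.get()
      + " bytes of unflushed memstore data; edits should be recoverable "
      + "from the WAL unless they were written with SKIP_WAL.");
}
{code}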



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16147) Shell command for getting compaction state

2018-02-21 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-16147:
-
Summary: Shell command for getting compaction state  (was: Add ruby wrapper 
for getting compaction state)

> Shell command for getting compaction state
> --
>
> Key: HBASE-16147
> URL: https://issues.apache.org/jira/browse/HBASE-16147
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16147.v1.txt, 16147.v2.txt
>
>
> [~romil.choksi] was asking for a command that can poll compaction status from 
> the hbase shell.
> This issue is to add a ruby wrapper for getting the compaction state.
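Such a shell wrapper would presumably delegate to the Java Admin API; a minimal sketch of the underlying call (connection setup and table name are placeholders):

{code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class PollCompactionState {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Reports whether a minor/major compaction is in progress for the table.
      System.out.println(admin.getCompactionState(TableName.valueOf("mytable")));
    }
  }
}
{code}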



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20031) Unable to run integration test using mvn due to missing HBaseClassTestRule

2018-02-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20031:
---
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0-beta-2

> Unable to run integration test using mvn due to missing HBaseClassTestRule
> --
>
> Key: HBASE-20031
> URL: https://issues.apache.org/jira/browse/HBASE-20031
> Project: HBase
>  Issue Type: Test
>Affects Versions: 2.0.0-beta-1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 20031.v1.txt, 20031.v2.txt, 20031.v3.txt
>
>
> In branch-1, the following command works:
> {code}
> mvn test -Dtest=org.apache.hadoop.hbase.IntegrationTestIngest
> {code}
> For hbase2, the same command fails with the following error:
> {code}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.249 
> s <<< FAILURE! - in org.apache.hadoop.hbase.IntegrationTestIngest
> [ERROR] org.apache.hadoop.hbase.IntegrationTestIngest  Time elapsed: 0.01 s  
> <<< FAILURE!
> java.lang.AssertionError: No HBaseClassTestRule ClassRule for 
> org.apache.hadoop.hbase.IntegrationTestIngest
> {code}
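For reference, hbase2 test classes satisfy this assertion by declaring a ClassRule; a minimal sketch of the declaration (imports abbreviated to the two that matter, test body elided):

{code}
import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.junit.ClassRule;

public class IntegrationTestIngest {
  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(IntegrationTestIngest.class);
  // ... test body ...
}
{code}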



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20038) TestLockProcedure.testTimeout is flakey

2018-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371925#comment-16371925
 ] 

Hadoop QA commented on HBASE-20038:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 6s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m  
7s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  8m  
6s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 
14s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}101m 
22s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20038 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911407/HBASE-20038.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux dd70117c9595 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 92d04d5751 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11601/testReport/ |
| Max. process+thread count | 5844 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11601/console |
| 

[jira] [Commented] (HBASE-20042) TestRegionServerAbort flakey

2018-02-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372035#comment-16372035
 ] 

stack commented on HBASE-20042:
---

I pushed .001 to branch-2 and master. Let's see if it helps.

> TestRegionServerAbort flakey
> 
>
> Key: HBASE-20042
> URL: https://issues.apache.org/jira/browse/HBASE-20042
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
> Attachments: HBASE-20042.branch-2.001.patch
>
>
> Failed with a hang and an index out of bounds in the last 30 runs. The 
> timed-out run has no logs. The index-out-of-bounds failure seems basic... 
> Looking at the logs, all seems to be working... eventually... as it should.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20042) TestRegionServerAbort flakey

2018-02-21 Thread stack (JIRA)
stack created HBASE-20042:
-

 Summary: TestRegionServerAbort flakey
 Key: HBASE-20042
 URL: https://issues.apache.org/jira/browse/HBASE-20042
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
 Attachments: HBASE-20042.branch-2.001.patch

Failed with a hang and an index out of bounds in the last 30 runs. The timed-out 
run has no logs. The index-out-of-bounds failure seems basic... Looking at the 
logs, all seems to be working... eventually... as it should.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

