[jira] [Commented] (HBASE-20387) flaky infrastructure should work for all branches

2018-08-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584679#comment-16584679
 ] 

Duo Zhang commented on HBASE-20387:
---

I mean it is important for me to get the full logs of the failing UTs...

If we only record 5 builds for the flaky test job, lots of flaky tests will 
not show up on the page, so it is not easy for me to get the log unless I 
click through the flaky test runs one by one to check whether there is a 
failure for the specific test...

> flaky infrastructure should work for all branches
> -
>
> Key: HBASE-20387
> URL: https://issues.apache.org/jira/browse/HBASE-20387
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 1.2.7, 1.3.3, 2.0.2, 2.2.0, 2.1.1, 1.4.7
>
> Attachments: HBASE-20387.0.patch, HBASE-20387.1.patch
>
>
> We need a flaky list per-branch, since what does/does not work reliably on 
> master isn't really relevant to our older maintenance release lines.
> We should just make the invocation a step in the current per-branch nightly 
> jobs, prior to when we need the list in the stages that run unit tests. We 
> can publish it in the nightly job as well so that precommit can still get it. 
> (and can fetch it per-branch if needed)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21060) fix dead store in SecureBulkLoadEndpoint

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584668#comment-16584668
 ] 

Hadoop QA commented on HBASE-21060:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
32s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
25s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 33s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 16s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:34a9b27 |
| JIRA Issue | HBASE-21060 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936127/HBASE-21060-branch-1.2.v0.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile 

[jira] [Commented] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584669#comment-16584669
 ] 

Hudson commented on HBASE-20940:


Results for branch branch-2
[build #1126 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1126/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1126//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1126//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1126//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HStore.cansplit should not allow split to happen if it has references
> -
>
> Key: HBASE-20940
> URL: https://issues.apache.org/jira/browse/HBASE-20940
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.0.2, 2.2.0, 2.1.1, 1.4.7
>
> Attachments: HBASE-20940.branch-1.3.v1.patch, 
> HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, 
> HBASE-20940.branch-1.v2.patch, HBASE-20940.branch-1.v3.patch, 
> HBASE-20940.v1.patch, HBASE-20940.v2.patch, HBASE-20940.v3.patch, 
> HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log
>
>
> When a split happens and is immediately followed by another split, it may 
> result in a split of a region that still has references to its parent. More 
> details about the scenario can be found in HBASE-20933.
> HStore.hasReferences should check the store files on the filesystem rather 
> than the in-memory objects.
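
A minimal sketch of the idea, listing the column-family directory through the 
Hadoop FileSystem API instead of consulting in-memory StoreFile objects; the 
class name and the reference-file naming check are illustrative assumptions, 
not the actual HBASE-20940 patch:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class SplitGuardSketch {
  private SplitGuardSketch() {
  }

  /**
   * Returns true if any file under the given column-family directory looks
   * like a reference file, i.e. a daughter-region file that still points at
   * its parent's HFile. The naming convention assumed here (an extra
   * ".<parentEncodedRegionName>" suffix on the HFile name) is illustrative.
   */
  public static boolean hasReferences(FileSystem fs, Path familyDir) throws IOException {
    for (FileStatus status : fs.listStatus(familyDir)) {
      String name = status.getPath().getName();
      // Plain HFile names have no dot; a reference file carries the extra suffix.
      if (!name.startsWith(".") && name.contains(".")) {
        return true;
      }
    }
    return false;
  }
}
{code}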



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21060) fix dead store in SecureBulkLoadEndpoint

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584663#comment-16584663
 ] 

Hadoop QA commented on HBASE-21060:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
27s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
31s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
6m  3s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 10s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.mapred.TestMultiTableSnapshotInputFormat |
|   | hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas |
|   | hadoop.hbase.mapreduce.TestMultiTableSnapshotInputFormat |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:34a9b27 |
| JIRA Issue | 

[jira] [Commented] (HBASE-21060) fix dead store in SecureBulkLoadEndpoint

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584662#comment-16584662
 ] 

Hadoop QA commented on HBASE-21060:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1.3 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
31s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
28s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
6m 13s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 
44s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:53dba69 |
| JIRA Issue | HBASE-21060 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936125/HBASE-21060-branch-1.3.v0.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle

[jira] [Commented] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584654#comment-16584654
 ] 

Hudson commented on HBASE-20940:


Results for branch branch-2.1
[build #205 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/205/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/205//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/205//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/205//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HStore.cansplit should not allow split to happen if it has references
> -
>
> Key: HBASE-20940
> URL: https://issues.apache.org/jira/browse/HBASE-20940
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.0.2, 2.2.0, 2.1.1, 1.4.7
>
> Attachments: HBASE-20940.branch-1.3.v1.patch, 
> HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, 
> HBASE-20940.branch-1.v2.patch, HBASE-20940.branch-1.v3.patch, 
> HBASE-20940.v1.patch, HBASE-20940.v2.patch, HBASE-20940.v3.patch, 
> HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log
>
>
> When a split happens and is immediately followed by another split, it may 
> result in a split of a region that still has references to its parent. More 
> details about the scenario can be found in HBASE-20933.
> HStore.hasReferences should check the store files on the filesystem rather 
> than the in-memory objects.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584649#comment-16584649
 ] 

Hudson commented on HBASE-20940:


Results for branch branch-2.0
[build #693 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/693/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/693//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/693//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/693//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> HStore.cansplit should not allow split to happen if it has references
> -
>
> Key: HBASE-20940
> URL: https://issues.apache.org/jira/browse/HBASE-20940
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.0.2, 2.2.0, 2.1.1, 1.4.7
>
> Attachments: HBASE-20940.branch-1.3.v1.patch, 
> HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, 
> HBASE-20940.branch-1.v2.patch, HBASE-20940.branch-1.v3.patch, 
> HBASE-20940.v1.patch, HBASE-20940.v2.patch, HBASE-20940.v3.patch, 
> HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log
>
>
> When a split happens and is immediately followed by another split, it may 
> result in a split of a region that still has references to its parent. More 
> details about the scenario can be found in HBASE-20933.
> HStore.hasReferences should check the store files on the filesystem rather 
> than the in-memory objects.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584629#comment-16584629
 ] 

Hadoop QA commented on HBASE-20734:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
 0s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
25s{color} | {color:red} hbase-server: The patch generated 10 new + 403 
unchanged - 5 fixed = 413 total (was 408) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
41s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 46s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
38s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}151m 
40s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20734 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936110/HBASE-20734.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 0f9c998ecab2 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / f9793fafb7 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3c

[jira] [Commented] (HBASE-21060) fix dead store in SecureBulkLoadEndpoint

2018-08-17 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584619#comment-16584619
 ] 

Sean Busbey commented on HBASE-21060:
-

-v0 for branch-1.2
  - same as the branch-1.3 backports

> fix dead store in SecureBulkLoadEndpoint
> 
>
> Key: HBASE-21060
> URL: https://issues.apache.org/jira/browse/HBASE-21060
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Affects Versions: 1.2.7, 1.3.3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 1.2.7, 1.3.3
>
> Attachments: HBASE-21060-branch-1.2.v0.patch, 
> HBASE-21060-branch-1.3.v0.patch
>
>
> Dead store to fsSet in 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(CoprocessorEnvironment)
>  At 
> SecureBulkLoadEndpoint.java:org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(CoprocessorEnvironment)
>  At SecureBulkLoadEndpoint.java:[line 145]
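
For context, a dead store is a value written to a local variable that is then 
never read, so FindBugs flags the assignment as wasted work (or as a sign the 
result was meant to be used). A generic before/after sketch of the pattern; 
the Env, buildStagingDirSet, and initStagingDir names are hypothetical 
stand-ins, not the real SecureBulkLoadEndpoint code:

{code}
import java.util.Collections;
import java.util.Set;

class DeadStoreSketch {
  interface Env {
  }

  // Before: the returned set is assigned to a local that is never read again,
  // which FindBugs reports as "Dead store to fsSet".
  void startWithDeadStore(Env env) {
    Set<String> fsSet = buildStagingDirSet(env); // dead store: never used below
    initStagingDir(env);
  }

  // After: drop the unused assignment if the call is needed only for its side
  // effects, or pass the value on to the code that actually needs it.
  void startFixed(Env env) {
    initStagingDir(env);
  }

  private Set<String> buildStagingDirSet(Env env) {
    return Collections.emptySet();
  }

  private void initStagingDir(Env env) {
  }
}
{code}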



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21060) fix dead store in SecureBulkLoadEndpoint

2018-08-17 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-21060:

Attachment: HBASE-21060-branch-1.2.v0.patch

> fix dead store in SecureBulkLoadEndpoint
> 
>
> Key: HBASE-21060
> URL: https://issues.apache.org/jira/browse/HBASE-21060
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Affects Versions: 1.2.7, 1.3.3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 1.2.7, 1.3.3
>
> Attachments: HBASE-21060-branch-1.2.v0.patch, 
> HBASE-21060-branch-1.3.v0.patch
>
>
> Dead store to fsSet in 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(CoprocessorEnvironment)
>  At 
> SecureBulkLoadEndpoint.java:org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(CoprocessorEnvironment)
>  At SecureBulkLoadEndpoint.java:[line 145]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-17 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584616#comment-16584616
 ] 

Zach York commented on HBASE-20429:
---

I was planning to start work on some of this in a little bit, but I think we 
need to decide whether we want to:

1) fix this in the FileSystem (i.e., not change HBase's assumption of a 
strongly consistent FileSystem)

or

2) fix this in HBase, where we know what we are doing with the data and what 
guarantees are needed.

Personally I think #2 will be easier, but I would be willing to discuss. It 
might end up being a mix of things.

Also, let's start with what currently is *not* working with HBase backed by S3 
- what are the pain points we are trying to solve? That will help us direct 
the effort better. I can definitely help where I can with that list.

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can reasonably well support use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and we then serve out a read-only workload at the HBase 
> API.
> Mixed or write-heavy workloads won't fare as well. In fact, data loss seems 
> certain. It will depend on the specific filesystem, but all of the S3-backed 
> Hadoop filesystems suffer from a couple of obvious problems, notably a lack 
> of atomic rename.
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21060) fix dead store in SecureBulkLoadEndpoint

2018-08-17 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-21060:

Status: Patch Available  (was: In Progress)

-v0 for branch-1.3
  - backport HBASE-17861, differs from branch-1.4 since HBASE-20605 is in place 
already
  - backport HBASE-18512, related permission bug in the same code

> fix dead store in SecureBulkLoadEndpoint
> 
>
> Key: HBASE-21060
> URL: https://issues.apache.org/jira/browse/HBASE-21060
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Affects Versions: 1.2.7, 1.3.3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 1.2.7, 1.3.3
>
> Attachments: HBASE-21060-branch-1.3.v0.patch
>
>
> Dead store to fsSet in 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(CoprocessorEnvironment)
>  At 
> SecureBulkLoadEndpoint.java:org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(CoprocessorEnvironment)
>  At SecureBulkLoadEndpoint.java:[line 145]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21060) fix dead store in SecureBulkLoadEndpoint

2018-08-17 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-21060:

Attachment: HBASE-21060-branch-1.3.v0.patch

> fix dead store in SecureBulkLoadEndpoint
> 
>
> Key: HBASE-21060
> URL: https://issues.apache.org/jira/browse/HBASE-21060
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Affects Versions: 1.2.7, 1.3.3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 1.2.7, 1.3.3
>
> Attachments: HBASE-21060-branch-1.3.v0.patch
>
>
> Dead store to fsSet in 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(CoprocessorEnvironment)
>  At 
> SecureBulkLoadEndpoint.java:org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(CoprocessorEnvironment)
>  At SecureBulkLoadEndpoint.java:[line 145]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21070) SnapshotFileCache won't update for snapshots stored in S3

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584610#comment-16584610
 ] 

Hadoop QA commented on HBASE-21070:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
1s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 2s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} hbase-server: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 0s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 27s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}241m 53s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}276m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestAdmin1 |
|   | hadoop.hbase.master.procedure.TestCreateTableProcedure |
|   | hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21070 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936104/HBASE-21070.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux f55f00722cbd 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / f9793fafb7 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.

[jira] [Updated] (HBASE-21056) Findbugs false positive: BucketCache.persistToFile may fail to clean up java.io.OutputStream

2018-08-17 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-21056:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Findbugs false positive: BucketCache.persistToFile may fail to clean up 
> java.io.OutputStream 
> -
>
> Key: HBASE-21056
> URL: https://issues.apache.org/jira/browse/HBASE-21056
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-21056.0.patch
>
>
> Found by the nightly job via FindBugs:
> {code}
> FindBugs  module:hbase-server
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.persistToFile() may fail 
> to clean up java.io.OutputStream Obligation to clean up resource created at 
> BucketCache.java:up java.io.OutputStream Obligation to clean up resource 
> created at BucketCache.java:[line 1089] is not discharged
> {code}
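
The usual way to discharge that obligation is to open the stream with 
try-with-resources so it is closed on every exit path; in this case the 
warning turned out to be a false positive, but the pattern FindBugs looks for 
is roughly the sketch below (the persistencePath field and byte[] payload are 
stand-ins, not the real BucketCache internals):

{code}
import java.io.FileOutputStream;
import java.io.IOException;

class PersistSketch {
  private final String persistencePath;

  PersistSketch(String persistencePath) {
    this.persistencePath = persistencePath;
  }

  // Opening the stream in try-with-resources guarantees close() on both the
  // normal and the exceptional path, which discharges the cleanup obligation
  // FindBugs tracks for java.io.OutputStream.
  void persistToFile(byte[] snapshot) throws IOException {
    try (FileOutputStream fos = new FileOutputStream(persistencePath, false)) {
      fos.write(snapshot);
    }
  }
}
{code}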



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18477) Umbrella JIRA for HBase Read Replica clusters

2018-08-17 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584602#comment-16584602
 ] 

Sean Busbey commented on HBASE-18477:
-

I'm still interested. I'll queue up looking at this next week.

> Umbrella JIRA for HBase Read Replica clusters
> -
>
> Key: HBASE-18477
> URL: https://issues.apache.org/jira/browse/HBASE-18477
> Project: HBase
>  Issue Type: New Feature
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Attachments: HBase Read-Replica Clusters Scope doc.docx, HBase 
> Read-Replica Clusters Scope doc.pdf, HBase Read-Replica Clusters Scope 
> doc_v2.docx, HBase Read-Replica Clusters Scope doc_v2.pdf
>
>
> Recently, changes (such as HBASE-17437) have made it possible for HBase to 
> run with a root directory external to the cluster (such as in Amazon S3). 
> This means that the data is stored outside of the cluster and remains 
> accessible after the cluster has been terminated. One use case that is often 
> asked about is pointing multiple clusters at one root directory (sharing the 
> data) to provide read resiliency in the case of a cluster failure.
>  
> This JIRA is an umbrella JIRA to contain all the tasks necessary to create a 
> read-replica HBase cluster that is pointed at the same root directory.
>  
> This requires:
> making the Read-Replica cluster read-only (no metadata or data operations);
> separating the hbase:meta table for each cluster (otherwise HBase gets 
> confused by multiple clusters trying to update the meta table with their IP 
> addresses);
> adding refresh functionality for the meta table to ensure new metadata is 
> picked up on the read-replica cluster;
> adding refresh functionality for HFiles for a given table to ensure new data 
> is picked up on the read-replica cluster.
>  
> This can be used with any existing cluster that is backed by an external 
> filesystem.
>  
> Please note that this feature is still quite manual (with the potential for 
> automation later).
>  
> More information on this particular feature can be found here: 
> https://aws.amazon.com/blogs/big-data/setting-up-read-replica-clusters-with-hbase-on-amazon-s3/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20874) Sending compaction descriptions from all regionservers to master.

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584596#comment-16584596
 ] 

Hadoop QA commented on HBASE-20874:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
18s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 12 new + 413 unchanged - 0 fixed = 
425 total (was 413) {color} |
| {color:orange}-0{color} | {color:orange} ruby-lint {color} | {color:orange}  
0m  5s{color} | {color:orange} The patch generated 13 new + 749 unchanged - 0 
fixed = 762 total (was 749) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
19s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m  3s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}154m 
53s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
7s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
19s{color} | {color:green} The patch does not generate ASF

[jira] [Commented] (HBASE-20941) Create and implement HbckService in master

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584592#comment-16584592
 ] 

Hadoop QA commented on HBASE-20941:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
19s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
34s{color} | {color:red} hbase-client: The patch generated 10 new + 123 
unchanged - 0 fixed = 133 total (was 123) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
18s{color} | {color:red} hbase-server: The patch generated 5 new + 306 
unchanged - 0 fixed = 311 total (was 306) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
21s{color} | {color:red} hbase-mapreduce: The patch generated 2 new + 5 
unchanged - 0 fixed = 7 total (was 5) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
30s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
2m  4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hbase-client generated 1 new + 2 unchanged - 0 fixed = 
3 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
25s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}137m 
12s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:gr

[jira] [Commented] (HBASE-20387) flaky infrastructure should work for all branches

2018-08-17 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584591#comment-16584591
 ] 

Sean Busbey commented on HBASE-20387:
-


[we check the "rerun all the flaky tests" for up to 40 
runs|https://github.com/apache/hbase/blob/master/dev-support/flaky-tests/flaky-reporting.Jenkinsfile#L44]

{code}
  flaky_args=("${flaky_args[@]}" --urls 
"${JENKINS_URL}/job/HBase-Flaky-Tests/job/${BRANCH_NAME}" --is-yetus False 
--max-builds 40)
{code}

Running once an hour, that means about 2 days of lag (40 hourly runs) before a 
test that is no longer flaky ages out. I believe the "1/5" in the current 
report was just an artifact of when it ran, since the flaky run job was new.

[we check nightly tests for up to 5 
runs|https://github.com/apache/hbase/blob/master/dev-support/flaky-tests/flaky-reporting.Jenkinsfile#L43]

{code}
  flaky_args=("${flaky_args[@]}" --urls 
"${JENKINS_URL}/job/HBase%20Nightly/job/${BRANCH_NAME}" --is-yetus True 
--max-builds 5)
{code}

Running once a day, that means about a work week of lag (5 daily runs) before 
a test that is no longer flaky ages out.

[the old 
job|https://builds.apache.org/job/HBase-Find-Flaky-Tests-old-just-master/configure]
 checked the flaky runs for 30 builds and checked the nightly tests for 6 
builds.

A test has to stay out of both lists to be run in the normal builds, which 
means its failure rate needs to drop below 2.5% (one failure in the 40 tracked 
flaky-test runs) to get out.

> flaky infrastructure should work for all branches
> -
>
> Key: HBASE-20387
> URL: https://issues.apache.org/jira/browse/HBASE-20387
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 1.2.7, 1.3.3, 2.0.2, 2.2.0, 2.1.1, 1.4.7
>
> Attachments: HBASE-20387.0.patch, HBASE-20387.1.patch
>
>
> We need a flaky list per-branch, since what does/does not work reliably on 
> master isn't really relevant to our older maintenance release lines.
> We should just make the invocation a step in the current per-branch nightly 
> jobs, prior to when we need the list in the stages that run unit tests. We 
> can publish it in the nightly job as well so that precommit can still get it. 
> (and can fetch it per-branch if needed)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584587#comment-16584587
 ] 

Hadoop QA commented on HBASE-21066:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
20s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 55s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}141m 38s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 27s{color} 
| {color:red} hbase-rsgroup in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.procedure.TestTruncateTableProcedure 
|
|   | hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
|   | hadoop.hbase.master.procedure.TestCloneSnapshotProcedure |
|   | hadoop.hbase.master.procedure.TestCreateTableProcedure |
|   | hadoop.hbase.rsgroup.TestRSGroups |
|   | hadoop.hbase.rsgroup.TestRSGroupsWithACL |
|   | hadoop.hbase.rsgroup.TestRSGroupsOfflineMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21066 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936092/HBASE-210

[jira] [Commented] (HBASE-21071) HBaseTestingUtility::startMiniCluster() to use builder pattern

2018-08-17 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584586#comment-16584586
 ] 

Mingliang Liu commented on HBASE-21071:
---

Thanks [~yuzhih...@gmail.com] and [~stack] for the positive feedback.

Yes, the existing structure is not very well organized: we have multiple 
{{MiniHBaseCluster}} constructors accepting different combinations of arguments, 
13 {{startMiniCluster()}} methods, and 3 {{startMiniHBaseCluster()}} methods; plus 
{{startMiniCluster()}} ultimately calls {{startMiniHBaseCluster()}} after 
building {{MiniDFSCluster}} and {{MiniZKCluster}}.

{quote}
Does this mean the options should be MiniHBaseClusterOptions and we should 
rename this start method to be startMiniHBaseCluster.
{quote}
My previous idea was to have both DFS cluster options ({{numDataNodes}} and 
{{dataNodeHosts}}) and HBase cluster options, since most tests create the two clusters together 
via {{startMiniCluster()}}. I also see usages where only the DFS cluster is created, 
so I think an HBase cluster (option) builder also makes sense here.

{quote}
Would a MiniHBaseClusterBuilder make sense returning a MiniHBaseCluster 
instance on which you called start.
{quote}
I like this idea. Let me provide an early patch to show how the structure can 
be made clearer. I will work on the {{master}} branch first, as there I don't need to worry about 
pruning old stuff.
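
To make it concrete, here is a rough sketch of the kind of options/builder pair I have in mind; 
every class and method name below is a placeholder, not the final API:

{code:java}
// Sketch only: an immutable options holder plus a fluent builder for the mini cluster.
public final class MiniClusterOptions {
  private final int numMasters;
  private final int numRegionServers;
  private final int numDataNodes;
  private final String[] dataNodeHosts;

  private MiniClusterOptions(Builder b) {
    this.numMasters = b.numMasters;
    this.numRegionServers = b.numRegionServers;
    this.numDataNodes = b.numDataNodes;
    this.dataNodeHosts = b.dataNodeHosts;
  }

  public int getNumMasters() { return numMasters; }
  public int getNumRegionServers() { return numRegionServers; }
  public int getNumDataNodes() { return numDataNodes; }
  public String[] getDataNodeHosts() { return dataNodeHosts; }

  public static Builder builder() { return new Builder(); }

  public static final class Builder {
    private int numMasters = 1;        // defaults live in one place
    private int numRegionServers = 1;
    private int numDataNodes = 1;
    private String[] dataNodeHosts = null;

    public Builder numMasters(int n) { this.numMasters = n; return this; }
    public Builder numRegionServers(int n) { this.numRegionServers = n; return this; }
    public Builder numDataNodes(int n) { this.numDataNodes = n; return this; }
    public Builder dataNodeHosts(String[] hosts) { this.dataNodeHosts = hosts; return this; }

    public MiniClusterOptions build() { return new MiniClusterOptions(this); }
  }
}
{code}

A test would then call something like 
{{startMiniCluster(MiniClusterOptions.builder().numRegionServers(3).build())}} instead of picking 
one of the 13 overloads.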


> HBaseTestingUtility::startMiniCluster() to use builder pattern
> --
>
> Key: HBASE-21071
> URL: https://issues.apache.org/jira/browse/HBASE-21071
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> Currently there are 13 {{startMiniCluster()}} methods to set up a mini 
> cluster, and I would not be surprised if we add a few more in the future. It is good to 
> support different combinations of optional parameters, but we have to pick one 
> of the methods carefully while still wondering about the default values of the other 
> parameters; and every new option may bring more new methods.
> One solution is to use the builder pattern: create a class {{MiniClusterOptions}} 
> along with a static class {{MiniClusterOptionsBuilder}}, and add a new method 
> {{startMiniCluster(MiniClusterOptions)}}. In {{master}} we delete the old 13 
> methods, while in branch-2 we deprecate them.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584584#comment-16584584
 ] 

Hadoop QA commented on HBASE-21069:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 9s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
45s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
44s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}147m 42s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.procedure.TestFailedProcCleanup |
|   | hadoop.hbase.zookeeper.TestZKLeaderManager |
|   | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint |
|   | hadoop.hbase.mapreduce.TestLoadIncrementalHFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0

[jira] [Commented] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584576#comment-16584576
 ] 

Hudson commented on HBASE-20940:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #454 (See 
[https://builds.apache.org/job/HBase-1.3-IT/454/])
HBASE-20940 HStore.cansplit should not allow split to happen if it has 
(apurtell: rev 25a62962166ec0d0b42d54bf7857b2dfea76cc7a)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentListener.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEndToEndSplitTransaction.java


> HStore.cansplit should not allow split to happen if it has references
> -
>
> Key: HBASE-20940
> URL: https://issues.apache.org/jira/browse/HBASE-20940
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.0.2, 2.2.0, 2.1.1, 1.4.7
>
> Attachments: HBASE-20940.branch-1.3.v1.patch, 
> HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, 
> HBASE-20940.branch-1.v2.patch, HBASE-20940.branch-1.v3.patch, 
> HBASE-20940.v1.patch, HBASE-20940.v2.patch, HBASE-20940.v3.patch, 
> HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log
>
>
> When a split happens and immediately another split happens, it may result in 
> a split of a region which still has references to its parent. More details 
> about the scenario can be found in HBASE-20933.
> HStore.hasReferences should check the store files on the fs rather than the 
> in-memory objects.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584567#comment-16584567
 ] 

Andrew Purtell edited comment on HBASE-21069 at 8/18/18 1:27 AM:
-

I plan to commit these tomorrow. branch-1.3 and up. Let me know if you have any 
concerns. 


was (Author: apurtell):
I plan to commit these tomorrow. Let me know if you have any concerns.

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.7
>
> Attachments: HBASE-21069-branch-1.patch, HBASE-21069.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584567#comment-16584567
 ] 

Andrew Purtell commented on HBASE-21069:


I plan to commit these tomorrow. Let me know if you have any concerns.

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.7
>
> Attachments: HBASE-21069-branch-1.patch, HBASE-21069.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang resolved HBASE-21066.
-
Resolution: Won't Fix

> Improve isTableState() method to ensure caller gets correct info
> 
>
> Key: HBASE-21066
> URL: https://issues.apache.org/jira/browse/HBASE-21066
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.3.0, 2.0.0
>Reporter: Xu Cang
>Priority: Minor
> Attachments: HBASE-21066.master.001.patch, 
> HBASE-21066.master.002.patch
>
>
>  
> {code:java}
> public boolean isTableState(TableName tableName, TableState.State... states) {
>  try {
>  TableState tableState = getTableState(tableName);
>  return tableState.isInStates(states);
>  } catch (IOException e) {
>  LOG.error("Unable to get table " + tableName + " state", e);
>  // XXX: is it safe to just return false here?
>  return false;
>  }
>  }
>  
> {code}
>  
> When the table state cannot be fetched, returning false is not always safe or correct.
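
For illustration only (this is not from either attached patch, and it simply mirrors the snippet 
quoted above): one way to avoid the ambiguous {{false}} would be to propagate the exception so 
the caller can distinguish "not in the requested states" from "could not read the state at all":

{code:java}
public boolean isTableState(TableName tableName, TableState.State... states) throws IOException {
  // Let the IOException reach the caller instead of folding it into "false".
  TableState tableState = getTableState(tableName);
  return tableState.isInStates(states);
}
{code}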



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584565#comment-16584565
 ] 

Andrew Purtell commented on HBASE-21069:


Attaching a patch for master that also adds the check for null 
{{memstoreScanners}}; even if the NPE is not seen on branch-2+, it doesn't hurt to be 
defensive.
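
For reference, the shape of the defensive check is roughly the following; this is only an 
illustration of the idea (tolerate a null scanner list instead of handing it straight to 
{{new ArrayList<>(...)}}), not the literal patch hunk:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class NullSafeScanners {
  // Treat a null input as an empty list so the ArrayList copy constructor cannot NPE.
  static <T> List<T> nullSafeCopy(List<T> maybeNull) {
    return new ArrayList<>(maybeNull == null ? Collections.<T>emptyList() : maybeNull);
  }
}
{code}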

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.7
>
> Attachments: HBASE-21069-branch-1.patch, HBASE-21069.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21069:
---
Attachment: HBASE-21069.patch

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.7
>
> Attachments: HBASE-21069-branch-1.patch, HBASE-21069.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21056) Findbugs false positive: BucketCache.persistToFile may fail to clean up java.io.OutputStream

2018-08-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584563#comment-16584563
 ] 

stack commented on HBASE-21056:
---

I enjoy reading these detective novels [~busbey]

> Findbugs false positive: BucketCache.persistToFile may fail to clean up 
> java.io.OutputStream 
> -
>
> Key: HBASE-21056
> URL: https://issues.apache.org/jira/browse/HBASE-21056
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-21056.0.patch
>
>
> Found by the nightly job via FindBugs:
> {code}
> FindBugs  module:hbase-server
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.persistToFile() may fail 
> to clean up java.io.OutputStream Obligation to clean up resource created at 
> BucketCache.java:up java.io.OutputStream Obligation to clean up resource 
> created at BucketCache.java:[line 1089] is not discharged
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references

2018-08-17 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20940:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.7
   2.1.1
   2.2.0
   2.0.2
   1.3.3
   1.5.0
   Status: Resolved  (was: Patch Available)

Pushed bugfix to all active branches except branch-1.2, which doesn't carry the 
same code

> HStore.cansplit should not allow split to happen if it has references
> -
>
> Key: HBASE-20940
> URL: https://issues.apache.org/jira/browse/HBASE-20940
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.0.2, 2.2.0, 2.1.1, 1.4.7
>
> Attachments: HBASE-20940.branch-1.3.v1.patch, 
> HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, 
> HBASE-20940.branch-1.v2.patch, HBASE-20940.branch-1.v3.patch, 
> HBASE-20940.v1.patch, HBASE-20940.v2.patch, HBASE-20940.v3.patch, 
> HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log
>
>
> When a split happens and immediately another split happens, it may result in 
> a split of a region which still has references to its parent. More details 
> about the scenario can be found in HBASE-20933.
> HStore.hasReferences should check the store files on the fs rather than the 
> in-memory objects.
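
As a sketch of the direction described in the issue, i.e. answering "does this store still have 
reference files?" from the FileSystem rather than from in-memory state (illustrative only; 
{{isReferenceFile()}} below is a stand-in predicate, not an existing HBase method):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class StoreReferenceCheck {
  // List the store directory on the FileSystem and look for reference files there.
  static boolean hasReferences(FileSystem fs, Path storeDir) throws IOException {
    for (FileStatus st : fs.listStatus(storeDir)) {
      if (isReferenceFile(st.getPath())) {
        return true;
      }
    }
    return false;
  }

  // Stand-in: the real code would use HBase's own notion of a reference file.
  static boolean isReferenceFile(Path p) {
    return p.getName().contains(".");  // placeholder heuristic only
  }
}
{code}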



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21056) Findbugs false positive: BucketCache.persistToFile may fail to clean up java.io.OutputStream

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584559#comment-16584559
 ] 

Andrew Purtell commented on HBASE-21056:


+1

> Findbugs false positive: BucketCache.persistToFile may fail to clean up 
> java.io.OutputStream 
> -
>
> Key: HBASE-21056
> URL: https://issues.apache.org/jira/browse/HBASE-21056
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-21056.0.patch
>
>
> Found by the nightly job via FindBugs:
> {code}
> FindBugs  module:hbase-server
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.persistToFile() may fail 
> to clean up java.io.OutputStream Obligation to clean up resource created at 
> BucketCache.java:up java.io.OutputStream Obligation to clean up resource 
> created at BucketCache.java:[line 1089] is not discharged
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584556#comment-16584556
 ] 

stack commented on HBASE-20429:
---

(note to self, invite [~ste...@apache.org] to any meeting if it happens, and 
review s3guard)

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend in the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584555#comment-16584555
 ] 

Steve Loughran commented on HBASE-20429:


One thing which would be good for you all to write down is: what are your 
expectations of an FS to work.

in particular
* create/read/update/delete consistency
* listing consistency
* which ops are required to be atomic and O(1)
* is it ok for create(path, overwrite=false) to be non-atomic?
* when you expect things to be written to store
* how long do you expect the final close() to take.

Identify these things and you can start to see what stores can work. And show 
you where you need to involve other things for the semantics you need. 
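
As one example of turning an item from that list into a concrete check (illustrative only: this 
probes whether a second {{create(path, overwrite=false)}} is rejected at all, a real atomicity 
test would need concurrency, and the s3a URI is a placeholder):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateNoOverwriteProbe {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(URI.create("s3a://some-bucket/"), new Configuration());
    Path p = new Path("/probe/create-no-overwrite");
    try (FSDataOutputStream out = fs.create(p, false)) {
      out.writeInt(1);                 // first create should succeed
    }
    boolean rejected;
    try (FSDataOutputStream out = fs.create(p, false)) {
      rejected = false;                // store recreated an existing path
    } catch (java.io.IOException expected) {
      rejected = true;                 // HDFS-like behaviour: the second create fails
    }
    System.out.println("create(path, overwrite=false) rejected existing path: " + rejected);
  }
}
{code}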

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend in the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-17 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584555#comment-16584555
 ] 

Steve Loughran edited comment on HBASE-20429 at 8/18/18 1:02 AM:
-

One thing which would be good for you all to write down is: what are your 
expectations of an FS to work.

in particular
* create/read/update/delete consistency
* listing consistency
* which ops are required to be atomic and O(1)
* is it ok for create(path, overwrite=false) to be non-atomic?
* when you expect things to be written to store
* how long do you expect the final close() to take.

Identify these things and you can start to see what stores can work. And show 
you where you need to involve other things for the semantics you need. 


was (Author: ste...@apache.org):
One thing which would be good for you all to write down is: what are your 
expectations of an FS to work.

in particular
* create/read/update/delete consistency
* listing consistency
* which ops are required to be atomic and O(1)
* is it ok for create(path, overwrite=false) to be non-atomic?
* when you expect things to be written to store
* how long do you expect the final close() to take.

Identify these things and you can start to see what stores can work. And show 
you where you need to involve other thigs for the semantics you need. 

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend in the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20881) Introduce a region transition procedure to handle all the state transition for a region

2018-08-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584554#comment-16584554
 ] 

Duo Zhang commented on HBASE-20881:
---

It's fine. Let me prepare a new patch to answer your questions on rb.

> Introduce a region transition procedure to handle all the state transition 
> for a region
> ---
>
> Key: HBASE-20881
> URL: https://issues.apache.org/jira/browse/HBASE-20881
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-20881-v1.patch, HBASE-20881-v10.patch, 
> HBASE-20881-v11.patch, HBASE-20881-v12.patch, HBASE-20881-v13.patch, 
> HBASE-20881-v13.patch, HBASE-20881-v2.patch, HBASE-20881-v3.patch, 
> HBASE-20881-v4.patch, HBASE-20881-v4.patch, HBASE-20881-v5.patch, 
> HBASE-20881-v6.patch, HBASE-20881-v7.patch, HBASE-20881-v7.patch, 
> HBASE-20881-v8.patch, HBASE-20881-v9.patch, HBASE-20881.patch
>
>
> Now we have an AssignProcedure, an UnassignProcedure, and also a 
> MoveRegionProcedure which schedules an AssignProcedure and an 
> UnassignProcedure to move a region. This makes the logic a bit complicated, as 
> MRP is not a RIT, so SCP can not interrupt it directly...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20881) Introduce a region transition procedure to handle all the state transition for a region

2018-08-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584552#comment-16584552
 ] 

stack commented on HBASE-20881:
---

Good by me. Let's do another round of review first though? I can get you 
feedback over the next day or so. Thanks.

> Introduce a region transition procedure to handle all the state transition 
> for a region
> ---
>
> Key: HBASE-20881
> URL: https://issues.apache.org/jira/browse/HBASE-20881
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-20881-v1.patch, HBASE-20881-v10.patch, 
> HBASE-20881-v11.patch, HBASE-20881-v12.patch, HBASE-20881-v13.patch, 
> HBASE-20881-v13.patch, HBASE-20881-v2.patch, HBASE-20881-v3.patch, 
> HBASE-20881-v4.patch, HBASE-20881-v4.patch, HBASE-20881-v5.patch, 
> HBASE-20881-v6.patch, HBASE-20881-v7.patch, HBASE-20881-v7.patch, 
> HBASE-20881-v8.patch, HBASE-20881-v9.patch, HBASE-20881.patch
>
>
> Now we have an AssignProcedure, an UnassignProcedure, and also a 
> MoveRegionProcedure which schedules an AssignProcedure and an 
> UnassignProcedure to move a region. This makes the logic a bit complicated, as 
> MRP is not a RIT, so SCP can not interrupt it directly...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584550#comment-16584550
 ] 

stack commented on HBASE-21066:
---

[~xucang] should we resolve as won't fix then? Thanks

> Improve isTableState() method to ensure caller gets correct info
> 
>
> Key: HBASE-21066
> URL: https://issues.apache.org/jira/browse/HBASE-21066
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.3.0, 2.0.0
>Reporter: Xu Cang
>Priority: Minor
> Attachments: HBASE-21066.master.001.patch, 
> HBASE-21066.master.002.patch
>
>
>  
> {code:java}
> public boolean isTableState(TableName tableName, TableState.State... states) {
>  try {
>  TableState tableState = getTableState(tableName);
>  return tableState.isInStates(states);
>  } catch (IOException e) {
>  LOG.error("Unable to get table " + tableName + " state", e);
>  // XXX: is it safe to just return false here?
>  return false;
>  }
>  }
>  
> {code}
>  
> When the table state cannot be fetched, returning false is not always safe or correct.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584548#comment-16584548
 ] 

stack commented on HBASE-21069:
---

Ok.

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.7
>
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20881) Introduce a region transition procedure to handle all the state transition for a region

2018-08-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584546#comment-16584546
 ] 

Duo Zhang commented on HBASE-20881:
---

I believe there are still some bugs, maybe around unassigning the regions when 
deleting a table, but they are hard to catch as we can not see the full logs in the 
pre-commit results. Maybe we could push it to master first and then focus on the 
flaky dashboard to fix the problems.

> Introduce a region transition procedure to handle all the state transition 
> for a region
> ---
>
> Key: HBASE-20881
> URL: https://issues.apache.org/jira/browse/HBASE-20881
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-20881-v1.patch, HBASE-20881-v10.patch, 
> HBASE-20881-v11.patch, HBASE-20881-v12.patch, HBASE-20881-v13.patch, 
> HBASE-20881-v13.patch, HBASE-20881-v2.patch, HBASE-20881-v3.patch, 
> HBASE-20881-v4.patch, HBASE-20881-v4.patch, HBASE-20881-v5.patch, 
> HBASE-20881-v6.patch, HBASE-20881-v7.patch, HBASE-20881-v7.patch, 
> HBASE-20881-v8.patch, HBASE-20881-v9.patch, HBASE-20881.patch
>
>
> Now we have an AssignProcedure, an UnassignProcedure, and also a 
> MoveRegionProcedure which schedules an AssignProcedure and an 
> UnassignProcedure to move a region. This makes the logic a bit complicated, as 
> MRP is not a RIT, so SCP can not interrupt it directly...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20387) flaky infrastructure should work for all branches

2018-08-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584545#comment-16584545
 ] 

Duo Zhang commented on HBASE-20387:
---

We only count 5 builds for the flaky jobs now? This page:

https://builds.apache.org/job/HBASE-Find-Flaky-Tests/job/master/lastSuccessfulBuild/artifact/dashboard.html

It used to be 30, I think. In the past the failing rate for some flaky tests could 
be less than 20%...

Thanks.

> flaky infrastructure should work for all branches
> -
>
> Key: HBASE-20387
> URL: https://issues.apache.org/jira/browse/HBASE-20387
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 1.2.7, 1.3.3, 2.0.2, 2.2.0, 2.1.1, 1.4.7
>
> Attachments: HBASE-20387.0.patch, HBASE-20387.1.patch
>
>
> We need a flaky list per-branch, since what does/does not work reliably on 
> master isn't really relevant to our older maintenance release lines.
> We should just make the invocation a step in the current per-branch nightly 
> jobs, prior to when we need the list in the stages that run unit tests. We 
> can publish it in the nightly job as well so that precommit can still get it. 
> (and can fetch it per-branch if needed)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584543#comment-16584543
 ] 

stack commented on HBASE-20429:
---

[~apurtell]

Should we pow-wow on s3'ing? A meeting/hangout? I see a bunch of efforts in 
this direction (e.g. WAL elsewhere). Perhaps it'd be possible for there to be a 
bit of coordination.

I like your talking out loud about your experience. That helps.

I'd be interested too in how we could avoid fsredo.

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend in the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584541#comment-16584541
 ] 

Andrew Purtell edited comment on HBASE-21069 at 8/18/18 12:45 AM:
--

This is the unintended consequence of application of 2eaa24a1323 (HBASE-17885) 
after 9ced0c936f4 (HBASE-20322), the former a backport, so it was just 
something overlooked. The null test added by this patch seems like the right 
thing to do. I have the branch-1 suite running after this change, so far so 
good, not expecting problems. 


was (Author: apurtell):
This is the unintended consequence of application of 2eaa24a1323 (HBASE-17885) 
after 9ced0c936f4 (HBASE-20322), the former a backport, so it was just 
something overlooked. The null test seems like the right thing to do. I have 
the branch-1 suite running after this change, so far so good, not expecting 
problems. 

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.7
>
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584541#comment-16584541
 ] 

Andrew Purtell commented on HBASE-21069:


This is an unintended consequence of applying 2eaa24a1323 (HBASE-17885) after 
9ced0c936f4 (HBASE-20322); the former was a backport, so this was simply 
overlooked. The null test seems like the right thing to do. I have the branch-1 
suite running with this change; so far so good, and I am not expecting 
problems. 
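
For readers following along, a minimal sketch of the "null test" idea: treat a 
null memstore-scanner list as empty instead of handing it straight to 
{{new ArrayList<>(...)}}, which is where the NPE above originates. The class and 
signature are simplified stand-ins, not the actual HBASE-21069 patch.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified stand-in for the StoreScanner.updateReaders() path.
class UpdateReadersSketch {
  // The real parameter type is List<KeyValueScanner>; Object keeps the sketch self-contained.
  void updateReaders(List<Object> memStoreScanners) {
    // Null test: a caller missed by the backport may still pass null here.
    List<Object> flushedScanners = memStoreScanners == null
        ? new ArrayList<Object>()
        : new ArrayList<Object>(memStoreScanners);
    // ... the rest of updateReaders() would rebuild the scanner stack from
    // flushedScanners instead of dereferencing the raw parameter ...
  }
}
{code}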

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.7
>
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread Xu Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584538#comment-16584538
 ] 

Xu Cang commented on HBASE-21066:
-

Reviewed HBASE-7767 and the ways we are using this method. I have changed my 
mind and now think that catching the exception here and returning false is a 
good solution. 

The semantics of #isTableState are: "Return true if the table is in these 
states, otherwise return false."

For #isTableDisabled it is similar: "Return true when the table is indeed 
disabled, return false when the table state is not disabled (it could be 
enabled or unknown)."

I am canceling the patch for this issue. HBASE-20690 is still valid, though; 
there may be something to improve there.

 

 

> Improve isTableState() method to ensure caller gets correct info
> 
>
> Key: HBASE-21066
> URL: https://issues.apache.org/jira/browse/HBASE-21066
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.3.0, 2.0.0
>Reporter: Xu Cang
>Priority: Minor
> Attachments: HBASE-21066.master.001.patch, 
> HBASE-21066.master.002.patch
>
>
>  
> {code:java}
> public boolean isTableState(TableName tableName, TableState.State... states) {
>  try {
>  TableState tableState = getTableState(tableName);
>  return tableState.isInStates(states);
>  } catch (IOException e) {
>  LOG.error("Unable to get table " + tableName + " state", e);
>  // XXX: is it safe to just return false here?
>  return false;
>  }
>  }
>  
> {code}
>  
> When cannot get table state, returning false is not always safe or correct.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned HBASE-21066:
---

Assignee: (was: Xu Cang)

> Improve isTableState() method to ensure caller gets correct info
> 
>
> Key: HBASE-21066
> URL: https://issues.apache.org/jira/browse/HBASE-21066
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.3.0, 2.0.0
>Reporter: Xu Cang
>Priority: Minor
> Attachments: HBASE-21066.master.001.patch, 
> HBASE-21066.master.002.patch
>
>
>  
> {code:java}
> public boolean isTableState(TableName tableName, TableState.State... states) {
>  try {
>  TableState tableState = getTableState(tableName);
>  return tableState.isInStates(states);
>  } catch (IOException e) {
>  LOG.error("Unable to get table " + tableName + " state", e);
>  // XXX: is it safe to just return false here?
>  return false;
>  }
>  }
>  
> {code}
>  
> When cannot get table state, returning false is not always safe or correct.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-21066:

Status: Open  (was: Patch Available)

> Improve isTableState() method to ensure caller gets correct info
> 
>
> Key: HBASE-21066
> URL: https://issues.apache.org/jira/browse/HBASE-21066
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.3.0, 3.0.0
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-21066.master.001.patch, 
> HBASE-21066.master.002.patch
>
>
>  
> {code:java}
> public boolean isTableState(TableName tableName, TableState.State... states) {
>  try {
>  TableState tableState = getTableState(tableName);
>  return tableState.isInStates(states);
>  } catch (IOException e) {
>  LOG.error("Unable to get table " + tableName + " state", e);
>  // XXX: is it safe to just return false here?
>  return false;
>  }
>  }
>  
> {code}
>  
> When cannot get table state, returning false is not always safe or correct.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584532#comment-16584532
 ] 

stack commented on HBASE-21069:
---

Poke around? IIRC, this updateReaders path is an old source of NPEs... Just a 
suggestion.

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.7
>
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584531#comment-16584531
 ] 

stack commented on HBASE-21066:
---

Thanks for working on this. Is it right, though, to convert all IOExceptions 
into TableNotFoundException? At least add the IOException as the cause to the 
TNFE when throwing it?
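
Purely to illustrate that suggestion, a hedged sketch of rethrowing with the 
IOException attached as the cause so the original failure is not lost. The 
method and exception type below are simplified stand-ins; the real code would 
use TableNotFoundException, if that conversion is kept at all.

{code:java}
import java.io.IOException;

// Illustrative only; names loosely mirror the snippet in the description.
class TableStateSketch {
  boolean isTableState(String tableName, String... states) {
    try {
      return lookupState(tableName) != null; // stand-in for tableState.isInStates(states)
    } catch (IOException e) {
      // Wrap rather than swallow: the caller can still see why the lookup failed.
      throw new IllegalStateException("Unable to get table " + tableName + " state", e);
    }
  }

  private String lookupState(String tableName) throws IOException {
    throw new IOException("stand-in for a failed meta lookup");
  }
}
{code}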

> Improve isTableState() method to ensure caller gets correct info
> 
>
> Key: HBASE-21066
> URL: https://issues.apache.org/jira/browse/HBASE-21066
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.3.0, 2.0.0
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-21066.master.001.patch, 
> HBASE-21066.master.002.patch
>
>
>  
> {code:java}
> public boolean isTableState(TableName tableName, TableState.State... states) {
>  try {
>  TableState tableState = getTableState(tableName);
>  return tableState.isInStates(states);
>  } catch (IOException e) {
>  LOG.error("Unable to get table " + tableName + " state", e);
>  // XXX: is it safe to just return false here?
>  return false;
>  }
>  }
>  
> {code}
>  
> When cannot get table state, returning false is not always safe or correct.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21071) HBaseTestingUtility::startMiniCluster() to use builder pattern

2018-08-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584529#comment-16584529
 ] 

stack commented on HBASE-21071:
---

Or, I just took a look... the startMiniCluster method starts a MiniHBaseCluster. 
Does this mean the options should be MiniHBaseClusterOptions, and that we should 
rename the start method to startMiniHBaseCluster?

These test classes are messy. They grew over time. They intentionally allow the 
user many options, but yes, as you note, the plethora tends to overwhelm, making 
things unreadable at a certain point. Would a MiniHBaseClusterBuilder make sense, 
returning a MiniHBaseCluster instance on which you then call start?

Sorry for the messy feedback. You seem good at 'design' -- though we have had 
limited interaction -- so I feel ok pushing back and offering a larger refactor 
(smile).

> HBaseTestingUtility::startMiniCluster() to use builder pattern
> --
>
> Key: HBASE-21071
> URL: https://issues.apache.org/jira/browse/HBASE-21071
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> Currently there are 13 {{startMiniCluster()}} methods to set up a mini 
> cluster. I'm not surprised if we have a few more in future. It's good to 
> support different combination of optional parameters. We have to pick up one 
> of them carefully while still wondering the default values of other 
> parameters; if we add a new option, we may bring more new methods.
> One solution is to use builder pattern: create a class {{MiniClusterOptions}} 
> along with a static class {{MiniClusterOptionsBuilder}}, create a new method  
> {{startMiniCluster(MiniClusterOptions)}}. In {{master}} we delete the old 13 
> methods while in branch-2, we deprecate the old 13 methods.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21062) WALFactory has misleading notion of "default"

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584524#comment-16584524
 ] 

Hudson commented on HBASE-21062:


Results for branch branch-2.0
[build #692 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/692/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/692//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/692//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/692//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> WALFactory has misleading notion of "default"
> -
>
> Key: HBASE-21062
> URL: https://issues.apache.org/jira/browse/HBASE-21062
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1
>
> Attachments: HBASE-21062.001.branch-2.0.patch, 
> HBASE-21062.002.branch-2.0.patch
>
>
> In WALFactory, there is an enum {{Providers}} which has a list of supported 
> WALProvider implementations. In addition to list this, there is also a 
> {{defaultProvider}} (which the Configuration defaults to), that is meant to 
> be our "advertised" default WALProvider.
> However, the implementation of {{getProviderClass}} in WALFactory doesn't 
> actually adhere to the value of this enum, instead *always* returning 
> AsyncFSWal if it can be loaded.
> Having the default value in the enum but then overriding it in the 
> implementation of {{getProviderClass}} is silly and misleading.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21071) HBaseTestingUtility::startMiniCluster() to use builder pattern

2018-08-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584522#comment-16584522
 ] 

stack commented on HBASE-21071:
---

Thank you for taking this up [~liuml07].

Should it be StartMiniClusterOptions to tie the new Builder tighter to the 
startMiniCluster method? (Will other methods in MiniCluster want to take 
options?)

Otherwise, looks great.

> HBaseTestingUtility::startMiniCluster() to use builder pattern
> --
>
> Key: HBASE-21071
> URL: https://issues.apache.org/jira/browse/HBASE-21071
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> Currently there are 13 {{startMiniCluster()}} methods to set up a mini 
> cluster. I'm not surprised if we have a few more in future. It's good to 
> support different combination of optional parameters. We have to pick up one 
> of them carefully while still wondering the default values of other 
> parameters; if we add a new option, we may bring more new methods.
> One solution is to use builder pattern: create a class {{MiniClusterOptions}} 
> along with a static class {{MiniClusterOptionsBuilder}}, create a new method  
> {{startMiniCluster(MiniClusterOptions)}}. In {{master}} we delete the old 13 
> methods while in branch-2, we deprecate the old 13 methods.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20705) Having RPC Quota on a table prevents Space quota to be recreated/removed

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584517#comment-16584517
 ] 

Hudson commented on HBASE-20705:


Results for branch branch-2.1
[build #204 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/204/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/204//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/204//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/204//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Having RPC Quota on a table prevents Space quota to be recreated/removed
> 
>
> Key: HBASE-20705
> URL: https://issues.apache.org/jira/browse/HBASE-20705
> Project: HBase
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: hbase-20705.master.001.patch
>
>
> * Property {{hbase.quota.remove.on.table.delete}} is set to {{true}} by 
> default
>  * Create a table and set RPC and Space quota
> {noformat}
> hbase(main):022:0> create 't2','cf1'
> Created table t2
> Took 0.7420 seconds
> => Hbase::Table - t2
> hbase(main):023:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', 
> POLICY => NO_WRITES
> Took 0.0105 seconds
> hbase(main):024:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 
> '10M/sec'
> Took 0.0186 seconds
> hbase(main):025:0> list_quotas
> TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 
> 10M/sec, SCOPE => MACHINE
> TABLE => t2 TYPE => SPACE, TABLE => t2, LIMIT => 1073741824, VIOLATION_POLICY 
> => NO_WRITES{noformat}
>  * Drop the table and the Space quota is set to {{REMOVE => true}}
> {noformat}
> hbase(main):026:0> disable 't2'
> Took 0.4363 seconds
> hbase(main):027:0> drop 't2'
> Took 0.2344 seconds
> hbase(main):028:0> list_quotas
> TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true
> USER => u1 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 10M/sec, 
> SCOPE => MACHINE{noformat}
>  * Recreate the table and set Space quota back. The Space quota on the table 
> is still set to {{REMOVE => true}}
> {noformat}
> hbase(main):029:0> create 't2','cf1'
> Created table t2
> Took 0.7348 seconds
> => Hbase::Table - t2
> hbase(main):031:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', 
> POLICY => NO_WRITES
> Took 0.0088 seconds
> hbase(main):032:0> list_quotas
> OWNER QUOTAS
> TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 
> 10M/sec, SCOPE => MACHINE
> TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true{noformat}
>  * Remove RPC quota and drop the table, the Space Quota is not removed
> {noformat}
> hbase(main):033:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => NONE
> Took 0.0193 seconds
> hbase(main):036:0> disable 't2'
> Took 0.4305 seconds
> hbase(main):037:0> drop 't2'
> Took 0.2353 seconds
> hbase(main):038:0> list_quotas
> OWNER QUOTAS
> TABLE => t2                               TYPE => SPACE, TABLE => t2, REMOVE 
> => true{noformat}
>  * Deleting the quota entry from {{hbase:quota}} seems to be the option to 
> reset it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21062) WALFactory has misleading notion of "default"

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584518#comment-16584518
 ] 

Hudson commented on HBASE-21062:


Results for branch branch-2.1
[build #204 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/204/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/204//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/204//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/204//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> WALFactory has misleading notion of "default"
> -
>
> Key: HBASE-21062
> URL: https://issues.apache.org/jira/browse/HBASE-21062
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1
>
> Attachments: HBASE-21062.001.branch-2.0.patch, 
> HBASE-21062.002.branch-2.0.patch
>
>
> In WALFactory, there is an enum {{Providers}} which has a list of supported 
> WALProvider implementations. In addition to list this, there is also a 
> {{defaultProvider}} (which the Configuration defaults to), that is meant to 
> be our "advertised" default WALProvider.
> However, the implementation of {{getProviderClass}} in WALFactory doesn't 
> actually adhere to the value of this enum, instead *always* returning 
> AsyncFSWal if it can be loaded.
> Having the default value in the enum but then overriding it in the 
> implementation of {{getProviderClass}} is silly and misleading.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21071) HBaseTestingUtility::startMiniCluster() to use builder pattern

2018-08-17 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584511#comment-16584511
 ] 

Ted Yu commented on HBASE-21071:


Overall, I like this initiative.

It seems the Builder can be a class within MiniClusterOptions.
A pattern in the current codebase would look something like this:

MiniClusterOptions.newBuilder().setX().setY().build();
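
A minimal sketch of that shape, purely for illustration -- the Builder nested 
inside the hypothetical options class; option names are placeholders and not 
taken from any patch.

{code:java}
// Hypothetical MiniClusterOptions with a nested Builder, as suggested above.
final class MiniClusterOptions {
  private final int numMasters;
  private final int numRegionServers;
  private final int numDataNodes;

  private MiniClusterOptions(Builder b) {
    this.numMasters = b.numMasters;
    this.numRegionServers = b.numRegionServers;
    this.numDataNodes = b.numDataNodes;
  }

  int numMasters() { return numMasters; }
  int numRegionServers() { return numRegionServers; }
  int numDataNodes() { return numDataNodes; }

  static Builder newBuilder() { return new Builder(); }

  static final class Builder {
    private int numMasters = 1;         // defaults live in exactly one place
    private int numRegionServers = 1;
    private int numDataNodes = 1;

    Builder setNumMasters(int n) { this.numMasters = n; return this; }
    Builder setNumRegionServers(int n) { this.numRegionServers = n; return this; }
    Builder setNumDataNodes(int n) { this.numDataNodes = n; return this; }

    MiniClusterOptions build() { return new MiniClusterOptions(this); }
  }
}
{code}

A call site would then read {{MiniClusterOptions.newBuilder().setNumRegionServers(3).build()}}, 
naming only the options that differ from the defaults.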

> HBaseTestingUtility::startMiniCluster() to use builder pattern
> --
>
> Key: HBASE-21071
> URL: https://issues.apache.org/jira/browse/HBASE-21071
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> Currently there are 13 {{startMiniCluster()}} methods to set up a mini 
> cluster. I'm not surprised if we have a few more in future. It's good to 
> support different combination of optional parameters. We have to pick up one 
> of them carefully while still wondering the default values of other 
> parameters; if we add a new option, we may bring more new methods.
> One solution is to use builder pattern: create a class {{MiniClusterOptions}} 
> along with a static class {{MiniClusterOptionsBuilder}}, create a new method  
> {{startMiniCluster(MiniClusterOptions)}}. In {{master}} we delete the old 13 
> methods while in branch-2, we deprecate the old 13 methods.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21055) NullPointerException when balanceOverall() but server balance info is null

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584510#comment-16584510
 ] 

Andrew Purtell commented on HBASE-21055:


Seems fine. We should apply to all relevant branches.
+1

> NullPointerException when balanceOverall() but server balance info is null 
> ---
>
> Key: HBASE-21055
> URL: https://issues.apache.org/jira/browse/HBASE-21055
> Project: HBase
>  Issue Type: Bug
>  Components: Balancer
>Affects Versions: 2.1.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-21055.branch-2.1.001.patch
>
>
> 2018-08-15,10:07:30,456 ERROR [master/c4-hadoop-tst-ct15:42900.Chore.1] 
> org.apache.hadoop.hbase.ScheduledChore: Caught error
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer.balanceOverall(SimpleLoadBalancer.java:482)
> at 
> org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer.balanceCluster(SimpleLoadBalancer.java:426)
> at 
> org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer.balanceCluster(SimpleLoadBalancer.java:592)
> at org.apache.hadoop.hbase.master.HMaster.balance(HMaster.java:1535)
> at org.apache.hadoop.hbase.master.HMaster.balance(HMaster.java:1466)
> at 
> org.apache.hadoop.hbase.master.balancer.BalancerChore.chore(BalancerChore.java:49)
> at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-08-17 Thread Zach York (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-20734:
--
Attachment: HBASE-20734.master.004.patch

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20734.branch-1.001.patch, 
> HBASE-20734.master.001.patch, HBASE-20734.master.002.patch, 
> HBASE-20734.master.003.patch, HBASE-20734.master.004.patch
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than hbase rootdir w.r.t. recovered edits since recovered edits directory is 
> currently under rootdir.
> Such setup may not result in fast recovery when there is region server 
> failover.
> This issue is to find proper (hopefully backward compatible) way in 
> colocating recovered edits directory with hbase.wal.dir .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-08-17 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584509#comment-16584509
 ] 

Zach York commented on HBASE-20734:
---

Latest patch fixes the tests.

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20734.branch-1.001.patch, 
> HBASE-20734.master.001.patch, HBASE-20734.master.002.patch, 
> HBASE-20734.master.003.patch, HBASE-20734.master.004.patch
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than hbase rootdir w.r.t. recovered edits since recovered edits directory is 
> currently under rootdir.
> Such setup may not result in fast recovery when there is region server 
> failover.
> This issue is to find proper (hopefully backward compatible) way in 
> colocating recovered edits directory with hbase.wal.dir .



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21071) HBaseTestingUtility::startMiniCluster() to use builder pattern

2018-08-17 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584507#comment-16584507
 ] 

Mingliang Liu commented on HBASE-21071:
---

ping [~te...@apache.org] and [~stack]. Thanks,

> HBaseTestingUtility::startMiniCluster() to use builder pattern
> --
>
> Key: HBASE-21071
> URL: https://issues.apache.org/jira/browse/HBASE-21071
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> Currently there are 13 {{startMiniCluster()}} methods to set up a mini 
> cluster. I'm not surprised if we have a few more in future. It's good to 
> support different combination of optional parameters. We have to pick up one 
> of them carefully while still wondering the default values of other 
> parameters; if we add a new option, we may bring more new methods.
> One solution is to use builder pattern: create a class {{MiniClusterOptions}} 
> along with a static class {{MiniClusterOptionsBuilder}}, create a new method  
> {{startMiniCluster(MiniClusterOptions)}}. In {{master}} we delete the old 13 
> methods while in branch-2, we deprecate the old 13 methods.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21071) HBaseTestingUtility::startMiniCluster() to use builder pattern

2018-08-17 Thread Mingliang Liu (JIRA)
Mingliang Liu created HBASE-21071:
-

 Summary: HBaseTestingUtility::startMiniCluster() to use builder 
pattern
 Key: HBASE-21071
 URL: https://issues.apache.org/jira/browse/HBASE-21071
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


Currently there are 13 {{startMiniCluster()}} methods to set up a mini cluster, 
and I would not be surprised if we add a few more in the future. It is good to 
support different combinations of optional parameters, but we have to pick one 
of the overloads carefully while still wondering about the default values of 
the other parameters; and every new option may bring yet more methods.

One solution is to use the builder pattern: create a class {{MiniClusterOptions}} 
along with a static class {{MiniClusterOptionsBuilder}}, and add a new method 
{{startMiniCluster(MiniClusterOptions)}}. In {{master}} we delete the old 13 
methods, while in branch-2 we deprecate them.
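
For illustration, a hedged sketch of how the single options-based entry point 
could look, reusing the hypothetical {{MiniClusterOptions}} sketched earlier in 
this thread; the stub return type stands in for MiniHBaseCluster, and none of 
this is the API that will actually ship.

{code:java}
// Illustrative only: one options-based method, plus an old-style overload
// reduced to a delegation (deprecated in branch-2, removed in master per the proposal).
class StartMiniClusterSketch {
  static final class ClusterStub {
    ClusterStub(int masters, int regionServers, int dataNodes) { /* stub: starts nothing */ }
  }

  ClusterStub startMiniCluster(MiniClusterOptions opts) {
    // Every option is read from one object, so adding a new option no longer
    // multiplies the number of overloads.
    return new ClusterStub(opts.numMasters(), opts.numRegionServers(), opts.numDataNodes());
  }

  @Deprecated
  ClusterStub startMiniCluster(int numMasters, int numRegionServers) {
    return startMiniCluster(MiniClusterOptions.newBuilder()
        .setNumMasters(numMasters)
        .setNumRegionServers(numRegionServers)
        .build());
  }
}
{code}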

Thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584486#comment-16584486
 ] 

Hadoop QA commented on HBASE-20734:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
24s{color} | {color:red} hbase-server: The patch generated 9 new + 401 
unchanged - 5 fixed = 410 total (was 406) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 44s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}185m 47s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestDLSAsyncFSWAL |
|   | hadoop.hbase.master.TestDLSFSHLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20734 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936070/HBASE-20734.master.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux f6108334b091 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-pe

[jira] [Updated] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21069:
---
Fix Version/s: 1.4.7
   1.3.3
   1.5.0

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.7
>
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21069:
---
Assignee: Andrew Purtell
  Status: Patch Available  (was: Open)

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Assignee: Andrew Purtell
>Priority: Major
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-17 Thread huaxiang sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584476#comment-16584476
 ] 

huaxiang sun commented on HBASE-20943:
--

I am going to commit it tonight, will put [~jinghanx]'s userid in the Author 
field.

> Add offline/online region count into metrics
> 
>
> Key: HBASE-20943
> URL: https://issues.apache.org/jira/browse/HBASE-20943
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.0, 1.2.6.1
>Reporter: Tianying Chang
>Assignee: jinghan xu
>Priority: Minor
> Attachments: HBASE-20943-master-v1.patch, 
> HBASE-20943-master-v2.patch, HBASE-20943-master-v3.patch, HBASE-20943.patch, 
> Screen Shot 2018-07-25 at 2.51.19 PM.png
>
>
> We intensively use metrics to monitor the health of our HBase production 
> cluster. We have seen some regions of a table stuck and cannot be brought 
> online due to AWS issue which cause some log file corrupted. It will be good 
> if we can catch this early. Although WebUI has this information, it is not 
> useful for automated monitoring. By adding this metric, we can easily monitor 
> them with our monitoring system. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20941) Create and implement HbckService in master

2018-08-17 Thread Umesh Agashe (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584467#comment-16584467
 ] 

Umesh Agashe commented on HBASE-20941:
--

Uploaded patch 002 with changes per review comments.

> Create and implement HbckService in master
> --
>
> Key: HBASE-20941
> URL: https://issues.apache.org/jira/browse/HBASE-20941
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Attachments: hbase-20941.master.001.patch, 
> hbase-20941.master.002.patch
>
>
> Create HbckService in master and implement following methods:
>  # setTableState(): If table state are inconsistent with action/ procedures 
> working on them, sometimes manipulating their states in meta fix things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21070) SnapshotFileCache won't update for snapshots stored in S3

2018-08-17 Thread Zach York (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-21070:
--
Status: Patch Available  (was: Open)

> SnapshotFileCache won't update for snapshots stored in S3
> -
>
> Key: HBASE-21070
> URL: https://issues.apache.org/jira/browse/HBASE-21070
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0, 2.1.1, 1.4.7
>Reporter: Zach York
>Assignee: Zach York
>Priority: Critical
> Attachments: HBASE-21070.master.001.patch
>
>
> The SnapshotFileCache depends on last modified time to determine whether to 
> update the Snapshot HFile cache. However, in S3, real 'folders' don't exist. 
> S3 filesystems create a dummy file in place of a folder, but the dummy file 
> last modified time is not updated when files are changed 'under' it. This 
> means that the SnapshotFileCache doesn't pick up new snapshot HFiles and 
> these files aren't removed from the HFileCleaner and can be eligible for 
> deletion.
>  
> My patch removes the lastmodified assumption.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21070) SnapshotFileCache won't update for snapshots stored in S3

2018-08-17 Thread Zach York (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-21070:
--
Attachment: HBASE-21070.master.001.patch

> SnapshotFileCache won't update for snapshots stored in S3
> -
>
> Key: HBASE-21070
> URL: https://issues.apache.org/jira/browse/HBASE-21070
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0, 2.1.1, 1.4.7
>Reporter: Zach York
>Assignee: Zach York
>Priority: Critical
> Attachments: HBASE-21070.master.001.patch
>
>
> The SnapshotFileCache depends on last modified time to determine whether to 
> update the Snapshot HFile cache. However, in S3, real 'folders' don't exist. 
> S3 filesystems create a dummy file in place of a folder, but the dummy file 
> last modified time is not updated when files are changed 'under' it. This 
> means that the SnapshotFileCache doesn't pick up new snapshot HFiles and 
> these files aren't removed from the HFileCleaner and can be eligible for 
> deletion.
>  
> My patch removes the lastmodified assumption.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20941) Create and implement HbckService in master

2018-08-17 Thread Umesh Agashe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-20941:
-
Attachment: hbase-20941.master.002.patch

> Create and implement HbckService in master
> --
>
> Key: HBASE-20941
> URL: https://issues.apache.org/jira/browse/HBASE-20941
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Attachments: hbase-20941.master.001.patch, 
> hbase-20941.master.002.patch
>
>
> Create HbckService in master and implement following methods:
>  # setTableState(): If table state are inconsistent with action/ procedures 
> working on them, sometimes manipulating their states in meta fix things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21070) SnapshotFileCache won't update for snapshots stored in S3

2018-08-17 Thread Zach York (JIRA)
Zach York created HBASE-21070:
-

 Summary: SnapshotFileCache won't update for snapshots stored in S3
 Key: HBASE-21070
 URL: https://issues.apache.org/jira/browse/HBASE-21070
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 3.0.0, 2.1.1, 1.4.7
Reporter: Zach York
Assignee: Zach York


The SnapshotFileCache depends on the last-modified time to determine whether to 
update the snapshot HFile cache. However, in S3, real 'folders' don't exist: S3 
filesystems create a dummy file in place of a folder, and the dummy file's 
last-modified time is not updated when files change 'under' it. This means the 
SnapshotFileCache doesn't pick up new snapshot HFiles, so those files aren't 
excluded from the HFileCleaner's deletion candidates and can become eligible 
for deletion.

My patch removes the last-modified assumption.
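
To make the failure mode concrete, a small sketch (using java.nio instead of the 
Hadoop FileSystem API so it stays self-contained): a refresh guarded by the 
directory's modification time can skip forever on an S3-backed store whose dummy 
folder object never changes, while an unconditional re-list always sees new 
files. The real SnapshotFileCache is more involved; this only illustrates the 
assumption being removed.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Stream;

// Illustrative sketch only; not the HBase SnapshotFileCache implementation.
class SnapshotCacheSketch {
  private long lastSeenMtime = -1L;
  private final Set<String> cachedFiles = new HashSet<>();

  // mtime-guarded refresh: broken on S3, where the dummy "folder" object's
  // modification time does not change when files are added underneath it.
  void refreshIfMtimeChanged(Path snapshotDir) throws IOException {
    long dirMtime = Files.getLastModifiedTime(snapshotDir).toMillis();
    if (dirMtime == lastSeenMtime) {
      return; // cache silently goes stale; new snapshot files stay unknown
    }
    lastSeenMtime = dirMtime;
    relist(snapshotDir);
  }

  // Shape of the fix: drop the mtime assumption and always re-list.
  void refreshAlways(Path snapshotDir) throws IOException {
    relist(snapshotDir);
  }

  private void relist(Path snapshotDir) throws IOException {
    cachedFiles.clear();
    try (Stream<Path> files = Files.list(snapshotDir)) {
      files.forEach(p -> cachedFiles.add(p.getFileName().toString()));
    }
  }
}
{code}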



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21069:
---
Attachment: (was: HBASE-21069-branch-1.patch)

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21069:
---
Attachment: HBASE-21069-branch-1.patch

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21069:
---
Attachment: HBASE-21069-branch-1.patch

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21069:
---
Attachment: (was: HBASE-21069-branch-1.patch)

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20874) Sending compaction descriptions from all regionservers to master.

2018-08-17 Thread Mohit Goel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohit Goel updated HBASE-20874:
---
Attachment: HBASE-20874.master.008.patch

> Sending compaction descriptions from all regionservers to master.
> -
>
> Key: HBASE-20874
> URL: https://issues.apache.org/jira/browse/HBASE-20874
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mohit Goel
>Assignee: Mohit Goel
>Priority: Minor
> Attachments: HBASE-20874.master.004.patch, 
> HBASE-20874.master.005.patch, HBASE-20874.master.006.patch, 
> HBASE-20874.master.007.patch, HBASE-20874.master.008.patch
>
>
> Need to send the compaction descriptions from region servers to the Master, 
> to let the Master know the entire compaction state of the cluster. Further, 
> the implementation of the client-side API (e.g. getCompactionState) needs to 
> change so that it consults the Master for the result instead of sending 
> individual requests to region servers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Thomas D'Silva (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584457#comment-16584457
 ] 

Thomas D'Silva commented on HBASE-21069:


I didn't sync my local branch; it looks like DefaultMemStore#getScanners was 
updated recently. Thanks [~apurtell]!

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584452#comment-16584452
 ] 

Andrew Purtell edited comment on HBASE-21069 at 8/17/18 10:24 PM:
--

Actually that might not be the wrong approach. 

StoreScanner#updateReaders is called from here
{code}
  private void notifyChangedReadersObservers(List<StoreFile> sfs) throws IOException {
    for (ChangedReadersObserver o : this.changedReaderObservers) {
      List<KeyValueScanner> memStoreScanners;
      this.lock.readLock().lock();
      try {
        memStoreScanners = this.memstore.getScanners(o.getReadPoint());
      } finally {
        this.lock.readLock().unlock();
      }
 ---> o.updateReaders(sfs, memStoreScanners);
    }
  }
{code}

And DefaultMemStore#getScanners can return null. 

{code}
  public List<KeyValueScanner> getScanners(long readPt) {
    MemStoreScanner scanner =
      new MemStoreScanner(activeSection, snapshotSection, readPt, comparator);
    scanner.seek(CellUtil.createCell(HConstants.EMPTY_START_ROW));
    if (scanner.peek() == null) {
      scanner.close();
 ---> return null;
    }
    return Collections.<KeyValueScanner> singletonList(scanner);
  }
{code}



was (Author: apurtell):
Actually that might not be the wrong approach. 

StoreScanner#updateReaders is called from here
{code}
  private void notifyChangedReadersObservers(List<StoreFile> sfs) throws IOException {
    for (ChangedReadersObserver o : this.changedReaderObservers) {
      List<KeyValueScanner> memStoreScanners;
      this.lock.readLock().lock();
      try {
 ---> memStoreScanners = this.memstore.getScanners(o.getReadPoint());
      } finally {
        this.lock.readLock().unlock();
      }
      o.updateReaders(sfs, memStoreScanners);
    }
  }
{code}

And DefaultMemStore#getScanners can return null. 

{code}
  public List<KeyValueScanner> getScanners(long readPt) {
    MemStoreScanner scanner =
      new MemStoreScanner(activeSection, snapshotSection, readPt, comparator);
    scanner.seek(CellUtil.createCell(HConstants.EMPTY_START_ROW));
    if (scanner.peek() == null) {
      scanner.close();
 ---> return null;
    }
    return Collections.<KeyValueScanner> singletonList(scanner);
  }
{code}


> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regio

[jira] [Comment Edited] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584452#comment-16584452
 ] 

Andrew Purtell edited comment on HBASE-21069 at 8/17/18 10:24 PM:
--

Actually that might be the right approach. 

StoreScanner#updateReaders is called from here
{code}
  private void notifyChangedReadersObservers(List<StoreFile> sfs) throws IOException {
    for (ChangedReadersObserver o : this.changedReaderObservers) {
      List<KeyValueScanner> memStoreScanners;
      this.lock.readLock().lock();
      try {
        memStoreScanners = this.memstore.getScanners(o.getReadPoint());
      } finally {
        this.lock.readLock().unlock();
      }
 ---> o.updateReaders(sfs, memStoreScanners);
    }
  }
{code}

And DefaultMemStore#getScanners can return null. 

{code}
  public List<KeyValueScanner> getScanners(long readPt) {
    MemStoreScanner scanner =
      new MemStoreScanner(activeSection, snapshotSection, readPt, comparator);
    scanner.seek(CellUtil.createCell(HConstants.EMPTY_START_ROW));
    if (scanner.peek() == null) {
      scanner.close();
 ---> return null;
    }
    return Collections.<KeyValueScanner> singletonList(scanner);
  }
{code}



was (Author: apurtell):
Actually that might not be the wrong approach. 

StoreScanner#updateReaders is called from here
{code}
  private void notifyChangedReadersObservers(List<StoreFile> sfs) throws IOException {
    for (ChangedReadersObserver o : this.changedReaderObservers) {
      List<KeyValueScanner> memStoreScanners;
      this.lock.readLock().lock();
      try {
        memStoreScanners = this.memstore.getScanners(o.getReadPoint());
      } finally {
        this.lock.readLock().unlock();
      }
 ---> o.updateReaders(sfs, memStoreScanners);
    }
  }
{code}

And DefaultMemStore#getScanners can return null. 

{code}
  public List<KeyValueScanner> getScanners(long readPt) {
    MemStoreScanner scanner =
      new MemStoreScanner(activeSection, snapshotSection, readPt, comparator);
    scanner.seek(CellUtil.createCell(HConstants.EMPTY_START_ROW));
    if (scanner.peek() == null) {
      scanner.close();
 ---> return null;
    }
    return Collections.<KeyValueScanner> singletonList(scanner);
  }
{code}


> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionser

[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584452#comment-16584452
 ] 

Andrew Purtell commented on HBASE-21069:


Actually that might not be the wrong approach. 

StoreScanner#updateReaders is called from here
{code}
  private void notifyChangedReadersObservers(List<StoreFile> sfs) throws IOException {
    for (ChangedReadersObserver o : this.changedReaderObservers) {
      List<KeyValueScanner> memStoreScanners;
      this.lock.readLock().lock();
      try {
 ---> memStoreScanners = this.memstore.getScanners(o.getReadPoint());
      } finally {
        this.lock.readLock().unlock();
      }
      o.updateReaders(sfs, memStoreScanners);
    }
  }
{code}

And DefaultMemStore#getScanners can return null. 

{code}
  public List<KeyValueScanner> getScanners(long readPt) {
    MemStoreScanner scanner =
      new MemStoreScanner(activeSection, snapshotSection, readPt, comparator);
    scanner.seek(CellUtil.createCell(HConstants.EMPTY_START_ROW));
    if (scanner.peek() == null) {
      scanner.close();
 ---> return null;
    }
    return Collections.<KeyValueScanner> singletonList(scanner);
  }
{code}
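
For illustration only: a minimal, self-contained sketch (using stand-in types 
rather than the real HBase classes) of the failure mode quoted above and of one 
possible producer-side fix, returning an empty list instead of null; the actual 
patch may take a different approach.
{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Stand-in for KeyValueScanner; illustration only.
interface ScannerStandIn {}

class MemStoreSketch {
  // Mirrors the behaviour quoted above: null when there is nothing to scan.
  List<ScannerStandIn> getScannersReturningNull(boolean nothingToScan) {
    if (nothingToScan) {
      return null; // the problematic branch
    }
    return Collections.singletonList(new ScannerStandIn() {});
  }

  // One possible (hypothetical) fix: never return null, return an empty list.
  List<ScannerStandIn> getScannersReturningEmpty(boolean nothingToScan) {
    if (nothingToScan) {
      return Collections.emptyList();
    }
    return Collections.singletonList(new ScannerStandIn() {});
  }
}

public class UpdateReadersNpeSketch {
  public static void main(String[] args) {
    MemStoreSketch memstore = new MemStoreSketch();

    // StoreScanner.updateReaders copies the list; copying null is exactly the
    // NPE seen at java.util.ArrayList.<init> in the stack trace above.
    try {
      new ArrayList<>(memstore.getScannersReturningNull(true));
    } catch (NullPointerException expected) {
      System.out.println("NPE reproduced: " + expected);
    }

    // With the empty-list variant the copy constructor is safe.
    List<ScannerStandIn> scanners =
        new ArrayList<>(memstore.getScannersReturningEmpty(true));
    System.out.println("scanners size = " + scanners.size());
  }
}
{code}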


> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20874) Sending compaction descriptions from all regionservers to master.

2018-08-17 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584451#comment-16584451
 ] 

Hadoop QA commented on HBASE-20874:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
34s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
22s{color} | {color:red} hbase-server: The patch generated 1 new + 294 
unchanged - 0 fixed = 295 total (was 294) {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 12 new + 413 unchanged - 0 fixed = 
425 total (was 413) {color} |
| {color:orange}-0{color} | {color:orange} ruby-lint {color} | {color:orange}  
0m  5s{color} | {color:orange} The patch generated 13 new + 749 unchanged - 0 
fixed = 762 total (was 749) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
28s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
0s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}120m  
0s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
42s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}

[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584450#comment-16584450
 ] 

Andrew Purtell commented on HBASE-21069:


Here's a naive patch that treats the symptom, but I think we need to look further.

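One way such a symptom-level guard could look, sketched with simplified 
stand-in types (hypothetical; not necessarily the attached patch): tolerate a 
null scanner list before handing it to updateReaders.
{code:java}
import java.util.Collections;
import java.util.List;

// Simplified stand-ins; illustration only, not the real HBase interfaces.
interface Scannerish {}

interface ReaderObserver {
  void updateReaders(List<String> storeFiles, List<Scannerish> memStoreScanners);
}

public class NullGuardSketch {
  // Caller-side guard: never pass a null scanner list to the observer.
  static void notifyObserver(ReaderObserver o, List<String> sfs,
      List<Scannerish> memStoreScanners) {
    o.updateReaders(sfs,
        memStoreScanners == null ? Collections.<Scannerish>emptyList() : memStoreScanners);
  }

  public static void main(String[] args) {
    notifyObserver(
        (storeFiles, scanners) ->
            System.out.println("observer got " + scanners.size() + " memstore scanners"),
        Collections.singletonList("hfile-1"),
        null); // simulates DefaultMemStore#getScanners returning null
  }
}
{code}
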
> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21069:
---
Attachment: HBASE-21069-branch-1.patch

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
> Attachments: HBASE-21069-branch-1.patch
>
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584437#comment-16584437
 ] 

Andrew Purtell commented on HBASE-21069:


Might be related to HBASE-20322 (CME in StoreScanner causes region server 
crash); that was the most recent change here.

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20723) Custom hbase.wal.dir results in data loss because we write recovered edits into a different place than where the recovering region server looks for them

2018-08-17 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584434#comment-16584434
 ] 

Zach York commented on HBASE-20723:
---

[~jerrychabot] that commit doesn't appear in Apache HBase 1.3.1, so there is no 
need to fix it here.

Since you are using EMR, that particular patch was present in EMR's version of 
HBase 1.3.1. The bug has been fixed in EMR 5.16.0 and onward. Please reach out 
to me if you have any questions; Jira shouldn't be used for vendor-specific 
issues.

> Custom hbase.wal.dir results in data loss because we write recovered edits 
> into a different place than where the recovering region server looks for them
> 
>
> Key: HBASE-20723
> URL: https://issues.apache.org/jira/browse/HBASE-20723
> Project: HBase
>  Issue Type: Bug
>  Components: Recovery, wal
>Affects Versions: 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 2.0.0
>Reporter: Rohan Pednekar
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.1.0, 1.5.0, 1.4.6
>
> Attachments: 20723.branch-1.txt, 20723.branch-2.txt, 20723.v1.txt, 
> 20723.v10.txt, 20723.v2.txt, 20723.v3.txt, 20723.v4.txt, 20723.v5.txt, 
> 20723.v5.txt, 20723.v6.txt, 20723.v7.txt, 20723.v8.txt, 20723.v9.txt, logs.zip
>
>
> Description:
> When a custom hbase.wal.dir is configured, the recovery system uses it in 
> place of the HBase root dir and thus constructs an incorrect path for 
> recovered edits when splitting WALs. This causes the recovery code in Region 
> Servers to believe there are no recovered edits to replay, which causes a loss 
> of writes that had not been flushed prior to the loss of a server.
>  
> Reproduction:
> This is an Azure HDInsight HBase cluster with HDP 2.6 and HBase 
> 1.1.2.2.6.3.2-14.
> By default the underlying data is going to wasb://x@y/hbase 
>  I tried to move WAL folders to HDFS, which is the SSD mounted on each VM at 
> /mnt.
> hbase.wal.dir= hdfs://mycluster/walontest
> hbase.wal.dir.perms=700
> hbase.rootdir.perms=700
> hbase.rootdir= 
> wasb://XYZ[@hbaseperf.core.net|mailto:duohbase5ds...@duohbaseperf.blob.core.windows.net]/hbase
> Procedure to reproduce this issue:
> 1. create a table in hbase shell
> 2. insert a row in hbase shell
> 3. reboot the VM which hosts that region
> 4. scan the table in hbase shell and it is empty
> Looking at the region server logs:
> {code:java}
> 2018-06-12 22:08:40,455 INFO  [RS_LOG_REPLAY_OPS-wn2-duohba:16020-0-Writer-1] 
> wal.WALSplitter: This region's directory doesn't exist: 
> hdfs://mycluster/walontest/data/default/tb1/b7fd7db5694eb71190955292b3ff7648. 
> It is very likely that it was already split so it's safe to discard those 
> edits.
> {code}
> The log split/replay ignored the actual WAL because WALSplitter is looking for 
> the region directory in the hbase.wal.dir we specified rather than in the 
> hbase.rootdir.
> Looking at the source code,
>  
> [https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java]
>  it uses the rootDir, which is walDir, as the tableDir root path.
> So if we use HBASE-17437 and the walDir and hbase rootdir are in different 
> paths, or even on different filesystems, then #5 using walDir as tableDir is 
> apparently wrong.
> CC: [~zyork], [~yuzhih...@gmail.com] Attached the logs for quick review.
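
As a rough, self-contained illustration of the mismatch described above (paths 
simplified; the real layout is computed by WALSplitter and the region server 
recovery code):
{code:java}
// Illustration only: with a custom hbase.wal.dir, the splitter checks for the
// region directory under the WAL dir instead of under hbase.rootdir, concludes
// the region was already split, and discards the recovered edits.
public class RecoveredEditsPathSketch {
  public static void main(String[] args) {
    String rootDir = "wasb://XYZ@hbaseperf.core.net/hbase"; // hbase.rootdir
    String walDir = "hdfs://mycluster/walontest";           // hbase.wal.dir
    String region = "/data/default/tb1/b7fd7db5694eb71190955292b3ff7648";

    String regionDirUnderRoot = rootDir + region; // exists on the root filesystem
    String regionDirChecked = walDir + region;    // what the splitter looks for

    System.out.println("region dir that exists:     " + regionDirUnderRoot);
    System.out.println("region dir that is checked: " + regionDirChecked);
    // The checked path does not exist, so the edits are treated as already
    // split and are discarded, losing the unflushed writes.
  }
}
{code}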



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Thomas D'Silva (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584426#comment-16584426
 ] 

Thomas D'Silva commented on HBASE-21069:


FYI [~apurtell] [~vik.karma] [~abhishek.chouhan]

> NPE in StoreScanner.updateReaders causes RS to crash 
> -
>
> Key: HBASE-21069
> URL: https://issues.apache.org/jira/browse/HBASE-21069
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Thomas D'Silva
>Priority: Major
>
> I see the following NPE in the region server log for a table that is taking 
> heavy writes. 
> I am not sure how the {{memStoreScanners}} variable gets set to null.
> {code}
> 2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
> regionserver.HRegionFileSystem - Committing store file ...
> 2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
> hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
> 2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - ABORTING region server 
> iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of 
> WAL required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.util.ArrayList.<init>(ArrayList.java:177)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
> at 
> org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
> ... 9 more
> 2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer 
> - RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.phoenix.coprocessor.ScanRegionObserver, 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
> org.apache.phoenix.hbase.index.Indexer, 
> org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
> org.apache.hadoop.hbase.security.token.TokenProvider, 
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21069) NPE in StoreScanner.updateReaders causes RS to crash

2018-08-17 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created HBASE-21069:
--

 Summary: NPE in StoreScanner.updateReaders causes RS to crash 
 Key: HBASE-21069
 URL: https://issues.apache.org/jira/browse/HBASE-21069
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.3.2
Reporter: Thomas D'Silva


I see the following NPE in the region server log for a table that is taking 
heavy writes. 
I am not sure how the {{memStoreScanners}} variable gets set to null.

{code}
2018-08-17 19:59:23,682 DEBUG [MemStoreFlusher.1] 
regionserver.HRegionFileSystem - Committing store file ...
2018-08-17 19:59:23,684 INFO  [MemStoreFlusher.1] regionserver.HStore - Added 
hdfs://, entries=919170, sequenceid=275114, filesize=22.6 M
2018-08-17 19:59:23,689 FATAL [MemStoreFlusher.1] regionserver.HRegionServer - 
ABORTING region server 
iotperf1dchbase1a-dnds22-2-prd.eng.sfdc.net,60020,1533915690501: Replay of WAL 
required. Forcing server shutdown
org.apache.hadoop.hbase.DroppedSnapshotException: region: ..
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2581)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2258)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2220)
at 
org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2106)
at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2031)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:508)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:478)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
at 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at java.util.ArrayList.<init>(ArrayList.java:177)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.updateReaders(StoreScanner.java:827)
at 
org.apache.hadoop.hbase.regionserver.HStore.notifyChangedReadersObservers(HStore.java:1160)
at 
org.apache.hadoop.hbase.regionserver.HStore.updateStorefiles(HStore.java:1133)
at 
org.apache.hadoop.hbase.regionserver.HStore.access$900(HStore.java:120)
at 
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.commit(HStore.java:2487)
at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2536)
... 9 more
2018-08-17 19:59:23,692 FATAL [MemStoreFlusher.1] regionserver.HRegionServer - 
RegionServer abort: loaded coprocessors are: 
[org.apache.hadoop.hbase.security.access.AccessController, 
org.apache.phoenix.coprocessor.ScanRegionObserver, 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver, 
org.apache.phoenix.hbase.index.Indexer, 
org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver, 
org.apache.hadoop.hbase.security.token.TokenProvider, 
org.apache.phoenix.coprocessor.ServerCachingEndpointImpl]
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-21066:

Attachment: HBASE-21066.master.002.patch

> Improve isTableState() method to ensure caller gets correct info
> 
>
> Key: HBASE-21066
> URL: https://issues.apache.org/jira/browse/HBASE-21066
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.3.0, 2.0.0
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-21066.master.001.patch, 
> HBASE-21066.master.002.patch
>
>
>  
> {code:java}
> public boolean isTableState(TableName tableName, TableState.State... states) {
>   try {
>     TableState tableState = getTableState(tableName);
>     return tableState.isInStates(states);
>   } catch (IOException e) {
>     LOG.error("Unable to get table " + tableName + " state", e);
>     // XXX: is it safe to just return false here?
>     return false;
>   }
> }
>  
> {code}
>  
> When cannot get table state, returning false is not always safe or correct.
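
For illustration, a hedged sketch of one alternative hinted at above: surface 
the failure to the caller instead of silently mapping it to false (stand-in 
types; not the attached patch).
{code:java}
import java.io.IOException;

// Stand-in types; illustration only, not the HBase API or the attached patch.
public class TableStateSketch {
  enum State { ENABLED, DISABLED, ENABLING, DISABLING }

  // Pretend lookup that may fail, like getTableState(tableName) above.
  static State getTableState(String tableName) throws IOException {
    throw new IOException("unable to read table state for " + tableName);
  }

  // Propagate the failure instead of mapping it to false, so callers cannot
  // mistake "could not read state" for "table is not in that state".
  static boolean isTableState(String tableName, State... states) throws IOException {
    State current = getTableState(tableName);
    for (State s : states) {
      if (s == current) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    try {
      isTableState("t1", State.ENABLED);
    } catch (IOException e) {
      System.out.println("caller now sees the failure: " + e.getMessage());
    }
  }
}
{code}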



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20874) Sending compaction descriptions from all regionservers to master.

2018-08-17 Thread Mohit Goel (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584356#comment-16584356
 ] 

Mohit Goel commented on HBASE-20874:


Uploaded patch 7 with a fix for the TestMasterMetrics test failure.

> Sending compaction descriptions from all regionservers to master.
> -
>
> Key: HBASE-20874
> URL: https://issues.apache.org/jira/browse/HBASE-20874
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mohit Goel
>Assignee: Mohit Goel
>Priority: Minor
> Attachments: HBASE-20874.master.004.patch, 
> HBASE-20874.master.005.patch, HBASE-20874.master.006.patch, 
> HBASE-20874.master.007.patch
>
>
> Need to send the compaction descriptions from region servers to the Master, to 
> let the master know of the entire compaction state of the cluster. Further, the 
> implementation of client-side APIs such as getCompactionState needs to change 
> so that they consult the master for the result instead of sending individual 
> requests to regionservers.
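The sketch below is only an illustration of the idea in this description, not the attached patch; the class, method, and field names are hypothetical. The master would keep a per-regionserver view of reported compaction descriptions and answer cluster-wide state queries from that view.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical master-side cache of compaction state, keyed by region server
// name. Region servers would report their descriptions (for example as part of
// their regular report to the master), and clients would ask the master
// instead of each region server.
class CompactionStateCache {
  enum State { NONE, MINOR, MAJOR, MAJOR_AND_MINOR }

  private final Map<String, State> stateByServer = new ConcurrentHashMap<>();

  // Called when a region server report carries its compaction description.
  void report(String serverName, State state) {
    stateByServer.put(serverName, state);
  }

  // Cluster-wide view: the "largest" reported state wins.
  State clusterState() {
    State result = State.NONE;
    for (State s : stateByServer.values()) {
      if (s.ordinal() > result.ordinal()) {
        result = s;
      }
    }
    return result;
  }
}
{code}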



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18477) Umbrella JIRA for HBase Read Replica clusters

2018-08-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584341#comment-16584341
 ] 

Hudson commented on HBASE-18477:


Results for branch HBASE-18477
[build #298 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/298/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/298//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/298//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/298//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/298//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Umbrella JIRA for HBase Read Replica clusters
> -
>
> Key: HBASE-18477
> URL: https://issues.apache.org/jira/browse/HBASE-18477
> Project: HBase
>  Issue Type: New Feature
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Attachments: HBase Read-Replica Clusters Scope doc.docx, HBase 
> Read-Replica Clusters Scope doc.pdf, HBase Read-Replica Clusters Scope 
> doc_v2.docx, HBase Read-Replica Clusters Scope doc_v2.pdf
>
>
> Recently, changes (such as HBASE-17437) have made it possible for HBase to run with a 
> root directory external to the cluster (such as in Amazon S3). This means 
> that the data is stored outside of the cluster and can be accessible after 
> the cluster has been terminated. One use case that is often asked about is 
> pointing multiple clusters to one root directory (sharing the data) to have 
> read resiliency in the case of a cluster failure.
>  
> This JIRA is an umbrella JIRA to contain all the tasks necessary to create a 
> read-replica HBase cluster that is pointed at the same root directory.
>  
> This requires making the Read-Replica cluster Read-Only (no metadata 
> operation or data operations).
> Separating the hbase:meta table for each cluster (Otherwise HBase gets 
> confused with multiple clusters trying to update the meta table with their ip 
> addresses)
> Adding refresh functionality for the meta table to ensure new metadata is 
> picked up on the read replica cluster.
> Adding refresh functionality for HFiles for a given table to ensure new data 
> is picked up on the read replica cluster.
>  
> This can be used with any existing cluster that is backed by an external 
> filesystem.
>  
> Please note that this feature is still quite manual (with the potential for 
> automation later).
>  
> More information on this particular feature can be found here: 
> https://aws.amazon.com/blogs/big-data/setting-up-read-replica-clusters-with-hbase-on-amazon-s3/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread Xu Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584294#comment-16584294
 ] 

Xu Cang commented on HBASE-21066:
-

That's correct. I added 1.3.0 and 2.0.0 to the affected versions.

The reason I created this new Jira is to fix this method, review the impact on 
all callers, and modify the related unit tests to accommodate this change. I plan 
to finish this patch first, and then HBASE-20690 can be resolved. I will make 
sure all related branches are covered. Thanks, [~apurtell]. Please let me know 
if you think I need to adjust anything else.
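For illustration only (this is not the attached patch, and the exception-handling choice is just an assumption), one alternative to silently returning false is to let the failure propagate so callers can distinguish "table is not in that state" from "state could not be read". The sketch below is meant as a drop-in variant of the method quoted in the description and reuses the surrounding class's getTableState():

{code:java}
// Hedged sketch, not the committed change: propagate the failure instead of
// mapping it to false, so callers (e.g. the rsgroup code mentioned in
// HBASE-20690) can retry or surface the error rather than acting on a
// possibly wrong "false".
public boolean isTableState(TableName tableName, TableState.State... states)
    throws IOException {
  // getTableState() may throw IOException; we no longer swallow it here.
  TableState tableState = getTableState(tableName);
  return tableState.isInStates(states);
}
{code}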

> Improve isTableState() method to ensure caller gets correct info
> 
>
> Key: HBASE-21066
> URL: https://issues.apache.org/jira/browse/HBASE-21066
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.3.0, 2.0.0
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-21066.master.001.patch
>
>
>  
> {code:java}
> public boolean isTableState(TableName tableName, TableState.State... states) {
>   try {
>     TableState tableState = getTableState(tableName);
>     return tableState.isInStates(states);
>   } catch (IOException e) {
>     LOG.error("Unable to get table " + tableName + " state", e);
>     // XXX: is it safe to just return false here?
>     return false;
>   }
> }
>  
> {code}
>  
> When we cannot get the table state, returning false is not always safe or correct.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-21066:

Affects Version/s: 1.3.0
   2.0.0

> Improve isTableState() method to ensure caller gets correct info
> 
>
> Key: HBASE-21066
> URL: https://issues.apache.org/jira/browse/HBASE-21066
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.3.0, 2.0.0
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-21066.master.001.patch
>
>
>  
> {code:java}
> public boolean isTableState(TableName tableName, TableState.State... states) {
>   try {
>     TableState tableState = getTableState(tableName);
>     return tableState.isInStates(states);
>   } catch (IOException e) {
>     LOG.error("Unable to get table " + tableName + " state", e);
>     // XXX: is it safe to just return false here?
>     return false;
>   }
> }
>  
> {code}
>  
> When we cannot get the table state, returning false is not always safe or correct.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21062) WALFactory has misleading notion of "default"

2018-08-17 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21062:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks, folks!

> WALFactory has misleading notion of "default"
> -
>
> Key: HBASE-21062
> URL: https://issues.apache.org/jira/browse/HBASE-21062
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1
>
> Attachments: HBASE-21062.001.branch-2.0.patch, 
> HBASE-21062.002.branch-2.0.patch
>
>
> In WALFactory, there is an enum {{Providers}} which has a list of supported 
> WALProvider implementations. In addition to this list, there is also a 
> {{defaultProvider}} (which the Configuration defaults to) that is meant to 
> be our "advertised" default WALProvider.
> However, the implementation of {{getProviderClass}} in WALFactory doesn't 
> actually adhere to the value of this enum, instead *always* returning 
> AsyncFSWal if it can be loaded.
> Having the default value in the enum but then overriding it in the 
> implementation of {{getProviderClass}} is silly and misleading.
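A rough sketch of the direction described above (hypothetical shape, not the committed patch): resolve the provider from the enum mapping so that the advertised default means what the enum says, and only fall back to class-name lookup when the configured value is not an enum entry. The provider class names below are existing HBase classes; everything else is illustrative.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.wal.AsyncFSWALProvider;
import org.apache.hadoop.hbase.wal.FSHLogProvider;
import org.apache.hadoop.hbase.wal.WALProvider;

// Hypothetical sketch: the provider class is resolved from the enum mapping
// (including the advertised default) instead of being unconditionally
// overridden to the async provider inside getProviderClass().
final class ProviderResolutionSketch {
  enum Providers {
    defaultProvider(AsyncFSWALProvider.class),
    filesystem(FSHLogProvider.class),
    asyncfs(AsyncFSWALProvider.class);

    final Class<? extends WALProvider> clazz;
    Providers(Class<? extends WALProvider> clazz) { this.clazz = clazz; }
  }

  static Class<? extends WALProvider> getProviderClass(Configuration conf, String key) {
    String name = conf.get(key, Providers.defaultProvider.name());
    try {
      // Honor the enum entry, so "defaultProvider" means what the enum says.
      return Providers.valueOf(name).clazz;
    } catch (IllegalArgumentException e) {
      // Not an enum name: fall back to treating the value as a class name.
      return conf.getClass(key, Providers.defaultProvider.clazz, WALProvider.class);
    }
  }
}
{code}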



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-08-17 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584280#comment-16584280
 ] 

Zach York commented on HBASE-20734:
---

New patch fixes TestHeapSize

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20734.branch-1.001.patch, 
> HBASE-20734.master.001.patch, HBASE-20734.master.002.patch, 
> HBASE-20734.master.003.patch
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than the hbase rootdir w.r.t. recovered edits, since the recovered edits 
> directory is currently under the rootdir.
> Such a setup may not result in fast recovery when there is a region server 
> failover.
> This issue is to find a proper (hopefully backward compatible) way of 
> colocating the recovered edits directory with hbase.wal.dir.
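Purely as an illustration of the idea (a hypothetical helper with a simplified layout that ignores namespaces, not the patch), the recovered.edits location could be derived from hbase.wal.dir when it is set, falling back to hbase.rootdir otherwise:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: derive the recovered.edits location for a region from
// the WAL root (hbase.wal.dir) when it is configured, so replay of recovered
// edits happens on the fast media instead of under the HBase root directory.
final class RecoveredEditsDirSketch {
  static Path recoveredEditsDir(Configuration conf, String tableName, String encodedRegionName) {
    // Prefer the WAL root when configured; fall back to the HBase root dir.
    String root = conf.get("hbase.wal.dir", conf.get("hbase.rootdir"));
    // Simplified layout: <root>/data/<table>/<region>/recovered.edits
    return new Path(root, "data/" + tableName + "/" + encodedRegionName + "/recovered.edits");
  }
}
{code}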



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-08-17 Thread Zach York (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-20734:
--
Attachment: HBASE-20734.master.003.patch

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20734.branch-1.001.patch, 
> HBASE-20734.master.001.patch, HBASE-20734.master.002.patch, 
> HBASE-20734.master.003.patch
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than the hbase rootdir w.r.t. recovered edits, since the recovered edits 
> directory is currently under the rootdir.
> Such a setup may not result in fast recovery when there is a region server 
> failover.
> This issue is to find a proper (hopefully backward compatible) way of 
> colocating the recovered edits directory with hbase.wal.dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20705) Having RPC Quota on a table prevents Space quota to be recreated/removed

2018-08-17 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-20705:
---
Fix Version/s: 2.1.1

> Having RPC Quota on a table prevents Space quota to be recreated/removed
> 
>
> Key: HBASE-20705
> URL: https://issues.apache.org/jira/browse/HBASE-20705
> Project: HBase
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: hbase-20705.master.001.patch
>
>
> * Property {{hbase.quota.remove.on.table.delete}} is set to {{true}} by 
> default
>  * Create a table and set RPC and Space quota
> {noformat}
> hbase(main):022:0> create 't2','cf1'
> Created table t2
> Took 0.7420 seconds
> => Hbase::Table - t2
> hbase(main):023:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', 
> POLICY => NO_WRITES
> Took 0.0105 seconds
> hbase(main):024:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 
> '10M/sec'
> Took 0.0186 seconds
> hbase(main):025:0> list_quotas
> TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 
> 10M/sec, SCOPE => MACHINE
> TABLE => t2 TYPE => SPACE, TABLE => t2, LIMIT => 1073741824, VIOLATION_POLICY 
> => NO_WRITES{noformat}
>  * Drop the table and the Space quota is set to {{REMOVE => true}}
> {noformat}
> hbase(main):026:0> disable 't2'
> Took 0.4363 seconds
> hbase(main):027:0> drop 't2'
> Took 0.2344 seconds
> hbase(main):028:0> list_quotas
> TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true
> USER => u1 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 10M/sec, 
> SCOPE => MACHINE{noformat}
>  * Recreate the table and set Space quota back. The Space quota on the table 
> is still set to {{REMOVE => true}}
> {noformat}
> hbase(main):029:0> create 't2','cf1'
> Created table t2
> Took 0.7348 seconds
> => Hbase::Table - t2
> hbase(main):031:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', 
> POLICY => NO_WRITES
> Took 0.0088 seconds
> hbase(main):032:0> list_quotas
> OWNER QUOTAS
> TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 
> 10M/sec, SCOPE => MACHINE
> TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true{noformat}
>  * Remove RPC quota and drop the table, the Space Quota is not removed
> {noformat}
> hbase(main):033:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => NONE
> Took 0.0193 seconds
> hbase(main):036:0> disable 't2'
> Took 0.4305 seconds
> hbase(main):037:0> drop 't2'
> Took 0.2353 seconds
> hbase(main):038:0> list_quotas
> OWNER QUOTAS
> TABLE => t2                               TYPE => SPACE, TABLE => t2, REMOVE 
> => true{noformat}
>  * Deleting the quota entry from {{hbase:quota}} seems to be the option to 
> reset it. 
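For reference, a hedged sketch of the manual workaround mentioned in the last bullet (not part of the patch): delete the stale table-quota row for t2 directly from hbase:quota. The "t." row-key prefix follows the hbase:quota layout for table quotas, but verify the exact row in your cluster before deleting anything.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of the manual workaround: remove the leftover SPACE quota row for
// table 't2' from hbase:quota. Table quota rows are keyed as "t." + table name.
public final class RemoveStaleQuotaEntry {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table quotaTable = conn.getTable(TableName.valueOf("hbase:quota"))) {
      quotaTable.delete(new Delete(Bytes.toBytes("t.t2")));
    }
  }
}
{code}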



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20874) Sending compaction descriptions from all regionservers to master.

2018-08-17 Thread Mohit Goel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohit Goel updated HBASE-20874:
---
Attachment: HBASE-20874.master.007.patch

> Sending compaction descriptions from all regionservers to master.
> -
>
> Key: HBASE-20874
> URL: https://issues.apache.org/jira/browse/HBASE-20874
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mohit Goel
>Assignee: Mohit Goel
>Priority: Minor
> Attachments: HBASE-20874.master.004.patch, 
> HBASE-20874.master.005.patch, HBASE-20874.master.006.patch, 
> HBASE-20874.master.007.patch
>
>
> Need to send the compaction descriptions from region servers to the Master, to 
> let the master know of the entire compaction state of the cluster. Further, the 
> implementation of client-side APIs such as getCompactionState needs to change 
> so that they consult the master for the result instead of sending individual 
> requests to regionservers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21066) Improve isTableState() method to ensure caller gets correct info

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584259#comment-16584259
 ] 

Andrew Purtell commented on HBASE-21066:


Correct me if I'm wrong but this is an improvement that impacts RSGroups (I 
came here from HBASE-20690 Moving table to target rsgroup needs to handle 
TableStateNotFoundException), which is also in branch-1, so we should figure 
out something there too. 


> Improve isTableState() method to ensure caller gets correct info
> 
>
> Key: HBASE-21066
> URL: https://issues.apache.org/jira/browse/HBASE-21066
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-21066.master.001.patch
>
>
>  
> {code:java}
> public boolean isTableState(TableName tableName, TableState.State... states) {
>   try {
>     TableState tableState = getTableState(tableName);
>     return tableState.isInStates(states);
>   } catch (IOException e) {
>     LOG.error("Unable to get table " + tableName + " state", e);
>     // XXX: is it safe to just return false here?
>     return false;
>   }
> }
>  
> {code}
>  
> When we cannot get the table state, returning false is not always safe or correct.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20940) HStore.cansplit should not allow split to happen if it has references

2018-08-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584252#comment-16584252
 ] 

Andrew Purtell commented on HBASE-20940:


Thanks. Let me try again!

> HStore.cansplit should not allow split to happen if it has references
> -
>
> Key: HBASE-20940
> URL: https://issues.apache.org/jira/browse/HBASE-20940
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Vishal Khandelwal
>Assignee: Vishal Khandelwal
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20940.branch-1.3.v1.patch, 
> HBASE-20940.branch-1.3.v2.patch, HBASE-20940.branch-1.v1.patch, 
> HBASE-20940.branch-1.v2.patch, HBASE-20940.branch-1.v3.patch, 
> HBASE-20940.v1.patch, HBASE-20940.v2.patch, HBASE-20940.v3.patch, 
> HBASE-20940.v4.patch, result_HBASE-20940.branch-1.v2.log
>
>
> When a split happens and another split happens immediately afterwards, it may 
> result in a split of a region that still has references to its parent. More 
> details about the scenario can be found in HBASE-20933.
> HStore.hasReferences should check the store files on the filesystem rather than 
> the in-memory objects.
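As a loose sketch of what a filesystem-based check could look like (a hypothetical helper, not the attached patch): list the store directory and look for reference files, instead of trusting the in-memory store file set.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.StoreFileInfo;

// Hypothetical sketch: decide "has references" from the store directory on the
// filesystem, so a just-split daughter that still holds reference files cannot
// be split again before compaction cleans those references up.
final class HasReferencesSketch {
  static boolean hasReferences(FileSystem fs, Path storeDir) throws IOException {
    for (FileStatus status : fs.listStatus(storeDir)) {
      if (StoreFileInfo.isReference(status.getPath())) {
        return true;
      }
    }
    return false;
  }
}
{code}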



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20874) Sending compaction descriptions from all regionservers to master.

2018-08-17 Thread Umesh Agashe (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584237#comment-16584237
 ] 

Umesh Agashe edited comment on HBASE-20874 at 8/17/18 6:10 PM:
---

{code}
/testptch/hbase/hbase-shell/src/main/ruby/hbase/admin.rb:102:81: C: 
Metrics/LineLength: Line is too long. [100/80] 
/testptch/hbase/hbase-shell/src/main/ruby/hbase/admin.rb:114:81: C: 
Metrics/LineLength: Line is too long. [99/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:36:81: 
C: Metrics/LineLength: Line is too long. [90/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:37:81: 
C: Metrics/LineLength: Line is too long. [83/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:43:81: 
C: Metrics/LineLength: Line is too long. [84/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:44:81: 
C: Metrics/LineLength: Line is too long. [89/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:45:81: 
C: Metrics/LineLength: Line is too long. [97/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:46:81: 
C: Metrics/LineLength: Line is too long. [97/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:47:81: 
C: Metrics/LineLength: Line is too long. [81/80]{code}

The above errors will go away after addressing HBASE-20851, and the following 
issues are showing up in most files:
{code}
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:35:7: 
C: Metrics/AbcSize: Assignment Branch Condition size for command is too high. 
[45.01/15]
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:35:7: 
C: Metrics/MethodLength: Method has too many lines. [16/10] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:36:26: 
C: Style/WordArray: Use `%w` or `%W` for an array of words.{code}
 


was (Author: uagashe):
/testptch/hbase/hbase-shell/src/main/ruby/hbase/admin.rb:102:81: C: 
Metrics/LineLength: Line is too long. [100/80] 
/testptch/hbase/hbase-shell/src/main/ruby/hbase/admin.rb:114:81: C: 
Metrics/LineLength: Line is too long. [99/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:36:81: 
C: Metrics/LineLength: Line is too long. [90/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:37:81: 
C: Metrics/LineLength: Line is too long. [83/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:43:81: 
C: Metrics/LineLength: Line is too long. [84/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:44:81: 
C: Metrics/LineLength: Line is too long. [89/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:45:81: 
C: Metrics/LineLength: Line is too long. [97/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:46:81: 
C: Metrics/LineLength: Line is too long. [97/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:47:81: 
C: Metrics/LineLength: Line is too long. [81/80]

The above errors will go away after addressing HBASE-20851 and following issues 
are showing up in most files:

/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:35:7: 
C: Metrics/AbcSize: Assignment Branch Condition size for command is too high. 
[45.01/15]

/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:35:7: 
C: Metrics/MethodLength: Method has too many lines. [16/10] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:36:26: 
C: Style/WordArray: Use `%w` or `%W` for an array of words.

 

> Sending compaction descriptions from all regionservers to master.
> -
>
> Key: HBASE-20874
> URL: https://issues.apache.org/jira/browse/HBASE-20874
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mohit Goel
>Assignee: Mohit Goel
>Priority: Minor
> Attachments: HBASE-20874.master.004.patch, 
> HBASE-20874.master.005.patch, HBASE-20874.master.006.patch
>
>
> Need to send the compaction descriptions from region servers to the Master, to 
> let the master know of the entire compaction state of the cluster. Further, the 
> implementation of client-side APIs such as getCompactionState needs to change 
> so that they consult the master for the result instead of sending individual 
> requests to regionservers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20874) Sending compaction descriptions from all regionservers to master.

2018-08-17 Thread Umesh Agashe (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16584237#comment-16584237
 ] 

Umesh Agashe commented on HBASE-20874:
--

/testptch/hbase/hbase-shell/src/main/ruby/hbase/admin.rb:102:81: C: 
Metrics/LineLength: Line is too long. [100/80] 
/testptch/hbase/hbase-shell/src/main/ruby/hbase/admin.rb:114:81: C: 
Metrics/LineLength: Line is too long. [99/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:36:81: 
C: Metrics/LineLength: Line is too long. [90/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:37:81: 
C: Metrics/LineLength: Line is too long. [83/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:43:81: 
C: Metrics/LineLength: Line is too long. [84/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:44:81: 
C: Metrics/LineLength: Line is too long. [89/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:45:81: 
C: Metrics/LineLength: Line is too long. [97/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:46:81: 
C: Metrics/LineLength: Line is too long. [97/80] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:47:81: 
C: Metrics/LineLength: Line is too long. [81/80]

The above errors will go away after addressing HBASE-20851, and the following 
issues are showing up in most files:

/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:35:7: 
C: Metrics/AbcSize: Assignment Branch Condition size for command is too high. 
[45.01/15]

/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:35:7: 
C: Metrics/MethodLength: Method has too many lines. [16/10] 
/testptch/hbase/hbase-shell/src/main/ruby/shell/commands/compactions.rb:36:26: 
C: Style/WordArray: Use `%w` or `%W` for an array of words.

 

> Sending compaction descriptions from all regionservers to master.
> -
>
> Key: HBASE-20874
> URL: https://issues.apache.org/jira/browse/HBASE-20874
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mohit Goel
>Assignee: Mohit Goel
>Priority: Minor
> Attachments: HBASE-20874.master.004.patch, 
> HBASE-20874.master.005.patch, HBASE-20874.master.006.patch
>
>
> Need to send the compaction descriptions from region servers to the Master, to 
> let the master know of the entire compaction state of the cluster. Further, the 
> implementation of client-side APIs such as getCompactionState needs to change 
> so that they consult the master for the result instead of sending individual 
> requests to regionservers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-21011) Provide CLI option to run oldwals and hfiles cleaner separately when cleaner chore is disabled

2018-08-17 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu resolved HBASE-21011.
--
Resolution: Won't Fix

In line with Reid's comment, the operator has a few options to get around this 
situation, e.g. turning the cleaner chore on. Closing this and marking it as Won't Fix.

> Provide CLI option to run oldwals and hfiles cleaner separately when cleaner 
> chore is disabled
> --
>
> Key: HBASE-21011
> URL: https://issues.apache.org/jira/browse/HBASE-21011
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, Client
>Affects Versions: 3.0.0, 1.4.6, 2.1.1
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
> Attachments: HBASE-21011.master.001.patch, 
> HBASE-21011.master.002.patch, HBASE-21011.master.003.patch, 
> HBASE-21011.master.004.patch
>
>
> There is a corner case when the cleaner chore for HFiles and oldwals is 
> disabled: the admin/user needs to manually execute the admin command 
> {{cleaner_chore_run}} to clean the old HFiles and oldwals. The existing logic of 
> {{cleaner_chore_run}} is to [first trigger the HFiles cleaner and then the 
> oldwals 
> cleaner|https://github.com/taklwu/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java#L1414-L1420],
>  and it only returns success if both complete. 
> But when running this {{cleaner_chore_run}} command, there is a potential use 
> case where the admin would like to trigger the cleaner for only oldwals or 
> hfiles but still keep the automatic cleaner chore disabled. So, this change aims 
> to support this corner case, and to give users who keep the cleaner chore 
> disabled by default the flexibility to run the oldwals and HFiles cleaning 
> procedures individually from the admin CLI.
> NOTE that {{cleaner_chore_run}} was introduced in HBASE-17280; this patch adds 
> the options 'hfiles' and 'oldwals' to it. It also changes the default behavior 
> so that {{cleaner_chore_run}} only runs when the cleaner chore is set to 
> disabled, e.g. the proposed admin CLI options are
> {noformat}
> hbase> cleaner_chore_run   # introduced in HBASE-17280; behavior changed 
> to only run when the cleaner chore is set to disabled
> hbase> cleaner_chore_run 'hfiles'  # added, runs when the cleaner chore is set 
> to disabled
> hbase> cleaner_chore_run 'oldwals' # added, runs when the cleaner chore is set 
> to disabled
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21011) Provide CLI option to run oldwals and hfiles cleaner separately when cleaner chore is disabled

2018-08-17 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-21011:
-
Status: Open  (was: Patch Available)

> Provide CLI option to run oldwals and hfiles cleaner separately when cleaner 
> chore is disabled
> --
>
> Key: HBASE-21011
> URL: https://issues.apache.org/jira/browse/HBASE-21011
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, Client
>Affects Versions: 1.4.6, 3.0.0, 2.1.1
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
> Attachments: HBASE-21011.master.001.patch, 
> HBASE-21011.master.002.patch, HBASE-21011.master.003.patch, 
> HBASE-21011.master.004.patch
>
>
> There is a corner case when the cleaner chore for HFiles and oldwals is 
> disabled: the admin/user needs to manually execute the admin command 
> {{cleaner_chore_run}} to clean the old HFiles and oldwals. The existing logic of 
> {{cleaner_chore_run}} is to [first trigger the HFiles cleaner and then the 
> oldwals 
> cleaner|https://github.com/taklwu/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java#L1414-L1420],
>  and it only returns success if both complete. 
> But when running this {{cleaner_chore_run}} command, there is a potential use 
> case where the admin would like to trigger the cleaner for only oldwals or 
> hfiles but still keep the automatic cleaner chore disabled. So, this change aims 
> to support this corner case, and to give users who keep the cleaner chore 
> disabled by default the flexibility to run the oldwals and HFiles cleaning 
> procedures individually from the admin CLI.
> NOTE that {{cleaner_chore_run}} was introduced in HBASE-17280; this patch adds 
> the options 'hfiles' and 'oldwals' to it. It also changes the default behavior 
> so that {{cleaner_chore_run}} only runs when the cleaner chore is set to 
> disabled, e.g. the proposed admin CLI options are
> {noformat}
> hbase> cleaner_chore_run   # introduced in HBASE-17280; behavior changed 
> to only run when the cleaner chore is set to disabled
> hbase> cleaner_chore_run 'hfiles'  # added, runs when the cleaner chore is set 
> to disabled
> hbase> cleaner_chore_run 'oldwals' # added, runs when the cleaner chore is set 
> to disabled
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

