[jira] [Updated] (HBASE-19973) Implement a procedure to replay sync replication wal for standby cluster

2018-03-05 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19973:
---
Attachment: HBASE-19973.HBASE-19064.006.patch

> Implement a procedure to replay sync replication wal for standby cluster
> 
>
> Key: HBASE-19973
> URL: https://issues.apache.org/jira/browse/HBASE-19973
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-19973.HBASE-19064.001.patch, 
> HBASE-19973.HBASE-19064.002.patch, HBASE-19973.HBASE-19064.003.patch, 
> HBASE-19973.HBASE-19064.004.patch, HBASE-19973.HBASE-19064.005.patch, 
> HBASE-19973.HBASE-19064.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20137) TestRSGroups is flakey

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387398#comment-16387398
 ] 

Hadoop QA commented on HBASE-20137:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 1s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 7s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
4s{color} | {color:red} hbase-server: The patch generated 5 new + 46 unchanged 
- 0 fixed = 51 total (was 46) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
57s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 31s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 46s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.snapshot.TestAssignProcedure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-20137 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913157/HBASE-20137.branch-2.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b4a94ec7b628 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / b59c39d942 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11825/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11825/artifact/patchprocess/patch-unit

[jira] [Commented] (HBASE-18309) Support multi threads in CleanerChore

2018-03-05 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387387#comment-16387387
 ] 

Reid Chan commented on HBASE-18309:
---

Just in a few days, I suppose :)

> Support multi threads in CleanerChore
> -
>
> Key: HBASE-18309
> URL: https://issues.apache.org/jira/browse/HBASE-18309
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: Reid Chan
>Priority: Major
> Fix For: 3.0.0, 2.0.0-beta-1
>
> Attachments: HBASE-18309.addendum.patch, 
> HBASE-18309.master.001.patch, HBASE-18309.master.002.patch, 
> HBASE-18309.master.004.patch, HBASE-18309.master.005.patch, 
> HBASE-18309.master.006.patch, HBASE-18309.master.007.patch, 
> HBASE-18309.master.008.patch, HBASE-18309.master.009.patch, 
> HBASE-18309.master.010.patch, HBASE-18309.master.011.patch, 
> HBASE-18309.master.012.patch, space_consumption_in_archive.png
>
>
> There is only one thread in LogCleaner to clean oldWALs, and in our big 
> cluster we found this was not enough. The number of files under oldWALs 
> reached the max-directory-items limit of HDFS and caused a region server 
> crash, so we used multiple threads for LogCleaner and the crash has not 
> happened any more.
> What's more, currently there is only one thread iterating the archive 
> directory, and we could use multiple threads to clean subdirectories in 
> parallel to speed it up.
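
To make the parallel-cleaning idea above concrete, here is a minimal Java sketch of fanning each archive subdirectory out to its own worker thread. It is illustrative only, not the actual CleanerChore code: the FileSystem handle and the deleteEligibleFiles helper are assumptions standing in for the chore's configured cleaner delegates.

{code}
// Illustrative sketch only -- not the real CleanerChore. It shows one way to
// clean each subdirectory of the archive dir on its own thread.
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParallelDirCleanerSketch {
  private final FileSystem fs;
  private final ExecutorService pool;

  public ParallelDirCleanerSketch(FileSystem fs, int threads) {
    this.fs = fs;
    this.pool = Executors.newFixedThreadPool(threads);
  }

  /** Submit one cleaning task per immediate subdirectory of the archive dir. */
  public void cleanInParallel(Path archiveDir) throws IOException, InterruptedException {
    for (FileStatus child : fs.listStatus(archiveDir)) {
      if (child.isDirectory()) {
        pool.submit(() -> deleteEligibleFiles(child.getPath()));
      }
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.MINUTES);
  }

  // Hypothetical helper: stands in for running the chore's cleaner delegates
  // over a single subdirectory and deleting only the files they all approve.
  private void deleteEligibleFiles(Path dir) {
    // walk files under dir, ask each delegate, delete approved files
  }
}
{code}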



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20137) TestRSGroups is flakey

2018-03-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387376#comment-16387376
 ] 

stack commented on HBASE-20137:
---

.003 fixes the unit test.

> TestRSGroups is flakey
> --
>
> Key: HBASE-20137
> URL: https://issues.apache.org/jira/browse/HBASE-20137
> Project: HBase
>  Issue Type: Bug
>  Components: flakey
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20137.branch-2.001.patch, 
> HBASE-20137.branch-2.002.patch, HBASE-20137.branch-2.003.patch
>
>
> It was the single test that failed the hbase-2 nightlies in #440 at the 
> hadoop2 stage.
> The failure manifests as a timeout. It actually has an interesting cause, 
> calling into question some of the clauses in 
> UnassignProcedure#remoteCallFailed.
> We are running a disabletable concurrently with a shutdown. pid=309 is the 
> disable. pid=311 is the interesting one. The below is a little hard to read 
> -- the exception 'message' is the current procedure as a String... hard to 
> parse, fixing -- but we are trying to unassign as part of the disabletable. 
> Our RPC fails because the server we are trying to RPC to is currently being 
> processed as crashed (pid=308 is a servercrashprocedure for this server). As 
> part of the processing of the failed RPC we will expire the server -- if we 
> can't RPC to it, it must be gone. The current procedure is then suspended 
> until it gets woken up by the servercrashprocedure triggered by the expire, 
> only in this case we are shutting down, so the expire is ignored... The 
> current procedure is left in its suspended state. This prevents the Master 
> from going down, so we time out.
> 2018-03-05 11:29:22,507 INFO  [PEWorker-13] 
> assignment.RegionTransitionProcedure(213): Dispatch pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.RegionTransitionProcedure(187): Remote call failed pid=311, 
> ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524; exception=pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.UnassignProcedure(276): Expiring server pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524, 
> exception=org.apache.hadoop.hbase.master.assignment.FailedRemoteDispatchException:
>  pid=311, ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
> UnassignProcedure table=Group_ns:testKillRS, 
> region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] master.ServerManager(580): 
> Expiration of 1cfd208ff882,40584,1520249102524 but server shutdown already in 
> progress
> I need to cater for the case where the server expiration is rejected.
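
A note on that last line: the fix amounts to suspending the procedure only when a crash-handling procedure was actually scheduled for the server. A toy Java sketch of that decision follows; the interface and method names are made up for illustration and are not HBase's API.

{code}
// Toy model of the failure path described above; this is not HBase code.
final class UnassignFailureSketch {
  interface ServerManager {
    /** Returns true only if a crash-handling procedure was scheduled. */
    boolean expireServer(String serverName);
  }

  /**
   * Returns true if the caller should suspend and wait to be woken by the
   * crash-handling procedure, false if it must finish the unassign itself.
   */
  static boolean suspendAfterFailedRemoteCall(ServerManager sm, String serverName) {
    if (!sm.expireServer(serverName)) {
      // Expiration rejected (e.g. cluster shutdown already in progress):
      // nothing will ever wake a suspended procedure, so do not suspend.
      return false;
    }
    return true;
  }
}
{code}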



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20137) TestRSGroups is flakey

2018-03-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20137:
--
Attachment: HBASE-20137.branch-2.003.patch

> TestRSGroups is flakey
> --
>
> Key: HBASE-20137
> URL: https://issues.apache.org/jira/browse/HBASE-20137
> Project: HBase
>  Issue Type: Bug
>  Components: flakey
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20137.branch-2.001.patch, 
> HBASE-20137.branch-2.002.patch, HBASE-20137.branch-2.003.patch
>
>
> It was the single test that failed the hbase-2 nightlies in #440 at the 
> hadoop2 stage.
> The failure manifests as a timeout. It actually has an interesting cause, 
> calling into question some of the clauses in 
> UnassignProcedure#remoteCallFailed.
> We are running a disabletable concurrently with a shutdown. pid=309 is the 
> disable. pid=311 is the interesting one. The below is a little hard to read 
> -- the exception 'message' is the current procedure as a String... hard to 
> parse, fixing -- but we are trying to unassign as part of the disabletable. 
> Our RPC fails because the server we are trying to RPC to is currently being 
> processed as crashed (pid=308 is a servercrashprocedure for this server). As 
> part of the processing of the failed RPC we will expire the server -- if we 
> can't RPC to it, it must be gone. The current procedure is then suspended 
> until it gets woken up by the servercrashprocedure triggered by the expire, 
> only in this case we are shutting down, so the expire is ignored... The 
> current procedure is left in its suspended state. This prevents the Master 
> from going down, so we time out.
> 2018-03-05 11:29:22,507 INFO  [PEWorker-13] 
> assignment.RegionTransitionProcedure(213): Dispatch pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.RegionTransitionProcedure(187): Remote call failed pid=311, 
> ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524; exception=pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.UnassignProcedure(276): Expiring server pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524, 
> exception=org.apache.hadoop.hbase.master.assignment.FailedRemoteDispatchException:
>  pid=311, ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
> UnassignProcedure table=Group_ns:testKillRS, 
> region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] master.ServerManager(580): 
> Expiration of 1cfd208ff882,40584,1520249102524 but server shutdown already in 
> progress
> I need to cater for the case where the server expiration is rejected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19973) Implement a procedure to replay sync replication wal for standby cluster

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387368#comment-16387368
 ] 

Hadoop QA commented on HBASE-19973:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-19064 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
20s{color} | {color:green} HBASE-19064 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} HBASE-19064 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} HBASE-19064 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
30s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} HBASE-19064 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} HBASE-19064 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
2s{color} | {color:red} hbase-server: The patch generated 1 new + 423 unchanged 
- 3 fixed = 424 total (was 426) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
13s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}145m 11s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}195m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19973 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913145/HBASE-19973.HBASE-19064.005.patch
 |
| Optional Tests |  asflicense  cc  unit  hbaseprotoc  javac  javadoc  findbugs 
 shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 12cc3e8c0c31 4.4.0-116-generic 

[jira] [Reopened] (HBASE-18309) Support multi threads in CleanerChore

2018-03-05 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan reopened HBASE-18309:
---

I will provide a patch for branch-1 together with HBASE-20095 as soon as that 
is committed.

> Support multi threads in CleanerChore
> -
>
> Key: HBASE-18309
> URL: https://issues.apache.org/jira/browse/HBASE-18309
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: Reid Chan
>Priority: Major
> Fix For: 3.0.0, 2.0.0-beta-1
>
> Attachments: HBASE-18309.addendum.patch, 
> HBASE-18309.master.001.patch, HBASE-18309.master.002.patch, 
> HBASE-18309.master.004.patch, HBASE-18309.master.005.patch, 
> HBASE-18309.master.006.patch, HBASE-18309.master.007.patch, 
> HBASE-18309.master.008.patch, HBASE-18309.master.009.patch, 
> HBASE-18309.master.010.patch, HBASE-18309.master.011.patch, 
> HBASE-18309.master.012.patch, space_consumption_in_archive.png
>
>
> There is only one thread in LogCleaner to clean oldWALs, and in our big 
> cluster we found this was not enough. The number of files under oldWALs 
> reached the max-directory-items limit of HDFS and caused a region server 
> crash, so we used multiple threads for LogCleaner and the crash has not 
> happened any more.
> What's more, currently there is only one thread iterating the archive 
> directory, and we could use multiple threads to clean subdirectories in 
> parallel to speed it up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387349#comment-16387349
 ] 

Hadoop QA commented on HBASE-20090:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
3s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
57s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} hbase-server: The patch generated 0 new + 29 
unchanged - 1 fixed = 29 total (was 30) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
47s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 52s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}136m 
52s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20090 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913112/20090.v6.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 190f8227d5cc 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4a4c012049 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| De

[jira] [Commented] (HBASE-20137) TestRSGroups is flakey

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387347#comment-16387347
 ] 

Hadoop QA commented on HBASE-20137:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
23s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
24s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
12m 51s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 26s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.snapshot.TestAssignProcedure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-20137 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913154/HBASE-20137.branch-2.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 35b2fa4b7770 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / b59c39d942 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11823/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/

[jira] [Commented] (HBASE-20137) TestRSGroups is flakey

2018-03-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387340#comment-16387340
 ] 

stack commented on HBASE-20137:
---

.002 adds exercising of the new code path to the test; TestAssignmentManagement 
already had the general machinery in place... just added two new scenarios.

> TestRSGroups is flakey
> --
>
> Key: HBASE-20137
> URL: https://issues.apache.org/jira/browse/HBASE-20137
> Project: HBase
>  Issue Type: Bug
>  Components: flakey
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20137.branch-2.001.patch, 
> HBASE-20137.branch-2.002.patch
>
>
> It was the single test that failed the hbase-2 nightlies in #440 at the 
> hadoop2 stage.
> The failure manifests as a timeout. It actually has an interesting cause, 
> calling into question some of the clauses in 
> UnassignProcedure#remoteCallFailed.
> We are running a disabletable concurrently with a shutdown. pid=309 is the 
> disable. pid=311 is the interesting one. The below is a little hard to read 
> -- the exception 'message' is the current procedure as a String... hard to 
> parse, fixing -- but we are trying to unassign as part of the disabletable. 
> Our RPC fails because the server we are trying to RPC to is currently being 
> processed as crashed (pid=308 is a servercrashprocedure for this server). As 
> part of the processing of the failed RPC we will expire the server -- if we 
> can't RPC to it, it must be gone. The current procedure is then suspended 
> until it gets woken up by the servercrashprocedure triggered by the expire, 
> only in this case we are shutting down, so the expire is ignored... The 
> current procedure is left in its suspended state. This prevents the Master 
> from going down, so we time out.
> 2018-03-05 11:29:22,507 INFO  [PEWorker-13] 
> assignment.RegionTransitionProcedure(213): Dispatch pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.RegionTransitionProcedure(187): Remote call failed pid=311, 
> ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524; exception=pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.UnassignProcedure(276): Expiring server pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524, 
> exception=org.apache.hadoop.hbase.master.assignment.FailedRemoteDispatchException:
>  pid=311, ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
> UnassignProcedure table=Group_ns:testKillRS, 
> region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] master.ServerManager(580): 
> Expiration of 1cfd208ff882,40584,1520249102524 but server shutdown already in 
> progress
> I need to cater for the case where the server expiration is rejected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20137) TestRSGroups is flakey

2018-03-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20137:
--
Attachment: HBASE-20137.branch-2.002.patch

> TestRSGroups is flakey
> --
>
> Key: HBASE-20137
> URL: https://issues.apache.org/jira/browse/HBASE-20137
> Project: HBase
>  Issue Type: Bug
>  Components: flakey
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20137.branch-2.001.patch, 
> HBASE-20137.branch-2.002.patch
>
>
> It was the single test that failed the hbase-2 nightlies in #440 at the 
> hadoop2 stage.
> The failure manifests as a timeout. It actually has an interesting cause, 
> calling into question some of the clauses in 
> UnassignProcedure#remoteCallFailed.
> We are running a disabletable concurrently with a shutdown. pid=309 is the 
> disable. pid=311 is the interesting one. The below is a little hard to read 
> -- the exception 'message' is the current procedure as a String... hard to 
> parse, fixing -- but we are trying to unassign as part of the disabletable. 
> Our RPC fails because the server we are trying to RPC to is currently being 
> processed as crashed (pid=308 is a servercrashprocedure for this server). As 
> part of the processing of the failed RPC we will expire the server -- if we 
> can't RPC to it, it must be gone. The current procedure is then suspended 
> until it gets woken up by the servercrashprocedure triggered by the expire, 
> only in this case we are shutting down, so the expire is ignored... The 
> current procedure is left in its suspended state. This prevents the Master 
> from going down, so we time out.
> 2018-03-05 11:29:22,507 INFO  [PEWorker-13] 
> assignment.RegionTransitionProcedure(213): Dispatch pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.RegionTransitionProcedure(187): Remote call failed pid=311, 
> ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524; exception=pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.UnassignProcedure(276): Expiring server pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524, 
> exception=org.apache.hadoop.hbase.master.assignment.FailedRemoteDispatchException:
>  pid=311, ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
> UnassignProcedure table=Group_ns:testKillRS, 
> region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] master.ServerManager(580): 
> Expiration of 1cfd208ff882,40584,1520249102524 but server shutdown already in 
> progress
> I need to cater for the case where the server expiration is rejected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20134) support scripts use hard-coded /tmp

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387339#comment-16387339
 ] 

Hadoop QA commented on HBASE-20134:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
4s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 2s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20134 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913156/HBASE-20134.0.patch |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux c89ab3956165 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / f89a1f7d7a |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| shellcheck | v0.4.4 |
| Max. process+thread count | 48 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11824/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> support scripts use hard-coded /tmp
> ---
>
> Key: HBASE-20134
> URL: https://issues.apache.org/jira/browse/HBASE-20134
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Mike Drob
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20134.0.patch
>
>
> {code}
> if [ -z "${working_dir}" ]; then
>   echo "[DEBUG] defaulting to creating a directory in /tmp"
>   working_dir=/tmp
>   while [[ -e ${working_dir} ]]; do
> working_dir=/tmp/hbase-generate-website-${RANDOM}.${RANDOM}
>   done
>   mkdir "${working_dir}"
> else
> {code}
> This should likely use {{$TMPDIR}} or {{mktemp -d}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20134) support scripts use hard-coded /tmp

2018-03-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20134:

Summary: support scripts use hard-coded /tmp  (was: website generation uses 
hard-coded /tmp)

> support scripts use hard-coded /tmp
> ---
>
> Key: HBASE-20134
> URL: https://issues.apache.org/jira/browse/HBASE-20134
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Mike Drob
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20134.0.patch
>
>
> {code}
> if [ -z "${working_dir}" ]; then
>   echo "[DEBUG] defaulting to creating a directory in /tmp"
>   working_dir=/tmp
>   while [[ -e ${working_dir} ]]; do
> working_dir=/tmp/hbase-generate-website-${RANDOM}.${RANDOM}
>   done
>   mkdir "${working_dir}"
> else
> {code}
> This should likely use {{$TMPDIR}} or {{mktemp -d}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20134) website generation uses hard-coded /tmp

2018-03-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20134:

Status: Patch Available  (was: Open)

> website generation uses hard-coded /tmp
> ---
>
> Key: HBASE-20134
> URL: https://issues.apache.org/jira/browse/HBASE-20134
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Mike Drob
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20134.0.patch
>
>
> {code}
> if [ -z "${working_dir}" ]; then
>   echo "[DEBUG] defaulting to creating a directory in /tmp"
>   working_dir=/tmp
>   while [[ -e ${working_dir} ]]; do
> working_dir=/tmp/hbase-generate-website-${RANDOM}.${RANDOM}
>   done
>   mkdir "${working_dir}"
> else
> {code}
> This should likely use {{$TMPDIR}} or {{mktemp -d}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20134) website generation uses hard-coded /tmp

2018-03-05 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20134:

Attachment: HBASE-20134.0.patch

> website generation uses hard-coded /tmp
> ---
>
> Key: HBASE-20134
> URL: https://issues.apache.org/jira/browse/HBASE-20134
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Mike Drob
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20134.0.patch
>
>
> {code}
> if [ -z "${working_dir}" ]; then
>   echo "[DEBUG] defaulting to creating a directory in /tmp"
>   working_dir=/tmp
>   while [[ -e ${working_dir} ]]; do
> working_dir=/tmp/hbase-generate-website-${RANDOM}.${RANDOM}
>   done
>   mkdir "${working_dir}"
> else
> {code}
> This should likely use {{$TMPDIR}} or {{mktemp -d}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run

2018-03-05 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387313#comment-16387313
 ] 

ramkrishna.s.vasudevan commented on HBASE-20090:


I think the case here is valid. As [~anoop.hbase] said, this region size 
calculation has been changed recently, so we should be careful here.
Instead of changing the precondition to a normal 'if' clause, can we add a 
check to see whether the region has 0 data in this method:
{code}
getBiggestMemStoreRegion()
{code}
It already has code for the concurrency case you are describing, where a split 
has happened and has marked the region as not eligible for flush. So I think 
it would be better to add that check there?


> Properly handle Preconditions check failure in 
> MemStoreFlusher$FlushHandler.run
> ---
>
> Key: HBASE-20090
> URL: https://issues.apache.org/jira/browse/HBASE-20090
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20090-server-61260-01-07.log, 20090.v6.txt
>
>
> Copied the following from a comment since this was a better description of 
> the race condition.
> The original description was merged to the beginning of my first comment 
> below.
> With more debug logging, we can see the scenario where the exception was 
> triggered.
> {code}
> 2018-03-02 17:28:30,097 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit: 
> Splitting TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085., 
> compaction_queue=(0:0), split_queue=1
> 2018-03-02 17:28:30,098 DEBUG 
> [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=16020] 
> regionserver.IncreasingToUpperBoundRegionSplitPolicy: ShouldSplit because 
> info  size=6.9G, sizeToCheck=256.0M, regionsWithCommonTable=1
> 2018-03-02 17:28:30,296 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,297 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush thread woke up because memory above low 
> water=381.5 M
> 2018-03-02 17:28:30,297 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085. with size 400432696
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. with size 0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush of region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. due to global
>  heap pressure. Flush type=ABOVE_ONHEAP_LOWER_MARKTotal Memstore Heap 
> size=381.9 MTotal Memstore Off-Heap size=0, Region memstore size=0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: wake up by WAKEUPFLUSH_INSTANCE
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Nothing to flush for 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae.
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Excluding unflushable region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. -trying to 
> find a different region to flush.
> {code}
> Region 0453f29030757eedb6e6a1c57e88c085 was being split.
> In HRegion#flushcache, the log from the else branch can be seen in 
> 20090-server-61260-01-07.log:
> {code}
>   synchronized (writestate) {
> if (!writestate.flushing && writestate.writesEnabled) {
>   this.writestate.flushing = true;
> } else {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("NOT flushing memstore for region " + this
> + ", flushing=" + writestate.flushing + ", writesEnabled="
> + writestate.writesEnabled);
>   }
> {code}
> Meaning, region 0453f29030757eedb6e6a1c57e88c085 couldn't flush, leaving 
> memory pressure at a high level.
> When MemStoreFlusher reached the following call, the region was no longer a 
> flush candidate:
> {code}
>   HRegion bestFlushableRegion =
>   getBiggestMemStoreRegion(regionsBySize, excludedRegions, true);
> {code}
> So the other region, 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae., was examined 
> next. Since the region was not receiving writes, the (current) Precondition 
> check failed.
> The proposed fix is to convert the Precondition to a normal return.
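
As a rough illustration of the proposed fix (and of the zero-size check suggested in the comment above), here is a toy Java sketch; the class and method names are simplified stand-ins, not the actual MemStoreFlusher API.

{code}
// Toy sketch only -- simplified names, not the real MemStoreFlusher.
import java.util.SortedMap;

final class FlushChoiceSketch {
  interface Region {
    long memStoreSize();
    boolean flush();
  }

  /** Returns true only if a region was actually flushed. */
  static boolean flushBiggestRegion(SortedMap<Long, Region> regionsBySize) {
    Region biggest = regionsBySize.isEmpty() ? null
        : regionsBySize.get(regionsBySize.lastKey());
    if (biggest == null || biggest.memStoreSize() == 0) {
      // Old behaviour: Preconditions.checkState(size > 0) threw here when the
      // only remaining candidate (e.g. a region mid-split) held no data.
      // New behaviour: report "nothing flushed" and let the flusher retry.
      return false;
    }
    return biggest.flush();
  }
}
{code}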



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20137) TestRSGroups is flakey

2018-03-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20137:
--
Status: Patch Available  (was: Open)

Trying the patch against precommit while I try to figure out a test (the 
circumstance is a bit tough to conjure, what with an RPC to a server 
concurrently undergoing a server crash procedure...)

> TestRSGroups is flakey
> --
>
> Key: HBASE-20137
> URL: https://issues.apache.org/jira/browse/HBASE-20137
> Project: HBase
>  Issue Type: Bug
>  Components: flakey
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20137.branch-2.001.patch
>
>
> It was the single test that failed the hbase-2 nightlies in #440 at the 
> hadoop2 stage.
> The failure manifests as a timeout. It actually has an interesting cause, 
> calling into question some of the clauses in 
> UnassignProcedure#remoteCallFailed.
> We are running a disabletable concurrently with a shutdown. pid=309 is the 
> disable. pid=311 is the interesting one. The below is a little hard to read 
> -- the exception 'message' is the current procedure as a String... hard to 
> parse, fixing -- but we are trying to unassign as part of the disabletable. 
> Our RPC fails because the server we are trying to RPC to is currently being 
> processed as crashed (pid=308 is a servercrashprocedure for this server). As 
> part of the processing of the failed RPC we will expire the server -- if we 
> can't RPC to it, it must be gone. The current procedure is then suspended 
> until it gets woken up by the servercrashprocedure triggered by the expire, 
> only in this case we are shutting down, so the expire is ignored... The 
> current procedure is left in its suspended state. This prevents the Master 
> from going down, so we time out.
> 2018-03-05 11:29:22,507 INFO  [PEWorker-13] 
> assignment.RegionTransitionProcedure(213): Dispatch pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.RegionTransitionProcedure(187): Remote call failed pid=311, 
> ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524; exception=pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.UnassignProcedure(276): Expiring server pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524, 
> exception=org.apache.hadoop.hbase.master.assignment.FailedRemoteDispatchException:
>  pid=311, ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
> UnassignProcedure table=Group_ns:testKillRS, 
> region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] master.ServerManager(580): 
> Expiration of 1cfd208ff882,40584,1520249102524 but server shutdown already in 
> progress
> I need to cater for the case where the server expiration is rejected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20137) TestRSGroups is flakey

2018-03-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20137:
--
Fix Version/s: 2.0.0

> TestRSGroups is flakey
> --
>
> Key: HBASE-20137
> URL: https://issues.apache.org/jira/browse/HBASE-20137
> Project: HBase
>  Issue Type: Bug
>  Components: flakey
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20137.branch-2.001.patch
>
>
> It was the single test that failed the hbase-2 nightlies in #440 at the 
> hadoop2 stage.
> The failure manifests as a timeout. It actually has an interesting cause, 
> calling into question some of the clauses in 
> UnassignProcedure#remoteCallFailed.
> We are running a disabletable concurrently with a shutdown. pid=309 is the 
> disable. pid=311 is the interesting one. The below is a little hard to read 
> -- the exception 'message' is the current procedure as a String... hard to 
> parse, fixing -- but we are trying to unassign as part of the disabletable. 
> Our RPC fails because the server we are trying to RPC to is currently being 
> processed as crashed (pid=308 is a servercrashprocedure for this server). As 
> part of the processing of the failed RPC we will expire the server -- if we 
> can't RPC to it, it must be gone. The current procedure is then suspended 
> until it gets woken up by the servercrashprocedure triggered by the expire, 
> only in this case we are shutting down, so the expire is ignored... The 
> current procedure is left in its suspended state. This prevents the Master 
> from going down, so we time out.
> 2018-03-05 11:29:22,507 INFO  [PEWorker-13] 
> assignment.RegionTransitionProcedure(213): Dispatch pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.RegionTransitionProcedure(187): Remote call failed pid=311, 
> ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524; exception=pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.UnassignProcedure(276): Expiring server pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524, 
> exception=org.apache.hadoop.hbase.master.assignment.FailedRemoteDispatchException:
>  pid=311, ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
> UnassignProcedure table=Group_ns:testKillRS, 
> region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] master.ServerManager(580): 
> Expiration of 1cfd208ff882,40584,1520249102524 but server shutdown already in 
> progress
> I need to cater for the case where the server expiration is rejected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20137) TestRSGroups is flakey

2018-03-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20137:
--
Attachment: HBASE-20137.branch-2.001.patch

> TestRSGroups is flakey
> --
>
> Key: HBASE-20137
> URL: https://issues.apache.org/jira/browse/HBASE-20137
> Project: HBase
>  Issue Type: Bug
>  Components: flakey
>Affects Versions: 2.0.0-beta-2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20137.branch-2.001.patch
>
>
> It was the single test that failed the hbase-2 nightlies in #440 at the 
> hadoop2 stage.
> The failure manifests as a timeout. It actually has an interesting cause 
> that calls into question some of the clauses in 
> UnassignProcedure#remoteCallFailed.
> We are running a disabletable concurrently with a shutdown. pid=309 is the 
> disable. pid=311 is the interesting one. The below is a little hard to read 
> -- the exception 'message' is the current procedure as a String... hard to 
> parse, fixing -- but we are trying to unassign as part of the disabletable. 
> Our RPC fails because the server we are trying to RPC to is currently being 
> processed as crashed (pid=308 is a servercrashprocedure for this server). As 
> part of the processing of the failed RPC we will expire the server -- if we 
> can't RPC to it, it must be gone. The current procedure is then suspended 
> until it gets woken up by the servercrashprocedure triggered by the expire, 
> only in this case we are shutting down, so the expire is ignored... The 
> current procedure is left in its suspended state. This prevents the Master 
> from going down, so we time out.
> 2018-03-05 11:29:22,507 INFO  [PEWorker-13] 
> assignment.RegionTransitionProcedure(213): Dispatch pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.RegionTransitionProcedure(187): Remote call failed pid=311, 
> ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524; exception=pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
> assignment.UnassignProcedure(276): Expiring server pid=311, ppid=309, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
> table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
> location=1cfd208ff882,40584,1520249102524, 
> exception=org.apache.hadoop.hbase.master.assignment.FailedRemoteDispatchException:
>  pid=311, ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
> UnassignProcedure table=Group_ns:testKillRS, 
> region=de7534c208a06502537cd95c248b3043, 
> server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
> 2018-03-05 11:29:22,508 WARN  [PEWorker-13] master.ServerManager(580): 
> Expiration of 1cfd208ff882,40584,1520249102524 but server shutdown already in 
> progress
> I need to cater for the case where the server expiration is rejected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20137) TestRSGroups is flakey

2018-03-05 Thread stack (JIRA)
stack created HBASE-20137:
-

 Summary: TestRSGroups is flakey
 Key: HBASE-20137
 URL: https://issues.apache.org/jira/browse/HBASE-20137
 Project: HBase
  Issue Type: Bug
  Components: flakey
Affects Versions: 2.0.0-beta-2
Reporter: stack
Assignee: stack


It was the single test that failed the hbase-2 nightlies in #440 at the hadoop2 
stage.

The failure manifests as a timeout. It actually has an interesting cause that 
calls into question some of the clauses in UnassignProcedure#remoteCallFailed.

We are running a disabletable concurrently with a shutdown. pid=309 is the 
disable. pid=311 is the interesting one. The below is a little hard to read -- 
the exception 'message' is the current procedure as a String... hard to parse, 
fixing -- but we are trying to unassign as part of the disabletable. Our RPC 
fails because the server we are trying to RPC to is currently being processed 
as crashed (pid=308 is a servercrashprocedure for this server). As part of the 
processing of the failed RPC we will expire the server -- if we can't RPC to 
it, it must be gone. The current procedure is then suspended until it gets 
woken up by the servercrashprocedure triggered by the expire, only in this 
case we are shutting down, so the expire is ignored... The current procedure 
is left in its suspended state. This prevents the Master from going down, so 
we time out.

2018-03-05 11:29:22,507 INFO  [PEWorker-13] 
assignment.RegionTransitionProcedure(213): Dispatch pid=311, ppid=309, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
location=1cfd208ff882,40584,1520249102524
2018-03-05 11:29:22,508 WARN  [PEWorker-13] 
assignment.RegionTransitionProcedure(187): Remote call failed pid=311, 
ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
location=1cfd208ff882,40584,1520249102524; exception=pid=311, ppid=309, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=Group_ns:testKillRS, region=de7534c208a06502537cd95c248b3043, 
server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
2018-03-05 11:29:22,508 WARN  [PEWorker-13] assignment.UnassignProcedure(276): 
Expiring server pid=311, ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
UnassignProcedure table=Group_ns:testKillRS, 
region=de7534c208a06502537cd95c248b3043, 
server=1cfd208ff882,40584,1520249102524; rit=CLOSING, 
location=1cfd208ff882,40584,1520249102524, 
exception=org.apache.hadoop.hbase.master.assignment.FailedRemoteDispatchException:
 pid=311, ppid=309, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
UnassignProcedure table=Group_ns:testKillRS, 
region=de7534c208a06502537cd95c248b3043, 
server=1cfd208ff882,40584,1520249102524 to 1cfd208ff882,40584,1520249102524
2018-03-05 11:29:22,508 WARN  [PEWorker-13] master.ServerManager(580): 
Expiration of 1cfd208ff882,40584,1520249102524 but server shutdown already in 
progress

I need to cater for the case where the server expiration is rejected.
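
Below is a minimal, self-contained Java sketch of the guard described above. It is not 
HBase code: ServerManagerStub and ProcedureOutcome are invented names, and the real 
UnassignProcedure/ServerManager APIs differ. It only illustrates the idea that the 
procedure should suspend only when a server expiration (and therefore a wake-up from a 
ServerCrashProcedure) was actually accepted.
{code}
// Self-contained sketch, not HBase code: ServerManagerStub and ProcedureOutcome are
// invented names. It models the decision UnassignProcedure#remoteCallFailed has to make
// when its RPC to a region server fails: only suspend if a ServerCrashProcedure will
// really be scheduled to wake us up later.
import java.util.concurrent.atomic.AtomicBoolean;

public class ExpireOnRemoteFailureSketch {

  enum ProcedureOutcome { SUSPEND_AND_WAIT, FINISH_WITHOUT_WAITING }

  /** Stand-in for ServerManager: expiration is rejected once shutdown is in progress. */
  static class ServerManagerStub {
    private final AtomicBoolean shuttingDown = new AtomicBoolean(false);

    /** Returns true only if a ServerCrashProcedure will actually be scheduled. */
    boolean expireServer(String serverName) {
      if (shuttingDown.get()) {
        System.out.println("Expiration of " + serverName + " rejected: shutdown in progress");
        return false;
      }
      System.out.println("Scheduled ServerCrashProcedure for " + serverName);
      return true;
    }

    void startShutdown() { shuttingDown.set(true); }
  }

  /** The guard sketched here: suspend only when a crash procedure will wake us up later. */
  static ProcedureOutcome onRemoteCallFailed(ServerManagerStub serverManager, String deadServer) {
    boolean willBeWokenUp = serverManager.expireServer(deadServer);
    return willBeWokenUp ? ProcedureOutcome.SUSPEND_AND_WAIT
                         : ProcedureOutcome.FINISH_WITHOUT_WAITING;
  }

  public static void main(String[] args) {
    ServerManagerStub serverManager = new ServerManagerStub();
    serverManager.startShutdown();
    // Without the check we would suspend forever; with it the procedure can terminate
    // and the Master shutdown is not blocked.
    System.out.println(onRemoteCallFailed(serverManager, "1cfd208ff882,40584,1520249102524"));
  }
}
{code}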





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20129) Add UT for serial replication checker

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387255#comment-16387255
 ] 

Hadoop QA commented on HBASE-20129:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
32s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
21s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m  7s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
59s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}151m 
36s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}197m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20129 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913127/HBASE-20129-v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 5fa449b5b396 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4a4c012049 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| 

[jira] [Commented] (HBASE-20129) Add UT for serial replication checker

2018-03-05 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387251#comment-16387251
 ] 

Zheng Hu commented on HBASE-20129:
--

> Maybe we should use a byte other than 0xFF? 
In theory, we can use any byte as the escape byte. The difference is that if 
we use a readable byte, it will be helpful when we scan meta via the shell or 
ZooKeeper (depending on which storage layer we choose). 

> Add UT for serial replication checker
> -
>
> Key: HBASE-20129
> URL: https://issues.apache.org/jira/browse/HBASE-20129
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-20129-v1.patch, HBASE-20129.patch
>
>
> Now it is a separate class, so it is much easier to write UTs to test the 
> corner cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20129) Add UT for serial replication checker

2018-03-05 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387248#comment-16387248
 ] 

Zheng Hu commented on HBASE-20129:
--

The UT looks great. +1.

> Add UT for serial replication checker
> -
>
> Key: HBASE-20129
> URL: https://issues.apache.org/jira/browse/HBASE-20129
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-20129-v1.patch, HBASE-20129.patch
>
>
> Now it is a separate class, so it is much easier to write UTs to test the 
> corner cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20129) Add UT for serial replication checker

2018-03-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387234#comment-16387234
 ] 

Duo Zhang commented on HBASE-20129:
---

Maybe we should use a byte other than 0xFF?

> Add UT for serial replication checker
> -
>
> Key: HBASE-20129
> URL: https://issues.apache.org/jira/browse/HBASE-20129
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-20129-v1.patch, HBASE-20129.patch
>
>
> Now it is a separate class, so it is much easier to write UTs to test the 
> corner cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20129) Add UT for serial replication checker

2018-03-05 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387225#comment-16387225
 ] 

Zheng Hu commented on HBASE-20129:
--

Oh, 0xFF 'a' will be encoded to 0xFF 0xFF 'a', so it seems there is no problem.

> Add UT for serial replication checker
> -
>
> Key: HBASE-20129
> URL: https://issues.apache.org/jira/browse/HBASE-20129
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-20129-v1.patch, HBASE-20129.patch
>
>
> Now it is a separate class, so it is much easier to write UTs to test the 
> corner cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20129) Add UT for serial replication checker

2018-03-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387223#comment-16387223
 ] 

Duo Zhang commented on HBASE-20129:
---

It will be converted to '0xFF 0xFF a' when storing to meta, and then converted 
back to '0xFF a' when loading from meta.
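
A minimal, self-contained sketch of the 0xFF-doubling escape being discussed; the actual 
encoding in the patch may differ, and the class and method names here are illustrative 
only.
{code}
// Self-contained sketch of the 0xFF-doubling escape described above; the real encoding
// in the patch may differ, and the names here are illustrative only.
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class EscapeByteSketch {
  private static final int ESCAPE = 0xFF;

  /** Applied when storing to meta: every literal 0xFF is doubled. */
  static byte[] escape(byte[] raw) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (byte b : raw) {
      if ((b & 0xFF) == ESCAPE) {
        out.write(ESCAPE); // double the escape byte
      }
      out.write(b);
    }
    return out.toByteArray();
  }

  /** Applied when loading from meta: 0xFF 0xFF collapses back to a single 0xFF. */
  static byte[] unescape(byte[] stored) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (int i = 0; i < stored.length; i++) {
      if ((stored[i] & 0xFF) == ESCAPE && i + 1 < stored.length
          && (stored[i + 1] & 0xFF) == ESCAPE) {
        i++; // skip the doubled escape byte
      }
      out.write(stored[i]);
    }
    return out.toByteArray();
  }

  public static void main(String[] args) {
    byte[] startKey = new byte[] { (byte) 0xFF, 'a' };             // the "0xFF a" case
    byte[] stored = escape(startKey);                              // 0xFF 0xFF 'a'
    System.out.println(Arrays.equals(startKey, unescape(stored))); // true: round-trips
  }
}
{code}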

> Add UT for serial replication checker
> -
>
> Key: HBASE-20129
> URL: https://issues.apache.org/jira/browse/HBASE-20129
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-20129-v1.patch, HBASE-20129.patch
>
>
> Now it is a separate class, so it is much easier to write UTs to test the 
> corner cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20121) Fix findbugs warning for RestoreTablesClient

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20121:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the reviews, Mike and Vlad.

> Fix findbugs warning for RestoreTablesClient
> 
>
> Key: HBASE-20121
> URL: https://issues.apache.org/jira/browse/HBASE-20121
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Fix For: 3.0.0
>
> Attachments: 20121.v1.txt, 20121.v2.txt, 20121.v3.txt
>
>
> In RestoreTablesClient#restore(), the following variable is not used:
> {code}
> Set backupIdSet = new HashSet<>();
> {code}
> There is a backupIdSet#add() call later in the method, but the variable 
> doesn't appear in any other part of the code.
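
For illustration only (not the actual RestoreTablesClient code), the findbugs pattern 
being fixed looks like the following: a local collection that is populated but never 
read. The element type and method bodies below are assumptions made for the sketch.
{code}
// Illustration only, not the actual RestoreTablesClient code: the findbugs pattern is a
// local collection that is written to but never read. The fix is simply to drop it.
import java.util.HashSet;
import java.util.Set;

public class DeadStoreSketch {

  // Before: findbugs flags backupIdSet because nothing ever reads from it.
  static void restoreBefore(String[] backupIds) {
    Set<String> backupIdSet = new HashSet<>();
    for (String id : backupIds) {
      backupIdSet.add(id); // populated but never queried -> dead store / useless object
      System.out.println("restoring " + id);
    }
  }

  // After: the unused collection is removed; behavior is unchanged.
  static void restoreAfter(String[] backupIds) {
    for (String id : backupIds) {
      System.out.println("restoring " + id);
    }
  }

  public static void main(String[] args) {
    restoreBefore(new String[] { "backup_1", "backup_2" });
    restoreAfter(new String[] { "backup_1", "backup_2" });
  }
}
{code}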



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20129) Add UT for serial replication checker

2018-03-05 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387218#comment-16387218
 ] 

Zheng Hu commented on HBASE-20129:
--

According to the parse method, when a region name has a start row key of 
0xFF 'a', will it be parsed as 'a'?

> Add UT for serial replication checker
> -
>
> Key: HBASE-20129
> URL: https://issues.apache.org/jira/browse/HBASE-20129
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-20129-v1.patch, HBASE-20129.patch
>
>
> Now it is a separate class, so it is much easier to write UTs to test the 
> corner cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20121) Fix findbugs warning for RestoreTablesClient

2018-03-05 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387215#comment-16387215
 ] 

Mike Drob commented on HBASE-20121:
---

+1

> Fix findbugs warning for RestoreTablesClient
> 
>
> Key: HBASE-20121
> URL: https://issues.apache.org/jira/browse/HBASE-20121
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 20121.v1.txt, 20121.v2.txt, 20121.v3.txt
>
>
> In RestoreTablesClient#restore(), the following variable is not used:
> {code}
> Set backupIdSet = new HashSet<>();
> {code}
> There is a backupIdSet#add() call later in the method, but the variable 
> doesn't appear in any other part of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19973) Implement a procedure to replay sync replication wal for standby cluster

2018-03-05 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19973:
---
Attachment: HBASE-19973.HBASE-19064.005.patch

> Implement a procedure to replay sync replication wal for standby cluster
> 
>
> Key: HBASE-19973
> URL: https://issues.apache.org/jira/browse/HBASE-19973
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-19973.HBASE-19064.001.patch, 
> HBASE-19973.HBASE-19064.002.patch, HBASE-19973.HBASE-19064.003.patch, 
> HBASE-19973.HBASE-19064.004.patch, HBASE-19973.HBASE-19064.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20090:
---
Status: Patch Available  (was: Open)

> Properly handle Preconditions check failure in 
> MemStoreFlusher$FlushHandler.run
> ---
>
> Key: HBASE-20090
> URL: https://issues.apache.org/jira/browse/HBASE-20090
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20090-server-61260-01-07.log, 20090.v6.txt
>
>
> Copied the following from a comment since it was a better description of 
> the race condition.
> The original description was merged to the beginning of my first comment 
> below.
> With more debug logging, we can see the scenario where the exception was 
> triggered.
> {code}
> 2018-03-02 17:28:30,097 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit: 
> Splitting TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085., 
> compaction_queue=(0:0), split_queue=1
> 2018-03-02 17:28:30,098 DEBUG 
> [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=16020] 
> regionserver.IncreasingToUpperBoundRegionSplitPolicy: ShouldSplit because 
> info  size=6.9G, sizeToCheck=256.0M, regionsWithCommonTable=1
> 2018-03-02 17:28:30,296 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,297 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush thread woke up because memory above low 
> water=381.5 M
> 2018-03-02 17:28:30,297 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085. with size 400432696
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. with size 0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush of region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. due to global
>  heap pressure. Flush type=ABOVE_ONHEAP_LOWER_MARKTotal Memstore Heap 
> size=381.9 MTotal Memstore Off-Heap size=0, Region memstore size=0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: wake up by WAKEUPFLUSH_INSTANCE
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Nothing to flush for 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae.
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Excluding unflushable region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. -trying to 
> find a different region to flush.
> {code}
> Region 0453f29030757eedb6e6a1c57e88c085 was being split.
> In HRegion#flushcache, the log from else branch can be seen in 
> 20090-server-61260-01-07.log :
> {code}
>   synchronized (writestate) {
> if (!writestate.flushing && writestate.writesEnabled) {
>   this.writestate.flushing = true;
> } else {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("NOT flushing memstore for region " + this
> + ", flushing=" + writestate.flushing + ", writesEnabled="
> + writestate.writesEnabled);
>   }
> {code}
> Meaning, region 0453f29030757eedb6e6a1c57e88c085 couldn't flush, leaving 
> memory pressure at a high level.
> When MemStoreFlusher reached the following call, the region was no longer a 
> flush candidate:
> {code}
>   HRegion bestFlushableRegion =
>   getBiggestMemStoreRegion(regionsBySize, excludedRegions, true);
> {code}
> So the other region, 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae., was examined 
> next. Since the region was not receiving writes, the (current) Precondition 
> check failed.
> The proposed fix is to convert the Precondition to a normal return.
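
A self-contained sketch of the proposed direction, with invented names and none of the 
real MemStoreFlusher plumbing: the condition that used to trip a Preconditions-style 
check (no flushable region while above the memory mark) becomes a normal return.
{code}
// Self-contained sketch, not the actual MemStoreFlusher code: the failed check becomes
// a "nothing to flush right now" return, and the flusher retries on its next wake-up.
import java.util.Map;
import java.util.TreeMap;

public class FlushBestRegionSketch {

  // Before: throws when the best candidate region has an empty memstore.
  static String pickRegionBefore(Map<String, Long> memstoreSizeByRegion) {
    String best = biggest(memstoreSizeByRegion);
    if (best == null || memstoreSizeByRegion.get(best) == 0L) {
      throw new IllegalStateException("Above memory mark but no flushable region");
    }
    return best;
  }

  // After: the same condition is handled as a normal, non-fatal outcome.
  static String pickRegionAfter(Map<String, Long> memstoreSizeByRegion) {
    String best = biggest(memstoreSizeByRegion);
    if (best == null || memstoreSizeByRegion.get(best) == 0L) {
      System.out.println("Above memory mark but no flushable region; will retry later");
      return null;
    }
    return best;
  }

  private static String biggest(Map<String, Long> sizes) {
    return sizes.entrySet().stream()
        .max(Map.Entry.comparingByValue())
        .map(Map.Entry::getKey)
        .orElse(null);
  }

  public static void main(String[] args) {
    Map<String, Long> sizes = new TreeMap<>();
    sizes.put("atlas_janus,,1519927429371.fbcb5e4...", 0L); // only candidate, but empty
    System.out.println(pickRegionAfter(sizes));             // prints null, no exception
  }
}
{code}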



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387189#comment-16387189
 ] 

Ted Yu commented on HBASE-20136:


The assertion above the catch is supposed to fail (due to the bad comparator):
{code}
 assertTrue(count++ == k.getTimestamp());
{code}
That is why AssertionError is caught.
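
A tiny, self-contained illustration of the pattern (not the TestKeyValue code): an 
assertion that is expected to fail, with the failure caught as java.lang.AssertionError, 
which is what org.junit.Assert throws.
{code}
// Self-contained illustration of an expected assertion failure caught as AssertionError.
import static org.junit.Assert.assertTrue;

public class ExpectedAssertionFailureSketch {
  public static void main(String[] args) {
    boolean caught = false;
    try {
      assertTrue(1 == 2); // deliberately false, mirroring the bad-comparator case
    } catch (AssertionError expected) {
      caught = true;
    }
    System.out.println("assertion failed as expected: " + caught);
  }
}
{code}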

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: CE
> Attachments: 20136.v1.txt, 20136.v2.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20108) `hbase zkcli` falls into a non-interactive prompt after HBASE-15199

2018-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387185#comment-16387185
 ] 

Hudson commented on HBASE-20108:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4697 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4697/])
HBASE-20108 Remove jline exclusion from ZooKeeper (elserj: rev 
2402f1fd43fbe04ecce8bac67d31931251bcac6c)
* (edit) hbase-assembly/pom.xml
* (edit) hbase-assembly/src/main/assembly/hadoop-two-compat.xml
* (edit) bin/hbase
* (edit) pom.xml
* (edit) bin/hbase.cmd


> `hbase zkcli` falls into a non-interactive prompt after HBASE-15199
> ---
>
> Key: HBASE-20108
> URL: https://issues.apache.org/jira/browse/HBASE-20108
> Project: HBase
>  Issue Type: Bug
>  Components: Usability
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20108.001.branch-2.patch, 
> HBASE-20108.002.branch-2.patch, HBASE-20108.003.branch-2.patch
>
>
> HBASE-15199 pulls the jruby-complete jar out of the normal classpath for 
> commands run in HBase. Jruby-complete bundles jline inside. ZK uses jline for 
> its nice shell-like usage.
> The problem is that this uncovered a bug where we're not explicitly bundling 
> a version of jline to make sure that {{hbase zkcli}} actually works. As long 
> as we're expecting {{zkcli}} to be there, we should provide jline on the 
> classpath to make sure the users get a real cli.
> Thanks to [~sergey.soldatov] for getting to the bottom of it quickly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18135) Track file archival for low latency space quota with snapshots

2018-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387186#comment-16387186
 ] 

Hudson commented on HBASE-18135:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4697 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4697/])
HBASE-18135 Implement mechanism for RegionServers to report file (elserj: rev 
4a4c0120494757539d680c2d7d44fe6ab3d71d27)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileArchiverNotifier.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileArchiverNotifierFactoryImpl.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/SnapshotQuotaObserverChore.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestSnapshotQuotaObserverChore.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestFileArchiverNotifierImpl.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java
* (edit) hbase-protocol-shaded/src/main/protobuf/RegionServerStatus.proto
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestLowLatencySpaceQuotas.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileArchiverNotifierFactory.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/RegionServerSpaceQuotaManager.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/FileArchiverNotifierImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java


> Track file archival for low latency space quota with snapshots
> --
>
> Key: HBASE-18135
> URL: https://issues.apache.org/jira/browse/HBASE-18135
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18135.001.patch, HBASE-18135.002.patch, 
> HBASE-18135.004.patch, HBASE-18135.005.patch
>
>
> Related to the work proposed on HBASE-17748 and building on the same idea as 
> HBASE-18133, we can make the space quota tracking for HBase snapshots faster 
> to respond.
> When snapshots are in play, the location of a file (whether in the {{data}} 
> or {{archive}} directory) factors into the realized size of a table. 
> Like flushes, compactions, etc, moving files from the data directory to the 
> archive directory is done by the RegionServer. We can hook into this call and 
> send the necessary information to the Master so that it can more quickly 
> update the size of a table when there are snapshots in play.
> This will require the RegionServer to report the full coordinates of the file 
> being moved (table+region+family+file) so that the SnapshotQuotaObserverChore 
> running in the master can avoid HDFS lookups in partial or total to compute 
> the location of a Region's hfiles.
> This may also require some refactoring of the SnapshotQuotaObserverChore to 
> decouple the receipt of these file archival reports from RegionServers (e.g. 
> {{HRegionFileSystem.removeStoreFiles(..)}}) and the Master's processing of 
> the sizes of snapshots.
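
A hypothetical sketch of the report shape described above; the real FileArchiverNotifier 
interface added by this patch may look quite different. All names below are invented for 
illustration; it only shows the RegionServer handing the Master the table + region + 
family + file coordinates (plus sizes) of archived store files.
{code}
// Hypothetical sketch only; names are invented and do not match the real patch.
import java.util.Arrays;
import java.util.Collection;

public class FileArchivalReportSketch {

  /** One archived store file and its size. */
  static final class ArchivedFile {
    final String table, region, family, fileName;
    final long sizeInBytes;
    ArchivedFile(String table, String region, String family, String fileName, long size) {
      this.table = table; this.region = region; this.family = family;
      this.fileName = fileName; this.sizeInBytes = size;
    }
  }

  /** Stand-in for the Master-side receiver that updates table/snapshot sizes. */
  interface ArchivalReportReceiver {
    void filesArchived(Collection<ArchivedFile> files);
  }

  public static void main(String[] args) {
    ArchivalReportReceiver receiver = files -> files.forEach(f ->
        System.out.printf("archived %s/%s/%s/%s (%d bytes)%n",
            f.table, f.region, f.family, f.fileName, f.sizeInBytes));
    // A RegionServer would send something like this after moving files to archive/.
    receiver.filesArchived(Arrays.asList(
        new ArchivedFile("TestTable", "0453f29030757eedb6e6a1c57e88c085", "info",
            "hfile-123", 64L * 1024 * 1024)));
  }
}
{code}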



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387168#comment-16387168
 ] 

Duo Zhang commented on HBASE-20136:
---

{code}
-} catch (junit.framework.AssertionFailedError e) {
+} catch (java.lang.AssertionError e) {
{code}

This is a bit strange but not your fault. +1 if pre commit says OK.

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: CE
> Attachments: 20136.v1.txt, 20136.v2.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387165#comment-16387165
 ] 

Hadoop QA commented on HBASE-20136:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
36s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} hbase-common: The patch generated 1 new + 2 unchanged 
- 1 fixed = 3 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m  
7s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  8m  
5s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 
29s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 6s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20136 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913135/20136.v2.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux d21aa48c2242 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| 

[jira] [Updated] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20136:
---
Attachment: 20136.v2.txt

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: CE
> Attachments: 20136.v1.txt, 20136.v2.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19945) Make TestRSGroups LargeTest to prevent timeout

2018-03-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19945:
--
Labels: CE  (was: )

> Make TestRSGroups LargeTest to prevent timeout
> --
>
> Key: HBASE-19945
> URL: https://issues.apache.org/jira/browse/HBASE-19945
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Major
>  Labels: CE
> Attachments: 19945.v1.txt, 19945.v2.txt
>
>
> TestRSGroups is annotated as MediumTests. It times out on Jenkins:
> https://builds.apache.org/job/HBase-Trunk_matrix/4537/jdk=JDK%201.8%20(latest),label=(Hadoop%20&&%20!H5)/testReport/junit/org.apache.hadoop.hbase.rsgroup/TestRSGroups/org_apache_hadoop_hbase_rsgroup_TestRSGroups/
> {code}
> org.junit.runners.model.TestTimedOutException: test timed out after 180 
> seconds
> {code}
> The above is reproducible on Linux locally.
> TestRSGroups should be made LargeTest.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20121) Fix findbugs warning for RestoreTablesClient

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387112#comment-16387112
 ] 

Hadoop QA commented on HBASE-20121:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
20s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hbase-backup in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m 
11s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  8m 
23s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 
41s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} hbase-backup generated 0 new + 0 unchanged - 1 fixed 
= 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m  
7s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20121 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913128/20121.v3.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e80499661bc3 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64

[jira] [Updated] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20136:
--
Labels: CE  (was: )

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: CE
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387085#comment-16387085
 ] 

stack commented on HBASE-20136:
---

Thanks [~Apache9] for identifying the actual issue.

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387060#comment-16387060
 ] 

Duo Zhang commented on HBASE-20136:
---

It seems the problem is that there is no small|medium|large category for this 
class, so in the pre-commit or nightly build processing we never run it... We 
first run SmallTests, and then MediumTests and LargeTests.

[~tedyu] Could you please also change the test to junit4 style? It is not hard 
work: just remove 'extends TestCase' and use org.junit.Assert.XXX.

Thanks.
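
For reference, a sketch of the junit4 shape being asked for (the package, constructor 
arguments, and test body are illustrative, not the actual TestKeyValue contents): no 
'extends TestCase', plus the Category and ClassRule annotations that the 
small/medium/large test selection and HBaseClassTestRuleChecker rely on.
{code}
// Sketch of the junit4 shape; the test body is illustrative, not the real TestKeyValue.
package org.apache.hadoop.hbase;

import static org.junit.Assert.assertEquals;

import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.experimental.categories.Category;

@Category(SmallTests.class)
public class TestKeyValue {

  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(TestKeyValue.class);

  @Test
  public void testTimestampRoundTrips() {
    KeyValue kv = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), 1L, Bytes.toBytes("value"));
    assertEquals(1L, kv.getTimestamp());
  }
}
{code}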

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20129) Add UT for serial replication checker

2018-03-05 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20129:
--
Attachment: HBASE-20129-v1.patch

> Add UT for serial replication checker
> -
>
> Key: HBASE-20129
> URL: https://issues.apache.org/jira/browse/HBASE-20129
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-20129-v1.patch, HBASE-20129.patch
>
>
> Now it is a separate class, so it is much easier to write UTs to test the 
> corner cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20121) Fix findbugs warning for RestoreTablesClient

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20121:
---
Attachment: 20121.v3.txt

> Fix findbugs warning for RestoreTablesClient
> 
>
> Key: HBASE-20121
> URL: https://issues.apache.org/jira/browse/HBASE-20121
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 20121.v1.txt, 20121.v2.txt, 20121.v3.txt
>
>
> In RestoreTablesClient#restore(), the following variable is not used:
> {code}
> Set backupIdSet = new HashSet<>();
> {code}
> There is a backupIdSet#add() call later in the method, but the variable 
> doesn't appear in any other part of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20129) Add UT for serial replication checker

2018-03-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387053#comment-16387053
 ] 

Duo Zhang commented on HBASE-20129:
---

Fix checkstyle issues.

> Add UT for serial replication checker
> -
>
> Key: HBASE-20129
> URL: https://issues.apache.org/jira/browse/HBASE-20129
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-20129-v1.patch, HBASE-20129.patch
>
>
> Now it is a separate class, so it is much easier to write UTs to test the 
> corner cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387051#comment-16387051
 ] 

Ted Yu commented on HBASE-20136:


Not in a hurry to commit.

bq. the rules actually work

I think they do, when the above mvn command is used. Otherwise there wouldn't 
be an ArrayIndexOutOfBoundsException.

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387037#comment-16387037
 ] 

Duo Zhang commented on HBASE-20136:
---

Hold on committing. This is a junit3-style test, and I wonder whether the 
rules actually work. Need to verify. Or can we rewrite the test in junit4 
style?

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387035#comment-16387035
 ] 

Ted Yu commented on HBASE-20136:


bq. Why does the build not fail?

The following command can reproduce the test error:
{code}
mvn -B -nsu test -Dtest=TestKeyValue --projects :hbase-common
{code}
For the hbase Jenkins builds, we don't use that form of invocation.

bq. Why did HBaseClassTestRuleChecker not find this?

The ArrayIndexOutOfBoundsException was from HBaseClassTestRuleChecker.
See the snippet from the dump file:
{code}
# Created on 2018-03-05T22:25:50.252
org.apache.maven.surefire.testset.TestSetFailedException: Test mechanism :: 0
at 
org.apache.maven.surefire.common.junit4.JUnit4RunListener.rethrowAnyTestMechanismFailures(JUnit4RunListener.java:223)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:168)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.hadoop.hbase.HBaseClassTestRuleChecker.testStarted(HBaseClassTestRuleChecker.java:45)
at 
org.junit.runner.notification.RunNotifier$3.notifyListener(RunNotifier.java:121)
at 
org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
at 
org.junit.runner.notification.RunNotifier.fireTestStarted(RunNotifier.java:118)
{code}

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20121) Fix findbugs warning for RestoreTablesClient

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387031#comment-16387031
 ] 

Ted Yu commented on HBASE-20121:


[~mdrob]:
Do you have any more questions?

> Fix findbugs warning for RestoreTablesClient
> 
>
> Key: HBASE-20121
> URL: https://issues.apache.org/jira/browse/HBASE-20121
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 20121.v1.txt, 20121.v2.txt
>
>
> In RestoreTablesClient#restore(), the following variable is not used:
> {code}
> Set backupIdSet = new HashSet<>();
> {code}
> There is a backupIdSet#add() call later in the method, but the variable 
> doesn't appear in any other part of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387017#comment-16387017
 ] 

Ted Yu commented on HBASE-16179:


w.r.t. the warning on @transient, I cannot get rid of the error in the 
following place:
{code}
[ERROR] 
/Users/tyu/master/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/NewHBaseRDD.scala:31:
 error: overriding method conf in class RDD of type => 
org.apache.spark.SparkConf;
[ERROR]  value conf has weaker access privileges; it should not be private
[ERROR]@transient private val conf: Configuration,
[ERROR]   ^
{code}

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v33.txt, 16179.v34.txt, 
> 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, 
> HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387006#comment-16387006
 ] 

stack commented on HBASE-20136:
---

Why did HBaseClassTestRuleChecker not find this?

Why does the build not fail?



> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16387001#comment-16387001
 ] 

Hadoop QA commented on HBASE-20136:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
11s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
38s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20136 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913117/20136.v1.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 7c2d0c1b9ba0 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4a4c012049 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11817/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1

[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386980#comment-16386980
 ] 

Umesh Agashe commented on HBASE-20136:
--

+1 lgtm

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386948#comment-16386948
 ] 

Peter Somogyi commented on HBASE-20136:
---

+1

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20136:
---
Status: Patch Available  (was: Open)

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20136:
---
Attachment: 20136.v1.txt

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 20136.v1.txt
>
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386933#comment-16386933
 ] 

Ted Yu commented on HBASE-20136:


This was discovered due to the following test error:
{code}
Tests run: 18, Failures: 0, Errors: 18, Skipped: 0, Time elapsed: 0.339 s <<< 
FAILURE! - in org.apache.hadoop.hbase.TestKeyValue
Test mechanism  Time elapsed: 0.024 s  <<< ERROR!
java.lang.ArrayIndexOutOfBoundsException: 0
{code}

> TestKeyValue misses ClassRule and Category annotations
> --
>
> Key: HBASE-20136
> URL: https://issues.apache.org/jira/browse/HBASE-20136
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>
> hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
> ClassRule and Category annotations.
> This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20136) TestKeyValue misses ClassRule and Category annotations

2018-03-05 Thread Ted Yu (JIRA)
Ted Yu created HBASE-20136:
--

 Summary: TestKeyValue misses ClassRule and Category annotations
 Key: HBASE-20136
 URL: https://issues.apache.org/jira/browse/HBASE-20136
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu


hbase-common/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java misses 
ClassRule and Category annotations.

This issue adds the annotations to this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386922#comment-16386922
 ] 

Mike Drob commented on HBASE-16179:
---

I've been doing some reading on this for you and found 
https://issues.scala-lang.org/browse/SI-8813
https://issues.scala-lang.org/browse/SI-6325

The last comment on SI-8813 provides a good clue.

Looking at NewHadoopRDD in the Spark core codebase, I found that it looks like this:
{noformat}
@DeveloperApi
class NewHadoopRDD[K, V](
sc : SparkContext,
inputFormatClass: Class[_ <: InputFormat[K, V]],
keyClass: Class[K],
valueClass: Class[V],
@transient private val _conf: Configuration)
  extends RDD[(K, V)](sc, Nil) with Logging {
{noformat}

I'm not sure whether we need to back off the @transient annotation on some of our 
own fields, or whether we need to explicitly declare them as {{private val}} 
now. Can you experiment with it and let us know what works? It would be a good 
idea to run IntegrationTestSparkBulkLoad as a sanity check too.

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v33.txt, 16179.v34.txt, 
> 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, 
> HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20121) Fix findbugs warning for RestoreTablesClient

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386912#comment-16386912
 ] 

Hadoop QA commented on HBASE-20121:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
52s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} hbase-backup in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} hbase-backup: The patch generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
41s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} hbase-backup generated 0 new + 0 unchanged - 1 fixed 
= 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
24s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20121 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913110/20121.v2.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux c32699e4d6e3 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2402f1fd43 |
| maven | version: Apache Maven 

[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386890#comment-16386890
 ] 

Ted Yu commented on HBASE-16179:


I got the following when adopting the suggestion from the scaladoc warning:
{code}
[ERROR] 
/Users/tyu/master/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/NewHBaseRDD.scala:27:
 error: not found: type param
[ERROR] class NewHBaseRDD[K,V](@(transient @param) sc : SparkContext,
[ERROR] ^
[ERROR] 
/Users/tyu/master/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/NewHBaseRDD.scala:28:
 error: not found: type param
[ERROR]@(transient @param) inputFormatClass: Class[_ <: 
InputFormat[K, V]],
{code}

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v33.txt, 16179.v34.txt, 
> 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, 
> HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17449) Add explicit document on different timeout settings

2018-03-05 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386883#comment-16386883
 ] 

Peter Somogyi commented on HBASE-17449:
---

Thanks for the comments, [~appy]! Setting the delays at startup doesn't work because 
table creation will fail with these config properties set.

> Add explicit document on different timeout settings
> ---
>
> Key: HBASE-17449
> URL: https://issues.apache.org/jira/browse/HBASE-17449
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 0001-Standalone.patch, HBASE-17449.master.002.patch, 
> WIP-HBASE-17499.master.001.patch
>
>
> Currently we have more than one timeout settings, mainly includes:
> * hbase.rpc.timeout
> * hbase.client.operation.timeout
> * hbase.client.scanner.timeout.period
> And in latest branch-1 or master branch code, we will have two other 
> properties:
> * hbase.rpc.read.timeout
> * hbase.rpc.write.timeout
> However, in the current refguide we don't have explicit instruction on the 
> difference between these timeout settings (there are explanations for each 
> property, but no instruction on when to use which).
> In my understanding, for the RPC-layer timeout, i.e. each rpc call:
> * Scan (openScanner/next): controlled by hbase.client.scanner.timeout.period
> * Other operations:
>1. For released versions: controlled by hbase.rpc.timeout
>2. For 1.4+ versions: read operations controlled by hbase.rpc.read.timeout, 
> write operations controlled by hbase.rpc.write.timeout, or hbase.rpc.timeout 
> if the previous two are not set.
> And hbase.client.operation.timeout is a higher-level control that counts retries 
> in, i.e. the overall limit for one user call.
> After this JIRA, I hope when users ask questions like "What settings I should 
> use if I don't want to wait for more than 1 second for a single 
> put/get/scan.next call", we could give a neat answer.
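
To make the distinction concrete, here is a minimal client-side sketch of pinning the timeouts discussed above; the one-second and three-second values and the table/row names are illustrative assumptions only, not part of the proposed doc patch.
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Each RPC is capped at 1s, the whole user call (retries included) at 3s,
// and scan RPCs at 1s; values and names are examples only.
public class TimeoutExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.rpc.timeout", 1000);                    // per RPC call
    conf.setInt("hbase.client.operation.timeout", 3000);       // whole operation, retries included
    conf.setInt("hbase.client.scanner.timeout.period", 1000);  // scanner openScanner/next calls
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("example_table"))) {
      table.get(new Get(Bytes.toBytes("example_row")));
    }
  }
}
{code}
A setup like this would answer the "no more than 1 second per put/get/scan.next" question at the RPC layer while still bounding the whole retried call separately.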



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-17449) Add explicit document on different timeout settings

2018-03-05 Thread Peter Somogyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-17449:
--
Attachment: HBASE-17449.master.002.patch

> Add explicit document on different timeout settings
> ---
>
> Key: HBASE-17449
> URL: https://issues.apache.org/jira/browse/HBASE-17449
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 0001-Standalone.patch, HBASE-17449.master.002.patch, 
> WIP-HBASE-17499.master.001.patch
>
>
> Currently we have more than one timeout settings, mainly includes:
> * hbase.rpc.timeout
> * hbase.client.operation.timeout
> * hbase.client.scanner.timeout.period
> And in latest branch-1 or master branch code, we will have two other 
> properties:
> * hbase.rpc.read.timeout
> * hbase.rpc.write.timeout
> However, in the current refguide we don't have explicit instruction on the 
> difference between these timeout settings (there are explanations for each 
> property, but no instruction on when to use which).
> In my understanding, for the RPC-layer timeout, i.e. each rpc call:
> * Scan (openScanner/next): controlled by hbase.client.scanner.timeout.period
> * Other operations:
>1. For released versions: controlled by hbase.rpc.timeout
>2. For 1.4+ versions: read operations controlled by hbase.rpc.read.timeout, 
> write operations controlled by hbase.rpc.write.timeout, or hbase.rpc.timeout 
> if the previous two are not set.
> And hbase.client.operation.timeout is a higher-level control that counts retries 
> in, i.e. the overall limit for one user call.
> After this JIRA, I hope when users ask questions like "What settings I should 
> use if I don't want to wait for more than 1 second for a single 
> put/get/scan.next call", we could give a neat answer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20090:
---
Attachment: (was: 20090.v5.txt)

> Properly handle Preconditions check failure in 
> MemStoreFlusher$FlushHandler.run
> ---
>
> Key: HBASE-20090
> URL: https://issues.apache.org/jira/browse/HBASE-20090
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20090-server-61260-01-07.log, 20090.v6.txt
>
>
> Copied the following from a comment since it is a better description of the 
> race condition.
> The original description was merged into the beginning of my first comment 
> below.
> With more debug logging, we can see the scenario where the exception was 
> triggered.
> {code}
> 2018-03-02 17:28:30,097 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit: 
> Splitting TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085., 
> compaction_queue=(0:0), split_queue=1
> 2018-03-02 17:28:30,098 DEBUG 
> [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=16020] 
> regionserver.IncreasingToUpperBoundRegionSplitPolicy: ShouldSplit because 
> info  size=6.9G, sizeToCheck=256.0M, regionsWithCommonTable=1
> 2018-03-02 17:28:30,296 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,297 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush thread woke up because memory above low 
> water=381.5 M
> 2018-03-02 17:28:30,297 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085. with size 400432696
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. with size 0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush of region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. due to global
>  heap pressure. Flush type=ABOVE_ONHEAP_LOWER_MARKTotal Memstore Heap 
> size=381.9 MTotal Memstore Off-Heap size=0, Region memstore size=0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: wake up by WAKEUPFLUSH_INSTANCE
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Nothing to flush for 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae.
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Excluding unflushable region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. -trying to 
> find a different region to flush.
> {code}
> Region 0453f29030757eedb6e6a1c57e88c085 was being split.
> In HRegion#flushcache, the log from else branch can be seen in 
> 20090-server-61260-01-07.log :
> {code}
>   synchronized (writestate) {
> if (!writestate.flushing && writestate.writesEnabled) {
>   this.writestate.flushing = true;
> } else {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("NOT flushing memstore for region " + this
> + ", flushing=" + writestate.flushing + ", writesEnabled="
> + writestate.writesEnabled);
>   }
> {code}
> Meaning, region 0453f29030757eedb6e6a1c57e88c085 couldn't flush, leaving 
> memory pressure at high level.
> When MemStoreFlusher ran to the following call, the region was no longer a 
> flush candidate:
> {code}
>   HRegion bestFlushableRegion =
>   getBiggestMemStoreRegion(regionsBySize, excludedRegions, true);
> {code}
> So the other region, 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. , was examined 
> next. Since the region was not receiving writes, the (current) Precondition 
> check failed.
> The proposed fix is to convert the Precondition to normal return.
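
For readers skimming, a rough sketch of what "convert the Precondition to normal return" could mean in isolation is below; it shows only the general pattern, the class, method, and parameter names are invented, and it is not the attached patch.
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// General pattern only, not the actual MemStoreFlusher code.
public class FlushCandidateSketch {
  private static final Logger LOG = LoggerFactory.getLogger(FlushCandidateSketch.class);

  // Before: Preconditions.checkState(bestRegion != null, ...) throws
  // IllegalStateException when the race above leaves no flushable candidate.
  // After: treat "no candidate" as a normal outcome and let the flusher retry later.
  boolean flushBestCandidate(Object bestRegion) {
    if (bestRegion == null) {
      LOG.warn("Above memory mark but no region is currently flushable; will retry");
      return false;
    }
    // ... queue the flush of bestRegion ...
    return true;
  }
}
{code}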



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20090:
---
Attachment: (was: 20090.v4.txt)

> Properly handle Preconditions check failure in 
> MemStoreFlusher$FlushHandler.run
> ---
>
> Key: HBASE-20090
> URL: https://issues.apache.org/jira/browse/HBASE-20090
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20090-server-61260-01-07.log, 20090.v6.txt
>
>
> Copied the following from a comment since it is a better description of the 
> race condition.
> The original description was merged into the beginning of my first comment 
> below.
> With more debug logging, we can see the scenario where the exception was 
> triggered.
> {code}
> 2018-03-02 17:28:30,097 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit: 
> Splitting TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085., 
> compaction_queue=(0:0), split_queue=1
> 2018-03-02 17:28:30,098 DEBUG 
> [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=16020] 
> regionserver.IncreasingToUpperBoundRegionSplitPolicy: ShouldSplit because 
> info  size=6.9G, sizeToCheck=256.0M, regionsWithCommonTable=1
> 2018-03-02 17:28:30,296 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,297 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush thread woke up because memory above low 
> water=381.5 M
> 2018-03-02 17:28:30,297 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085. with size 400432696
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. with size 0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush of region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. due to global
>  heap pressure. Flush type=ABOVE_ONHEAP_LOWER_MARKTotal Memstore Heap 
> size=381.9 MTotal Memstore Off-Heap size=0, Region memstore size=0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: wake up by WAKEUPFLUSH_INSTANCE
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Nothing to flush for 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae.
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Excluding unflushable region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. -trying to 
> find a different region to flush.
> {code}
> Region 0453f29030757eedb6e6a1c57e88c085 was being split.
> In HRegion#flushcache, the log from else branch can be seen in 
> 20090-server-61260-01-07.log :
> {code}
>   synchronized (writestate) {
> if (!writestate.flushing && writestate.writesEnabled) {
>   this.writestate.flushing = true;
> } else {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("NOT flushing memstore for region " + this
> + ", flushing=" + writestate.flushing + ", writesEnabled="
> + writestate.writesEnabled);
>   }
> {code}
> Meaning, region 0453f29030757eedb6e6a1c57e88c085 couldn't flush, leaving 
> memory pressure at high level.
> When MemStoreFlusher ran to the following call, the region was no longer a 
> flush candidate:
> {code}
>   HRegion bestFlushableRegion =
>   getBiggestMemStoreRegion(regionsBySize, excludedRegions, true);
> {code}
> So the other region, 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. , was examined 
> next. Since the region was not receiving writes, the (current) Precondition 
> check failed.
> The proposed fix is to convert the Precondition to normal return.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20090:
---
Attachment: 20090.v6.txt

> Properly handle Preconditions check failure in 
> MemStoreFlusher$FlushHandler.run
> ---
>
> Key: HBASE-20090
> URL: https://issues.apache.org/jira/browse/HBASE-20090
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20090-server-61260-01-07.log, 20090.v6.txt
>
>
> Copied the following from a comment since it is a better description of the 
> race condition.
> The original description was merged into the beginning of my first comment 
> below.
> With more debug logging, we can see the scenario where the exception was 
> triggered.
> {code}
> 2018-03-02 17:28:30,097 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit: 
> Splitting TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085., 
> compaction_queue=(0:0), split_queue=1
> 2018-03-02 17:28:30,098 DEBUG 
> [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=16020] 
> regionserver.IncreasingToUpperBoundRegionSplitPolicy: ShouldSplit because 
> info  size=6.9G, sizeToCheck=256.0M, regionsWithCommonTable=1
> 2018-03-02 17:28:30,296 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,297 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush thread woke up because memory above low 
> water=381.5 M
> 2018-03-02 17:28:30,297 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085. with size 400432696
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. with size 0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush of region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. due to global
>  heap pressure. Flush type=ABOVE_ONHEAP_LOWER_MARKTotal Memstore Heap 
> size=381.9 MTotal Memstore Off-Heap size=0, Region memstore size=0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: wake up by WAKEUPFLUSH_INSTANCE
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Nothing to flush for 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae.
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Excluding unflushable region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. -trying to 
> find a different region to flush.
> {code}
> Region 0453f29030757eedb6e6a1c57e88c085 was being split.
> In HRegion#flushcache, the log from else branch can be seen in 
> 20090-server-61260-01-07.log :
> {code}
>   synchronized (writestate) {
> if (!writestate.flushing && writestate.writesEnabled) {
>   this.writestate.flushing = true;
> } else {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("NOT flushing memstore for region " + this
> + ", flushing=" + writestate.flushing + ", writesEnabled="
> + writestate.writesEnabled);
>   }
> {code}
> Meaning, region 0453f29030757eedb6e6a1c57e88c085 couldn't flush, leaving 
> memory pressure at high level.
> When MemStoreFlusher ran to the following call, the region was no longer a 
> flush candidate:
> {code}
>   HRegion bestFlushableRegion =
>   getBiggestMemStoreRegion(regionsBySize, excludedRegions, true);
> {code}
> So the other region, 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. , was examined 
> next. Since the region was not receiving writes, the (current) Precondition 
> check failed.
> The proposed fix is to convert the Precondition to normal return.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18135) Track file archival for low latency space quota with snapshots

2018-03-05 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18135:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the review, Ted.

> Track file archival for low latency space quota with snapshots
> --
>
> Key: HBASE-18135
> URL: https://issues.apache.org/jira/browse/HBASE-18135
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18135.001.patch, HBASE-18135.002.patch, 
> HBASE-18135.004.patch, HBASE-18135.005.patch
>
>
> Related to the work proposed on HBASE-17748 and building on the same idea as 
> HBASE-18133, we can make the space quota tracking for HBase snapshots faster 
> to respond.
> When snapshots are in play, the location of a file (whether in the {{data}} 
> or {{archive}} directory) plays a factor in the realized size of a table. 
> Like flushes, compactions, etc, moving files from the data directory to the 
> archive directory is done by the RegionServer. We can hook into this call and 
> send the necessary information to the Master so that it can more quickly 
> update the size of a table when there are snapshots in play.
> This will require the RegionServer to report the full coordinates of the file 
> being moved (table+region+family+file) so that the SnapshotQuotaObserverChore 
> running in the master can avoid HDFS lookups in partial or total to compute 
> the location of a Region's hfiles.
> This may also require some refactoring of the SnapshotQuotaObserverChore to 
> de-couple the receipt of these file archival reports from RegionServers (e.g. 
> {{HRegionFileSystem.removeStoreFiles(..)}}), and the Master processing the 
> sizes of snapshots.
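
To picture the report described above, here is a tiny hypothetical value type for the "full coordinates" of an archived file; every name here is invented for illustration, the size field is an assumption, and none of it is taken from the attached patches.
{code}
// Hypothetical illustration only: the table+region+family+file coordinates a
// RegionServer could report when a store file is moved to the archive directory.
public final class ArchivedFileCoordinates {
  private final String table;        // e.g. "ns:my_table"
  private final String region;       // encoded region name
  private final String family;       // column family
  private final String file;         // hfile name
  private final long sizeInBytes;    // assumed: lets the Master update sizes without HDFS lookups

  public ArchivedFileCoordinates(String table, String region, String family,
      String file, long sizeInBytes) {
    this.table = table;
    this.region = region;
    this.family = family;
    this.file = file;
    this.sizeInBytes = sizeInBytes;
  }

  public String getTable() { return table; }
  public String getRegion() { return region; }
  public String getFamily() { return family; }
  public String getFile() { return file; }
  public long getSizeInBytes() { return sizeInBytes; }
}
{code}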



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18135) Track file archival for low latency space quota with snapshots

2018-03-05 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18135:
---
Hadoop Flags: Reviewed
Release Note: Changes the manner in which file space consumption is 
reported to the Master for space quota tracking, to reduce the 
latency with which system space utilization is observed. This will have a 
positive effect on how quickly HBase reacts to changes in filesystem usage 
related to file archiving.

> Track file archival for low latency space quota with snapshots
> --
>
> Key: HBASE-18135
> URL: https://issues.apache.org/jira/browse/HBASE-18135
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18135.001.patch, HBASE-18135.002.patch, 
> HBASE-18135.004.patch, HBASE-18135.005.patch
>
>
> Related to the work proposed on HBASE-17748 and building on the same idea as 
> HBASE-18133, we can make the space quota tracking for HBase snapshots faster 
> to respond.
> When snapshots are in play, the location of a file (whether in the {{data}} 
> or {{archive}} directory) plays a factor in the realized size of a table. 
> Like flushes, compactions, etc, moving files from the data directory to the 
> archive directory is done by the RegionServer. We can hook into this call and 
> send the necessary information to the Master so that it can more quickly 
> update the size of a table when there are snapshots in play.
> This will require the RegionServer to report the full coordinates of the file 
> being moved (table+region+family+file) so that the SnapshotQuotaObserverChore 
> running in the master can avoid HDFS lookups in partial or total to compute 
> the location of a Region's hfiles.
> This may also require some refactoring of the SnapshotQuotaObserverChore to 
> de-couple the receipt of these file archival reports from RegionServers (e.g. 
> {{HRegionFileSystem.removeStoreFiles(..)}}), and the Master processing the 
> sizes of snapshots.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18135) Track file archival for low latency space quota with snapshots

2018-03-05 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386866#comment-16386866
 ] 

Josh Elser commented on HBASE-18135:


{quote}yeah, that's HBASE-20068
{quote}
Sweet. Thanks, sir!

Pushing this one in, given the Review Board review.

> Track file archival for low latency space quota with snapshots
> --
>
> Key: HBASE-18135
> URL: https://issues.apache.org/jira/browse/HBASE-18135
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18135.001.patch, HBASE-18135.002.patch, 
> HBASE-18135.004.patch, HBASE-18135.005.patch
>
>
> Related to the work proposed on HBASE-17748 and building on the same idea as 
> HBASE-18133, we can make the space quota tracking for HBase snapshots faster 
> to respond.
> When snapshots are in play, the location of a file (whether in the {{data}} 
> or {{archive}} directory) plays a factor in the realized size of a table. 
> Like flushes, compactions, etc, moving files from the data directory to the 
> archive directory is done by the RegionServer. We can hook into this call and 
> send the necessary information to the Master so that it can more quickly 
> update the size of a table when there are snapshots in play.
> This will require the RegionServer to report the full coordinates of the file 
> being moved (table+region+family+file) so that the SnapshotQuotaObserverChore 
> running in the master can avoid HDFS lookups in partial or total to compute 
> the location of a Region's hfiles.
> This may also require some refactoring of the SnapshotQuotaObserverChore to 
> de-couple the receipt of these file archival reports from RegionServers (e.g. 
> {{HRegionFileSystem.removeStoreFiles(..)}}), and the Master processing the 
> sizes of snapshots.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386860#comment-16386860
 ] 

Mike Drob commented on HBASE-16179:
---

The hadoopcheck failure is likely HBASE-20068.

I'm happy with this after we fix the precommit scaladoc issue.

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v33.txt, 16179.v34.txt, 
> 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, 
> HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18135) Track file archival for low latency space quota with snapshots

2018-03-05 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386858#comment-16386858
 ] 

Sean Busbey commented on HBASE-18135:
-

yeah, that's HBASE-20068

> Track file archival for low latency space quota with snapshots
> --
>
> Key: HBASE-18135
> URL: https://issues.apache.org/jira/browse/HBASE-18135
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18135.001.patch, HBASE-18135.002.patch, 
> HBASE-18135.004.patch, HBASE-18135.005.patch
>
>
> Related to the work proposed on HBASE-17748 and building on the same idea as 
> HBASE-18133, we can make the space quota tracking for HBase snapshots faster 
> to respond.
> When snapshots are in play, the location of a file (whether in the {{data}} 
> or {{archive}} directory) plays a factor in the realized size of a table. 
> Like flushes, compactions, etc, moving files from the data directory to the 
> archive directory is done by the RegionServer. We can hook into this call and 
> send the necessary information to the Master so that it can more quickly 
> update the size of a table when there are snapshots in play.
> This will require the RegionServer to report the full coordinates of the file 
> being moved (table+region+family+file) so that the SnapshotQuotaObserverChore 
> running in the master can avoid HDFS lookups in partial or total to compute 
> the location of a Region's hfiles.
> This may also require some refactoring of the SnapshotQuotaObserverChore to 
> de-couple the receipt of these file archival reports from RegionServers (e.g. 
> {{HRegionFileSystem.removeStoreFiles(..)}}), and the Master processing the 
> sizes of snapshots.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18135) Track file archival for low latency space quota with snapshots

2018-03-05 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386853#comment-16386853
 ] 

Josh Elser commented on HBASE-18135:


{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-install-plugin:2.5.2:install (default-install) 
on project hbase-thrift: Failed to install metadata 
org.apache.hbase:hbase-thrift:3.0.0-SNAPSHOT/maven-metadata.xml: Could not 
parse metadata 
/home/jenkins/.m2/repository/org/apache/hbase/hbase-thrift/3.0.0-SNAPSHOT/maven-metadata-local.xml:
 in epilog non whitespace content is not allowed but got / (position: END_TAG 
seen ...\n/... @25:2)  -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hbase-thrift{noformat}
The hadoopcheck failures are all due to the above. I'm checking it locally, but I 
think it's just an issue on the build machine. [~busbey], does this ring any 
bells for you?

> Track file archival for low latency space quota with snapshots
> --
>
> Key: HBASE-18135
> URL: https://issues.apache.org/jira/browse/HBASE-18135
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-18135.001.patch, HBASE-18135.002.patch, 
> HBASE-18135.004.patch, HBASE-18135.005.patch
>
>
> Related to the work proposed on HBASE-17748 and building on the same idea as 
> HBASE-18133, we can make the space quota tracking for HBase snapshots faster 
> to respond.
> When snapshots are in play, the location of a file (whether in the {{data}} 
> or {{archive}} directory) plays a factor in the realized size of a table. 
> Like flushes, compactions, etc, moving files from the data directory to the 
> archive directory is done by the RegionServer. We can hook into this call and 
> send the necessary information to the Master so that it can more quickly 
> update the size of a table when there are snapshots in play.
> This will require the RegionServer to report the full coordinates of the file 
> being moved (table+region+family+file) so that the SnapshotQuotaObserverChore 
> running in the master can avoid HDFS lookups in partial or total to compute 
> the location of a Region's hfiles.
> This may also require some refactoring of the SnapshotQuotaObserverChore to 
> de-couple the receipt of these file archival reports from RegionServers (e.g. 
> {{HRegionFileSystem.removeStoreFiles(..)}}), and the Master processing the 
> sizes of snapshots.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18467) nightly job needs to run all stages and then comment on jira

2018-03-05 Thread Francis Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francis Liu updated HBASE-18467:

Fix Version/s: (was: 1.3.2)
   1.3.3

> nightly job needs to run all stages and then comment on jira
> 
>
> Key: HBASE-18467
> URL: https://issues.apache.org/jira/browse/HBASE-18467
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 1.3.3, 1.2.8, 1.4.3
>
> Attachments: HBASE-18467.0.WIP.patch, HBASE-18467.0.patch, 
> HBASE-18467.1.patch, HBASE-18467.1.patch, HBASE-18467.2.patch, 
> HBASE-18467.3.patch, HBASE-18467.4.patch, HBASE-18467.5.patch, 
> HBASE-18467.6.patch
>
>
> follow on from HBASE-18147, need a post action that pings all newly-committed 
> jiras with result of the branch build



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386844#comment-16386844
 ] 

Ted Yu commented on HBASE-16179:


[~busbey]:
See if you can spare some time to take a look.

[~mdrob]:
Do you have more comments?

Thanks

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v33.txt, 16179.v34.txt, 
> 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, 
> HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20121) Fix findbugs warning for RestoreTablesClient

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20121:
---
Attachment: 20121.v2.txt

> Fix findbugs warning for RestoreTablesClient
> 
>
> Key: HBASE-20121
> URL: https://issues.apache.org/jira/browse/HBASE-20121
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 20121.v1.txt, 20121.v2.txt
>
>
> In RestoreTablesClient#restore(), the following variable is not used:
> {code}
> Set backupIdSet = new HashSet<>();
> {code}
> There is a backupIdSet#add() call later in the method, but the variable doesn't 
> appear in any other part of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20121) Fix findbugs warning for RestoreTablesClient

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386827#comment-16386827
 ] 

Ted Yu commented on HBASE-20121:


Mike:
Thanks for the comment.
Incremental restore is handled by this call in restoreImages():
{code}
restoreTool.incrementalRestoreTable(conn, tableBackupPath, paths, new 
TableName[] { sTable },
{code}
There have been several refactorings since I initially implemented bulk load 
support.

> Fix findbugs warning for RestoreTablesClient
> 
>
> Key: HBASE-20121
> URL: https://issues.apache.org/jira/browse/HBASE-20121
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 20121.v1.txt
>
>
> In RestoreTablesClient#restore(), the following variable is not used:
> {code}
> Set backupIdSet = new HashSet<>();
> {code}
> There is a backupIdSet#add() call later in the method, but the variable doesn't 
> appear in any other part of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20121) Fix findbugs warning for RestoreTablesClient

2018-03-05 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386824#comment-16386824
 ] 

Vladimir Rodionov commented on HBASE-20121:
---

{quote}
So the behavior of incremental and non-incremental restore ends up being the 
same?
{quote}
That was an old code artefact which has slipped through numerous code reviews. It should 
be removed completely, including this:
{code}
if (image.getType() == BackupType.INCREMENTAL) {
  backupIdSet.add(image.getBackupId());
  LOG.debug("adding " + image.getBackupId() + " for bulk load");
}
{code}

> Fix findbugs warning for RestoreTablesClient
> 
>
> Key: HBASE-20121
> URL: https://issues.apache.org/jira/browse/HBASE-20121
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 20121.v1.txt
>
>
> In RestoreTablesClient#restore(), the following variable is not used:
> {code}
> Set backupIdSet = new HashSet<>();
> {code}
> There is backupIdSet#add() call later in the method but the variable doesn't 
> appear in any other part of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20123) Backup test fails against hadoop 3

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386772#comment-16386772
 ] 

Ted Yu edited comment on HBASE-20123 at 3/5/18 9:52 PM:


Thanks Steve for taking a look.

I gave +1 for HADOOP-15289.


was (Author: yuzhih...@gmail.com):
Thanks Steve for taking a look.

HADOOP-15290 has been logged.

> Backup test fails against hadoop 3
> --
>
> Key: HBASE-20123
> URL: https://issues.apache.org/jira/browse/HBASE-20123
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Major
>
> When running backup unit test against hadoop3, I saw:
> {code}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 88.862 s <<< FAILURE! - in 
> org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes
> [ERROR] 
> testBackupMultipleDeletes(org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes)
>   Time elapsed: 86.206 s  <<< ERROR!
> java.io.IOException: java.io.IOException: Failed copy from 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 to 
> hdfs://localhost:40578/backupUT
>   at 
> org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes.testBackupMultipleDeletes(TestBackupMultipleDeletes.java:82)
> Caused by: java.io.IOException: Failed copy from 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 to 
> hdfs://localhost:40578/backupUT
>   at 
> org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes.testBackupMultipleDeletes(TestBackupMultipleDeletes.java:82)
> {code}
> In the test output, I found:
> {code}
> 2018-03-03 14:46:10,858 ERROR [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(237): java.io.IOException: Path 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic 
> link
> java.io.IOException: Path 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic 
> link
>   at org.apache.hadoop.fs.FileStatus.getSymlink(FileStatus.java:338)
>   at org.apache.hadoop.fs.FileStatus.readFields(FileStatus.java:461)
>   at 
> org.apache.hadoop.tools.CopyListingFileStatus.readFields(CopyListingFileStatus.java:155)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2308)
>   at 
> org.apache.hadoop.tools.CopyListing.validateFinalListing(CopyListing.java:163)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:91)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.createInputFileListing(MapReduceBackupCopyJob.java:297)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:196)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob.copy(MapReduceBackupCopyJob.java:408)
>   at 
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:348)
>   at 
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:290)
>   at 
> org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:605)
> {code}
> It seems the failure was related to how we use distcp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19075) Task tabs on master UI cause page scroll

2018-03-05 Thread Balazs Meszaros (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386814#comment-16386814
 ] 

Balazs Meszaros commented on HBASE-19075:
-

+1, it worked for me.

> Task tabs on master UI cause page scroll
> 
>
> Key: HBASE-19075
> URL: https://issues.apache.org/jira/browse/HBASE-19075
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Mike Drob
>Assignee: Sahil Aggarwal
>Priority: Major
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-19075.master.001.patch, 
> HBASE-19075.master.002.patch
>
>
> On the master info page, the clicking the tabs under Tasks causes the page to 
> scroll back to the top of the page.
> {noformat}
> Tasks
> Show All Monitored Tasks Show non-RPC Tasks Show All RPC Handler Tasks Show 
> Active RPC Calls Show Client Operations View as JSON
> {noformat}
> ^^ Any of those
> The other tab-like links on the page keep the scroll in the same location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20132) Change the "KV" to "Cell" for web UI

2018-03-05 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386805#comment-16386805
 ] 

Umesh Agashe commented on HBASE-20132:
--

+1 lgtm

> Change the "KV" to "Cell" for web UI
> 
>
> Key: HBASE-20132
> URL: https://issues.apache.org/jira/browse/HBASE-20132
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Guangxu Cheng
>Priority: Minor
>  Labels: beginner, beginners
> Fix For: 2.0.0
>
> Attachments: HBASE-20132.master.001.patch
>
>
> grep the source code. The related words which should be revised are shown 
> below.
>  # Num. Compacting KVs
>  # Num. Compacted KVs
>  # Remaining KVs
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20134) website generation uses hard-coded /tmp

2018-03-05 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386800#comment-16386800
 ] 

Mike Drob commented on HBASE-20134:
---

According to 
https://stackoverflow.com/questions/2792675/how-portable-is-mktemp1 it looks 
like it works everywhere except HP-UX?

> website generation uses hard-coded /tmp
> ---
>
> Key: HBASE-20134
> URL: https://issues.apache.org/jira/browse/HBASE-20134
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Mike Drob
>Assignee: Sean Busbey
>Priority: Major
>
> {code}
> if [ -z "${working_dir}" ]; then
>   echo "[DEBUG] defaulting to creating a directory in /tmp"
>   working_dir=/tmp
>   while [[ -e ${working_dir} ]]; do
>     working_dir=/tmp/hbase-generate-website-${RANDOM}.${RANDOM}
>   done
>   mkdir "${working_dir}"
> else
> {code}
> This should likely use {{$TMPDIR}} or {{mktemp -d}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20134) website generation uses hard-coded /tmp

2018-03-05 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386793#comment-16386793
 ] 

Sean Busbey commented on HBASE-20134:
-

{{mktemp -d}} sounds very tempting. Where does it not work? Hopefully not a 
platform we're worried about using for website generation?

> website generation uses hard-coded /tmp
> ---
>
> Key: HBASE-20134
> URL: https://issues.apache.org/jira/browse/HBASE-20134
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Mike Drob
>Assignee: Sean Busbey
>Priority: Major
>
> {code}
> if [ -z "${working_dir}" ]; then
>   echo "[DEBUG] defaulting to creating a directory in /tmp"
>   working_dir=/tmp
>   while [[ -e ${working_dir} ]]; do
>     working_dir=/tmp/hbase-generate-website-${RANDOM}.${RANDOM}
>   done
>   mkdir "${working_dir}"
> else
> {code}
> This should likely use {{$TMPDIR}} or {{mktemp -d}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386781#comment-16386781
 ] 

Hadoop QA commented on HBASE-16179:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
3s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
27s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
20s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m 
32s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  9m  
2s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 
54s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} scaladoc {color} | {color:red}  1m 
10s{color} | {color:red} hbase-spark generated 12 new + 0 unchanged - 0 fixed = 
12 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
47s{color} | {color:green} hbase-spark in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce

[jira] [Created] (HBASE-20135) NullPointerException during reading bloom filter when upgraded from hbase-1 to hbase-2

2018-03-05 Thread Umesh Agashe (JIRA)
Umesh Agashe created HBASE-20135:


 Summary: NullPointerException during reading bloom filter when 
upgraded from hbase-1 to hbase-2
 Key: HBASE-20135
 URL: https://issues.apache.org/jira/browse/HBASE-20135
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0-beta-2
Reporter: Umesh Agashe
 Fix For: 2.0.0


When upgrading from hbase-1 to hbase-2, I found the following exception logged 
multiple times in the log:
{code:java}
ERROR [StoreFileOpenerThread-test_cf-1] regionserver.StoreFileReader: Error 
reading bloom filter meta for GENERAL_BLOOM_META -- proceeding without
java.io.IOException: Comparator class 
org.apache.hadoop.hbase.KeyValue$RawBytesComparator is not instantiable
        at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:628)
        at 
org.apache.hadoop.hbase.io.hfile.CompoundBloomFilter.<init>(CompoundBloomFilter.java:79)
        at 
org.apache.hadoop.hbase.util.BloomFilterFactory.createFromMeta(BloomFilterFactory.java:104)
        at 
org.apache.hadoop.hbase.regionserver.StoreFileReader.loadBloomfilter(StoreFileReader.java:479)
        at 
org.apache.hadoop.hbase.regionserver.HStoreFile.open(HStoreFile.java:425)
        at 
org.apache.hadoop.hbase.regionserver.HStoreFile.initReader(HStoreFile.java:460)
        at 
org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:671)
        at 
org.apache.hadoop.hbase.regionserver.HStore.lambda$openStoreFiles$0(HStore.java:537)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException{code}
 
Analysis from [~anoop.hbase]:
Checking the related code, there seems to be no real issue: we are not even going
to fail reading the bloom. In the 2.0 code base we expect the comparator class
name to be null, but 1.x writes the old KV-based RawBytesComparator class name.
Reading that back, we map it to a null comparator class, and that is where the
NPE comes from, it looks like.

{code:java}
else if (comparatorClassName.equals("org.apache.hadoop.hbase.KeyValue$RawBytesComparator")
    || comparatorClassName.equals("org.apache.hadoop.hbase.util.Bytes$ByteArrayComparator")) {
  // When the comparator to be used is Bytes.BYTES_RAWCOMPARATOR,
  // we just return null from here
  // Bytes.BYTES_RAWCOMPARATOR is not a CellComparator
  comparatorKlass = null;
}
{code}

We can do a null check before trying to instantiate the comparator class so that
we avoid these scary error logs :-)
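
A hedged sketch of that null check (class and method names here are illustrative only, not the actual FixedFileTrailer code):

{code:java}
import java.io.IOException;

// Sketch only: guard the reflective instantiation so that the null comparator
// class which 1.x-written RawBytesComparator names map to no longer ends in a
// NullPointerException.
public final class ComparatorInstantiationSketch {
  public static <T> T instantiate(Class<? extends T> comparatorKlass, String comparatorClassName)
      throws IOException {
    if (comparatorKlass == null) {
      // RawBytesComparator / ByteArrayComparator case: nothing to instantiate.
      return null;
    }
    try {
      return comparatorKlass.getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      throw new IOException("Comparator class " + comparatorClassName + " is not instantiable", e);
    }
  }
}
{code}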



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20123) Backup test fails against hadoop 3

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386772#comment-16386772
 ] 

Ted Yu commented on HBASE-20123:


Thanks Steve for taking a look.

HADOOP-15290 has been logged.

> Backup test fails against hadoop 3
> --
>
> Key: HBASE-20123
> URL: https://issues.apache.org/jira/browse/HBASE-20123
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Major
>
> When running backup unit test against hadoop3, I saw:
> {code}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 88.862 s <<< FAILURE! - in 
> org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes
> [ERROR] 
> testBackupMultipleDeletes(org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes)
>   Time elapsed: 86.206 s  <<< ERROR!
> java.io.IOException: java.io.IOException: Failed copy from 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 to 
> hdfs://localhost:40578/backupUT
>   at 
> org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes.testBackupMultipleDeletes(TestBackupMultipleDeletes.java:82)
> Caused by: java.io.IOException: Failed copy from 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 to 
> hdfs://localhost:40578/backupUT
>   at 
> org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes.testBackupMultipleDeletes(TestBackupMultipleDeletes.java:82)
> {code}
> In the test output, I found:
> {code}
> 2018-03-03 14:46:10,858 ERROR [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(237): java.io.IOException: Path 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic 
> link
> java.io.IOException: Path 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic 
> link
>   at org.apache.hadoop.fs.FileStatus.getSymlink(FileStatus.java:338)
>   at org.apache.hadoop.fs.FileStatus.readFields(FileStatus.java:461)
>   at 
> org.apache.hadoop.tools.CopyListingFileStatus.readFields(CopyListingFileStatus.java:155)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2308)
>   at 
> org.apache.hadoop.tools.CopyListing.validateFinalListing(CopyListing.java:163)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:91)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.createInputFileListing(MapReduceBackupCopyJob.java:297)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:196)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob.copy(MapReduceBackupCopyJob.java:408)
>   at 
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:348)
>   at 
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:290)
>   at 
> org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:605)
> {code}
> It seems the failure was related to how we use distcp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20123) Backup test fails against hadoop 3

2018-03-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386757#comment-16386757
 ] 

Steve Loughran commented on HBASE-20123:


That looks like a branch-2 stack trace; HADOOP-13626 changed 
CopyListingFileStatus to no longer be a subclass of FileStatus, instead explicitly 
marshalling the permissions.

At the same time, that getSymlink() call in readFields() is a branch-3 
operation; it's in an assert at the end:
{code}
assert (isDirectory() && getSymlink() == null) || !isDirectory();
{code}

I believe that assertion is wrong. It assumes that getSymlink() returns null 
if there is no symlink, but it actually raises an exception.

And as it's an assert(), it's only going to show up in JVMs with assertions 
turned on.

I'd suggest that someone (you?) file a JIRA against Hadoop with a patch that 
changes the assertion to something like 

{code}
assert !(isDirectory() && isSymlink());
{code}

that is, you can't be both a dir and a symlink.
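
For reference, a minimal sketch of the getSymlink() behaviour described above (illustrative only, not the actual Hadoop FileStatus source):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

// Sketch only: getSymlink() throws when the status is not a symlink rather than
// returning null, which is why the assert in readFields() trips on plain files.
class FileStatusSketch {
  private final Path path;
  private final Path symlink; // null for anything that is not a symlink

  FileStatusSketch(Path path, Path symlink) {
    this.path = path;
    this.symlink = symlink;
  }

  boolean isSymlink() {
    return symlink != null;
  }

  Path getSymlink() throws IOException {
    if (!isSymlink()) {
      throw new IOException("Path " + path + " is not a symbolic link");
    }
    return symlink;
  }
}
{code}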




> Backup test fails against hadoop 3
> --
>
> Key: HBASE-20123
> URL: https://issues.apache.org/jira/browse/HBASE-20123
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Major
>
> When running backup unit test against hadoop3, I saw:
> {code}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 88.862 s <<< FAILURE! - in 
> org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes
> [ERROR] 
> testBackupMultipleDeletes(org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes)
>   Time elapsed: 86.206 s  <<< ERROR!
> java.io.IOException: java.io.IOException: Failed copy from 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 to 
> hdfs://localhost:40578/backupUT
>   at 
> org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes.testBackupMultipleDeletes(TestBackupMultipleDeletes.java:82)
> Caused by: java.io.IOException: Failed copy from 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 to 
> hdfs://localhost:40578/backupUT
>   at 
> org.apache.hadoop.hbase.backup.TestBackupMultipleDeletes.testBackupMultipleDeletes(TestBackupMultipleDeletes.java:82)
> {code}
> In the test output, I found:
> {code}
> 2018-03-03 14:46:10,858 ERROR [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(237): java.io.IOException: Path 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic 
> link
> java.io.IOException: Path 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic 
> link
>   at org.apache.hadoop.fs.FileStatus.getSymlink(FileStatus.java:338)
>   at org.apache.hadoop.fs.FileStatus.readFields(FileStatus.java:461)
>   at 
> org.apache.hadoop.tools.CopyListingFileStatus.readFields(CopyListingFileStatus.java:155)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2308)
>   at 
> org.apache.hadoop.tools.CopyListing.validateFinalListing(CopyListing.java:163)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:91)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.createInputFileListing(MapReduceBackupCopyJob.java:297)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:196)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob.copy(MapReduceBackupCopyJob.java:408)
>   at 
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:348)
>   at 
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:290)
>   at 
> org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:605)
> {code}
> It seems the failure was related to how we use distcp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18135) Track file archival for low latency space quota with snapshots

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386741#comment-16386741
 ] 

Hadoop QA commented on HBASE-18135:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
50s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} The patch hbase-protocol-shaded passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} hbase-client: The patch generated 0 new + 7 
unchanged - 1 fixed = 7 total (was 8) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} hbase-server: The patch generated 0 new + 332 
unchanged - 3 fixed = 332 total (was 335) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m  
5s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  7m 
59s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m  
2s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m  3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
55s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}103m 

[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386732#comment-16386732
 ] 

Ted Yu commented on HBASE-16179:


Patch v34 drops the unneeded import.

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v33.txt, 16179.v34.txt, 
> 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, 
> HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16179:
---
Attachment: 16179.v34.txt

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v33.txt, 16179.v34.txt, 
> 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, 
> HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20108) `hbase zkcli` falls into a non-interactive prompt after HBASE-15199

2018-03-05 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-20108:
---
Release Note: This issue fixes a runtime dependency issue where JLine is 
not made available on the classpath, which causes the ZooKeeper CLI to appear 
non-interactive. JLine was being made available unintentionally via the JRuby 
jar file on the HBase shell's classpath. Because the JRuby jar is not 
always present, the fix made here is to selectively include the JLine 
dependency on the zkcli command's classpath.

Added RN.

Users on previous releases can work around this by doing something like the 
following:
{code:java}
$ HBASE_CLASSPATH="${HBASE_CLASSPATH}:${ZOOKEEPER_HOME}/lib/jline-0.9.94.jar" 
hbase zkcli{code}

> `hbase zkcli` falls into a non-interactive prompt after HBASE-15199
> ---
>
> Key: HBASE-20108
> URL: https://issues.apache.org/jira/browse/HBASE-20108
> Project: HBase
>  Issue Type: Bug
>  Components: Usability
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20108.001.branch-2.patch, 
> HBASE-20108.002.branch-2.patch, HBASE-20108.003.branch-2.patch
>
>
> HBASE-15199 pulls the jruby-complete jar out of the normal classpath for 
> commands run in HBase. Jruby-complete bundles jline inside. ZK uses jline for 
> its nice shell-like usage.
> The problem is that this uncovered a bug where we're not explicitly bundling 
> a version of jline to make sure that {{hbase zkcli}} actually works. As long 
> as we're expecting {{zkcli}} to be there, we should provide jline on the 
> classpath to make sure the users get a real cli.
> Thanks to [~sergey.soldatov] for getting to the bottom of it quickly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20108) `hbase zkcli` falls into a non-interactive prompt after HBASE-15199

2018-03-05 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386706#comment-16386706
 ] 

Josh Elser commented on HBASE-20108:


Thanks, Mike!

FYI [~stack], this would be a good one if you re-roll another beta2 RC. 
Otherwise, I'll wait for your all-clear to commit to branch-2.

> `hbase zkcli` falls into a non-interactive prompt after HBASE-15199
> ---
>
> Key: HBASE-20108
> URL: https://issues.apache.org/jira/browse/HBASE-20108
> Project: HBase
>  Issue Type: Bug
>  Components: Usability
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20108.001.branch-2.patch, 
> HBASE-20108.002.branch-2.patch, HBASE-20108.003.branch-2.patch
>
>
> HBASE-15199 pulls the jruby-complete jar out of the normal classpath for 
> commands run in HBase. Jruby-complete bundles jline inside. ZK uses jline for 
> its nice shell-like usage.
> The problem is that this uncovered a bug where we're not explicitly bundling 
> a version of jline to make sure that {{hbase zkcli}} actually works. As long 
> as we're expecting {{zkcli}} to be there, we should provide jline on the 
> classpath to make sure the users get a real cli.
> Thanks to [~sergey.soldatov] for getting to the bottom of it quickly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386699#comment-16386699
 ] 

Hadoop QA commented on HBASE-16179:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
5s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
51s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
52s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 28s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} scaladoc {color} | {color:red}  0m 
35s{color} | {color:red} hbase-spark generated 14 new + 0 unchanged - 0 fixed = 
14 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
25s{color} | {color:green} hbase-spark in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-16179 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913093/16179.v33.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  shadedjars  hadoopcheck  
xml  compile  findbugs  hbaseanti  checkstyle  scalac  sca

[jira] [Commented] (HBASE-20108) `hbase zkcli` falls into a non-interactive prompt after HBASE-15199

2018-03-05 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386686#comment-16386686
 ] 

Mike Drob commented on HBASE-20108:
---

+1

> `hbase zkcli` falls into a non-interactive prompt after HBASE-15199
> ---
>
> Key: HBASE-20108
> URL: https://issues.apache.org/jira/browse/HBASE-20108
> Project: HBase
>  Issue Type: Bug
>  Components: Usability
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20108.001.branch-2.patch, 
> HBASE-20108.002.branch-2.patch, HBASE-20108.003.branch-2.patch
>
>
> HBASE-15199 pulls the jruby-complete jar out of the normal classpath for 
> commands run in HBase. Jruby-complete bundles jline inside. ZK uses jline for 
> its nice shell-like usage.
> The problem is that this uncovered a bug where we're not explicitly bundling 
> a version of jline to make sure that {{hbase zkcli}} actually works. As long 
> as we're expecting {{zkcli}} to be there, we should provide jline on the 
> classpath to make sure the users get a real cli.
> Thanks to [~sergey.soldatov] for getting to the bottom of it quickly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20121) Fix findbugs warning for RestoreTablesClient

2018-03-05 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386675#comment-16386675
 ] 

Mike Drob commented on HBASE-20121:
---

So the behavior of incremental and non-incremental restore ends up being the 
same?

> Fix findbugs warning for RestoreTablesClient
> 
>
> Key: HBASE-20121
> URL: https://issues.apache.org/jira/browse/HBASE-20121
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: backup
> Attachments: 20121.v1.txt
>
>
> In RestoreTablesClient#restore(), the following variable is not used:
> {code}
> Set backupIdSet = new HashSet<>();
> {code}
> There is backupIdSet#add() call later in the method but the variable doesn't 
> appear in any other part of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Imran Rashid (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386669#comment-16386669
 ] 

Imran Rashid commented on HBASE-16179:
--

You don't need to import code in the same package (Logging is in the same 
package as DefaultSource), so the warning is not serious, but it's better to 
remove the extra import.

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v33.txt, 16179.v4.txt, 
> 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-05 Thread Sakthi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386636#comment-16386636
 ] 

Sakthi commented on HBASE-18864:


Thanks for the comments.
bq. I don't follow the above? What is 'a part'?
By 'a part' I meant that whenever a user enters a REPLICATION_SCOPE value 
greater than 0, he/she wanted to enable replication. But the other version of 
the argument (the 'safe' one) looks more appropriate.
bq. I'd suggest we do the 'safe' option generally sir.
Roger that, stack.
bq. elsewhere in the code base when there is a fear we might write thousands of 
logs a second, we'll put up a barrier or throttle logging only the first 
instance or log at a rate of once every minute or so... You might do similar 
here sir.
Sounds good. I'll look into this. 


> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
>  Labels: beginner
> Attachments: hbase-18864.branch-1.2.001.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386629#comment-16386629
 ] 

Ted Yu commented on HBASE-16179:


Patch v33 adds the import for ReturnCode

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v33.txt, 16179.v4.txt, 
> 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16179:
---
Attachment: 16179.v33.txt

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v33.txt, 16179.v4.txt, 
> 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2018-03-05 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386616#comment-16386616
 ] 

Mike Drob commented on HBASE-16179:
---

That's just a warning, so I'm less concerned about it.

I think
{noformat}
/testptch/hbase/hbase-spark/src/main/java/org/apache/hadoop/hbase/spark/SparkSQLPushDownFilter.java:111:
 error: not found: type ReturnCode
  public ReturnCode filterCell(final Cell c) throws IOException {

{noformat}
is the error causing a -1 on precommit

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>  Labels: build
> Fix For: 3.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 16179.v25.txt, 
> 16179.v26.txt, 16179.v27.txt, 16179.v28.txt, 16179.v28.txt, 16179.v29.txt, 
> 16179.v30.txt, 16179.v31.txt, 16179.v32.txt, 16179.v4.txt, 16179.v5.txt, 
> 16179.v7.txt, 16179.v8.txt, 16179.v9.txt, HBASE-16179.v29.patch
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386603#comment-16386603
 ] 

stack commented on HBASE-18864:
---

bq. then probably the user wanted the replication to be a part?

I don't follow the above? What is 'a part'?

I'd suggest we do the 'safe' option generally sir.

bq. should the log messages be put to "LOG.debug" instead, or checking for 
redundant log message before putting in another "LOG.warn" would be a better 
option?

... elsewhere in the code base when there is a fear we might write thousands of 
logs a second, we'll put up a barrier or throttle logging only the first 
instance or log at a rate of once every minute or so... You might do similar 
here sir.
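
A minimal sketch of that kind of throttle (the class name, field names, and one-minute interval are illustrative only, not from any HBase patch):

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: emit the warning at most once per interval instead of once
// per mutation, so a hot write path cannot flood the region server log.
public final class ThrottledWarnLogger {
  private static final Logger LOG = LoggerFactory.getLogger(ThrottledWarnLogger.class);
  private static final long MIN_INTERVAL_MS = 60_000L;
  private final AtomicLong lastWarnMs = new AtomicLong();

  public void warn(String message) {
    long now = System.currentTimeMillis();
    long last = lastWarnMs.get();
    // Only the thread that wins the CAS logs; other callers stay quiet until
    // the interval has elapsed again, keeping the throttle lock-free.
    if (now - last >= MIN_INTERVAL_MS && lastWarnMs.compareAndSet(last, now)) {
      LOG.warn(message);
    }
  }
}
{code}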

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
>  Labels: beginner
> Attachments: hbase-18864.branch-1.2.001.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

