[jira] [Work started] (HBASE-20404) Ugly cleanerchore complaint that dir is not empty

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-20404 started by Sean Busbey.
---
> Ugly cleanerchore complaint that dir is not empty
> -
>
> Key: HBASE-20404
> URL: https://issues.apache.org/jira/browse/HBASE-20404
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
>
>  I see these big dirty exceptions in my master log during a long run. Let's 
> clean them up. (Are they exceptions that I, as an operator, can actually do something 
> about? Are they 'problems'? Should they be LOG.warn?)
> {code}
> 2018-04-12 16:02:09,911 WARN  [ForkJoinPool-1-worker-15] 
> cleaner.CleanerChore: Could not delete dir under 
> hdfs://ve0524.halxg.cloudera.com:8020/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta;
>  {}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException):
>  
> `/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta
>  is non empty': Directory is not empty
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:115)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2848)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1048)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1435)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1345)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy26.delete(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:568)
>   at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy27.delete(Unknown Source)
>   at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
> ...
> {code}
> Looks like log format is off too...
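A likely cause of the stray `{}` in the WARN line above: with SLF4J-style loggers, a call such as `LOG.warn("Could not delete dir under " + dir + "; {}", exception)` resolves to the `warn(String, Throwable)` overload, so the message prints verbatim (placeholder and all) and the exception is rendered as the stack trace that follows. A minimal Python model of that rule, with hypothetical names (this is neither the HBase nor the SLF4J code):

```python
def warn(message, *args):
    """Toy model of a logger's warn(): mimics the overload rule where a
    trailing exception is attached as the throwable and is NOT used to
    fill '{}' placeholders in the message."""
    exc = None
    if args and isinstance(args[-1], BaseException):
        exc = args[-1]          # exception goes to the "throwable" slot
        args = args[:-1]
    for a in args:              # substitute remaining args left to right
        message = message.replace("{}", str(a), 1)
    out = [message]
    if exc is not None:         # a real logger would print the stack trace here
        out.append("%s: %s" % (type(exc).__name__, exc))
    return "\n".join(out)

# Mirrors LOG.warn("Could not delete dir under " + dir + "; {}", exception):
print(warn("Could not delete dir under hdfs://example:8020/hbase/archive/demo; {}",
           OSError("Directory is not empty")))
```

The first printed line keeps the literal `{}`, reproducing the odd-looking log line quoted above.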



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20404) Ugly cleanerchore complaint that dir is not empty

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-20404:
---

Assignee: Sean Busbey

> Ugly cleanerchore complaint that dir is not empty
> -
>
> Key: HBASE-20404
> URL: https://issues.apache.org/jira/browse/HBASE-20404
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Major
>
>  I see these big dirty exceptions in my master log during a long run. Let's 
> clean them up. (Are they exceptions that I, as an operator, can actually do something 
> about? Are they 'problems'? Should they be LOG.warn?)
> {code}
> 2018-04-12 16:02:09,911 WARN  [ForkJoinPool-1-worker-15] 
> cleaner.CleanerChore: Could not delete dir under 
> hdfs://ve0524.halxg.cloudera.com:8020/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta;
>  {}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException):
>  
> `/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta
>  is non empty': Directory is not empty
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:115)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2848)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1048)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1435)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1345)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy26.delete(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:568)
>   at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy27.delete(Unknown Source)
>   at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
> ...
> {code}
> Looks like log format is off too...





[jira] [Commented] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436884#comment-16436884
 ] 

Hadoop QA commented on HBASE-20369:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
27s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  4m 
51s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 107 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
3s{color} | {color:red} The patch has 4 line(s) with tabs. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  4m 
19s{color} | {color:blue} patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918884/HBASE-20369.patch |
| Optional Tests |  asflicense  refguide  |
| uname | Linux 09cd8a568f37 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 
12:16:42 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / d59a6c8166 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12428/artifact/patchprocess/branch-site/book.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12428/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12428/artifact/patchprocess/whitespace-tabs.txt
 |
| refguide | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12428/artifact/patchprocess/patch-site/book.html
 |
| Max. process+thread count | 93 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12428/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna





[jira] [Commented] (HBASE-20112) Include test results from nightly hadoop3 tests in jenkins test results

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436866#comment-16436866
 ] 

stack commented on HBASE-20112:
---

This kinda work makes the world a better place. +1. Try it.


> Include test results from nightly hadoop3 tests in jenkins test results
> ---
>
> Key: HBASE-20112
> URL: https://issues.apache.org/jira/browse/HBASE-20112
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HBASE-20112.0.patch
>
>
> right now our nightly tests that run atop Hadoop 3 are reported as pass/fail 
> but aren't recorded via the Jenkins reporting mechanism.
> We should add them.





[jira] [Assigned] (HBASE-20406) HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-20406:
---

Assignee: Kevin Risden

> HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods
> --
>
> Key: HBASE-20406
> URL: https://issues.apache.org/jira/browse/HBASE-20406
> Project: HBase
>  Issue Type: Improvement
>  Components: security, Thrift
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: HBASE-20406.master.001.patch
>
>
> HBASE-10473 introduced a utility HttpServerUtil.constrainHttpMethods to 
> prevent Jetty from answering on TRACE and OPTIONS methods. This should be 
> added to Thrift in HTTP mode as well.
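Independent of the Jetty specifics behind `HttpServerUtil.constrainHttpMethods`, the fix amounts to rejecting TRACE and OPTIONS before a request reaches any handler. A small Python sketch of the idea (the dispatch function and status choices are illustrative, not the HBase implementation):

```python
from http import HTTPStatus

# Methods the Thrift HTTP endpoint should refuse to answer, per the issue above.
FORBIDDEN_METHODS = {"TRACE", "OPTIONS"}

def dispatch(method):
    """Return the status a constrained endpoint would send: forbidden
    methods are rejected before any application handler runs."""
    if method.upper() in FORBIDDEN_METHODS:
        return int(HTTPStatus.FORBIDDEN)   # 403
    return int(HTTPStatus.OK)              # 200

print(dispatch("TRACE"), dispatch("GET"))  # → 403 200
```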





[jira] [Updated] (HBASE-20406) HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20406:

Status: Patch Available  (was: Open)

> HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods
> --
>
> Key: HBASE-20406
> URL: https://issues.apache.org/jira/browse/HBASE-20406
> Project: HBase
>  Issue Type: Improvement
>  Components: security, Thrift
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: HBASE-20406.master.001.patch
>
>
> HBASE-10473 introduced a utility HttpServerUtil.constrainHttpMethods to 
> prevent Jetty from answering on TRACE and OPTIONS methods. This should be 
> added to Thrift in HTTP mode as well.





[jira] [Commented] (HBASE-20364) nightly job gives old results or no results for stages that timeout on SCM

2018-04-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436860#comment-16436860
 ] 

Hadoop QA commented on HBASE-20364:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
3s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20364 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918887/HBASE-20364.0.patch |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux f3f824c35c1f 4.4.0-98-generic #121-Ubuntu SMP Tue Oct 10 
14:24:03 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / d59a6c8166 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| shellcheck | v0.4.4 |
| Max. process+thread count | 47 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12429/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |





> nightly job gives old results or no results for stages that timeout on SCM
> --
>
> Key: HBASE-20364
> URL: https://issues.apache.org/jira/browse/HBASE-20364
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HBASE-20364.0.patch
>
>
> seen in the branch-2.0 nightly report for HBASE-18828:
>  
> {quote}
> Results for branch branch-2.0
>  [build #143 on 
> builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143/]:
>  (x) *\{color:red}-1 overall\{color}*
> 
> details (if available):
> (/) \{color:green}+1 general checks\{color}
> -- For more information [see general 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/140//General_Nightly_Build_Report/]
>  
> (/) \{color:green}+1 jdk8 hadoop2 checks\{color}
> -- For more information [see jdk8 (hadoop2) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop2)/]
> (/) \{color:green}+1 jdk8 hadoop3 checks\{color}
> -- For more information [see jdk8 (hadoop3) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop3)/]
>  
> {quote}
>  
> -1 for the overall build was correct. build #143 failed both the general 
> check and the source tarball check.
>  
> but in the posted comment, we get a false "passing" that links to the general 
> result from build #140. and we get no result for the source tarball at all.





[jira] [Commented] (HBASE-20388) nightly tests running on a feature branch should only comment on that feature branch's jira

2018-04-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436857#comment-16436857
 ] 

Hadoop QA commented on HBASE-20388:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
3s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  4m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918881/HBASE-20388.1.patch |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 26a7ef9114b4 4.4.0-98-generic #121-Ubuntu SMP Tue Oct 10 
14:24:03 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / d59a6c8166 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| shellcheck | v0.4.4 |
| Max. process+thread count | 48 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12427/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |





> nightly tests running on a feature branch should only comment on that feature 
> branch's jira
> ---
>
> Key: HBASE-20388
> URL: https://issues.apache.org/jira/browse/HBASE-20388
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20388.0.patch, HBASE-20388.1.patch
>
>
> It would help improve our signal-to-noise ratio from nightly tests if feature 
> branch runs stopped commenting on all the jiras that got covered by a rebase 
> / merge.
> should be straightforward to have the commenting bit check the current 
> branch against our feature branch naming convention.
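The naming-convention check the description suggests could look roughly like this (the `HBASE-NNNN` branch-naming convention is real; the function name and exact policy are illustrative, not the committed patch):

```python
import re

# Feature branches are named after the JIRA they implement, e.g. "HBASE-20364".
FEATURE_BRANCH = re.compile(r"^HBASE-\d+$")

def jiras_to_comment_on(branch, changed_jiras):
    """On a feature branch, comment only on that branch's own JIRA;
    on other branches, keep commenting on every JIRA seen in the changes."""
    if FEATURE_BRANCH.match(branch):
        return [branch]
    return list(changed_jiras)
```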





[jira] [Updated] (HBASE-20364) nightly job gives old results or no results for stages that timeout on SCM

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20364:

Status: Patch Available  (was: In Progress)

-v0
  - update the final build check so that "null" and "SUCCESS" are both treated as 
success
  - before doing the scm checkout, write a commentfile for the stage that will say 
the stage failed if we don't overwrite it later.
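Both parts of the fix can be sketched outside Jenkins: Jenkins leaves a build's result unset (null) until something fails, and pre-writing a pessimistic comment file guarantees a stage that dies mid-way can't silently report stale or missing results. File layout and helper names here are illustrative, not the actual pipeline code:

```python
import os

def stage_succeeded(result):
    """Jenkins leaves the result unset (null/None) until something fails,
    so both None and "SUCCESS" have to count as success."""
    return result is None or result == "SUCCESS"

def prepare_stage_comment(workdir, stage):
    """Write a failure comment before the SCM checkout; a run that completes
    overwrites it, so a stage that dies early still reports -1."""
    path = os.path.join(workdir, "comment-%s.txt" % stage)
    with open(path, "w") as f:
        f.write("(x) -1 %s checks\n" % stage)
        f.write("-- Something went wrong running this stage, "
                "please check relevant console output.\n")
    return path
```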

[see nightly build with this change in 
place|https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/HBASE-20364/2/]

Comment built (with WIP patch that fails each stage):

{code}
00:20:12.400 [INFO] Comment:
[Pipeline] echo
00:20:12.402 Results for branch HBASE-20364
00:20:12.402[build #2 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20364/2/]: 
(x) *{color:red}-1 overall{color}*
00:20:12.402 
00:20:12.402 details (if available):
00:20:12.402 
00:20:12.402 (x) {color:red}-1 general checks{color}
00:20:12.402 -- Something went wrong running this stage, please [check relevant 
console 
output|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20364/2//console].
00:20:12.402 
00:20:12.402 
00:20:12.402 
00:20:12.402 
00:20:12.402 (x) {color:red}-1 jdk8 hadoop2 checks{color}
00:20:12.402 -- Something went wrong running this stage, please [check relevant 
console 
output|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20364/2//console].
00:20:12.402 
00:20:12.402 
00:20:12.402 (x) {color:red}-1 jdk8 hadoop3 checks{color}
00:20:12.402 -- Something went wrong running this stage, please [check relevant 
console 
output|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20364/2//console].
00:20:12.402 
00:20:12.402 
00:20:12.402 (x) {color:red}-1 source release artifact{color}
00:20:12.402 -- Something went wrong with this stage, [check relevant console 
output|${BUILD_URL}/console].
00:20:12.402 
{code}

No comment was posted here because it was the first build, so Jenkins didn't think 
anything had changed.
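Note the literal `${BUILD_URL}` in the source-release line of the comment above, while the other stages show a fully expanded URL: that pattern is what you get when a message is built from a string that is never interpolated (in Jenkins Pipeline Groovy, single-quoted strings do not substitute `${...}`). A Python analogue using `string.Template` (the URL value is taken from the build linked above):

```python
from string import Template

build_url = "https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20364/2/"

# Built from a non-interpolated string (single quotes, in Groovy): the
# placeholder is emitted verbatim, which is the glitch visible above.
raw = "check relevant console output|${BUILD_URL}console]."
print(raw)

# The interpolated form (double quotes in Groovy; Template here) expands it.
expanded = Template(raw).substitute(BUILD_URL=build_url)
print(expanded)
```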

> nightly job gives old results or no results for stages that timeout on SCM
> --
>
> Key: HBASE-20364
> URL: https://issues.apache.org/jira/browse/HBASE-20364
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HBASE-20364.0.patch
>
>
> seen in the branch-2.0 nightly report for HBASE-18828:
>  
> {quote}
> Results for branch branch-2.0
>  [build #143 on 
> builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143/]:
>  (x) *\{color:red}-1 overall\{color}*
> 
> details (if available):
> (/) \{color:green}+1 general checks\{color}
> -- For more information [see general 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/140//General_Nightly_Build_Report/]
>  
> (/) \{color:green}+1 jdk8 hadoop2 checks\{color}
> -- For more information [see jdk8 (hadoop2) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop2)/]
> (/) \{color:green}+1 jdk8 hadoop3 checks\{color}
> -- For more information [see jdk8 (hadoop3) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop3)/]
>  
> {quote}
>  
> -1 for the overall build was correct. build #143 failed both the general 
> check and the source tarball check.
>  
> but in the posted comment, we get a false "passing" that links to the general 
> result from build #140. and we get no result for the source tarball at all.





[jira] [Updated] (HBASE-20364) nightly job gives old results or no results for stages that timeout on SCM

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20364:

Attachment: HBASE-20364.0.patch

> nightly job gives old results or no results for stages that timeout on SCM
> --
>
> Key: HBASE-20364
> URL: https://issues.apache.org/jira/browse/HBASE-20364
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HBASE-20364.0.patch
>
>
> seen in the branch-2.0 nightly report for HBASE-18828:
>  
> {quote}
> Results for branch branch-2.0
>  [build #143 on 
> builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143/]:
>  (x) *\{color:red}-1 overall\{color}*
> 
> details (if available):
> (/) \{color:green}+1 general checks\{color}
> -- For more information [see general 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/140//General_Nightly_Build_Report/]
>  
> (/) \{color:green}+1 jdk8 hadoop2 checks\{color}
> -- For more information [see jdk8 (hadoop2) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop2)/]
> (/) \{color:green}+1 jdk8 hadoop3 checks\{color}
> -- For more information [see jdk8 (hadoop3) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop3)/]
>  
> {quote}
>  
> -1 for the overall build was correct. build #143 failed both the general 
> check and the source tarball check.
>  
> but in the posted comment, we get a false "passing" that links to the general 
> result from build #140. and we get no result for the source tarball at all.





[jira] [Updated] (HBASE-20368) Fix RIT stuck when a rsgroup has no online servers but AM's pendingAssginQueue is cleared

2018-04-12 Thread Xiaolin Ha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-20368:
---
Attachment: HBASE-20368.branch-2.003.patch

> Fix RIT stuck when a rsgroup has no online servers but AM's 
> pendingAssginQueue is cleared
> -
>
> Key: HBASE-20368
> URL: https://issues.apache.org/jira/browse/HBASE-20368
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-20368.branch-2.001.patch, 
> HBASE-20368.branch-2.002.patch, HBASE-20368.branch-2.003.patch
>
>
> This error can be reproduced by shutting down all servers in an rsgroup and 
> starting them again soon afterwards. 
> The regions on this rsgroup will be reassigned, but there are no available 
> servers in this rsgroup.
> They will be added to AM's pendingAssginQueue, which AM will clear regardless 
> of the result of the assignment in this case.
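The failure mode described above can be modeled simply: if the pending queue is cleared even when no server in a region's rsgroup is online, those regions stay in transition with nobody left to retry them. A simplified sketch of retain-on-failure behavior (names are illustrative; this is not the AssignmentManager code):

```python
from collections import deque

def drain_pending(pending, online_servers):
    """Process the pending-assign queue, but keep regions whose rsgroup has
    no online server so they are retried instead of being silently dropped."""
    still_pending = deque()
    while pending:
        region, rsgroup_servers = pending.popleft()
        if rsgroup_servers & online_servers:
            pass  # an assign(region) call would run here
        else:
            still_pending.append((region, rsgroup_servers))
    return still_pending
```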





[jira] [Updated] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-12 Thread Thiriguna Bharat Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiriguna Bharat Rao updated HBASE-20369:
-
Labels: patch  (was: )
  Tags: Documentation
Status: Patch Available  (was: Open)

[~mdrob] and [~busbey], please review the patch that I've created for this JIRA. 
It's in AsciiDoc format. Right now, it highlights the coprocessor changes for 
HBase 2.0. 

Appreciate your support and time.

Best,

Triguna 

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>  Labels: patch
> Attachments: HBASE-20369.patch
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna





[jira] [Updated] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-12 Thread Thiriguna Bharat Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiriguna Bharat Rao updated HBASE-20369:
-
Attachment: HBASE-20369.patch

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
> Attachments: HBASE-20369.patch
>
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review and provide feedback or share your comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna





[jira] [Updated] (HBASE-20369) Document incompatibilities between HBase 1.1.2 and HBase 2.0

2018-04-12 Thread Thiriguna Bharat Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiriguna Bharat Rao updated HBASE-20369:
-
Attachment: (was: Document incompatibilities between HBase 1.docx)

> Document incompatibilities between HBase 1.1.2 and HBase 2.0
> 
>
> Key: HBASE-20369
> URL: https://issues.apache.org/jira/browse/HBASE-20369
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Thiriguna Bharat Rao
>Assignee: Thiriguna Bharat Rao
>Priority: Critical
>
> Hi, 
> I compiled a draft document for the HBase incompatibilities from the raw 
> source content that was available on the HBase Beta 1 site. Can someone please 
> review it and provide feedback or share comments on this document?
> Appreciate your support and time.
>  
> Best Regards, 
> Triguna





[jira] [Commented] (HBASE-20388) nightly tests running on a feature branch should only comment on that feature branch's jira

2018-04-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436845#comment-16436845
 ] 

Sean Busbey commented on HBASE-20388:
-

-v1

  - corrected the use of "seenJiras" in the new helper function to the "seen" variable.

> nightly tests running on a feature branch should only comment on that feature 
> branch's jira
> ---
>
> Key: HBASE-20388
> URL: https://issues.apache.org/jira/browse/HBASE-20388
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20388.0.patch, HBASE-20388.1.patch
>
>
> It would help improve our signal-to-noise ratio from nightly tests if feature 
> branch runs stopped commenting on all the jiras that got covered by a rebase 
> / merge.
> It should be straightforward to have the commenting bit check the current 
> branch against our feature branch naming convention.





[jira] [Commented] (HBASE-20112) Include test results from nightly hadoop3 tests in jenkins test results

2018-04-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436844#comment-16436844
 ] 

Hadoop QA commented on HBASE-20112:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
3s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918877/HBASE-20112.0.patch |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 4d850ae04b83 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / d59a6c8166 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| shellcheck | v0.4.4 |
| Max. process+thread count | 42 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12425/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Include test results from nightly hadoop3 tests in jenkins test results
> ---
>
> Key: HBASE-20112
> URL: https://issues.apache.org/jira/browse/HBASE-20112
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HBASE-20112.0.patch
>
>
> right now our nightly tests that run atop hadoop 3 are reported on pass/fail 
> but aren't recorded via the jenkins reporting mechanism.
> we should add them.





[jira] [Updated] (HBASE-20388) nightly tests running on a feature branch should only comment on that feature branch's jira

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20388:

Attachment: HBASE-20388.1.patch

> nightly tests running on a feature branch should only comment on that feature 
> branch's jira
> ---
>
> Key: HBASE-20388
> URL: https://issues.apache.org/jira/browse/HBASE-20388
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20388.0.patch, HBASE-20388.1.patch
>
>
> It would help improve our signal-to-noise ratio from nightly tests if feature 
> branch runs stopped commenting on all the jiras that got covered by a rebase 
> / merge.
> It should be straightforward to have the commenting bit check the current 
> branch against our feature branch naming convention.
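A minimal sketch of such a check, assuming feature branches follow the HBASE-NNNNN naming convention (the actual commenting step runs in the nightly Jenkins job; the `branch` variable here is a hypothetical stand-in for the build's branch name):

```shell
#!/usr/bin/env bash
branch="HBASE-20388"   # e.g. the branch name Jenkins hands the build

case "$branch" in
  HBASE-[0-9]*)
    # Feature branch: comment only on the matching JIRA.
    target="$branch"
    ;;
  *)
    # Release branch (master, branch-2, ...): keep commenting on every JIRA seen.
    target="all jiras in the changeset"
    ;;
esac

echo "will comment on: $target"
```

The same pattern match works whether the branch name comes from an environment variable or from `git rev-parse --abbrev-ref HEAD`.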





[jira] [Commented] (HBASE-20388) nightly tests running on a feature branch should only comment on that feature branch's jira

2018-04-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436833#comment-16436833
 ] 

Sean Busbey commented on HBASE-20388:
-

it is! great catch.

> nightly tests running on a feature branch should only comment on that feature 
> branch's jira
> ---
>
> Key: HBASE-20388
> URL: https://issues.apache.org/jira/browse/HBASE-20388
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20388.0.patch
>
>
> It would help improve our signal-to-noise ratio from nightly tests if feature 
> branch runs stopped commenting on all the jiras that got covered by a rebase 
> / merge.
> should be straight forward to have the commenting bit check the current 
> branch against our feature branch naming convention.





[jira] [Updated] (HBASE-20270) Turn off command help that follows all errors in shell

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20270:

Status: In Progress  (was: Patch Available)

moving out of patch available status pending update per review

> Turn off command help that follows all errors in shell
> --
>
> Key: HBASE-20270
> URL: https://issues.apache.org/jira/browse/HBASE-20270
> Project: HBase
>  Issue Type: Task
>  Components: shell
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sakthi
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: hbase-20270.master.001.patch
>
>
> Right now if a shell command gives an error, any error, it then echos the 
> command help. It makes it harder to see the actual error text and is annoying.
> example:
> {code}
>   
>   
>
> hbase(main):007:0> create 'test:a_table', 'family', { NUMREGIONS => 20, 
> SPLITALGO => 'HexStringSplit'}
> ERROR: Unknown namespace test!
> Creates a table. Pass a table name, and a set of column family
> specifications (at least one), and, optionally, table configuration.
> Column specification can be a simple string (name), or a dictionary
> (dictionaries are described below in main help output), necessarily
> including NAME attribute.
> Examples:
> Create a table with namespace=ns1 and table qualifier=t1
>   hbase> create 'ns1:t1', {NAME => 'f1', VERSIONS => 5}
> Create a table with namespace=default and table qualifier=t1
>   hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
>   hbase> # The above in shorthand would be the following:
>   hbase> create 't1', 'f1', 'f2', 'f3'
>   hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, 
> BLOCKCACHE => true}
>   hbase> create 't1', {NAME => 'f1', CONFIGURATION => 
> {'hbase.hstore.blockingStoreFiles' => '10'}}
>   hbase> create 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 
> 100, MOB_COMPACT_PARTITION_POLICY => 'weekly'}
> Table configuration options can be put at the end.
> Examples:
>   hbase> create 'ns1:t1', 'f1', SPLITS => ['10', '20', '30', '40']
>   hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
>   hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
>   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 
> 'myvalue' }
>   hbase> # Optionally pre-split the table into NUMREGIONS, using
>   hbase> # SPLITALGO ("HexStringSplit", "UniformSplit" or classname)
>   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
>   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', 
> REGION_REPLICATION => 2, CONFIGURATION => 
> {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}
>   hbase> create 't1', {NAME => 'f1', DFS_REPLICATION => 1}
> You can also keep around a reference to the created table:
>   hbase> t1 = create 't1', 'f1'
> Which gives you a reference to the table named 't1', on which you can then
> call methods.
> Took 0.0221 seconds   
>   
> 
> hbase(main):008:0> create_namespace 'test'
> Took 0.2554 seconds   
>   
> 
> hbase(main):009:0> create 'test:a_table', 'family', { NUMREGIONS => 20, 
> SPLITALGO => 'HexStringSplit'}
> Created table test:a_table
> Took 1.2264 seconds 
> {code}
> I was trying to make a table in the test namespace before making the 
> namespace. Much faster to recognize and move on when the error text isn't 
> followed by 80x the text.





[jira] [Commented] (HBASE-20394) HBase over rides the value of HBASE_OPTS (if any) set by client

2018-04-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436828#comment-16436828
 ] 

Hudson commented on HBASE-20394:


Results for branch branch-2
[build #606 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> HBase over rides the value of HBASE_OPTS (if any) set by client
> ---
>
> Key: HBASE-20394
> URL: https://issues.apache.org/jira/browse/HBASE-20394
> Project: HBase
>  Issue Type: Bug
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20394.master.001.patch
>
>
> Currently HBase will override the value of HBASE_OPTS (if any) set by the user:
> {code:java}
> export HBASE_OPTS="-XX:+UseConcMarkSweepGC" {code}
> [See 
> hbase-env.sh|https://github.com/apache/hbase/blob/68726b0ee3ef3eb52d32481444e64236c5a9e733/conf/hbase-env.sh#L43]
>  
> But a user may have the following set in their environment:
> {code:java}
> HBASE_OPTS="-Xmn512m"{code}
> While starting the processes, HBase will internally override the existing 
> HBASE_OPTS value with the one set in hbase-env.sh.
>  
> Instead of overriding, we can have the following in hbase-env.sh:
> {code:java}
> export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"{code}
>  
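The difference between overwriting and appending can be checked with a quick shell sketch (variable contents here are the examples from the description; the real assignment lives in hbase-env.sh):

```shell
#!/usr/bin/env bash
# Simulate a user exporting their own JVM options before starting HBase.
HBASE_OPTS="-Xmn512m"

# Current behavior: hbase-env.sh overwrites, so the user's -Xmn512m is lost.
overwritten="-XX:+UseConcMarkSweepGC"

# Proposed behavior: append to whatever the user already set.
HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"

echo "overwrite: $overwritten"
echo "append:    $HBASE_OPTS"
```

With the append form, both the user's option and the script's default survive in the final JVM command line.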





[jira] [Commented] (HBASE-20356) protoc 3.5.1 can't compile on rhel6

2018-04-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436829#comment-16436829
 ] 

Hudson commented on HBASE-20356:


Results for branch branch-2
[build #606 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> protoc 3.5.1 can't compile on rhel6
> ---
>
> Key: HBASE-20356
> URL: https://issues.apache.org/jira/browse/HBASE-20356
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, thirdparty
>Affects Versions: 2.0.0-beta-2
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20356.patch, HBASE-20356.v2.patch, 
> HBASE-20356.v3.patch
>
>
> We upgraded our internal protoc version, and now can't build on RHEL6.
> I get this build error:
> {noformat}
> 2018-04-05 08:15:21.929278 [ERROR] PROTOC FAILED: ... /lib64/libc.so.6: 
> version `GLIBC_2.14' not found
> {noformat}
> See https://github.com/google/protobuf/issues/4109
> And this has come up before in https://github.com/google/protobuf/issues/3718
> Looks like we need to be on 3.4.0, unless there's a compelling reason to be 
> on something newer? Maybe roll back all the way to 3.3.0 which is what we 
> were on before... was there a specific bug we needed to get addressed?
> cc: [~elserj] [~stack]





[jira] [Commented] (HBASE-20338) WALProcedureStore#recoverLease() should have fixed sleeps for retrying rollWriter()

2018-04-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436830#comment-16436830
 ] 

Hudson commented on HBASE-20338:


Results for branch branch-2
[build #606 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/606//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> WALProcedureStore#recoverLease() should have fixed sleeps for retrying 
> rollWriter()
> ---
>
> Key: HBASE-20338
> URL: https://issues.apache.org/jira/browse/HBASE-20338
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20338.master.001.patch, 
> HBASE-20338.master.002.patch, HBASE-20338.master.003.patch, 
> HBASE-20338.master.004.patch, HBASE-20338.master.005.patch
>
>
> In our internal testing we observed that logs are getting flooded due to 
> a continuous loop in WALProcedureStore#recoverLease():
> {code}
>   while (isRunning()) {
> // Get Log-MaxID and recover lease on old logs
> try {
>   flushLogId = initOldLogs(oldLogs);
> } catch (FileNotFoundException e) {
>   LOG.warn("Someone else is active and deleted logs. retrying.", e);
>   oldLogs = getLogFiles();
>   continue;
> }
> // Create new state-log
> if (!rollWriter(flushLogId + 1)) {
>   // someone else has already created this log
>   LOG.debug("Someone else has already created log " + flushLogId);
>   continue;
> }
> {code}
> rollWriter() fails to create a new file. Error messages in HDFS namenode logs 
> around the same time:
> {code}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
> 172.31.121.196:38508 Call#3141 Retry#0
> java.io.IOException: Exeption while contacting value generator
> at 
> org.apache.hadoop.crypto.key.kms.ValueQueue.getAtMost(ValueQueue.java:389)
> at 
> org.apache.hadoop.crypto.key.kms.ValueQueue.getNext(ValueQueue.java:291)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.generateEncryptedKey(KMSClientProvider.java:724)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:511)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$2.run(FSNamesystem.java:2680)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$2.run(FSNamesystem.java:2676)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:477)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:458)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.generateEncryptedDataEncryptionKey(FSNamesystem.java:2675)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2815)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2712)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:604)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at 
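The fix the summary asks for is a bounded retry with fixed sleeps between attempts, rather than the tight loop shown above. A sketch of that shape in shell, where `do_roll_writer` is a hypothetical stand-in for `rollWriter()` (the real change is in the Java patch):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for rollWriter(): fails twice, then succeeds.
tries=0
do_roll_writer() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

max_attempts=5
sleep_secs=0   # a real deployment would sleep a few seconds here
attempt=1
until do_roll_writer; do
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "giving up after $attempt attempts" >&2
    break
  fi
  sleep "$sleep_secs"   # fixed sleep between retries keeps the logs from flooding
  attempt=$((attempt + 1))
done
echo "attempts used: $tries"
```

The fixed sleep bounds the rate of WARN/DEBUG lines to one per interval instead of one per loop iteration.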

[jira] [Updated] (HBASE-20112) Include test results from nightly hadoop3 tests in jenkins test results

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20112:

Attachment: HBASE-20112.0.patch

> Include test results from nightly hadoop3 tests in jenkins test results
> ---
>
> Key: HBASE-20112
> URL: https://issues.apache.org/jira/browse/HBASE-20112
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HBASE-20112.0.patch
>
>
> right now our nightly tests that run atop hadoop 3 are reported on pass/fail 
> but aren't recorded via the jenkins reporting mechanism.
> we should add them.





[jira] [Commented] (HBASE-20338) WALProcedureStore#recoverLease() should have fixed sleeps for retrying rollWriter()

2018-04-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436825#comment-16436825
 ] 

Hudson commented on HBASE-20338:


Results for branch branch-2.0
[build #168 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/168/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/168//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/168//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/168//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> WALProcedureStore#recoverLease() should have fixed sleeps for retrying 
> rollWriter()
> ---
>
> Key: HBASE-20338
> URL: https://issues.apache.org/jira/browse/HBASE-20338
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20338.master.001.patch, 
> HBASE-20338.master.002.patch, HBASE-20338.master.003.patch, 
> HBASE-20338.master.004.patch, HBASE-20338.master.005.patch
>
>
> In our internal testing we observed that logs are getting flooded due to 
> a continuous loop in WALProcedureStore#recoverLease():
> {code}
>   while (isRunning()) {
> // Get Log-MaxID and recover lease on old logs
> try {
>   flushLogId = initOldLogs(oldLogs);
> } catch (FileNotFoundException e) {
>   LOG.warn("Someone else is active and deleted logs. retrying.", e);
>   oldLogs = getLogFiles();
>   continue;
> }
> // Create new state-log
> if (!rollWriter(flushLogId + 1)) {
>   // someone else has already created this log
>   LOG.debug("Someone else has already created log " + flushLogId);
>   continue;
> }
> {code}
> rollWriter() fails to create a new file. Error messages in HDFS namenode logs 
> around the same time:
> {code}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
> 172.31.121.196:38508 Call#3141 Retry#0
> java.io.IOException: Exeption while contacting value generator
> at 
> org.apache.hadoop.crypto.key.kms.ValueQueue.getAtMost(ValueQueue.java:389)
> at 
> org.apache.hadoop.crypto.key.kms.ValueQueue.getNext(ValueQueue.java:291)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.generateEncryptedKey(KMSClientProvider.java:724)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:511)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$2.run(FSNamesystem.java:2680)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$2.run(FSNamesystem.java:2676)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:477)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:458)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.generateEncryptedDataEncryptionKey(FSNamesystem.java:2675)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2815)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2712)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:604)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at 

[jira] [Updated] (HBASE-20112) Include test results from nightly hadoop3 tests in jenkins test results

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20112:

Status: Patch Available  (was: In Progress)

-v0
  - uncomment hadoop3 junit results

> Include test results from nightly hadoop3 tests in jenkins test results
> ---
>
> Key: HBASE-20112
> URL: https://issues.apache.org/jira/browse/HBASE-20112
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HBASE-20112.0.patch
>
>
> right now our nightly tests that run atop hadoop 3 are reported on pass/fail 
> but aren't recorded via the jenkins reporting mechanism.
> we should add them.





[jira] [Work started] (HBASE-20112) Include test results from nightly hadoop3 tests in jenkins test results

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-20112 started by Sean Busbey.
---
> Include test results from nightly hadoop3 tests in jenkins test results
> ---
>
> Key: HBASE-20112
> URL: https://issues.apache.org/jira/browse/HBASE-20112
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> right now our nightly tests that run atop hadoop 3 are reported on pass/fail 
> but aren't recorded via the jenkins reporting mechanism.
> we should add them.





[jira] [Assigned] (HBASE-20112) Include test results from nightly hadoop3 tests in jenkins test results

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-20112:
---

Assignee: Sean Busbey

> Include test results from nightly hadoop3 tests in jenkins test results
> ---
>
> Key: HBASE-20112
> URL: https://issues.apache.org/jira/browse/HBASE-20112
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> right now our nightly tests that run atop hadoop 3 are reported on pass/fail 
> but aren't recorded via the jenkins reporting mechanism.
> we should add them.





[jira] [Updated] (HBASE-20395) Displaying thrift server type on the thrift page

2018-04-12 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-20395:
--
Status: Patch Available  (was: Open)

> Displaying thrift server type on the thrift page
> 
>
> Key: HBASE-20395
> URL: https://issues.apache.org/jira/browse/HBASE-20395
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20395.master.001.patch, 
> HBASE-20395.master.002.patch, HBASE-20395.master.003.patch, result.png
>
>
> HBase supports two types of thrift server: thrift and thrift2.
> But after starting the thrift server successfully, we cannot determine the 
> thrift server type conveniently. 
> So, displaying the thrift server type on the thrift page may provide some 
> convenience for users.





[jira] [Commented] (HBASE-20395) Displaying thrift server type on the thrift page

2018-04-12 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436806#comment-16436806
 ] 

Guangxu Cheng commented on HBASE-20395:
---

Attached the 003 patch per [~busbey] and [~yuzhih...@gmail.com] suggestions. Thanks :)

> Displaying thrift server type on the thrift page
> 
>
> Key: HBASE-20395
> URL: https://issues.apache.org/jira/browse/HBASE-20395
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20395.master.001.patch, 
> HBASE-20395.master.002.patch, HBASE-20395.master.003.patch, result.png
>
>
> HBase supports two types of thrift server: thrift and thrift2.
> But after starting the thrift server successfully, we cannot determine the 
> thrift server type conveniently. 
> So, displaying the thrift server type on the thrift page may provide some 
> convenience for users.





[jira] [Updated] (HBASE-20395) Displaying thrift server type on the thrift page

2018-04-12 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-20395:
--
Attachment: HBASE-20395.master.003.patch

> Displaying thrift server type on the thrift page
> 
>
> Key: HBASE-20395
> URL: https://issues.apache.org/jira/browse/HBASE-20395
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20395.master.001.patch, 
> HBASE-20395.master.002.patch, HBASE-20395.master.003.patch, result.png
>
>
> HBase supports two types of thrift server: thrift and thrift2.
> But after starting the thrift server successfully, we cannot determine the 
> thrift server type conveniently. 
> So, displaying the thrift server type on the thrift page may provide some 
> convenience for users.





[jira] [Commented] (HBASE-20344) Fix asciidoc warnings

2018-04-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436803#comment-16436803
 ] 

Sean Busbey commented on HBASE-20344:
-

+1

> Fix asciidoc warnings
> -
>
> Key: HBASE-20344
> URL: https://issues.apache.org/jira/browse/HBASE-20344
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Attachments: HBASE-20344.master.001.patch, 
> HBASE-20344.master.001.patch, HBASE-20344.master.002.patch
>
>
> IntelliJ shows some warnings for asciidoc files.
> 1. Markdown Style Heading:
> \### Required properties
>  
> 2. Asciidoc Old Style Heading:
> Creating a New Table with Compression On a ColumnFamily
>  
>  \
> hbase> create 'test2', \{ NAME => 'cf2', COMPRESSION => 'SNAPPY' }
>  \
>  
> 3. Warning during build
> asciidoctor: WARNING: _chapters/troubleshooting.adoc: line 105: invalid style 
> for listing block: NOTE





[jira] [Commented] (HBASE-20388) nightly tests running on a feature branch should only comment on that feature branch's jira

2018-04-12 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436791#comment-16436791
 ] 

Mike Drob commented on HBASE-20388:
---

Is seenjiras in getjirastocomment supposed to be seen?
Phone posting, apologies for poor formatting

> nightly tests running on a feature branch should only comment on that feature 
> branch's jira
> ---
>
> Key: HBASE-20388
> URL: https://issues.apache.org/jira/browse/HBASE-20388
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20388.0.patch
>
>
> It would help improve our signal-to-noise ratio from nightly tests if feature 
> branch runs stopped commenting on all the jiras that got covered by a rebase 
> / merge.
> should be straightforward to have the commenting bit check the current 
> branch against our feature branch naming convention.





[jira] [Commented] (HBASE-20389) Move website building flags into a profile

2018-04-12 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436773#comment-16436773
 ] 

Mike Drob commented on HBASE-20389:
---

+1

> Move website building flags into a profile
> --
>
> Key: HBASE-20389
> URL: https://issues.apache.org/jira/browse/HBASE-20389
> Project: HBase
>  Issue Type: Improvement
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-20389.0.patch
>
>
> We have some "magic" in our website building right now. The script that's 
> used by our automated website build + publish mechanism manually sets a 
> bunch of stuff on the maven command line.
> It'd be better to reflect those settings in a maven profile, so that folks 
> are less likely to be surprised, e.g. when trying to debug a failure in the 
> {{site}} goal.





[jira] [Updated] (HBASE-20389) Move website building flags into a profile

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20389:

Status: Patch Available  (was: In Progress)

-v0
  - move flags used in each of the steps of making the website into profiles
  - change nightly website build to use said profiles
  - add "protoc.skip" to site profile, because we don't need to regenerate 
protoc results to build site
  - add "remoteresources.skip" to site profile, because we don't need to 
download stuff for building jars when we did that already in install.
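The v0 steps above can be sketched as a Maven profile fragment. This is illustrative only (the profile id "site" and exact property placement are assumptions; the real change is in the attached HBASE-20389.0.patch):

```xml
<!-- Hedged sketch, not the actual patch: a profile collecting the flags the
     website-publish script used to pass on the command line. -->
<profile>
  <id>site</id>
  <properties>
    <!-- no need to regenerate protoc results just to build the site -->
    <protoc.skip>true</protoc.skip>
    <!-- jars were already built during install; skip remote-resources download -->
    <remoteresources.skip>true</remoteresources.skip>
  </properties>
</profile>
```

With such a profile in place, the nightly script could run something like `mvn -P site site` instead of spelling each `-D` flag out by hand.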

> Move website building flags into a profile
> --
>
> Key: HBASE-20389
> URL: https://issues.apache.org/jira/browse/HBASE-20389
> Project: HBase
>  Issue Type: Improvement
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-20389.0.patch
>
>
> We have some "magic" in our website building right now. The script that's 
> used by our automated website build + publish mechanism manually sets a 
> bunch of stuff on the maven command line.
> It'd be better to reflect those settings in a maven profile, so that folks 
> are less likely to be surprised, e.g. when trying to debug a failure in the 
> {{site}} goal.





[jira] [Updated] (HBASE-20389) Move website building flags into a profile

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20389:

Attachment: HBASE-20389.0.patch

> Move website building flags into a profile
> --
>
> Key: HBASE-20389
> URL: https://issues.apache.org/jira/browse/HBASE-20389
> Project: HBase
>  Issue Type: Improvement
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-20389.0.patch
>
>
> We have some "magic" in our website building right now. The script that's 
> used by our automated website build + publish mechanism manually sets a 
> bunch of stuff on the maven command line.
> It'd be better to reflect those settings in a maven profile, so that folks 
> are less likely to be surprised, e.g. when trying to debug a failure in the 
> {{site}} goal.





[jira] [Assigned] (HBASE-20364) nightly job gives old results or no results for stages that timeout on SCM

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-20364:
---

Assignee: Sean Busbey

> nightly job gives old results or no results for stages that timeout on SCM
> --
>
> Key: HBASE-20364
> URL: https://issues.apache.org/jira/browse/HBASE-20364
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> seen in the branch-2.0 nightly report for HBASE-18828:
>  
> {quote}
> Results for branch branch-2.0
>  [build #143 on 
> builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143/]:
>  (x) *\{color:red}-1 overall\{color}*
> 
> details (if available):
> (/) \{color:green}+1 general checks\{color}
> -- For more information [see general 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/140//General_Nightly_Build_Report/]
>  
> (/) \{color:green}+1 jdk8 hadoop2 checks\{color}
> -- For more information [see jdk8 (hadoop2) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop2)/]
> (/) \{color:green}+1 jdk8 hadoop3 checks\{color}
> -- For more information [see jdk8 (hadoop3) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop3)/]
>  
> {quote}
>  
> -1 for the overall build was correct. build #143 failed both the general 
> check and the source tarball check.
>  
> but in the posted comment, we get a false "passing" that links to the general 
> result from build #140. and we get no result for the source tarball at all.





[jira] [Work started] (HBASE-20364) nightly job gives old results or no results for stages that timeout on SCM

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-20364 started by Sean Busbey.
---
> nightly job gives old results or no results for stages that timeout on SCM
> --
>
> Key: HBASE-20364
> URL: https://issues.apache.org/jira/browse/HBASE-20364
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> seen in the branch-2.0 nightly report for HBASE-18828:
>  
> {quote}
> Results for branch branch-2.0
>  [build #143 on 
> builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143/]:
>  (x) *\{color:red}-1 overall\{color}*
> 
> details (if available):
> (/) \{color:green}+1 general checks\{color}
> -- For more information [see general 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/140//General_Nightly_Build_Report/]
>  
> (/) \{color:green}+1 jdk8 hadoop2 checks\{color}
> -- For more information [see jdk8 (hadoop2) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop2)/]
> (/) \{color:green}+1 jdk8 hadoop3 checks\{color}
> -- For more information [see jdk8 (hadoop3) 
> report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/143//JDK8_Nightly_Build_Report_(Hadoop3)/]
>  
> {quote}
>  
> -1 for the overall build was correct. build #143 failed both the general 
> check and the source tarball check.
>  
> but in the posted comment, we get a false "passing" that links to the general 
> result from build #140. and we get no result for the source tarball at all.





[jira] [Commented] (HBASE-20388) nightly tests running on a feature branch should only comment on that feature branch's jira

2018-04-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436738#comment-16436738
 ] 

Hadoop QA commented on HBASE-20388:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
4s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918859/HBASE-20388.0.patch |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux a9bca14160cf 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / d59a6c8166 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| shellcheck | v0.4.4 |
| Max. process+thread count | 43 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12422/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> nightly tests running on a feature branch should only comment on that feature 
> branch's jira
> ---
>
> Key: HBASE-20388
> URL: https://issues.apache.org/jira/browse/HBASE-20388
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20388.0.patch
>
>
> It would help improve our signal-to-noise ratio from nightly tests if feature 
> branch runs stopped commenting on all the jiras that got covered by a rebase 
> / merge.
> should be straightforward to have the commenting bit check the current 
> branch against our feature branch naming convention.





[jira] [Updated] (HBASE-20163) Forbid major compaction when standby cluster replay the remote wals

2018-04-12 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-20163:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Forbid major compaction when standby cluster replay the remote wals
> ---
>
> Key: HBASE-20163
> URL: https://issues.apache.org/jira/browse/HBASE-20163
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-20163.HBASE-19064.001.patch, 
> HBASE-20163.HBASE-19064.002.patch, HBASE-20163.HBASE-19064.003.patch
>
>






[jira] [Work started] (HBASE-20389) Move website building flags into a profile

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-20389 started by Sean Busbey.
---
> Move website building flags into a profile
> --
>
> Key: HBASE-20389
> URL: https://issues.apache.org/jira/browse/HBASE-20389
> Project: HBase
>  Issue Type: Improvement
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> We have some "magic" in our website building right now. The script that's 
> used by our automated website build + publish mechanism manually sets a 
> bunch of stuff on the maven command line.
> It'd be better to reflect those settings in a maven profile, so that folks 
> are less likely to be surprised, e.g. when trying to debug a failure in the 
> {{site}} goal.





[jira] [Assigned] (HBASE-20389) Move website building flags into a profile

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-20389:
---

Assignee: Sean Busbey

> Move website building flags into a profile
> --
>
> Key: HBASE-20389
> URL: https://issues.apache.org/jira/browse/HBASE-20389
> Project: HBase
>  Issue Type: Improvement
>  Components: build, website
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> We have some "magic" in our website building right now. The script that's 
> used by our automated website build + publish mechanism manually sets a 
> bunch of stuff on the maven command line.
> It'd be better to reflect those settings in a maven profile, so that folks 
> are less likely to be surprised, e.g. when trying to debug a failure in the 
> {{site}} goal.





[jira] [Commented] (HBASE-20163) Forbid major compaction when standby cluster replay the remote wals

2018-04-12 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436712#comment-16436712
 ] 

Guanghao Zhang commented on HBASE-20163:


Pushed to HBASE-19064. Thanks [~Apache9] for reviewing.

> Forbid major compaction when standby cluster replay the remote wals
> ---
>
> Key: HBASE-20163
> URL: https://issues.apache.org/jira/browse/HBASE-20163
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-20163.HBASE-19064.001.patch, 
> HBASE-20163.HBASE-19064.002.patch, HBASE-20163.HBASE-19064.003.patch
>
>






[jira] [Commented] (HBASE-20395) Displaying thrift server type on the thrift page

2018-04-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436711#comment-16436711
 ] 

Ted Yu commented on HBASE-20395:


Your patch changes SERVER_TYPE_CONF_KEY. Is that absolutely necessary ?
{code}
+// set the thrift server type
+conf.set("hbase.regionserver.thrift.server.type", 
ThriftMetrics.ThriftServerType.TWO.name());
{code}
Since the value for the new config is only used by the .jsp, you can choose a 
different config key so that the existing config stays the same.

bq. Do you mean to set ThriftMetrics or ThriftServer instance as an attribute 
of InfoServer

bq. infoServer.setAttribute("hbase.thrift.metric", metrics);

You don't need to pass the metrics instance - you just need to pass the 
ThriftMetrics.ThriftServerType

Thanks
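For illustration only, a tiny self-contained sketch of the suggestion above. A plain Java map stands in for HBase's Configuration, and PAGE_TYPE_KEY is a made-up placeholder key, not an agreed-on HBase config name; the point is that the existing key keeps its operator-chosen value while the status page reads its own additive key:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of keeping the existing config untouched and publishing the running
// server type under a separate, page-only key. The map and PAGE_TYPE_KEY are
// hypothetical stand-ins, not HBase's actual API.
public class ServerTypeSketch {
    // existing key, consulted when choosing an implementation -- left alone
    static final String SERVER_TYPE_CONF_KEY = "hbase.regionserver.thrift.server.type";
    // hypothetical new key, read only by the status .jsp
    static final String PAGE_TYPE_KEY = "hbase.thrift.page.server.type";

    final Map<String, String> conf = new HashMap<>();

    void recordStartedServerType(String type) {
        // additive: never overwrites SERVER_TYPE_CONF_KEY
        conf.put(PAGE_TYPE_KEY, type);
    }

    public static void main(String[] args) {
        ServerTypeSketch s = new ServerTypeSketch();
        s.conf.put(SERVER_TYPE_CONF_KEY, "nonblocking"); // operator's choice
        s.recordStartedServerType("TWO");                // what actually started
        System.out.println(s.conf.get(SERVER_TYPE_CONF_KEY)); // nonblocking
        System.out.println(s.conf.get(PAGE_TYPE_KEY));        // TWO
    }
}
```

The design point is that the page-only key can change freely later without touching the contract of the existing config.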

> Displaying thrift server type on the thrift page
> 
>
> Key: HBASE-20395
> URL: https://issues.apache.org/jira/browse/HBASE-20395
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20395.master.001.patch, 
> HBASE-20395.master.002.patch, result.png
>
>
> HBase supports two types of thrift server: thrift and thrift2.
> But after the thrift server starts successfully, there is no convenient way 
> to tell which type is running.
> So, displaying the thrift server type on the thrift page would be a 
> convenience for users.





[jira] [Commented] (HBASE-20163) Forbid major compaction when standby cluster replay the remote wals

2018-04-12 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436698#comment-16436698
 ] 

Guanghao Zhang commented on HBASE-20163:


The failed UT is not related. Will commit it later.

> Forbid major compaction when standby cluster replay the remote wals
> ---
>
> Key: HBASE-20163
> URL: https://issues.apache.org/jira/browse/HBASE-20163
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-20163.HBASE-19064.001.patch, 
> HBASE-20163.HBASE-19064.002.patch, HBASE-20163.HBASE-19064.003.patch
>
>






[jira] [Updated] (HBASE-20388) nightly tests running on a feature branch should only comment on that feature branch's jira

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20388:

Status: Patch Available  (was: Open)

-v0
  - first use our jira key pattern matching against the branch name
  - if we don't find anything, fall back to checking changeset messages
  - comment on whichever jira keys we find first.
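The v0 matching logic above can be sketched in a few lines. This is an illustrative standalone version, not the actual Jenkinsfile helper (method names here are invented): try the branch name first, and only fall back to changeset messages when the branch name carries no JIRA key.

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the commenting logic: a feature branch named after a JIRA key
// gets comments only on that key; otherwise fall back to scanning changesets.
public class JiraKeySketch {
    static final Pattern KEY = Pattern.compile("HBASE-\\d+");

    static String firstKey(String text) {
        Matcher m = KEY.matcher(text);
        return m.find() ? m.group() : null;
    }

    static String keyToComment(String branch, List<String> changesetMessages) {
        String fromBranch = firstKey(branch);
        if (fromBranch != null) {
            return fromBranch;          // feature branch: comment only here
        }
        for (String msg : changesetMessages) {
            String k = firstKey(msg);
            if (k != null) {
                return k;               // fallback: first key in the changesets
            }
        }
        return null;                    // nothing to comment on
    }
}
```

A branch like `HBASE-20388-nightly` would therefore get exactly one comment target, regardless of how many issues a rebase pulled into its changesets.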

> nightly tests running on a feature branch should only comment on that feature 
> branch's jira
> ---
>
> Key: HBASE-20388
> URL: https://issues.apache.org/jira/browse/HBASE-20388
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20388.0.patch
>
>
> It would help improve our signal-to-noise ratio from nightly tests if feature 
> branch runs stopped commenting on all the jiras that got covered by a rebase 
> / merge.
> should be straightforward to have the commenting bit check the current 
> branch against our feature branch naming convention.





[jira] [Assigned] (HBASE-20388) nightly tests running on a feature branch should only comment on that feature branch's jira

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-20388:
---

Assignee: Sean Busbey

> nightly tests running on a feature branch should only comment on that feature 
> branch's jira
> ---
>
> Key: HBASE-20388
> URL: https://issues.apache.org/jira/browse/HBASE-20388
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20388.0.patch
>
>
> It would help improve our signal-to-noise ratio from nightly tests if feature 
> branch runs stopped commenting on all the jiras that got covered by a rebase 
> / merge.
> should be straightforward to have the commenting bit check the current 
> branch against our feature branch naming convention.





[jira] [Commented] (HBASE-20395) Displaying thrift server type on the thrift page

2018-04-12 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436688#comment-16436688
 ] 

Guangxu Cheng commented on HBASE-20395:
---

{quote}ThriftMetrics instance can be retrieved from the respective thrift 
server class.
 Can we get the type information from that instance instead of introducing 
another config ?
{quote}
Sorry, I missed this message.
{code:java}
InfoServer infoServer = new InfoServer("thrift", a, port, false, conf);
infoServer.setAttribute("hbase.conf", conf);
// set new Attribute
infoServer.setAttribute("hbase.thrift.metric", metrics);
//or
infoServer.setAttribute("hbase.thrift", this); 
{code}
Hi, [~yuzhih...@gmail.com]. Do you mean to set the ThriftMetrics or ThriftServer 
instance as an attribute of InfoServer? However, this also requires adding a 
new config "hbase.thrift.metric" or "hbase.thrift".
 I introduced a new config to be consistent with the other configs (implType, 
framed, etc.), and this may be simpler.

> Displaying thrift server type on the thrift page
> 
>
> Key: HBASE-20395
> URL: https://issues.apache.org/jira/browse/HBASE-20395
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-20395.master.001.patch, 
> HBASE-20395.master.002.patch, result.png
>
>
> HBase supports two types of thrift server: thrift and thrift2.
> But after the thrift server starts successfully, there is no convenient way 
> to tell which type is running.
> So, displaying the thrift server type on the thrift page would be a 
> convenience for users.





[jira] [Updated] (HBASE-20388) nightly tests running on a feature branch should only comment on that feature branch's jira

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20388:

Attachment: HBASE-20388.0.patch

> nightly tests running on a feature branch should only comment on that feature 
> branch's jira
> ---
>
> Key: HBASE-20388
> URL: https://issues.apache.org/jira/browse/HBASE-20388
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20388.0.patch
>
>
> It would help improve our signal-to-noise ratio from nightly tests if feature 
> branch runs stopped commenting on all the jiras that got covered by a rebase 
> / merge.
> should be straight forward to have the commenting bit check the current 
> branch against our feature branch naming convention.





[jira] [Updated] (HBASE-20406) HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods

2018-04-12 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated HBASE-20406:
-
Attachment: HBASE-20406.master.001.patch

> HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods
> --
>
> Key: HBASE-20406
> URL: https://issues.apache.org/jira/browse/HBASE-20406
> Project: HBase
>  Issue Type: Improvement
>  Components: security, Thrift
>Reporter: Kevin Risden
>Priority: Major
> Attachments: HBASE-20406.master.001.patch
>
>
> HBASE-10473 introduced a utility HttpServerUtil.constrainHttpMethods to 
> prevent Jetty from answering on TRACE and OPTIONS methods. This should be 
> added to Thrift in HTTP mode as well.





[jira] [Commented] (HBASE-20128) Add new UTs which extends the old replication UTs but set replication scope to SERIAL

2018-04-12 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436671#comment-16436671
 ] 

Zheng Hu commented on HBASE-20128:
--

Seems like TestNamespaceReplication got stuck when running in serial 
replication mode. Let me check this.

> Add new UTs which extends the old replication UTs but set replication scope 
> to SERIAL
> -
>
> Key: HBASE-20128
> URL: https://issues.apache.org/jira/browse/HBASE-20128
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-20128.v1.patch, HBASE-20128.v2.patch, 
> HBASE-20128.v3.patch, HBASE-20128.v3.patch, HBASE-20128.v4.patch, 
> HBASE-20128.v5.patch, HBASE-20128.v5.patch
>
>
> Make sure that the basic function for replication still works. The serial 
> replication UTs are focused on order.





[jira] [Commented] (HBASE-20145) HMaster start fails with IllegalStateException when HADOOP_HOME is set

2018-04-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436668#comment-16436668
 ] 

Rohith Sharma K S commented on HBASE-20145:
---

Thanks [~jojochuang] for trying it out. I have found the root cause of the 
failure. Did you build HBase-2.0.0-beta1 from source or directly use the 
hbase*.tar.gz that is available in the mirrors?

Btw, I have answered the Stack Overflow question; see 
[hbase-error-illegalstateexception-when-starting-master-hsync|https://stackoverflow.com/questions/48709569/hbase-error-illegalstateexception-when-starting-master-hsync]
 . This should help.

> HMaster start fails with IllegalStateException when HADOOP_HOME is set
> --
>
> Key: HBASE-20145
> URL: https://issues.apache.org/jira/browse/HBASE-20145
> Project: HBase
>  Issue Type: Bug
> Environment: HBase-2.0-beta1.
> Hadoop trunk version.
> java version "1.8.0_144"
>Reporter: Rohith Sharma K S
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>
> It is observed that HMaster start is failed when HADOOP_HOME is set as env 
> while starting HMaster. HADOOP_HOME is pointing to Hadoop trunk version.
> {noformat}
> 2018-03-07 16:59:52,654 ERROR [master//10.200.4.200:16000] master.HMaster: 
> Failed to become active master
> java.lang.IllegalStateException: The procedure WAL relies on the ability to 
> hsync for proper operation during component failures, but the underlying 
> filesystem does not support doing so. Please check the config value of 
> 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness 
> and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount 
> that can provide it.
>     at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1036)
>     at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:374)
>     at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:532)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1232)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1145)
>     at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:837)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2026)
>     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:547)
>     at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The same configs work properly in the HBase-1.2.6 build. 





[jira] [Commented] (HBASE-20406) HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods

2018-04-12 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436654#comment-16436654
 ] 

Kevin Risden commented on HBASE-20406:
--

I am putting together a patch for this that would at least prevent this 
behavior.

> HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods
> --
>
> Key: HBASE-20406
> URL: https://issues.apache.org/jira/browse/HBASE-20406
> Project: HBase
>  Issue Type: Improvement
>  Components: security, Thrift
>Reporter: Kevin Risden
>Priority: Major
>
> HBASE-10473 introduced a utility HttpServerUtil.constrainHttpMethods to 
> prevent Jetty from answering on TRACE and OPTIONS methods. This should be 
> added to Thrift in HTTP mode as well.





[jira] [Created] (HBASE-20406) HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods

2018-04-12 Thread Kevin Risden (JIRA)
Kevin Risden created HBASE-20406:


 Summary: HBase Thrift HTTP - Shouldn't handle TRACE/OPTIONS methods
 Key: HBASE-20406
 URL: https://issues.apache.org/jira/browse/HBASE-20406
 Project: HBase
  Issue Type: Improvement
  Components: security, Thrift
Reporter: Kevin Risden


HBASE-10473 introduced a utility HttpServerUtil.constrainHttpMethods to prevent 
Jetty from answering on TRACE and OPTIONS methods. This should be added to 
Thrift in HTTP mode as well.
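The policy such a constraint enforces can be shown with a tiny standalone predicate. This is illustrative only; the real fix wires security constraints into Jetty via HttpServerUtil.constrainHttpMethods rather than checking methods by hand:

```java
import java.util.Locale;
import java.util.Set;

// Illustrative only: shows the policy being enforced -- reject TRACE and
// OPTIONS, allow everything else. The actual patch uses Jetty constraints
// via HttpServerUtil.constrainHttpMethods, not a hand-rolled check.
public class MethodPolicy {
    private static final Set<String> FORBIDDEN = Set.of("TRACE", "OPTIONS");

    static boolean isAllowed(String httpMethod) {
        return httpMethod != null
            && !FORBIDDEN.contains(httpMethod.toUpperCase(Locale.ROOT));
    }
}
```

Answering TRACE in particular is a classic cross-site tracing vector, which is why servers typically refuse it outright rather than leaving it to handlers.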





[jira] [Commented] (HBASE-20244) NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient

2018-04-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436633#comment-16436633
 ] 

Wei-Chiu Chuang commented on HBASE-20244:
-

There were a number of changes related to at-rest encryption quite recently, 
namely HDFS-12574, which breaks the HBase code here. There is also other 
refactoring work, like HDFS-12396.

HDFS-12396 added HdfsKMSUtil.getCryptoProtocolVersion() and 
HdfsKMSUtil.getCryptoCodec(), which are nice because they do extra checks to 
harden the system in unexpected settings.

HDFS-12574 also added a TraceScope around 
HdfsKMSUtil.decryptEncryptedDataEncryptionKey().

Question: why doesn't the asyncfs use CryptoOutputStream when the underlying 
file system is encrypted? That way, it could just call 
HdfsKMSUtil#createWrappedInputStream(), which is a public method (even though 
still a private API), and the code could be much cleaner.
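The failure mode discussed here comes from resolving a private method reflectively and having a dependency refactor move it. A generic sketch of the guarded-lookup pattern involved (the lookup target below is a JDK class chosen purely for the demo, not the helper's real target):

```java
import java.lang.reflect.Method;

// Generic sketch of the guarded reflective lookup a helper like
// FanOutOneBlockAsyncDFSOutputSaslHelper performs: resolve a method by name
// and signal the caller to fall back (here, by returning null) instead of
// crashing when a refactor such as HDFS-12574 has moved or removed it.
public class ReflectiveLookup {
    static Method findOrNull(Class<?> clazz, String name, Class<?>... params) {
        try {
            Method m = clazz.getDeclaredMethod(name, params);
            m.setAccessible(true); // needed when the target is private
            return m;
        } catch (NoSuchMethodException e) {
            // the dependency no longer exposes this internal; caller must
            // degrade gracefully (e.g. tell users to switch WAL providers)
            return null;
        }
    }
}
```

Reflecting into another project's private internals is inherently fragile, which is why the comment above leans toward the public (if still private-API) HdfsKMSUtil entry points instead.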

> NoSuchMethodException when retrieving private method 
> decryptEncryptedDataEncryptionKey from DFSClient
> -
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt
>
>
> I was running unit test against hadoop 3.0.1 RC and saw the following in test 
> output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] 
> asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly 
> initialize access to HDFS internals. Please update  your WAL Provider to not 
> make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> {code}
> The private method was moved by HDFS-12574 to HdfsKMSUtil with different 
> signature.
> To accommodate the above method movement, it seems we need to call the 
> following method of DFSClient:
> {code}
>   public KeyProvider getKeyProvider() throws IOException {
> {code}
> Since the new decryptEncryptedDataEncryptionKey method has this signature:
> {code}
>   static KeyVersion decryptEncryptedDataEncryptionKey(FileEncryptionInfo
> feInfo, KeyProvider keyProvider) throws IOException {
> {code}





[jira] [Updated] (HBASE-20404) Ugly cleanerchore complaint that dir is not empty

2018-04-12 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-20404:
--
Description: 
 I see these big dirty exceptions in my master log during a long run. Let's 
clean them up. (Are they exceptions I as an operator can actually do something 
about? Are they 'problems'? Should they be LOG.warn?)

{code}
2018-04-12 16:02:09,911 WARN  [ForkJoinPool-1-worker-15] cleaner.CleanerChore: 
Could not delete dir under 
hdfs://ve0524.halxg.cloudera.com:8020/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta;
 {}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException):
 
`/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta
 is non empty': Directory is not empty
  at 
org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:115)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2848)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1048)
  at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
  at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)

  at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
  at org.apache.hadoop.ipc.Client.call(Client.java:1435)
  at org.apache.hadoop.ipc.Client.call(Client.java:1345)
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
  at com.sun.proxy.$Proxy26.delete(Unknown Source)
  at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:568)
  at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
  at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
  at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
  at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
  at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
  at com.sun.proxy.$Proxy27.delete(Unknown Source)
  at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)

...
{code}

Looks like log format is off too...



  was:
I see these big dirty exceptions in my master log during a long-run Lets 
clean them up (Are they exceptions I as an operator can actually do something 
about? Are they 'problems'? Should they be LOG.warn?)

{code}
2018-04-12 16:02:09,911 WARN  [ForkJoinPool-1-worker-15] cleaner.CleanerChore: 
Could not delete dir under 
hdfs://ve0524.halxg.cloudera.com:8020/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta;
 {}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException):
 
`/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta
 is non empty': Directory is not empty
  at 
org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:115)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2848)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1048)
  at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
  at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
  at 

[jira] [Commented] (HBASE-20404) Ugly cleanerchore complaint that dir is not empty

2018-04-12 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436606#comment-16436606
 ] 

Reid Chan commented on HBASE-20404:
---

bq. Are they exceptions? Are they 'problems'?
Both: no.
bq. I as an operator can actually do something about?
Yes, you can delete them yourself, but there's no need, since it is a chore.
bq. Should they be LOG.warn?
DEBUG, or remove them if you like; either does no harm.
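The point above — that a failed delete of a non-empty directory is an expected race rather than an error — can be illustrated with plain java.nio. This is a hedged analogy only: the real CleanerChore deletes over HDFS, and QuietCleaner and tryDeleteDir are invented names.

```java
import java.io.IOException;
import java.nio.file.*;

// Hedged sketch (not the actual CleanerChore code): demote the expected
// "directory not empty" failure to DEBUG, since the chore simply retries
// on its next run.
public class QuietCleaner {
    /** Returns true if dir was deleted; a non-empty dir is not an error. */
    static boolean tryDeleteDir(Path dir) throws IOException {
        try {
            Files.delete(dir);
            return true;
        } catch (DirectoryNotEmptyException e) {
            // Expected race: the dir gained children between listing and
            // delete. DEBUG, not WARN -- the next chore run will retry.
            System.out.println("DEBUG: " + dir + " not yet empty, will retry");
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("cleaner");
        Files.createFile(root.resolve("child"));
        System.out.println("deleted=" + tryDeleteDir(root)); // false: non-empty
        Files.delete(root.resolve("child"));
        System.out.println("deleted=" + tryDeleteDir(root)); // true once empty
    }
}
```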

> Ugly cleanerchore complaint that dir is not empty
> -
>
> Key: HBASE-20404
> URL: https://issues.apache.org/jira/browse/HBASE-20404
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: stack
>Priority: Major
>
> I see these big dirty exceptions in my master log during a long run. Let's 
> clean them up. (Are they exceptions I as an operator can actually do something 
> about? Are they 'problems'? Should they be LOG.warn?)
> {code}
> 2018-04-12 16:02:09,911 WARN  [ForkJoinPool-1-worker-15] 
> cleaner.CleanerChore: Could not delete dir under 
> hdfs://ve0524.halxg.cloudera.com:8020/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta;
>  {}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException):
>  
> `/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta
>  is non empty': Directory is not empty
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:115)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2848)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1048)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1435)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1345)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy26.delete(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:568)
>   at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>   at com.sun.proxy.$Proxy27.delete(Unknown Source)
>   at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
> ...
> {code}
> Looks like log format is off too...





[jira] [Commented] (HBASE-18792) hbase-2 needs to defend against hbck operations

2018-04-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436570#comment-16436570
 ] 

Hadoop QA commented on HBASE-18792:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 25s{color} 
| {color:red} hbase-common generated 1 new + 40 unchanged - 2 fixed = 41 total 
(was 42) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} hbase-common: The patch generated 0 new + 1 
unchanged - 2 fixed = 1 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
0s{color} | {color:red} hbase-server: The patch generated 5 new + 94 unchanged 
- 3 fixed = 99 total (was 97) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
12s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
12m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
21s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}101m 
22s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-18792 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918831/hbase-18792.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| 

[jira] [Created] (HBASE-20405) Update website to meet foundation recommendations

2018-04-12 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-20405:
---

 Summary: Update website to meet foundation recommendations
 Key: HBASE-20405
 URL: https://issues.apache.org/jira/browse/HBASE-20405
 Project: HBase
  Issue Type: Task
  Components: website
Reporter: Sean Busbey


The Apache Whimsy tool includes an automated checker for whether projects are 
following foundation guidance for websites:

https://whimsy.apache.org/site/project/hbase

Out of 10 checks, we currently have 5 green, 4 red, and 1 orange.

The Whimsy listing gives links to the relevant policy and explains what it's 
looking for.





[jira] [Commented] (HBASE-20248) [ITBLL] UNREFERENCED rows

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436484#comment-16436484
 ] 

stack commented on HBASE-20248:
---

Just finished a 4th 10B run where Master was NOT killed over the life of the 
run. It verified. Logs look ok except for (disturbing-looking) spew from 
HBASE-20404 "Ugly cleanerchore complaint that dir is not empty" and HBASE-20383 
"[AMv2] AssignmentManager: Failed transition XYZ is not OPEN".

> [ITBLL] UNREFERENCED rows
> -
>
> Key: HBASE-20248
> URL: https://issues.apache.org/jira/browse/HBASE-20248
> Project: HBase
>  Issue Type: Sub-task
>  Components: dataloss
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Blocker
> Fix For: 2.0.0
>
>
> From parent, saw unreferenced rows in a run yesterday against tip of 
> branch-2. Saw similar in a run from a week or so ago.
> Enabling DEBUG and rerunning to see if I can get to root of dataloss. See 
> https://docs.google.com/document/d/14Tvu5yWYNBDFkh8xCqLkU9tlyNWhJv3GjDGOkqZU1eE/edit#
>  for old debugging trickery.





[jira] [Created] (HBASE-20404) Ugly cleanerchore complaint that dir is not empty

2018-04-12 Thread stack (JIRA)
stack created HBASE-20404:
-

 Summary: Ugly cleanerchore complaint that dir is not empty
 Key: HBASE-20404
 URL: https://issues.apache.org/jira/browse/HBASE-20404
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: stack


I see these big dirty exceptions in my master log during a long run. Let's 
clean them up. (Are they exceptions I as an operator can actually do something 
about? Are they 'problems'? Should they be LOG.warn?)

{code}
2018-04-12 16:02:09,911 WARN  [ForkJoinPool-1-worker-15] cleaner.CleanerChore: 
Could not delete dir under 
hdfs://ve0524.halxg.cloudera.com:8020/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta;
 {}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException):
 
`/hbase/archive/data/default/IntegrationTestBigLinkedList/1e24549061df3adc4858fbcaf1929553/meta
 is non empty': Directory is not empty
  at 
org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:115)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2848)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1048)
  at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
  at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)

  at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
  at org.apache.hadoop.ipc.Client.call(Client.java:1435)
  at org.apache.hadoop.ipc.Client.call(Client.java:1345)
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
  at com.sun.proxy.$Proxy26.delete(Unknown Source)
  at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:568)
  at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
  at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
  at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
  at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
  at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
  at com.sun.proxy.$Proxy27.delete(Unknown Source)
  at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)

...
{code}

Looks like log format is off too...







[jira] [Commented] (HBASE-18792) hbase-2 needs to defend against hbck operations

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436476#comment-16436476
 ] 

stack commented on HBASE-18792:
---

[~davelatham] Hopefully it'll be needed less often than in the past, but yeah, we 
need an hbck2 (HBASE-19121). Its very vocabulary will be different because it 
all works so differently now. The idea is to start a new subproject so we can 
roll out improvements to it quickly.

> hbase-2 needs to defend against hbck operations
> ---
>
> Key: HBASE-18792
> URL: https://issues.apache.org/jira/browse/HBASE-18792
> Project: HBase
>  Issue Type: Task
>  Components: hbck
>Reporter: stack
>Assignee: Umesh Agashe
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: hbase-18792.master.001.patch
>
>
> hbck needs updating to run against hbase2. Meantime, if an hbck from hbase1 
> is run against hbase2, it may do damage. hbase2 should defend itself against 
> hbck1 ops.





[jira] [Updated] (HBASE-19121) HBCK for AMv2 (A.K.A HBCK2)

2018-04-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19121:
--
Summary: HBCK for AMv2 (A.K.A HBCK2)  (was: HBCK for AMv2)

> HBCK for AMv2 (A.K.A HBCK2)
> ---
>
> Key: HBASE-19121
> URL: https://issues.apache.org/jira/browse/HBASE-19121
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Reporter: stack
>Priority: Major
>
> We don't have an hbck for the new AM. Old hbck may actually do damage going 
> against AMv2.
> Fix.





[jira] [Commented] (HBASE-20338) WALProcedureStore#recoverLease() should have fixed sleeps for retrying rollWriter()

2018-04-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436450#comment-16436450
 ] 

Wei-Chiu Chuang commented on HBASE-20338:
-

Thanks [~mdrob], [~uagashe] and [~chia7712]!

> WALProcedureStore#recoverLease() should have fixed sleeps for retrying 
> rollWriter()
> ---
>
> Key: HBASE-20338
> URL: https://issues.apache.org/jira/browse/HBASE-20338
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20338.master.001.patch, 
> HBASE-20338.master.002.patch, HBASE-20338.master.003.patch, 
> HBASE-20338.master.004.patch, HBASE-20338.master.005.patch
>
>
> In our internal testing we observed that logs are getting flooded due to 
> continuous loop in WALProcedureStore#recoverLease():
> {code}
>   while (isRunning()) {
> // Get Log-MaxID and recover lease on old logs
> try {
>   flushLogId = initOldLogs(oldLogs);
> } catch (FileNotFoundException e) {
>   LOG.warn("Someone else is active and deleted logs. retrying.", e);
>   oldLogs = getLogFiles();
>   continue;
> }
> // Create new state-log
> if (!rollWriter(flushLogId + 1)) {
>   // someone else has already created this log
>   LOG.debug("Someone else has already created log " + flushLogId);
>   continue;
> }
> {code}
> rollWriter() fails to create a new file. Error messages in HDFS namenode logs 
> around same time:
> {code}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
> 172.31.121.196:38508 Call#3141 Retry#0
> java.io.IOException: Exeption while contacting value generator
> at 
> org.apache.hadoop.crypto.key.kms.ValueQueue.getAtMost(ValueQueue.java:389)
> at 
> org.apache.hadoop.crypto.key.kms.ValueQueue.getNext(ValueQueue.java:291)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.generateEncryptedKey(KMSClientProvider.java:724)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:511)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$2.run(FSNamesystem.java:2680)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$2.run(FSNamesystem.java:2676)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:477)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:458)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.generateEncryptedDataEncryptionKey(FSNamesystem.java:2675)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2815)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2712)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:604)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> at 
> 
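A fixed sleep between rollWriter() retries, as the issue title proposes, might be sketched like this. The names are illustrative only; this is not the actual HBASE-20338 patch.

```java
// Hedged sketch of the fix the issue title proposes: pause between
// retries so a persistent rollWriter() failure cannot hot-loop and
// flood the logs. Op, retry, and RetryWithSleep are invented names.
public class RetryWithSleep {
    interface Op { boolean attempt(); }

    /** Retry op with a fixed pause; returns true on the first success. */
    static boolean retry(Op op, int maxAttempts, long sleepMillis)
            throws InterruptedException {
        for (int i = 0; i < maxAttempts; i++) {
            if (op.attempt()) return true;
            Thread.sleep(sleepMillis); // fixed pause instead of a tight loop
        }
        return false; // give up after maxAttempts
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] calls = {0};
        // Succeeds on the third attempt, as a transiently failing
        // rollWriter() might once the KMS becomes reachable again.
        boolean ok = retry(() -> ++calls[0] >= 3, 5, 10L);
        System.out.println("ok=" + ok + " attempts=" + calls[0]);
    }
}
```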

[jira] [Commented] (HBASE-16689) Durability == ASYNC_WAL means no SYNC

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436446#comment-16436446
 ] 

stack commented on HBASE-16689:
---

Pushing out of 2.0.0. It won't be done in time. Added reference to above in 
commit on HBASE-20329

> Durability == ASYNC_WAL means no SYNC
> -
>
> Key: HBASE-16689
> URL: https://issues.apache.org/jira/browse/HBASE-16689
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.0.3, 1.1.6, 1.2.3
> Environment: At least get the above doc into the refguide.
>Reporter: stack
>Assignee: stack
>Priority: Critical
>
> Setting DURABILITY=ASYNC_WAL on a Table suspends all syncs for all Table 
> appends. If all tables on a cluster have this setting, data is flushed 
> from the RS to the DN at some arbitrary time and a bunch may just hang out in 
> DFSClient buffers on the RS-side indefinitely if writes are sporadic, at 
> least until there is a WAL roll -- a log roll sends a sync through the write 
> pipeline to flush out any outstanding appends -- or a region close, which does 
> similar, or we crash and drop the data in buffers on the RS.
> This is probably not what a user expects when they set ASYNC_WAL (We don't 
> doc anywhere that I could find clearly what ASYNC_WAL means). Worse, old-time 
> users probably associate ASYNC_WAL and DEFERRED_FLUSH, an old 
> HTableDescriptor config that was deprecated and replaced by ASYNC_WAL. 
> DEFERRED_FLUSH ran a background thread -- LogSyncer -- that on a configurable 
> interval, sent a sync down the write pipeline so any outstanding appends 
> since last last interval start get pushed out to the DN.  ASYNC_WAL doesn't 
> do this (see below for history on how we let go of the LogSyncer feature).
> Of note, we always sync meta edits. You can't turn this off. Also, given WALs 
> are per regionserver, if other regions on the RS are from tables that have 
> sync set, these writes will push out to the DN any appends done on tables 
> that have DEFERRED/ASYNC_WAL set.
> To fix, we could do a few things:
>  * Simple and comprehensive would be always queuing a sync, even if ASYNC_WAL 
> is set but we let go of Handlers as soon as we write the memstore -- we don't 
> wait on the sync to complete as we do with the default setting of 
> Durability=SYNC_WAL.
>  * Be like a 'real' database and add in a sync after N bytes of data have 
> been appended (configurable) or after M milliseconds have passed, whichever 
> threshold happens first. The size check would be easy. The sync-every-M-millis 
> would mean another thread.
>  * Doc what ASYNC_WAL means (and other durability options)
> Let me take a look and report back. Will file a bit of history on how we got 
> here in next comment.
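The second fix proposal above, a sync after N bytes or M milliseconds, whichever threshold trips first, can be sketched in plain Java. This is a toy model with made-up names and thresholds, not the actual HBase implementation:

```java
/**
 * Toy sketch of a "sync after N bytes or M millis, whichever first" policy.
 * Names and thresholds are illustrative only; this is not HBase code.
 */
public class SyncPolicySketch {
  private final long maxUnsyncedBytes;
  private final long maxUnsyncedMillis;
  private long unsyncedBytes = 0;
  private long lastSyncMillis;

  public SyncPolicySketch(long maxUnsyncedBytes, long maxUnsyncedMillis, long nowMillis) {
    this.maxUnsyncedBytes = maxUnsyncedBytes;
    this.maxUnsyncedMillis = maxUnsyncedMillis;
    this.lastSyncMillis = nowMillis;
  }

  /** Record an append; return true if a sync should be queued now. */
  public boolean append(long bytes, long nowMillis) {
    unsyncedBytes += bytes;
    return unsyncedBytes >= maxUnsyncedBytes
        || nowMillis - lastSyncMillis >= maxUnsyncedMillis;
  }

  /** Called after the sync completes: reset both thresholds. */
  public void synced(long nowMillis) {
    unsyncedBytes = 0;
    lastSyncMillis = nowMillis;
  }

  public static void main(String[] args) {
    SyncPolicySketch p = new SyncPolicySketch(1024, 100, 0);
    System.out.println(p.append(100, 10));   // false: under both thresholds
    System.out.println(p.append(2000, 20));  // true: byte threshold crossed
    p.synced(20);
    System.out.println(p.append(1, 200));    // true: time threshold crossed
  }
}
```

As the description notes, the byte check is cheap and can ride on the append path; the time check is what would require a background thread (or piggy-backing on some existing periodic chore).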



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16689) Durability == ASYNC_WAL means no SYNC

2018-04-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16689:
--
Fix Version/s: (was: 2.0.0)

> Durability == ASYNC_WAL means no SYNC
> -
>
> Key: HBASE-16689
> URL: https://issues.apache.org/jira/browse/HBASE-16689
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.0.3, 1.1.6, 1.2.3
> Environment: At least get the above doc into the refguide.
>Reporter: stack
>Assignee: stack
>Priority: Critical
>
> Setting DURABILITY=ASYNC_WAL on a Table suspends all syncs for all of that 
> Table's appends. If all tables on a cluster have this setting, data is flushed 
> from the RS to the DN at some arbitrary time and a bunch may just hang out in 
> DFSClient buffers on the RS-side indefinitely if writes are sporadic, at 
> least until there is a WAL roll -- a log roll sends a sync through the write 
> pipeline to flush out any outstanding appends -- or a region close, which does 
> similar, or we crash and drop the data in RS-side buffers.
> This is probably not what a user expects when they set ASYNC_WAL (we don't 
> clearly document anywhere I could find what ASYNC_WAL means). Worse, old-time 
> users probably associate ASYNC_WAL and DEFERRED_FLUSH, an old 
> HTableDescriptor config that was deprecated and replaced by ASYNC_WAL. 
> DEFERRED_FLUSH ran a background thread -- LogSyncer -- that, on a configurable 
> interval, sent a sync down the write pipeline so any outstanding appends 
> since the last interval start get pushed out to the DN. ASYNC_WAL doesn't 
> do this (see below for history on how we let go of the LogSyncer feature).
> Of note, we always sync meta edits. You can't turn this off. Also, given WALs 
> are per regionserver, if other regions on the RS are from tables that have 
> sync set, these writes will push out to the DN any appends done on tables 
> that have DEFERRED/ASYNC_WAL set.
> To fix, we could do a few things:
>  * Simple and comprehensive would be to always queue a sync, even if 
> ASYNC_WAL is set, but let go of Handlers as soon as we write the memstore -- 
> we don't wait on the sync to complete as we do with the default setting of 
> Durability=SYNC_WAL.
>  * Be like a 'real' database and add in a sync after N bytes of data have 
> been appended (configurable) or after M milliseconds have passed, whichever 
> threshold happens first. The size check would be easy. The sync-every-M-millis 
> would mean another thread.
>  * Doc what ASYNC_WAL means (and other durability options)
> Let me take a look and report back. Will file a bit of history on how we got 
> here in next comment.





[jira] [Resolved] (HBASE-20329) Add note for operators to refguide on AsyncFSWAL

2018-04-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-20329.
---
Resolution: Fixed

I just pushed a minor addendum:

{code}
commit d59a6c8166cf398ee62089cc35ffeddfe5824134 (HEAD -> m, origin/master, 
origin/HEAD)
Author: Michael Stack 
Date:   Thu Apr 12 15:59:00 2018 -0700

HBASE-20329 Add note for operators to refguide on AsyncFSWAL; ADDENDUM

Add small note on edits being immediately visible when Durability == 
ASYNC_WAL.

diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index bc29d4b1db..1d6fc60bad 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -1248,7 +1248,7 @@ dictionary because of an abrupt termination, a read of 
this last block may not b
 It is possible to set _durability_ on each Mutation or on a Table basis. 
Options include:

  * _SKIP_WAL_: Do not write Mutations to the WAL (See the next section, 
<>).
- * _ASYNC_WAL_: Write the WAL asynchronously; do not hold-up clients waiting 
on the sync of their write to the filesystem but return immediately; the 
Mutation will be flushed to the WAL at a later time. This option currently may 
lose data. See HBASE-16689.
+ * _ASYNC_WAL_: Write the WAL asynchronously; do not hold-up clients waiting 
on the sync of their write to the filesystem but return immediately. The edit 
becomes visible. Meanwhile, in the background, the Mutation will be flushed to 
the WAL at some time later. This option currently may lose data. See 
HBASE-16689.
  * _SYNC_WAL_: The *default*. Each edit is sync'd to HDFS before we return 
success to the client.
  * _FSYNC_WAL_: Each edit is fsync'd to HDFS and the filesystem before we 
return success to the client.
{code}
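The distinction the addendum draws, edits visible to readers immediately while WAL durability lags until a sync, can be modeled with a small toy sketch. This is plain Java, not HBase code; all names are made up:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

/**
 * Toy model (not HBase code) of ASYNC_WAL semantics: a write is applied to
 * the in-memory store and is immediately visible, while WAL durability lags
 * until a sync is pushed through (e.g. by a WAL roll or region close).
 */
public class AsyncWalModel {
  private final Map<String, String> memstore = new HashMap<>();   // visible data
  private final Queue<String> unsyncedWal = new ArrayDeque<>();   // not yet durable
  private int durableEdits = 0;

  /** ASYNC_WAL-style put: visible at once, durable only after the next sync. */
  public void put(String key, String value) {
    memstore.put(key, value);           // readers see this immediately
    unsyncedWal.add(key + "=" + value); // still buffered; lost on a crash
  }

  /** A WAL roll, region close, or explicit sync flushes the buffer. */
  public void sync() {
    durableEdits += unsyncedWal.size();
    unsyncedWal.clear();
  }

  public String get(String key) { return memstore.get(key); }
  public int unsyncedCount() { return unsyncedWal.size(); }
  public int durableCount() { return durableEdits; }

  public static void main(String[] args) {
    AsyncWalModel m = new AsyncWalModel();
    m.put("row1", "v1");
    System.out.println(m.get("row1"));      // v1: visible before any sync
    System.out.println(m.unsyncedCount());  // 1: not yet durable
    m.sync();                               // e.g. a WAL roll comes along
    System.out.println(m.durableCount());   // 1: now durable
  }
}
```

The crash-loss window is exactly the contents of `unsyncedWal` at the moment of failure, which is why the refguide text warns that this option may lose data.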




> Add note for operators to refguide on AsyncFSWAL
> 
>
> Key: HBASE-20329
> URL: https://issues.apache.org/jira/browse/HBASE-20329
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, wal
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HBASE-20329.master.001.patch
>
>
> Need a few notes in refguide on this new facility.





[jira] [Comment Edited] (HBASE-20145) HMaster start fails with IllegalStateException when HADOOP_HOME is set

2018-04-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436326#comment-16436326
 ] 

Wei-Chiu Chuang edited comment on HBASE-20145 at 4/12/18 10:55 PM:
---

It doesn't seem to reproduce for me – tried hadoop trunk + hbase master, hadoop 
3.0.0-beta1 + hbase 2.0.0-beta1. Neither reproduced.

[~rohithsharma] could you share your hadoop and hbase configuration? Also, run:
hdfs ec -getPolicy -path /
hdfs ec -getPolicy -path /hbase
and see what the output looks like for both directories.

I would actually think it makes sense to add an additional check to see if the 
file system directory is erasure coded, and log an extra message to avoid 
confusion.


was (Author: jojochuang):
It doesn't seem to reproduce for me – tried hadoop trunk + hbase master, hadoop 
3.0.0-beta1 + hbase 2.0.0-beta1. Neither reproduced.

[~rohithsharma] could you share your hadoop and hbase configuration?

I would actually think it makes sense to add an additional check to see if the 
file system directory is erasure coded, and log an extra message to avoid 
confusion.

> HMaster start fails with IllegalStateException when HADOOP_HOME is set
> --
>
> Key: HBASE-20145
> URL: https://issues.apache.org/jira/browse/HBASE-20145
> Project: HBase
>  Issue Type: Bug
> Environment: HBase-2.0-beta1.
> Hadoop trunk version.
> java version "1.8.0_144"
>Reporter: Rohith Sharma K S
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>
> It is observed that HMaster start fails when HADOOP_HOME is set as an env 
> variable while starting HMaster. HADOOP_HOME points to the Hadoop trunk 
> version.
> {noformat}
> 2018-03-07 16:59:52,654 ERROR [master//10.200.4.200:16000] master.HMaster: 
> Failed to become active master
> java.lang.IllegalStateException: The procedure WAL relies on the ability to 
> hsync for proper operation during component failures, but the underlying 
> filesystem does not support doing so. Please check the config value of 
> 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness 
> and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount 
> that can provide it.
>     at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1036)
>     at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:374)
>     at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:532)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1232)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1145)
>     at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:837)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2026)
>     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:547)
>     at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The same configs work properly in the HBase-1.2.6 build.
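The guard that throws the IllegalStateException above can be modeled in plain Java. This is a hypothetical sketch of the pattern, refuse to start unless the stream claims hsync support or the operator explicitly lowers the bar via config; the real logic lives in WALProcedureStore.rollWriter and these names are made up:

```java
/**
 * Toy model of the startup guard in the stack trace above: fail fast when
 * hsync is required but the underlying filesystem cannot provide it.
 * Illustrative only; not the actual HBase implementation.
 */
public class HsyncGuard {
  public static void checkHsync(boolean streamSupportsHsync, boolean requireHsync) {
    if (requireHsync && !streamSupportsHsync) {
      // Mirrors the error the operator sees in the HMaster log.
      throw new IllegalStateException(
          "The procedure WAL relies on the ability to hsync, "
              + "but the underlying filesystem does not support doing so.");
    }
  }

  public static void main(String[] args) {
    try {
      checkHsync(false, true);  // hsync required but unsupported: refuse to start
    } catch (IllegalStateException e) {
      System.out.println("refused to start");
    }
    checkHsync(true, true);   // supported: fine
    checkHsync(false, false); // operator opted out via config: fine, but riskier
  }
}
```

This matches the shape of the message: either point `hbase.wal.dir` at a filesystem that supports hsync, or consciously relax `hbase.procedure.store.wal.use.hsync`.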





[jira] [Comment Edited] (HBASE-20145) HMaster start fails with IllegalStateException when HADOOP_HOME is set

2018-04-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436326#comment-16436326
 ] 

Wei-Chiu Chuang edited comment on HBASE-20145 at 4/12/18 10:55 PM:
---

It doesn't seem to reproduce for me – tried hadoop trunk + hbase master, hadoop 
3.0.0-beta1 + hbase 2.0.0-beta1, and hadoop trunk + hbase 2.0.0-beta1. None 
reproduced.

[~rohithsharma] could you share your hadoop and hbase configuration? Also, run:
 hdfs ec -getPolicy -path /
 hdfs ec -getPolicy -path /hbase
 and see what the output looks like for both directories.

I would actually think it makes sense to add an additional check to see if the 
file system directory is erasure coded, and log an extra message to avoid 
confusion.


was (Author: jojochuang):
It doesn't seem to reproduce for me – tried hadoop trunk + hbase master, hadoop 
3.0.0-beta1 + hbase 2.0.0-beta1. Neither reproduced.

[~rohithsharma] could you share your hadoop and hbase configuration? Also, run:
hdfs ec -getPolicy -path /
hdfs ec -getPolicy -path /hbase
and see what the output looks like for both directories.

I would actually think it makes sense to add an additional check to see if the 
file system directory is erasure coded, and log an extra message to avoid 
confusion.

> HMaster start fails with IllegalStateException when HADOOP_HOME is set
> --
>
> Key: HBASE-20145
> URL: https://issues.apache.org/jira/browse/HBASE-20145
> Project: HBase
>  Issue Type: Bug
> Environment: HBase-2.0-beta1.
> Hadoop trunk version.
> java version "1.8.0_144"
>Reporter: Rohith Sharma K S
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>
> It is observed that HMaster start fails when HADOOP_HOME is set as an env 
> variable while starting HMaster. HADOOP_HOME points to the Hadoop trunk 
> version.
> {noformat}
> 2018-03-07 16:59:52,654 ERROR [master//10.200.4.200:16000] master.HMaster: 
> Failed to become active master
> java.lang.IllegalStateException: The procedure WAL relies on the ability to 
> hsync for proper operation during component failures, but the underlying 
> filesystem does not support doing so. Please check the config value of 
> 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness 
> and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount 
> that can provide it.
>     at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1036)
>     at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:374)
>     at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:532)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1232)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1145)
>     at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:837)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2026)
>     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:547)
>     at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The same configs work properly in the HBase-1.2.6 build.





[jira] [Commented] (HBASE-20324) Hbase master fails to become active in kerberos environment

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436436#comment-16436436
 ] 

stack commented on HBASE-20324:
---

Knocking this down from blocker because I see it running on internal cluster 
here.

> Hbase master fails to become active in kerberos environment
> ---
>
> Key: HBASE-20324
> URL: https://issues.apache.org/jira/browse/HBASE-20324
> Project: HBase
>  Issue Type: Bug
> Environment: Hbase 2.0.0-beta2
> zookeeper-3.5.3-beta
> 3 nodes Env
> Kdc server on namenode
> *hadoop-2.7.3*
> *--Configured with keytabs(abhishekk1/2/3 are nodes)* 
>    *abhishekk1 is namenode/hmaster*
>    *abhishekk2/3 are datanodes/regionservers*
>Reporter: Abhishek Kulkarni
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: hbase-root-master-.log, hbase-root-regionserver.log
>
>
>  
> [^hbase-root-master-.log]
> ^[^hbase-root-regionserver.log]^
>  
> ^^Trying to resolve this for the last one month with different forums but not 
> able to resolve it at all.^^





[jira] [Commented] (HBASE-20324) Hbase master fails to become active in kerberos environment

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436435#comment-16436435
 ] 

stack commented on HBASE-20324:
---

It works for me in an environment here.

I see this in your logs:

2018-03-31 05:00:22,190 INFO  [master/abhishekk1:16000] master.ServerManager: 
Waiting on regionserver count=2; waited=4466ms, expecting min=1 server(s), 
max=NO_LIMIT server(s), timeout=4500ms, lastChange=-1502ms
2018-03-31 05:00:22,241 INFO  [master/abhishekk1:16000] master.ServerManager: 
Finished waiting on RegionServer count=2; waited=4516ms, expected min=1 
server(s), max=NO_LIMIT server(s), master is running
2018-03-31 05:00:22,241 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk2.pne.ven.veritas.com,16020,1522040698709 rejected; 
we already have abhishekk2.pne.ven.veritas.com,16020,1522486814482 registered 
with same hostname and port
2018-03-31 05:00:22,241 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk2.pne.ven.veritas.com,16020,1522041869374 rejected; 
we already have abhishekk2.pne.ven.veritas.com,16020,1522486814482 registered 
with same hostname and port
2018-03-31 05:00:22,241 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk2.pne.ven.veritas.com,16020,1522051522575 rejected; 
we already have abhishekk2.pne.ven.veritas.com,16020,1522486814482 registered 
with same hostname and port
2018-03-31 05:00:22,241 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk2.pne.ven.veritas.com,16020,1522063726382 rejected; 
we already have abhishekk2.pne.ven.veritas.com,16020,1522486814482 registered 
with same hostname and port
2018-03-31 05:00:22,241 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk2.pne.ven.veritas.com,16020,1522417574534 rejected; 
we already have abhishekk2.pne.ven.veritas.com,16020,1522486814482 registered 
with same hostname and port
2018-03-31 05:00:22,241 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk2.pne.ven.veritas.com,16020,1522480363659 rejected; 
we already have abhishekk2.pne.ven.veritas.com,16020,1522486814482 registered 
with same hostname and port
2018-03-31 05:00:22,242 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk3.pne.ven.veritas.com,16020,1522040703686 rejected; 
we already have abhishekk3.pne.ven.veritas.com,16020,1522486816915 registered 
with same hostname and port
2018-03-31 05:00:22,242 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk3.pne.ven.veritas.com,16020,1522041871247 rejected; 
we already have abhishekk3.pne.ven.veritas.com,16020,1522486816915 registered 
with same hostname and port
2018-03-31 05:00:22,242 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk3.pne.ven.veritas.com,16020,1522051524387 rejected; 
we already have abhishekk3.pne.ven.veritas.com,16020,1522486816915 registered 
with same hostname and port
2018-03-31 05:00:22,242 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk3.pne.ven.veritas.com,16020,1522063727826 rejected; 
we already have abhishekk3.pne.ven.veritas.com,16020,1522486816915 registered 
with same hostname and port
2018-03-31 05:00:22,242 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk3.pne.ven.veritas.com,16020,1522417575830 rejected; 
we already have abhishekk3.pne.ven.veritas.com,16020,1522486816915 registered 
with same hostname and port
2018-03-31 05:00:22,242 INFO  [master/abhishekk1:16000] master.ServerManager: 
Server serverName=abhishekk3.pne.ven.veritas.com,16020,1522480363633 rejected; 
we already have abhishekk3.pne.ven.veritas.com,16020,1522486816915 registered 
with same hostname and port


Is there an issue w/ naming in your cluster? abhishekk3.pne.ven.veritas.com is 
where your Master is running, so it is interesting that these complaints are 
coming in.

Root complaint seems to be this:

SaslException): GSS initiate failed

... which, from reading around, can have myriad causes, naming being one of them.

I'm no expert in setting up these environments. See if you can provide more 
info on your context. Thanks.



> Hbase master fails to become active in kerberos environment
> ---
>
> Key: HBASE-20324
> URL: https://issues.apache.org/jira/browse/HBASE-20324
> Project: HBase
>  Issue Type: Bug
> Environment: Hbase 2.0.0-beta2
> zookeeper-3.5.3-beta
> 3 nodes Env
> Kdc server on namenode
> *hadoop-2.7.3*
> *--Configured with keytabs(abhishekk1/2/3 are nodes)* 
>    *abhishekk1 is namenode/hmaster*
>    *abhishekk2/3 are datanodes/regionservers*
>Reporter: Abhishek Kulkarni
>Priority: Blocker
> Fix For: 2.0.0
>
> 

[jira] [Updated] (HBASE-20324) Hbase master fails to become active in kerberos environment

2018-04-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20324:
--
Priority: Critical  (was: Blocker)

> Hbase master fails to become active in kerberos environment
> ---
>
> Key: HBASE-20324
> URL: https://issues.apache.org/jira/browse/HBASE-20324
> Project: HBase
>  Issue Type: Bug
> Environment: Hbase 2.0.0-beta2
> zookeeper-3.5.3-beta
> 3 nodes Env
> Kdc server on namenode
> *hadoop-2.7.3*
> *--Configured with keytabs(abhishekk1/2/3 are nodes)* 
>    *abhishekk1 is namenode/hmaster*
>    *abhishekk2/3 are datanodes/regionservers*
>Reporter: Abhishek Kulkarni
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: hbase-root-master-.log, hbase-root-regionserver.log
>
>
>  
> [^hbase-root-master-.log]
> ^[^hbase-root-regionserver.log]^
>  
> ^^Trying to resolve this for the last one month with different forums but not 
> able to resolve it at all.^^





[jira] [Commented] (HBASE-20278) [DOC] include ref guide updates for HBase 2.0 HBCK

2018-04-12 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436421#comment-16436421
 ] 

Umesh Agashe commented on HBASE-20278:
--

Yeah, it's related. Will take this one. Thanks [~stack]!

> [DOC] include ref guide updates for HBase 2.0 HBCK
> --
>
> Key: HBASE-20278
> URL: https://issues.apache.org/jira/browse/HBASE-20278
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, hbck
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Priority: Critical
>
> Called out by [~stack] in HBASE-19158
> {quote}
> bq. HBCK tool from an earlier release against an HBase 2.0+ cluster will 
> destructively alter said cluster in unrecoverable ways.
> Footnote or callout that says something like "Unfortunately we are unable to 
> distinguish an HBCK client so cannot put in place guards against destructive 
> HBCK changes."  probably too much for this startup section on reread 
> of what I've written here.
> bq. As of HBase 2.0, HBCK is a read-only tool that can report the status of 
> some non-public system internals. You should not rely on the format nor 
> content of these internals to remain consistent across HBase releases.
> Ugh. We need HBCK2 and a pointer here to it. Will do in a follow-on.
> {quote}
> then update the upgrade section to point to the docs





[jira] [Assigned] (HBASE-20278) [DOC] include ref guide updates for HBase 2.0 HBCK

2018-04-12 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe reassigned HBASE-20278:


Assignee: Umesh Agashe

> [DOC] include ref guide updates for HBase 2.0 HBCK
> --
>
> Key: HBASE-20278
> URL: https://issues.apache.org/jira/browse/HBASE-20278
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, hbck
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Umesh Agashe
>Priority: Critical
>
> Called out by [~stack] in HBASE-19158
> {quote}
> bq. HBCK tool from an earlier release against an HBase 2.0+ cluster will 
> destructively alter said cluster in unrecoverable ways.
> Footnote or callout that says something like "Unfortunately we are unable to 
> distinguish an HBCK client so cannot put in place guards against destructive 
> HBCK changes."  probably too much for this startup section on reread 
> of what I've written here.
> bq. As of HBase 2.0, HBCK is a read-only tool that can report the status of 
> some non-public system internals. You should not rely on the format nor 
> content of these internals to remain consistent across HBase releases.
> Ugh. We need HBCK2 and a pointer here to it. Will do in a follow-on.
> {quote}
> then update the upgrade section to point to the docs





[jira] [Commented] (HBASE-18792) hbase-2 needs to defend against hbck operations

2018-04-12 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436410#comment-16436410
 ] 

Umesh Agashe commented on HBASE-18792:
--

bq. Are you saying it is not possible to have a bug where assignments get into 
an inconsistent state and need fixing any more?

Not at all. But fixing assignments with HBCK via HDFS directory and file layout 
manipulation will not be available. The preferred way as of 2.0+ is to let the 
Master handle it by submitting operations/procedures.

> hbase-2 needs to defend against hbck operations
> ---
>
> Key: HBASE-18792
> URL: https://issues.apache.org/jira/browse/HBASE-18792
> Project: HBase
>  Issue Type: Task
>  Components: hbck
>Reporter: stack
>Assignee: Umesh Agashe
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: hbase-18792.master.001.patch
>
>
> hbck needs updating to run against hbase2. Meantime, if an hbck from hbase1 
> is run against hbase2, it may do damage. hbase2 should defend itself against 
> hbck1 ops.





[jira] [Comment Edited] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-04-12 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436397#comment-16436397
 ] 

Umesh Agashe edited comment on HBASE-20403 at 4/12/18 10:26 PM:


It's possible. AFAIK, there is an optimization in the HBase code where we try to 
avoid an extra seek by reading the next block's header while reading the current 
block. As the header size may differ when encrypted vs. not encrypted, it's 
possible that HBase is trying to read more than the buffer size. Needs more 
digging.


was (Author: uagashe):
It's possible. AFAIK, there is an optimization in the HBase code where we try to 
avoid an extra seek by reading the next block's header while reading the current 
block. As the header size may differ when encrypted vs. not encrypted, it's 
possible that HBase is trying to read more than the buffer size.

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check if the file is encrypted with FileStatus#isEncrypted() and, if so, do 
> not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if file is 
> encrypted.





[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-04-12 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436397#comment-16436397
 ] 

Umesh Agashe commented on HBASE-20403:
--

It's possible. AFAIK, there is an optimization in the HBase code where we try to 
avoid an extra seek by reading the next block's header while reading the current 
block. As the header size may differ when encrypted vs. not encrypted, it's 
possible that HBase is trying to read more than the buffer size.
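The java.lang.IllegalArgumentException at java.nio.Buffer.limit in the quoted stack trace is exactly what java.nio throws when a read tries to set a buffer's limit past its capacity, which is consistent with the over-read theory above. A minimal, self-contained reproduction with illustrative sizes (the numbers are made up and stand in for "block size" and "extra header bytes"):

```java
import java.nio.ByteBuffer;

/**
 * Minimal reproduction of the IllegalArgumentException in the stack trace:
 * java.nio.Buffer.limit(int) rejects any limit larger than the buffer's
 * capacity, which is what happens if a read tries to pull more bytes
 * (e.g. block plus the next block's header) than the buffer was sized for.
 */
public class BufferLimitDemo {
  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.allocate(64);  // sized for the block only
    try {
      buf.limit(64 + 33);  // block size plus a header's worth of extra bytes
    } catch (IllegalArgumentException e) {
      System.out.println("IllegalArgumentException: limit exceeds capacity");
    }
  }
}
```

So if the encrypted-stream path reports a different on-disk size than the reader used to size its buffer, the read strategy's limit() call fails in precisely this way.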

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size on disk calculations seem to get messed up due to encryption. Possible 
> fixes can be:
> * check if the file is encrypted with FileStatus#isEncrypted() and, if so, do 
> not prefetch.
> * document that hbase.rs.prefetchblocksonopen cannot be true if file is 
> encrypted.





[jira] [Commented] (HBASE-20278) [DOC] include ref guide updates for HBase 2.0 HBCK

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436399#comment-16436399
 ] 

stack commented on HBASE-20278:
---

[~uagashe] You up for this one? You have good stuff going on over in 
HBASE-18792... especially your list of what works and what does not.

> [DOC] include ref guide updates for HBase 2.0 HBCK
> --
>
> Key: HBASE-20278
> URL: https://issues.apache.org/jira/browse/HBASE-20278
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, hbck
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Priority: Critical
>
> Called out by [~stack] in HBASE-19158
> {quote}
> bq. HBCK tool from an earlier release against an HBase 2.0+ cluster will 
> destructively alter said cluster in unrecoverable ways.
> Footnote or callout that says something like "Unfortunately we are unable to 
> distinguish an HBCK client so cannot put in place guards against destructive 
> HBCK changes."  probably too much for this startup section on reread 
> of what I've written here.
> bq. As of HBase 2.0, HBCK is a read-only tool that can report the status of 
> some non-public system internals. You should not rely on the format nor 
> content of these internals to remain consistent across HBase releases.
> Ugh. We need HBCK2 and a pointer here to it. Will do in a follow-on.
> {quote}
> then update the upgrade section to point to the docs



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20327) When qualifier is not specified, append and incr operation do not work (shell)

2018-04-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436398#comment-16436398
 ] 

Hadoop QA commented on HBASE-20327:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m  
9s{color} | {color:red} The patch generated 1 new + 403 unchanged - 15 fixed = 
404 total (was 418) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red}  0m 
10s{color} | {color:red} The patch generated 6 new + 466 unchanged - 1 fixed = 
472 total (was 467) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
42s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20327 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918812/HBASE-20327.master.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux f8d9500214fd 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 17a29ac231 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_162 |
| rubocop | v0.54.0 |
| rubocop | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12421/artifact/patchprocess/diff-patch-rubocop.txt
 |
| ruby-lint | v2.3.1 |
| ruby-lint | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12421/artifact/patchprocess/diff-patch-ruby-lint.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12421/testReport/ |
| Max. process+thread count | 2518 (vs. ulimit of 1) |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12421/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> When qualifier is not specified, append and incr operation do not work (shell)
> --
>
> Key: HBASE-20327
> URL: https://issues.apache.org/jira/browse/HBASE-20327
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.0.0, 2.0.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-20327.master.001.patch, 
> HBASE-20327.master.002.patch, HBASE-20327.master.003.patch
>
>
> Running the example commands specified in the shell docs for "append" and 
> "incr" throws the following errors:
> {code:java}
> ERROR: Failed to provide both column family and column qualifier for 
> append{code}
> {code:java}
> ERROR: Failed to provide both column family and column qualifier for 
> incr{code}
> Running the same command via the Java API, however, does not require the user 
> to provide both column family and qualifier, and works smoothly.
>  
> 
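The behavior the report asks for — a COLUMN argument that may be just a family, or family:qualifier — can be sketched in a hedged, hypothetical form (this is not the actual shell code; it only mirrors the Java API, where the qualifier may be nil):

```python
# Hypothetical sketch: accept "family" alone or "family:qualifier",
# treating a missing or empty qualifier as None rather than an error.

def split_column(column: str):
    family, _, qualifier = column.partition(":")
    if not family:
        raise ValueError("Column family must be specified")
    return family, qualifier or None

assert split_column("cf") == ("cf", None)    # bare family is accepted
assert split_column("cf:q") == ("cf", "q")   # family plus qualifier
assert split_column("cf:") == ("cf", None)   # empty qualifier treated as None
```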

[jira] [Commented] (HBASE-20382) If RSGroups not enabled, rsgroup.jsp prints stack trace

2018-04-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436394#comment-16436394
 ] 

Andrew Purtell commented on HBASE-20382:


No, we don't have rsgroup.jsp in branch-1 or branch-1.4 (and probably won't)

> If RSGroups not enabled, rsgroup.jsp prints stack trace
> ---
>
> Key: HBASE-20382
> URL: https://issues.apache.org/jira/browse/HBASE-20382
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup, UI
>Reporter: Mike Drob
>Assignee: Balazs Meszaros
>Priority: Major
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-20382.branch-2.0.001.patch
>
>
> Going to {{rsgroup.jsp?name=foo}} I get the following stack trace:
> {noformat}
> org.apache.hadoop.hbase.TableNotFoundException: hbase:rsgroup
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:842)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:733)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:719)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:690)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:571)
>   at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.getRegionLocation(ConnectionUtils.java:131)
>   at 
> org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:73)
>   at 
> org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:223)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:359)
>   at 
> org.apache.hadoop.hbase.RSGroupTableAccessor.getRSGroupInfo(RSGroupTableAccessor.java:75)
>   at 
> org.apache.hadoop.hbase.generated.master.rsgroup_jsp._jspService(rsgroup_jsp.java:78)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> 

[jira] [Commented] (HBASE-20274) [DOC] additional metrics related changes for 2.0 upgrade section

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436393#comment-16436393
 ] 

stack commented on HBASE-20274:
---

HBASE-20298 "Doc change in read/write/total accounting metrics" should cover 
the above {{I'm pretty sure our ops/second on master page has changed in 
nature. I need to figure it and add to this metrics section.}}

Leaving this open in case other metrics show up in meantime. Not a blocker.

> [DOC] additional metrics related changes for 2.0 upgrade section
> 
>
> Key: HBASE-20274
> URL: https://issues.apache.org/jira/browse/HBASE-20274
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Priority: Major
>
> Feedback on HBASE-19158 from [~md...@cloudera.com]
> {quote}
> Metrics:
> HBASE-17957
> {quote}
> If folks find others before this gets done, feel free to drop here in 
> comments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18792) hbase-2 needs to defend against hbck operations

2018-04-12 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436391#comment-16436391
 ] 

Dave Latham commented on HBASE-18792:
-

Are you saying it is not possible to have a bug where assignments get into an 
inconsistent state and need fixing any more?

> hbase-2 needs to defend against hbck operations
> ---
>
> Key: HBASE-18792
> URL: https://issues.apache.org/jira/browse/HBASE-18792
> Project: HBase
>  Issue Type: Task
>  Components: hbck
>Reporter: stack
>Assignee: Umesh Agashe
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: hbase-18792.master.001.patch
>
>
> hbck needs updating to run against hbase2. Meantime, if an hbck from hbase1 
> is run against hbck2, it may do damage. hbase2 should defend itself against 
> hbck1 ops.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17553) Make a 2.0.0 Release

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436370#comment-16436370
 ] 

stack commented on HBASE-17553:
---

Publish the compat report as part of RC'ing. See the tail of HBASE-18622 for 
instructions (add it to the make_rc.sh script).

> Make a 2.0.0 Release
> 
>
> Key: HBASE-17553
> URL: https://issues.apache.org/jira/browse/HBASE-17553
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Umbrella issue to keep account of tasks to make a 2.0.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20379) shadedjars yetus plugin should add a footer link

2018-04-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436367#comment-16436367
 ] 

Hadoop QA commented on HBASE-20379:
---

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/12420/console in case of 
problems.


> shadedjars yetus plugin should add a footer link
> 
>
> Key: HBASE-20379
> URL: https://issues.apache.org/jira/browse/HBASE-20379
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20379.0.patch
>
>
> investigating the failure on HBASE-20219, it would be nice if we posted a 
> footer link to what failed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20379) shadedjars yetus plugin should add a footer link

2018-04-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436369#comment-16436369
 ] 

Hadoop QA commented on HBASE-20379:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
3s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20379 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918827/HBASE-20379.0.patch |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux ddd9e93ef932 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 17a29ac231 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| shellcheck | v0.4.4 |
| Max. process+thread count | 48 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/12420/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> shadedjars yetus plugin should add a footer link
> 
>
> Key: HBASE-20379
> URL: https://issues.apache.org/jira/browse/HBASE-20379
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20379.0.patch
>
>
> investigating the failure on HBASE-20219, it would be nice if we posted a 
> footer link to what failed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18622) Mitigate API compatibility concerns between branch-1 and branch-2

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436365#comment-16436365
 ] 

stack commented on HBASE-18622:
---

Reran the report

Changed the reporting tool to use 2.4 instead of 2.1 (released in January).

diff --git a/dev-support/checkcompatibility.py 
b/dev-support/checkcompatibility.py
index ea9c229344..9f0d797ff3 100755
--- a/dev-support/checkcompatibility.py
+++ b/dev-support/checkcompatibility.py
@@ -156,7 +156,7 @@ def checkout_java_acc(force):

 logging.info("Downloading Java ACC...")

-url = "https://github.com/lvc/japi-compliance-checker/archive/2.1.tar.gz"
+url = "https://github.com/lvc/japi-compliance-checker/archive/2.4.tar.gz"
 scratch_dir = get_scratch_dir()
 path = os.path.join(scratch_dir, os.path.basename(url))
 jacc = urllib2.urlopen(url)
@@ -166,7 +166,7 @@ def checkout_java_acc(force):
 subprocess.check_call(["tar", "xzf", path],
   cwd=scratch_dir)

-shutil.move(os.path.join(scratch_dir, "japi-compliance-checker-2.1"),
+shutil.move(os.path.join(scratch_dir, "japi-compliance-checker-2.4"),
 os.path.join(acc_dir))


$ ./dev-support/checkcompatibility.py --annotation 
org.apache.hadoop.hbase.classification.InterfaceAudience.Public rel/1.2.6 
2.0.0RC0


> Mitigate API compatibility concerns between branch-1 and branch-2
> -
>
> Key: HBASE-18622
> URL: https://issues.apache.org/jira/browse/HBASE-18622
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Reporter: stack
>Assignee: stack
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: report.1.2_2.0.html.gz
>
>
> This project is to do what [~apurtell] did in the issue "HBASE-18431 Mitigate 
> compatibility concerns between branch-1.3 and branch-1.4" only do it between 
> branch-1 and branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18792) hbase-2 needs to defend against hbck operations

2018-04-12 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436362#comment-16436362
 ] 

Umesh Agashe commented on HBASE-18792:
--

Please review the patch. As [~busbey] has suggested, the next step is to 
back-port the necessary changes to branch-1.

> hbase-2 needs to defend against hbck operations
> ---
>
> Key: HBASE-18792
> URL: https://issues.apache.org/jira/browse/HBASE-18792
> Project: HBase
>  Issue Type: Task
>  Components: hbck
>Reporter: stack
>Assignee: Umesh Agashe
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: hbase-18792.master.001.patch
>
>
> hbck needs updating to run against hbase2. Meantime, if an hbck from hbase1 
> is run against hbck2, it may do damage. hbase2 should defend itself against 
> hbck1 ops.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18792) hbase-2 needs to defend against hbck operations

2018-04-12 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-18792:
-
Status: Patch Available  (was: In Progress)

> hbase-2 needs to defend against hbck operations
> ---
>
> Key: HBASE-18792
> URL: https://issues.apache.org/jira/browse/HBASE-18792
> Project: HBase
>  Issue Type: Task
>  Components: hbck
>Reporter: stack
>Assignee: Umesh Agashe
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: hbase-18792.master.001.patch
>
>
> hbck needs updating to run against hbase2. Meantime, if an hbck from hbase1 
> is run against hbck2, it may do damage. hbase2 should defend itself against 
> hbck1 ops.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18792) hbase-2 needs to defend against hbck operations

2018-04-12 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436360#comment-16436360
 ] 

Umesh Agashe commented on HBASE-18792:
--

bq. As someone who has relied on some of these fix options in the past 
(especially fixAssigments and fixMeta), what should an operator do instead if a 
table gets into a bad state?

HBase 2.0 includes AMv2, which handles assignments through procedures that are 
persisted in meta. With every restart of the Master, procedures are retried 
from the last persisted state/step. Whatever is in meta takes precedence over 
the view that region servers or HDFS have. The same applies to .regioninfo and 
.tableinfo files: HBase 2.0 doesn't use them. So options like -fixAssignments, 
-fixHdfsOrphans, and -fixTableOrphans are no longer required, and submitting/ 
re-submitting procedures (operations) can sometimes replace -fixMeta.
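A toy model of the idea described above (purely illustrative, not HBase code): each procedure persists its last completed step, so after a Master restart it resumes from the persisted state rather than needing an external repair tool.

```python
# Illustrative sketch of retry-from-persisted-state procedure execution.
# ProcedureStore stands in for the state persisted in hbase:meta.

class ProcedureStore:
    def __init__(self):
        self.last_step = {}  # procedure id -> last completed step index

    def persist(self, proc_id, step):
        self.last_step[proc_id] = step

    def recover(self, proc_id):
        return self.last_step.get(proc_id, -1)

def run_procedure(store, proc_id, steps, fail_after=None):
    """Execute steps in order, persisting progress; optionally crash mid-way."""
    done = []
    start = store.recover(proc_id) + 1  # resume after last persisted step
    for i in range(start, len(steps)):
        if fail_after is not None and i > fail_after:
            return done  # simulated Master crash
        done.append(steps[i])
        store.persist(proc_id, i)
    return done

store = ProcedureStore()
steps = ["unassign", "update_meta", "assign"]
run_procedure(store, "p1", steps, fail_after=0)  # "crashes" after step 0
resumed = run_procedure(store, "p1", steps)      # retried on restart
assert resumed == ["update_meta", "assign"]      # resumes, does not redo step 0
```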

> hbase-2 needs to defend against hbck operations
> ---
>
> Key: HBASE-18792
> URL: https://issues.apache.org/jira/browse/HBASE-18792
> Project: HBase
>  Issue Type: Task
>  Components: hbck
>Reporter: stack
>Assignee: Umesh Agashe
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: hbase-18792.master.001.patch
>
>
> hbck needs updating to run against hbase2. Meantime, if an hbck from hbase1 
> is run against hbck2, it may do damage. hbase2 should defend itself against 
> hbck1 ops.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18792) hbase-2 needs to defend against hbck operations

2018-04-12 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-18792:
-
Attachment: hbase-18792.master.001.patch

> hbase-2 needs to defend against hbck operations
> ---
>
> Key: HBASE-18792
> URL: https://issues.apache.org/jira/browse/HBASE-18792
> Project: HBase
>  Issue Type: Task
>  Components: hbck
>Reporter: stack
>Assignee: Umesh Agashe
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: hbase-18792.master.001.patch
>
>
> hbck needs updating to run against hbase2. Meantime, if an hbck from hbase1 
> is run against hbck2, it may do damage. hbase2 should defend itself against 
> hbck1 ops.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18792) hbase-2 needs to defend against hbck operations

2018-04-12 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436351#comment-16436351
 ] 

Dave Latham commented on HBASE-18792:
-

As someone who has relied on some of these fix options in the past (especially 
fixAssigments and fixMeta), what should an operator do instead if a table gets 
into a bad state?

> hbase-2 needs to defend against hbck operations
> ---
>
> Key: HBASE-18792
> URL: https://issues.apache.org/jira/browse/HBASE-18792
> Project: HBase
>  Issue Type: Task
>  Components: hbck
>Reporter: stack
>Assignee: Umesh Agashe
>Priority: Blocker
> Fix For: 2.0.0
>
>
> hbck needs updating to run against hbase2. Meantime, if an hbck from hbase1 
> is run against hbck2, it may do damage. hbase2 should defend itself against 
> hbck1 ops.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20338) WALProcedureStore#recoverLease() should have fixed sleeps for retrying rollWriter()

2018-04-12 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20338:
--
   Resolution: Fixed
Fix Version/s: 2.0.1
   2.1.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Awesome, thanks for the patch [~jojochuang] and for the reviews [~uagashe], 
[~chia7712]. 

FYI [~stack], I pushed this to branch-2.0 as well, please be aware in case we 
roll another RC.
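The shape of the fix named in the summary — bounding the retry loop and sleeping a fixed interval between rollWriter() attempts — can be sketched as follows (an illustrative model, not the actual patch; names are hypothetical):

```python
# Sketch: retry roll_writer a bounded number of times with a fixed sleep,
# instead of an immediate `continue` that floods the logs in a tight loop.
import time

def recover_lease(roll_writer, max_retries=3, sleep_s=0.0):
    attempts = 0
    while attempts < max_retries:
        if roll_writer():
            return True
        attempts += 1
        time.sleep(sleep_s)  # fixed sleep between attempts
    return False

calls = []
def failing_roll():
    calls.append(1)
    return False

assert recover_lease(failing_roll) is False
assert len(calls) == 3  # bounded retries rather than an unbounded tight loop
assert recover_lease(lambda: True) is True
```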

> WALProcedureStore#recoverLease() should have fixed sleeps for retrying 
> rollWriter()
> ---
>
> Key: HBASE-20338
> URL: https://issues.apache.org/jira/browse/HBASE-20338
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20338.master.001.patch, 
> HBASE-20338.master.002.patch, HBASE-20338.master.003.patch, 
> HBASE-20338.master.004.patch, HBASE-20338.master.005.patch
>
>
> In our internal testing we observed that logs are getting flooded due to 
> continuous loop in WALProcedureStore#recoverLease():
> {code}
>   while (isRunning()) {
> // Get Log-MaxID and recover lease on old logs
> try {
>   flushLogId = initOldLogs(oldLogs);
> } catch (FileNotFoundException e) {
>   LOG.warn("Someone else is active and deleted logs. retrying.", e);
>   oldLogs = getLogFiles();
>   continue;
> }
> // Create new state-log
> if (!rollWriter(flushLogId + 1)) {
>   // someone else has already created this log
>   LOG.debug("Someone else has already created log " + flushLogId);
>   continue;
> }
> {code}
> rollWriter() fails to create a new file. Error messages in HDFS namenode logs 
> around same time:
> {code}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
> 172.31.121.196:38508 Call#3141 Retry#0
> java.io.IOException: Exeption while contacting value generator
> at 
> org.apache.hadoop.crypto.key.kms.ValueQueue.getAtMost(ValueQueue.java:389)
> at 
> org.apache.hadoop.crypto.key.kms.ValueQueue.getNext(ValueQueue.java:291)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.generateEncryptedKey(KMSClientProvider.java:724)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:511)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$2.run(FSNamesystem.java:2680)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$2.run(FSNamesystem.java:2676)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:477)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:458)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.generateEncryptedDataEncryptionKey(FSNamesystem.java:2675)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2815)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2712)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:604)
> at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2220)
> Caused by: java.net.ConnectException: Connection refused (Connection 

[jira] [Updated] (HBASE-20379) shadedjars yetus plugin should add a footer link

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20379:

Status: Patch Available  (was: Open)

-v0
- single line addition to post link to log

> shadedjars yetus plugin should add a footer link
> 
>
> Key: HBASE-20379
> URL: https://issues.apache.org/jira/browse/HBASE-20379
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20379.0.patch
>
>
> investigating the failure on HBASE-20219, it would be nice if we posted a 
> footer link to what failed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20379) shadedjars yetus plugin should add a footer link

2018-04-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20379:

Attachment: HBASE-20379.0.patch

> shadedjars yetus plugin should add a footer link
> 
>
> Key: HBASE-20379
> URL: https://issues.apache.org/jira/browse/HBASE-20379
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20379.0.patch
>
>
> investigating the failure on HBASE-20219, it would be nice if we posted a 
> footer link to what failed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20403) Prefetch sometimes doesn't work with encrypted file system

2018-04-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436334#comment-16436334
 ] 

Wei-Chiu Chuang commented on HBASE-20403:
-

This looks like an HDFS bug more than an HBase one. CryptoInputStream is 
supposed to provide an abstraction such that the caller shouldn't have to care 
what the actual offset is.
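As an illustrative model of the failure mode (not Hadoop code): java.nio.Buffer.limit(int) rejects any limit larger than the buffer's capacity, so a read length derived from a file size that disagrees with the buffer's view — for example, an encrypted on-disk size versus the plaintext size — trips exactly this IllegalArgumentException.

```python
# Minimal model of java.nio.Buffer.limit semantics: the new limit must be
# non-negative and must not exceed capacity, else IllegalArgumentException.

class Buffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self._limit = capacity

    def limit(self, new_limit):
        if new_limit > self.capacity or new_limit < 0:
            raise ValueError("IllegalArgumentException")  # mirrors java.nio
        self._limit = new_limit
        return self

buf = Buffer(capacity=4096)
buf.limit(4096)  # fine: limit == capacity is allowed
try:
    buf.limit(4097)  # a length computed from a mismatched file size
    raised = False
except ValueError:
    raised = True
assert raised
```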

> Prefetch sometimes doesn't work with encrypted file system
> --
>
> Key: HBASE-20403
> URL: https://issues.apache.org/jira/browse/HBASE-20403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-beta-2
>Reporter: Umesh Agashe
>Priority: Major
> Fix For: 2.0.0
>
>
> Log from long running test has following stack trace a few times:
> {code}
> 2018-04-09 18:33:21,523 WARN 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl: Prefetch 
> path=hdfs://ns1/hbase/data/default/IntegrationTestBigLinkedList_20180409172704/35f1a7ef13b9d327665228abdbcdffae/meta/9089d98b2a6b4847b3fcf6aceb124988,
>  offset=36884200, end=231005989
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:183)
>   at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:705)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:766)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:831)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.read(CryptoInputStream.java:197)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:762)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1559)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1771)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1594)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1488)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$1.run(HFileReaderImpl.java:278)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Size-on-disk calculations seem to get messed up due to encryption. Possible 
> fixes:
> * check whether the file is encrypted with FileStatus#isEncrypted(), and do 
> not prefetch if so.
> * document that hbase.rs.prefetchblocksonopen cannot be true if the file is 
> encrypted.
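The first option above can be sketched as a simple guard before scheduling the prefetch. This is only an illustrative sketch, not HBase's actual code: `FileStatusLike` is a hypothetical stand-in for Hadoop's `FileStatus` (whose real `isEncrypted()` method the description refers to), and `shouldPrefetch` is an invented helper.

```java
// Hedged sketch: skip block prefetch for encrypted files.
// FileStatusLike stands in for org.apache.hadoop.fs.FileStatus, whose
// isEncrypted() method reports whether the file is in an encryption zone.
public class PrefetchGuard {
    interface FileStatusLike {
        boolean isEncrypted();
    }

    // Hypothetical helper: prefetch only when the feature is enabled
    // (hbase.rs.prefetchblocksonopen) and the file is not encrypted.
    static boolean shouldPrefetch(FileStatusLike status, boolean prefetchOnOpen) {
        return prefetchOnOpen && !status.isEncrypted();
    }

    public static void main(String[] args) {
        System.out.println(shouldPrefetch(() -> true, true));   // encrypted file: skip
        System.out.println(shouldPrefetch(() -> false, true));  // plain file: prefetch
    }
}
```

The second option (documenting the restriction) needs no code; the guard above simply makes the restriction automatic.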





[jira] [Commented] (HBASE-18792) hbase-2 needs to defend against hbck operations

2018-04-12 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436333#comment-16436333
 ] 

Umesh Agashe commented on HBASE-18792:
--

Here is the list of supported and unsupported hbck operations for HBase 2.0:
{code}
---
NOTE: As of HBase version 2.0, the hbck tool is significantly changed.
In general, all Read-Only options are supported and can be used
safely. Most -fix/ -repair options are NOT supported. Please see usage
below for details on which options are not supported.
---

Usage: fsck [opts] {only tables}
 where [opts] are:
   -help Display help options (this)
   -details Display full report of all regions.
   -timelag <timelag>  Process only regions that have not experienced 
any metadata updates in the last <timelag> seconds.
   -sleepBeforeRerun <timeInSeconds>  Sleep this many seconds before checking 
if the fix worked if run with -fix
   -summary Print only summary of the tables and status.
   -metaonly Only check the state of the hbase:meta table.
   -sidelineDir <hdfs://>  HDFS path to backup existing meta.
   -boundaries Verify that regions boundaries are the same between META and 
store files.
   -exclusive Abort if another hbck is exclusive or fixing.

  Datafile Repair options: (expert features, use with caution!)
   -checkCorruptHFiles Check all Hfiles by opening them to make sure they 
are valid
   -sidelineCorruptHFiles  Quarantine corrupted HFiles.  implies 
-checkCorruptHFiles

 Replication options
   -fixReplication   Deletes replication queues for removed peers

  Metadata Repair options supported as of version 2.0: (expert features, use 
with caution!)
   -fixVersionFile   Try to fix missing hbase.version file in hdfs.
   -fixReferenceFiles  Try to offline lingering reference store files
   -fixHFileLinks  Try to offline lingering HFileLinks
   -noHdfsChecking   Don't load/check region info from HDFS. Assumes hbase:meta 
region info is good. Won't check/fix any HDFS issue, e.g. hole, orphan, or 
overlap
   -ignorePreCheckPermission  ignore filesystem permission pre-check

NOTE: Following options are NOT supported as of HBase version 2.0+.

  UNSUPPORTED Metadata Repair options: (expert features, use with caution!)
   -fix  Try to fix region assignments.  This is for backwards 
compatibility
   -fixAssignments   Try to fix region assignments.  Replaces the old -fix
   -fixMeta  Try to fix meta problems.  This assumes HDFS region info 
is good.
   -fixHdfsHoles Try to fix region holes in hdfs.
   -fixHdfsOrphans   Try to fix region dirs with no .regioninfo file in hdfs
   -fixTableOrphans  Try to fix table dirs with no .tableinfo file in hdfs 
(online mode only)
   -fixHdfsOverlaps  Try to fix region overlaps in hdfs.
   -maxMerge <n>  When fixing region overlaps, allow at most <n> regions to 
merge. (n=5 by default)
   -sidelineBigOverlaps  When fixing region overlaps, allow to sideline big 
overlaps
   -maxOverlapsToSideline <n>  When fixing region overlaps, allow at most <n> 
regions to sideline per group. (n=2 by default)
   -fixSplitParents  Try to force offline split parents to be online.
   -removeParents  Try to offline and sideline lingering parents and keep 
daughter regions.
   -fixEmptyMetaCells  Try to fix hbase:meta entries not referencing any region 
(empty REGIONINFO_QUALIFIER rows)

  UNSUPPORTED Metadata Repair shortcuts
   -repair   Shortcut for -fixAssignments -fixMeta -fixHdfsHoles 
-fixHdfsOrphans -fixHdfsOverlaps -fixVersionFile -sidelineBigOverlaps 
-fixReferenceFiles -fixHFileLinks
   -repairHoles  Shortcut for -fixAssignments -fixMeta -fixHdfsHoles
{code}

Please review and let me know of any changes.

> hbase-2 needs to defend against hbck operations
> ---
>
> Key: HBASE-18792
> URL: https://issues.apache.org/jira/browse/HBASE-18792
> Project: HBase
>  Issue Type: Task
>  Components: hbck
>Reporter: stack
>Assignee: Umesh Agashe
>Priority: Blocker
> Fix For: 2.0.0
>
>
> hbck needs updating to run against hbase2. Meantime, if an hbck from hbase1 
> is run against hbck2, it may do damage. hbase2 should defend itself against 
> hbck1 ops.





[jira] [Commented] (HBASE-20391) close out stale or finished PRs on github

2018-04-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436332#comment-16436332
 ] 

Sean Busbey commented on HBASE-20391:
-

PR #64 has now been closed. Before pushing I'll do a pass of the linked PRs and 
remove any that are no longer open.

> close out stale or finished PRs on github
> -
>
> Key: HBASE-20391
> URL: https://issues.apache.org/jira/browse/HBASE-20391
> Project: HBase
>  Issue Type: Task
>  Components: community, documentation
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-20391.0.patch
>
>
> Time to do another round of closing PRs via empty commit.
> * [#51|https://github.com/apache/hbase/pull/51] - > 1 month since notification
> * [#52|https://github.com/apache/hbase/pull/52] - > 1 month since notification
> * [#61|https://github.com/apache/hbase/pull/61] - HBASE-18928 has already 
> closed
> * [#62|https://github.com/apache/hbase/pull/62] - HBASE-18929 has already 
> closed
> * [#64|https://github.com/apache/hbase/pull/64] - HBASE-18901 has already 
> closed
> * [#67|https://github.com/apache/hbase/pull/67] - HBASE-19386 has already 
> closed
> * [#68|https://github.com/apache/hbase/pull/68] - HBASE-19387 has already 
> closed
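Closing PRs "via empty commit" works because GitHub closes a PR when a pushed commit message contains a closing keyword referencing it. A minimal hedged sketch, run in a throwaway repo for safety (the identity values are placeholders; the PR numbers are the ones listed above):

```shell
# Hedged sketch of closing GitHub PRs via an empty commit.
# GitHub closes PR #NN when a commit whose message contains "closes #NN"
# is pushed to the repository's default branch.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "dev@example.org"   # placeholder identity
git config user.name  "Dev"
git commit --allow-empty -q \
  -m "HBASE-20391 close out stale or finished PRs" \
  -m "closes #51
closes #52"
git log -1 --format=%B
```

The actual close happens only when the commit is pushed to the canonical repository, which is why a pass over the linked PRs beforehand matters.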





[jira] [Updated] (HBASE-20383) [AMv2] AssignmentManager: Failed transition XYZ is not OPEN

2018-04-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20383:
--
Fix Version/s: 2.0.0

> [AMv2] AssignmentManager: Failed transition XYZ is not OPEN
> ---
>
> Key: HBASE-20383
> URL: https://issues.apache.org/jira/browse/HBASE-20383
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Reporter: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20383.master.001.patch
>
>
> Seeing a bunch of this testing 2.0.0:
> {code}
> 2018-04-10 13:57:09,430 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=46,queue=1,port=16000] 
> assignment.AssignmentManager: Failed transition
> org.apache.hadoop.hbase.client.DoNotRetryRegionException: 
> 19a2cd6f88abae0036415ee1ea041c2e is not OPEN
>   at 
> org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.checkOnline(AbstractStateMachineTableProcedure.java:193)
>   at 
> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.<init>(SplitTableRegionProcedure.java:112)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:769)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.updateRegionSplitTransition(AssignmentManager.java:911)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.reportRegionStateTransition(AssignmentManager.java:819)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1538)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:11093)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {code}
> Looks like a report back from the Master OK'ing a split to go ahead, but the 
> split is already running. Figure out how to shut these down. They are noisy 
> at least.





[jira] [Updated] (HBASE-20383) [AMv2] AssignmentManager: Failed transition XYZ is not OPEN

2018-04-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20383:
--
Attachment: HBASE-20383.master.001.patch

> [AMv2] AssignmentManager: Failed transition XYZ is not OPEN
> ---
>
> Key: HBASE-20383
> URL: https://issues.apache.org/jira/browse/HBASE-20383
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Reporter: stack
>Priority: Major
> Attachments: HBASE-20383.master.001.patch
>
>
> Seeing a bunch of this testing 2.0.0:
> {code}
> 2018-04-10 13:57:09,430 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=46,queue=1,port=16000] 
> assignment.AssignmentManager: Failed transition
> org.apache.hadoop.hbase.client.DoNotRetryRegionException: 
> 19a2cd6f88abae0036415ee1ea041c2e is not OPEN
>   at 
> org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.checkOnline(AbstractStateMachineTableProcedure.java:193)
>   at 
> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.<init>(SplitTableRegionProcedure.java:112)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:769)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.updateRegionSplitTransition(AssignmentManager.java:911)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.reportRegionStateTransition(AssignmentManager.java:819)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1538)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:11093)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {code}
> Looks like a report back from the Master OK'ing a split to go ahead, but the 
> split is already running. Figure out how to shut these down. They are noisy 
> at least.





[jira] [Commented] (HBASE-20145) HMaster start fails with IllegalStateException when HADOOP_HOME is set

2018-04-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436326#comment-16436326
 ] 

Wei-Chiu Chuang commented on HBASE-20145:
-

It doesn't seem to reproduce for me – tried hadoop trunk + hbase master, hadoop 
3.0.0-beta1 + hbase 2.0.0-beta1. Neither reproduced.

[~rohithsharma] could you share your hadoop and hbase configuration?

I would actually think it makes sense to add an additional check to see 
whether the file system directory is erasure coded, and log an extra message 
to avoid confusion.

> HMaster start fails with IllegalStateException when HADOOP_HOME is set
> --
>
> Key: HBASE-20145
> URL: https://issues.apache.org/jira/browse/HBASE-20145
> Project: HBase
>  Issue Type: Bug
> Environment: HBase-2.0-beta1.
> Hadoop trunk version.
> java version "1.8.0_144"
>Reporter: Rohith Sharma K S
>Assignee: Wei-Chiu Chuang
>Priority: Critical
>
> It is observed that HMaster start is failed when HADOOP_HOME is set as env 
> while starting HMaster. HADOOP_HOME is pointing to Hadoop trunk version.
> {noformat}
> 2018-03-07 16:59:52,654 ERROR [master//10.200.4.200:16000] master.HMaster: 
> Failed to become active master
> java.lang.IllegalStateException: The procedure WAL relies on the ability to 
> hsync for proper operation during component failures, but the underlying 
> filesystem does not support doing so. Please check the config value of 
> 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness 
> and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount 
> that can provide it.
>     at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1036)
>     at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:374)
>     at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:532)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1232)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1145)
>     at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:837)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2026)
>     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:547)
>     at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The same configs work properly with the HBase 1.2.6 build.
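For operators hitting this, the error message itself names the relevant knob. A hypothetical hbase-site.xml fragment is shown below; note that setting it to false trades away the durability guarantee the procedure WAL relies on, so it is an illustration of where the setting lives, not a recommendation:

```xml
<!-- hbase-site.xml fragment: the property named in the error above.
     Setting it to false relaxes the hsync requirement at the cost of
     durability during component failures; shown for illustration only. -->
<property>
  <name>hbase.procedure.store.wal.use.hsync</name>
  <value>false</value>
</property>
```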





[jira] [Commented] (HBASE-20383) [AMv2] AssignmentManager: Failed transition XYZ is not OPEN

2018-04-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436325#comment-16436325
 ] 

stack commented on HBASE-20383:
---

Looking at this more, it looks to be an impatient regionserver re-requesting 
that a region be split. We don't know for sure, though, because there is a 
hole in our logging; we report the split request in the log AFTER we make the 
request, but here the request fails before we can log. For some reason the 
original split request is not showing...

Adding some debug here. The exception in the log is disturbing, though it 
seems harmless given this is out of an ITBLL run that verifies as wholesome.
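The logging hole described above, where the request is logged only after it is made so a rejected request leaves no trace, can be illustrated with a tiny sketch. All names here are invented for illustration, not actual HBase code:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of the log-ordering point above: record the split request
// before issuing it, so a rejected request still shows up in the log.
public class SplitRequestLog {
    static final List<String> LOG = new ArrayList<>();

    // Hypothetical stand-in for the regionserver's split request path.
    static void requestSplit(String regionEncodedName, boolean masterRejects) {
        LOG.add("requesting split of " + regionEncodedName);  // log intent first
        if (masterRejects) {
            throw new IllegalStateException(regionEncodedName + " is not OPEN");
        }
        LOG.add("split of " + regionEncodedName + " accepted");
    }

    public static void main(String[] args) {
        try {
            requestSplit("19a2cd6f88abae0036415ee1ea041c2e", true);
        } catch (IllegalStateException expected) {
            // The failure is visible, and so is the original request:
            System.out.println(LOG.get(0));
        }
    }
}
```

With the intent logged up front, a later "Failed transition" warning can be correlated with the request that triggered it.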

> [AMv2] AssignmentManager: Failed transition XYZ is not OPEN
> ---
>
> Key: HBASE-20383
> URL: https://issues.apache.org/jira/browse/HBASE-20383
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Reporter: stack
>Priority: Major
>
> Seeing a bunch of this testing 2.0.0:
> {code}
> 2018-04-10 13:57:09,430 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=46,queue=1,port=16000] 
> assignment.AssignmentManager: Failed transition
> org.apache.hadoop.hbase.client.DoNotRetryRegionException: 
> 19a2cd6f88abae0036415ee1ea041c2e is not OPEN
>   at 
> org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.checkOnline(AbstractStateMachineTableProcedure.java:193)
>   at 
> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.<init>(SplitTableRegionProcedure.java:112)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:769)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.updateRegionSplitTransition(AssignmentManager.java:911)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.reportRegionStateTransition(AssignmentManager.java:819)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1538)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:11093)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {code}
> Looks like a report back from the Master OK'ing a split to go ahead, but the 
> split is already running. Figure out how to shut these down. They are noisy 
> at least.




