[jira] [Commented] (HBASE-21097) Flush pressure assertion may fail in testFlushThroughputTuning

2018-08-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591179#comment-16591179
 ] 

Hudson commented on HBASE-21097:


Results for branch branch-2
[build #1154 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1154/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1154//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1154//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1154//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Flush pressure assertion may fail in testFlushThroughputTuning 
> ---
>
> Key: HBASE-21097
> URL: https://issues.apache.org/jira/browse/HBASE-21097
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: 21097.v1.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/14137/artifact/patchprocess/patch-unit-hbase-server.txt
>  :
> {code}
> [ERROR] 
> testFlushThroughputTuning(org.apache.hadoop.hbase.regionserver.throttle.TestFlushWithThroughputController)
>   Time elapsed: 17.446 s  <<< FAILURE!
> java.lang.AssertionError: expected:<0.0> but was:<1.2906294173808417E-6>
>   at 
> org.apache.hadoop.hbase.regionserver.throttle.TestFlushWithThroughputController.testFlushThroughputTuning(TestFlushWithThroughputController.java:185)
> {code}
> Here is the related assertion:
> {code}
> assertEquals(0.0, regionServer.getFlushPressure(), EPSILON);
> {code}
> where EPSILON = 1E-6
> In the above case, the observed value exceeded EPSILON by about 2.9E-7, so the 
> assertion didn't pass.
> It seems the epsilon can be adjusted to accommodate different workload / 
> hardware combinations.
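To make the tolerance concrete, here is a minimal JUnit-style sketch of how the delta form of assertEquals behaves with the failing value above; the relaxed epsilon is only an illustration, not the committed change.
{code}
import static org.junit.Assert.assertEquals;

public class FlushPressureEpsilonSketch {
  // Tolerance used by the test today.
  private static final double EPSILON = 1E-6;
  // Hypothetical relaxed tolerance that would absorb the observed value.
  private static final double RELAXED_EPSILON = 1E-5;

  public static void main(String[] args) {
    double observed = 1.2906294173808417E-6; // flush pressure from the failed run
    // Fails: |0.0 - observed| = 1.29E-6 > 1E-6.
    // assertEquals(0.0, observed, EPSILON);
    // Passes: 1.29E-6 < 1E-5.
    assertEquals(0.0, observed, RELAXED_EPSILON);
  }
}
{code}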



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21095) The timeout retry logic for several procedures are broken after master restarts

2018-08-23 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-21095:
---
Attachment: HBASE-21095.branch-2.0.001.patch

> The timeout retry logic for several procedures are broken after master 
> restarts
> ---
>
> Key: HBASE-21095
> URL: https://issues.apache.org/jira/browse/HBASE-21095
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.2
>
> Attachments: HBASE-21095-branch-2.0.patch, HBASE-21095-v1.patch, 
> HBASE-21095-v2.patch, HBASE-21095.branch-2.0.001.patch, HBASE-21095.patch
>
>
> For TRSP, and also RTP in branch-2.0 and branch-2.1, if we fail to assign or 
> unassign a region, we will set the procedure to WAITING_TIMEOUT state, and 
> rely on the ProcedureEvent in RegionStateNode to wake us up later. But after 
> restarting, we do not suspend the ProcedureEvent in RSN, and also do not add 
> the procedure to the ProcedureEvent's suspending queue, so we will hang there 
> forever as no one will wake us up.
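As a rough illustration of the wake-up contract the description relies on (names such as getProcedureEvent, suspendIfNotReady and ProcedureSuspendedException are written from memory of the proc-v2 code and should be read as an approximation, not the committed patch):
{code}
// On an assign/unassign failure the procedure must (1) suspend the
// RegionStateNode's ProcedureEvent and (2) enqueue itself on that event,
// so that a later wake() of the event reschedules it. After a master
// restart this step is skipped, so the procedure stays in WAITING_TIMEOUT
// forever because nothing ever wakes it.
private void waitOnRegionStateNode(RegionStateNode regionNode)
    throws ProcedureSuspendedException {
  if (regionNode.getProcedureEvent().suspendIfNotReady(this)) {
    // Parked on the event's suspend queue; ProcedureEvent.wake(...) will
    // hand the procedure back to the scheduler later.
    throw new ProcedureSuspendedException();
  }
}
{code}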



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21095) The timeout retry logic for several procedures are broken after master restarts

2018-08-23 Thread Allan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591178#comment-16591178
 ] 

Allan Yang commented on HBASE-21095:


Uploaded a simple patch for branch-2.0/2.1, [~Apache9].

> The timeout retry logic for several procedures are broken after master 
> restarts
> ---
>
> Key: HBASE-21095
> URL: https://issues.apache.org/jira/browse/HBASE-21095
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.2
>
> Attachments: HBASE-21095-branch-2.0.patch, HBASE-21095-v1.patch, 
> HBASE-21095-v2.patch, HBASE-21095.branch-2.0.001.patch, HBASE-21095.patch
>
>
> For TRSP, and also RTP in branch-2.0 and branch-2.1, if we fail to assign or 
> unassign a region, we will set the procedure to WAITING_TIMEOUT state, and 
> rely on the ProcedureEvent in RegionStateNode to wake us up later. But after 
> restarting, we do not suspend the ProcedureEvent in RSN, and also do not add 
> the procedure to the ProcedureEvent's suspending queue, so we will hang there 
> forever as no one will wake us up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591166#comment-16591166
 ] 

Hadoop QA commented on HBASE-20993:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
30s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
31s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
27s{color} | {color:red} hbase-client: The patch generated 1 new + 2 unchanged 
- 0 fixed = 3 total (was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
16s{color} | {color:red} hbase-server: The patch generated 6 new + 80 unchanged 
- 0 fixed = 86 total (was 80) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
28s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 33s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m  0s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License 

[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-23 Thread Jack Bearden (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591168#comment-16591168
 ] 

Jack Bearden commented on HBASE-20993:
--

{quote}I tried running TestInsecureIPC in master branch but got compilation 
errors.

Please investigate how to apply the fix to master branch.
{quote}
Sure! Sounds good to me, but let's get it in a new ticket if that's okay. I was 
thinking we need a ticket for branch-2 and another ticket for master that will 
serve for porting this functionality upstream. [~reidchan] may have already 
made these; I haven't checked.

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, 
> HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, 
> HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, 
> HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.2.001.patch, 
> HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch
>
>
> It is easily reproducible.
> Client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true; keytab and principal 
> are set correctly.
> With a simple-auth HBase cluster and a Kerberized HBase client application, any 
> attempt to r/w/c/d a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> 

[jira] [Assigned] (HBASE-21105) TestHBaseFsck failing in branch-1, branch-1.4, branch-1.3 with NPE

2018-08-23 Thread Vishal Khandelwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Khandelwal reassigned HBASE-21105:
-

Assignee: Vishal Khandelwal

> TestHBaseFsck failing in branch-1, branch-1.4, branch-1.3 with NPE
> --
>
> Key: HBASE-21105
> URL: https://issues.apache.org/jira/browse/HBASE-21105
> Project: HBase
>  Issue Type: Bug
>  Components: hbck, test
>Affects Versions: 1.5.0, 1.3.3, 1.4.7
>Reporter: Sean Busbey
>Assignee: Vishal Khandelwal
>Priority: Major
>
> TestHBaseFsck in the mentioned branches has two tests that rely on 
> TestEndToEndSplitTransaction for blocking in the same way TestTableResource 
> used to before HBASE-21076.
> Both tests appear to specifically test that something happens after a 
> split, so we'll need a solution that removes the cross-test dependency but 
> still allows for "wait until this split has finished".
> Example failure from branch-1:
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.util.TestHBaseFsck.testSplitDaughtersNotInMeta(TestHBaseFsck.java:1985)
> {code}
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.util.TestHBaseFsck.testValidLingeringSplitParent(TestHBaseFsck.java:1934)
> {code}
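One way to picture the missing "wait until this split has finished" helper is a plain polling loop inside TestHBaseFsck itself (a sketch under the assumption that Admin#getTableRegions is available in branch-1; not the eventual fix):
{code}
// Poll hbase:meta via the Admin API until the table has more regions than it
// had before the split was requested, instead of relying on
// TestEndToEndSplitTransaction to block for us.
private static void waitForSplit(Admin admin, TableName table,
    int regionCountBefore, long timeoutMs) throws Exception {
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (admin.getTableRegions(table).size() <= regionCountBefore) {
    if (System.currentTimeMillis() > deadline) {
      throw new AssertionError("Split of " + table + " did not finish within "
          + timeoutMs + " ms");
    }
    Thread.sleep(200); // back off briefly before polling again
  }
}
{code}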



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: (was: HBASE-21108.002.patch)

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> If trace is enabled and 150 < stringifiedParam.length() < 1000, this will 
> throw the following exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000
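A minimal sketch of the guarded truncation (an illustration of the failure mode, not necessarily the attached patch): only cut the string when it is actually longer than the limit.
{code}
// Safe variant: truncate only when the stringified parameter exceeds the
// limit, so subSequence/substring can never be asked for an index past the
// end of the string.
static String truncateParam(String stringifiedParam, boolean traceEnabled) {
  int limit = traceEnabled ? 1000 : 150;
  if (stringifiedParam.length() <= limit) {
    return stringifiedParam; // short enough, keep as-is
  }
  return stringifiedParam.substring(0, limit) + " ...";
}
{code}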



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: HBASE-21108.002.patch
Status: Patch Available  (was: Open)

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> If trace is enabled and 150 < stringifiedParam.length() < 1000, this will 
> throw the following exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-23 Thread Jack Bearden (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591163#comment-16591163
 ] 

Jack Bearden commented on HBASE-20993:
--

-007:
 * Various small fixes and code optimizations (thanks [~yuzhih...@gmail.com])

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, 
> HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, 
> HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, 
> HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.2.001.patch, 
> HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch
>
>
> It is easily reproducible.
> Client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true; keytab and principal 
> are set correctly.
> With a simple-auth HBase cluster and a Kerberized HBase client application, any 
> attempt to r/w/c/d a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607)
>   at 
> org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> 

[jira] [Updated] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-23 Thread Jack Bearden (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HBASE-20993:
-
Attachment: HBASE-20993.branch-1.007.patch

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, 
> HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, 
> HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, 
> HBASE-20993.branch-1.007.patch, HBASE-20993.branch-1.2.001.patch, 
> HBASE-20993.branch-1.wip.002.patch, HBASE-20993.branch-1.wip.patch
>
>
> It is easily reproducible.
> Client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true; keytab and principal 
> are set correctly.
> With a simple-auth HBase cluster and a Kerberized HBase client application, any 
> attempt to r/w/c/d a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607)
>   at 
> org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> 

[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Status: Open  (was: Patch Available)

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> If trace is enabled and 150 < stringifiedParam.length() < 1000, this will 
> throw the following exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-23 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591156#comment-16591156
 ] 

Ted Yu commented on HBASE-20993:


I tried running TestInsecureIPC in master branch but got compilation errors.

Please investigate how to apply the fix to master branch.

Thanks

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, 
> HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, 
> HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, 
> HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, 
> HBASE-20993.branch-1.wip.patch
>
>
> It is easily reproducible.
> Client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true; keytab and principal 
> are set correctly.
> With a simple-auth HBase cluster and a Kerberized HBase client application, any 
> attempt to r/w/c/d a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607)
>   at 
> org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos 

[jira] [Commented] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591154#comment-16591154
 ] 

Hadoop QA commented on HBASE-21108:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HBASE-21108 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.7.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-21108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936947/HBASE-21108.002.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14181/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> If trace is enabled and 150 < stringifiedParam.length() < 1000, this will 
> throw the following exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: (was: HBASE-21108.002.patch)

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> If trace is enabled and 150 < stringifiedParam.length() < 1000, this will 
> throw the following exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20890) PE filterScan seems to be stuck forever

2018-08-23 Thread Vikas Vishwakarma (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591153#comment-16591153
 ] 

Vikas Vishwakarma commented on HBASE-20890:
---

LGTM, +1 [~abhishek94goyal].

I can commit if there are no objections, [~apurtell].

> PE filterScan seems to be stuck forever
> ---
>
> Key: HBASE-20890
> URL: https://issues.apache.org/jira/browse/HBASE-20890
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.3
>Reporter: Vikas Vishwakarma
>Assignee: Abhishek Goyal
>Priority: Minor
> Attachments: HBASE-20890.001.patch
>
>
> Command Used
> {code:java}
> ~/current/bigdata-hbase/hbase/hbase/bin/hbase pe --nomapred randomWrite 1 > 
> write 2>&1
> ~/current/bigdata-hbase/hbase/hbase/bin/hbase pe --nomapred filterScan 1 > 
> filterScan 2>&1
> {code}
>  
> Output
> This kept running for several hours just printing the below messages in logs
>  
> {code:java}
> -bash-4.1$ grep "Advancing internal scanner to startKey" filterScan.1 | head
> 2018-07-13 10:44:45,188 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> 2018-07-13 10:44:45,976 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> 2018-07-13 10:44:46,695 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> .
> -bash-4.1$ grep "Advancing internal scanner to startKey" filterScan.1 | tail
> 2018-07-15 06:20:22,353 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> 2018-07-15 06:20:23,044 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> 2018-07-15 06:20:23,768 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: HBASE-21108.002.patch

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> If trace is enabled and 150 < stringifiedParam.length() < 1000, this will 
> throw the following exception:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-23 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591152#comment-16591152
 ] 

Ted Yu commented on HBASE-20993:


Looks good overall.
For BlockingRpcConnection :
{code}
+  return;
+} else {
{code}
Since the if block would return, the 'else' keyword can be omitted.
{code}
+  throw new DoNotRetryIOException("Server asked us to fall back to 
SIMPLE auth, "
++ "but we are not configured for that behavior!");
{code}
Why not use FallbackDisallowedException?

For NettyRpcNegotiateHandler.java :
{code}
+  LOG.error("Server asked us to fall back to SIMPLE auth" +
+  "but we are not configured for that behavior!");
{code}
A space is missing in between.
{code}
+  handleFailure(new FallbackDisallowedException());
{code}
Please add message to the exception.
{code}
   if (!isSecurityEnabled && authMethod != AuthMethod.SIMPLE) {
+// Case: (!isSecurityEnabled && authMethod != AuthMethod.SIMPLE)
{code}
It seems repeating the condition in comment is not necessary.
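Folding those comments together, the Netty-side failure path might look roughly like this (a sketch of the review suggestions, assuming handleFailure and FallbackDisallowedException as referenced above; not the final patch):
{code}
// Single log message with the missing space restored.
String msg = "Server asked us to fall back to SIMPLE auth, "
    + "but we are not configured for that behavior!";
LOG.error(msg);
// The review asks for a descriptive message on the exception as well; shown
// here with the no-arg constructor since the available constructors are not
// confirmed in this thread.
handleFailure(new FallbackDisallowedException());
{code}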



> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, 
> HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, 
> HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, 
> HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, 
> HBASE-20993.branch-1.wip.patch
>
>
> It is easily reproducible.
> Client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true; keytab and principal 
> are set correctly.
> With a simple-auth HBase cluster and a Kerberized HBase client application, any 
> attempt to r/w/c/d a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   

[jira] [Commented] (HBASE-20941) Create and implement HbckService in master

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591149#comment-16591149
 ] 

Hadoop QA commented on HBASE-20941:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 0s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
50s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 15s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}291m 42s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}350m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.assignment.TestMergeTableRegionsProcedure |
|   | hadoop.hbase.regionserver.TestRegionReplicaFailover |
|   | hadoop.hbase.tool.TestLoadIncrementalHFiles |
|   | 

[jira] [Commented] (HBASE-21105) TestHBaseFsck failing in branch-1, branch-1.4, branch-1.3 with NPE

2018-08-23 Thread Vishal Khandelwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591147#comment-16591147
 ] 

Vishal Khandelwal commented on HBASE-21105:
---

Sure [~apurtell], will look into the failure and provide a patch. I looked at the 
result and somehow "TestHBaseFsck" did not run in the results at all. I had 
attached the results ([^result_HBASE-20940.branch-1.v2.log]) with HBASE-20940; I 
remember the same thing happened with another test class as well.

> TestHBaseFsck failing in branch-1, branch-1.4, branch-1.3 with NPE
> --
>
> Key: HBASE-21105
> URL: https://issues.apache.org/jira/browse/HBASE-21105
> Project: HBase
>  Issue Type: Bug
>  Components: hbck, test
>Affects Versions: 1.5.0, 1.3.3, 1.4.7
>Reporter: Sean Busbey
>Priority: Major
>
> TestHBaseFsck in the mentioned branches has two tests that rely on 
> TestEndToEndSplitTransaction for blocking in the same way TestTableResource 
> used to before HBASE-21076.
> Both tests appear to specifically test that something happens after a 
> split, so we'll need a solution that removes the cross-test dependency but 
> still allows for "wait until this split has finished".
> Example failure from branch-1:
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.util.TestHBaseFsck.testSplitDaughtersNotInMeta(TestHBaseFsck.java:1985)
> {code}
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.util.TestHBaseFsck.testValidLingeringSplitParent(TestHBaseFsck.java:1934)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591145#comment-16591145
 ] 

Hadoop QA commented on HBASE-20942:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
52s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
58s{color} | {color:red} hbase-server: The patch generated 1 new + 10 unchanged 
- 0 fixed = 11 total (was 10) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
48s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}289m 57s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 5s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}329m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRegionReplicaFailover |
|   | hadoop.hbase.regionserver.TestClusterId |
|   | hadoop.hbase.tool.TestLoadIncrementalHFiles |
|   | hadoop.hbase.replication.TestReplicationSmallTests |
|   | hadoop.hbase.namespace.TestNamespaceAuditor |
|   | hadoop.hbase.replication.TestMasterReplication |
|   | hadoop.hbase.client.TestMobCloneSnapshotFromClient |
|   | hadoop.hbase.quotas.TestSpaceQuotas |
|   | hadoop.hbase.client.TestAsyncTableAdminApi |
|   | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.replication.TestReplicationKillSlaveRS |
|   | hadoop.hbase.client.replication.TestReplicationAdminWithClusters |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936907/HBASE-20942.003.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3421c871649c 

[jira] [Commented] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591141#comment-16591141
 ] 

Hadoop QA commented on HBASE-21108:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 6s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 0s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 40s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}259m 19s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}295m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.procedure.TestFailedProcCleanup |
|   | hadoop.hbase.util.TestFromClientSide3WoUnsafe |
|   | 
hadoop.hbase.replication.TestSyncReplicationMoreLogsInLocalGiveUpSplitting |
|   | hadoop.hbase.regionserver.TestRowTooBig |
|   | hadoop.hbase.regionserver.TestRegionReplicaFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936912/HBASE-21108.001.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux ecc6c3cb85d6 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 780670ede1 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default 

[jira] [Commented] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591125#comment-16591125
 ] 

Hadoop QA commented on HBASE-20942:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
51s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
52s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m  5s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}122m 
16s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936929/HBASE-20942.002.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 60a3be6dc42c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 780670ede1 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14177/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14177/testReport/ |
| Max. process+thread count | 4969 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-21017) Revisit the expected states for open/close

2018-08-23 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591121#comment-16591121
 ] 

Duo Zhang commented on HBASE-21017:
---

This is what I pushed to master to help find the race: it adds an exception to
show the stacktrace when updating meta.

> Revisit the expected states for open/close
> --
>
> Key: HBASE-21017
> URL: https://issues.apache.org/jira/browse/HBASE-21017
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21017-debug.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21017) Revisit the expected states for open/close

2018-08-23 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21017:
--
Attachment: HBASE-21017-debug.patch

> Revisit the expected states for open/close
> --
>
> Key: HBASE-21017
> URL: https://issues.apache.org/jira/browse/HBASE-21017
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21017-debug.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21101) Remove the waitUntilAllRegionsAssigned call after split in TestTruncateTableProcedure

2018-08-23 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21101:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch-2.0+.

> Remove the waitUntilAllRegionsAssigned call after split in 
> TestTruncateTableProcedure
> -
>
> Key: HBASE-21101
> URL: https://issues.apache.org/jira/browse/HBASE-21101
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.2
>
> Attachments: HBASE-21101.patch
>
>
> It is useless and may cause a wait timeout.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Attachment: HBASE-20942.002.patch
Status: Patch Available  (was: Open)

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-20942.002.patch
>
>
> We truncate RpcServer output to 1000 characters for trace logging. Would be 
> better if that value was configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]
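
As a rough illustration of what "configurable" could look like, a minimal sketch that reads the truncation limit from the HBase Configuration follows. The property key {{hbase.ipc.trace.log.max.length}} and its default of 1000 are assumptions made up for this sketch, not names taken from any committed patch.

{code}
// Minimal sketch: make the trace-log truncation length configurable.
// ASSUMPTION: the property key and default below are illustrative only.
import org.apache.hadoop.conf.Configuration;

public class TraceLogTruncation {
  static final String TRACE_LOG_MAX_LENGTH_KEY = "hbase.ipc.trace.log.max.length";
  static final int DEFAULT_TRACE_LOG_MAX_LENGTH = 1000;

  static String truncateForTrace(Configuration conf, String stringifiedParam, boolean traceEnabled) {
    int limit = traceEnabled
        ? conf.getInt(TRACE_LOG_MAX_LENGTH_KEY, DEFAULT_TRACE_LOG_MAX_LENGTH)
        : 150;
    // Never ask for more characters than the parameter actually has.
    int end = Math.min(limit, stringifiedParam.length());
    return stringifiedParam.substring(0, end) + " ...";
  }
}
{code}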



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Attachment: (was: HBASE-20942.002.patch)

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
>
> We truncate RpcServer output to 1000 characters for trace logging. Would be 
> better if that value was configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Status: Open  (was: Patch Available)

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
>
> We truncate RpcServer output to 1000 characters for trace logging. Would be 
> better if that value was configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Attachment: (was: HBASE-20942.001.patch)

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
>
> We truncate RpcServer output to 1000 characters for trace logging. Would be 
> better if that value was configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21017) Revisit the expected states for open/close

2018-08-23 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591114#comment-16591114
 ] 

Duo Zhang commented on HBASE-21017:
---

I think we need to fix this.

The flaky TestEnableTable is related: there we call assign on an already onlined
region. The call should be rejected but actually is not. The RS will ignore the
open request, so the TRSP will hang. In a local run it can be fixed up by
regionServerReport, where we check the online regions. But in the failing runs
there may be a race in which we update the state to OPEN without waking up the
procedure, and then the TRSP will hang there forever.

I think we should find out where the race is, but another problem is that we
should not allow assigning a region which is already online.

> Revisit the expected states for open/close
> --
>
> Key: HBASE-21017
> URL: https://issues.apache.org/jira/browse/HBASE-21017
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: HBASE-21108.002.patch
Status: Patch Available  (was: Open)

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> If trace is on and 150 < stringifiedParam.length() < 1000, the call will throw
> the following exception.
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000
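
For context, a small self-contained sketch of the failure and one possible guard follows. The clamp with Math.min illustrates the kind of fix needed and is not necessarily what the attached patch does.

{code}
// Reproduces the reported StringIndexOutOfBoundsException and shows a guarded variant.
// ASSUMPTION: the fix shown here (clamping the end index) is illustrative only.
public class TruncateDemo {
  public static void main(String[] args) {
    // A parameter whose length falls between 150 and 1000, as in the report.
    String stringifiedParam = new String(new char[300]).replace('\0', 'x');
    boolean traceEnabled = true;

    try {
      // Original logic: the end index is hard-coded to 1000 when trace is enabled.
      String broken = stringifiedParam.subSequence(0, traceEnabled ? 1000 : 150) + " ";
      System.out.println(broken.length());
    } catch (StringIndexOutOfBoundsException e) {
      System.out.println("Fails as reported: " + e.getMessage());
    }

    // Guarded variant: never ask for more characters than the string actually has.
    int end = Math.min(stringifiedParam.length(), traceEnabled ? 1000 : 150);
    String truncated = stringifiedParam.subSequence(0, end) + " ";
    System.out.println("Truncated length: " + truncated.length());
  }
}
{code}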



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: (was: HBASE-21108.002.patch)

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> If trace is on and 150 < stringifiedParam.length() < 1000, the call will throw
> the following exception.
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: (was: HBASE-21108.001.patch)

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> If trace is on and 150 < stringifiedParam.length() < 1000, the call will throw
> the following exception.
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Status: Open  (was: Patch Available)

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> If trace is on and 150 < stringifiedParam.length() < 1000, the call will throw
> the following exception.
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20306) LoadTestTool does not print summary at end of run

2018-08-23 Thread Colin Garcia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Garcia reassigned HBASE-20306:


Assignee: Colin Garcia  (was: Wei-Chiu Chuang)

> LoadTestTool does not print summary at end of run
> -
>
> Key: HBASE-20306
> URL: https://issues.apache.org/jira/browse/HBASE-20306
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Mike Drob
>Assignee: Colin Garcia
>Priority: Major
>  Labels: beginner
> Attachments: HBASE-20306.000.patch
>
>
> ltt currently prints status as it goes, but doesn't give a nice summary of 
> what happened so users have to infer it from the last status line printed.
> Would be nice to print a real summary with statistics about what was run.
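
A rough sketch of the kind of end-of-run summary being asked for is below; the field names are illustrative and are not taken from LoadTestTool's actual counters.

{code}
// Illustrative end-of-run summary; field names are assumptions, not LoadTestTool internals.
public class LoadTestSummary {
  long rowsWritten;
  long rowsRead;
  long writeFailures;
  long readFailures;
  long elapsedMs;

  void print() {
    System.out.println("==== LoadTestTool summary ====");
    System.out.printf("rows written : %d%n", rowsWritten);
    System.out.printf("rows read    : %d%n", rowsRead);
    System.out.printf("write errors : %d%n", writeFailures);
    System.out.printf("read errors  : %d%n", readFailures);
    System.out.printf("elapsed (ms) : %d%n", elapsedMs);
  }
}
{code}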



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-23 Thread Jack Bearden (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591094#comment-16591094
 ] 

Jack Bearden commented on HBASE-20993:
--

Awesome, thanks [~reidchan]. Let me know if there's anything else I missed. Do 
we have tickets for branch-2 and master yet? As soon as we get this one in, I'd 
like to start working on porting the fix to those branches

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, 
> HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, 
> HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, 
> HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, 
> HBASE-20993.branch-1.wip.patch
>
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal 
> are right set
> A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607)
>   at 
> org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55)
> Caused by: 

[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-23 Thread Jack Bearden (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591072#comment-16591072
 ] 

Jack Bearden commented on HBASE-20993:
--

-006:
 * Fixed bug where EOFException may be encountered when reading off socket
 * Added InterfaceAudience.Private to NettyRpcNegotiateHandler and description 
about the class
 * Updated TestRpcServerSlowConnectionSetup test to handle the preamble 
response off the socket and assert it equals the expected value 
 * Various checkstyle fixes (may need another pass on this)

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, 
> HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, 
> HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, 
> HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, 
> HBASE-20993.branch-1.wip.patch
>
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal 
> are right set
> A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
>   at 
> 

[jira] [Commented] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591071#comment-16591071
 ] 

Hadoop QA commented on HBASE-21108:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
18s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
16s{color} | {color:red} hbase-server: The patch generated 1 new + 10 unchanged 
- 0 fixed = 11 total (was 10) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
34s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}190m 
35s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936904/HBASE-21108.001.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 643fe6d562c4 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 780670ede1 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14170/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14170/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591070#comment-16591070
 ] 

Hadoop QA commented on HBASE-20942:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 49s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}162m 45s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.replication.TestSyncReplicationMoreLogsInLocalGiveUpSplitting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936913/HBASE-20942.003.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 2aa0fe030abe 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 780670ede1 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14174/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 

[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-23 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591067#comment-16591067
 ] 

Reid Chan commented on HBASE-20993:
---

v6 LGTM, let's see what QA says.

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, 
> HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, 
> HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, 
> HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, 
> HBASE-20993.branch-1.wip.patch
>
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal 
> are right set
> A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607)
>   at 
> org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   

[jira] [Updated] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-23 Thread Jack Bearden (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HBASE-20993:
-
Attachment: HBASE-20993.branch-1.006.patch

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, 
> HBASE-20993.003.branch-1.flowchart.png, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.003.patch, HBASE-20993.branch-1.004.patch, 
> HBASE-20993.branch-1.005.patch, HBASE-20993.branch-1.006.patch, 
> HBASE-20993.branch-1.2.001.patch, HBASE-20993.branch-1.wip.002.patch, 
> HBASE-20993.branch-1.wip.patch
>
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal 
> are right set
> A simple auth hbase cluster, a kerberized hbase client application. 
> application trying to r/w/c/d table will have following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607)
>   at 
> org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   at 
> 

[jira] [Commented] (HBASE-21072) Block out HBCK1 in hbase2

2018-08-23 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591065#comment-16591065
 ] 

Reid Chan commented on HBASE-21072:
---

ok.

> Block out HBCK1 in hbase2
> -
>
> Key: HBASE-21072
> URL: https://issues.apache.org/jira/browse/HBASE-21072
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck
>Affects Versions: 2.0.1
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-21072.branch-2.0.001.patch, 
> HBASE-21072.branch-2.0.002.patch, HBASE-21072.branch-2.0.003.patch, 
> HBASE-21072.branch-2.0.003.patch
>
>
> [~busbey] left a note in the parent issue that I only just read which has a 
> prescription for how we might block hbck1 from running against an hbase-2.x 
> (hbck1 could damage an hbase-2 cluster. It's disabled in hbase-2, but an errant
> hbck1 from an hbase-1.x install might run).
> Here is quote from parent issue:
> {code}
> I was idly thinking about how to stop HBase v1 HBCK. Thanks to HBASE-11405, 
> we know that all HBase 1.y.z hbck instances should refuse to run if there's a 
> lock file at '/hbase/hbase-hbck.lock' (given defaults). How about HBase v2 
> places that file permanently in place and replace the contents (usually just 
> an IP address) with a note about how you must not run HBase v1 HBCK against 
> the cluster?
> {code}
> There is also the below:
> {code}
> We could pick another location for locking on HBase version 2 and start 
> building in a version check of some kind?
> {code}
> ... to which I'd answer, let's see. hbck2 is a different beast. It asks the 
> master to do stuff. It doesn't do it itself, as hbck1 did. So no need of a 
> lock/version.
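
To make the quoted idea concrete, here is a minimal sketch of placing a permanent sentinel at the hbck1 lock path. The exact message text and the decision to hard-code the default path are assumptions for illustration only.

{code}
// Illustrative sketch only: write a permanent file at the hbck1 lock location so
// HBase 1.y.z hbck (per HBASE-11405) refuses to run against this cluster.
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Hbck1Blocker {
  public static void placeSentinel(Configuration conf) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    // Default hbck1 lock location mentioned in the comment above.
    Path lock = new Path("/hbase/hbase-hbck.lock");
    try (FSDataOutputStream out = fs.create(lock, true /* overwrite */)) {
      out.write("Do not run HBase 1.x hbck against this hbase-2 cluster.\n"
          .getBytes(StandardCharsets.UTF_8));
    }
  }
}
{code}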



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21071) HBaseTestingUtility::startMiniCluster() to use builder pattern

2018-08-23 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591063#comment-16591063
 ] 

Mingliang Liu commented on HBASE-21071:
---

More than happy. Thanks [~stack].

> HBaseTestingUtility::startMiniCluster() to use builder pattern
> --
>
> Key: HBASE-21071
> URL: https://issues.apache.org/jira/browse/HBASE-21071
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-21071.000.patch, HBASE-21071.001.patch, 
> HBASE-21071.002.patch, HBASE-21071.003.patch, HBASE-21071.004.patch, 
> HBASE-21071.005.patch, HBASE-21071.006.patch, HBASE-21071.006.patch, 
> HBASE-21071.006.patch, HBASE-21071.006.patch, HBASE-21071.006.patch, 
> HBASE-21071.006.patch, HBASE-21071.branch-2.006.patch
>
>
> Currently there are 13 {{startMiniCluster()}} methods to set up a mini 
> cluster. I'm not surprised if we have a few more in the future. It's good to 
> support different combinations of optional parameters, but we have to pick one 
> of these methods carefully while still wondering about the default values of the 
> other parameters; and if we add a new option, we may end up adding even more methods.
> One solution is to use builder pattern: create a class {{MiniClusterOptions}} 
> along with a static class {{MiniClusterOptionsBuilder}}, create a new method  
> {{startMiniCluster(MiniClusterOptions)}}. In {{master}} we delete the old 13 
> methods while in branch-2, we deprecate the old 13 methods.
> Thoughts?
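
For readers following along, a minimal sketch of such a builder is shown below; the class and option names are placeholders and may not match the attached patches.

{code}
// Minimal builder-pattern sketch; names are placeholders, not the committed API.
public final class MiniClusterOptions {
  private final int numMasters;
  private final int numRegionServers;
  private final int numDataNodes;

  private MiniClusterOptions(Builder b) {
    this.numMasters = b.numMasters;
    this.numRegionServers = b.numRegionServers;
    this.numDataNodes = b.numDataNodes;
  }

  public int getNumMasters() { return numMasters; }
  public int getNumRegionServers() { return numRegionServers; }
  public int getNumDataNodes() { return numDataNodes; }

  public static Builder builder() {
    return new Builder();
  }

  public static final class Builder {
    // Defaults mean callers only set the options they care about.
    private int numMasters = 1;
    private int numRegionServers = 1;
    private int numDataNodes = 1;

    public Builder numMasters(int n) { this.numMasters = n; return this; }
    public Builder numRegionServers(int n) { this.numRegionServers = n; return this; }
    public Builder numDataNodes(int n) { this.numDataNodes = n; return this; }

    public MiniClusterOptions build() { return new MiniClusterOptions(this); }
  }
}

// Hypothetical call site for the proposed startMiniCluster(MiniClusterOptions):
//   TEST_UTIL.startMiniCluster(MiniClusterOptions.builder().numRegionServers(3).build());
{code}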



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21093) Move TestCreateTableProcedure.testMRegions to a separated file

2018-08-23 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21093:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch-2.0+.

> Move TestCreateTableProcedure.testMRegions to a separated file
> --
>
> Key: HBASE-21093
> URL: https://issues.apache.org/jira/browse/HBASE-21093
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.2
>
> Attachments: HBASE-21093-v1.patch, HBASE-21093.patch
>
>
> In TestTableDDLProcedureBase we set the procedure worker number to 1, so on a 
> slow machine, we will fail to batch the remote calls to RS with the default 
> dispatch delay, which leads to a very long time to finish the bunch of sub TRSPs.
> Edit:
> It seems that the problem is that, we are slow when updating procedure store 
> sometimes. This is not easy to fix so move testMRegions to separated file 
> first.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21093) Move TestCreateTableProcedure.testMRegions to a separated file

2018-08-23 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21093:
--
Fix Version/s: 2.0.2
   2.1.1
   2.2.0
   3.0.0

> Move TestCreateTableProcedure.testMRegions to a separated file
> --
>
> Key: HBASE-21093
> URL: https://issues.apache.org/jira/browse/HBASE-21093
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.2
>
> Attachments: HBASE-21093-v1.patch, HBASE-21093.patch
>
>
> In TestTableDDLProcedureBase we set the procedure worker number to 1, so on a 
> slow machine we fail to batch the remote calls to the RS within the default 
> dispatch delay, which leads to a very long time to finish the bunch of sub TRSPs.
> Edit:
> It seems the real problem is that we are sometimes slow when updating the 
> procedure store. This is not easy to fix, so move testMRegions to a separate 
> file first.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21071) HBaseTestingUtility::startMiniCluster() to use builder pattern

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591049#comment-16591049
 ] 

Hadoop QA commented on HBASE-21071:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 73 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
25s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} hbase-server: The patch generated 0 new + 406 
unchanged - 48 fixed = 406 total (was 454) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} The patch hbase-mapreduce passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch hbase-rsgroup passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch hbase-shell passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} The patch hbase-rest passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} The patch hbase-examples passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch hbase-spark passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}117m  
8s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
11s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | 

[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591047#comment-16591047
 ] 

Ted Yu commented on HBASE-21098:


Can you put the above response in the release note?

Please also look at the (many) checkstyle warnings in the hbase-server module.

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch
>
>
> When using Apache HBase, the snapshot feature can be used for point-in-time 
> recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase's S3 storage mode, developers can store their 
> data off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate into a single manifest. When 
> HBase on S3 users have a significant number of regions, puts, deletes, and 
> renames (the final commit stage of the snapshot) become the bottleneck 
> causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.
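A minimal sketch of how a deployment might keep the root dir on S3 while pointing the snapshot working (temporary) directory at HDFS. The configuration key name "hbase.snapshot.working.dir" is an assumption based on the description above, not confirmed from the attached patches:

{code}
// Hypothetical sketch: the snapshot working-dir key name is an assumption.
import org.apache.hadoop.conf.Configuration;

public class SnapshotWorkingDirExample {
  public static Configuration withHdfsWorkingDir(Configuration conf) {
    // Root dir stays on S3; the temporary snapshot directory lives on HDFS,
    // so the expensive S3 renames/puts happen only for the final commit.
    conf.set("hbase.rootdir", "s3a://my-bucket/hbase");
    conf.set("hbase.snapshot.working.dir", "hdfs://namenode:8020/hbase-snapshot-working");
    return conf;
  }
}
{code}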



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21095) The timeout retry logic for several procedures are broken after master restarts

2018-08-23 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591046#comment-16591046
 ] 

Duo Zhang commented on HBASE-21095:
---

Let me commit. [~stack], let's also commit HBASE-20881 to branch-2, so that the 
fix here can also go into branch-2.

Thanks.

> The timeout retry logic for several procedures are broken after master 
> restarts
> ---
>
> Key: HBASE-21095
> URL: https://issues.apache.org/jira/browse/HBASE-21095
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.2
>
> Attachments: HBASE-21095-branch-2.0.patch, HBASE-21095-v1.patch, 
> HBASE-21095-v2.patch, HBASE-21095.patch
>
>
> For TRSP, and also RTP in branch-2.0 and branch-2.1, if we fail to assign or 
> unassign a region, we set the procedure to the WAITING_TIMEOUT state and 
> rely on the ProcedureEvent in the RegionStateNode to wake us up later. But after 
> restarting, we neither suspend the ProcedureEvent in the RSN nor add 
> the procedure to the ProcedureEvent's suspend queue, so we hang there 
> forever because no one will wake us up.
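The contract being described, sketched with deliberately generic, hypothetical names (this is not the HBase procedure-framework API; it only illustrates why both steps, marking the event not-ready and queuing the waiter, are needed):

{code}
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical illustration only: a waiter must be queued while the event is marked
// not-ready, otherwise a later wake() has nothing to deliver and the waiter hangs forever.
class WakeUpEvent {
  private final Queue<Runnable> suspendedWaiters = new ArrayDeque<>();
  private boolean ready = true;

  synchronized void suspendAndWait(Runnable waiter) {
    ready = false;
    suspendedWaiters.add(waiter); // forgetting this step is the bug described above
  }

  synchronized void wake() {
    ready = true;
    while (!suspendedWaiters.isEmpty()) {
      suspendedWaiters.poll().run();
    }
  }
}
{code}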



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18549) Unclaimed replication queues can go undetected

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591045#comment-16591045
 ] 

Hadoop QA commented on HBASE-18549:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
40s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
18s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 31s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}120m 
46s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-18549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936910/HBASE-18549-.master.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e0a6028082a2 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven 

[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591042#comment-16591042
 ] 

Hadoop QA commented on HBASE-21098:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
23s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
21s{color} | {color:red} hbase-server: The patch generated 277 new + 283 
unchanged - 1 fixed = 560 total (was 284) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} hbase-mapreduce: The patch generated 1 new + 14 
unchanged - 0 fixed = 15 total (was 14) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 46s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}122m 
34s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
28s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21098 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936906/HBASE-21098.master.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3f05b5adf68b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-23 Thread liubangchen (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16591041#comment-16591041
 ] 

liubangchen commented on HBASE-19008:
-

I was in bed last night. Thank you very much [~reidchan] 
[~yuzhih...@gmail.com]. If there is anything I can do, just say it.

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Fix For: 3.0.0, 2.2.0
>
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-11.patch, HBASE-19008-12.patch, HBASE-19008-13.patch, 
> HBASE-19008-14.patch, HBASE-19008-15.patch, HBASE-19008-16.patch, 
> HBASE-19008-17.patch, HBASE-19008-18.patch, HBASE-19008-2.patch, 
> HBASE-19008-20.patch, HBASE-19008-21.patch, HBASE-19008-3.patch, 
> HBASE-19008-4.patch, HBASE-19008-5.patch, HBASE-19008-6.patch, 
> HBASE-19008-7.patch, HBASE-19008-8.patch, HBASE-19008-9.patch, 
> HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Attachment: HBASE-20942.002.patch
Status: Patch Available  (was: Open)

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-20942.001.patch, HBASE-20942.002.patch
>
>
> We truncate RpcServer output to 1000 characters for trace logging. Would be 
> better if that value was configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]
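A minimal sketch of what the configurable limit could look like; the key name below is an assumption for illustration and is not taken from the attached patches:

{code}
import org.apache.hadoop.conf.Configuration;

public class TraceLogLengthExample {
  // Hypothetical key name; 1000 mirrors today's hard-coded trace limit.
  static final String TRACE_LOG_MAX_LENGTH = "hbase.ipc.trace.log.maxlength";

  static int traceLogMaxLength(Configuration conf) {
    return conf.getInt(TRACE_LOG_MAX_LENGTH, 1000);
  }
}
{code}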



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Attachment: (was: HBASE-20942.002.patch)

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-20942.001.patch
>
>
> We truncate RpcServer output to 1000 characters for trace logging. Would be 
> better if that value was configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Attachment: (was: HBASE-20942.003.patch)

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-20942.001.patch
>
>
> We truncate RpcServer output to 1000 characters for trace logging. Would be 
> better if that value was configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Status: Open  (was: Patch Available)

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-20942.001.patch, HBASE-20942.002.patch, 
> HBASE-20942.003.patch
>
>
> We truncate RpcServer output to 1000 characters for trace logging. Would be 
> better if that value was configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-23 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590996#comment-16590996
 ] 

stack commented on HBASE-20429:
---

If this is for 1.x, it's ok to remove the label.

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590997#comment-16590997
 ] 

Krish Dey commented on HBASE-21108:
---

Thanks for the advice, 
[elserj|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=elserj].
Uploaded the squashed patch.

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.001.patch, HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method, the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> throws the following exception whenever trace is enabled and 
> 150 < stringifiedParam.length() < 1000:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000
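A minimal, length-safe truncation sketch; the helper name is assumed for illustration and the attached patch may fix the problem differently:

{code}
public class TruncateForLogExample {
  // Never pass an end index beyond the string's length to subSequence/substring.
  static String truncateForLog(String stringifiedParam, boolean traceEnabled) {
    int limit = traceEnabled ? 1000 : 150;
    return stringifiedParam.length() > limit
        ? stringifiedParam.substring(0, limit)
        : stringifiedParam;
  }
}
{code}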



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-23 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590993#comment-16590993
 ] 

Andrew Purtell commented on HBASE-20429:


Reiterating: The goal of this JIRA is stability on S3 for write workloads all 
the way down through to a new 1.x minor release.

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: HBASE-21108.002.patch

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.001.patch, HBASE-21108.002.patch
>
>
> In the RpcServer#logResponse method, the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> throws the following exception whenever trace is enabled and 
> 150 < stringifiedParam.length() < 1000:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-23 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590991#comment-16590991
 ] 

Andrew Purtell edited comment on HBASE-20429 at 8/24/18 12:41 AM:
--

I don't want this to be considered "FSredo". We need improvements in branch-1 
for today's production. I deliberately did not file this under the related FS 
redo JIRAs. I am going to remove the tag. We can have #2, at least to some 
extent, without reinventing how we manage files in an incompatible way. For 
example HBASE-20431. It is not going to be sufficient to commit something to 
master (aka 3.0) and call it a day


was (Author: apurtell):
I don't want this to be considered "FSredo". We need improvements in branch-1 
for today's production. Sure, branch-2 is in the future sometime, but not 
today, at least not for us. So I deliberately did not file this under the 
related FS redo JIRAs. I am going to remove the tag. We can have #2, at least 
to some extent, without reinventing how we manage files in an incompatible way. 
For example HBASE-20431. It is not going to be sufficient to commit something 
to master (aka 3.0) and call it a day

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-23 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590991#comment-16590991
 ] 

Andrew Purtell edited comment on HBASE-20429 at 8/24/18 12:40 AM:
--

I don't want this to be considered "FSredo". We need improvements in branch-1 
for today's production. Sure, branch-2 is in the future sometime, but not 
today, at least not for us. So I deliberately did not file this under the 
related FS redo JIRAs. I am going to remove the tag. We can have #2, at least 
to some extent, without reinventing how we manage files in an incompatible way. 
For example HBASE-20431. It is not going to be sufficient to commit something 
to master (aka 3.0) and call it a day


was (Author: apurtell):
I don't want this to be considered "FSredo". We need improvements in branch-1 
for today's production. Sure, branch-2 is in the future sometime, but not 
today, at least not for us. So I deliberately did not file this under the 
related FS redo JIRAs. I am going to remove the tag. 

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-23 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590991#comment-16590991
 ] 

Andrew Purtell commented on HBASE-20429:


I don't want this to be considered "FSredo". We need improvements in branch-1 
for today's production. Sure, branch-2 is in the future sometime, but not 
today, at least not for us. So I deliberately did not file this under the 
related FS redo JIRAs. I am going to remove the tag. 

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-23 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-20429:
---
Labels:   (was: FSRedo)

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21107) add a metrics for netty direct memory

2018-08-23 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590969#comment-16590969
 ] 

stack commented on HBASE-21107:
---

+1. It's beautiful.

> add a metrics for netty direct memory
> -
>
> Key: HBASE-21107
> URL: https://issues.apache.org/jira/browse/HBASE-21107
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.0.1
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-21107-master-v1.patch, Screen Shot 2018-08-23 at 
> 4.55.01 PM.png
>
>
> This is separated from HBASE-20913 as I realized that netty direct memory 
> usage needs to show up in metrics instead of the web page.
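For reference, Netty 4.1 exposes pooled direct memory usage through its allocator metric; a minimal read of it might look like the sketch below. How the attached patch actually wires this value into the HBase metrics system is not shown here:

{code}
import io.netty.buffer.PooledByteBufAllocator;

public class NettyDirectMemoryExample {
  // Reads the direct memory currently used by Netty's default pooled allocator.
  public static long usedDirectMemory() {
    return PooledByteBufAllocator.DEFAULT.metric().usedDirectMemory();
  }
}
{code}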



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21107) add a metrics for netty direct memory

2018-08-23 Thread huaxiang sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-21107:
-
Status: Patch Available  (was: Open)

> add a metrics for netty direct memory
> -
>
> Key: HBASE-21107
> URL: https://issues.apache.org/jira/browse/HBASE-21107
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.0.1
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-21107-master-v1.patch, Screen Shot 2018-08-23 at 
> 4.55.01 PM.png
>
>
> This is separated from HBASE-20913 as I realized that netty direct memory 
> usage needs to show up in metrics instead of the web page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590959#comment-16590959
 ] 

Hudson commented on HBASE-19008:


Results for branch branch-2
[build #1153 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1153/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1153//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1153//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1153//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Fix For: 3.0.0, 2.2.0
>
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-11.patch, HBASE-19008-12.patch, HBASE-19008-13.patch, 
> HBASE-19008-14.patch, HBASE-19008-15.patch, HBASE-19008-16.patch, 
> HBASE-19008-17.patch, HBASE-19008-18.patch, HBASE-19008-2.patch, 
> HBASE-19008-20.patch, HBASE-19008-21.patch, HBASE-19008-3.patch, 
> HBASE-19008-4.patch, HBASE-19008-5.patch, HBASE-19008-6.patch, 
> HBASE-19008-7.patch, HBASE-19008-8.patch, HBASE-19008-9.patch, 
> HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21107) add a metrics for netty direct memory

2018-08-23 Thread huaxiang sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590954#comment-16590954
 ] 

huaxiang sun commented on HBASE-21107:
--

FYI [~saint@gmail.com], I put the netty direct memory usage under the server's ipc section.

> add a metrics for netty direct memory
> -
>
> Key: HBASE-21107
> URL: https://issues.apache.org/jira/browse/HBASE-21107
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.0.1
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-21107-master-v1.patch, Screen Shot 2018-08-23 at 
> 4.55.01 PM.png
>
>
> This is separated from HBASE-20913 as I realized that netty direct memory 
> usage needs to show up in metrics instead of the web page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21107) add a metrics for netty direct memory

2018-08-23 Thread huaxiang sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-21107:
-
Attachment: Screen Shot 2018-08-23 at 4.55.01 PM.png

> add a metrics for netty direct memory
> -
>
> Key: HBASE-21107
> URL: https://issues.apache.org/jira/browse/HBASE-21107
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.0.1
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-21107-master-v1.patch, Screen Shot 2018-08-23 at 
> 4.55.01 PM.png
>
>
> This is separated from HBASE-20913 as I realized that netty direct memory 
> usage needs to show up in metrics instead of the web page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21107) add a metrics for netty direct memory

2018-08-23 Thread huaxiang sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-21107:
-
Attachment: HBASE-21107-master-v1.patch

> add a metrics for netty direct memory
> -
>
> Key: HBASE-21107
> URL: https://issues.apache.org/jira/browse/HBASE-21107
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Affects Versions: 2.0.1
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-21107-master-v1.patch
>
>
> This is separated from HBASE-20913 as I realized that netty direct memory 
> usage needs to show up in metrics instead of the web page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590942#comment-16590942
 ] 

Josh Elser commented on HBASE-21108:


[~krish.dey], thanks for the patch. You might need to squash them down into a 
single commit.

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.001.patch
>
>
> In the RpcServer#logResponse method, the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> throws the following exception whenever trace is enabled and 
> 150 < stringifiedParam.length() < 1000:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-23 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20429:
--
Labels: FSRedo  (was: )

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>  Labels: FSRedo
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21070) SnapshotFileCache won't update for snapshots stored in S3

2018-08-23 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21070:
--
Labels: FSRedo  (was: )

> SnapshotFileCache won't update for snapshots stored in S3
> -
>
> Key: HBASE-21070
> URL: https://issues.apache.org/jira/browse/HBASE-21070
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0, 2.1.1, 1.4.7
>Reporter: Zach York
>Assignee: Zach York
>Priority: Critical
>  Labels: FSRedo
> Attachments: HBASE-21070.master.001.patch
>
>
> The SnapshotFileCache depends on last modified time to determine whether to 
> update the Snapshot HFile cache. However, in S3, real 'folders' don't exist. 
> S3 filesystems create a dummy file in place of a folder, but the dummy file 
> last modified time is not updated when files are changed 'under' it. This 
> means that the SnapshotFileCache doesn't pick up new snapshot HFiles and 
> these files aren't removed from the HFileCleaner and can be eligible for 
> deletion.
>  
> My patch removes the lastmodified assumption.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20429) Support for mixed or write-heavy workloads on non-HDFS filesystems

2018-08-23 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590937#comment-16590937
 ] 

stack commented on HBASE-20429:
---

+1 for #2

> Support for mixed or write-heavy workloads on non-HDFS filesystems
> --
>
> Key: HBASE-20429
> URL: https://issues.apache.org/jira/browse/HBASE-20429
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Andrew Purtell
>Priority: Major
>
> We can support reasonably well use cases on non-HDFS filesystems, like S3, 
> where an external writer has loaded (and continues to load) HFiles via the 
> bulk load mechanism, and then we serve out a read only workload at the HBase 
> API.
> Mixed workloads or write-heavy workloads won't fare as well. In fact, data 
> loss seems certain. It will depend on the specific filesystem, but all of the 
> S3 backed Hadoop filesystems suffer from a couple of obvious problems, 
> notably a lack of atomic rename. 
> This umbrella will serve to collect some related ideas for consideration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Tyler Mi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590933#comment-16590933
 ] 

Tyler Mi commented on HBASE-21098:
--

I recommend storing the working directory on-cluster on HDFS as doing so has 
shown a strong performance increase due to data locality. It is important to 
note that the working directory should not overlap with any existing 
directories as the working directory will be cleaned out during the snapshot 
process. Beyond that, any well-named directory on HDFS should be sufficient.

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase's S3 storage mode, developers can store their 
> data off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate into a single manifest. When 
> HBase on S3 users have a significant amount of regions, puts, deletes, and 
> renames (the final commit stage of the snapshot) become the bottleneck 
> causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21072) Block out HBCK1 in hbase2

2018-08-23 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590930#comment-16590930
 ] 

stack commented on HBASE-21072:
---

Ok if I push [~reidchan]?

> Block out HBCK1 in hbase2
> -
>
> Key: HBASE-21072
> URL: https://issues.apache.org/jira/browse/HBASE-21072
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck
>Affects Versions: 2.0.1
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-21072.branch-2.0.001.patch, 
> HBASE-21072.branch-2.0.002.patch, HBASE-21072.branch-2.0.003.patch, 
> HBASE-21072.branch-2.0.003.patch
>
>
> [~busbey] left a note in the parent issue that I only just read which has a 
> prescription for how we might block hbck1 from running against an hbase-2.x 
> (hbck1 could damage an hbase-2. It's disabled in hbase-2, but an errant hbck1 
> from an hbase-1.x install might run).
> Here is quote from parent issue:
> {code}
> I was idly thinking about how to stop HBase v1 HBCK. Thanks to HBASE-11405, 
> we know that all HBase 1.y.z hbck instances should refuse to run if there's a 
> lock file at '/hbase/hbase-hbck.lock' (given defaults). How about HBase v2 
> places that file permanently in place and replace the contents (usually just 
> an IP address) with a note about how you must not run HBase v1 HBCK against 
> the cluster?
> {code}
> There is also the below:
> {code}
> We could pick another location for locking on HBase version 2 and start 
> building in a version check of some kind?
> {code}
> ... to which I'd answer: let's see. hbck2 is a different beast. It asks the 
> master to do stuff; it doesn't do it itself, as hbck1 did. So there is no need for a 
> lock/version.
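A minimal sketch of the idea quoted above: permanently occupy the hbck1 lock path with a human-readable note instead of an IP address. This is an illustration only, not the attached patch:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Hbck1LockNoteExample {
  // Overwrites the lock file (normally holding an IP address) with a warning note,
  // so an hbase-1.x hbck that honors the lock will refuse to run against the cluster.
  public static void writeNote(FileSystem fs, Path lockPath) throws IOException {
    try (FSDataOutputStream out = fs.create(lockPath, true)) {
      out.write("Do not run HBase v1 hbck against this HBase v2 cluster."
          .getBytes(StandardCharsets.UTF_8));
    }
  }
}
{code}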



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21072) Block out HBCK1 in hbase2

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590925#comment-16590925
 ] 

Hadoop QA commented on HBASE-21072:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
31s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} hbase-server: The patch generated 0 new + 258 
unchanged - 1 fixed = 258 total (was 259) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
46s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m  9s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}174m 
39s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}207m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-21072 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936879/HBASE-21072.branch-2.0.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux fadb2275c609 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.0 / 89ec91608f |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14165/testReport/ |
| Max. process+thread count | 4747 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14165/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HBASE-20941) Create and implement HbckService in master

2018-08-23 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590916#comment-16590916
 ] 

Reid Chan commented on HBASE-20941:
---

{quote}
Subsequent thought: can this duplication be avoided with a combination of 
generics and a common base class for all services that can use this pattern? 
A separate JIRA can be created to explore this, as it would be a little off 
topic.
{quote}
SGTM, nice follow up.
*{color:green}+1{color}* for v4.

> Create and implement HbckService in master
> --
>
> Key: HBASE-20941
> URL: https://issues.apache.org/jira/browse/HBASE-20941
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Attachments: hbase-20941.master.001.patch, 
> hbase-20941.master.002.patch, hbase-20941.master.003.patch, 
> hbase-20941.master.004.patch
>
>
> Create HbckService in master and implement the following methods:
>  # setTableState(): If table states are inconsistent with the actions/procedures 
> working on them, manipulating their states in meta sometimes fixes things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Attachment: (was: HBASE-20942.003.patch)

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-20942.001.patch, HBASE-20942.002.patch, 
> HBASE-20942.003.patch
>
>
> We truncate RpcServer output to 1000 characters for trace logging. It would be 
> better if that value were configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]
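
For illustration only, a minimal sketch of what a configurable limit might look like. The property name {{hbase.ipc.trace.log.max.length}} and the helper below are hypothetical and are not taken from the attached patches:

{code:java}
// Hypothetical sketch only: the property name and helper are illustrative,
// not the API from the attached HBASE-20942 patches.
import org.apache.hadoop.conf.Configuration;

public final class TraceLogTruncation {
  // Assumed configuration key and default; the real patch may use different names.
  static final String TRACE_LOG_MAX_LENGTH_KEY = "hbase.ipc.trace.log.max.length";
  static final int DEFAULT_TRACE_LOG_MAX_LENGTH = 1000;

  static String truncateForLog(Configuration conf, String stringifiedParam, boolean traceEnabled) {
    // Read the trace-level limit from configuration instead of hard-coding 1000.
    int limit = traceEnabled
        ? conf.getInt(TRACE_LOG_MAX_LENGTH_KEY, DEFAULT_TRACE_LOG_MAX_LENGTH)
        : 150;
    // Clamp to the actual length so short parameters do not overflow (see HBASE-21108).
    return stringifiedParam.substring(0, Math.min(limit, stringifiedParam.length()));
  }
}
{code}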



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Attachment: HBASE-20942.003.patch

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-20942.001.patch, HBASE-20942.002.patch, 
> HBASE-20942.003.patch
>
>
> We truncate RpcServer output to 1000 characters for trace logging. It would be 
> better if that value were configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: HBASE-21108.001.patch

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.001.patch
>
>
> In the RpcServer#logResponse method, the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> throws the following exception when trace is on and 
> 150 < stringifiedParam.length() < 1000:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: (was: HBASE-21108.001.patch)

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
>
> In the RpcServer#logResponse method, the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> throws the following exception when trace is on and 
> 150 < stringifiedParam.length() < 1000:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Tyler Mi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Mi updated HBASE-21098:
-
Description: 
When using Apache HBase, the snapshot feature can be used to make a point in 
time recovery. To do this, HBase creates a manifest of all the files in all of 
the Regions so that those files can be referenced again when a user restores a 
snapshot. With HBase's S3 storage mode, developers can store their data 
off-cluster on Amazon S3. However, utilizing S3 as a file system is inefficient 
in some operations, namely renames. Most Hadoop ecosystem applications use an 
atomic rename as a method of committing data. However, with S3, a rename is a 
separate copy and then a delete of every file which is no longer atomic and, in 
fact, quite costly. In addition, puts and deletes on S3 have latency issues 
that traditional filesystems do not encounter when manipulating the region 
snapshots to consolidate into a single manifest. When HBase on S3 users have a 
significant amount of regions, puts, deletes, and renames (the final commit 
stage of the snapshot) become the bottleneck causing snapshots to take many 
minutes or even hours to complete.

The purpose of this patch is to increase the overall performance of snapshots 
while utilizing HBase on S3 through the use of a temporary directory for the 
snapshots that exists on a traditional filesystem like HDFS to circumvent the 
bottlenecks.

  was:
When using Apache HBase, the snapshot feature can be used to make a point in 
time recovery. To do this, HBase creates a manifest of all the files in all of 
the Regions so that those files can be referenced again when a user restores a 
snapshot. With HBase storage mode S3, developers can store their data 
off-cluster in Amazon S3. However, utilizing S3 as a FileSystem is inefficient 
in some operations, namely renames. Most Hadoop ecosystem applications use an 
atomic rename as a method of committing data. However, with S3, a rename is a 
separate copy and then a delete of every file which is no longer atomic and, in 
fact, quite costly. In addition, puts and deletes on S3 have latency issues 
that traditional filesystems do not encounter when manipulating the region 
snapshots. When HBase on S3 customers have a significant amount of regions, 
puts, deletes, and renames (the final commit stage of the snapshot) become the 
bottleneck causing snapshots to take many minutes or even hours to complete.

The purpose of this patch is to increase the overall performance of snapshots 
while utilizing HBase on S3 through the use of a temporary directory for the 
snapshots that exists on a traditional filesystem to circumvent the bottlenecks.


> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase's S3 storage mode, developers can store their 
> data off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate into a single manifest. When 
> HBase on S3 users have a significant amount of regions, puts, deletes, and 
> renames (the final commit stage of the snapshot) become the bottleneck 
> causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18549) Unclaimed replication queues can go undetected

2018-08-23 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-18549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-18549:

Attachment: HBASE-18549-.master.003.patch

> Unclaimed replication queues can go undetected
> --
>
> Key: HBASE-18549
> URL: https://issues.apache.org/jira/browse/HBASE-18549
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Ashu Pachauri
>Assignee: Xu Cang
>Priority: Critical
> Fix For: 1.5.0, 1.3.3, 1.4.8
>
> Attachments: HBASE-18549-.master.001.patch, 
> HBASE-18549-.master.002.patch, HBASE-18549-.master.003.patch
>
>
> We have come across this situation multiple times where a zookeeper issue 
> can silently cause NodeFailoverWorker to fail to pick up the replication queue 
> for a dead region server. One example is when the znode size for a particular 
> queue exceeds the jute.maxBuffer value.
> There can be other situations that lead to this and just go undetected. 
> We need a metric for the number of unclaimed replication queues. This 
> will help in mitigating the problem through alerting on the metric and 
> identifying underlying issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20890) PE filterScan seems to be stuck forever

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590907#comment-16590907
 ] 

Hadoop QA commented on HBASE-20890:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 42s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
29s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20890 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936898/HBASE-20890.001.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux c47f9e936307 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 780670ede1 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14166/testReport/ |
| Max. process+thread count | 4121 (vs. ulimit of 1) |
| modules | C: hbase-mapreduce U: hbase-mapreduce |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14166/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> PE filterScan seems to be 

[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590904#comment-16590904
 ] 

Ted Yu commented on HBASE-21098:


{code}
  public static final String SNAPSHOT_WORKING_DIR = "hbase.snapshot.working.dir";
{code}
Is there a recommendation on where the working directory should reside?
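
As context for the question, a sketch of pointing the working directory at HDFS while the root dir stays on S3. The bucket and namenode paths are placeholders, and the property itself comes from the attached patch, not from a released HBase:

{code:java}
// Illustrative only: the paths are placeholders, and hbase.snapshot.working.dir
// is defined by the attached HBASE-21098 patch, not by a released HBase version.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SnapshotWorkingDirExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Root dir on S3 (costly renames), snapshot working dir on HDFS (cheap atomic renames).
    conf.set("hbase.rootdir", "s3a://my-bucket/hbase");
    conf.set("hbase.snapshot.working.dir", "hdfs://namenode:8020/hbase/.hbase-snapshot/.tmp");
    System.out.println(conf.get("hbase.snapshot.working.dir"));
  }
}
{code}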

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase storage mode S3, developers can store their 
> data off-cluster in Amazon S3. However, utilizing S3 as a FileSystem is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots. When HBase on S3 customers have a 
> significant amount of regions, puts, deletes, and renames (the final commit 
> stage of the snapshot) become the bottleneck causing snapshots to take many 
> minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Tyler Mi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Mi updated HBASE-21098:
-
Affects Version/s: (was: 2.1.0)
   2.1.1
   3.0.0

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase storage mode S3, developers can store their 
> data off-cluster in Amazon S3. However, utilizing S3 as a FileSystem is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots. When HBase on S3 customers have a 
> significant amount of regions, puts, deletes, and renames (the final commit 
> stage of the snapshot) become the bottleneck causing snapshots to take many 
> minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Tyler Mi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Mi updated HBASE-21098:
-
Status: Patch Available  (was: Open)

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase storage mode S3, developers can store their 
> data off-cluster in Amazon S3. However, utilizing S3 as a FileSystem is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots. When HBase on S3 customers have a 
> significant amount of regions, puts, deletes, and renames (the final commit 
> stage of the snapshot) become the bottleneck causing snapshots to take many 
> minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20942) Make RpcServer trace log length configurable

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-20942:
--
Attachment: HBASE-20942.003.patch

> Make RpcServer trace log length configurable
> 
>
> Key: HBASE-20942
> URL: https://issues.apache.org/jira/browse/HBASE-20942
> Project: HBase
>  Issue Type: Task
>Reporter: Esteban Gutierrez
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-20942.001.patch, HBASE-20942.002.patch, 
> HBASE-20942.003.patch
>
>
> We truncate RpcServer output to 1000 characters for trace logging. It would be 
> better if that value were configurable.
> Esteban mentioned this to me earlier, so I'm crediting him as the reporter.
> cc: [~elserj]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Tyler Mi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Mi updated HBASE-21098:
-
Attachment: HBASE-21098.master.003.patch

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase storage mode S3, developers can store their 
> data off-cluster in Amazon S3. However, utilizing S3 as a FileSystem is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots. When HBase on S3 customers have a 
> significant amount of regions, puts, deletes, and renames (the final commit 
> stage of the snapshot) become the bottleneck causing snapshots to take many 
> minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krish Dey updated HBASE-21108:
--
Attachment: HBASE-21108.001.patch
Status: Patch Available  (was: Open)

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
> Attachments: HBASE-21108.001.patch
>
>
> In the RpcServer#logResponse method, the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> throws the following exception when trace is on and 
> 150 < stringifiedParam.length() < 1000:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Tyler Mi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Mi updated HBASE-21098:
-
Attachment: HBASE-21098.master.002.patch

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase storage mode S3, developers can store their 
> data off-cluster in Amazon S3. However, utilizing S3 as a FileSystem is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots. When HBase on S3 customers have a 
> significant amount of regions, puts, deletes, and renames (the final commit 
> stage of the snapshot) become the bottleneck causing snapshots to take many 
> minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Tyler Mi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Mi updated HBASE-21098:
-
Attachment: HBASE-21098.master.001.patch

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase storage mode S3, developers can store their 
> data off-cluster in Amazon S3. However, utilizing S3 as a FileSystem is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots. When HBase on S3 customers have a 
> significant amount of regions, puts, deletes, and renames (the final commit 
> stage of the snapshot) become the bottleneck causing snapshots to take many 
> minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590889#comment-16590889
 ] 

Krish Dey commented on HBASE-21108:
---

As discussed, here is the JIRA, 
[elserj|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=elserj]

> RpcServer when trace enabled might get the following exception. 
> java.lang.StringIndexOutOfBoundsException: 
> ---
>
> Key: HBASE-21108
> URL: https://issues.apache.org/jira/browse/HBASE-21108
> Project: HBase
>  Issue Type: Bug
>Reporter: Krish Dey
>Assignee: Krish Dey
>Priority: Major
>
> In the RpcServer#logResponse method, the logic
> stringifiedParam = stringifiedParam.subSequence(
> 0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
> throws the following exception when trace is on and 
> 150 < stringifiedParam.length() < 1000:
> java.lang.StringIndexOutOfBoundsException: String index out of range: 1000



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-08-23 Thread Tyler Mi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Mi updated HBASE-21098:
-
Summary: Improve Snapshot Performance with Temporary Snapshot Directory 
when rootDir on S3  (was: HBase on S3 Snapshot Performance Increase)

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Tyler Mi
>Priority: Major
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase storage mode S3, developers can store their 
> data off-cluster in Amazon S3. However, utilizing S3 as a FileSystem is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots. When HBase on S3 customers have a 
> significant amount of regions, puts, deletes, and renames (the final commit 
> stage of the snapshot) become the bottleneck causing snapshots to take many 
> minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20941) Create and implement HbckService in master

2018-08-23 Thread Umesh Agashe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-20941:
-
Attachment: hbase-20941.master.004.patch

> Create and implement HbckService in master
> --
>
> Key: HBASE-20941
> URL: https://issues.apache.org/jira/browse/HBASE-20941
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Attachments: hbase-20941.master.001.patch, 
> hbase-20941.master.002.patch, hbase-20941.master.003.patch, 
> hbase-20941.master.004.patch
>
>
> Create HbckService in master and implement the following methods:
>  # setTableState(): If table states are inconsistent with the actions/procedures 
> working on them, manipulating their states in meta sometimes fixes things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21108) RpcServer when trace enabled might get the following exception. java.lang.StringIndexOutOfBoundsException:

2018-08-23 Thread Krish Dey (JIRA)
Krish Dey created HBASE-21108:
-

 Summary: RpcServer when trace enabled might get the following 
exception. java.lang.StringIndexOutOfBoundsException: 
 Key: HBASE-21108
 URL: https://issues.apache.org/jira/browse/HBASE-21108
 Project: HBase
  Issue Type: Bug
Reporter: Krish Dey
Assignee: Krish Dey


In the RpcServer#logResponse method, the logic
stringifiedParam = stringifiedParam.subSequence(
0, LOG.isTraceEnabled() ? 1000 : 150) + " ";
throws the following exception when trace is on and 
150 < stringifiedParam.length() < 1000:
java.lang.StringIndexOutOfBoundsException: String index out of range: 1000
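
A minimal, self-contained reproduction of the condition described above, plus one possible clamp; this is an illustration under the stated assumptions, not the attached patch:

{code:java}
// Illustrative reproduction only; not the code from the attached HBASE-21108 patch.
public class SubSequenceDemo {
  public static void main(String[] args) {
    // Build a parameter string with 150 < length < 1000 (Java 8 compatible).
    String stringifiedParam = new String(new char[300]).replace('\0', 'x');
    boolean traceEnabled = true;
    int limit = traceEnabled ? 1000 : 150;

    // Unguarded, as in the description: subSequence(0, 1000) on a 300-char string
    // throws java.lang.StringIndexOutOfBoundsException.
    // stringifiedParam.subSequence(0, limit);

    // One possible guard: clamp the end index to the actual string length.
    CharSequence truncated = stringifiedParam.subSequence(0, Math.min(limit, stringifiedParam.length()));
    System.out.println(truncated.length()); // prints 300
  }
}
{code}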



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18549) Unclaimed replication queues can go undetected

2018-08-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590881#comment-16590881
 ] 

Hadoop QA commented on HBASE-18549:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
20s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
10s{color} | {color:red} hbase-server: The patch generated 1 new + 12 unchanged 
- 0 fixed = 13 total (was 12) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
13s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 12s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} hbase-replication generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}118m 
21s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-18549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936875/HBASE-18549-.master.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 

[jira] [Updated] (HBASE-21071) HBaseTestingUtility::startMiniCluster() to use builder pattern

2018-08-23 Thread Mingliang Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HBASE-21071:
--
Attachment: HBASE-21071.006.patch

> HBaseTestingUtility::startMiniCluster() to use builder pattern
> --
>
> Key: HBASE-21071
> URL: https://issues.apache.org/jira/browse/HBASE-21071
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-21071.000.patch, HBASE-21071.001.patch, 
> HBASE-21071.002.patch, HBASE-21071.003.patch, HBASE-21071.004.patch, 
> HBASE-21071.005.patch, HBASE-21071.006.patch, HBASE-21071.006.patch, 
> HBASE-21071.006.patch, HBASE-21071.006.patch, HBASE-21071.006.patch, 
> HBASE-21071.006.patch, HBASE-21071.branch-2.006.patch
>
>
> Currently there are 13 {{startMiniCluster()}} methods to set up a mini 
> cluster, and I would not be surprised if we add a few more in the future. It is 
> good to support different combinations of optional parameters, but we have to 
> pick one of these methods carefully while still guessing at the default values 
> of the other parameters; every new option may bring yet more methods.
> One solution is to use the builder pattern: create a class 
> {{MiniClusterOptions}} along with a static class {{MiniClusterOptionsBuilder}}, 
> and add a new method {{startMiniCluster(MiniClusterOptions)}}. In {{master}} we 
> delete the old 13 methods, while in branch-2 we deprecate them.
> Thoughts?
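
As a rough sketch of the proposal, assuming the class names from the description above; field names and defaults are assumptions, not the committed API:

{code:java}
// Hypothetical sketch mirroring the MiniClusterOptions idea described above;
// the fields and defaults are assumptions, not the attached patches.
public final class MiniClusterOptions {
  private final int numMasters;
  private final int numRegionServers;
  private final int numDataNodes;

  private MiniClusterOptions(MiniClusterOptionsBuilder b) {
    this.numMasters = b.numMasters;
    this.numRegionServers = b.numRegionServers;
    this.numDataNodes = b.numDataNodes;
  }

  public int getNumMasters() { return numMasters; }
  public int getNumRegionServers() { return numRegionServers; }
  public int getNumDataNodes() { return numDataNodes; }

  public static MiniClusterOptionsBuilder builder() { return new MiniClusterOptionsBuilder(); }

  public static final class MiniClusterOptionsBuilder {
    private int numMasters = 1;
    private int numRegionServers = 1;
    private int numDataNodes = 1;

    public MiniClusterOptionsBuilder numMasters(int n) { this.numMasters = n; return this; }
    public MiniClusterOptionsBuilder numRegionServers(int n) { this.numRegionServers = n; return this; }
    public MiniClusterOptionsBuilder numDataNodes(int n) { this.numDataNodes = n; return this; }
    public MiniClusterOptions build() { return new MiniClusterOptions(this); }
  }
}
{code}

A single {{startMiniCluster(MiniClusterOptions)}} overload could then replace the 13 variants, e.g. {{util.startMiniCluster(MiniClusterOptions.builder().numRegionServers(3).build())}}.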



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21071) HBaseTestingUtility::startMiniCluster() to use builder pattern

2018-08-23 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590877#comment-16590877
 ] 

Mingliang Liu commented on HBASE-21071:
---

Interesting. {{TestFlushWithThroughputController}} is consistently unhappy. 
I'd like to resubmit, as [HBASE-21097] has been committed after our last 
Jenkins run.

> HBaseTestingUtility::startMiniCluster() to use builder pattern
> --
>
> Key: HBASE-21071
> URL: https://issues.apache.org/jira/browse/HBASE-21071
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Attachments: HBASE-21071.000.patch, HBASE-21071.001.patch, 
> HBASE-21071.002.patch, HBASE-21071.003.patch, HBASE-21071.004.patch, 
> HBASE-21071.005.patch, HBASE-21071.006.patch, HBASE-21071.006.patch, 
> HBASE-21071.006.patch, HBASE-21071.006.patch, HBASE-21071.006.patch, 
> HBASE-21071.branch-2.006.patch
>
>
> Currently there are 13 {{startMiniCluster()}} methods to set up a mini 
> cluster, and I would not be surprised if we add a few more in the future. It is 
> good to support different combinations of optional parameters, but we have to 
> pick one of these methods carefully while still guessing at the default values 
> of the other parameters; every new option may bring yet more methods.
> One solution is to use the builder pattern: create a class 
> {{MiniClusterOptions}} along with a static class {{MiniClusterOptionsBuilder}}, 
> and add a new method {{startMiniCluster(MiniClusterOptions)}}. In {{master}} we 
> delete the old 13 methods, while in branch-2 we deprecate them.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20890) PE filterScan seems to be stuck forever

2018-08-23 Thread Abhishek Goyal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Goyal updated HBASE-20890:
---
Attachment: HBASE-20890.001.patch
Status: Patch Available  (was: Reopened)

> PE filterScan seems to be stuck forever
> ---
>
> Key: HBASE-20890
> URL: https://issues.apache.org/jira/browse/HBASE-20890
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.3
>Reporter: Vikas Vishwakarma
>Assignee: Abhishek Goyal
>Priority: Minor
> Attachments: HBASE-20890.001.patch
>
>
> Command Used
> {code:java}
> ~/current/bigdata-hbase/hbase/hbase/bin/hbase pe --nomapred randomWrite 1 > 
> write 2>&1
> ~/current/bigdata-hbase/hbase/hbase/bin/hbase pe --nomapred filterScan 1 > 
> filterScan 2>&1
> {code}
>  
> Output
> This kept running for several hours just printing the below messages in logs
>  
> {code:java}
> -bash-4.1$ grep "Advancing internal scanner to startKey" filterScan.1 | head
> 2018-07-13 10:44:45,188 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> 2018-07-13 10:44:45,976 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> 2018-07-13 10:44:46,695 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> .
> -bash-4.1$ grep "Advancing internal scanner to startKey" filterScan.1 | tail
> 2018-07-15 06:20:22,353 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> 2018-07-15 06:20:23,044 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> 2018-07-15 06:20:23,768 DEBUG [TestClient-0] client.ClientScanner - Advancing 
> internal scanner to startKey at '52359'
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

