[jira] [Created] (HBASE-21047) Object creation of StoreFileScanner thru constructor and close may leave refCount to -1

2018-08-13 Thread Vishal Khandelwal (JIRA)
Vishal Khandelwal created HBASE-21047:
-

 Summary: Object creation of StoreFileScanner thru constructor and 
close may leave refCount to -1
 Key: HBASE-21047
 URL: https://issues.apache.org/jira/browse/HBASE-21047
 Project: HBase
  Issue Type: Bug
Reporter: Vishal Khandelwal
Assignee: Vishal Khandelwal


When a *StoreFileScanner* is created through its constructor, the reader's refCount 
is not incremented, whereas close() does decrement it. This drives refCount to -1, 
so the isReadReference method keeps returning true (refCount.get() != 0), which 
prevents the store file from being deleted. It can also cause an issue where some 
thread is holding a scanner while the underlying reader may actually be closed 
because of the above bug, so the impact is high. Attaching a patch which makes sure 
that the refCount is always updated, whether the reference is taken through the 
getScanner method or through the constructor. The patch also contains a test which 
validates the issue. 
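
For illustration only, here is a minimal standalone sketch (hypothetical Reader/Scanner 
classes, not the actual patch) of the invariant the fix enforces: every path that hands 
out a scanner increments the reader's refCount exactly once, so close() brings it back 
to zero instead of -1.
{code}
import java.util.concurrent.atomic.AtomicInteger;

class Reader {
  final AtomicInteger refCount = new AtomicInteger(0);
  // a negative count also makes this true, which blocks store file deletion
  boolean isReferenced() { return refCount.get() != 0; }
}

class Scanner implements AutoCloseable {
  private final Reader reader;

  Scanner(Reader reader) {
    this.reader = reader;
    reader.refCount.incrementAndGet(); // count the reference taken in the constructor
  }

  @Override
  public void close() {
    reader.refCount.decrementAndGet(); // pairs with the increment above instead of dropping to -1
  }
}
{code}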



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21041) Memstore's heap size will be decreased to minus zero after flush

2018-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579289#comment-16579289
 ] 

Hadoop QA commented on HBASE-21041:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
24s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
57s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
50s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}162m  
5s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-21041 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935463/HBASE-21041.branch-2.0.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux ae7a1d0654b4 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.0 / 2b7450fe60 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14029/testReport/ |
| Max. process+thread count | 4534 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14029/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-13 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579282#comment-16579282
 ] 

Reid Chan commented on HBASE-20993:
---

{quote} handling it before the SASL logic seemed less complicated. {quote}
In the original code base it is also handled before SASL, as you said; the 
preamble is processed before the SASL logic. Sorry, I may not be getting your point.

{code}
 writeConnectionHeaderPreamble(outStream);
+// Read and verify preamble reply from server
+ByteBuffer preambleBuffer = ByteBuffer.allocate(6);
+readPreambleReply(inStream, preambleBuffer);
+verifyPreambleReply(preambleBuffer);
+
 if (useSasl) {
{code}
The client side is right. The problem is on the server side:
{code}
 // this branch duplicates my quoted code,
 // and it doesn't cover clients using DIGEST...
 // the hbase.ipc.server.fallback-to-simple-auth-allowed parameter is meant for
 // a simple client against a kerberized server.
 if (authMethod == AuthMethod.KERBEROS && !isSecurityEnabled) { 
{code}
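For example, one way that branch could be broadened (a sketch only, not what the patch 
does) so it is not KERBEROS-specific:
{code}
// hypothetical illustration: handle any client that negotiated a non-SIMPLE auth
// method against an insecure server, so DIGEST clients are covered as well
if (authMethod != AuthMethod.SIMPLE && !isSecurityEnabled) {
  // ... reply with the fallback preamble so the client can retry with SIMPLE auth
}
{code}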

About the netty client you mentioned: I'm curious about the error log; could you 
post it?

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.2.001.patch
>
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal 
> are set correctly.
> A simple-auth HBase cluster and a kerberized HBase client application: an 
> application trying to read/write/create/delete a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> 

[jira] [Commented] (HBASE-21005) Maven site configuration causes downstream projects to get a directory named ${project.basedir}

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579281#comment-16579281
 ] 

Hudson commented on HBASE-21005:


Results for branch branch-2
[build #1106 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1106/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1106//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1106//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1106//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Maven site configuration causes downstream projects to get a directory named 
> ${project.basedir}
> ---
>
> Key: HBASE-21005
> URL: https://issues.apache.org/jira/browse/HBASE-21005
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Matt Burgess
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-21005.001.patch, HBASE-21005.002.patch
>
>
> Matt told me about this interesting issue they see down in Apache NiFi's build.
> NiFi depends on HBase for some code that they provide to their users. As a 
> part of the build process of NiFi, they are seeing a directory named 
> {{$\{project.basedir}}} get created the first time they build with an empty 
> Maven repo. Matt reports that after a javax.el artifact is cached, Maven will 
> stop creating the directory; however, if you wipe that artifact from the 
> Maven repo, the next build will end up re-creating it.
> I believe I've seen this with Phoenix, too, but never investigated why it was 
> actually happening.
> My hunch is that it's related to the local maven repo that we create to 
> "patch" in our custom maven-fluido-skin jar (HBASE-14785). I'm not sure if we 
> can "work" around this by pushing the custom local repo into a profile and 
> only activating that for the mvn-site. Another solution would be to publish 
> the maven-fluido-jar to central with custom coordinates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-21046) Set version to 2.0.2 on branch-2.0 in prep for first RC

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-21046.
---
Resolution: Fixed

Ran {{mvn clean org.codehaus.mojo:versions-maven-plugin:2.5:set 
-DnewVersion=2.0.2}} and pushed the change on branch-2.0.

> Set version to 2.0.2 on branch-2.0 in prep for first RC
> ---
>
> Key: HBASE-21046
> URL: https://issues.apache.org/jira/browse/HBASE-21046
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21046) Set version to 2.0.2 on branch-2.0 in prep for first RC

2018-08-13 Thread stack (JIRA)
stack created HBASE-21046:
-

 Summary: Set version to 2.0.2 on branch-2.0 in prep for first RC
 Key: HBASE-21046
 URL: https://issues.apache.org/jira/browse/HBASE-21046
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack
 Fix For: 2.0.2






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21045) Make a 2.0.2 release

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21045:
--
Fix Version/s: 2.0.2

> Make a 2.0.2 release
> 
>
> Key: HBASE-21045
> URL: https://issues.apache.org/jira/browse/HBASE-21045
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Major
> Fix For: 2.0.2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21045) Make a 2.0.2 release

2018-08-13 Thread stack (JIRA)
stack created HBASE-21045:
-

 Summary: Make a 2.0.2 release
 Key: HBASE-21045
 URL: https://issues.apache.org/jira/browse/HBASE-21045
 Project: HBase
  Issue Type: Bug
Reporter: stack






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-13 Thread Jack Bearden (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579263#comment-16579263
 ] 

Jack Bearden commented on HBASE-20993:
--

{quote}Not understand why you add your own which i don't think necessary, 
better you can explain more.
{quote}
Perhaps you're right... it may not be necessary, to be honest. My reasoning was 
just that handling it before the SASL logic seemed less complicated. Rather 
than jumping around all those conditionals in the RpcServer SASL code, I can 
just handle it before even going in -- as I have the authcode at that point in 
the execution. 

I do see your point, however. It should make the netty client changes a lot 
more straightforward too. I'll move some things around and use those functions 
we have already. Thanks for the review!

 

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.2.001.patch
>
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal 
> are set correctly.
> A simple-auth HBase cluster and a kerberized HBase client application: an 
> application trying to read/write/create/delete a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
>   at 
> 

[jira] [Updated] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21044:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch-2.0 and branch-2.1 only. Left it in place on master.

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch, 
> HBASE-21044.branch-2.0.002.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20772) Controlled shutdown fills Master log with the disturbing message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX transition to CLOSE

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579261#comment-16579261
 ] 

Hudson commented on HBASE-20772:


Results for branch branch-2.1
[build #185 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/185/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/185//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/185//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/185//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Controlled shutdown fills Master log with the disturbing message "No matching 
> procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX 
> transition to CLOSED
> 
>
> Key: HBASE-20772
> URL: https://issues.apache.org/jira/browse/HBASE-20772
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.1
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20772.branch-2.0.001.patch, 
> HBASE-20772.branch-2.0.002.patch
>
>
> I just saw this and was disturbed but this is a controlled shutdown. Change 
> message so it doesn't freak out the poor operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21005) Maven site configuration causes downstream projects to get a directory named ${project.basedir}

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579260#comment-16579260
 ] 

Hudson commented on HBASE-21005:


Results for branch branch-2.1
[build #185 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/185/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/185//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/185//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/185//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Maven site configuration causes downstream projects to get a directory named 
> ${project.basedir}
> ---
>
> Key: HBASE-21005
> URL: https://issues.apache.org/jira/browse/HBASE-21005
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Matt Burgess
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-21005.001.patch, HBASE-21005.002.patch
>
>
> Matt told me about this interesting issue they see down in Apache NiFi's build.
> NiFi depends on HBase for some code that they provide to their users. As a 
> part of the build process of NiFi, they are seeing a directory named 
> {{$\{project.basedir}}} get created the first time they build with an empty 
> Maven repo. Matt reports that after a javax.el artifact is cached, Maven will 
> stop creating the directory; however, if you wipe that artifact from the 
> Maven repo, the next build will end up re-creating it.
> I believe I've seen this with Phoenix, too, but never investigated why it was 
> actually happening.
> My hunch is that it's related to the local maven repo that we create to 
> "patch" in our custom maven-fluido-skin jar (HBASE-14785). I'm not sure if we 
> can "work" around this by pushing the custom local repo into a profile and 
> only activating that for the mvn-site. Another solution would be to publish 
> the maven-fluido-jar to central with custom coordinates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579259#comment-16579259
 ] 

Hadoop QA commented on HBASE-19008:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} hbase-client: The patch generated 34 new + 342 
unchanged - 79 fixed = 376 total (was 421) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
19s{color} | {color:red} hbase-server: The patch generated 6 new + 392 
unchanged - 14 fixed = 398 total (was 406) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} hbase-spark: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
19s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}113m 
19s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
52s{color} | {color:green} hbase-spark in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-19008 |
| JIRA Patch URL | 

[jira] [Commented] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579255#comment-16579255
 ] 

Hadoop QA commented on HBASE-21044:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 0s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} rubocop {color} | {color:green}  0m  
3s{color} | {color:green} The patch generated 0 new + 0 unchanged - 6 fixed = 0 
total (was 6) {color} |
| {color:green}+1{color} | {color:green} ruby-lint {color} | {color:green}  0m  
1s{color} | {color:green} The patch generated 0 new + 0 unchanged - 14 fixed = 
0 total (was 14) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
55s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-21044 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935476/HBASE-21044.branch-2.0.002.patch
 |
| Optional Tests |  asflicense  unit  rubocop  ruby_lint  |
| uname | Linux 0b55e6749f86 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.0 / 2b7450fe60 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| rubocop | v0.58.2 |
| ruby-lint | v2.3.1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14032/testReport/ |
| Max. process+thread count | 2041 (vs. ulimit of 1) |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14032/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch, 
> HBASE-21044.branch-2.0.002.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579253#comment-16579253
 ] 

Hadoop QA commented on HBASE-21042:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
 3s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 98m 
53s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:61288f8 |
| JIRA Issue | 

[jira] [Commented] (HBASE-21041) Memstore's heap size will be decreased to minus zero after flush

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579247#comment-16579247
 ] 

Ted Yu commented on HBASE-21041:


Looks good overall.
{code}
+// Record the MutableSegment' heap overhead when initialing
{code}
{{initialing}} is a typo; should be "initializing".



> Memstore's heap size will be decreased to minus zero after flush
> 
>
> Key: HBASE-21041
> URL: https://issues.apache.org/jira/browse/HBASE-21041
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21041.branch-2.0.001.patch, 
> HBASE-21041.branch-2.0.002.patch
>
>
> When creating an active mutable segment (MutableSegment) in the memstore, the 
> MutableSegment's deep overhead (208 bytes) was added to its own heap size, but 
> not to the region's memstore heap size. The same goes for the immutable 
> segment (CSLMImmutableSegment) which the mutable segment is turned into 
> (an additional 8 bytes) later. So after one flush, the memstore's heap size will 
> be decreased to -216 bytes, and the negative number will accumulate after every 
> flush. 
> CompactingMemstore has this problem too.
> We need to record the overhead of CSLMImmutableSegment and MutableSegment in 
> the corresponding region's memstore size.
> For CellArrayImmutableSegment, CellChunkImmutableSegment and 
> CompositeImmutableSegment, it is not necessary to do so, because inside 
> CompactingMemstore the overheads are already taken care of when transferring 
> a CSLMImmutableSegment into them.
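
A minimal, hypothetical sketch of the accounting mismatch described above (class and 
method names are illustrative, not HBase's): if the segment overhead is never added at 
creation time, subtracting the segment's full heap size at flush time drives the 
region-level counter below zero.
{code}
class RegionMemStoreAccounting {
  private long heapSize = 0;

  void onActiveSegmentCreated(long segmentOverhead) {
    heapSize += segmentOverhead; // the missing piece: count the ~208-byte overhead up front
  }

  void onCellAdded(long cellHeapSize) {
    heapSize += cellHeapSize;
  }

  void onSegmentFlushed(long segmentHeapSize) {
    // subtracts cells plus overhead; without onActiveSegmentCreated this goes negative
    heapSize -= segmentHeapSize;
  }

  long heapSize() {
    return heapSize;
  }
}
{code}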



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20772) Controlled shutdown fills Master log with the disturbing message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX transition to CLOSE

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579246#comment-16579246
 ] 

Hudson commented on HBASE-20772:


Results for branch branch-2.0
[build #673 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/673/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/673//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/673//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/673//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Controlled shutdown fills Master log with the disturbing message "No matching 
> procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX 
> transition to CLOSED
> 
>
> Key: HBASE-20772
> URL: https://issues.apache.org/jira/browse/HBASE-20772
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.1
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20772.branch-2.0.001.patch, 
> HBASE-20772.branch-2.0.002.patch
>
>
> I just saw this and was disturbed but this is a controlled shutdown. Change 
> message so it doesn't freak out the poor operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579244#comment-16579244
 ] 

stack commented on HBASE-21044:
---

.002 just removes the test. omit isn't available according to rubocop, and skip 
ain't there either.

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch, 
> HBASE-21044.branch-2.0.002.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21044:
--
Attachment: HBASE-21044.branch-2.0.002.patch

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch, 
> HBASE-21044.branch-2.0.002.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21044:
--
Fix Version/s: (was: 3.0.0)

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579226#comment-16579226
 ] 

stack commented on HBASE-21044:
---

bq. My only concern with just omitting it is that folks seem marginally more 
likely to fix things on the flaky list than things that are marked ignore.

Smile. In theory.

I'll disable it on branch-2.1 and branch-2.0 only, then. It will remain on master so 
those perusing the master branch failures will notice it... but it won't destabilize 
the 2.0 and 2.1 builds.

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579225#comment-16579225
 ] 

Sean Busbey commented on HBASE-21044:
-

I was just making sure it was an issue with not having a branch-specific flaky 
list, since I'm trying to land them this week.

My only concern with just omitting it is that folks seem marginally more likely 
to fix things on the flaky list than things that are marked ignore. I'll add 
"make a report of ignored tests" to my wishlist of branch information and we 
can work on improving that trend later.

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579225#comment-16579225
 ] 

Sean Busbey edited comment on HBASE-21044 at 8/14/18 3:56 AM:
--

I was just making sure it wasn't an issue with not having a branch-specific 
flaky list, since I'm trying to land them this week.

My only concern with just omitting it is that folks seem marginally more likely 
to fix things on the flaky list than things that are marked ignore. I'll add 
"make a report of ignored tests" to my wishlist of branch information and we 
can work on improving that trend later.


was (Author: busbey):
I was just making sure it was an issue with not having a branch-specific flaky 
list, since I'm trying to land them this week.

My only concern with just omitting it is that folks seem marginally more likely 
to fix things on the flaky list than things that are marked ignore. I'll add 
"make a report of ignored tests" to my wishlist of branch information and we 
can work on improving that trend later.

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579206#comment-16579206
 ] 

Hadoop QA commented on HBASE-21044:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
45s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m  
4s{color} | {color:red} The patch generated 2 new + 6 unchanged - 0 fixed = 8 
total (was 6) {color} |
| {color:orange}-0{color} | {color:orange} ruby-lint {color} | {color:orange}  
0m  2s{color} | {color:orange} The patch generated 1 new + 14 unchanged - 0 
fixed = 15 total (was 14) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
56s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-21044 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935469/HBASE-21044.branch-2.0.001.patch
 |
| Optional Tests |  asflicense  unit  rubocop  ruby_lint  |
| uname | Linux 98af46393ccc 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.0 / 2b7450fe60 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| rubocop | v0.58.2 |
| rubocop | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14031/artifact/patchprocess/diff-patch-rubocop.txt
 |
| ruby-lint | v2.3.1 |
| ruby-lint | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14031/artifact/patchprocess/diff-patch-ruby-lint.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14031/testReport/ |
| Max. process+thread count | 2094 (vs. ulimit of 1) |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14031/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21028) Backport HBASE-18633 to branch-1.3

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579201#comment-16579201
 ] 

Hudson commented on HBASE-21028:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #450 (See 
[https://builds.apache.org/job/HBase-1.3-IT/450/])
HBASE-21028 Backport HBASE-18633 to branch-1.3 (vikasv: rev 
fe55991b8ea242bc9a460f83602753d63c8e33df)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiLogThreshold.java


> Backport HBASE-18633 to branch-1.3
> --
>
> Key: HBASE-21028
> URL: https://issues.apache.org/jira/browse/HBASE-21028
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.3.2
>Reporter: Daniel Wong
>Priority: Minor
> Fix For: 1.3.3
>
> Attachments: HBASE-21028-branch-1.3.patch, 
> HBASE-21028-branch-1.3.patch
>
>
> The logging improvements in HBASE-18633 would give greater visibility on 
> systems in 1.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18633) Add more info to understand the source/scenario of large batch requests exceeding threshold

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579202#comment-16579202
 ] 

Hudson commented on HBASE-18633:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #450 (See 
[https://builds.apache.org/job/HBase-1.3-IT/450/])
HBASE-21028 Backport HBASE-18633 to branch-1.3 (vikasv: rev 
fe55991b8ea242bc9a460f83602753d63c8e33df)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiLogThreshold.java


> Add more info to understand the source/scenario of large batch requests 
> exceeding threshold
> ---
>
> Key: HBASE-18633
> URL: https://issues.apache.org/jira/browse/HBASE-18633
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.4.0, 1.3.1, 2.0.0-alpha-3
>Reporter: Vikas Vishwakarma
>Assignee: Vikas Vishwakarma
>Priority: Major
> Fix For: 1.4.0, 2.0.0-alpha-3, 2.0.0
>
> Attachments: HBASE-18633.branch-1.001.patch, 
> HBASE-18633.master.001.patch, HBASE-18633.master.001.patch
>
>
> In our controlled test env, we are seeing frequent Large batch operation 
> detected warnings (as implemented in HBASE-18023). 
> We are not running any client with large batch sizes on this test env, but we 
> start seeing these warnings after some runtime. Maybe it is caused by 
> some error / retry scenario. It could also be related to Phoenix index updates, 
> based on surrounding activity in the logs. We need to add more info, like the 
> table/region name and anything else that will enable debugging the source or 
> the scenario in which these warnings occur. 
> 2017-08-12 03:40:33,919 WARN  [7,queue=0,port=16020] 
> regionserver.RSRpcServices - Large batch operation detected (greater than 
> 5000) (HBASE-18023). Requested Number of Rows: 7108 Client: xxx
> 2017-08-12 03:40:34,476 WARN  [7,queue=0,port=16020] 
> regionserver.RSRpcServices - Large batch operation detected (greater than 
> 5000) (HBASE-18023). Requested Number of Rows: 7096 Client: xxx
> 2017-08-12 03:40:34,483 WARN  [4,queue=0,port=16020] 
> regionserver.RSRpcServices - Large batch operation detected (greater than 
> 5000) (HBASE-18023). Requested Number of Rows: 7091 Client: xxx
> 2017-08-12 03:40:35,728 WARN  [3,queue=0,port=16020] 
> regionserver.RSRpcServices - Large batch operation detected (greater than 
> 5000) (HBASE-18023). Requested Number of Rows: 7102 Client: xxx
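
For illustration, a minimal standalone sketch of the kind of threshold warning described 
above, extended with the table/region context the issue asks for. The class name, the 
logging setup, and the parameters are assumptions made for this example; this is not the 
actual RSRpcServices code.

{code:java}
import java.util.logging.Logger;

// Standalone sketch only; not the real RSRpcServices implementation.
public class LargeBatchWarningSketch {
  private static final Logger LOG = Logger.getLogger("LargeBatchWarningSketch");
  // 5000 matches the "greater than 5000" seen in the warnings above; assumed configurable in practice.
  private static final int ROW_SIZE_WARN_THRESHOLD = 5000;

  static void maybeWarnLargeBatch(int requestedRows, String client, String table, String region) {
    if (requestedRows > ROW_SIZE_WARN_THRESHOLD) {
      // Including table and region in the message is the extra context the issue requests.
      LOG.warning("Large batch operation detected (greater than " + ROW_SIZE_WARN_THRESHOLD
          + "). Requested Number of Rows: " + requestedRows + " Client: " + client
          + " Table: " + table + " Region: " + region);
    }
  }

  public static void main(String[] args) {
    maybeWarnLargeBatch(7108, "xxx", "TEST_TABLE", "TEST_TABLE,,1534200000000.abcdef.");
  }
}
{code}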



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-13 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579193#comment-16579193
 ] 

Reid Chan commented on HBASE-19008:
---

Still lots of problems. :)


{quote}we have a formatter template in dev-support directory, FYI{quote}
This {{dev-support}} directory is under your hbase project root directory; please check 
again.

hbase doesn't follow {{google-code-style}} :), so you'd better read this chapter 
before the next patch: [IDEs|https://hbase.apache.org/book.html#_ides]

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-2.patch, HBASE-19008-3.patch, HBASE-19008-4.patch, 
> HBASE-19008-5.patch, HBASE-19008-6.patch, HBASE-19008-7.patch, 
> HBASE-19008-8.patch, HBASE-19008-9.patch, HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.
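
For illustration, a minimal sketch of what such an {{equals}}/{{hashCode}} pair looks 
like, written against a made-up filter-like class; the class and field names are 
assumptions, and this is not the actual KeyOnlyFilter code.

{code:java}
import java.util.Arrays;
import java.util.Objects;

// Made-up example class standing in for a stock Filter; illustration only.
public class ExampleFilter {
  private final byte[] qualifier;
  private final boolean lenAsVal;

  public ExampleFilter(byte[] qualifier, boolean lenAsVal) {
    this.qualifier = qualifier;
    this.lenAsVal = lenAsVal;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (!(obj instanceof ExampleFilter)) {
      return false;
    }
    ExampleFilter other = (ExampleFilter) obj;
    // Byte arrays need Arrays.equals; reference equality would break comparisons.
    return lenAsVal == other.lenAsVal && Arrays.equals(qualifier, other.qualifier);
  }

  @Override
  public int hashCode() {
    // Keep hashCode consistent with equals: hash the array contents, not its identity.
    return Objects.hash(Arrays.hashCode(qualifier), lenAsVal);
  }
}
{code}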



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579191#comment-16579191
 ] 

Ted Yu commented on HBASE-19008:


[~liuml07]:
Can you see if your comments have been addressed ?

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-2.patch, HBASE-19008-3.patch, HBASE-19008-4.patch, 
> HBASE-19008-5.patch, HBASE-19008-6.patch, HBASE-19008-7.patch, 
> HBASE-19008-8.patch, HBASE-19008-9.patch, HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579190#comment-16579190
 ] 

stack commented on HBASE-21044:
---

Thinking more, the check running before the procedure finishes is the more likely 
explanation, given that we dump all recent Procedures.

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21028) Backport HBASE-18633 to branch-1.3

2018-08-13 Thread Vikas Vishwakarma (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579186#comment-16579186
 ] 

Vikas Vishwakarma commented on HBASE-21028:
---

[~dbwong] committed it to 1.3.3. Please assign it to yourself once you get 
added to the contributor list. 

[~apurtell] [~taklwu]

> Backport HBASE-18633 to branch-1.3
> --
>
> Key: HBASE-21028
> URL: https://issues.apache.org/jira/browse/HBASE-21028
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.3.2
>Reporter: Daniel Wong
>Priority: Minor
> Fix For: 1.3.3
>
> Attachments: HBASE-21028-branch-1.3.patch, 
> HBASE-21028-branch-1.3.patch
>
>
> The logging improvements in HBASE-18633 would give greater visibility on 
> systems in 1.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21044:
--
Status: Patch Available  (was: Open)

Added 

+  omit('Flakey -- see HBASE-21043')

.. under the define_test. Seems to turn the list_procedures test OFF.

Added a print of the output, and it looks like it is dumping all procedures, so 
perhaps something more interesting is going on in here (not going to take the time 
to look just now).

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21044:
--
Attachment: HBASE-21044.branch-2.0.001.patch

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21044.branch-2.0.001.patch
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21028) Backport HBASE-18633 to branch-1.3

2018-08-13 Thread Vikas Vishwakarma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma reassigned HBASE-21028:
-

Assignee: (was: Vikas Vishwakarma)

> Backport HBASE-18633 to branch-1.3
> --
>
> Key: HBASE-21028
> URL: https://issues.apache.org/jira/browse/HBASE-21028
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.3.2
>Reporter: Daniel Wong
>Priority: Minor
> Fix For: 1.3.3
>
> Attachments: HBASE-21028-branch-1.3.patch, 
> HBASE-21028-branch-1.3.patch
>
>
> The logging improvements in HBASE-18633 would give greater visibility on 
> systems in 1.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21028) Backport HBASE-18633 to branch-1.3

2018-08-13 Thread Vikas Vishwakarma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-21028:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Backport HBASE-18633 to branch-1.3
> --
>
> Key: HBASE-21028
> URL: https://issues.apache.org/jira/browse/HBASE-21028
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.3.2
>Reporter: Daniel Wong
>Priority: Minor
> Fix For: 1.3.3
>
> Attachments: HBASE-21028-branch-1.3.patch, 
> HBASE-21028-branch-1.3.patch
>
>
> The logging improvements in HBASE-18633 would give greater visibility on 
> systems in 1.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21028) Backport HBASE-18633 to branch-1.3

2018-08-13 Thread Vikas Vishwakarma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma reassigned HBASE-21028:
-

Assignee: Vikas Vishwakarma

> Backport HBASE-18633 to branch-1.3
> --
>
> Key: HBASE-21028
> URL: https://issues.apache.org/jira/browse/HBASE-21028
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.3.2
>Reporter: Daniel Wong
>Assignee: Vikas Vishwakarma
>Priority: Minor
> Fix For: 1.3.3
>
> Attachments: HBASE-21028-branch-1.3.patch, 
> HBASE-21028-branch-1.3.patch
>
>
> The logging improvements in HBASE-18633 would give greater visibility on 
> systems in 1.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579182#comment-16579182
 ] 

stack commented on HBASE-21044:
---

bq. is this to get TestShell as a class out of the flaky list?

Yes. It's failing on branch-2.0. I want to cut an RC and want to bring down the rate of 
failures (it's been creeping up).

What are you thinking, [~busbey]?

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-13 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579176#comment-16579176
 ] 

Reid Chan edited comment on HBASE-20993 at 8/14/18 3:13 AM:


Lots of problems on the server side.
{{allowFallbackToSimpleAuth}} in {{RpcServer}} is controlled by the 
{{hbase.ipc.server.fallback-to-simple-auth-allowed}} parameter; I'm afraid you 
misused it.

Let me be clear and point it out: there's already a snippet:
{code:java}
  if (!isSecurityEnabled   // server is 
simple
  && authMethod != AuthMethod.SIMPLE // but client is kerberized
  ) {
// so here server asks clients to fall back to simple.
doRawSaslReply(SaslStatus.SUCCESS, new IntWritable(
SaslUtil.SWITCH_TO_SIMPLE_AUTH), null, null);
authMethod = AuthMethod.SIMPLE;
// client has already sent the initial Sasl message and we
// should ignore it. Both client and server should fall back
// to simple auth from now on.
skipInitialSaslHandshake = true;
  }
{code}
I don't understand why you added your own, which I don't think is necessary; it would be 
better if you could explain more.

Tests are good. Could you add one more: a kerberized client against a kerberized server?


was (Author: reidchan):
Lots of problems in server side,
{{allowFallbackToSimpleAuth}} in {{RpcServer}} is controlled by 
{{hbase.ipc.server.fallback-to-simple-auth-allowed}} parameter, i'm afraid you 
misused it.

Let me be clear and point it out, there's already a snippet:
{code}
  if (!isSecurityEnabled   // server is 
simple
  && authMethod != AuthMethod.SIMPLE // but client is kerberized
  ) {
// so here server asks clients to fall back to simple.
doRawSaslReply(SaslStatus.SUCCESS, new IntWritable(
SaslUtil.SWITCH_TO_SIMPLE_AUTH), null, null);
authMethod = AuthMethod.SIMPLE;
// client has already sent the initial Sasl message and we
// should ignore it. Both client and server should fall back
// to simple auth from now on.
skipInitialSaslHandshake = true;
  }
{code}
Not understand why you add your own which i don't think necessary, better you 
can explain more.

Tests are good. Add one more? kerberized client against kerberized server?

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.2.001.patch
>
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal 
> are correctly set.
> With a simple-auth hbase cluster and a kerberized hbase client application, an 
> application trying to r/w/c/d a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> 

[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579178#comment-16579178
 ] 

Ted Yu commented on HBASE-19008:


[~reidchan]:
Can you see if your comments have been addressed ?

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-2.patch, HBASE-19008-3.patch, HBASE-19008-4.patch, 
> HBASE-19008-5.patch, HBASE-19008-6.patch, HBASE-19008-7.patch, 
> HBASE-19008-8.patch, HBASE-19008-9.patch, HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-13 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579176#comment-16579176
 ] 

Reid Chan edited comment on HBASE-20993 at 8/14/18 3:08 AM:


Lots of problems in server side,
{{allowFallbackToSimpleAuth}} in {{RpcServer}} is controlled by 
{{hbase.ipc.server.fallback-to-simple-auth-allowed}} parameter, i'm afraid you 
misused it.

Let me be clear and point it out, there's already a snippet:
{code}
  if (!isSecurityEnabled   // server is 
simple
  && authMethod != AuthMethod.SIMPLE // but client is kerberized
  ) {
// so here server asks clients to fall back to simple.
doRawSaslReply(SaslStatus.SUCCESS, new IntWritable(
SaslUtil.SWITCH_TO_SIMPLE_AUTH), null, null);
authMethod = AuthMethod.SIMPLE;
// client has already sent the initial Sasl message and we
// should ignore it. Both client and server should fall back
// to simple auth from now on.
skipInitialSaslHandshake = true;
  }
{code}
Not understand why you add your own which i don't think necessary, better you 
can explain more.

Tests are good. Add one more? kerberized client against kerberized server?


was (Author: reidchan):
Lots of problems in server side,
{{allowFallbackToSimpleAuth}} in {{RpcServer}} is controlled by 
{{hbase.ipc.server.fallback-to-simple-auth-allowed}} parameter, i'm afraid you 
misused it.

There's already a snippet
{code}
  if (!isSecurityEnabled   // server is 
simple
  && authMethod != AuthMethod.SIMPLE // but client is kerberized
  ) {
// so here server asks clients to fall back to simple.
doRawSaslReply(SaslStatus.SUCCESS, new IntWritable(
SaslUtil.SWITCH_TO_SIMPLE_AUTH), null, null);
authMethod = AuthMethod.SIMPLE;
// client has already sent the initial Sasl message and we
// should ignore it. Both client and server should fall back
// to simple auth from now on.
skipInitialSaslHandshake = true;
  }
{code}
Not understand why you add your own which i don't think necessary, better you 
can explain more.

Tests are good. Add one more? kerberized client against kerberized server?

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.2.001.patch
>
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal 
> are correctly set.
> With a simple-auth hbase cluster and a kerberized hbase client application, an 
> application trying to r/w/c/d a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> 

[jira] [Comment Edited] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-13 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579176#comment-16579176
 ] 

Reid Chan edited comment on HBASE-20993 at 8/14/18 3:07 AM:


Lots of problems in server side,
{{allowFallbackToSimpleAuth}} in {{RpcServer}} is controlled by 
{{hbase.ipc.server.fallback-to-simple-auth-allowed}} parameter, i'm afraid you 
misused it.

There's already a snippet
{code}
  if (!isSecurityEnabled   // server is 
simple
  && authMethod != AuthMethod.SIMPLE // but client is kerberized
  ) {
// so here server asks clients to fall back to simple.
doRawSaslReply(SaslStatus.SUCCESS, new IntWritable(
SaslUtil.SWITCH_TO_SIMPLE_AUTH), null, null);
authMethod = AuthMethod.SIMPLE;
// client has already sent the initial Sasl message and we
// should ignore it. Both client and server should fall back
// to simple auth from now on.
skipInitialSaslHandshake = true;
  }
{code}
Not understand why you add your own which i don't think necessary, better you 
can explain more.

Tests are good. Add one more? kerberized client against kerberized server?


was (Author: reidchan):
Lots of problems in server side,
{{allowFallbackToSimpleAuth}} in {{RpcServer}} is controlled by 
{{hbase.ipc.server.fallback-to-simple-auth-allowed}} parameter, i'm afraid you 
misused it.

There's already a snippet
{code}
  if (!isSecurityEnabled && authMethod != AuthMethod.SIMPLE) {
doRawSaslReply(SaslStatus.SUCCESS, new IntWritable(
SaslUtil.SWITCH_TO_SIMPLE_AUTH), null, null);
authMethod = AuthMethod.SIMPLE;
// client has already sent the initial Sasl message and we
// should ignore it. Both client and server should fall back
// to simple auth from now on.
skipInitialSaslHandshake = true;
  }
{code}
Not understand why you add your own which i don't think necessary, better you 
can explain more.

Tests are good. Add one more? kerberized client against kerberized server?

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.2.001.patch
>
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal 
> are correctly set.
> With a simple-auth hbase cluster and a kerberized hbase client application, an 
> application trying to r/w/c/d a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> 

[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-13 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579176#comment-16579176
 ] 

Reid Chan commented on HBASE-20993:
---

Lots of problems in server side,
{{allowFallbackToSimpleAuth}} in {{RpcServer}} is controlled by 
{{hbase.ipc.server.fallback-to-simple-auth-allowed}} parameter, i'm afraid you 
misused it.

There's already a snippet
{code}
  if (!isSecurityEnabled && authMethod != AuthMethod.SIMPLE) {
doRawSaslReply(SaslStatus.SUCCESS, new IntWritable(
SaslUtil.SWITCH_TO_SIMPLE_AUTH), null, null);
authMethod = AuthMethod.SIMPLE;
// client has already sent the initial Sasl message and we
// should ignore it. Both client and server should fall back
// to simple auth from now on.
skipInitialSaslHandshake = true;
  }
{code}
Not understand why you add your own which i don't think necessary, better you 
can explain more.

Tests are good. Add one more? kerberized client against kerberized server?

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.2.001.patch
>
>
> It is easily reproducible.
> client's hbase-site.xml: hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, keytab and principal 
> are correctly set.
> With a simple-auth hbase cluster and a kerberized hbase client application, an 
> application trying to r/w/c/d a table will get the following exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> 

[jira] [Commented] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579175#comment-16579175
 ] 

Sean Busbey commented on HBASE-21044:
-

is this to get TestShell as a class out of the flaky list?

> Disable flakey TestShell list_procedures
> 
>
> Key: HBASE-21044
> URL: https://issues.apache.org/jira/browse/HBASE-21044
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
>
> Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20968) list_procedures_test fails due to no matching regex

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579173#comment-16579173
 ] 

Ted Yu commented on HBASE-20968:


Verified locally that the fix allows list_procedures_test to pass.

> list_procedures_test fails due to no matching regex
> ---
>
> Key: HBASE-20968
> URL: https://issues.apache.org/jira/browse/HBASE-20968
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20968.v2.txt, 
> org.apache.hadoop.hbase.client.TestShell-output.txt
>
>
> From test output against hadoop3:
> {code}
> 2018-07-28 12:04:24,838 DEBUG [Time-limited test] 
> procedure2.ProcedureExecutor(948): Stored pid=12, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.  
> ShellTestProcedure
> 2018-07-28 12:04:24,864 INFO  [RS-EventLoopGroup-1-3] 
> ipc.ServerRpcConnection(556): Connection from 172.18.128.12:46918, 
> version=3.0.0-SNAPSHOT, sasl=false, ugi=hbase (auth: SIMPLE), 
> service=MasterService
> 2018-07-28 12:04:24,900 DEBUG [Thread-114] master.MasterRpcServices(1157): 
> Checking to see if procedure is done pid=11
> ^[[38;5;196mF^[[0m
> ===
> Failure: 
> ^[[48;5;124;38;5;231;1mtest_list_procedures(Hbase::ListProceduresTest)^[[0m
> src/test/ruby/shell/list_procedures_test.rb:65:in `block in 
> test_list_procedures'
>  62: end
>  63:   end
>  64:
> ^[[48;5;124;38;5;231;1m  => 65:   assert_equal(1, matching_lines)^[[0m
>  66: end
>  67:   end
>  68: end
> <^[[48;5;34;38;5;231;1m1^[[0m> expected but was
> <^[[48;5;124;38;5;231;1m0^[[0m>
> ===
> ...
> 2018-07-28 12:04:25,374 INFO  [PEWorker-9] 
> procedure2.ProcedureExecutor(1316): Finished pid=12, state=SUCCESS, 
> hasLock=false; org.apache.hadoop.hbase.client.procedure.   
> ShellTestProcedure in 336msec
> {code}
> The completion of the ShellTestProcedure was after the assertion was raised.
> {code}
> def create_procedure_regexp(table_name)
>   regexp_string = '[0-9]+ .*ShellTestProcedure SUCCESS.*' \
> {code}
> The regex used by the test isn't found in test output either.
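
For illustration, a small standalone check of that regex against two made-up output 
lines; it shows that a line for a procedure that has not yet reached SUCCESS does not 
match, which is consistent with the 0-vs-1 assertion failure above. The sample lines 
are assumptions, not captured test output.

{code:java}
import java.util.regex.Pattern;

// Illustration of the matching behavior of the test's regex; sample lines are made up.
public class ProcedureRegexSketch {
  public static void main(String[] args) {
    Pattern p = Pattern.compile("[0-9]+ .*ShellTestProcedure SUCCESS.*");

    // A still-running procedure line does not match, so matching_lines stays at 0.
    System.out.println(p.matcher("12 ShellTestProcedure RUNNABLE").matches());   // false

    // Only a line that already reports SUCCESS matches.
    System.out.println(p.matcher(
        "12 org.apache.hadoop.hbase.client.procedure.ShellTestProcedure SUCCESS 2018-07-28")
        .matches());                                                             // true
  }
}
{code}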



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20175) hbase-spark needs scala dependency convergance

2018-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579171#comment-16579171
 ] 

Hadoop QA commented on HBASE-20175:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
19s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 14s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.TestReplicationDroppedTables |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20175 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935451/HBASE-20175.v05.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  shadedjars  hadoopcheck  
xml  compile  |
| uname | Linux c81d57896958 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / e705cf1447 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14026/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14026/testReport/ |
| Max. process+thread count | 4537 (vs. ulimit of 1) |
| modules | C: hbase-spark . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14026/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Created] (HBASE-21044) Disable flakey TestShell list_procedures

2018-08-13 Thread stack (JIRA)
stack created HBASE-21044:
-

 Summary: Disable flakey TestShell list_procedures
 Key: HBASE-21044
 URL: https://issues.apache.org/jira/browse/HBASE-21044
 Project: HBase
  Issue Type: Sub-task
  Components: test
Reporter: stack
Assignee: stack
 Fix For: 3.0.0, 2.0.2, 2.1.1


Disable the test for now until the parent issue is addressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21043) TestShell list_procedures is flakey

2018-08-13 Thread stack (JIRA)
stack created HBASE-21043:
-

 Summary: TestShell list_procedures is flakey
 Key: HBASE-21043
 URL: https://issues.apache.org/jira/browse/HBASE-21043
 Project: HBase
  Issue Type: Bug
  Components: shell, test
Affects Versions: 2.0.1
Reporter: stack


Fails 30% of the time in list_procedures. The test creates a Procedure and then 
tries to capture shell output to confirm it sees the just-queued Procedure, only 
it looks like the Procedure finishes too quickly. It works for a while, then 
there is a spate of failures, then it works again.

Here is how it looks in test output:

{code}
Took 5.6355 secondsTook 0.0561 seconds...F
===
Failure: test_list_procedures(Hbase::ListProceduresTest)
src/test/ruby/shell/list_procedures_test.rb:65:in `block in 
test_list_procedures'
 62: end
 63:   end
 64: 
  => 65:   assert_equal(1, matching_lines)
 66: end
 67:   end
 68: end
<1> expected but was
<0>
{code"


Then in the log output for the test, I see this for the running of the 
Procedure:

{code}
2018-08-14 00:42:50,381 DEBUG [Time-limited test] 
procedure2.ProcedureExecutor(948): Stored pid=27, state=RUNNABLE, 
hasLock=false; org.apache.hadoop.hbase.client.procedure.ShellTestProcedure
2018-08-14 00:42:50,397 INFO  [RS-EventLoopGroup-1-10] 
ipc.ServerRpcConnection(556): Connection from 67.195.81.150:50597, 
version=2.0.2-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), 
service=MasterService
F
===
Failure: test_list_procedures(Hbase::ListProceduresTest)
src/test/ruby/shell/list_procedures_test.rb:65:in `block in 
test_list_procedures'
2018-08-14 00:42:50,586 INFO  [PEWorker-16] procedure2.ProcedureExecutor(1316): 
Finished pid=27, state=SUCCESS, hasLock=false; 
org.apache.hadoop.hbase.client.procedure.ShellTestProcedure in 234msec
 62: end
 63:   end
 64: 
  => 65:   assert_equal(1, matching_lines)
 66: end
 67:   end
 68: end
<1> expected but was
<0>
===
{code}

The Procedure runs successfully but the regex test on the other end of the 
Admin is not finding what it expects -- the Procedure ran in 234ms.

Will disable in a subprocedure for now till someone has time to play w/ this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21042:
---
Attachment: 21042.branch-1.txt

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.7
>
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed finally block of HRegion#processRowsWithLocks
> This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21042:
---
Attachment: (was: 21042.branch-1.txt)

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.7
>
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed finally block of HRegion#processRowsWithLocks
> This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579146#comment-16579146
 ] 

Hadoop QA commented on HBASE-21042:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m  
6s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
55s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} branch-1 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} branch-1 passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
5s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
27s{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_181. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 27s{color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_181. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
29s{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_191. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 29s{color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_191. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  1m 
49s{color} | {color:red} patch has 16 errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  1m  
7s{color} | {color:red} The patch causes 16 errors with Hadoop v2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Updated] (HBASE-21041) Memstore's heap size will be decreased to minus zero after flush

2018-08-13 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-21041:
---
Attachment: HBASE-21041.branch-2.0.002.patch

> Memstore's heap size will be decreased to minus zero after flush
> 
>
> Key: HBASE-21041
> URL: https://issues.apache.org/jira/browse/HBASE-21041
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21041.branch-2.0.001.patch, 
> HBASE-21041.branch-2.0.002.patch
>
>
> When creating an active mutable segment (MutableSegment) in the memstore, the 
> MutableSegment's deep overhead (208 bytes) was added to its own heap size, but 
> not to the region's memstore heap size. The same applies to the immutable 
> segment (CSLMImmutableSegment) that the mutable segment is later turned into 
> (an additional 8 bytes). So after one flush, the memstore's heap size will be 
> decreased to -216 bytes, and the negative number will accumulate after every 
> flush. 
> CompactingMemstore has this problem too.
> We need to record the overhead for CSLMImmutableSegment and MutableSegment in 
> the corresponding region's memstore size.
> For CellArrayImmutableSegment, CellChunkImmutableSegment and 
> CompositeImmutableSegment, it is not necessary to do so, because inside 
> CompactingMemstore the overheads are already taken care of when transferring 
> a CSLMImmutableSegment into them.
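
For illustration, a minimal arithmetic sketch of the bookkeeping gap described above. 
The two overhead constants mirror the 208-byte and 8-byte figures from the description; 
the class, the counters, and the 10,000-byte cell payload are made up for the example 
and are not the real Segment/Memstore code.

{code:java}
// Illustration of the accounting gap, not the real Segment/Memstore classes.
public class MemstoreHeapSizeSketch {
  static final long MUTABLE_SEGMENT_DEEP_OVERHEAD = 208;   // counted by the segment only
  static final long CSLM_IMMUTABLE_EXTRA_OVERHEAD = 8;     // added on in-memory conversion

  public static void main(String[] args) {
    long regionMemstoreHeap = 0;                           // what the region tracks
    long segmentHeap = MUTABLE_SEGMENT_DEEP_OVERHEAD;      // segment counts its own overhead

    long cellsHeap = 10_000;                               // writes update both counters equally
    regionMemstoreHeap += cellsHeap;
    segmentHeap += cellsHeap;

    segmentHeap += CSLM_IMMUTABLE_EXTRA_OVERHEAD;          // conversion adds to the segment only

    regionMemstoreHeap -= segmentHeap;                     // flush removes the whole segment

    // Prints -216: the overhead that was never added to the region's counter.
    System.out.println("region memstore heap after flush = " + regionMemstoreHeap);
  }
}
{code}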



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21042:
---
Attachment: 21042.branch-1.txt

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.7
>
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed finally block of HRegion#processRowsWithLocks
> This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21042:
---
Fix Version/s: 1.4.7

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.7
>
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed finally block of HRegion#processRowsWithLocks
> This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21042:
---
Status: Patch Available  (was: Open)

> processor.getRowsToLock() always assumes there is some row being locked in 
> HRegion#processRowsWithLocks
> ---
>
> Key: HBASE-21042
> URL: https://issues.apache.org/jira/browse/HBASE-21042
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 21042.branch-1.txt
>
>
> [~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
> missed finally block of HRegion#processRowsWithLocks
> This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579123#comment-16579123
 ] 

Ted Yu commented on HBASE-18998:


Logged HBASE-21042 with patch coming soon.

> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.0, 1.3.2, 1.2.7, 1.1.13, 2.0.0-alpha-4, 2.0.0
>
> Attachments: 18998.v1.txt
>
>
> During testing, we observed the following exception:
> {code}
> 2017-10-12 02:52:26,683|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|1/1  DROP TABLE 
> testTable;
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|17/10/12 02:52:30 WARN 
> ipc.CoprocessorRpcChannel: Call failed on IOException
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|org.apache.hadoop.hbase.DoNotRetryIOException:
>  org.apache.hadoop.hbase.DoNotRetryIOException: TESTTABLE: null
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:93)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1671)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14347)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7849)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|Caused by: 
> java.util.NoSuchElementException
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> java.util.Collections$EmptyIterator.next(Collections.java:4189)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:7137)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:6980)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateRowsWithLocks(MetaDataEndpointImpl.java:1966)
> 2017-10-12 02:52:30,323|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1650)
> {code}
> Here is code from branch-1.1 :
> {code}
> if (!mutations.isEmpty() && !walSyncSuccessful) {
>   LOG.warn("Wal sync failed. Roll back " + mutations.size() +
>

[jira] [Created] (HBASE-21042) processor.getRowsToLock() always assumes there is some row being locked in HRegion#processRowsWithLocks

2018-08-13 Thread Ted Yu (JIRA)
Ted Yu created HBASE-21042:
--

 Summary: processor.getRowsToLock() always assumes there is some 
row being locked in HRegion#processRowsWithLocks
 Key: HBASE-21042
 URL: https://issues.apache.org/jira/browse/HBASE-21042
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


[~tdsilva] reported at the tail of HBASE-18998 that the fix for HBASE-18998 
missed the finally block of HRegion#processRowsWithLocks.

This is to fix that remaining call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2018-08-13 Thread Chia-Ping Tsai (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579118#comment-16579118
 ] 

Chia-Ping Tsai commented on HBASE-18998:


[~tdsilva] Nice catch! Since some fix versions of this Jira have already been 
released, would you mind opening another Jira to fix what you found?

> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.0, 1.3.2, 1.2.7, 1.1.13, 2.0.0-alpha-4, 2.0.0
>
> Attachments: 18998.v1.txt
>
>
> During testing, we observed the following exception:
> {code}
> 2017-10-12 02:52:26,683|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|1/1  DROP TABLE 
> testTable;
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|17/10/12 02:52:30 WARN 
> ipc.CoprocessorRpcChannel: Call failed on IOException
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|org.apache.hadoop.hbase.DoNotRetryIOException:
>  org.apache.hadoop.hbase.DoNotRetryIOException: TESTTABLE: null
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:93)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1671)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14347)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7849)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|Caused by: 
> java.util.NoSuchElementException
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> java.util.Collections$EmptyIterator.next(Collections.java:4189)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:7137)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:6980)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateRowsWithLocks(MetaDataEndpointImpl.java:1966)
> 2017-10-12 02:52:30,323|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1650)
> {code}
> Here is code from branch-1.1 :
> {code}
> if 

[jira] [Updated] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-13 Thread liubangchen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-19008:

Attachment: HBASE-19008-10.patch

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-10.patch, 
> HBASE-19008-2.patch, HBASE-19008-3.patch, HBASE-19008-4.patch, 
> HBASE-19008-5.patch, HBASE-19008-6.patch, HBASE-19008-7.patch, 
> HBASE-19008-8.patch, HBASE-19008-9.patch, HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.
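
For illustration only (not the attached patch): a minimal sketch of what such an 
equals/hashCode pair could look like for a filter whose comparable state is a single 
boolean flag, using a stand-in class rather than the real KeyOnlyFilter.

{code}
// Stand-in class for illustration; not org.apache.hadoop.hbase.filter.KeyOnlyFilter.
public class KeyOnlyFilterSketch {
  private final boolean lenAsVal; // hypothetical single piece of filter state

  public KeyOnlyFilterSketch(boolean lenAsVal) {
    this.lenAsVal = lenAsVal;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (!(obj instanceof KeyOnlyFilterSketch)) {
      return false;
    }
    return this.lenAsVal == ((KeyOnlyFilterSketch) obj).lenAsVal;
  }

  @Override
  public int hashCode() {
    // Must be consistent with equals: equal filters hash to the same value.
    return Boolean.hashCode(lenAsVal);
  }
}
{code}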



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2018-08-13 Thread Thomas D'Silva (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579109#comment-16579109
 ] 

Thomas D'Silva commented on HBASE-18998:


[~yuzhih...@gmail.com] [~elserj] 
I think the fix was missed in the finally block of
{code}
@Override
  public void processRowsWithLocks(RowProcessor processor, long timeout,
  long nonceGroup, long nonce) throws IOException 
{code}

{code}
if (!mutations.isEmpty() && !walSyncSuccessful) {
  LOG.warn("Wal sync failed. Roll back " + mutations.size() +
  " memstore keyvalues for row(s):" + StringUtils.byteToHexString(
  processor.getRowsToLock().iterator().next()) + "...");
  for (Mutation m : mutations) {
for (CellScanner cellScanner = m.cellScanner(); 
cellScanner.advance();) {
  Cell cell = cellScanner.current();
  getStore(cell).rollback(cell);
}
  }
  if (writeEntry != null) {
mvcc.complete(writeEntry);
writeEntry = null;
  }
}
{code}
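
A minimal sketch of how that remaining call could be guarded (illustration only, not 
the committed patch; it reuses the variables from the block above and assumes 
processor.getRowsToLock() may return an empty collection):

{code}
// Sketch only: avoid iterator().next() on a possibly-empty collection when
// building the warning message, then roll back as before.
if (!mutations.isEmpty() && !walSyncSuccessful) {
  Collection<byte[]> rowsToLock = processor.getRowsToLock();
  String firstRow = rowsToLock.isEmpty()
      ? "(no rows locked)"
      : StringUtils.byteToHexString(rowsToLock.iterator().next());
  LOG.warn("Wal sync failed. Roll back " + mutations.size()
      + " memstore keyvalues for row(s):" + firstRow + "...");
  // ... rollback of the mutations and mvcc.complete(writeEntry) as shown above ...
}
{code}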

> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 1.4.0, 1.3.2, 1.2.7, 1.1.13, 2.0.0-alpha-4, 2.0.0
>
> Attachments: 18998.v1.txt
>
>
> During testing, we observed the following exception:
> {code}
> 2017-10-12 02:52:26,683|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|1/1  DROP TABLE 
> testTable;
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|17/10/12 02:52:30 WARN 
> ipc.CoprocessorRpcChannel: Call failed on IOException
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|org.apache.hadoop.hbase.DoNotRetryIOException:
>  org.apache.hadoop.hbase.DoNotRetryIOException: TESTTABLE: null
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:93)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1671)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14347)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7849)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|Caused by: 
> java.util.NoSuchElementException
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> java.util.Collections$EmptyIterator.next(Collections.java:4189)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> 

[jira] [Commented] (HBASE-20175) hbase-spark needs scala dependency convergance

2018-08-13 Thread Artem Ervits (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579054#comment-16579054
 ] 

Artem Ervits commented on HBASE-20175:
--

[~mdrob] rebased the patch, thank you.

> hbase-spark needs scala dependency convergance
> --
>
> Key: HBASE-20175
> URL: https://issues.apache.org/jira/browse/HBASE-20175
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, spark
>Reporter: Mike Drob
>Assignee: Artem Ervits
>Priority: Major
> Attachments: HBASE-20175.v01.patch, HBASE-20175.v04.patch, 
> HBASE-20175.v05.patch
>
>
> This is a follow-on to HBASE-16179 - I think we might need to specify an 
> exclude in the dependency management.
> {noformat}
> [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @ 
> hbase-spark ---
> [WARNING]  Expected all dependencies to require Scala version: 2.11.8
> [WARNING]  org.apache.hbase:hbase-spark:3.0.0-SNAPSHOT requires scala 
> version: 2.11.8
> [WARNING]  org.apache.spark:spark-streaming_2.11:2.1.1 requires scala 
> version: 2.11.8
> [WARNING]  org.apache.spark:spark-streaming_2.11:2.1.1 requires scala 
> version: 2.11.8
> [WARNING]  org.scalatest:scalatest_2.11:2.2.4 requires scala version: 2.11.2
> {noformat}
> [~tedyu] - since you're already fiddling in this area, do you want to take a 
> look?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20175) hbase-spark needs scala dependency convergance

2018-08-13 Thread Artem Ervits (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Ervits updated HBASE-20175:
-
Attachment: HBASE-20175.v05.patch

> hbase-spark needs scala dependency convergance
> --
>
> Key: HBASE-20175
> URL: https://issues.apache.org/jira/browse/HBASE-20175
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, spark
>Reporter: Mike Drob
>Assignee: Artem Ervits
>Priority: Major
> Attachments: HBASE-20175.v01.patch, HBASE-20175.v04.patch, 
> HBASE-20175.v05.patch
>
>
> This is a follow-on to HBASE-16179 - I think we might need to specify an 
> exclude in the dependency management.
> {noformat}
> [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @ 
> hbase-spark ---
> [WARNING]  Expected all dependencies to require Scala version: 2.11.8
> [WARNING]  org.apache.hbase:hbase-spark:3.0.0-SNAPSHOT requires scala 
> version: 2.11.8
> [WARNING]  org.apache.spark:spark-streaming_2.11:2.1.1 requires scala 
> version: 2.11.8
> [WARNING]  org.apache.spark:spark-streaming_2.11:2.1.1 requires scala 
> version: 2.11.8
> [WARNING]  org.scalatest:scalatest_2.11:2.2.4 requires scala version: 2.11.2
> {noformat}
> [~tedyu] - since you're already fiddling in this area, do you want to take a 
> look?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21005) Maven site configuration causes downstream projects to get a directory named ${project.basedir}

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579035#comment-16579035
 ] 

stack commented on HBASE-21005:
---

+1 for branch-2.0. thanks.

> Maven site configuration causes downstream projects to get a directory named 
> ${project.basedir}
> ---
>
> Key: HBASE-21005
> URL: https://issues.apache.org/jira/browse/HBASE-21005
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Matt Burgess
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-21005.001.patch, HBASE-21005.002.patch
>
>
> Matt told me about this interesting issue they see down in Apache Nifi's build
> NiFi depends on HBase for some code that they provide to their users. As a 
> part of the build process of NiFi, they are seeing a directory named 
> {{$\{project.basedir}}} get created the first time they build with an empty 
> Maven repo. Matt reports that after a javax.el artifact is cached, Maven will 
> stop creating the directory; however, if you wipe that artifact from the 
> Maven repo, the next build will end up re-creating it.
> I believe I've seen this with Phoenix, too, but never investigated why it was 
> actually happening.
> My hunch is that it's related to the local maven repo that we create to 
> "patch" in our custom maven-fluido-skin jar (HBASE-14785). I'm not sure if we 
> can "work" around this by pushing the custom local repo into a profile and 
> only activating that for the mvn-site. Another solution would be to publish 
> the maven-fluido-jar to central with custom coordinates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20772) Controlled shutdown fills Master log with the disturbing message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX transition to CLOSED

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20772:
--
   Resolution: Fixed
Fix Version/s: 2.1.1
   3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-2.0+

> Controlled shutdown fills Master log with the disturbing message "No matching 
> procedure found for rit=OPEN, location=, table=Y, region= 
> transition to CLOSED
> 
>
> Key: HBASE-20772
> URL: https://issues.apache.org/jira/browse/HBASE-20772
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.1
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20772.branch-2.0.001.patch, 
> HBASE-20772.branch-2.0.002.patch
>
>
> I just saw this and was disturbed but this is a controlled shutdown. Change 
> message so it doesn't freak out the poor operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20772) Controlled shutdown fills Master log with the disturbing message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX transition to CLOSE

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579018#comment-16579018
 ] 

stack commented on HBASE-20772:
---

Pushing this small change. Shout if you want me to change it or have suggestions 
for improvement.

> Controlled shutdown fills Master log with the disturbing message "No matching 
> procedure found for rit=OPEN, location=, table=Y, region= 
> transition to CLOSED
> 
>
> Key: HBASE-20772
> URL: https://issues.apache.org/jira/browse/HBASE-20772
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.1
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.2
>
> Attachments: HBASE-20772.branch-2.0.001.patch, 
> HBASE-20772.branch-2.0.002.patch
>
>
> I just saw this and was disturbed but this is a controlled shutdown. Change 
> message so it doesn't freak out the poor operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20976) SCP can be scheduled multiple times for the same RS

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20976:
--
Fix Version/s: (was: 2.0.2)

> SCP can be scheduled multiple times for the same RS
> ---
>
> Key: HBASE-20976
> URL: https://issues.apache.org/jira/browse/HBASE-20976
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-20976.branch-2.0.001.patch, 
> HBASE-20976.branch-2.0.002.patch
>
>
> SCP can be scheduled multiple times for the same RS:
> 1. a RS crashed, a SCP was submitted for it
> 2. before this SCP finish, the Master crashed
> 3. The new master will scan the meta table and find some region is still open 
> on a dead server
> 4. The new master submit a SCP for the dead server again
> The two SCP for the same RS can even execute concurrently if without 
> HBASE-20846…
> Provided a test case to reproduce this issue and a fix solution in the patch.
> Another case that SCP might be scheduled multiple times for the same RS(with 
> HBASE-20708.):
> 1.  a RS crashed, a SCP was submitted for it
> 2. A new RS on the same host started, the old RS's ServerName was removed from 
> DeadServer.deadServers
> 3. after the SCP passed the Handle_RIT state, a UnassignProcedure need to 
> send a close region operation to the crashed RS
> 4. The UnassignProcedure's dispatch failed since 'NoServerDispatchException'
> 5. Begin to expire the RS, but only find it not online and not in deadServer 
> list, so a SCP was submitted for the same RS again
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20976) SCP can be scheduled multiple times for the same RS

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579017#comment-16579017
 ] 

stack commented on HBASE-20976:
---

bq. IIRC, the deadservers are removed so that the master Web UI won't show a 
dead server forever there...

Yes. Could age them out instead... i.e. a deadserver needs to stick around for 
an hour at least?
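
A rough sketch of that aging idea (hypothetical names only, not the actual DeadServer 
class): record a time of death per entry and drop only the entries older than a 
retention window.

{code}
// Hypothetical sketch of time-based retention for dead-server entries.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AgingDeadServers {
  private static final long RETENTION_MS = 60 * 60 * 1000L; // keep entries for an hour
  private final Map<String, Long> deadServers = new ConcurrentHashMap<>(); // name -> time of death

  public void add(String serverName) {
    deadServers.put(serverName, System.currentTimeMillis());
  }

  /** Drops only the entries that have been dead longer than the retention window. */
  public void expireOldEntries() {
    long cutoff = System.currentTimeMillis() - RETENTION_MS;
    deadServers.values().removeIf(timeOfDeath -> timeOfDeath < cutoff);
  }

  public boolean isDeadServer(String serverName) {
    return deadServers.containsKey(serverName);
  }
}
{code}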

> SCP can be scheduled multiple times for the same RS
> ---
>
> Key: HBASE-20976
> URL: https://issues.apache.org/jira/browse/HBASE-20976
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-20976.branch-2.0.001.patch, 
> HBASE-20976.branch-2.0.002.patch
>
>
> SCP can be scheduled multiple times for the same RS:
> 1. a RS crashed, a SCP was submitted for it
> 2. before this SCP finish, the Master crashed
> 3. The new master will scan the meta table and find some region is still open 
> on a dead server
> 4. The new master submit a SCP for the dead server again
> The two SCP for the same RS can even execute concurrently if without 
> HBASE-20846…
> Provided a test case to reproduce this issue and a fix solution in the patch.
> Another case that SCP might be scheduled multiple times for the same RS(with 
> HBASE-20708.):
> 1.  a RS crashed, a SCP was submitted for it
> 2. A new RS on the same host started, the old RS's ServerName was removed from 
> DeadServer.deadServers
> 3. after the SCP passed the Handle_RIT state, a UnassignProcedure need to 
> send a close region operation to the crashed RS
> 4. The UnassignProcedure's dispatch failed since 'NoServerDispatchException'
> 5. Begin to expire the RS, but only find it not online and not in deadServer 
> list, so a SCP was submitted for the same RS again
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20671) Merged region brought back to life causing RS to be killed by Master

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579011#comment-16579011
 ] 

stack commented on HBASE-20671:
---

Lowering the priority and moving this out of 2.0.2. Josh hasn't seen this recently, 
and we have put in a bit of a workaround too. Leaving it open in case we get a fresh 
instance.

> Merged region brought back to life causing RS to be killed by Master
> 
>
> Key: HBASE-20671
> URL: https://issues.apache.org/jira/browse/HBASE-20671
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Affects Versions: 2.0.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Attachments: 0001-Test-for-HBASE-20671.patch, 
> hbase-hbase-master-ctr-e138-1518143905142-336066-01-03.hwx.site.log.zip, 
> hbase-hbase-regionserver-ctr-e138-1518143905142-336066-01-02.hwx.site.log.zip,
>  workaround.txt
>
>
> Another bug coming out of a master restart and replay of the pv2 logs.
> The master merged two regions into one successfully, was restarted, but then 
> ended up assigning the children region back out to the cluster. There is a 
> log message which appears to indicate that RegionStates acknowledges that it 
> doesn't know what this region is as it's replaying the pv2 WAL; however, it 
> incorrectly assumes that the region is just OFFLINE and needs to be assigned.
> {noformat}
> 2018-05-30 04:26:00,055 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=2] master.HMaster: 
> Client=hrt_qa//172.27.85.11 Merge regions a7dd6606dcacc9daf085fc9fa2aecc0c 
> and 4017a3c778551d4d258c785d455f9c0b
> 2018-05-30 04:28:27,525 DEBUG 
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> procedure2.ProcedureExecutor: Completed pid=4368, state=SUCCESS; 
> MergeTableRegionsProcedure table=tabletwo_merge, 
> regions=[a7dd6606dcacc9daf085fc9fa2aecc0c, 4017a3c778551d4d258c785d455f9c0b], 
> forcibly=false
> {noformat}
> {noformat}
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: a7dd6606dcacc9daf085fc9fa2aecc0c 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=a7dd6606dcacc9daf085fc9fa2aecc0c
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: 4017a3c778551d4d258c785d455f9c0b 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=4017a3c778551d4d258c785d455f9c0b
> {noformat}
> Eventually, the RS reports in its online regions, and the master tells it to 
> kill itself:
> {noformat}
> 2018-05-30 04:29:24,272 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=26,queue=2,port=2] 
> assignment.AssignmentManager: Killing 
> ctr-e138-1518143905142-336066-01-02.hwx.site,16020,1527654546619: Not 
> online: tabletwo_merge,,1527652130538.a7dd6606dcacc9daf085fc9fa2aecc0c.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20671) Merged region brought back to life causing RS to be killed by Master

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20671:
--
Priority: Major  (was: Critical)

> Merged region brought back to life causing RS to be killed by Master
> 
>
> Key: HBASE-20671
> URL: https://issues.apache.org/jira/browse/HBASE-20671
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Affects Versions: 2.0.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Attachments: 0001-Test-for-HBASE-20671.patch, 
> hbase-hbase-master-ctr-e138-1518143905142-336066-01-03.hwx.site.log.zip, 
> hbase-hbase-regionserver-ctr-e138-1518143905142-336066-01-02.hwx.site.log.zip,
>  workaround.txt
>
>
> Another bug coming out of a master restart and replay of the pv2 logs.
> The master merged two regions into one successfully, was restarted, but then 
> ended up assigning the children region back out to the cluster. There is a 
> log message which appears to indicate that RegionStates acknowledges that it 
> doesn't know what this region is as it's replaying the pv2 WAL; however, it 
> incorrectly assumes that the region is just OFFLINE and needs to be assigned.
> {noformat}
> 2018-05-30 04:26:00,055 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=2] master.HMaster: 
> Client=hrt_qa//172.27.85.11 Merge regions a7dd6606dcacc9daf085fc9fa2aecc0c 
> and 4017a3c778551d4d258c785d455f9c0b
> 2018-05-30 04:28:27,525 DEBUG 
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> procedure2.ProcedureExecutor: Completed pid=4368, state=SUCCESS; 
> MergeTableRegionsProcedure table=tabletwo_merge, 
> regions=[a7dd6606dcacc9daf085fc9fa2aecc0c, 4017a3c778551d4d258c785d455f9c0b], 
> forcibly=false
> {noformat}
> {noformat}
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: a7dd6606dcacc9daf085fc9fa2aecc0c 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=a7dd6606dcacc9daf085fc9fa2aecc0c
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: 4017a3c778551d4d258c785d455f9c0b 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=4017a3c778551d4d258c785d455f9c0b
> {noformat}
> Eventually, the RS reports in its online regions, and the master tells it to 
> kill itself:
> {noformat}
> 2018-05-30 04:29:24,272 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=26,queue=2,port=2] 
> assignment.AssignmentManager: Killing 
> ctr-e138-1518143905142-336066-01-02.hwx.site,16020,1527654546619: Not 
> online: tabletwo_merge,,1527652130538.a7dd6606dcacc9daf085fc9fa2aecc0c.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20671) Merged region brought back to life causing RS to be killed by Master

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20671:
--
Fix Version/s: (was: 2.0.2)

> Merged region brought back to life causing RS to be killed by Master
> 
>
> Key: HBASE-20671
> URL: https://issues.apache.org/jira/browse/HBASE-20671
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Affects Versions: 2.0.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Attachments: 0001-Test-for-HBASE-20671.patch, 
> hbase-hbase-master-ctr-e138-1518143905142-336066-01-03.hwx.site.log.zip, 
> hbase-hbase-regionserver-ctr-e138-1518143905142-336066-01-02.hwx.site.log.zip,
>  workaround.txt
>
>
> Another bug coming out of a master restart and replay of the pv2 logs.
> The master merged two regions into one successfully, was restarted, but then 
> ended up assigning the children region back out to the cluster. There is a 
> log message which appears to indicate that RegionStates acknowledges that it 
> doesn't know what this region is as it's replaying the pv2 WAL; however, it 
> incorrectly assumes that the region is just OFFLINE and needs to be assigned.
> {noformat}
> 2018-05-30 04:26:00,055 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=2] master.HMaster: 
> Client=hrt_qa//172.27.85.11 Merge regions a7dd6606dcacc9daf085fc9fa2aecc0c 
> and 4017a3c778551d4d258c785d455f9c0b
> 2018-05-30 04:28:27,525 DEBUG 
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> procedure2.ProcedureExecutor: Completed pid=4368, state=SUCCESS; 
> MergeTableRegionsProcedure table=tabletwo_merge, 
> regions=[a7dd6606dcacc9daf085fc9fa2aecc0c, 4017a3c778551d4d258c785d455f9c0b], 
> forcibly=false
> {noformat}
> {noformat}
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: a7dd6606dcacc9daf085fc9fa2aecc0c 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=a7dd6606dcacc9daf085fc9fa2aecc0c
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: 4017a3c778551d4d258c785d455f9c0b 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=4017a3c778551d4d258c785d455f9c0b
> {noformat}
> Eventually, the RS reports in its online regions, and the master tells it to 
> kill itself:
> {noformat}
> 2018-05-30 04:29:24,272 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=26,queue=2,port=2] 
> assignment.AssignmentManager: Killing 
> ctr-e138-1518143905142-336066-01-02.hwx.site,16020,1527654546619: Not 
> online: tabletwo_merge,,1527652130538.a7dd6606dcacc9daf085fc9fa2aecc0c.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579000#comment-16579000
 ] 

stack commented on HBASE-20657:
---

What to do here? I tried the test. It passes both with and without taking the 
lock on ModifyTableProcedure. Looking at the logs, it appears the two modify 
table procedures succeed. This is the tip of branch-2.0 after all the recent 
backports. What do you reckon, [~sergey.soldatov]? There is still an issue here, 
but maybe it is not so bad?

> Retrying RPC call for ModifyTableProcedure may get stuck
> 
>
> Key: HBASE-20657
> URL: https://issues.apache.org/jira/browse/HBASE-20657
> Project: HBase
>  Issue Type: Bug
>  Components: Client, proc-v2
>Affects Versions: 2.0.0
>Reporter: Sergey Soldatov
>Assignee: stack
>Priority: Critical
> Fix For: 3.0.0, 2.1.1
>
> Attachments: HBASE-20657-1-branch-2.patch, 
> HBASE-20657-2-branch-2.patch, HBASE-20657-3-branch-2.patch, 
> HBASE-20657-4-master.patch, HBASE-20657-testcase-branch2.patch
>
>
> Env: 2 masters, 1 RS. 
> Steps to reproduce: Active master is killed while ModifyTableProcedure is 
> executed. 
> If the table has enough regions, it may happen that by the time the secondary 
> master becomes active some of the regions are closed, so once the client retries 
> the call to the new active master, a new ModifyTableProcedure is created and gets 
> stuck during MODIFY_TABLE_REOPEN_ALL_REGIONS state handling. That happens because:
> 1. When we are retrying from client side, we call modifyTableAsync which 
> create a procedure with a new nonce key:
> {noformat}
>  ModifyTableRequest request = 
> RequestConverter.buildModifyTableRequest(
> td.getTableName(), td, ng.getNonceGroup(), ng.newNonce());
> {noformat}
>  So on the server side, it's considered as a new procedure and starts 
> executing immediately.
> 2. When we are processing  MODIFY_TABLE_REOPEN_ALL_REGIONS we create 
> MoveRegionProcedure for each region, but it checks whether the region is 
> online (and it's not), so it fails immediately, forcing the procedure to 
> restart.
> [~an...@apache.org] saw a similar case when two concurrent ModifyTable 
> procedures were running and got stuck in the similar way. 
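
A small self-contained sketch of why the fresh nonce defeats server-side deduplication 
(class and method names below are hypothetical, not HBase APIs): the server can only 
recognize a retry if the client resends the same nonce group and nonce.

{code}
// Illustrative sketch only; not the HBase nonce manager or procedure executor.
import java.util.HashMap;
import java.util.Map;

public class NonceDedupSketch {
  // Maps "nonceGroup:nonce" -> id of the procedure already started for that request.
  private final Map<String, Long> startedProcedures = new HashMap<>();
  private long nextProcId = 1;

  /** Returns the existing procedure id for a retried request, or starts a new one. */
  public synchronized long submit(long nonceGroup, long nonce) {
    String key = nonceGroup + ":" + nonce;
    return startedProcedures.computeIfAbsent(key, k -> nextProcId++);
  }

  public static void main(String[] args) {
    NonceDedupSketch master = new NonceDedupSketch();
    // Retrying with the same nonce is recognized as the same procedure...
    System.out.println(master.submit(1L, 42L) == master.submit(1L, 42L)); // true
    // ...while a new nonce per retry (as in the snippet above) starts a second procedure.
    System.out.println(master.submit(1L, 43L) == master.submit(1L, 44L)); // false
  }
}
{code}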



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20657:
--
Fix Version/s: (was: 2.0.2)

> Retrying RPC call for ModifyTableProcedure may get stuck
> 
>
> Key: HBASE-20657
> URL: https://issues.apache.org/jira/browse/HBASE-20657
> Project: HBase
>  Issue Type: Bug
>  Components: Client, proc-v2
>Affects Versions: 2.0.0
>Reporter: Sergey Soldatov
>Assignee: stack
>Priority: Critical
> Fix For: 3.0.0, 2.1.1
>
> Attachments: HBASE-20657-1-branch-2.patch, 
> HBASE-20657-2-branch-2.patch, HBASE-20657-3-branch-2.patch, 
> HBASE-20657-4-master.patch, HBASE-20657-testcase-branch2.patch
>
>
> Env: 2 masters, 1 RS. 
> Steps to reproduce: Active master is killed while ModifyTableProcedure is 
> executed. 
> If the table has enough regions, it may happen that by the time the secondary 
> master becomes active some of the regions are closed, so once the client retries 
> the call to the new active master, a new ModifyTableProcedure is created and gets 
> stuck during MODIFY_TABLE_REOPEN_ALL_REGIONS state handling. That happens because:
> 1. When we are retrying from client side, we call modifyTableAsync which 
> create a procedure with a new nonce key:
> {noformat}
>  ModifyTableRequest request = 
> RequestConverter.buildModifyTableRequest(
> td.getTableName(), td, ng.getNonceGroup(), ng.newNonce());
> {noformat}
>  So on the server side, it's considered as a new procedure and starts 
> executing immediately.
> 2. When we are processing  MODIFY_TABLE_REOPEN_ALL_REGIONS we create 
> MoveRegionProcedure for each region, but it checks whether the region is 
> online (and it's not), so it fails immediately, forcing the procedure to 
> restart.
> [~an...@apache.org] saw a similar case when two concurrent ModifyTable 
> procedures were running and got stuck in the similar way. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21005) Maven site configuration causes downstream projects to get a directory named ${project.basedir}

2018-08-13 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21005:
---
Fix Version/s: 2.1.1
   2.2.0
   3.0.0

> Maven site configuration causes downstream projects to get a directory named 
> ${project.basedir}
> ---
>
> Key: HBASE-21005
> URL: https://issues.apache.org/jira/browse/HBASE-21005
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Matt Burgess
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-21005.001.patch, HBASE-21005.002.patch
>
>
> Matt told me about this interesting issue they see down in Apache Nifi's build
> NiFi depends on HBase for some code that they provide to their users. As a 
> part of the build process of NiFi, they are seeing a directory named 
> {{$\{project.basedir}}} get created the first time they build with an empty 
> Maven repo. Matt reports that after a javax.el artifact is cached, Maven will 
> stop creating the directory; however, if you wipe that artifact from the 
> Maven repo, the next build will end up re-creating it.
> I believe I've seen this with Phoenix, too, but never investigated why it was 
> actually happening.
> My hunch is that it's related to the local maven repo that we create to 
> "patch" in our custom maven-fluido-skin jar (HBASE-14785). I'm not sure if we 
> can "work" around this by pushing the custom local repo into a profile and 
> only activating that for the mvn-site. Another solution would be to publish 
> the maven-fluido-jar to central with custom coordinates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21005) Maven site configuration causes downstream projects to get a directory named ${project.basedir}

2018-08-13 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578996#comment-16578996
 ] 

Josh Elser commented on HBASE-21005:


Thanks Sean and Mike.

No worries on the review part, Duo. That's covered :)

Pushed to 2.1, 2.2, and master. Will wait on an ack from Stack for 2.0

> Maven site configuration causes downstream projects to get a directory named 
> ${project.basedir}
> ---
>
> Key: HBASE-21005
> URL: https://issues.apache.org/jira/browse/HBASE-21005
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Matt Burgess
>Assignee: Josh Elser
>Priority: Minor
> Attachments: HBASE-21005.001.patch, HBASE-21005.002.patch
>
>
> Matt told me about this interesting issue they see down in Apache Nifi's build
> NiFi depends on HBase for some code that they provide to their users. As a 
> part of the build process of NiFi, they are seeing a directory named 
> {{$\{project.basedir}}} get created the first time they build with an empty 
> Maven repo. Matt reports that after a javax.el artifact is cached, Maven will 
> stop creating the directory; however, if you wipe that artifact from the 
> Maven repo, the next build will end up re-creating it.
> I believe I've seen this with Phoenix, too, but never investigated why it was 
> actually happening.
> My hunch is that it's related to the local maven repo that we create to 
> "patch" in our custom maven-fluido-skin jar (HBASE-14785). I'm not sure if we 
> can "work" around this by pushing the custom local repo into a profile and 
> only activating that for the mvn-site. Another solution would be to publish 
> the maven-fluido-jar to central with custom coordinates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20387) flaky infrastructure should work for all branches

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578990#comment-16578990
 ] 

Hudson commented on HBASE-20387:


Results for branch HBASE-20387
[build #9 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20387/9/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20387/9//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20387/9//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20387/9//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> flaky infrastructure should work for all branches
> -
>
> Key: HBASE-20387
> URL: https://issues.apache.org/jira/browse/HBASE-20387
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> We need a flaky list per-branch, since what does/does not work reliably on 
> master isn't really relevant to our older maintenance release lines.
> We should just make the invocation a step in the current per-branch nightly 
> jobs, prior to when we need the list in the stages that run unit tests. We 
> can publish it in the nightly job as well so that precommit can still get it. 
> (and can fetch it per-branch if needed)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578981#comment-16578981
 ] 

Hadoop QA commented on HBASE-20993:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
57s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
32s{color} | {color:red} hbase-client: The patch generated 1 new + 2 unchanged 
- 0 fixed = 3 total (was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
26s{color} | {color:red} hbase-server: The patch generated 114 new + 76 
unchanged - 0 fixed = 190 total (was 76) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
58s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 55s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 21s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF 

[jira] [Commented] (HBASE-21005) Maven site configuration causes downstream projects to get a directory named ${project.basedir}

2018-08-13 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578970#comment-16578970
 ] 

Duo Zhang commented on HBASE-21005:
---

Not an expert on this, but I think this should go into 2.1, as I have also 
gotten the strange directory locally. +1.

> Maven site configuration causes downstream projects to get a directory named 
> ${project.basedir}
> ---
>
> Key: HBASE-21005
> URL: https://issues.apache.org/jira/browse/HBASE-21005
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Matt Burgess
>Assignee: Josh Elser
>Priority: Minor
> Attachments: HBASE-21005.001.patch, HBASE-21005.002.patch
>
>
> Matt told me about this interesting issue they see down in Apache Nifi's build
> NiFi depends on HBase for some code that they provide to their users. As a 
> part of the build process of NiFi, they are seeing a directory named 
> {{$\{project.basedir}}} get created the first time they build with an empty 
> Maven repo. Matt reports that after a javax.el artifact is cached, Maven will 
> stop creating the directory; however, if you wipe that artifact from the 
> Maven repo, the next build will end up re-creating it.
> I believe I've seen this with Phoenix, too, but never investigated why it was 
> actually happening.
> My hunch is that it's related to the local maven repo that we create to 
> "patch" in our custom maven-fluido-skin jar (HBASE-14785). I'm not sure if we 
> can "work" around this by pushing the custom local repo into a profile and 
> only activating that for the mvn-site. Another solution would be to publish 
> the maven-fluido-jar to central with custom coordinates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21005) Maven site configuration causes downstream projects to get a directory named ${project.basedir}

2018-08-13 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578967#comment-16578967
 ] 

Mike Drob commented on HBASE-21005:
---

+1 on my end

> Maven site configuration causes downstream projects to get a directory named 
> ${project.basedir}
> ---
>
> Key: HBASE-21005
> URL: https://issues.apache.org/jira/browse/HBASE-21005
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Matt Burgess
>Assignee: Josh Elser
>Priority: Minor
> Attachments: HBASE-21005.001.patch, HBASE-21005.002.patch
>
>
> Matt told me about this interesting issue they see down in Apache Nifi's build
> NiFi depends on HBase for some code that they provide to their users. As a 
> part of the build process of NiFi, they are seeing a directory named 
> {{$\{project.basedir}}} get created the first time they build with an empty 
> Maven repo. Matt reports that after a javax.el artifact is cached, Maven will 
> stop creating the directory; however, if you wipe that artifact from the 
> Maven repo, the next build will end up re-creating it.
> I believe I've seen this with Phoenix, too, but never investigated why it was 
> actually happening.
> My hunch is that it's related to the local maven repo that we create to 
> "patch" in our custom maven-fluido-skin jar (HBASE-14785). I'm not sure if we 
> can "work" around this by pushing the custom local repo into a profile and 
> only activating that for the mvn-site. Another solution would be to publish 
> the maven-fluido-jar to central with custom coordinates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21005) Maven site configuration causes downstream projects to get a directory named ${project.basedir}

2018-08-13 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578964#comment-16578964
 ] 

Josh Elser commented on HBASE-21005:


Thanks for the review, Sean. You OK, [~mdrob]?

RM [~stack], [~Apache9]: request for your respective branches. IMO, safe and 
desirable for all.

> Maven site configuration causes downstream projects to get a directory named 
> ${project.basedir}
> ---
>
> Key: HBASE-21005
> URL: https://issues.apache.org/jira/browse/HBASE-21005
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Matt Burgess
>Assignee: Josh Elser
>Priority: Minor
> Attachments: HBASE-21005.001.patch, HBASE-21005.002.patch
>
>
> Matt told me about this interesting issue they see down in Apache Nifi's build
> NiFi depends on HBase for some code that they provide to their users. As a 
> part of the build process of NiFi, they are seeing a directory named 
> {{$\{project.basedir}}} get created the first time they build with an empty 
> Maven repo. Matt reports that after a javax.el artifact is cached, Maven will 
> stop creating the directory; however, if you wipe that artifact from the 
> Maven repo, the next build will end up re-creating it.
> I believe I've seen this with Phoenix, too, but never investigated why it was 
> actually happening.
> My hunch is that it's related to the local maven repo that we create to 
> "patch" in our custom maven-fluido-skin jar (HBASE-14785). I'm not sure if we 
> can "work" around this by pushing the custom local repo into a profile and 
> only activating that for the mvn-site. Another solution would be to publish 
> the maven-fluido-jar to central with custom coordinates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20772) Controlled shutdown fills Master log with the disturbing message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX transition to CLOSE

2018-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578955#comment-16578955
 ] 

Hadoop QA commented on HBASE-20772:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
46s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
17s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
23s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
12m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}108m 
26s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-20772 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935423/HBASE-20772.branch-2.0.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux a4aabd92ecf9 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.0 / ce8904034e |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14024/testReport/ |
| Max. process+thread count | 4528 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Comment Edited] (HBASE-20713) Revisit why to removeFromRunQueue in MasterProcedureExecutor's doPoll method

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578811#comment-16578811
 ] 

stack edited comment on HBASE-20713 at 8/13/18 9:42 PM:


You are right [~zghaobac]. Rare is the case of a three-tiered Procedure but 
possible: e.g. ModifyTableProcedure does three tiers.


was (Author: stack):
You are right [~zghaobac]. Rare is the case of a three-tiered Procedure but 
possible (I seem to remember we have at least one instance but can't recall at 
the moment).

> Revisit why to removeFromRunQueue in MasterProcedureExecutor's doPoll method 
> -
>
> Key: HBASE-20713
> URL: https://issues.apache.org/jira/browse/HBASE-20713
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Priority: Major
>
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureScheduler.java#L210
> {code:java}
> if (rq.isEmpty() || xlockReq) {
>   removeFromRunQueue(fairq, rq);
> } else if (rq.getLockStatus().hasParentLock(pollResult)) {
>   // if the rq is in the fairq because of runnable child
>   // check if the next procedure is still a child.
>   // if not, remove the rq from the fairq and go back to the xlock state
>   Procedure nextProc = rq.peek();
>   if (nextProc != null && !Procedure.haveSameParent(nextProc, 
> pollResult)) {
> removeFromRunQueue(fairq, rq);
>   }
> }
> {code}
> Here is the comment on why we remove from the run queue. If I am not wrong, 
> the assumption here is that the parent procedure should require an exclusive 
> lock. So if the nextProc is a child but has a different parent than the 
> current procedure, we can remove it from the run queue.
> But there may be three tiers of procedures: Procedure A's child is Procedure 
> B, and Procedure B's child is Procedure C. Only Procedure A needs an 
> exclusive lock, and Procedures B and C don't require one. The condition 
> (!Procedure.haveSameParent(nextProc, pollResult)) is not right for this case?
> FYI [~stack]
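
To make the three-tier case concrete, below is a minimal, self-contained Java 
sketch. It is an illustration only, not HBase code: the names Proc, 
haveSameParent and shareRootAncestor are made up for this example.

{code:java}
// Hypothetical illustration of the A -> B -> C case: B and C have different
// direct parents, yet both still belong to the scope of A, which holds the
// exclusive lock.
public class ThreeTierProcedureSketch {

  static final class Proc {
    final long procId;
    final Proc parent; // null for a root procedure
    Proc(long procId, Proc parent) { this.procId = procId; this.parent = parent; }
    Proc root() { return parent == null ? this : parent.root(); }
  }

  // Mirrors a "same direct parent" style check.
  static boolean haveSameParent(Proc a, Proc b) {
    return a.parent != null && b.parent != null && a.parent.procId == b.parent.procId;
  }

  // Also covers grandchildren: both procedures descend from the same root,
  // which is the procedure actually holding the exclusive lock.
  static boolean shareRootAncestor(Proc a, Proc b) {
    return a.root().procId == b.root().procId;
  }

  public static void main(String[] args) {
    Proc a = new Proc(1, null); // Procedure A, holds the exclusive lock
    Proc b = new Proc(2, a);    // child of A
    Proc c = new Proc(3, b);    // child of B, grandchild of A
    System.out.println(haveSameParent(b, c));    // false: different direct parents
    System.out.println(shareRootAncestor(b, c)); // true: same lock-holding root
  }
}
{code}

With only the direct-parent check, the queue could be removed from the fairq 
while a grandchild of the lock holder is still runnable.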



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20728) Failure and recovery of all RSes in a RSgroup requires master restart for region assignments

2018-08-13 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578930#comment-16578930
 ] 

Sakthi commented on HBASE-20728:


Was able to recreate the scenario

> Failure and recovery of all RSes in a RSgroup requires master restart for 
> region assignments
> 
>
> Key: HBASE-20728
> URL: https://issues.apache.org/jira/browse/HBASE-20728
> Project: HBase
>  Issue Type: Bug
>  Components: master, rsgroup
>Reporter: Biju Nair
>Assignee: Sakthi
>Priority: Minor
>
> If all the RSes in an RSgroup hosting user tables fail and recover, the master 
> still looks for the old RSes (with the old timestamp in the RS identifier) to 
> assign regions, i.e. regions are left in transition, making the tables in the 
> RSGroup unavailable. Users need to restart {{master}} or manually assign the 
> regions to make the tables available. Steps to recreate the scenario in a 
> local cluster:
>  - Add required properties to {{site.xml}} to enable {{rsgroup}} and start 
> hbase
>  - Bring up multiple region servers using {{local-regionservers.sh start}}
>  - Create a {{rsgroup}} and move a subset of  {{regionservers}} to the group
>  - Create a table, move it to the group and put some data
>  - Stop the {{regionservers}} in the group and restart them
>  - From the {{master UI}}, we can see that the region for the table in 
> transition and the RS name in the {{RIT}} message has the old timestamp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20978) [amv2] Worker terminating UNNATURALLY during MoveRegionProcedure

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578923#comment-16578923
 ] 

stack commented on HBASE-20978:
---

Hmmm... HBASE-20846 might have 'fixed' this. It seems to happen if we are 
suspended when the crash happens. Let me see if I can repro post-HBASE-20846. I 
thought I had repro'd it this morning, but that was from an old run...

> [amv2] Worker terminating UNNATURALLY during MoveRegionProcedure
> 
>
> Key: HBASE-20978
> URL: https://issues.apache.org/jira/browse/HBASE-20978
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Affects Versions: 2.0.1
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.2
>
>
> Testing tip of branch-2.0, ran into this:
> {code}
> 2018-07-29 01:45:33,002 INFO  [master/ve0524:16000] master.HMaster: Master 
> has completed initialization 13.854sec
>2018-07-29 
> 01:45:33,003 INFO  [PEWorker-4] procedure.MasterProcedureScheduler: pid=1820, 
> state=WAITING:MOVE_REGION_ASSIGN; MoveRegionProcedure 
> hri=533fb79ba23b27e9e0715b51daeb30c1, 
> source=ve0538.halxg.cloudera.com,16020,1532847421672, 
> destination=ve0540.halxg.cloudera.com,16020,1532853151031 checking lock on 
> 533fb79ba23b27e9e0715b51daeb30c1  
> 2018-07-29 01:45:33,003 
> WARN  [PEWorker-4] procedure2.ProcedureExecutor: Worker terminating 
> UNNATURALLY null
> java.lang.IllegalArgumentException: pid=1820, 
> state=WAITING:MOVE_REGION_ASSIGN; MoveRegionProcedure 
> hri=533fb79ba23b27e9e0715b51daeb30c1, 
> source=ve0538.halxg.cloudera.com,16020,1532847421672, 
> destination=ve0540.halxg.cloudera.com,16020,1532853151031
>   at 
> org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkArgument(Preconditions.java:134)
>   
>  at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1458)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1249)
>   
>  at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1763)
> {code}
> It then shows as the below in the UI:
> {code}
> IdParent  State   Owner   TypeStart Time  Last Update Errors  
> Parameters
> 1820  WAITING stack   MoveRegionProcedure 
> hri=533fb79ba23b27e9e0715b51daeb30c1, 
> source=ve0538.halxg.cloudera.com,16020,1532847421672, 
> destination=ve0540.halxg.cloudera.com,16020,1532853151031   Sun Jul 29 
> 01:33:37 PDT 2018Sun Jul 29 01:33:38 PDT 2018[ { state => [ 
> '1', '2' ] }, { regionId => '1532851768240', tableName => { namespace => 
> 'ZGVmYXVsdA==', qualifier => 'SW50ZWdyYXRpb25UZXN0QmlnTGlua2VkTGlzdA==' }, 
> startKey => 'VttDLvXHdcmzwqNdrNoUFg==', endKey => 'WGFV8k+hFqhcIJGiKZ8L4Q==', 
> offline => 'false', split => 'false', replicaId => '0' }, { sourceServer => { 
> hostName => 've0538.halxg.cloudera.com', port => '16020', startCode => 
> '1532847421672' }, destinationServer => { hostName => 
> 've0540.halxg.cloudera.com', port => '16020', startCode => '1532853151031' } 
> } ]
> {code}
> This is what we'd just read from hbase:meta:
> {code}
> 2018-07-29 01:45:32,802 INFO  [master/ve0524:16000] 
> assignment.RegionStateStore: Load hbase:meta entry 
> region=533fb79ba23b27e9e0715b51daeb30c1, regionState=CLOSED, 
> lastHost=ve0538.halxg.cloudera.com,16020,1532847421672, 
> regionLocation=ve0538.halxg.cloudera.com,16020,1532847421672, 
> openSeqNum=1544600
> {code}
> Before this, we'd just logged this:
> 2018-07-29 01:33:39,786 INFO  [PEWorker-14] assignment.RegionStateStore: 
> pid=1823 updating hbase:meta row=533fb79ba23b27e9e0715b51daeb30c1, 
> regionState=CLOSED
> Going back in history, we do the above each time the Master gets restarted so 
> the region is offlined and never brought back online.
> It is failing here:
> {code}
>   private void execProcedure(final RootProcedureState procStack,
>   final Procedure procedure) {
> Preconditions.checkArgument(procedure.getState() == 
> ProcedureState.RUNNABLE,
> procedure.toString());
> {code}
> It's the parent move region that is trying to run and failing. It is not 
> RUNNABLE? Because the subprocedure was 'done' but not fully?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-13 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578912#comment-16578912
 ] 

Ted Yu commented on HBASE-19008:


Please fix 8 checkstyle warnings in hbase-server module, 3 in hbase-spark 
module and 1 javadoc warning in hbase-client module.

> Add missing equals or hashCode method(s) to stock Filter implementations
> 
>
> Key: HBASE-19008
> URL: https://issues.apache.org/jira/browse/HBASE-19008
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: liubangchen
>Priority: Major
>  Labels: filter
> Attachments: Filters.png, HBASE-19008-1.patch, HBASE-19008-2.patch, 
> HBASE-19008-3.patch, HBASE-19008-4.patch, HBASE-19008-5.patch, 
> HBASE-19008-6.patch, HBASE-19008-7.patch, HBASE-19008-8.patch, 
> HBASE-19008-9.patch, HBASE-19008.patch
>
>
> In HBASE-15410, [~mdrob] reminded me that Filter implementations may not 
> write {{equals}} or {{hashCode}} method(s).
> This issue is to add missing {{equals}} or {{hashCode}} method(s) to stock 
> Filter implementations such as KeyOnlyFilter.
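
For readers unfamiliar with the pattern, here is a minimal, hypothetical sketch 
of the kind of equals/hashCode pair being added. ExampleBooleanFilter and its 
lenAsVal field are stand-ins for a real filter; this is not the actual patch 
code.

{code:java}
import java.util.Objects;

// Stand-in for a stock filter with a single boolean knob.
public class ExampleBooleanFilter {
  private final boolean lenAsVal;

  public ExampleBooleanFilter(boolean lenAsVal) {
    this.lenAsVal = lenAsVal;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (!(obj instanceof ExampleBooleanFilter)) {
      return false;
    }
    // Equal when every field that drives filtering behaviour matches.
    return this.lenAsVal == ((ExampleBooleanFilter) obj).lenAsVal;
  }

  @Override
  public int hashCode() {
    // Must stay consistent with equals: same fields, same hash.
    return Objects.hash(lenAsVal);
  }
}
{code}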



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-13 Thread Jack Bearden (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HBASE-20993:
-
Attachment: HBASE-20993.branch-1.002.patch
Status: Patch Available  (was: Open)

All right, I have patch 002 ready. I have implemented the following changes:
 * Improved preamble handling for BlockingRpcConnection and RpcServer, so that 
the client will now fall back to simple auth when told to by the server. This 
was done by having the server respond to the preamble sent by the client and 
the client processing that response (a conceptual sketch of this handshake 
follows below).
 * Added a test for fallback with a Kerberos client
 * Added a test for a simple client against a simple server

Note: This does in fact break the netty async client. I have begun working on a 
fix for handling the preamble in the Netty client, however I am still ramping 
up on that. Would appreciate feedback, thanks!
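
For context, here is a conceptual sketch of the handshake described in the 
first bullet. It uses made-up names and plain streams; it is not the code in 
the attached patch.

{code:java}
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical illustration: the client announces its preferred auth in the
// preamble, reads the server's one-byte reply, and downgrades to simple auth
// when the server asks for it (and fallback is allowed).
public class PreambleFallbackSketch {
  static final byte AUTH_SIMPLE = 0;
  static final byte AUTH_KERBEROS = 1;

  static byte negotiateAuth(DataOutputStream out, DataInputStream in,
      byte preferredAuth, boolean fallbackAllowed) throws IOException {
    out.writeByte(preferredAuth);     // client preamble: "I want Kerberos"
    out.flush();
    byte serverReply = in.readByte(); // server's response to the preamble
    if (serverReply == AUTH_SIMPLE) {
      if (!fallbackAllowed) {
        throw new IOException("Server requested SIMPLE auth but fallback is not allowed");
      }
      return AUTH_SIMPLE;             // downgrade: skip the SASL/Kerberos handshake
    }
    return preferredAuth;             // proceed with the secure handshake
  }
}
{code}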

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, HBASE-20993.branch-1.002.patch, 
> HBASE-20993.branch-1.2.001.patch
>
>
> It is easily reproducible.
> The client's hbase-site.xml has hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, and 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, and the keytab and 
> principal are set correctly.
> With a simple-auth hbase cluster and a kerberized hbase client application, an 
> application trying to read/write/create/drop a table will get the following 
> exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> 

[jira] [Updated] (HBASE-20993) [Auth] IPC client fallback to simple auth allowed doesn't work

2018-08-13 Thread Jack Bearden (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HBASE-20993:
-
Status: Open  (was: Patch Available)

> [Auth] IPC client fallback to simple auth allowed doesn't work
> --
>
> Key: HBASE-20993
> URL: https://issues.apache.org/jira/browse/HBASE-20993
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 1.2.6
>Reporter: Reid Chan
>Assignee: Jack Bearden
>Priority: Critical
> Attachments: HBASE-20993.001.patch, HBASE-20993.branch-1.2.001.patch
>
>
> It is easily reproducible.
> The client's hbase-site.xml has hadoop.security.authentication:kerberos, 
> hbase.security.authentication:kerberos, and 
> hbase.ipc.client.fallback-to-simple-auth-allowed:true, and the keytab and 
> principal are set correctly.
> With a simple-auth hbase cluster and a kerberized hbase client application, an 
> application trying to read/write/create/drop a table will get the following 
> exception:
> {code}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
>   at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1592)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1530)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1552)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1581)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1738)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4297)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4289)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:753)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:674)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:607)
>   at 
> org.playground.hbase.KerberizedClientFallback.main(KerberizedClientFallback.java:55)
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>   at 
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>   at 
> sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
>   at 
> 

[jira] [Commented] (HBASE-18477) Umbrella JIRA for HBase Read Replica clusters

2018-08-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578861#comment-16578861
 ] 

Hudson commented on HBASE-18477:


Results for branch HBASE-18477
[build #294 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/294/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/294//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/294//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/294//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/294//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Umbrella JIRA for HBase Read Replica clusters
> -
>
> Key: HBASE-18477
> URL: https://issues.apache.org/jira/browse/HBASE-18477
> Project: HBase
>  Issue Type: New Feature
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Attachments: HBase Read-Replica Clusters Scope doc.docx, HBase 
> Read-Replica Clusters Scope doc.pdf, HBase Read-Replica Clusters Scope 
> doc_v2.docx, HBase Read-Replica Clusters Scope doc_v2.pdf
>
>
> Recently, changes (such as HBASE-17437) have made it possible for HBase to run 
> with a root directory external to the cluster (such as in Amazon S3). This 
> means that the data is stored outside of the cluster and remains accessible 
> after the cluster has been terminated. One use case that is often asked about 
> is pointing multiple clusters to one root directory (sharing the data) to have 
> read resiliency in the case of a cluster failure.
>  
> This JIRA is an umbrella JIRA to contain all the tasks necessary to create a 
> read-replica HBase cluster that is pointed at the same root directory.
>  
> This requires making the Read-Replica cluster Read-Only (no metadata 
> operation or data operations).
> Separating the hbase:meta table for each cluster (Otherwise HBase gets 
> confused with multiple clusters trying to update the meta table with their ip 
> addresses)
> Adding refresh functionality for the meta table to ensure new metadata is 
> picked up on the read replica cluster.
> Adding refresh functionality for HFiles for a given table to ensure new data 
> is picked up on the read replica cluster.
>  
> This can be used with any existing cluster that is backed by an external 
> filesystem.
>  
> Please note that this feature is still quite manual (with the potential for 
> automation later).
>  
> More information on this particular feature can be found here: 
> https://aws.amazon.com/blogs/big-data/setting-up-read-replica-clusters-with-hbase-on-amazon-s3/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19034) Implement "optimize SEEK to SKIP" in storefile scanner

2018-08-13 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578837#comment-16578837
 ] 

Lars Hofhansl commented on HBASE-19034:
---

Thinking about this one now. It's a bit more tricky than anticipated, since the 
information about whether we're skipping a row or a column is lost at the 
\{StoreFile|HFile}Scanner or reader levels.

> Implement "optimize SEEK to SKIP" in storefile scanner
> --
>
> Key: HBASE-19034
> URL: https://issues.apache.org/jira/browse/HBASE-19034
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Priority: Major
>
> {code}
>   protected boolean trySkipToNextRow(Cell cell) throws IOException {
> Cell nextCell = null;
> do { 
>   Cell nextIndexedKey = getNextIndexedKey();
>   if (nextIndexedKey != null && nextIndexedKey != 
> KeyValueScanner.NO_NEXT_INDEXED_KEY
>   && matcher.compareKeyForNextRow(nextIndexedKey, cell) >= 0) { 
> this.heap.next();
> ++kvsScanned;
>   } else {
> return false;
>   }
> } while ((nextCell = this.heap.peek()) != null && 
> CellUtil.matchingRows(cell, nextCell));
> return true;
>   }
> {code}
> When the SQM returns SEEK_NEXT_ROW, the store scanner will seek to the cell of 
> the next row. HBASE-13109 optimized the SEEK into a SKIP when we can read the 
> cell in the currently loaded block, so it skips by calling heap.next until it 
> reaches the cell of the next row. But the problem is that it compares against 
> the nextIndexedKey too many times in the while loop. We plan to move the 
> compare outside the loop to reduce the number of comparisons. One problem is 
> that the nextIndexedKey may change when we call heap.peek, because the current 
> storefile scanner may have changed. So my proposal is to move the "optimize 
> SEEK to SKIP" logic into the storefile scanner. When we call seek on a 
> storefile scanner, it may do a real seek or implement the seek as several 
> skips.
> Any suggestions are welcomed. Thanks.
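
As a rough illustration of the proposal (a toy model with longs standing in for 
cells, not the HBase scanner code), the comparison against nextIndexedKey can 
be done once, and the seek then satisfied either by cheap in-block skips or by 
a real seek:

{code:java}
import java.util.Iterator;
import java.util.TreeSet;

// Toy sketch: decide once whether the target lies before the next indexed key;
// if so, "skip" forward within the current block, otherwise do a "real seek".
public class SeekToSkipSketch {
  static Long seekOrSkip(TreeSet<Long> block, Iterator<Long> it, long target,
      Long nextIndexedKey) {
    if (nextIndexedKey != null && target < nextIndexedKey) {
      while (it.hasNext()) {
        long cell = it.next();        // skip: just advance within the block
        if (cell >= target) {
          return cell;
        }
      }
      return null;
    }
    return block.ceiling(target);     // real seek: jump via the index structure
  }

  public static void main(String[] args) {
    TreeSet<Long> block = new TreeSet<>();
    for (long i = 0; i < 100; i += 5) {
      block.add(i);
    }
    System.out.println(seekOrSkip(block, block.iterator(), 42, 100L)); // 45 via skips
    System.out.println(seekOrSkip(block, block.iterator(), 42, 10L));  // 45 via real seek
  }
}
{code}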



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21033) Separate the StoreHeap from StoreFileHeap

2018-08-13 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578836#comment-16578836
 ] 

Lars Hofhansl commented on HBASE-21033:
---

Any comments? The patch can be smaller if I do not rename the main classes and 
instead just add the single new one with the next(...) methods moved.

> Separate the StoreHeap from StoreFileHeap
> -
>
> Key: HBASE-21033
> URL: https://issues.apache.org/jira/browse/HBASE-21033
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: 21033-branch-1.txt
>
>
> Currently KeyValueHeap is used for both heaps of StoreScanners at the Region 
> level and heaps of StoreFileScanners (plus a MemstoreScanner) at the Store 
> level.
> This has various problems:
>  # Some incorrect method usage can only be detected at runtime via a runtime 
> exception (a sketch of the type-level fix follows below).
>  # In profiling sessions it's hard to distinguish the two.
>  # It's just not clean :)
>  
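
As a rough sketch of point 1 above (hypothetical types, not the attached 
patch): once the two heaps have distinct element types, passing the wrong kind 
of scanner becomes a compile-time error rather than a runtime exception.

{code:java}
import java.util.PriorityQueue;

// Hypothetical illustration of splitting the single heap type in two.
public class SeparateHeapsSketch {
  interface CellScanner { long peekKey(); }
  interface StoreScanner extends CellScanner {}      // Region-level scanners
  interface StoreFileScanner extends CellScanner {}  // Store-level scanners

  // Heap of StoreScanners, used at the Region level.
  static final class StoreScannerHeap {
    private final PriorityQueue<StoreScanner> heap =
        new PriorityQueue<>((a, b) -> Long.compare(a.peekKey(), b.peekKey()));
    void add(StoreScanner s) { heap.add(s); }
    StoreScanner peek() { return heap.peek(); }
  }

  // Heap of StoreFileScanners (plus a memstore scanner), used at the Store level.
  static final class StoreFileScannerHeap {
    private final PriorityQueue<StoreFileScanner> heap =
        new PriorityQueue<>((a, b) -> Long.compare(a.peekKey(), b.peekKey()));
    void add(StoreFileScanner s) { heap.add(s); }
    StoreFileScanner peek() { return heap.peek(); }
  }

  public static void main(String[] args) {
    StoreScannerHeap regionHeap = new StoreScannerHeap();
    regionHeap.add(() -> 1L);  // ok: a StoreScanner
    // regionHeap.add((StoreFileScanner) () -> 2L); // would not compile: wrong scanner type
    System.out.println(regionHeap.peek().peekKey());
  }
}
{code}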



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21036) Upgrade our hadoop 3 dependency to 3.1.1

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21036:
--
Fix Version/s: (was: 2.0.2)

> Upgrade our hadoop 3 dependency to 3.1.1
> 
>
> Key: HBASE-21036
> URL: https://issues.apache.org/jira/browse/HBASE-21036
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, hadoop3
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
>
> https://lists.apache.org/thread.html/895f28e0941b37f006812afa383ff8ff9148fafc4a5be385aebd0fa1@%3Cgeneral.hadoop.apache.org%3E
> 3.1.1 is a stable release. And since 3.0.x is not compatible with hbase due 
> to YARN-7190, we should upgrade our dependency directly to the 3.1.x line.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20540) [umbrella] Hadoop 3 compatibility

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20540:
--
Fix Version/s: (was: 2.0.2)
   3.0.0

> [umbrella] Hadoop 3 compatibility
> -
>
> Key: HBASE-20540
> URL: https://issues.apache.org/jira/browse/HBASE-20540
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.1.1
>
>
> There are known issues with hadoop 3 compatibility for hbase 2, but hadoop 3 
> is still not production ready. So we will link the issues here; once there is 
> a production-ready hadoop 3 release, we will fix these issues, upgrade our 
> hadoop dependencies, and update the support matrix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20145) HMaster start fails with IllegalStateException when HADOOP_HOME is set

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20145:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: HBASE-20540)

> HMaster start fails with IllegalStateException when HADOOP_HOME is set
> --
>
> Key: HBASE-20145
> URL: https://issues.apache.org/jira/browse/HBASE-20145
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0-beta-1
> Environment: HBase-2.0-beta1.
> Hadoop trunk version.
> java version "1.8.0_144"
>Reporter: Rohith Sharma K S
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-20145.master.001.patch
>
>
> It is observed that HMaster fails to start when HADOOP_HOME is set as an env 
> variable while starting HMaster. HADOOP_HOME points to a Hadoop trunk version.
> {noformat}
> 2018-03-07 16:59:52,654 ERROR [master//10.200.4.200:16000] master.HMaster: 
> Failed to become active master
> java.lang.IllegalStateException: The procedure WAL relies on the ability to 
> hsync for proper operation during component failures, but the underlying 
> filesystem does not support doing so. Please check the config value of 
> 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness 
> and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount 
> that can provide it.
>     at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1036)
>     at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:374)
>     at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:532)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1232)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1145)
>     at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:837)
>     at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2026)
>     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:547)
>     at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The same configs work properly with the HBase-1.2.6 build.
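
As a side note, one way to check up front whether the filesystem behind 
hbase.wal.dir can actually provide hsync is to probe the output stream's 
capabilities. The sketch below assumes a Hadoop 2.9+/3.x client (where 
FSDataOutputStream exposes hasCapability) and is only an illustration, not part 
of the attached patch.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Probe whether the target filesystem reports the "hsync" stream capability.
public class HsyncProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path probe = new Path(args.length > 0 ? args[0] : "/tmp/hsync-probe");
    try (FileSystem fs = probe.getFileSystem(conf);
         FSDataOutputStream out = fs.create(probe, true)) {
      // "hsync" is the capability name behind StreamCapabilities.HSYNC.
      boolean hsync = out.hasCapability("hsync");
      System.out.println("hsync supported on " + fs.getUri() + ": " + hsync);
    }
  }
}
{code}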



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20713) Revisit why to removeFromRunQueue in MasterProcedureExecutor's doPoll method

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20713:
--
Fix Version/s: (was: 2.0.2)

> Revisit why to removeFromRunQueue in MasterProcedureExecutor's doPoll method 
> -
>
> Key: HBASE-20713
> URL: https://issues.apache.org/jira/browse/HBASE-20713
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Priority: Major
>
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureScheduler.java#L210
> {code:java}
> if (rq.isEmpty() || xlockReq) {
>   removeFromRunQueue(fairq, rq);
> } else if (rq.getLockStatus().hasParentLock(pollResult)) {
>   // if the rq is in the fairq because of runnable child
>   // check if the next procedure is still a child.
>   // if not, remove the rq from the fairq and go back to the xlock state
>   Procedure nextProc = rq.peek();
>   if (nextProc != null && !Procedure.haveSameParent(nextProc, 
> pollResult)) {
> removeFromRunQueue(fairq, rq);
>   }
> }
> {code}
> Here is the comment on why we remove from the run queue. If I am not wrong, 
> the assumption here is that the parent procedure should require an exclusive 
> lock. So if the nextProc is a child but has a different parent than the 
> current procedure, we can remove it from the run queue.
> But there may be three tiers of procedures: Procedure A's child is Procedure 
> B, and Procedure B's child is Procedure C. Only Procedure A needs an 
> exclusive lock, and Procedures B and C don't require one. The condition 
> (!Procedure.haveSameParent(nextProc, pollResult)) is not right for this case?
> FYI [~stack]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20713) Revisit why to removeFromRunQueue in MasterProcedureExecutor's doPoll method

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578811#comment-16578811
 ] 

stack commented on HBASE-20713:
---

You are right [~zghaobac]. Rare is the case of a three-tiered Procedure but 
possible (I seem to remember we have at least one instance but can't recall at 
the moment).

> Revisit why to removeFromRunQueue in MasterProcedureExecutor's doPoll method 
> -
>
> Key: HBASE-20713
> URL: https://issues.apache.org/jira/browse/HBASE-20713
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Priority: Major
> Fix For: 2.0.2
>
>
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureScheduler.java#L210
> {code:java}
> if (rq.isEmpty() || xlockReq) {
>   removeFromRunQueue(fairq, rq);
> } else if (rq.getLockStatus().hasParentLock(pollResult)) {
>   // if the rq is in the fairq because of runnable child
>   // check if the next procedure is still a child.
>   // if not, remove the rq from the fairq and go back to the xlock state
>   Procedure nextProc = rq.peek();
>   if (nextProc != null && !Procedure.haveSameParent(nextProc, 
> pollResult)) {
> removeFromRunQueue(fairq, rq);
>   }
> }
> {code}
> Here is the comment on why we remove from the run queue. If I am not wrong, 
> the assumption here is that the parent procedure should require an exclusive 
> lock. So if the nextProc is a child but has a different parent than the 
> current procedure, we can remove it from the run queue.
> But there may be three tiers of procedures: Procedure A's child is Procedure 
> B, and Procedure B's child is Procedure C. Only Procedure A needs an 
> exclusive lock, and Procedures B and C don't require one. The condition 
> (!Procedure.haveSameParent(nextProc, pollResult)) is not right for this case?
> FYI [~stack]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20772) Controlled shutdown fills Master log with the disturbing message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX transition to CLOSED

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20772:
--
 Assignee: stack
Affects Version/s: 2.0.1
   Status: Patch Available  (was: Open)

> Controlled shutdown fills Master log with the disturbing message "No matching 
> procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX 
> transition to CLOSED
> 
>
> Key: HBASE-20772
> URL: https://issues.apache.org/jira/browse/HBASE-20772
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Affects Versions: 2.0.1
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.2
>
> Attachments: HBASE-20772.branch-2.0.001.patch, 
> HBASE-20772.branch-2.0.002.patch
>
>
> I just saw this and was disturbed but this is a controlled shutdown. Change 
> message so it doesn't freak out the poor operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20772) Controlled shutdown fills Master log with the disturbing message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX transition to CLOSE

2018-08-13 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578804#comment-16578804
 ] 

stack commented on HBASE-20772:
---

.002 Looks for the unusual case where a region is closed without the Master 
being involved, and emits a custom log message in that case. See below for an 
example.

2018-08-13 11:59:41,841 INFO  
[RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000] 
assignment.AssignmentManager: RegionServer CLOSED 
c092ea7b14ceada4a8f7af2e62a9cdd7

> Controlled shutdown fills Master log with the disturbing message "No matching 
> procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX 
> transition to CLOSED
> 
>
> Key: HBASE-20772
> URL: https://issues.apache.org/jira/browse/HBASE-20772
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Major
> Fix For: 2.0.2
>
> Attachments: HBASE-20772.branch-2.0.001.patch, 
> HBASE-20772.branch-2.0.002.patch
>
>
> I just saw this and was disturbed but this is a controlled shutdown. Change 
> message so it doesn't freak out the poor operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20772) Controlled shutdown fills Master log with the disturbing message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX transition to CLOSED

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20772:
--
Attachment: HBASE-20772.branch-2.0.002.patch

> Controlled shutdown fills Master log with the disturbing message "No matching 
> procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX 
> transition to CLOSED
> 
>
> Key: HBASE-20772
> URL: https://issues.apache.org/jira/browse/HBASE-20772
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Major
> Fix For: 2.0.2
>
> Attachments: HBASE-20772.branch-2.0.001.patch, 
> HBASE-20772.branch-2.0.002.patch
>
>
> I just saw this and was disturbed but this is a controlled shutdown. Change 
> message so it doesn't freak out the poor operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19008) Add missing equals or hashCode method(s) to stock Filter implementations

2018-08-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578798#comment-16578798
 ] 

Hadoop QA commented on HBASE-19008:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
32s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} hbase-client: The patch generated 24 new + 337 
unchanged - 84 fixed = 361 total (was 421) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
18s{color} | {color:red} hbase-server: The patch generated 10 new + 404 
unchanged - 2 fixed = 414 total (was 406) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} hbase-spark: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m  2s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} hbase-client generated 1 new + 2 unchanged - 0 fixed = 
3 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
1s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}113m 
28s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
56s{color} | {color:green} hbase-spark in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-19008 |
| JIRA Patch URL | 

[jira] [Updated] (HBASE-20772) Controlled shutdown fills Master log with the disturbing message "No matching procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX transition to CLOSED

2018-08-13 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20772:
--
Attachment: HBASE-20772.branch-2.0.001.patch

> Controlled shutdown fills Master log with the disturbing message "No matching 
> procedure found for rit=OPEN, location=ZZZZ, table=YYYYY, region=XXXX 
> transition to CLOSED
> 
>
> Key: HBASE-20772
> URL: https://issues.apache.org/jira/browse/HBASE-20772
> Project: HBase
>  Issue Type: Bug
>  Components: logging
>Reporter: stack
>Priority: Major
> Fix For: 2.0.2
>
> Attachments: HBASE-20772.branch-2.0.001.patch
>
>
> I just saw this and was disturbed but this is a controlled shutdown. Change 
> message so it doesn't freak out the poor operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21018) RS crashed because AsyncFS was unable to update HDFS data encryption key

2018-08-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578794#comment-16578794
 ] 

Wei-Chiu Chuang commented on HBASE-21018:
-

Thanks [~stack], [~yuzhih...@gmail.com] and [~Apache9] for the review!

> RS crashed because AsyncFS was unable to update HDFS data encryption key
> 
>
> Key: HBASE-21018
> URL: https://issues.apache.org/jira/browse/HBASE-21018
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
> Environment: Hadoop 3.0.0, HBase 2.0.0, 
> HDFS configuration dfs.encrypt.data.transfer = true
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21018.master.001.patch
>
>
> We (+[~uagashe]) found that the HBase RegionServer doesn't update the HDFS 
> data encryption key correctly, and in some cases, after retrying 10 times, it 
> aborts.
> {noformat}
> 2018-08-03 17:37:03,233 WARN 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper: create 
> fan-out dfs output 
> /hbase/WALs/rs1.example.com,22101,1533318719239/rs1.example.com%2C22101%2C1533318719239.rs1.example.com%2C22101%2C1533318719239.regiongroup-0.1533343022981
>  failed, retry = 1
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=1685436998) doesn't exist. Current key: 1085959374
> at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper$SaslNegotiateHandler.check(FanOutOneBlockAsyncDFSOutputSaslHelper.java:399)
> at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper$SaslNegotiateHandler.channelRead(FanOutOneBlockAsyncDFSOutputSaslHelper.java:470)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
> at 
> 

[jira] [Commented] (HBASE-20705) Having RPC Quota on a table prevents Space quota to be recreated/removed

2018-08-13 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578786#comment-16578786
 ] 

Sakthi commented on HBASE-20705:


[~elserj], I agree and I'll keep that in mind. Thanks for your advice and for 
confirming as well. I should have taken a look at the flaky beforehand. My bad.

If the patch looks ok, could you please +1?

> Having RPC Quota on a table prevents Space quota to be recreated/removed
> 
>
> Key: HBASE-20705
> URL: https://issues.apache.org/jira/browse/HBASE-20705
> Project: HBase
>  Issue Type: Bug
>Reporter: Biju Nair
>Assignee: Sakthi
>Priority: Major
> Attachments: hbase-20705.master.001.patch
>
>
> * Property {{hbase.quota.remove.on.table.delete}} is set to {{true}} by 
> default
>  * Create a table and set RPC and Space quota
> {noformat}
> hbase(main):022:0> create 't2','cf1'
> Created table t2
> Took 0.7420 seconds
> => Hbase::Table - t2
> hbase(main):023:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', 
> POLICY => NO_WRITES
> Took 0.0105 seconds
> hbase(main):024:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 
> '10M/sec'
> Took 0.0186 seconds
> hbase(main):025:0> list_quotas
> TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 
> 10M/sec, SCOPE => MACHINE
> TABLE => t2 TYPE => SPACE, TABLE => t2, LIMIT => 1073741824, VIOLATION_POLICY 
> => NO_WRITES{noformat}
>  * Drop the table and the Space quota is set to {{REMOVE => true}}
> {noformat}
> hbase(main):026:0> disable 't2'
> Took 0.4363 seconds
> hbase(main):027:0> drop 't2'
> Took 0.2344 seconds
> hbase(main):028:0> list_quotas
> TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true
> USER => u1 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 10M/sec, 
> SCOPE => MACHINE{noformat}
>  * Recreate the table and set Space quota back. The Space quota on the table 
> is still set to {{REMOVE => true}}
> {noformat}
> hbase(main):029:0> create 't2','cf1'
> Created table t2
> Took 0.7348 seconds
> => Hbase::Table - t2
> hbase(main):031:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', 
> POLICY => NO_WRITES
> Took 0.0088 seconds
> hbase(main):032:0> list_quotas
> OWNER QUOTAS
> TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 
> 10M/sec, SCOPE => MACHINE
> TABLE => t2 TYPE => SPACE, TABLE => t2, REMOVE => true{noformat}
>  * Remove RPC quota and drop the table, the Space Quota is not removed
> {noformat}
> hbase(main):033:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => NONE
> Took 0.0193 seconds
> hbase(main):036:0> disable 't2'
> Took 0.4305 seconds
> hbase(main):037:0> drop 't2'
> Took 0.2353 seconds
> hbase(main):038:0> list_quotas
> OWNER QUOTAS
> TABLE => t2                               TYPE => SPACE, TABLE => t2, REMOVE 
> => true{noformat}
>  * Deleting the quota entry from {{hbase:quota}} seems to be the option to 
> reset it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20257) hbase-spark should not depend on com.google.code.findbugs.jsr305

2018-08-13 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20257:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Artem.

Thanks for the review, Sean.

> hbase-spark should not depend on com.google.code.findbugs.jsr305
> 
>
> Key: HBASE-20257
> URL: https://issues.apache.org/jira/browse/HBASE-20257
> Project: HBase
>  Issue Type: Task
>  Components: build, spark
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Artem Ervits
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-20257.v01.patch, HBASE-20257.v02.patch, 
> HBASE-20257.v03.patch, HBASE-20257.v04.patch, HBASE-20257.v05.patch, 
> HBASE-20257.v06.patch
>
>
> The following can be observed in the build output of master branch:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed 
> with message:
> We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
> Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9
> Use 'mvn dependency:tree' to locate the source of the banned dependencies.
> {code}
> Here is related snippet from hbase-spark/pom.xml:
> {code}
> 
>   <groupId>com.google.code.findbugs</groupId>
>   <artifactId>jsr305</artifactId>
> {code}
> Dependency on jsr305 should be dropped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

