[GitHub] [hbase] Apache9 commented on pull request #3325: HBASE-25934 Add username for RegionScannerHolder

2021-06-22 Thread GitBox


Apache9 commented on pull request #3325:
URL: https://github.com/apache/hbase/pull/3325#issuecomment-866550414


   Sorry for the delay...
   
   @virajjasani Could you please help merge this PR? I have a meeting soon.
   
   Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] tomscut commented on pull request #3325: HBASE-25934 Add username for RegionScannerHolder

2021-06-22 Thread GitBox


tomscut commented on pull request #3325:
URL: https://github.com/apache/hbase/pull/3325#issuecomment-866523876


   > +1
   
   Thanks @virajjasani for your review.






[jira] [Commented] (HBASE-26024) Region server JVM Crash - A fatal error has been detected by the Java Runtime Environment

2021-06-22 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367840#comment-17367840
 ] 

Anoop Sam John commented on HBASE-26024:


bq. we used the workaround by switching hbase.rpc.server.impl back to 
SimpleRpcServer as mentioned in the following JIRA
Is the crash still happening after that?

> Region server JVM Crash - A fatal error has been detected by the Java Runtime 
> Environment
> -
>
> Key: HBASE-26024
> URL: https://issues.apache.org/jira/browse/HBASE-26024
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.9
>Reporter: Mohamed Mohideen Meeran
>Priority: Major
> Attachments: error.log
>
>
> Our production region servers' JVMs crashed with the following error logs.
>  
> Register to memory mapping:
> RAX=0x2eea8f42 is an unknown value
> RBX=0x7f1f8c7900d6 is an unknown value
> RCX=0x0021 is an unknown value
> RDX=0x is an unknown value
> RSP=0x7f1fe3092200 is pointing into the stack for thread: 
> 0x7f29775fb000
> RBP=0x7f1fe3092200 is pointing into the stack for thread: 
> 0x7f29775fb000
> RSI=0x7f1f8c7900cc is an unknown value
> RDI=0x2eea8f38 is an unknown value
> R8 =0x7f28e14a3a38 is an oop
> java.nio.DirectByteBuffer
>  - klass: 'java/nio/DirectByteBuffer'
> R9 =0x7f1f8c790094 is an unknown value
> R10=0x7f2965053400 is at begin+0 in a stub
> StubRoutines::unsafe_arraycopy [0x7f2965053400, 0x7f296505343b[ (59 
> bytes)
> R11=0x7f28e14a3a38 is an oop
> java.nio.DirectByteBuffer
>  - klass: 'java/nio/DirectByteBuffer'
> R12=
> [error occurred during error reporting (printing register info), id 0xb]
>  
> Stack: [0x7f1fe2f93000,0x7f1fe3094000],  sp=0x7f1fe3092200,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> v  ~StubRoutines::jshort_disjoint_arraycopy
> J 18388 C2 
> org.apache.hadoop.hbase.io.ByteBufferListOutputStream.write(Ljava/nio/ByteBuffer;II)V
>  (53 bytes) @ 0x7f2967fa0ea2 [0x7f2967fa0d40+0x162]
> J 11722 C2 
> org.apache.hadoop.hbase.util.ByteBufferUtils.copyBufferToStream(Ljava/io/OutputStream;Ljava/nio/ByteBuffer;II)V
>  (75 bytes) @ 0x7f29670aa0fc [0x7f29670a9fa0+0x15c]
> J 13251 C2 
> org.apache.hadoop.hbase.ByteBufferKeyValue.write(Ljava/io/OutputStream;Z)I 
> (21 bytes) @ 0x7f2965cbe87c [0x7f2965cbe820+0x5c]
> J 8703 C2 
> org.apache.hadoop.hbase.KeyValueUtil.oswrite(Lorg/apache/hadoop/hbase/Cell;Ljava/io/OutputStream;Z)I
>  (259 bytes) @ 0x7f296684a2d4 [0x7f296684a140+0x194]
> J 15474 C2 
> org.apache.hadoop.hbase.ipc.CellBlockBuilder.buildCellBlockStream(Lorg/apache/hadoop/hbase/codec/Codec;Lorg/apache/hadoop/io/compress/CompressionCodec;Lorg/apache/hadoop/hbase/CellScanner;Lorg/apache/hadoop/hbase/io/ByteBufferPool;)Lorg/apache/hadoop/hbase/io/ByteBufferListOutputStream;
>  (75 bytes) @ 0x7f29675f9dc8 [0x7f29675f7c80+0x2148]
> J 14260 C2 
> org.apache.hadoop.hbase.ipc.ServerCall.setResponse(Lorg/apache/hbase/thirdparty/com/google/protobuf/Message;Lorg/apache/hadoop/hbase/CellScanner;Ljava/lang/Throwable;Ljava/lang/String;)V
>  (408 bytes) @ 0x7f29678ad11c [0x7f29678acec0+0x25c]
> J 14732 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1376 bytes) @ 
> 0x7f296797f690 [0x7f296797e6a0+0xff0]
> J 14293 C2 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(Lorg/apache/hadoop/hbase/ipc/CallRunner;)V
>  (268 bytes) @ 0x7f29667b7464 [0x7f29667b72e0+0x184]
> J 17796% C1 org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run()V (72 bytes) 
> @ 0x7f2967c9cbe4 [0x7f2967c9caa0+0x144]
> v  ~StubRoutines::call_stub
> V  [libjvm.so+0x65ebbb]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x108b
> V  [libjvm.so+0x65ffd7]  JavaCalls::call_virtual(JavaValue*, KlassHandle, 
> Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x2f7
> V  [libjvm.so+0x660497]  JavaCalls::call_virtual(JavaValue*, Handle, 
> KlassHandle, Symbol*, Symbol*, Thread*)+0x47
> V  [libjvm.so+0x6ada71]  thread_entry(JavaThread*, Thread*)+0x91
> V  [libjvm.so+0x9f24f1]  JavaThread::thread_main_inner()+0xf1
> V  [libjvm.so+0x9f26d8]  JavaThread::run()+0x1b8
> V  [libjvm.so+0x8af502]  java_start(Thread*)+0x122
> C  [libpthread.so.0+0x7dc5]  start_thread+0xc5
>  
> We used the workaround of switching *hbase.rpc.server.impl* back to 
> SimpleRpcServer, as mentioned in the following JIRA:
> https://issues.apache.org/jira/browse/HBASE-22539?focusedCommentId=16855688=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16855688
> We have also attached the error logs from the JVM crash. Any help would be appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3413: HBASE-21674 complement the admin operations in thrift2

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3413:
URL: https://github.com/apache/hbase/pull/3413#issuecomment-866490203


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-1 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  10m 13s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  compile  |   0m 33s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  branch-1 passed  |
   | -1 :x: |  shadedjars  |   0m 23s |  branch has 7 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   2m 45s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +0 :ok: |  spotbugs  |   2m 16s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 13s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  javac  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  shadedjars  |   0m 13s |  patch has 7 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 31s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  findbugs  |   2m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 35s |  hbase-thrift in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 23s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  36m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3413 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 4cd42266a8f9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-3413/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / fd2f8a5 |
   | Default Java | Azul Systems, Inc.-1.7.0_272-b10 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, 
Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 |
   | shadedjars | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/2/artifact/out/branch-shadedjars.txt
 |
   | shadedjars | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/2/artifact/out/patch-shadedjars.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/2/artifact/out/patch-unit-hbase-thrift.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/2/testReport/
 |
   | Max. process+thread count | 87 (vs. ulimit of 1) |
   | modules | C: hbase-thrift U: hbase-thrift |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/2/console
 |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   

[GitHub] [hbase] ddupg commented on a change in pull request #3405: HBASE-26011 Introduce a new API to sync the live region server list m…

2021-06-22 Thread GitBox


ddupg commented on a change in pull request #3405:
URL: https://github.com/apache/hbase/pull/3405#discussion_r656720923



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
##
@@ -163,6 +169,44 @@
   private final ConcurrentNavigableMap<ServerName, ServerMetrics> onlineServers =
     new ConcurrentSkipListMap<>();
 
+  /**
+   * Store the snapshot of the current region server list, for improving read performance.
+   *
+   * The hashCode is used to determine whether there are changes to the region servers.
+   */
+  private static final class OnlineServerListSnapshot {
+
+    private static final HashFunction HASH = Hashing.murmur3_128();
+
+    final List<ServerName> servers;
+
+    final long hashCode;
+
+    public OnlineServerListSnapshot(List<ServerName> servers) {
+      this.servers = Collections.unmodifiableList(servers);
+      Hasher hasher = HASH.newHasher();
+      for (ServerName server : servers) {
+        hasher.putString(server.getServerName(), StandardCharsets.UTF_8);
+      }
+      this.hashCode = hasher.hash().asLong();

Review comment:
   Do hash collisions need to be considered, even though the probability is small?
   Also, if a server goes online and then offline between two client calls, the hashCode is the same for the client; is this expected?








[jira] [Commented] (HBASE-25877) Add access check for compactionSwitch

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367821#comment-17367821
 ] 

Hudson commented on HBASE-25877:


Results for branch branch-2.4
[build #148 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/148/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/148/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/148/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/148/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/148/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add access check for compactionSwitch
> --
>
> Key: HBASE-25877
> URL: https://issues.apache.org/jira/browse/HBASE-25877
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5
>
>
> Should we add an access check for 
> org.apache.hadoop.hbase.regionserver.RSRpcServices#compactionSwitch?
>  
>  





[jira] [Commented] (HBASE-25937) Clarify UnknownRegionException

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367820#comment-17367820
 ] 

Hudson commented on HBASE-25937:


Results for branch branch-2.4
[build #148 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/148/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/148/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/148/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/148/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/148/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Clarify UnknownRegionException
> --
>
> Key: HBASE-25937
> URL: https://issues.apache.org/jira/browse/HBASE-25937
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5
>
>
> UnknownRegionException seems to accept a "region name" but it's actually a 
> normal Exception message (and is used that way).  Fix this to be "message" 
> and add a "cause" capability as well.





[jira] [Resolved] (HBASE-26019) Remove reflections used in HBaseConfiguration.getPassword()

2021-06-22 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-26019.
-
Resolution: Fixed

Thanks again for the review, [~wchevreuil] [~vjasani] and [~tomscut]

The commit has been cherry-picked to branch-2. Let me know if it should go to lower 
branches.

> Remove reflections used in HBaseConfiguration.getPassword()
> ---
>
> Key: HBASE-26019
> URL: https://issues.apache.org/jira/browse/HBASE-26019
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0
>
>
> HBaseConfiguration.getPassword() uses the Hadoop API Configuration.getPassword(), 
> which was added in Hadoop 2.6.0; reflection was used to access it. 
> It's time to remove the reflection and invoke the API directly (in HBase 3.0 as 
> well as 2.x).





[GitHub] [hbase] tomscut commented on pull request #3325: HBASE-25934 Add username for RegionScannerHolder

2021-06-22 Thread GitBox


tomscut commented on pull request #3325:
URL: https://github.com/apache/hbase/pull/3325#issuecomment-866451946


   Hi @jojochuang , could you please take a look at this? Thank you.






[GitHub] [hbase] jojochuang merged pull request #3408: HBASE-26019 Remove reflections used in HBaseConfiguration.getPassword()

2021-06-22 Thread GitBox


jojochuang merged pull request #3408:
URL: https://github.com/apache/hbase/pull/3408


   






[GitHub] [hbase] jojochuang commented on pull request #3408: HBASE-26019 Remove reflections used in HBaseConfiguration.getPassword()

2021-06-22 Thread GitBox


jojochuang commented on pull request #3408:
URL: https://github.com/apache/hbase/pull/3408#issuecomment-866445027


   Thanks a lot for the review, @wchevreuil  @virajjasani  and @tomscut . I'll 
merge it.






[jira] [Commented] (HBASE-22769) Runtime Error on join (with filter) when using hbase-spark connector

2021-06-22 Thread Nero Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367794#comment-17367794
 ] 

Nero Wang commented on HBASE-22769:
---

I also put scala-library-2.12.13 and the SparkHbase_Connector jar under HBase/lib, 
and the problem was solved; I agree with Luca Canali.

The connector fixed by the Apache project can be downloaded here: 
[https://github.com/HAHAHAHA123456/SparkHbaseConnector-3.1.1]

A separate problem: it is unreasonable to put jars into the service directory, as 
it makes the service messy. Is there another way to fix this?

 

> Runtime Error on join (with filter) when using hbase-spark connector
> 
>
> Key: HBASE-22769
> URL: https://issues.apache.org/jira/browse/HBASE-22769
> Project: HBase
>  Issue Type: Bug
>  Components: hbase-connectors
>Affects Versions: connector-1.0.0
> Environment: Built using maven scala plugin on intellij IDEA with 
> Maven 3.3.9. Ran on Azure HDInsight Spark cluster using Yarn. 
> Spark version: 2.4.0
> Scala version: 2.11.12
> hbase-spark version: 1.0.0
>Reporter: Noah Banholzer
>Priority: Blocker
>
> I am attempting to do a left outer join (though any join with a push down 
> filter causes this issue) between a Spark Structured Streaming DataFrame and 
> a DataFrame read from HBase. I get the following stack trace when running a 
> simple spark app that reads from a streaming source and attempts to left 
> outer join with a dataframe read from HBase:
> {{19/07/30 18:30:25 INFO DAGScheduler: ShuffleMapStage 1 (start at 
> SparkAppTest.scala:88) failed in 3.575 s due to Job aborted due to stage 
> failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 
> 0.3 in stage 1.0 (TID 10, 
> wn5-edpspa.hnyo2upsdeau1bffc34wwrkgwc.ex.internal.cloudapp.net, executor 2): 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> java.lang.reflect.InvocationTargetException at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1609)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:1154)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2967)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3301)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) 
> Caused by: java.lang.reflect.InvocationTargetException at 
> sun.reflect.GeneratedMethodAccessor15461.invoke(Unknown Source) at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1605)
>  }}
> {{... 8 more }}
> {{Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/hbase/spark/datasources/JavaBytesEncoder$ at 
> org.apache.hadoop.hbase.spark.datasources.JavaBytesEncoder.create(JavaBytesEncoder.scala)
>  at 
> org.apache.hadoop.hbase.spark.SparkSQLPushDownFilter.parseFrom(SparkSQLPushDownFilter.java:196)
>  }}
> {{... 12 more }}
> {{at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
>  at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:359)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:347)
>  at 
> org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:344)
>  at 
> org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:242)
>  at 
> org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:58)
>  at 
> org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:127)
>  at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
>  at 
> 

[jira] [Commented] (HBASE-26023) Overhaul of test cluster set up for table skew

2021-06-22 Thread Clara Xiong (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367770#comment-17367770
 ] 

Clara Xiong commented on HBASE-26023:
-

 
{code:java}
2021-06-22T15:51:12,959 INFO  [Time-limited test] 
balancer.TestStochasticLoadBalancerBalanceCluster(67): Mock Balance : { 
srv1410554708:1 , srv2138765763:1 }
2021-06-22T15:51:12,959 INFO  [Time-limited test] 
balancer.BaseLoadBalancer(611): Start Generate Balance plan for cluster.
2021-06-22T15:51:12,960 INFO  [Time-limited test] 
balancer.BalancerClusterState(293): number of table = 2
2021-06-22T15:51:12,960 INFO  [Time-limited test] 
balancer.BalancerClusterState(313): max updated to 1 for table 0
2021-06-22T15:51:12,960 INFO  [Time-limited test] 
balancer.BalancerClusterState(313): max updated to 1 for table 1
2021-06-22T15:51:12,960 DEBUG [Time-limited test] 
balancer.RegionCountSkewCostFunction(53): RegionCountSkewCostFunction sees a 
total of 2 servers and 2 regions.
2021-06-22T15:51:12,960 INFO  [Time-limited test] 
balancer.TableSkewCostFunction(51): min = 1.0, max = 2.0, cost= 2.0
2021-06-22T15:51:12,960 DEBUG [Time-limited test] 
balancer.StochasticLoadBalancer(347): We need to load balance cluster; total 
cost=35.0, sum multiplier=582.0; cost/multiplier to need a balance is 0.05
2021-06-22T15:51:12,960 INFO  [Time-limited test] 
balancer.TableSkewCostFunction(51): min = 1.0, max = 2.0, cost= 2.0
2021-06-22T15:51:12,960 INFO  [Time-limited test] 
balancer.TableSkewCostFunction(51): min = 1.0, max = 2.0, cost= 2.0
2021-06-22T15:51:12,960 INFO  [Time-limited test] 
balancer.StochasticLoadBalancer(426): start StochasticLoadBalancer.balancer, 
initCost=35.0, functionCost=RegionCountSkewCostFunction : (500.0, 0.0); 
PrimaryRegionCountSkewCostFunction : (not needed); MoveCostFunction : (7.0, 
0.0); RackLocalityCostFunction : (15.0, 0.0); TableSkewCostFunction : (35.0, 
1.0); RegionReplicaHostCostFunction : (not needed); 
RegionReplicaRackCostFunction : (not needed); ReadRequestCostFunction : (5.0, 
0.0); CPRequestCostFunction : (5.0, 0.0); WriteRequestCostFunction : (5.0, 
0.0); MemStoreSizeCostFunction : (5.0, 0.0); StoreFileCostFunction : (5.0, 
0.0);  computedMaxSteps: 3200
{code}
 

> Overhaul of test cluster set up for table skew
> --
>
> Key: HBASE-26023
> URL: https://issues.apache.org/jira/browse/HBASE-26023
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer, test
> Environment: {code:java}
>  {code}
>Reporter: Clara Xiong
>Priority: Major
>
> There is another bug in the original tableSkew cost function, in the aggregation 
> of the cost per table:
> If we have 10 regions, one per table, evenly distributed on 10 nodes, the 
> cost is scaled to 1.0.
> The more tables we have, the closer the value gets to 1.0, and the cost 
> function becomes useless.
> All the balancer tests were set up with large numbers of tables and minimal 
> regions per table. This artificially inflates the total cost and triggers 
> balancer runs. With this fix on TableSkewFunction, we need to overhaul the 
> tests too. We also need to add tests that reflect more diversified scenarios 
> for table distribution, such as large tables with large numbers of regions.
> {code:java}
> protected double cost() {
>   double max = cluster.numRegions;
>   double min = ((double) cluster.numRegions) / cluster.numServers;
>   double value = 0;
>   for (int i = 0; i < cluster.numMaxRegionsPerTable.length; i++) {
>     value += cluster.numMaxRegionsPerTable[i];
>   }
>   LOG.info("min = {}, max = {}, cost= {}", min, max, value);
>   return scale(min, max, value);
> }
> {code}
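That ten-table scenario can be checked numerically. This sketch assumes that scale() is the usual linear normalization (value - min) / (max - min) clamped to [0, 1], and that numMaxRegionsPerTable[i] is table i's maximum region count on any single server; both are assumptions for illustration, not code from the patch.

```java
public class TableSkewCostDemo {

  // Assumed linear normalization of value into [0, 1] between min and max.
  static double scale(double min, double max, double value) {
    if (max <= min) {
      return 0.0;
    }
    return Math.max(0.0, Math.min(1.0, (value - min) / (max - min)));
  }

  // Mirrors the quoted cost() aggregation: sum each table's per-server maximum.
  static double cost(int numRegions, int numServers, int[] numMaxRegionsPerTable) {
    double max = numRegions;
    double min = ((double) numRegions) / numServers;
    double value = 0;
    for (int perTableMax : numMaxRegionsPerTable) {
      value += perTableMax;
    }
    return scale(min, max, value);
  }

  public static void main(String[] args) {
    // 10 regions, one table each, spread perfectly across 10 servers:
    // each table's per-server maximum is 1, so value = 10 = max.
    int[] onePerTable = new int[10];
    java.util.Arrays.fill(onePerTable, 1);
    System.out.println(cost(10, 10, onePerTable)); // prints 1.0
  }
}
```

With 10 tables of one region each on 10 servers, value sums to 10 while max is also 10, so a perfectly balanced cluster still scores the maximum skew cost, which is the inflation described above.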





[GitHub] [hbase] z-york commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


z-york commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656625420



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
##
@@ -297,6 +297,30 @@ public void enableCacheOnWrite() {
 this.cacheBloomsOnWrite = true;
   }
 
+  /**
+   * If hbase.rs.cachecompactedblocksonwrite configuration is set to true and
+   * 'totalCompactedFilesSize' is lower than 
'cacheCompactedDataOnWriteThreshold',
+   * enables cache on write for below properties:
+   * - cacheDataOnWrite
+   * - cacheIndexesOnWrite
+   * - cacheBloomsOnWrite
+   *
+   * Otherwise, sets 'cacheDataOnWrite' only to false.
+   *
+   * @param totalCompactedFilesSize the total size of compacted files.
+   * @return true if the checks mentioned above pass and the cache is enabled, 
false otherwise.
+   */
+  public boolean enableCacheOnWrite(long totalCompactedFilesSize) {

Review comment:
   Should we include something to distinguish that this is specifically for 
compactions in the method name since the CacheConfig is used in more places 
than just compactions?
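As a minimal, self-contained sketch of the behavior the quoted javadoc describes: the flag and threshold names below come from the javadoc, while the standalone class, field types, and the 64 MB value are invented for illustration.

```java
public class CacheOnWriteSketch {
  // Assumed stand-ins for the CacheConfig fields named in the quoted javadoc.
  boolean cacheCompactedBlocksOnWrite = true;
  long cacheCompactedDataOnWriteThreshold = 64L * 1024 * 1024; // hypothetical 64 MB
  boolean cacheDataOnWrite;
  boolean cacheIndexesOnWrite;
  boolean cacheBloomsOnWrite;

  // Enable cache-on-write only for sufficiently small compaction output;
  // otherwise only cacheDataOnWrite is set, to false.
  public boolean enableCacheOnWrite(long totalCompactedFilesSize) {
    if (cacheCompactedBlocksOnWrite
        && totalCompactedFilesSize < cacheCompactedDataOnWriteThreshold) {
      cacheDataOnWrite = true;
      cacheIndexesOnWrite = true;
      cacheBloomsOnWrite = true;
      return true;
    }
    cacheDataOnWrite = false;
    return false;
  }

  public static void main(String[] args) {
    CacheOnWriteSketch cc = new CacheOnWriteSketch();
    System.out.println(cc.enableCacheOnWrite(1024));                // small: cached
    System.out.println(cc.enableCacheOnWrite(1024L * 1024 * 1024)); // large: not
  }
}
```

The naming concern above applies directly here: nothing in the signature says this toggle is compaction-specific.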

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DirectStoreCompactor.java
##
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver.compactions;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFileContext;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class DirectStoreCompactor extends DefaultCompactor {
+  public DirectStoreCompactor(Configuration conf, HStore store) {
+super(conf, store);
+  }
+
+  @Override
+  protected StoreFileWriter initWriter(FileDetails fd, boolean 
shouldDropBehind, boolean major)
+throws IOException {
+// When all MVCC readpoints are 0, don't write them.
+// See HBASE-8166, HBASE-12600, and HBASE-13389.
+return createWriterInFamilyDir(fd.maxKeyCount,
+  major ? majorCompactionCompression : minorCompactionCompression,
+  fd.maxMVCCReadpoint > 0, fd.maxTagsLength > 0,
+  shouldDropBehind, fd.totalCompactedFilesSize);
+  }
+
+  private StoreFileWriter createWriterInFamilyDir(long maxKeyCount,
+  Compression.Algorithm compression, boolean includeMVCCReadpoint, boolean 
includesTag,
+boolean shouldDropBehind, long totalCompactedFilesSize) throws 
IOException {
+final CacheConfig writerCacheConf;
+// Don't cache data on write on compactions.

Review comment:
   Update/remove this comment since now it depends on the configs




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] z-york commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


z-york commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656623653



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DirectInStoreFlushContext.java
##
@@ -33,11 +33,11 @@
  * To be used only when PersistedStoreEngine is configured as the StoreEngine 
implementation.
  */
 @InterfaceAudience.Private
-public class PersistedStoreFlushContext extends DefaultStoreFlushContext {
+public class DirectInStoreFlushContext extends DefaultStoreFlushContext {

Review comment:
   I feel like the "In" is a bit confusing. Originally everything was 
Persisted due to the feature of persisting the SFM, but I get the confusion for 
some of the internal classes. "Direct" should be fine




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] z-york commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


z-york commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656622944



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DirectInStoreCompactor.java
##
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver.compactions;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFileContext;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class DirectInStoreCompactor extends DefaultCompactor {
+  public DirectInStoreCompactor(Configuration conf, HStore store) {
+super(conf, store);
+  }
+
+  @Override
+  protected StoreFileWriter initWriter(FileDetails fd, boolean 
shouldDropBehind, boolean major)
+throws IOException {
+// When all MVCC readpoints are 0, don't write them.
+// See HBASE-8166, HBASE-12600, and HBASE-13389.
+return createWriterInFamilyDir(fd.maxKeyCount,
+  major ? majorCompactionCompression : minorCompactionCompression,
+  fd.maxMVCCReadpoint > 0, fd.maxTagsLength > 0, shouldDropBehind);
+  }
+
+  private StoreFileWriter createWriterInFamilyDir(long maxKeyCount,
+  Compression.Algorithm compression, boolean includeMVCCReadpoint, boolean 
includesTag,
+boolean shouldDropBehind) throws IOException {
+final CacheConfig writerCacheConf;
+// Don't cache data on write on compactions.
+writerCacheConf = new CacheConfig(store.getCacheConfig());
+writerCacheConf.setCacheDataOnWrite(false);
+
+InetSocketAddress[] favoredNodes = null;
+if (store.getHRegion().getRegionServerServices() != null) {
+  favoredNodes = 
store.getHRegion().getRegionServerServices().getFavoredNodesForRegion(
+store.getHRegion().getRegionInfo().getEncodedName());

Review comment:
   The StoreContext is initialized with the store so there is no need to 
create a context. I'm specifically suggesting to use the 
store.getStoreContext().getFavoredNodes() instead of having to expose the 
RSServices and have all this extra code here. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-26024) Region server JVM Crash - A fatal error has been detected by the Java Runtime Environment

2021-06-22 Thread Mohamed Mohideen Meeran (Jira)
Mohamed Mohideen Meeran created HBASE-26024:
---

 Summary: Region server JVM Crash - A fatal error has been detected 
by the Java Runtime Environment
 Key: HBASE-26024
 URL: https://issues.apache.org/jira/browse/HBASE-26024
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.1.9
Reporter: Mohamed Mohideen Meeran
 Attachments: error.log

Our production Region servers JVM crashed with the following error logs.

 

Register to memory mapping:

RAX=0x2eea8f42 is an unknown value

RBX=0x7f1f8c7900d6 is an unknown value

RCX=0x0021 is an unknown value

RDX=0x is an unknown value

RSP=0x7f1fe3092200 is pointing into the stack for thread: 0x7f29775fb000

RBP=0x7f1fe3092200 is pointing into the stack for thread: 0x7f29775fb000

RSI=0x7f1f8c7900cc is an unknown value

RDI=0x2eea8f38 is an unknown value

R8 =0x7f28e14a3a38 is an oop

java.nio.DirectByteBuffer

 - klass: 'java/nio/DirectByteBuffer'

R9 =0x7f1f8c790094 is an unknown value

R10=0x7f2965053400 is at begin+0 in a stub

StubRoutines::unsafe_arraycopy [0x7f2965053400, 0x7f296505343b[ (59 
bytes)

R11=0x7f28e14a3a38 is an oop

java.nio.DirectByteBuffer

 - klass: 'java/nio/DirectByteBuffer'

R12=

[error occurred during error reporting (printing register info), id 0xb]

 

Stack: [0x7f1fe2f93000,0x7f1fe3094000],  sp=0x7f1fe3092200,  free 
space=1020k

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)

v  ~StubRoutines::jshort_disjoint_arraycopy

J 18388 C2 
org.apache.hadoop.hbase.io.ByteBufferListOutputStream.write(Ljava/nio/ByteBuffer;II)V
 (53 bytes) @ 0x7f2967fa0ea2 [0x7f2967fa0d40+0x162]

J 11722 C2 
org.apache.hadoop.hbase.util.ByteBufferUtils.copyBufferToStream(Ljava/io/OutputStream;Ljava/nio/ByteBuffer;II)V
 (75 bytes) @ 0x7f29670aa0fc [0x7f29670a9fa0+0x15c]

J 13251 C2 
org.apache.hadoop.hbase.ByteBufferKeyValue.write(Ljava/io/OutputStream;Z)I (21 
bytes) @ 0x7f2965cbe87c [0x7f2965cbe820+0x5c]

J 8703 C2 
org.apache.hadoop.hbase.KeyValueUtil.oswrite(Lorg/apache/hadoop/hbase/Cell;Ljava/io/OutputStream;Z)I
 (259 bytes) @ 0x7f296684a2d4 [0x7f296684a140+0x194]

J 15474 C2 
org.apache.hadoop.hbase.ipc.CellBlockBuilder.buildCellBlockStream(Lorg/apache/hadoop/hbase/codec/Codec;Lorg/apache/hadoop/io/compress/CompressionCodec;Lorg/apache/hadoop/hbase/CellScanner;Lorg/apache/hadoop/hbase/io/ByteBufferPool;)Lorg/apache/hadoop/hbase/io/ByteBufferListOutputStream;
 (75 bytes) @ 0x7f29675f9dc8 [0x7f29675f7c80+0x2148]

J 14260 C2 
org.apache.hadoop.hbase.ipc.ServerCall.setResponse(Lorg/apache/hbase/thirdparty/com/google/protobuf/Message;Lorg/apache/hadoop/hbase/CellScanner;Ljava/lang/Throwable;Ljava/lang/String;)V
 (408 bytes) @ 0x7f29678ad11c [0x7f29678acec0+0x25c]

J 14732 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1376 bytes) @ 
0x7f296797f690 [0x7f296797e6a0+0xff0]

J 14293 C2 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(Lorg/apache/hadoop/hbase/ipc/CallRunner;)V
 (268 bytes) @ 0x7f29667b7464 [0x7f29667b72e0+0x184]

J 17796% C1 org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run()V (72 bytes) @ 
0x7f2967c9cbe4 [0x7f2967c9caa0+0x144]

v  ~StubRoutines::call_stub

V  [libjvm.so+0x65ebbb]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
JavaCallArguments*, Thread*)+0x108b

V  [libjvm.so+0x65ffd7]  JavaCalls::call_virtual(JavaValue*, KlassHandle, 
Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x2f7

V  [libjvm.so+0x660497]  JavaCalls::call_virtual(JavaValue*, Handle, 
KlassHandle, Symbol*, Symbol*, Thread*)+0x47

V  [libjvm.so+0x6ada71]  thread_entry(JavaThread*, Thread*)+0x91

V  [libjvm.so+0x9f24f1]  JavaThread::thread_main_inner()+0xf1

V  [libjvm.so+0x9f26d8]  JavaThread::run()+0x1b8

V  [libjvm.so+0x8af502]  java_start(Thread*)+0x122

C  [libpthread.so.0+0x7dc5]  start_thread+0xc5

 

we used the workaround by switching *hbase.rpc.server.impl* back to 
SimpleRpcServer as mentioned in the following JIRA

https://issues.apache.org/jira/browse/HBASE-22539?focusedCommentId=16855688=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16855688

Also, attached the error logs during JVM crash. Any help?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] joshelser commented on pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…

2021-06-22 Thread GitBox


joshelser commented on pull request #2113:
URL: https://github.com/apache/hbase/pull/2113#issuecomment-866287839


   Mentioning here as the recommendation of Zach, I'm trying to see if we can 
get an answer as to whether or not we think a default=false configuration 
option to automatically schedule SCPs when unknown servers are seen, as 
described in #2114 
   
   I agree/acknowledge that other solutions to this also exist (like Stack 
nicely wrote up), but those would require a bit more automation to implement.
   
   I don't want to bulldoze the issue, but this is an open wound for me that 
keeps getting more salt rubbed into it :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-25739) TableSkewCostFunction need to use aggregated deviation

2021-06-22 Thread Clara Xiong (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367625#comment-17367625
 ] 

Clara Xiong commented on HBASE-25739:
-

Because of the fix, the default 0.05 minCostNeedBalance will not quite work. As 
a gap-stopper before I check in auto-tuning threshold, should I just reduce the 
default value? So people won't be caught off guard? The broken 
TableSkewCostFunction artificially inflate the total cost. So if the fix is in 
and we don't change threshold, people will be badly surprised that balancer 
gets stuck.

> TableSkewCostFunction need to use aggregated deviation
> --
>
> Key: HBASE-25739
> URL: https://issues.apache.org/jira/browse/HBASE-25739
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer, master
>Reporter: Clara Xiong
>Assignee: Clara Xiong
>Priority: Major
> Attachments: 
> TEST-org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerBalanceCluster.xml,
>  
> org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerBalanceCluster.txt
>
>
> TableSkewCostFunction uses the sum of the max deviation region per server for 
> all tables as the measure of unevenness. It doesn't work in a very common 
> scenario in operations. Say we have 100 regions on 50 nodes, two on each. We 
> add 50 new nodes and they have 0 each. The max deviation from the mean is 1, 
> compared to 99 in the worst case scenario of 100 regions on a single server. 
> The normalized cost is 1/99 = 0.011 < default threshold of 0.05. Balancer 
> wouldn't move.  The proposal is to use aggregated deviation of the count per 
> region server to detect this scenario, generating a cost of 100/198 = 0.5 in 
> this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26023) Overhaul of test cluster set up for table skew

2021-06-22 Thread Clara Xiong (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clara Xiong updated HBASE-26023:

Environment: 
{code:java}
 {code}

  was:
There is another bug in the original tableSkew cost function for aggregation of 
the cost per table:

If we have 10 regions, one per table, evenly distributed on 10 nodes, the cost 
is scale to 1.0.

The more tables we have, the closer the value will be to 1.0. The cost function 
becomes useless.

All the balancer tests were set up with large numbers of tables with minimal 
regions per table. This artificially inflates the total cost and trigger 
balancer runs. With this fix on TableSkewFunction, we need to overhaul the 
tests too. We also need to add tests that reflect more diversified scenarios 
for table distribution such as large tables with large numbers of regions.
{code:java}
protected double cost() {
 double max = cluster.numRegions;
 double min = ((double) cluster.numRegions) / cluster.numServers;
 double value = 0;

 for (int i = 0; i < cluster.numMaxRegionsPerTable.length; i++) {
 value += cluster.numMaxRegionsPerTable[i];
 }
 LOG.info("min = {}, max = {}, cost= {}", min, max, value);
 return scale(min, max, value);
 }
}{code}


> Overhaul of test cluster set up for table skew
> --
>
> Key: HBASE-26023
> URL: https://issues.apache.org/jira/browse/HBASE-26023
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer, test
> Environment: {code:java}
>  {code}
>Reporter: Clara Xiong
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26023) Overhaul of test cluster set up for table skew

2021-06-22 Thread Clara Xiong (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clara Xiong updated HBASE-26023:

Description: 
There is another bug in the original tableSkew cost function for aggregation of 
the cost per table:

If we have 10 regions, one per table, evenly distributed on 10 nodes, the cost 
is scale to 1.0.

The more tables we have, the closer the value will be to 1.0. The cost function 
becomes useless.

All the balancer tests were set up with large numbers of tables with minimal 
regions per table. This artificially inflates the total cost and trigger 
balancer runs. With this fix on TableSkewFunction, we need to overhaul the 
tests too. We also need to add tests that reflect more diversified scenarios 
for table distribution such as large tables with large numbers of regions.
{code:java}
protected double cost() {
 double max = cluster.numRegions;
 double min = ((double) cluster.numRegions) / cluster.numServers;
 double value = 0;

 for (int i = 0; i < cluster.numMaxRegionsPerTable.length; i++) {
 value += cluster.numMaxRegionsPerTable[i];
 }
 LOG.info("min = {}, max = {}, cost= {}", min, max, value);
 return scale(min, max, value);
 }
}{code}

> Overhaul of test cluster set up for table skew
> --
>
> Key: HBASE-26023
> URL: https://issues.apache.org/jira/browse/HBASE-26023
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer, test
> Environment: {code:java}
>  {code}
>Reporter: Clara Xiong
>Priority: Major
>
> There is another bug in the original tableSkew cost function for aggregation 
> of the cost per table:
> If we have 10 regions, one per table, evenly distributed on 10 nodes, the 
> cost is scale to 1.0.
> The more tables we have, the closer the value will be to 1.0. The cost 
> function becomes useless.
> All the balancer tests were set up with large numbers of tables with minimal 
> regions per table. This artificially inflates the total cost and trigger 
> balancer runs. With this fix on TableSkewFunction, we need to overhaul the 
> tests too. We also need to add tests that reflect more diversified scenarios 
> for table distribution such as large tables with large numbers of regions.
> {code:java}
> protected double cost() {
>  double max = cluster.numRegions;
>  double min = ((double) cluster.numRegions) / cluster.numServers;
>  double value = 0;
>  for (int i = 0; i < cluster.numMaxRegionsPerTable.length; i++) {
>  value += cluster.numMaxRegionsPerTable[i];
>  }
>  LOG.info("min = {}, max = {}, cost= {}", min, max, value);
>  return scale(min, max, value);
>  }
> }{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25739) TableSkewCostFunction need to use aggregated deviation

2021-06-22 Thread Clara Xiong (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367582#comment-17367582
 ] 

Clara Xiong commented on HBASE-25739:
-

subtask created.

 

https://issues.apache.org/jira/browse/HBASE-26023

> TableSkewCostFunction need to use aggregated deviation
> --
>
> Key: HBASE-25739
> URL: https://issues.apache.org/jira/browse/HBASE-25739
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer, master
>Reporter: Clara Xiong
>Assignee: Clara Xiong
>Priority: Major
> Attachments: 
> TEST-org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerBalanceCluster.xml,
>  
> org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerBalanceCluster.txt
>
>
> TableSkewCostFunction uses the sum of the max deviation region per server for 
> all tables as the measure of unevenness. It doesn't work in a very common 
> scenario in operations. Say we have 100 regions on 50 nodes, two on each. We 
> add 50 new nodes and they have 0 each. The max deviation from the mean is 1, 
> compared to 99 in the worst case scenario of 100 regions on a single server. 
> The normalized cost is 1/99 = 0.011 < default threshold of 0.05. Balancer 
> wouldn't move.  The proposal is to use aggregated deviation of the count per 
> region server to detect this scenario, generating a cost of 100/198 = 0.5 in 
> this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26023) Overhaul of test cluster set up for table skew

2021-06-22 Thread Clara Xiong (Jira)
Clara Xiong created HBASE-26023:
---

 Summary: Overhaul of test cluster set up for table skew
 Key: HBASE-26023
 URL: https://issues.apache.org/jira/browse/HBASE-26023
 Project: HBase
  Issue Type: Sub-task
  Components: Balancer, test
 Environment: There is another bug in the original tableSkew cost 
function for aggregation of the cost per table:

If we have 10 regions, one per table, evenly distributed on 10 nodes, the cost 
is scale to 1.0.

The more tables we have, the closer the value will be to 1.0. The cost function 
becomes useless.

All the balancer tests were set up with large numbers of tables with minimal 
regions per table. This artificially inflates the total cost and trigger 
balancer runs. With this fix on TableSkewFunction, we need to overhaul the 
tests too. We also need to add tests that reflect more diversified scenarios 
for table distribution such as large tables with large numbers of regions.
{code:java}
protected double cost() {
 double max = cluster.numRegions;
 double min = ((double) cluster.numRegions) / cluster.numServers;
 double value = 0;

 for (int i = 0; i < cluster.numMaxRegionsPerTable.length; i++) {
 value += cluster.numMaxRegionsPerTable[i];
 }
 LOG.info("min = {}, max = {}, cost= {}", min, max, value);
 return scale(min, max, value);
 }
}{code}
Reporter: Clara Xiong






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25739) TableSkewCostFunction need to use aggregated deviation

2021-06-22 Thread Clara Xiong (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367577#comment-17367577
 ] 

Clara Xiong commented on HBASE-25739:
-

There is another bug in the original tableSkew cost function for aggregation of 
the cost per table:

If we have 10 regions, one per table, evenly distributed on 10 nodes, the cost 
is scale to 1.0.

The more tables we have, the closer the value will be to 1.0. The cost function 
becomes useless.

All the balancer tests were set up with large numbers of tables with minimal 
regions per table. This artificially inflates the total cost and trigger 
balancer runs. With this fix on TableSkewFunction, we need to overhaul the 
tests too.
{code:java}
protected double cost() {
 double max = cluster.numRegions;
 double min = ((double) cluster.numRegions) / cluster.numServers;
 double value = 0;

 for (int i = 0; i < cluster.numMaxRegionsPerTable.length; i++) {
 value += cluster.numMaxRegionsPerTable[i];
 }
 LOG.info("min = {}, max = {}, cost= {}", min, max, value);
 return scale(min, max, value);
 }
}{code}

> TableSkewCostFunction need to use aggregated deviation
> --
>
> Key: HBASE-25739
> URL: https://issues.apache.org/jira/browse/HBASE-25739
> Project: HBase
>  Issue Type: Sub-task
>  Components: Balancer, master
>Reporter: Clara Xiong
>Assignee: Clara Xiong
>Priority: Major
> Attachments: 
> TEST-org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerBalanceCluster.xml,
>  
> org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancerBalanceCluster.txt
>
>
> TableSkewCostFunction uses the sum of the max deviation region per server for 
> all tables as the measure of unevenness. It doesn't work in a very common 
> scenario in operations. Say we have 100 regions on 50 nodes, two on each. We 
> add 50 new nodes and they have 0 each. The max deviation from the mean is 1, 
> compared to 99 in the worst case scenario of 100 regions on a single server. 
> The normalized cost is 1/99 = 0.011 < default threshold of 0.05. Balancer 
> wouldn't move.  The proposal is to use aggregated deviation of the count per 
> region server to detect this scenario, generating a cost of 100/198 = 0.5 in 
> this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25992) Polish the ReplicationSourceWALReader code for 2.x after HBASE-25596

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367575#comment-17367575
 ] 

Hudson commented on HBASE-25992:


Results for branch branch-2
[build #283 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/283/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/283/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/283/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/283/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/283/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console 
output|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/283//console].


> Polish the ReplicationSourceWALReader code for 2.x after HBASE-25596
> 
>
> Key: HBASE-25992
> URL: https://issues.apache.org/jira/browse/HBASE-25992
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5
>
>
> The code are very different for 2.x and 1.x, and the original code for 
> HBASE-25596 is for 1.x, so create this issue to polish the code to make it 
> more suitable for 2.x.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#issuecomment-866153764


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-24749 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 56s |  HBASE-24749 passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  HBASE-24749 passed  |
   | +1 :green_heart: |  shadedjars  |   8m 19s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  HBASE-24749 passed  |
   | -0 :warning: |  patch  |   9m 11s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 37s |  hbase-server generated 1 new + 20 
unchanged - 0 fixed = 21 total (was 20)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 151m  6s |  hbase-server in the patch passed.  
|
   |  |   | 181m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3389 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 0284e2630de0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-24749 / 49b68b0e00 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/testReport/
 |
   | Max. process+thread count | 3794 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#issuecomment-866150388


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-24749 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 12s |  HBASE-24749 passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  HBASE-24749 passed  |
   | +1 :green_heart: |  shadedjars  |   8m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  HBASE-24749 passed  |
   | -0 :warning: |  patch  |   9m  0s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m  3s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 146m  0s |  hbase-server in the patch passed.  
|
   |  |   | 176m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3389 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 04b2e3bcd76f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-24749 / 49b68b0e00 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/testReport/
 |
   | Max. process+thread count | 4456 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-25877) Add access check for compactionSwitch

2021-06-22 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25877:
--
Release Note: 
Now calling RSRpcService.compactionSwitch, i.e., Admin.compactionSwitch on the 
client side, requires ADMIN permission.
This is an incompatible change, but the old behavior was also a bug: we should not 
allow arbitrary users to disable compaction on a regionserver, so we apply this to 
all active branches.

  was:
Now call RSRpcService.compactionSwitch, i.e, Admin.compactionSwitch at client 
side, requires ADMIN permission.
This is an incompatible change but it is also a bug, as we should not allow any 
users to disable compaction on a regionserver, so we apply this to all active 
branches.


> Add access  check for compactionSwitch
> --
>
> Key: HBASE-25877
> URL: https://issues.apache.org/jira/browse/HBASE-25877
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5
>
>
> Should we add access check for 
> org.apache.hadoop.hbase.regionserver.RSRpcServices#compactionSwitch
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
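The release note above says compactionSwitch now requires ADMIN permission. A minimal standalone sketch of that kind of permission gate, with hypothetical names (`AdminGate`, `requireAdmin`, `Action`) standing in for HBase's actual AccessChecker/Permission machinery:

```java
import java.util.Set;

// Illustrative sketch of an ADMIN-only RPC gate like the one HBASE-25877 adds
// around compactionSwitch. All names here are hypothetical, not HBase's API.
public class AdminGate {
  public enum Action { READ, WRITE, ADMIN }

  static void requireAdmin(String user, Set<Action> granted) {
    if (!granted.contains(Action.ADMIN)) {
      throw new SecurityException("user '" + user + "' lacks ADMIN permission");
    }
  }

  // The endpoint rejects non-admin callers before touching server state.
  public static boolean compactionSwitch(String user, Set<Action> granted, boolean enable) {
    requireAdmin(user, granted);
    return enable; // hedged simplification: the real RPC reports the previous switch state
  }

  public static void main(String[] args) {
    System.out.println(compactionSwitch("ops", Set.of(Action.ADMIN), false));
    try {
      compactionSwitch("guest", Set.of(Action.READ), false);
    } catch (SecurityException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

The point of the change is simply that the check runs before any state is mutated, so unauthorized callers cannot flip the switch.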


[jira] [Resolved] (HBASE-25877) Add access check for compactionSwitch

2021-06-22 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-25877.
---
Resolution: Fixed

Pushed to branch-2.3+.

Thanks [~xiaoheipangzi] for contributing.

> Add access  check for compactionSwitch
> --
>
> Key: HBASE-25877
> URL: https://issues.apache.org/jira/browse/HBASE-25877
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5
>
>
> Should we add access check for 
> org.apache.hadoop.hbase.regionserver.RSRpcServices#compactionSwitch
>  
>  





[jira] [Comment Edited] (HBASE-25902) 1.x to 2.3.x upgrade does not work; you must install an hbase2 that is earlier than hbase-2.3.0 first

2021-06-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367088#comment-17367088
 ] 

Viraj Jasani edited comment on HBASE-25902 at 6/22/21, 3:47 PM:


[~stack] -Curious if you have encountered- HBASE-26021 -issue during this 
upgrade-.

Edit: Just realized this upgrade was from hbase-1.2 so HBASE-26021 should not 
be relevant to this specific upgrade.


was (Author: vjasani):
[~stack] Curious if you have encountered HBASE-26021 issue during this upgrade.

> 1.x to 2.3.x upgrade does not work; you must install an hbase2 that is 
> earlier than hbase-2.3.0 first
> -
>
> Key: HBASE-25902
> URL: https://issues.apache.org/jira/browse/HBASE-25902
> Project: HBase
>  Issue Type: Bug
>  Components: meta, Operability
>Affects Versions: 2.3.0, 2.4.0
>Reporter: Michael Stack
>Priority: Critical
> Attachments: NoSuchColumnFamilyException.png
>
>
> Making note of this issue in case others run into it. At my place of employ, 
> we tried to upgrade a cluster that was an hbase-1.2.x version to an 
> hbase-2.3.5 but it failed because meta didn't have the 'table' column family.
> Up to 2.3.0, hbase:meta was hardcoded. HBASE-12035 added the 'table' CF for 
> hbase-2.0.0. HBASE-23782 (2.3.0) undid hardcoding of the hbase:meta schema; 
> i.e. reading hbase:meta schema from the filesystem. The hbase:meta schema is 
> only created on initial install. If an upgrade over existing data, the 
> hbase-1 hbase:meta will not be suitable for hbase-2.3.x context as it will be 
> missing columnfamilies needed to run (HBASE-23055 made it so hbase:meta could 
> be altered (2.3.0) but probably of no use since Master won't come up).
> It would be a nice-to-have if a user could go from hbase1 to hbase-2.3.0 w/o 
> having to first install an hbase2 that is earlier than 2.3.0, but there needs to 
> be demand before we would work on it; meantime, install an intermediate hbase2 
> version before going to hbase-2.3.0+ if coming from hbase-1.x.





[jira] [Updated] (HBASE-25877) Add access check for compactionSwitch

2021-06-22 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25877:
--
Hadoop Flags: Incompatible change,Reviewed
Release Note: 
Now call RSRpcService.compactionSwitch, i.e, Admin.compactionSwitch at client 
side, requires ADMIN permission.
This is an incompatible change but it is also a bug, as we should not allow any 
users to disable compaction on a regionserver, so we apply this to all active 
branches.

> Add access  check for compactionSwitch
> --
>
> Key: HBASE-25877
> URL: https://issues.apache.org/jira/browse/HBASE-25877
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5
>
>
> Should we add access check for 
> org.apache.hadoop.hbase.regionserver.RSRpcServices#compactionSwitch
>  
>  





[GitHub] [hbase] GeorryHuang commented on pull request #3406: HBASE-26015 Should implement getRegionServers(boolean) method in Asyn…

2021-06-22 Thread GitBox


GeorryHuang commented on pull request #3406:
URL: https://github.com/apache/hbase/pull/3406#issuecomment-866095465


   
   > Oh, please fix the checkstyle issue?
   > 
   > Thanks.
   
   OK! I hope I won’t forget checkstyle next time






[jira] [Updated] (HBASE-25877) Add access check for compactionSwitch

2021-06-22 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25877:
--
Fix Version/s: 2.4.5
   2.3.6
   2.5.0
   3.0.0-alpha-1

> Add access  check for compactionSwitch
> --
>
> Key: HBASE-25877
> URL: https://issues.apache.org/jira/browse/HBASE-25877
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5
>
>
> Should we add access check for 
> org.apache.hadoop.hbase.regionserver.RSRpcServices#compactionSwitch
>  
>  





[GitHub] [hbase] Apache9 merged pull request #3253: HBASE-25877:add access check for compactionSwitch

2021-06-22 Thread GitBox


Apache9 merged pull request #3253:
URL: https://github.com/apache/hbase/pull/3253


   






[GitHub] [hbase] GeorryHuang commented on a change in pull request #3373: HBASE-25980 Master table.jsp pointed at meta throws 500 when no all r…

2021-06-22 Thread GitBox


GeorryHuang commented on a change in pull request #3373:
URL: https://github.com/apache/hbase/pull/3373#discussion_r656342795



##
File path: hbase-server/src/main/resources/hbase-webapps/master/table.jsp
##
@@ -268,7 +269,12 @@
   for (int j = 0; j < numMetaReplicas; j++) {
     RegionInfo meta = RegionReplicaUtil.getRegionInfoForReplica(
         RegionInfoBuilder.FIRST_META_REGIONINFO, j);
-    ServerName metaLocation = MetaTableLocator.waitMetaRegionLocation(master.getZooKeeper(), j, 1);
+    ServerName metaLocation = null;
+    try {
+      metaLocation = MetaTableLocator.waitMetaRegionLocation(master.getZooKeeper(), j, 1);
+    } catch (NotAllMetaRegionsOnlineException e) {
+      //Should ignore this Exception for we don't need to display rit meta region info in UI
+    }

Review comment:
   My mistake! After checking again, I found that the detailed page does 
display info of regions in transition. The new commit will be submitted immediately.








[jira] [Commented] (HBASE-21652) Refactor ThriftServer making thrift2 server inherited from thrift1 server

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367460#comment-17367460
 ] 

Hudson commented on HBASE-21652:


Results for branch branch-1
[build #137 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Refactor ThriftServer making thrift2 server inherited from thrift1 server
> -
>
> Key: HBASE-21652
> URL: https://issues.apache.org/jira/browse/HBASE-21652
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.2.0
>
> Attachments: HBASE-21652.addendum.patch, HBASE-21652.branch-2.patch, 
> HBASE-21652.patch, HBASE-21652.v2.patch, HBASE-21652.v3.patch, 
> HBASE-21652.v4.patch, HBASE-21652.v5.patch, HBASE-21652.v6.patch, 
> HBASE-21652.v7.patch
>
>
> Apart from the protocol difference, the thrift2 server should not differ much 
> from the thrift1 server. So refactor the thrift server, making the thrift2 server 
> inherit from the thrift1 server and getting rid of much duplicated code.





[jira] [Commented] (HBASE-25677) Server+table counters on each scan #nextRaw invocation becomes a bottleneck when heavy load

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367462#comment-17367462
 ] 

Hudson commented on HBASE-25677:


Results for branch branch-1
[build #137 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Server+table counters on each scan #nextRaw invocation becomes a bottleneck 
> when heavy load
> ---
>
> Key: HBASE-25677
> URL: https://issues.apache.org/jira/browse/HBASE-25677
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 2.3.2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0-alpha-1, 1.7.0, 2.5.0, 2.3.5, 2.4.3
>
>
> On a heavily loaded server mostly doing reads/scan, I saw that 90+% of 
> handlers were BLOCKED in this fashion in thread dumps:
> {code}
> "RpcServer.default.FPBQ.Fifo.handler=117,queue=17,port=16020" #161 daemon 
> prio=5 os_prio=0 tid=0x7f748757f000 nid=0x73e9 waiting for monitor entry 
> [0x7f74783e]
>   java.lang.Thread.State: BLOCKED (on object monitor)
>at 
> java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1674)
>- waiting to lock <0x7f7647e3cc38> (a 
> java.util.concurrent.ConcurrentHashMap$Node)
>at 
> org.apache.hadoop.hbase.regionserver.MetricsTableQueryMeterImpl.getOrCreateTableMeter(MetricsTableQueryMeterImpl.java:80)
>at 
> org.apache.hadoop.hbase.regionserver.MetricsTableQueryMeterImpl.updateTableReadQueryMeter(MetricsTableQueryMeterImpl.java:90)
>at 
> org.apache.hadoop.hbase.regionserver.RegionServerTableMetrics.updateTableReadQueryMeter(RegionServerTableMetrics.java:89)
>at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServer.updateReadQueryMeter(MetricsRegionServer.java:274)
>at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6742)
>at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3319)
>- locked <0x7f896c0165a0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
>at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3566)
>at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44858)
>at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393)
>at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> {code}
> It kept up for good periods of time.
> I saw it to a lesser extent on other servers, with less load.
> These RS had 400+ regions, a good few of which were serving out scan reads; 
> the server was doing ~1M hits a second. In this scenario, I saw the above 
> bottleneck.
> Looking at it, this came in when the parent issue feature was added. There 
> are these read counts and there are also write counts. The write counts 
> are mostly batch-based. Let me do the same thing here for reads: update the 
> central server+table count after the scan is done rather than per invocation 
> of #nextRaw.



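The bottleneck and fix described in HBASE-25677 above can be sketched in isolation: the contended path touches a shared `ConcurrentHashMap` via `computeIfAbsent` on every row, while the fix accumulates a local count per scan and publishes once. Class and method names here are illustrative, not HBase's metrics classes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch of the batching idea behind HBASE-25677: instead of hitting the shared
// per-table meter once per nextRaw() call, where computeIfAbsent can serialize
// hot handlers on one bucket, the scanner keeps a local count and publishes it
// once when the scan finishes.
public class ScanMetricsSketch {
  private final Map<String, LongAdder> readRowsByTable = new ConcurrentHashMap<>();

  // Per-invocation update: one shared-map access per row read (the bottleneck).
  public void recordRowPerCall(String table) {
    readRowsByTable.computeIfAbsent(table, t -> new LongAdder()).increment();
  }

  // Batched update: accumulate locally, touch the shared map once per scan.
  public long scan(String table, int rowsReturned) {
    long localRows = 0;
    for (int i = 0; i < rowsReturned; i++) {
      localRows++; // stands in for real per-row scan work
    }
    readRowsByTable.computeIfAbsent(table, t -> new LongAdder()).add(localRows);
    return localRows;
  }

  public long readRows(String table) {
    LongAdder adder = readRowsByTable.get(table);
    return adder == null ? 0 : adder.sum();
  }
}
```

Both paths produce identical totals; the batched one just moves the shared-structure access out of the hot loop.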


[jira] [Commented] (HBASE-21674) Port HBASE-21652 (Refactor ThriftServer making thrift2 server inherited from thrift1 server) to branch-1

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367459#comment-17367459
 ] 

Hudson commented on HBASE-21674:


Results for branch branch-1
[build #137 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Port HBASE-21652 (Refactor ThriftServer making thrift2 server inherited from 
> thrift1 server) to branch-1
> 
>
> Key: HBASE-21674
> URL: https://issues.apache.org/jira/browse/HBASE-21674
> Project: HBase
>  Issue Type: Sub-task
>  Components: Thrift
>Reporter: Andrew Kyle Purtell
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.1
>
>






[jira] [Commented] (HBASE-26013) Get operations readRows metrics becomes zero after HBASE-25677

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367461#comment-17367461
 ] 

Hudson commented on HBASE-26013:


Results for branch branch-1
[build #137 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-1/137//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Get operations readRows metrics becomes zero after HBASE-25677
> --
>
> Key: HBASE-26013
> URL: https://issues.apache.org/jira/browse/HBASE-26013
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 1.7.1
>
>
> After HBASE-25677, server+table counters for each scan were extracted from 
> #nextRaw into the RSRpcServices scan path. As a result, get operations no longer 
> count read rows, so the readRows metric becomes zero. A counter should be added 
> in metricsUpdateForGet.





[jira] [Updated] (HBASE-25877) Add access check for compactionSwitch

2021-06-22 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25877:
--
Component/s: security

> Add access  check for compactionSwitch
> --
>
> Key: HBASE-25877
> URL: https://issues.apache.org/jira/browse/HBASE-25877
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: lujie
>Assignee: lujie
>Priority: Major
>
> Should we add access check for 
> org.apache.hadoop.hbase.regionserver.RSRpcServices#compactionSwitch
>  
>  





[GitHub] [hbase] Apache9 commented on pull request #3366: HBASE-25985 ReplicationSourceWALReader#run - Reset sleepMultiplier in loop once out of any IOE

2021-06-22 Thread GitBox


Apache9 commented on pull request #3366:
URL: https://github.com/apache/hbase/pull/3366#issuecomment-866082026


   After HBASE-25992, this PR is no longer needed I think. Close?






[GitHub] [hbase] Apache9 commented on pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation

2021-06-22 Thread GitBox


Apache9 commented on pull request #3370:
URL: https://github.com/apache/hbase/pull/3370#issuecomment-866080751


   Any progress here?
   
   Thanks.






[GitHub] [hbase] Apache9 commented on pull request #3349: HBASE-25966 Fix typo in NOTICE.vm

2021-06-22 Thread GitBox


Apache9 commented on pull request #3349:
URL: https://github.com/apache/hbase/pull/3349#issuecomment-866078326


   Let's get this in? @ndimiduk 






[GitHub] [hbase] Apache9 commented on pull request #3406: HBASE-26015 Should implement getRegionServers(boolean) method in Asyn…

2021-06-22 Thread GitBox


Apache9 commented on pull request #3406:
URL: https://github.com/apache/hbase/pull/3406#issuecomment-866068609


   Oh, please fix the checkstyle issue?
   
   Thanks.






[GitHub] [hbase] Apache9 commented on a change in pull request #3406: HBASE-26015 Should implement getRegionServers(boolean) method in Asyn…

2021-06-22 Thread GitBox


Apache9 commented on a change in pull request #3406:
URL: https://github.com/apache/hbase/pull/3406#discussion_r656315666



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
##
@@ -1084,6 +1085,32 @@
       .thenApply(ClusterMetrics::getServersName);
   }
 
+  default CompletableFuture<List<ServerName>> getRegionServers(
+      boolean excludeDecommissionedRS) {
+    CompletableFuture<List<ServerName>> future = new CompletableFuture<>();
+    addListener(
+      getClusterMetrics(EnumSet.of(Option.SERVERS_NAME)).thenApply(ClusterMetrics::getServersName),

Review comment:
   Could use getRegionServers directly?




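The `getRegionServers(boolean)` change under review combines two asynchronous results: the live server list and the decommissioned list. A self-contained sketch of that future composition, with `String` standing in for `ServerName` and the input futures standing in for the async admin's RPCs:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// Sketch of the composition discussed in HBASE-26015. In real code the two
// input futures would come from the async admin; here they are parameters.
public class RegionServersSketch {
  public static CompletableFuture<List<String>> getRegionServers(
      CompletableFuture<List<String>> allServers,
      CompletableFuture<List<String>> decommissioned,
      boolean excludeDecommissionedRS) {
    if (!excludeDecommissionedRS) {
      return allServers; // no second lookup needed
    }
    // Wait for both lists, then subtract the decommissioned servers.
    return allServers.thenCombine(decommissioned, (all, dead) -> {
      Set<String> drop = Set.copyOf(dead);
      return all.stream().filter(s -> !drop.contains(s)).collect(Collectors.toList());
    });
  }

  public static void main(String[] args) {
    CompletableFuture<List<String>> all =
        CompletableFuture.completedFuture(List.of("rs1", "rs2", "rs3"));
    CompletableFuture<List<String>> dead =
        CompletableFuture.completedFuture(List.of("rs2"));
    System.out.println(getRegionServers(all, dead, true).join()); // prints [rs1, rs3]
  }
}
```

`thenCombine` keeps the whole operation non-blocking, which is the point of doing this in AsyncAdmin rather than joining each future in turn.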




[GitHub] [hbase] Apache9 commented on a change in pull request #3397: HBASE-26012 Improve logging and dequeue logic in DelayQueue

2021-06-22 Thread GitBox


Apache9 commented on a change in pull request #3397:
URL: https://github.com/apache/hbase/pull/3397#discussion_r656314926



##
File path: hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/util/DelayedUtil.java
##
@@ -79,7 +84,13 @@ public String toString() {
    */
   public static <E extends Delayed> E takeWithoutInterrupt(final DelayQueue<E> queue) {
     try {
-      return queue.take();
+      E element = queue.poll(10, TimeUnit.SECONDS);
+      if (element == null && queue.size() > 0) {
+        LOG.error("DelayQueue is not empty when timed waiting elapsed. If this is repeated for"

Review comment:
   I do not think we should output a warn message if it is not a problem. 
As I said above, is it possible to move the warn log to the upper layer? DelayedUtil 
seems like a general utility, not only for the dispatcher.




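The diff above replaces an unbounded `take()` with a timed `poll()` so the worker can notice a non-empty queue whose head delay has not yet expired. A simplified sketch of that dequeue loop; unlike the real `DelayedUtil`, this version propagates `InterruptedException` rather than swallowing it, and the method and class names are illustrative:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Sketch of the timed-poll dequeue logic debated in HBASE-26012.
public class TimedPollSketch {

  /** Poll with a timeout instead of blocking forever, logging a stuck-but-nonempty queue. */
  public static <E extends Delayed> E pollLoop(DelayQueue<E> queue, long timeout, TimeUnit unit)
      throws InterruptedException {
    while (true) {
      E element = queue.poll(timeout, unit);
      if (element != null) {
        return element;
      }
      if (!queue.isEmpty()) {
        // Elements exist but their delay has not expired; this is the spot
        // where the PR debates emitting a diagnostic warning.
        System.err.println("poll timed out with " + queue.size() + " element(s) still delayed");
      }
    }
  }

  /** A trivial Delayed element that becomes available immediately. */
  public static final class Ready implements Delayed {
    @Override public long getDelay(TimeUnit unit) { return 0; }
    @Override public int compareTo(Delayed o) { return 0; }
  }

  public static void main(String[] args) throws InterruptedException {
    DelayQueue<Ready> queue = new DelayQueue<>();
    queue.add(new Ready());
    System.out.println(pollLoop(queue, 1, TimeUnit.SECONDS) != null);
  }
}
```

A timeout of zero on `getDelay` makes the element immediately available, so the loop returns without logging; an element with a long remaining delay would trigger the timed-out branch instead.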




[GitHub] [hbase] Apache9 commented on a change in pull request #3373: HBASE-25980 Master table.jsp pointed at meta throws 500 when no all r…

2021-06-22 Thread GitBox


Apache9 commented on a change in pull request #3373:
URL: https://github.com/apache/hbase/pull/3373#discussion_r656312395



##
File path: hbase-server/src/main/resources/hbase-webapps/master/table.jsp
##
@@ -268,7 +269,12 @@
   for (int j = 0; j < numMetaReplicas; j++) {
     RegionInfo meta = RegionReplicaUtil.getRegionInfoForReplica(
         RegionInfoBuilder.FIRST_META_REGIONINFO, j);
-    ServerName metaLocation = MetaTableLocator.waitMetaRegionLocation(master.getZooKeeper(), j, 1);
+    ServerName metaLocation = null;
+    try {
+      metaLocation = MetaTableLocator.waitMetaRegionLocation(master.getZooKeeper(), j, 1);
+    } catch (NotAllMetaRegionsOnlineException e) {
+      //Should ignore this Exception for we don't need to display rit meta region info in UI
+    }

Review comment:
   Oh, this is the current behavior? In the table detailed page, usually we 
will sort the regions by start key, and if we miss some regions because they 
are in transition, there will be holes on the page? Seems a bit strange.








[jira] [Resolved] (HBASE-25937) Clarify UnknownRegionException

2021-06-22 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-25937.
---
Hadoop Flags: Reviewed
  Resolution: Fixed

Pushed to branch-2.3+.

Thanks [~belugabehr] for contributing.

> Clarify UnknownRegionException
> --
>
> Key: HBASE-25937
> URL: https://issues.apache.org/jira/browse/HBASE-25937
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5
>
>
> UnknownRegionException seems to accept a "region name" but it's actually a 
> normal Exception message (and is used that way).  Fix this to be "message" 
> and add a "cause" capability as well.





[jira] [Updated] (HBASE-25937) Clarify UnknownRegionException

2021-06-22 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25937:
--
Fix Version/s: 2.4.5
   2.3.6
   2.5.0
   3.0.0-alpha-1

> Clarify UnknownRegionException
> --
>
> Key: HBASE-25937
> URL: https://issues.apache.org/jira/browse/HBASE-25937
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5
>
>
> UnknownRegionException seems to accept a "region name" but it's actually a 
> normal Exception message (and is used that way).  Fix this to be "message" 
> and add a "cause" capability as well.





[jira] [Updated] (HBASE-25937) Clarify UnknownRegionException

2021-06-22 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25937:
--
Component/s: Client

> Clarify UnknownRegionException
> --
>
> Key: HBASE-25937
> URL: https://issues.apache.org/jira/browse/HBASE-25937
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>
> UnknownRegionException seems to accept a "region name" but it's actually a 
> normal Exception message (and is used that way).  Fix this to be "message" 
> and add a "cause" capability as well.





[GitHub] [hbase] Apache9 merged pull request #3330: HBASE-25937: Clarify UnknownRegionException

2021-06-22 Thread GitBox


Apache9 merged pull request #3330:
URL: https://github.com/apache/hbase/pull/3330


   






[GitHub] [hbase] Apache-HBase commented on pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#issuecomment-866037140


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-24749 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  3s |  HBASE-24749 passed  |
   | +1 :green_heart: |  compile  |   3m 34s |  HBASE-24749 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  HBASE-24749 passed  |
   | +1 :green_heart: |  spotbugs  |   2m 13s |  HBASE-24749 passed  |
   | -0 :warning: |  patch  |   2m 21s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 31s |  the patch passed  |
   | -0 :warning: |  javac  |   3m 31s |  hbase-server generated 1 new + 192 
unchanged - 1 fixed = 193 total (was 193)  |
   | -0 :warning: |  checkstyle  |   1m 12s |  hbase-server: The patch 
generated 4 new + 204 unchanged - 2 fixed = 208 total (was 206)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  18m 13s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  49m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3389 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux af4a24392ac6 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-24749 / 49b68b0e00 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/artifact/yetus-general-check/output/diff-compile-javac-hbase-server.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3389/4/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache9 commented on a change in pull request #3372: HBASE-25986 set default value of normalization enabled from hbase site

2021-06-22 Thread GitBox


Apache9 commented on a change in pull request #3372:
URL: https://github.com/apache/hbase/pull/3372#discussion_r656281439



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptor.java
##
@@ -275,11 +275,11 @@
 
   /**
    * Check if normalization enable flag of the table is true. If flag is false
-   * then no region normalizer won't attempt to normalize this table.
+   * then region normalizer won't attempt to normalize this table.
    *
-   * @return true if region normalization is enabled for this table
+   * @return value of NORMALIZATION_ENABLED key for this table if present else return defaultValue
    */
-  boolean isNormalizationEnabled();
+  boolean isNormalizationEnabled(boolean defaulValue);

Review comment:
   This is the only place we use this pattern to get a config in 
TableDescriptor? I suggest we align these methods with the same pattern.
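For illustration, the "getter with caller-supplied default" pattern discussed above typically looks like the following sketch. The class and key names here are hypothetical, not HBase's actual TableDescriptor API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; not HBase's actual TableDescriptor implementation.
public class DescriptorExample {
  private final Map<String, String> values = new HashMap<>();

  public void setValue(String key, String value) {
    values.put(key, value);
  }

  // Shared helper: return the stored boolean if the key is present,
  // otherwise the caller-supplied default.
  public boolean getBoolean(String key, boolean defaultValue) {
    String v = values.get(key);
    return v == null ? defaultValue : Boolean.parseBoolean(v);
  }

  // Each flag accessor delegates to the same helper, which is the
  // "align these methods with the same pattern" suggestion.
  public boolean isNormalizationEnabled(boolean defaultValue) {
    return getBoolean("NORMALIZATION_ENABLED", defaultValue);
  }
}
```

With this shape, every config accessor that takes a default routes through one helper, so the lookup-or-default behavior stays consistent across methods.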








[jira] [Assigned] (HBASE-25393) Support split and merge region with direct insert into CF directory

2021-06-22 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil reassigned HBASE-25393:


Assignee: Wellington Chevreuil

> Support split and merge region with direct insert into CF directory
> ---
>
> Key: HBASE-25393
> URL: https://issues.apache.org/jira/browse/HBASE-25393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Tak-Lon (Stephen) Wu
>Assignee: Wellington Chevreuil
>Priority: Major
>
> {color:#00}Support region SPLIT and MERGE with direct insert HFiles into 
> column family directory{color}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HBASE-25393) Support split and merge region with direct insert into CF directory

2021-06-22 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-25393 started by Wellington Chevreuil.

> Support split and merge region with direct insert into CF directory
> ---
>
> Key: HBASE-25393
> URL: https://issues.apache.org/jira/browse/HBASE-25393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Tak-Lon (Stephen) Wu
>Assignee: Wellington Chevreuil
>Priority: Major
>
> {color:#00}Support region SPLIT and MERGE with direct insert HFiles into 
> column family directory{color}





[GitHub] [hbase] belugabehr commented on pull request #3330: HBASE-25937: Clarify UnknownRegionException

2021-06-22 Thread GitBox


belugabehr commented on pull request #3330:
URL: https://github.com/apache/hbase/pull/3330#issuecomment-865978483


   @busbey @Apache9  Are you able to take a look at this once more?
   
   Thanks!






[jira] [Commented] (HBASE-25992) Polish the ReplicationSourceWALReader code for 2.x after HBASE-25596

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367309#comment-17367309
 ] 

Hudson commented on HBASE-25992:


Results for branch master
[build #329 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Polish the ReplicationSourceWALReader code for 2.x after HBASE-25596
> 
>
> Key: HBASE-25992
> URL: https://issues.apache.org/jira/browse/HBASE-25992
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5
>
>
> The code is very different between 2.x and 1.x, and the original code for 
> HBASE-25596 was written for 1.x, so this issue was created to polish the code 
> and make it more suitable for 2.x.





[jira] [Commented] (HBASE-25677) Server+table counters on each scan #nextRaw invocation becomes a bottleneck when heavy load

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367311#comment-17367311
 ] 

Hudson commented on HBASE-25677:


Results for branch master
[build #329 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Server+table counters on each scan #nextRaw invocation becomes a bottleneck 
> when heavy load
> ---
>
> Key: HBASE-25677
> URL: https://issues.apache.org/jira/browse/HBASE-25677
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Affects Versions: 2.3.2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0-alpha-1, 1.7.0, 2.5.0, 2.3.5, 2.4.3
>
>
> On a heavily loaded server mostly doing reads/scan, I saw that 90+% of 
> handlers were BLOCKED in this fashion in thread dumps:
> {code}
> "RpcServer.default.FPBQ.Fifo.handler=117,queue=17,port=16020" #161 daemon 
> prio=5 os_prio=0 tid=0x7f748757f000 nid=0x73e9 waiting for monitor entry 
> [0x7f74783e]
>   java.lang.Thread.State: BLOCKED (on object monitor)
>at 
> java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1674)
>- waiting to lock <0x7f7647e3cc38> (a 
> java.util.concurrent.ConcurrentHashMap$Node)
>at 
> org.apache.hadoop.hbase.regionserver.MetricsTableQueryMeterImpl.getOrCreateTableMeter(MetricsTableQueryMeterImpl.java:80)
>at 
> org.apache.hadoop.hbase.regionserver.MetricsTableQueryMeterImpl.updateTableReadQueryMeter(MetricsTableQueryMeterImpl.java:90)
>at 
> org.apache.hadoop.hbase.regionserver.RegionServerTableMetrics.updateTableReadQueryMeter(RegionServerTableMetrics.java:89)
>at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServer.updateReadQueryMeter(MetricsRegionServer.java:274)
>at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6742)
>at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3319)
>- locked <0x7f896c0165a0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
>at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3566)
>at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44858)
>at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:393)
>at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> {code}
> It kept up for good periods of time.
> I saw it to a lesser extent on other servers, with less load.
> These RS had 400+ Regions a good few of which were serving out scan reads; 
> the server was doing ~1M hits a second. In this scenario, I saw the above 
> bottleneck.
> Looking at it, it came in w/ when the parent issue feature was added. There 
> are these read counts and then there were also write counts. The write counts 
> are mostly batch-based. Let me do same thing here for the read update the 
> central server+table count after scan is done rather than per invocation of 
> #nextRaw.
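As a hedged sketch of the batching idea described above (names are illustrative, not HBase's actual metrics classes): touching a shared ConcurrentHashMap-backed meter once per row makes every handler contend on the same map node — and on JDK 8, `computeIfAbsent` can block even when the key is already present (JDK-8161372) — whereas counting locally and publishing once per scan touches the shared map a single time.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch only; not HBase's actual MetricsTableQueryMeter code.
public class BatchedTableMeter {
  private static final ConcurrentHashMap<String, LongAdder> METERS =
      new ConcurrentHashMap<>();

  // Contended pattern: one shared-map access per row / #nextRaw invocation.
  static void updatePerRow(String table) {
    METERS.computeIfAbsent(table, t -> new LongAdder()).increment();
  }

  // Batched pattern: publish the whole scan's count in one call.
  static void updateBatched(String table, long rowsReadInThisScan) {
    METERS.computeIfAbsent(table, t -> new LongAdder()).add(rowsReadInThisScan);
  }

  static long reading(String table) {
    LongAdder a = METERS.get(table);
    return a == null ? 0 : a.sum();
  }

  public static void main(String[] args) {
    long localCount = 0;
    for (int i = 0; i < 1000; i++) {
      localCount++;                  // counted locally inside the scanner loop
    }
    updateBatched("t1", localCount); // single shared-map touch per scan
    System.out.println(reading("t1")); // prints 1000
  }
}
```

The same meter value is reached either way; the batched variant just moves the shared-state update out of the per-row hot path.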





[jira] [Commented] (HBASE-26013) Get operations readRows metrics becomes zero after HBASE-25677

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367310#comment-17367310
 ] 

Hudson commented on HBASE-26013:


Results for branch master
[build #329 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Get operations readRows metrics becomes zero after HBASE-25677
> --
>
> Key: HBASE-26013
> URL: https://issues.apache.org/jira/browse/HBASE-26013
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 1.7.1
>
>
> After HBASE-25677, server+table counters for each scan were moved from 
> #nextRaw to the RSRpcServices scan path. As a result, get operations no 
> longer count read rows, so the readRows metric becomes zero. A counter 
> should be added in metricsUpdateForGet.





[jira] [Commented] (HBASE-25698) Persistent IllegalReferenceCountException at scanner open when using TinyLfuBlockCache

2021-06-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367308#comment-17367308
 ] 

Hudson commented on HBASE-25698:


Results for branch master
[build #329 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/329/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Persistent IllegalReferenceCountException at scanner open when using 
> TinyLfuBlockCache
> --
>
> Key: HBASE-25698
> URL: https://issues.apache.org/jira/browse/HBASE-25698
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, HFile, Scanners
>Affects Versions: 2.4.2
>Reporter: Andrew Kyle Purtell
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5
>
>
> Persistent scanner open failure with offheap read path enabled.
> Not sure how it happened. Test scenario was HBase 1 cluster replicating to 
> HBase 2 cluster. ITBLL as data generator at source, calm policy only. Scanner 
> open errors on sink HBase 2 cluster later during ITBLL verify phase. Sink 
> schema settings bloom=ROW encoding=FAST_DIFF compression=NONE.
> {noformat}
> Caused by: 
> org.apache.hbase.thirdparty.io.netty.util.IllegalReferenceCountException: 
> refCnt: 0, decrement: 1
> at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ReferenceCountUpdater.toLiveRealRefCnt(ReferenceCountUpdater.java:74)
> at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ReferenceCountUpdater.release(ReferenceCountUpdater.java:138)
> at 
> org.apache.hbase.thirdparty.io.netty.util.AbstractReferenceCounted.release(AbstractReferenceCounted.java:76)
> at org.apache.hadoop.hbase.nio.ByteBuff.release(ByteBuff.java:79)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.release(HFileBlock.java:429)
> at 
> org.apache.hadoop.hbase.io.hfile.CompoundBloomFilter.contains(CompoundBloomFilter.java:109)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileReader.checkGeneralBloomFilter(StoreFileReader.java:433)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileReader.passesGeneralRowBloomFilter(StoreFileReader.java:322)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileReader.passesBloomFilter(StoreFileReader.java:251)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:491)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:471)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:249)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:2177)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2168)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:7172)
> {noformat}
> Bloom filter type on all files here is ROW, block encoding is FAST_DIFF:
> {noformat}
> hbase:017:0> describe "IntegrationTestBigLinkedList"
> Table IntegrationTestBigLinkedList is ENABLED 
>   
> IntegrationTestBigLinkedList  
>   
> COLUMN FAMILIES DESCRIPTION   
>   
> {NAME => 'big', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIF
> F', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE 
> => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '1'} 
> {NAME => 'meta', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DI
> FF', COMPRESSION => 'NONE', 

[GitHub] [hbase] Apache-HBase commented on pull request #3413: HBASE-21674 complement the admin operations in thrift2

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3413:
URL: https://github.com/apache/hbase/pull/3413#issuecomment-865894963


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   7m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-1 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  10m  3s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  compile  |   0m 34s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  branch-1 passed  |
   | -1 :x: |  shadedjars  |   0m 19s |  branch has 7 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +0 :ok: |  spotbugs  |   2m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 12s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  javac  |   0m 36s |  the patch passed  |
   | -1 :x: |  checkstyle  |   0m 36s |  hbase-thrift: The patch generated 1 
new + 4 unchanged - 0 fixed = 5 total (was 4)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  shadedjars  |   0m 13s |  patch has 7 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 29s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  findbugs  |   2m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 34s |  hbase-thrift in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 23s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  43m 33s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3413 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 5be2f12cd282 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-3413/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / fd2f8a5 |
   | Default Java | Azul Systems, Inc.-1.7.0_272-b10 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, 
Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 |
   | shadedjars | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/1/artifact/out/branch-shadedjars.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/1/artifact/out/diff-checkstyle-hbase-thrift.txt
 |
   | shadedjars | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/1/artifact/out/patch-shadedjars.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/1/artifact/out/patch-unit-hbase-thrift.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3413/1/testReport/
 |
   | Max. process+thread count | 87 (vs. ulimit of 1) |
   | modules | C: hbase-thrift U: hbase-thrift |
   | Console output | 

[jira] [Comment Edited] (HBASE-20503) [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL system stuck?

2021-06-22 Thread Emil Kleszcz (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-20503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17365637#comment-17365637
 ] 

Emil Kleszcz edited comment on HBASE-20503 at 6/22/21, 11:11 AM:
-

Hi, we experienced the same issue in HBase 2.3.4 on one of our production 
clusters this week. This happened a few weeks after upgrading HBase from 2.2.4 
where we never observed this problem.
 We run on HDP 3.2.1. On average we have around 800 regions per RS, and the 
workload was as usual these days.

This problem started on one of the RSs where meta region was residing. We could 
observe the following in the RS log:
{code:java}
<2021-06-15T10:31:28.284+0200>  :   : 
java.io.IOException: stream already broken
at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:420)
at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:509)
(...)
<2021-06-15T11:11:39.744+0200>  : 
java.io.FileNotFoundException: File does not exist: /hbase/WALs/
(...)
<2021-06-15T11:15:59.241+0200>  
:   : 
java.io.IOException: stream already broken
(...)
<2021-06-15T11:39:39.986+0200>  : 
org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync 
result after 30 ms for txid=54056852, WAL system stuck?
(...)
<2021-06-16T18:43:05.220+0200>  : 

{code}
Since then compaction started failing on many regions including meta. 2 days 
later we could see HMaster struggling with updating hbase:meta while updating 
the state of one region...
 This triggered an avalanche of stuck procedures in HMaster
{code:java}
<2021-06-17T08:53:13.862+0200>  : 

<2021-06-17T08:53:13.862+0200>  : 

<2021-06-17T08:53:13.866+0200>  : 

<2021-06-17T08:53:13.867+0200>  : 

<2021-06-17T08:53:13.867+0200>  : 
 
<2021-06-17T08:53:28.443+0200>  : {code}
In HA the Hmasters started flipping over and we could observe more and more 
RITs with OPENING and CLOSING states pointing to stale RSs (old timestamps or 
null). Only the manual fix (forcing states for tables/regions) helped to 
recover the cluster.
{code:java}
Failed transition b31a3040431b34256e265cd6c5a0c4e6 is not OPEN; 
state=CLOSING>{code}
I hope we have some patch for this soon.


was (Author: tr0k):
Hi, we experienced the same issue in HBase 2.3.4 on one of our production 
clusters this week. This happened a few weeks after upgrading HBase from 2.2.4 
where we never observed this problem.
 We run on the HDP 3.2.1. On average we have around 800 regions per RS and the 
workload was, as usual, these days.

This problem started on one of the RSs where meta region was residing. We could 
observe the following in the RS log:
{code:java}
<2021-06-15T10:31:28.284+0200>  :   : 
java.io.IOException: stream already broken
at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:420)
at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:509)
(...)
<2021-06-15T11:11:39.744+0200>  : 
java.io.FileNotFoundException: File does not exist: /hbase/WALs/
(...)
<2021-06-15T11:15:59.241+0200>  
:   : 
java.io.IOException: stream already broken
(...)
<2021-06-15T11:39:39.986+0200>  : 
org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync 
result after 30 ms for txid=54056852, WAL system stuck?
(...)
<2021-06-16T18:43:05.220+0200>  : 

{code}
Since then compaction started failing on many regions including meta. 2 days 
later we could see one RS going down...
 This triggered an avalanche of stuck procedures in HMaster
{code:java}
<2021-06-17T08:53:13.862+0200>  : 

<2021-06-17T08:53:13.862+0200>  : 

<2021-06-17T08:53:13.866+0200>  : 

<2021-06-17T08:53:13.867+0200>  : 

<2021-06-17T08:53:13.867+0200>  : 
 
<2021-06-17T08:53:28.443+0200>  : {code}
In HA the Hmasters started flipping over and we could observe more and more 
RITs with OPENING and CLOSING states pointing to stale RSs (old timestamps or 
null). Only the manual fix (forcing states for tables/regions) helped to 
recover the cluster.

I hope we have some patch for this soon.

> [AsyncFSWAL] Failed to get sync result after 300000 ms for txid=160912, WAL 
> system stuck?
> -
>
> Key: HBASE-20503
> URL: https://issues.apache.org/jira/browse/HBASE-20503
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Michael Stack
>Priority: Major
> Attachments: 
> 0001-HBASE-20503-AsyncFSWAL-Failed-to-get-sync-result-aft.patch, 
> 0001-HBASE-20503-AsyncFSWAL-Failed-to-get-sync-result-aft.patch
>
>
> Scale test. Startup w/ 30k regions over ~250nodes. This RS is trying to 
> furiously open regions assigned by Master. It is importantly 

[GitHub] [hbase] wchevreuil commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


wchevreuil commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656009419



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DirectInStoreFlushContext.java
##
@@ -33,11 +33,11 @@
  * To be used only when PersistedStoreEngine is configured as the StoreEngine 
implementation.
  */
 @InterfaceAudience.Private
-public class PersistedStoreFlushContext extends DefaultStoreFlushContext {
+public class DirectInStoreFlushContext extends DefaultStoreFlushContext {

Review comment:
   Changed name after comments from @saintstack on HBASE-25391. I can do 
"DirectStore" as suggested.








[GitHub] [hbase] wchevreuil commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


wchevreuil commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656100941



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DirectInStoreFlushContext.java
##
@@ -33,11 +33,11 @@
  * To be used only when PersistedStoreEngine is configured as the StoreEngine 
implementation.

Review comment:
   Yes.








[GitHub] [hbase] wchevreuil commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


wchevreuil commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656098099



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DirectInStoreCompactor.java
##
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver.compactions;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFileContext;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class DirectInStoreCompactor extends DefaultCompactor {
+  public DirectInStoreCompactor(Configuration conf, HStore store) {
+super(conf, store);
+  }
+
+  @Override
+  protected StoreFileWriter initWriter(FileDetails fd, boolean 
shouldDropBehind, boolean major)
+throws IOException {
+// When all MVCC readpoints are 0, don't write them.
+// See HBASE-8166, HBASE-12600, and HBASE-13389.
+return createWriterInFamilyDir(fd.maxKeyCount,
+  major ? majorCompactionCompression : minorCompactionCompression,
+  fd.maxMVCCReadpoint > 0, fd.maxTagsLength > 0, shouldDropBehind);
+  }
+
+  private StoreFileWriter createWriterInFamilyDir(long maxKeyCount,
+  Compression.Algorithm compression, boolean includeMVCCReadpoint, boolean 
includesTag,
+boolean shouldDropBehind) throws IOException {
+final CacheConfig writerCacheConf;
+// Don't cache data on write on compactions.
+writerCacheConf = new CacheConfig(store.getCacheConfig());
+writerCacheConf.setCacheDataOnWrite(false);
+
+InetSocketAddress[] favoredNodes = null;
+if (store.getHRegion().getRegionServerServices() != null) {
+  favoredNodes = 
store.getHRegion().getRegionServerServices().getFavoredNodesForRegion(
+store.getHRegion().getRegionInfo().getEncodedName());
+}
+HFileContext hFileContext = store.createFileContext(compression, 
includeMVCCReadpoint,
+  includesTag, store.getCryptoContext());
+Path familyDir = new Path(store.getRegionFileSystem().getRegionDir(),
+  store.getColumnFamilyDescriptor().getNameAsString());
+StoreFileWriter.Builder builder = new StoreFileWriter.Builder(conf, 
writerCacheConf,
+  store.getFileSystem())
+  .withOutputDir(familyDir)
+  .withBloomType(store.getColumnFamilyDescriptor().getBloomFilterType())
+  .withMaxKeyCount(maxKeyCount)
+  .withFavoredNodes(favoredNodes)
+  .withFileContext(hFileContext)
+  .withShouldDropCacheBehind(shouldDropBehind)
+  .withCompactedFilesSupplier(() -> store.getCompactedFiles());

Review comment:
   Ack.








[jira] [Commented] (HBASE-26021) HBase 1.7 to 2.4 upgrade issue

2021-06-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367202#comment-17367202
 ] 

Viraj Jasani commented on HBASE-26021:
--

Hmm, that might be a possibility. We might have to figure out the recent 
problematic commit/code issue.

> HBase 1.7 to 2.4 upgrade issue
> --
>
> Key: HBASE-26021
> URL: https://issues.apache.org/jira/browse/HBASE-26021
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.7.0, 2.4.4
>Reporter: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2021-06-22 at 12.54.21 PM.png, Screenshot 
> 2021-06-22 at 12.54.30 PM.png
>
>
> As of today, if we bring up HBase cluster using branch-1 and upgrade to 
> branch-2.4, we are facing issue while parsing namespace from HDFS fileinfo. 
> Instead of "*hbase:meta*" and "*hbase:namespace*", parsing using ProtobufUtil 
> seems to be producing "*\n hbase:\n meta*" and "*\n hbase:\n namespace*"
> {code:java}
> 2021-06-22 00:05:56,611 INFO  
> [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] 
> regionserver.RSRpcServices: Open hbase:meta,,1.1588230740
> 2021-06-22 00:05:56,648 INFO  
> [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] 
> regionserver.RSRpcServices: Open 
> hbase:namespace,,1624297762817.396cb6cc00cd4334cb1ea3a792d7529a.
> 2021-06-22 00:05:56,759 ERROR 
> [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] 
> ipc.RpcServer: Unexpected throwable object
> java.lang.IllegalArgumentException: Illegal character <
> > at 0. Namespaces may only contain 'alphanumeric characters' from any 
> > language and digits:
> ^Ehbase^R   namespace
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246)
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220)
> at org.apache.hadoop.hbase.TableName.<init>(TableName.java:348)
> at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385)
> at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937)
> at 
> org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.parseFrom(TableDescriptorBuilder.java:1625)
> at 
> org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.access$200(TableDescriptorBuilder.java:597)
> at 
> org.apache.hadoop.hbase.client.TableDescriptorBuilder.parseFrom(TableDescriptorBuilder.java:320)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:511)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:496)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:482)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:210)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:2112)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:35218)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:395)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> 2021-06-22 00:05:56,759 ERROR 
> [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] 
> ipc.RpcServer: Unexpected throwable object
> java.lang.IllegalArgumentException: Illegal character <
> > at 0. Namespaces may only contain 'alphanumeric characters' from any 
> > language and digits:
> ^Ehbase^R^Dmeta
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246)
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220)
> at org.apache.hadoop.hbase.TableName.<init>(TableName.java:348)
> at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385)
> at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937)
> at 
> org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.parseFrom(TableDescriptorBuilder.java:1625)
> at 
> 
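The control characters in the error above (`^Ehbase^R   namespace`, `^Ehbase^R^Dmeta`) are raw protobuf wire-format bytes being treated as a plain namespace string: 0x0A and 0x12 are the length-delimited tags for field 1 (namespace) and field 2 (qualifier) of the serialized TableName message. A minimal, hand-rolled Java sketch (not HBase code; the class name is made up) that decodes those exact bytes:

```java
public class WireDemo {
  public static void main(String[] args) {
    // The bytes logged as "^Ehbase^R^Dmeta", preceded by the field-1 tag 0x0A --
    // the same leading byte the error reports as "Illegal character <\n> at 0".
    byte[] raw = {0x0A, 0x05, 'h', 'b', 'a', 's', 'e', 0x12, 0x04, 'm', 'e', 't', 'a'};
    int nsLen = raw[1];                        // length byte following tag 0x0A
    String ns = new String(raw, 2, nsLen);     // "hbase"
    int qOff = 2 + nsLen;                      // raw[qOff] is the field-2 tag 0x12
    int qLen = raw[qOff + 1];
    String qualifier = new String(raw, qOff + 2, qLen);
    System.out.println(ns + ":" + qualifier);  // prints "hbase:meta"
  }
}
```

When such bytes reach `TableName.valueOf` directly instead of being parsed as a protobuf message first, the leading tag byte 0x0A (`\n`) triggers exactly the `Illegal character` failure shown in the stack trace.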

[GitHub] [hbase] Apache-HBase commented on pull request #3411: HBASE-26013 Get operations readRows metrics becomes zero after HBASE-25677

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3411:
URL: https://github.com/apache/hbase/pull/3411#issuecomment-865869205


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  9s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 17s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 55s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 50s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 143m  5s |  hbase-server in the patch failed.  |
   |  |   | 172m 14s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3411 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux adbf78310619 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 12d707c880 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/testReport/
 |
   | Max. process+thread count | 3901 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3411: HBASE-26013 Get operations readRows metrics becomes zero after HBASE-25677

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3411:
URL: https://github.com/apache/hbase/pull/3411#issuecomment-865868711


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 45s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m  1s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 58s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m  3s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 146m 30s |  hbase-server in the patch failed.  |
   |  |   | 171m 33s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3411 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux aec0dd9651ae 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 12d707c880 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/testReport/
 |
   | Max. process+thread count | 4120 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Commented] (HBASE-26021) HBase 1.7 to 2.4 upgrade issue

2021-06-22 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367200#comment-17367200
 ] 

Anoop Sam John commented on HBASE-26021:


Oh, I was about to ask that question next. That means if you try to upgrade an old 1.x cluster to the latest 1.7-based cluster, you will see the same issue there as well. Correct?

> HBase 1.7 to 2.4 upgrade issue
> --
>
> Key: HBASE-26021
> URL: https://issues.apache.org/jira/browse/HBASE-26021
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.7.0, 2.4.4
>Reporter: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2021-06-22 at 12.54.21 PM.png, Screenshot 
> 2021-06-22 at 12.54.30 PM.png
>
>
> As of today, if we bring up HBase cluster using branch-1 and upgrade to 
> branch-2.4, we are facing issue while parsing namespace from HDFS fileinfo. 
> Instead of "*hbase:meta*" and "*hbase:namespace*", parsing using ProtobufUtil 
> seems to be producing "*\n hbase:\n meta*" and "*\n hbase:\n namespace*"
> {code:java}
> 2021-06-22 00:05:56,611 INFO  
> [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] 
> regionserver.RSRpcServices: Open hbase:meta,,1.1588230740
> 2021-06-22 00:05:56,648 INFO  
> [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] 
> regionserver.RSRpcServices: Open 
> hbase:namespace,,1624297762817.396cb6cc00cd4334cb1ea3a792d7529a.
> 2021-06-22 00:05:56,759 ERROR 
> [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] 
> ipc.RpcServer: Unexpected throwable object
> java.lang.IllegalArgumentException: Illegal character <
> > at 0. Namespaces may only contain 'alphanumeric characters' from any 
> > language and digits:
> ^Ehbase^R   namespace
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246)
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220)
> at org.apache.hadoop.hbase.TableName.<init>(TableName.java:348)
> at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385)
> at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937)
> at 
> org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.parseFrom(TableDescriptorBuilder.java:1625)
> at 
> org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.access$200(TableDescriptorBuilder.java:597)
> at 
> org.apache.hadoop.hbase.client.TableDescriptorBuilder.parseFrom(TableDescriptorBuilder.java:320)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:511)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:496)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:482)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:210)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:2112)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:35218)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:395)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> 2021-06-22 00:05:56,759 ERROR 
> [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] 
> ipc.RpcServer: Unexpected throwable object
> java.lang.IllegalArgumentException: Illegal character <
> > at 0. Namespaces may only contain 'alphanumeric characters' from any 
> > language and digits:
> ^Ehbase^R^Dmeta
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246)
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220)
> at org.apache.hadoop.hbase.TableName.<init>(TableName.java:348)
> at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385)
> at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937)
> at 
> 

[GitHub] [hbase] YutSean opened a new pull request #3413: HBASE-21674 complement the admin operations in thrift2

2021-06-22 Thread GitBox


YutSean opened a new pull request #3413:
URL: https://github.com/apache/hbase/pull/3413


   https://issues.apache.org/jira/browse/HBASE-21674






[GitHub] [hbase] Apache-HBase commented on pull request #3409: HBASE-26020 Split TestWALEntryStream.testDifferentCounts out

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3409:
URL: https://github.com/apache/hbase/pull/3409#issuecomment-865855187


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 13s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m  3s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 57s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 10s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  | 190m  1s |  hbase-server in the patch passed.  
|
   |  |   | 227m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3409/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3409 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 4779e242742d 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f640eef924 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3409/2/testReport/
 |
   | Max. process+thread count | 2464 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3409/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] zhengzhuobinzzb closed pull request #3412: HBASE-26022. DNS jitter causes hbase client to get stuck

2021-06-22 Thread GitBox


zhengzhuobinzzb closed pull request #3412:
URL: https://github.com/apache/hbase/pull/3412


   






[GitHub] [hbase] zhengzhuobinzzb commented on pull request #3412: HBASE-26022. DNS jitter causes hbase client to get stuck

2021-06-22 Thread GitBox


zhengzhuobinzzb commented on pull request #3412:
URL: https://github.com/apache/hbase/pull/3412#issuecomment-865847552


   > branch-1.2 is EOLed, could you provide patch for branch-1?
   
   OK, I will try it.






[jira] [Commented] (HBASE-26021) HBase 1.7 to 2.4 upgrade issue

2021-06-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367175#comment-17367175
 ] 

Viraj Jasani commented on HBASE-26021:
--

It seems the issue is in branch-1. I just went 250 commits behind the current branch-1 HEAD as of today, built HBase 1, and started 1 HM and 4 RS. I then started 1 RS from branch-2.4, and both the meta and namespace regions were opened smoothly on the 2.4 RS.

> HBase 1.7 to 2.4 upgrade issue
> --
>
> Key: HBASE-26021
> URL: https://issues.apache.org/jira/browse/HBASE-26021
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.7.0, 2.4.4
>Reporter: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2021-06-22 at 12.54.21 PM.png, Screenshot 
> 2021-06-22 at 12.54.30 PM.png
>
>
> As of today, if we bring up HBase cluster using branch-1 and upgrade to 
> branch-2.4, we are facing issue while parsing namespace from HDFS fileinfo. 
> Instead of "*hbase:meta*" and "*hbase:namespace*", parsing using ProtobufUtil 
> seems to be producing "*\n hbase:\n meta*" and "*\n hbase:\n namespace*"
> {code:java}
> 2021-06-22 00:05:56,611 INFO  
> [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] 
> regionserver.RSRpcServices: Open hbase:meta,,1.1588230740
> 2021-06-22 00:05:56,648 INFO  
> [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] 
> regionserver.RSRpcServices: Open 
> hbase:namespace,,1624297762817.396cb6cc00cd4334cb1ea3a792d7529a.
> 2021-06-22 00:05:56,759 ERROR 
> [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] 
> ipc.RpcServer: Unexpected throwable object
> java.lang.IllegalArgumentException: Illegal character <
> > at 0. Namespaces may only contain 'alphanumeric characters' from any 
> > language and digits:
> ^Ehbase^R   namespace
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246)
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220)
> at org.apache.hadoop.hbase.TableName.<init>(TableName.java:348)
> at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385)
> at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937)
> at 
> org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.parseFrom(TableDescriptorBuilder.java:1625)
> at 
> org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.access$200(TableDescriptorBuilder.java:597)
> at 
> org.apache.hadoop.hbase.client.TableDescriptorBuilder.parseFrom(TableDescriptorBuilder.java:320)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:511)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:496)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:482)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:210)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:2112)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:35218)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:395)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> 2021-06-22 00:05:56,759 ERROR 
> [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] 
> ipc.RpcServer: Unexpected throwable object
> java.lang.IllegalArgumentException: Illegal character <
> > at 0. Namespaces may only contain 'alphanumeric characters' from any 
> > language and digits:
> ^Ehbase^R^Dmeta
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246)
> at 
> org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220)
> at org.apache.hadoop.hbase.TableName.<init>(TableName.java:348)
> at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385)
> at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937)
> at 
> 

[GitHub] [hbase] wchevreuil commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


wchevreuil commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656045500



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DirectInStoreCompactor.java
##
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver.compactions;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFileContext;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class DirectInStoreCompactor extends DefaultCompactor {
+  public DirectInStoreCompactor(Configuration conf, HStore store) {
+super(conf, store);
+  }
+
+  @Override
+  protected StoreFileWriter initWriter(FileDetails fd, boolean shouldDropBehind, boolean major)
+throws IOException {
+// When all MVCC readpoints are 0, don't write them.
+// See HBASE-8166, HBASE-12600, and HBASE-13389.
+return createWriterInFamilyDir(fd.maxKeyCount,
+  major ? majorCompactionCompression : minorCompactionCompression,
+  fd.maxMVCCReadpoint > 0, fd.maxTagsLength > 0, shouldDropBehind);
+  }
+
+  private StoreFileWriter createWriterInFamilyDir(long maxKeyCount,
+  Compression.Algorithm compression, boolean includeMVCCReadpoint, boolean includesTag,
+boolean shouldDropBehind) throws IOException {
+final CacheConfig writerCacheConf;
+// Don't cache data on write on compactions.
+writerCacheConf = new CacheConfig(store.getCacheConfig());
+writerCacheConf.setCacheDataOnWrite(false);
+
+InetSocketAddress[] favoredNodes = null;
+if (store.getHRegion().getRegionServerServices() != null) {
+  favoredNodes = store.getHRegion().getRegionServerServices().getFavoredNodesForRegion(
+    store.getHRegion().getRegionInfo().getEncodedName());

Review comment:
   I already need an instance of HStore, as you noted in your comment below. This HStore dependency is not exclusive to this Compactor extension; it is already defined in the parent class's single declared constructor, so it is used for other things in the hierarchy, not only the file reader creation. I don't think there's much value in creating an extra StoreContext here when we have everything we need in the HStore instance itself.








[GitHub] [hbase] petersomogyi commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


petersomogyi commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656022254



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##
@@ -266,6 +266,7 @@ public InternalScanner createScanner(ScanInfo scanInfo, 
List s
* @param fd The file details.
* @return Writer for a new StoreFile in the tmp dir.
* @throws IOException if creation failed
+   * @deprecated Use initWriter instead.

Review comment:
   Cool! I probably made this comment earlier and hadn't noticed the change.

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##
@@ -533,4 +549,46 @@ protected InternalScanner createScanner(HStore store, 
ScanInfo scanInfo,
 return new StoreScanner(store, scanInfo, scanners, smallestReadPoint, 
earliestPutTs,
 dropDeletesFromRow, dropDeletesToRow);
   }
+
+  /**
+   * Default implementation for committing store files created after a compaction. Assumes the
+   * new files have been created in a temp directory, so it renames those files into the actual
+   * store dir, then creates a reader and caches it into the store.
+   * @param cr the compaction request.
+   * @param newFiles the new files created by this compaction under a temp dir.
+   * @param user the running user.
+   * @return A list of the resulting store files already placed in the store dir and loaded into
+   * the store cache.
+   * @throws IOException
+  public List<HStoreFile> commitCompaction(CompactionRequestImpl cr, List<Path> newFiles,
+      User user) throws IOException {
+List<HStoreFile> sfs = new ArrayList<>(newFiles.size());
+for (Path newFile : newFiles) {
+  assert newFile != null;
+  this.store.validateStoreFile(newFile);

Review comment:
   Thanks for explaining!
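The temp-dir-then-rename commit flow that the `commitCompaction` javadoc describes can be illustrated outside HBase with plain `java.nio.file` operations. This is only a sketch of the pattern under stated assumptions, not the HBase implementation; `CommitSketch` and the file names are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CommitSketch {
  // Move one finished file from the temp dir into the store dir.
  static Path commit(Path tmpFile, Path storeDir) throws IOException {
    Path dest = storeDir.resolve(tmpFile.getFileName());
    // An atomic move keeps readers from ever observing a half-written file.
    return Files.move(tmpFile, dest, StandardCopyOption.ATOMIC_MOVE);
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempDirectory("compaction-tmp");
    Path store = Files.createTempDirectory("store-dir");
    // Simulate a newly written compaction output in the temp dir.
    Path newFile = Files.write(tmp.resolve("hfile1"), "data".getBytes());
    Path committed = commit(newFile, store);
    System.out.println(Files.exists(committed)); // prints "true"
  }
}
```

The direct-insert compactor under review skips this rename step by writing straight into the family directory, which is why it builds its writer with `withOutputDir(familyDir)`.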








[GitHub] [hbase] Reidddddd commented on pull request #3412: HBASE-26022. DNS jitter causes hbase client to get stuck

2021-06-22 Thread GitBox


Reidddddd commented on pull request #3412:
URL: https://github.com/apache/hbase/pull/3412#issuecomment-865774559


   branch-1.2 is EOLed, could you provide patch for branch-1?






[GitHub] [hbase] wchevreuil commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


wchevreuil commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656019402



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DirectInStoreCompactor.java
##
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver.compactions;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFileContext;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class DirectInStoreCompactor extends DefaultCompactor {
+  public DirectInStoreCompactor(Configuration conf, HStore store) {
+super(conf, store);
+  }
+
+  @Override
+  protected StoreFileWriter initWriter(FileDetails fd, boolean 
shouldDropBehind, boolean major)
+throws IOException {
+// When all MVCC readpoints are 0, don't write them.
+// See HBASE-8166, HBASE-12600, and HBASE-13389.
+return createWriterInFamilyDir(fd.maxKeyCount,
+  major ? majorCompactionCompression : minorCompactionCompression,
+  fd.maxMVCCReadpoint > 0, fd.maxTagsLength > 0, shouldDropBehind);
+  }
+
+  private StoreFileWriter createWriterInFamilyDir(long maxKeyCount,
+  Compression.Algorithm compression, boolean includeMVCCReadpoint, boolean 
includesTag,
+boolean shouldDropBehind) throws IOException {
+final CacheConfig writerCacheConf;
+// Don't cache data on write on compactions.
+writerCacheConf = new CacheConfig(store.getCacheConfig());
+writerCacheConf.setCacheDataOnWrite(false);

Review comment:
   Yeah, missed this while converting my original PoC to this PR. Let me 
try to avoid duplication with `HStore.createWriterInTmp`.








[GitHub] [hbase] wchevreuil commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


wchevreuil commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656009419



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DirectInStoreFlushContext.java
##
@@ -33,11 +33,11 @@
  * To be used only when PersistedStoreEngine is configured as the StoreEngine 
implementation.
  */
 @InterfaceAudience.Private
-public class PersistedStoreFlushContext extends DefaultStoreFlushContext {
+public class DirectInStoreFlushContext extends DefaultStoreFlushContext {

Review comment:
   Changed name after comments from @saintstack on HBASE-25391.








[jira] [Assigned] (HBASE-26022) DNS jitter causes hbase client to get stuck

2021-06-22 Thread zhuobin zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuobin zheng reassigned HBASE-26022:
-

Assignee: zhuobin zheng

> DNS jitter causes hbase client to get stuck
> ---
>
> Key: HBASE-26022
> URL: https://issues.apache.org/jira/browse/HBASE-26022
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Major
>
> In our production HBase cluster, we occasionally encounter the errors below, 
> which leave HBase stuck for a long time. After that, HBase requests to the 
> affected machine fail forever.
> {code:java}
> WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:${user@realm} (auth:KERBEROS) 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - LOOKING_UP_SERVER)]
> WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:${user@realm} (auth:KERBEROS) 
> cause:java.io.IOException: Couldn't setup connection for ${user@realm} to 
> hbase/${ip}@realm
> {code}
> The root problem is that the actual server principal generated in the KDC is 
> hbase/*${hostname}*@realm, so hbase/*${ip}*@realm can never be found in the KDC.
> When RpcClientImpl#Connection is constructed, the serverPrincipal field, which 
> is never changed afterwards, is generated by InetAddress.getCanonicalHostName(), 
> which returns the IP when it fails to resolve the hostname.
> Therefore, if DNS jitters while RpcClientImpl#Connection is being constructed, 
> the connection can never set up its SASL environment, and I don't see any 
> connection-abandon logic in the SASL failure code path.
> I can think of two solutions to this problem: 
>  # Abandon the connection when SASL fails, so the next request reconstructs the 
> connection and regenerates a new server principal.
>  # Refresh the serverPrincipal field when SASL fails, so the next retry uses a 
> new server principal.
> HBase Version: 1.2.0-cdh5.14.4
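The two remedies listed in the description above share one idea: stop trusting a principal resolved once at connection-construction time. Below is a minimal, hypothetical sketch of remedy #2 (re-resolving the server principal on each attempt); the class, the names, and the simulated DNS behaviour are illustrative only and are not HBase internals:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Hypothetical sketch of remedy #2 from HBASE-26022: re-resolve the server
// principal on every SASL attempt instead of caching it once, so a transient
// DNS failure (which yields an IP instead of a hostname) does not poison the
// connection forever. Names below are illustrative, not HBase code.
public class PrincipalRefreshSketch {

  // Stand-in for InetAddress.getCanonicalHostName(): the first call simulates
  // a DNS hiccup (returns the raw IP), later calls resolve the real hostname.
  static final AtomicInteger calls = new AtomicInteger();

  static String canonicalHostName() {
    return calls.incrementAndGet() == 1 ? "10.0.0.5" : "regionserver1.example.com";
  }

  // Build "hbase/<host>@REALM" from whatever the resolver returns right now,
  // rather than from a field captured at construction time.
  static String buildPrincipal(Supplier<String> hostResolver) {
    return "hbase/" + hostResolver.get() + "@EXAMPLE.COM";
  }

  public static void main(String[] args) {
    // First attempt hits the jitter and produces an IP-based principal,
    // which the KDC would reject (only hbase/<hostname>@REALM exists there).
    String first = buildPrincipal(PrincipalRefreshSketch::canonicalHostName);
    // The retry re-resolves instead of reusing a cached value, so it
    // recovers as soon as DNS is healthy again.
    String second = buildPrincipal(PrincipalRefreshSketch::canonicalHostName);
    System.out.println(first);
    System.out.println(second);
  }
}
```

Remedy #1 (abandoning the connection on SASL failure) reaches the same end state indirectly, since a freshly constructed connection re-runs this resolution.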



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] wchevreuil commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


wchevreuil commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r656009419



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DirectInStoreFlushContext.java
##
@@ -33,11 +33,11 @@
  * To be used only when PersistedStoreEngine is configured as the StoreEngine 
implementation.
  */
 @InterfaceAudience.Private
-public class PersistedStoreFlushContext extends DefaultStoreFlushContext {
+public class DirectInStoreFlushContext extends DefaultStoreFlushContext {

Review comment:
   Name change suggestion came after comments from @saintstack on 
HBASE-25391.








[GitHub] [hbase] zhengzhuobinzzb opened a new pull request #3412: HBASE-26022. DNS jitter causes hbase client to get stuck

2021-06-22 Thread GitBox


zhengzhuobinzzb opened a new pull request #3412:
URL: https://github.com/apache/hbase/pull/3412


   Signed-off-by: Zhuobin Zheng 






[GitHub] [hbase] Apache-HBase commented on pull request #3411: HBASE-26013 Get operations readRows metrics becomes zero after HBASE-25677

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3411:
URL: https://github.com/apache/hbase/pull/3411#issuecomment-865738028


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 16s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   4m 17s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  branch-2 passed  |
   | +1 :green_heart: |  spotbugs  |   2m 44s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 14s |  the patch passed  |
   | -0 :warning: |  javac  |   4m 14s |  hbase-server generated 1 new + 192 
unchanged - 1 fixed = 193 total (was 193)  |
   | +1 :green_heart: |  checkstyle  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 30s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 48s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  53m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3411 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 81c139c59581 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 12d707c880 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/artifact/yetus-general-check/output/diff-compile-javac-hbase-server.txt
 |
   | Max. process+thread count | 86 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3411/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Commented] (HBASE-25761) POC: hbase:meta,,1 as ROOT

2021-06-22 Thread Francis Christopher Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367137#comment-17367137
 ] 

Francis Christopher Liu commented on HBASE-25761:
-

I see, thanks for clarifying. I think I do understand; I just don't think it 
would go down the route of dying, for the reasons mentioned earlier: possibly 
enabling it by default in all or a subset of tests, removing the compatibility 
switch, and then there's adoption. In general I think the death of a feature 
comes from a mix of lack of support and lack of adoption. Since we are 
guaranteeing rolling upgradeability from 2.x, it is something that I think all 
implementations would run into, perhaps a little more for the single meta 
region than the others, but arguably inconsequential from this perspective 
(e.g. the code for multiple meta regions is not exercised). 

Perhaps this is something we can discuss further in a video call sync-up with 
[~stack] and hopefully a bunch of other folks that are interested? I think we 
should have a sync-up discussion for split meta in general anyway.

> POC: hbase:meta,,1 as ROOT
> --
>
> Key: HBASE-25761
> URL: https://issues.apache.org/jira/browse/HBASE-25761
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Michael Stack
>Assignee: Francis Christopher Liu
>Priority: Major
>
> One of the proposals up in the split-meta design doc suggests a 
> sleight-of-hand where the current hard-coded hbase:meta,,1 Region is 
> leveraged to serve as first Region of a split hbase:meta but also does 
> double-duty as 'ROOT'. This suggestion was put aside as a complicating 
> recursion in chat but then Francis noticed on a re-read of the BigTable 
> paper, that this is how they describe they do 'ROOT': "The root tablet is 
> just the first tablet in the METADATA table, but is treated specially -- it 
> is never split..."
> This issue is for playing around with this notion to see what the problems 
> are so can do a better description of this approach here, in the design:
> https://docs.google.com/document/d/11ChsSb2LGrSzrSJz8pDCAw5IewmaMV0ZDN1LrMkAj4s/edit?ts=606c120f#heading=h.ikbhxlcthjle





[jira] [Updated] (HBASE-26022) DNS jitter causes hbase client to get stuck

2021-06-22 Thread zhuobin zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuobin zheng updated HBASE-26022:
--
Description: 
In our production HBase cluster, we occasionally encounter the errors below, 
which leave HBase stuck for a long time. After that, HBase requests to the 
affected machine fail forever.
{code:java}
WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:${user@realm} (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Server not found 
in Kerberos database (7) - LOOKING_UP_SERVER)]
WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:${user@realm} (auth:KERBEROS) 
cause:java.io.IOException: Couldn't setup connection for ${user@realm} to 
hbase/${ip}@realm
{code}
The root problem is that the actual server principal generated in the KDC is 
hbase/*${hostname}*@realm, so hbase/*${ip}*@realm can never be found in the KDC.

When RpcClientImpl#Connection is constructed, the serverPrincipal field, which 
is never changed afterwards, is generated by InetAddress.getCanonicalHostName(), 
which returns the IP when it fails to resolve the hostname.

Therefore, if DNS jitters while RpcClientImpl#Connection is being constructed, 
the connection can never set up its SASL environment, and I don't see any 
connection-abandon logic in the SASL failure code path.

I can think of two solutions to this problem: 
 # Abandon the connection when SASL fails, so the next request reconstructs the 
connection and regenerates a new server principal.
 # Refresh the serverPrincipal field when SASL fails, so the next retry uses a 
new server principal.

HBase Version: 1.2.0-cdh5.14.4

  was:
In our product hbase cluster, we occasionally encounter  errors

 


> DNS jitter causes hbase client to get stuck
> ---
>
> Key: HBASE-26022
> URL: https://issues.apache.org/jira/browse/HBASE-26022
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: zhuobin zheng
>Priority: Major
>
> In our production HBase cluster, we occasionally encounter the errors below, 
> which leave HBase stuck for a long time. After that, HBase requests to the 
> affected machine fail forever.
> {code:java}
> WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:${user@realm} (auth:KERBEROS) 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - LOOKING_UP_SERVER)]
> WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:${user@realm} (auth:KERBEROS) 
> cause:java.io.IOException: Couldn't setup connection for ${user@realm} to 
> hbase/${ip}@realm
> {code}
> The root problem is that the actual server principal generated in the KDC is 
> hbase/*${hostname}*@realm, so hbase/*${ip}*@realm can never be found in the KDC.
> When RpcClientImpl#Connection is constructed, the serverPrincipal field, which 
> is never changed afterwards, is generated by InetAddress.getCanonicalHostName(), 
> which returns the IP when it fails to resolve the hostname.
> Therefore, if DNS jitters while RpcClientImpl#Connection is being constructed, 
> the connection can never set up its SASL environment, and I don't see any 
> connection-abandon logic in the SASL failure code path.
> I can think of two solutions to this problem: 
>  # Abandon the connection when SASL fails, so the next request reconstructs the 
> connection and regenerates a new server principal.
>  # Refresh the serverPrincipal field when SASL fails, so the next retry uses a 
> new server principal.
> HBase Version: 1.2.0-cdh5.14.4





[GitHub] [hbase] Apache-HBase commented on pull request #3410: HBASE-26013 Get operations readRows metrics becomes zero after HBASE-25677

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3410:
URL: https://github.com/apache/hbase/pull/3410#issuecomment-865695601


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-1 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  11m  1s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  compile  |   0m 53s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  checkstyle  |   2m  1s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 35s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  branch-1 passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +0 :ok: |  spotbugs  |   3m 25s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 20s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 15s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javac  |   0m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 48s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  javac  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 51s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   3m 15s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 59s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed with JDK Azul 
Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  the patch passed with JDK Azul 
Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  findbugs  |   3m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 178m  3s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 226m  9s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.mapreduce.TestLoadIncrementalHFiles |
   |   | hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint |
   |   | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3410/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3410 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 2c2fe3677d31 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-3410/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / 7e57fec |
   | Default Java | Azul Systems, Inc.-1.7.0_272-b10 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, 
Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3410/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3410/1/testReport/
 |
   | Max. process+thread count | 4342 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3410/1/console
 |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 

[GitHub] [hbase] Reidddddd merged pull request #3410: HBASE-26013 Get operations readRows metrics becomes zero after HBASE-25677

2021-06-22 Thread GitBox


Reidddddd merged pull request #3410:
URL: https://github.com/apache/hbase/pull/3410


   






[GitHub] [hbase] YutSean opened a new pull request #3411: HBASE-26013 Get operations readRows metrics becomes zero after HBASE-25677

2021-06-22 Thread GitBox


YutSean opened a new pull request #3411:
URL: https://github.com/apache/hbase/pull/3411


   https://issues.apache.org/jira/browse/HBASE-26013






[jira] [Updated] (HBASE-26013) Get operations readRows metrics becomes zero after HBASE-25677

2021-06-22 Thread Reid Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-26013:
--
Fix Version/s: 1.7.1

> Get operations readRows metrics becomes zero after HBASE-25677
> --
>
> Key: HBASE-26013
> URL: https://issues.apache.org/jira/browse/HBASE-26013
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 1.7.1
>
>
> After HBASE-25677, the server and table counters for each scan were moved 
> from #nextRaw to the RSRpcServices scan path. As a result, Get operations no 
> longer count the rows they read, so the readRows metric becomes zero. A 
> counter should be added in metricsUpdateForGet.
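The fix direction described above, counting rows for Gets inside metricsUpdateForGet instead of relying on the scanner path, can be reduced to a small hedged sketch; the class and method names mimic the Jira text but are not the actual HBase metrics code:

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of the HBASE-26013 fix direction: after HBASE-25677
// moved per-scan row counting out of #nextRaw, Gets must bump the readRows
// counter themselves, in metricsUpdateForGet. Names are illustrative only.
public class ReadRowsMetricsSketch {

  // LongAdder keeps increments cheap under many concurrent RPC handler threads.
  final LongAdder readRows = new LongAdder();

  void metricsUpdateForGet(long rowsRead) {
    // Count rows returned by the Get directly instead of relying on the
    // scanner path, which no longer sees Get-driven reads.
    readRows.add(rowsRead);
  }

  public static void main(String[] args) {
    ReadRowsMetricsSketch metrics = new ReadRowsMetricsSketch();
    metrics.metricsUpdateForGet(1);
    metrics.metricsUpdateForGet(2);
    System.out.println(metrics.readRows.sum());
  }
}
```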





[GitHub] [hbase] Apache-HBase commented on pull request #3352: HBASE-25913 Introduce EnvironmentEdge.Clock and Clock.currentTimeAdvancing

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3352:
URL: https://github.com/apache/hbase/pull/3352#issuecomment-865464340










[GitHub] [hbase] tomscut removed a comment on pull request #3325: HBASE-25934 Add username for RegionScannerHolder

2021-06-22 Thread GitBox


tomscut removed a comment on pull request #3325:
URL: https://github.com/apache/hbase/pull/3325#issuecomment-864965924


   > Agree the test failures look unrelated. Rerunning tests just to be sure.
   
   We can look at this. @anoopsjohn 






[GitHub] [hbase] ndimiduk commented on a change in pull request #3395: HBASE-26009 Backport HBASE-25766 "Introduce RegionSplitRestriction that restricts the pattern of the split point" to branch-2.3

2021-06-22 Thread GitBox


ndimiduk commented on a change in pull request #3395:
URL: https://github.com/apache/hbase/pull/3395#discussion_r655802538



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DelimitedKeyPrefixRegionSplitPolicy.java
##
@@ -37,7 +37,11 @@
  * userid_eventtype_eventid, and use prefix delimiter _, this 
split policy
  * ensures that all rows starting with the same userid, belongs to the same 
region.
  * @see KeyPrefixRegionSplitPolicy
+ *
+ * @deprecated since 2.4.3 and will be removed in 4.0.0. Use {@link 
RegionSplitRestriction},

Review comment:
   You need to update this deprecation string to also include the 
applicable 2.3.x version number.

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/KeyPrefixRegionSplitPolicy.java
##
@@ -29,7 +29,11 @@
  *
  * This ensures that a region is not split "inside" a prefix of a row key.
  * I.e. rows can be co-located in a region by their prefix.
+ *
+ * @deprecated since 2.4.3 and will be removed in 4.0.0. Use {@link 
RegionSplitRestriction},

Review comment:
   You need to update this deprecation string to also include the 
applicable 2.3.x version number.








[GitHub] [hbase] virajjasani commented on pull request #3407: HBASE-26018. Perf improvement in L1 cache

2021-06-22 Thread GitBox


virajjasani commented on pull request #3407:
URL: https://github.com/apache/hbase/pull/3407#issuecomment-864949628


   FYI @ben-manes 






[GitHub] [hbase] petersomogyi commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


petersomogyi commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r651901792



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##
@@ -266,6 +266,7 @@ public InternalScanner createScanner(ScanInfo scanInfo, 
List s
* @param fd The file details.
* @return Writer for a new StoreFile in the tmp dir.
* @throws IOException if creation failed
+   * @deprecated Use initWriter instead.

Review comment:
   Add `@Deprecated` annotation to the method.

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##
@@ -533,4 +549,46 @@ protected InternalScanner createScanner(HStore store, 
ScanInfo scanInfo,
 return new StoreScanner(store, scanInfo, scanners, smallestReadPoint, 
earliestPutTs,
 dropDeletesFromRow, dropDeletesToRow);
   }
+
+  /**
+   * Default implementation for committing store files created after a 
compaction. Assumes new files
+   * had been created on a temp directory, so it renames those files into the 
actual store dir,
+   * then creates a reader and caches it into the store.
+   * @param cr the compaction request.
+   * @param newFiles the new files created by this compaction under a temp dir.
+   * @param user the running user.
+   * @return A list of the resulting store files already placed in the store 
dir and loaded into the
+   * store cache.
+   * @throws IOException
+   */
+  public List<HStoreFile> commitCompaction(CompactionRequestImpl cr,
+      List<Path> newFiles, User user) throws IOException {
+    List<HStoreFile> sfs = new ArrayList<>(newFiles.size());
+    for (Path newFile : newFiles) {
+      assert newFile != null;
+      this.store.validateStoreFile(newFile);

Review comment:
   This is _new_ here. Can this add some delay to the commit time?

##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestDefaultCompactor.java
##
@@ -0,0 +1,123 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.compactions;
+
+import static junit.framework.TestCase.assertEquals;
+import static org.mockito.Mockito.mock;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.RegionServerTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+
+/**
+ * Test class for DirectInStoreCompactor.
+ */

Review comment:
   Copy-paste javadoc issue.
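As a hedged illustration of the flow the quoted commitCompaction() javadoc describes (write compaction output to a temp dir, validate, then rename into the store dir), here is a file-system-only sketch using plain java.nio; the real code operates on HDFS through Hadoop's FileSystem and HStoreFile, so every name below is a stand-in:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the temp-dir-then-rename commit pattern described in the
// quoted commitCompaction() javadoc. All paths and names are hypothetical;
// HBase itself renames via Hadoop FileSystem and then opens store readers.
public class CommitSketch {

  static List<Path> commit(List<Path> tmpFiles, Path storeDir) throws IOException {
    Files.createDirectories(storeDir);
    List<Path> committed = new ArrayList<>(tmpFiles.size());
    for (Path tmp : tmpFiles) {
      // Validate before moving, mirroring store.validateStoreFile(newFile).
      if (!Files.isRegularFile(tmp)) {
        throw new IOException("not a valid store file: " + tmp);
      }
      // An atomic move makes the file visible in the store dir in one step,
      // so readers never observe a half-committed file.
      Path dest = storeDir.resolve(tmp.getFileName());
      committed.add(Files.move(tmp, dest, StandardCopyOption.ATOMIC_MOVE));
    }
    return committed;
  }

  public static void main(String[] args) throws IOException {
    Path tmpDir = Files.createTempDirectory("compaction-tmp");
    Path storeDir = Files.createTempDirectory("region").resolve("cf");
    Path hfile = Files.createFile(tmpDir.resolve("hfile-0001"));
    System.out.println(commit(List.of(hfile), storeDir));
  }
}
```

The direct-insert variant discussed in this PR skips the temp-dir hop entirely, which is why the validate-then-rename loop above becomes unnecessary there.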








[GitHub] [hbase] brfrn169 commented on pull request #3395: HBASE-26009 Backport HBASE-25766 "Introduce RegionSplitRestriction that restricts the pattern of the split point" to branch-2.3

2021-06-22 Thread GitBox


brfrn169 commented on pull request #3395:
URL: https://github.com/apache/hbase/pull/3395#issuecomment-865434495


   This may be a new feature, but it doesn't break any compatibility, so I thought we could put it into the next 2.3.x release. What do you guys think? Thanks.






[GitHub] [hbase] Apache-HBase commented on pull request #251: HBASE-22114 Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #251:
URL: https://github.com/apache/hbase/pull/251#issuecomment-865595259


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 41s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | -1 :x: |  hbaseanti  |   0m  0s |  The patch appears use Hadoop classification instead of HBase.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 52s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   8m 14s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   1m 59s |  branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  compile  |   2m  2s |  branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  checkstyle  |   8m 59s |  branch-1 passed  |
   | +0 :ok: |  refguide  |   4m 36s |  branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.  |
   | -1 :x: |  shadedjars  |   0m 18s |  branch has 7 errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   3m 54s |  branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19  |
   | +1 :green_heart: |  javadoc  |   4m 43s |  branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10  |
   | +0 :ok: |  spotbugs  |   2m 42s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 27s |  branch/hbase-resource-bundle no findbugs output file (findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 51s |  the patch passed  |
   | -1 :x: |  compile  |   0m  9s |  root in the patch failed with JDK Azul Systems, Inc.-1.8.0_262-b19.  |
   | -1 :x: |  javac  |   0m  9s |  root in the patch failed with JDK Azul Systems, Inc.-1.8.0_262-b19.  |
   | +1 :green_heart: |  compile  |   1m 43s |  the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10  |
   | +1 :green_heart: |  javac  |   1m 43s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   4m 57s |  root: The patch generated 0 new + 83 unchanged - 11 fixed = 83 total (was 94)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  xml  |   0m  1s |  The patch has 5 ill-formed XML file(s).  |
   | +0 :ok: |  refguide  |   2m 57s |  patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.  |
   | +1 :green_heart: |  shadedjars  |   2m 56s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 33s |  Patch does not cause any errors with Hadoop 2.8.5 2.9.2.  |
   | -1 :x: |  javadoc  |   0m  9s |  root in the patch failed with JDK Azul Systems, Inc.-1.8.0_262-b19.  |
   | +1 :green_heart: |  javadoc  |   2m 48s |  the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10  |
   | +0 :ok: |  findbugs  |   0m 12s |  hbase-resource-bundle has no data from findbugs  |
   | -1 :x: |  findbugs  |   0m 10s |  hbase-tinylfu-blockcache in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 161m 29s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  The patch does not generate ASF License warnings.  |
   |  |   | 257m  8s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | XML | Parsing Error(s): |
   |   | hbase-common/src/main/resources/hbase-default.xml |
   |   | hbase-it/pom.xml |
   |   | hbase-resource-bundle/src/main/resources/supplemental-models.xml |
   |   | hbase-tinylfu-blockcache/pom.xml |
   |   | pom.xml |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-251/12/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/251 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile refguide xml |
   | uname | Linux 2f7671152fab 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /home/jenkins/jenkins-agent/workspace/HBase-PreCommit-GitHub-PR_PR-251/out/precommit/personality/provided.sh |
   | git revision | branch-1 / 5263b8c |
   | Default Java | Azul Systems, Inc.-1.7.0_272-b10 |
   | Multi-JDK 

[GitHub] [hbase] Apache-HBase commented on pull request #3406: HBASE-26015 Should implement getRegionServers(boolean) method in Asyn…

2021-06-22 Thread GitBox


Apache-HBase commented on pull request #3406:
URL: https://github.com/apache/hbase/pull/3406#issuecomment-864809668










[GitHub] [hbase] joshelser commented on pull request #2114: HBASE-24286: HMaster won't become healthy after after cloning or crea…

2021-06-22 Thread GitBox


joshelser commented on pull request #2114:
URL: https://github.com/apache/hbase/pull/2114#issuecomment-865145985


   > I'm suggesting to hide this behind a feature flag
   
   Makes sense to me. I think that addresses some of the other concerns from 
@Apache9 (mentioning him to make sure that's OK with him).
   
   If @taklwu is OK with it (and can grant you edit perms), maybe you can 
update this PR with your changes? Or, close this and open a new one with your 
modifications.






[GitHub] [hbase] z-york commented on a change in pull request #3389: HBASE-25392 Direct insert compacted HFiles into data directory.

2021-06-22 Thread GitBox


z-york commented on a change in pull request #3389:
URL: https://github.com/apache/hbase/pull/3389#discussion_r655784451



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DirectInStoreCompactor.java
##
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver.compactions;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFileContext;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class DirectInStoreCompactor extends DefaultCompactor {
+  public DirectInStoreCompactor(Configuration conf, HStore store) {
+    super(conf, store);
+  }
+
+  @Override
+  protected StoreFileWriter initWriter(FileDetails fd, boolean shouldDropBehind, boolean major)
+    throws IOException {
+    // When all MVCC readpoints are 0, don't write them.
+    // See HBASE-8166, HBASE-12600, and HBASE-13389.
+    return createWriterInFamilyDir(fd.maxKeyCount,
+      major ? majorCompactionCompression : minorCompactionCompression,
+      fd.maxMVCCReadpoint > 0, fd.maxTagsLength > 0, shouldDropBehind);
+  }
+
+  private StoreFileWriter createWriterInFamilyDir(long maxKeyCount,
+    Compression.Algorithm compression, boolean includeMVCCReadpoint, boolean includesTag,
+    boolean shouldDropBehind) throws IOException {
+    final CacheConfig writerCacheConf;
+    // Don't cache data on write on compactions.
+    writerCacheConf = new CacheConfig(store.getCacheConfig());
+    writerCacheConf.setCacheDataOnWrite(false);
+
+    InetSocketAddress[] favoredNodes = null;
+    if (store.getHRegion().getRegionServerServices() != null) {
+      favoredNodes = store.getHRegion().getRegionServerServices().getFavoredNodesForRegion(
+        store.getHRegion().getRegionInfo().getEncodedName());

Review comment:
   Why not use this directly from the StoreContext: https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreContext.java#L95

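The review comment above suggests reading favored nodes from the store's `StoreContext` rather than reaching through `HRegion` and `RegionServerServices` with an explicit null check. A minimal sketch of that null-safe lookup pattern follows; it uses hypothetical stand-in classes (`StoreContextStub`, `resolveFavoredNodes` are illustrations, not HBase APIs):

```java
import java.net.InetSocketAddress;
import java.util.function.Supplier;

public class FavoredNodesSketch {
  /** Stand-in for StoreContext: holds an optional supplier of favored nodes. */
  static final class StoreContextStub {
    private final Supplier<InetSocketAddress[]> favoredNodes;

    StoreContextStub(Supplier<InetSocketAddress[]> favoredNodes) {
      this.favoredNodes = favoredNodes;
    }

    /** May return null when no favored nodes are configured. */
    InetSocketAddress[] getFavoredNodes() {
      return favoredNodes == null ? null : favoredNodes.get();
    }
  }

  /** Writer-side code only needs the context, not the whole region. */
  static InetSocketAddress[] resolveFavoredNodes(StoreContextStub ctx) {
    InetSocketAddress[] nodes = ctx.getFavoredNodes();
    // Normalize null to an empty array so callers need no null check.
    return nodes == null ? new InetSocketAddress[0] : nodes;
  }

  public static void main(String[] args) {
    StoreContextStub withNodes = new StoreContextStub(
      () -> new InetSocketAddress[] { InetSocketAddress.createUnresolved("rs1", 16020) });
    StoreContextStub withoutNodes = new StoreContextStub(null);
    System.out.println(resolveFavoredNodes(withNodes).length);    // prints 1
    System.out.println(resolveFavoredNodes(withoutNodes).length); // prints 0
  }
}
```

The point of the design is to keep the compactor decoupled from region-server plumbing: the context carries whatever the writer needs, and absence of favored nodes is an ordinary, non-exceptional case.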
##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DirectInStoreFlushContext.java
##
@@ -33,11 +33,11 @@
  * To be used only when PersistedStoreEngine is configured as the StoreEngine implementation.
  */
 @InterfaceAudience.Private
-public class PersistedStoreFlushContext extends DefaultStoreFlushContext {
+public class DirectInStoreFlushContext extends DefaultStoreFlushContext {

Review comment:
   Why the change in naming? 'DirectInStore' seems a bit confusing... if you want to change it, perhaps just 'DirectStore'?

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/DirectInStoreCompactor.java
##
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver.compactions;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+import org.apache.hadoop.conf.Configuration;
+import 

[GitHub] [hbase] Apache9 commented on pull request #3396: HBASE-26010 Backport HBASE-25703 and HBASE-26002 to branch-2.3

2021-06-22 Thread GitBox


Apache9 commented on pull request #3396:
URL: https://github.com/apache/hbase/pull/3396#issuecomment-865016704


   Ping @ndimiduk 





