[GitHub] [hbase] virajjasani commented on pull request #2542: HBASE-24667 Rename configs that support atypical DNS set ups to put them in hbase.unsafe

2020-11-05 Thread GitBox


virajjasani commented on pull request #2542:
URL: https://github.com/apache/hbase/pull/2542#issuecomment-722907739


   @HorizonNet Could you please take a look?
   Thanks



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] anoopsjohn commented on a change in pull request #2582: HBASE-25187 Improve SizeCachedKV variants initialization

2020-11-05 Thread GitBox


anoopsjohn commented on a change in pull request #2582:
URL: https://github.com/apache/hbase/pull/2582#discussion_r518549774



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
##
@@ -790,23 +797,28 @@ public Cell getCell() {
 // we can handle the 'no tags' case.
 if (currTagsLen > 0) {
   ret = new SizeCachedKeyValue(blockBuffer.array(),
-  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId);
+  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId, currKeyLen,
+  rowLen);
 } else {
   ret = new SizeCachedNoTagsKeyValue(blockBuffer.array(),
-  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId);
+  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId, currKeyLen,
+  rowLen);
 }
   } else {
 ByteBuffer buf = blockBuffer.asSubByteBuffer(cellBufSize);
 if (buf.isDirect()) {
-  ret = currTagsLen > 0 ? new ByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId)
-  : new NoTagsByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId);
+  ret = currTagsLen > 0
+  ? new SizeCachedByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId,
+  currKeyLen, rowLen)
+  : new SizeCachedNoTagsByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId,
+  currKeyLen, rowLen);
 } else {
   if (currTagsLen > 0) {
 ret = new SizeCachedKeyValue(buf.array(), buf.arrayOffset() + buf.position(),
-cellBufSize, seqId);
+cellBufSize, seqId, currKeyLen, rowLen);

Review comment:
   Sorry, not rowKey... Yes, new rowLen state. My bad. What is the real 
advantage of this rowLen decoding happening in the Reader class (and being set 
to a state var)?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ramkrish86 commented on a change in pull request #2582: HBASE-25187 Improve SizeCachedKV variants initialization

2020-11-05 Thread GitBox


ramkrish86 commented on a change in pull request #2582:
URL: https://github.com/apache/hbase/pull/2582#discussion_r518546602



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
##
@@ -790,23 +797,28 @@ public Cell getCell() {
 // we can handle the 'no tags' case.
 if (currTagsLen > 0) {
   ret = new SizeCachedKeyValue(blockBuffer.array(),
-  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId);
+  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId, currKeyLen,
+  rowLen);
 } else {
   ret = new SizeCachedNoTagsKeyValue(blockBuffer.array(),
-  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId);
+  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId, currKeyLen,
+  rowLen);
 }
   } else {
 ByteBuffer buf = blockBuffer.asSubByteBuffer(cellBufSize);
 if (buf.isDirect()) {
-  ret = currTagsLen > 0 ? new ByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId)
-  : new NoTagsByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId);
+  ret = currTagsLen > 0
+  ? new SizeCachedByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId,
+  currKeyLen, rowLen)
+  : new SizeCachedNoTagsByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId,
+  currKeyLen, rowLen);
 } else {
   if (currTagsLen > 0) {
 ret = new SizeCachedKeyValue(buf.array(), buf.arrayOffset() + buf.position(),
-cellBufSize, seqId);
+cellBufSize, seqId, currKeyLen, rowLen);

Review comment:
   >> So why to change the critical part of HFile reader adding new byte[] state
   
   Sorry. When you say byte[] state - are you saying we are adding a new 
byte[] as a state variable?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ramkrish86 commented on a change in pull request #2582: HBASE-25187 Improve SizeCachedKV variants initialization

2020-11-05 Thread GitBox


ramkrish86 commented on a change in pull request #2582:
URL: https://github.com/apache/hbase/pull/2582#discussion_r518546426



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
##
@@ -790,23 +797,28 @@ public Cell getCell() {
 // we can handle the 'no tags' case.
 if (currTagsLen > 0) {
   ret = new SizeCachedKeyValue(blockBuffer.array(),
-  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId);
+  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId, currKeyLen,
+  rowLen);
 } else {
   ret = new SizeCachedNoTagsKeyValue(blockBuffer.array(),
-  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId);
+  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId, currKeyLen,
+  rowLen);
 }
   } else {
 ByteBuffer buf = blockBuffer.asSubByteBuffer(cellBufSize);
 if (buf.isDirect()) {
-  ret = currTagsLen > 0 ? new ByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId)
-  : new NoTagsByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId);
+  ret = currTagsLen > 0
+  ? new SizeCachedByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId,
+  currKeyLen, rowLen)
+  : new SizeCachedNoTagsByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId,
+  currKeyLen, rowLen);
 } else {
   if (currTagsLen > 0) {
 ret = new SizeCachedKeyValue(buf.array(), buf.arrayOffset() + buf.position(),
-cellBufSize, seqId);
+cellBufSize, seqId, currKeyLen, rowLen);

Review comment:
   If you see my previous commit - I got the rowLen in blockSeek and the 
readKeyValue() method as well, so it could be passed to the SizeCachedKV 
variants. That was mainly to ensure that we get the rowLen while doing the KV 
parsing itself rather than at KV creation. Now, in the last commit, since I was 
already doing that rowLen-related change, I changed the BBKV also so that we 
don't need to parse it there.
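
   For readers following this thread, here is a minimal, hypothetical sketch of 
the pattern being discussed; the class and field names below are illustrative, 
not the actual HBase SizeCachedKeyValue. The reader decodes the key length and 
row length once while parsing the block and hands them to the cell's 
constructor, so the cell caches them instead of re-decoding from the backing 
bytes on every access.

{code:java}
// Illustrative sketch only -- not the real HBase class.
public class SizeCachedCellSketch {
  private final byte[] bytes;
  private final int offset;
  private final int length;
  private final long seqId;
  // Decoded once by the reader during block parsing and cached here.
  private final int keyLength;
  private final short rowLength;

  public SizeCachedCellSketch(byte[] bytes, int offset, int length, long seqId,
      int keyLength, short rowLength) {
    this.bytes = bytes;
    this.offset = offset;
    this.length = length;
    this.seqId = seqId;
    this.keyLength = keyLength;
    this.rowLength = rowLength;
  }

  // Returns the cached value instead of re-reading a short from the buffer
  // on every call, which is the point of passing it in the constructor.
  public short getRowLength() {
    return rowLength;
  }

  public int getKeyLength() {
    return keyLength;
  }
}
{code}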





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] anoopsjohn commented on a change in pull request #2582: HBASE-25187 Improve SizeCachedKV variants initialization

2020-11-05 Thread GitBox


anoopsjohn commented on a change in pull request #2582:
URL: https://github.com/apache/hbase/pull/2582#discussion_r518540155



##
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
##
@@ -790,23 +797,28 @@ public Cell getCell() {
 // we can handle the 'no tags' case.
 if (currTagsLen > 0) {
   ret = new SizeCachedKeyValue(blockBuffer.array(),
-  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId);
+  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId, currKeyLen,
+  rowLen);
 } else {
   ret = new SizeCachedNoTagsKeyValue(blockBuffer.array(),
-  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId);
+  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, seqId, currKeyLen,
+  rowLen);
 }
   } else {
 ByteBuffer buf = blockBuffer.asSubByteBuffer(cellBufSize);
 if (buf.isDirect()) {
-  ret = currTagsLen > 0 ? new ByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId)
-  : new NoTagsByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId);
+  ret = currTagsLen > 0
+  ? new SizeCachedByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId,
+  currKeyLen, rowLen)
+  : new SizeCachedNoTagsByteBufferKeyValue(buf, buf.position(), cellBufSize, seqId,
+  currKeyLen, rowLen);
 } else {
   if (currTagsLen > 0) {
 ret = new SizeCachedKeyValue(buf.array(), buf.arrayOffset() + buf.position(),
-cellBufSize, seqId);
+cellBufSize, seqId, currKeyLen, rowLen);

Review comment:
   Any advantage we get because of doing it here? Because in one place, while 
constructing these KV objects, we decode the value, set it as a state var and 
pass it in the constructor. For keyLen, yes, we already decoded it there 
(before this patch) and we pass that on for reuse. But for rowLen it is not 
that way, right? So why to change the critical part of HFile reader adding new 
byte[] state? Better to avoid it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (HBASE-25216) The client zk syncer should deal with meta replica count change

2020-11-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-25216.
---
Hadoop Flags: Reviewed
  Resolution: Fixed

TestSeparateClientZKCluster is fine now.

Resolve.

> The client zk syncer should deal with meta replica count change
> ---
>
> Key: HBASE-25216
> URL: https://issues.apache.org/jira/browse/HBASE-25216
> Project: HBase
>  Issue Type: Bug
>  Components: master, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> The failure of TestSeparateClientZKCluster is because we start the zk syncer 
> before we initialize the meta region, and after HBASE-25099 we scan zookeeper 
> to get the meta znodes directly instead of checking the config, so we get an 
> empty list since there is no meta location on zk yet, and thus we sync 
> nothing.
> But changing the order cannot solve everything: after HBASE-25099 we can 
> change the meta replica count without restarting the master, so the zk syncer 
> should be able to notice the change and start syncing the location for the 
> new replicas.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #2627: HBASE-25251 Enable configuration based enable/disable of Unsafe packa…

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2627:
URL: https://github.com/apache/hbase/pull/2627#issuecomment-722793279


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 55s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   0m 46s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 38s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 19s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   0m 52s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  35m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2627/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2627 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux 92ac11070570 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 23e656712b |
   | Max. process+thread count | 94 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2627/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2627: HBASE-25251 Enable configuration based enable/disable of Unsafe packa…

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2627:
URL: https://github.com/apache/hbase/pull/2627#issuecomment-722791782


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 10s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 47s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   7m 26s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 20s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 38s |  hbase-common in the patch passed.  
|
   |  |   |  30m 47s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2627/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2627 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 5d19052200d2 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 23e656712b |
   | Default Java | AdoptOpenJDK-11.0.6+10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2627/1/testReport/
 |
   | Max. process+thread count | 197 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2627/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2627: HBASE-25251 Enable configuration based enable/disable of Unsafe packa…

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2627:
URL: https://github.com/apache/hbase/pull/2627#issuecomment-722790276


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 51s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 30s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 16s |  hbase-common in the patch passed.  
|
   |  |   |  25m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2627/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2627 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux bb9c03a2ef68 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 23e656712b |
   | Default Java | AdoptOpenJDK-1.8.0_232-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2627/1/testReport/
 |
   | Max. process+thread count | 340 (vs. ulimit of 3) |
   | modules | C: hbase-common U: hbase-common |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2627/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] sguggilam opened a new pull request #2627: HBASE-25251 Enable configuration based enable/disable of Unsafe packa…

2020-11-05 Thread GitBox


sguggilam opened a new pull request #2627:
URL: https://github.com/apache/hbase/pull/2627


   …ge usage



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”

2020-11-05 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17227117#comment-17227117
 ] 

Michael Stack commented on HBASE-25238:
---

[~Zhuqi1108] There is no release w/ this fix in it yet. If you need it now, try 
making a build from the tip of branch-2.3.

> Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing 
> required fields: state”
> -
>
> Key: HBASE-25238
> URL: https://issues.apache.org/jira/browse/HBASE-25238
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Zhuqi Jin
>Assignee: Michael Stack
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4
>
>
> When we upgraded the HBase cluster from 2.2.0-RC0 to 2.3.0 or 2.3.3, the 
> HMaster on the upgraded node failed to start.
> The error message is shown below: 
> {code:java}
> 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] 
> master.HMaster: Failed to become active 
> masterorg.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Message missing required fields: state   at 
> org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.Any.unpack(Any.java:228)  
>  at 
> org.apache.hadoop.hbase.procedure2.ProcedureUtil$StateSerializer.deserialize(ProcedureUtil.java:124)
>    at 
> org.apache.hadoop.hbase.master.assignment.RegionRemoteProcedureBase.deserializeStateData(RegionRemoteProcedureBase.java:352)
>    at 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure.deserializeStateData(OpenRegionProcedure.java:72)
>    at 
> org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:294)
>    at 
> org.apache.hadoop.hbase.procedure2.store.ProtoAndProcedure.getProcedure(ProtoAndProcedure.java:43)
>    at 
> org.apache.hadoop.hbase.procedure2.store.InMemoryProcedureIterator.next(InMemoryProcedureIterator.java:90)
>    at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore$1.load(RegionProcedureStore.java:194)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$2.load(WALProcedureStore.java:474)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:151)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:103)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:465)
>    at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.tryMigrate(RegionProcedureStore.java:184)
>    at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:257)
>    at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587)
>    at 
> org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1572)
>    at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:950)
>    at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
>    at 
> org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622) 
>   at java.lang.Thread.run(Thread.java:748)2020-11-02 23:04:01,998 ERROR 
> [master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: * ABORTING 
> master 2c4006997f99,16000,1604358237412: Unhandled exception. Starting 
> shutdown. 
> *org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Message missing required fields: state   at 
> org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
>  

[jira] [Updated] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”

2020-11-05 Thread Zhuqi Jin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhuqi Jin updated HBASE-25238:
--
Description: 
When we upgraded the HBase cluster from 2.2.0-RC0 to 2.3.0 or 2.3.3, the 
HMaster on the upgraded node failed to start.

The error message is shown below: 
{code:java}
2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] 
master.HMaster: Failed to become active 
masterorg.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
 Message missing required fields: state   at 
org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
   at 
org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
   at 
org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
   at 
org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125)
   at 
org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
   at 
org.apache.hbase.thirdparty.com.google.protobuf.Any.unpack(Any.java:228)
   at 
org.apache.hadoop.hbase.procedure2.ProcedureUtil$StateSerializer.deserialize(ProcedureUtil.java:124)
   at 
org.apache.hadoop.hbase.master.assignment.RegionRemoteProcedureBase.deserializeStateData(RegionRemoteProcedureBase.java:352)
   at 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure.deserializeStateData(OpenRegionProcedure.java:72)
   at 
org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:294)
   at 
org.apache.hadoop.hbase.procedure2.store.ProtoAndProcedure.getProcedure(ProtoAndProcedure.java:43)
   at 
org.apache.hadoop.hbase.procedure2.store.InMemoryProcedureIterator.next(InMemoryProcedureIterator.java:90)
   at 
org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore$1.load(RegionProcedureStore.java:194)
   at 
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$2.load(WALProcedureStore.java:474)
   at 
org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:151)
   at 
org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:103)
   at 
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:465)
   at 
org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.tryMigrate(RegionProcedureStore.java:184)
   at 
org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:257)
   at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587)
   at 
org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1572)
   at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:950)
   at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
   at 
org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622)   
at java.lang.Thread.run(Thread.java:748)2020-11-02 23:04:01,998 ERROR 
[master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: * ABORTING 
master 2c4006997f99,16000,1604358237412: Unhandled exception. Starting 
shutdown. 
*org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
 Message missing required fields: state   at 
org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
   at 
org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
   at 
org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
   at 
org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125)
   at 
org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
   at 
org.apache.hbase.thirdparty.com.google.protobuf.Any.unpack(Any.java:228)
   at 
org.apache.hadoop.hbase.procedure2.ProcedureUtil$StateSerializer.deserialize(ProcedureUtil.java:124)
   at 
org.apache.hadoop.hbase.master.assignment.RegionRemoteProcedureBase.deserializeStateData(RegionRemoteProcedureBase.java:352)
   at 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure.deserializeStateData(OpenRegionProcedure.java:72)
   at 
org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:294)
   at 

[jira] [Commented] (HBASE-25251) Enable configuration based enable/disable of Unsafe package usage

2020-11-05 Thread Sandeep Guggilam (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17227096#comment-17227096
 ] 

Sandeep Guggilam commented on HBASE-25251:
--

FYI [~apurtell]

> Enable configuration based enable/disable of Unsafe package usage
> -
>
> Key: HBASE-25251
> URL: https://issues.apache.org/jira/browse/HBASE-25251
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sandeep Guggilam
>Assignee: Sandeep Guggilam
>Priority: Major
>
> We need to provide a way for clients to disable Unsafe package usage. 
> Currently there is no way for clients to specify that they don't want to use 
> Unsafe-based conversion in Bytes.
> As a result there could be issues with missing Unsafe methods when the client 
> is on JDK 11, so clients should be able to disable Unsafe package use and 
> fall back to the normal conversion if they want to.
> Also, we use static references to Unsafe availability in the Bytes class, 
> assuming the availability is set during class loading and can never be 
> overridden later. Now that we plan to expose a util for clients to override 
> the availability if required, we need to avoid the static references when 
> computing the availability for comparisons.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-25251) Enable configuration based enable/disable of Unsafe package usage

2020-11-05 Thread Sandeep Guggilam (Jira)
Sandeep Guggilam created HBASE-25251:


 Summary: Enable configuration based enable/disable of Unsafe 
package usage
 Key: HBASE-25251
 URL: https://issues.apache.org/jira/browse/HBASE-25251
 Project: HBase
  Issue Type: Improvement
Reporter: Sandeep Guggilam
Assignee: Sandeep Guggilam


We need to provide a way for clients to disable Unsafe package usage. Currently 
there is no way for clients to specify that they don't want to use Unsafe-based 
conversion in Bytes.

As a result there could be issues with missing Unsafe methods when the client 
is on JDK 11, so clients should be able to disable Unsafe package use and fall 
back to the normal conversion if they want to.

Also, we use static references to Unsafe availability in the Bytes class, 
assuming the availability is set during class loading and can never be 
overridden later. Now that we plan to expose a util for clients to override the 
availability if required, we need to avoid the static references when computing 
the availability for comparisons.
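
A hedged sketch of the kind of toggle being described; the property key, class, 
and method names below are invented for illustration and are not the actual 
HBase patch.

{code:java}
import java.util.Arrays;

// Illustrative only: gate the Unsafe-backed fast path behind a flag that can
// be flipped after class loading, with a plain-Java fallback.
public final class UnsafeToggleSketch {
  // Hypothetical property key; the real change defines its own configuration.
  public static final String UNSAFE_ENABLED_KEY = "example.unsafe.enabled";

  private static volatile boolean unsafeEnabled =
      Boolean.parseBoolean(System.getProperty(UNSAFE_ENABLED_KEY, "true"));

  // Read on every call instead of being captured in a static final field,
  // so a client can override availability after class loading.
  public static boolean isUnsafeEnabled() {
    return unsafeEnabled;
  }

  public static void setUnsafeEnabled(boolean enabled) {
    unsafeEnabled = enabled;
  }

  public static int compareTo(byte[] left, byte[] right) {
    if (isUnsafeEnabled()) {
      // Stand-in for the Unsafe-backed word-at-a-time comparison.
      return Arrays.compareUnsigned(left, right);
    }
    // Safe fallback: byte-at-a-time unsigned lexicographic comparison.
    int len = Math.min(left.length, right.length);
    for (int i = 0; i < len; i++) {
      int diff = (left[i] & 0xff) - (right[i] & 0xff);
      if (diff != 0) {
        return diff;
      }
    }
    return left.length - right.length;
  }
}
{code}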



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25239) Upgrading HBase from 2.2.0/2.3.3 to master(3.0.0) fails because HMaster “Failed to become active master”

2020-11-05 Thread Zhuqi Jin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226998#comment-17226998
 ] 

Zhuqi Jin commented on HBASE-25239:
---

[~zhangduo]

I don't quite understand what "there should be live region servers to host 
namespace table and meta table" means.
Can't I shut down all nodes and then start them after upgrading to master?

> Upgrading HBase from 2.2.0/2.3.3 to master(3.0.0) fails because HMaster 
> “Failed to become active master”
> 
>
> Key: HBASE-25239
> URL: https://issues.apache.org/jira/browse/HBASE-25239
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.3.3
>Reporter: Zhuqi Jin
>Priority: Major
>
> When we upgraded the HBase cluster from 2.2.0/2.3.3 to 
> master (c303f9d329d578d31140e507bdbcbe3aa097042b), the HMaster on the 
> upgraded node failed to start.
> The error message is shown below:
> {code:java}
> 2020-11-03 02:52:27,809 ERROR [master/65cddff041f6:16000:becomeActiveMaster] 
> master.HMaster: Failed to become active 
> masterjava.lang.IllegalStateException: Expected the service 
> ClusterSchemaServiceImpl [FAILED] to be RUNNING, but the service has FAILEDat 
> org.apache.hbase.thirdparty.com.google.common.util.concurrent.AbstractService.checkCurrentState(AbstractService.java:379)at
>  
> org.apache.hbase.thirdparty.com.google.common.util.concurrent.AbstractService.awaitRunning(AbstractService.java:319)at
>  
> org.apache.hadoop.hbase.master.HMaster.initClusterSchemaService(HMaster.java:1362)at
>  
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1137)at
>  
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2245)at
>  org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:626)at 
> java.lang.Thread.run(Thread.java:748)Caused by: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 2 
> actions: RetriesExhaustedException: 2 times, servers with issues:at 
> org.apache.hadoop.hbase.client.BufferedMutatorOverAsyncBufferedMutator.makeError(BufferedMutatorOverAsyncBufferedMutator.java:107)at
>  
> org.apache.hadoop.hbase.client.BufferedMutatorOverAsyncBufferedMutator.internalFlush(BufferedMutatorOverAsyncBufferedMutator.java:122)at
>  
> org.apache.hadoop.hbase.client.BufferedMutatorOverAsyncBufferedMutator.close(BufferedMutatorOverAsyncBufferedMutator.java:166)at
>  
> org.apache.hadoop.hbase.master.TableNamespaceManager.migrateNamespaceTable(TableNamespaceManager.java:93)at
>  
> org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:123)at
>  
> org.apache.hadoop.hbase.master.ClusterSchemaServiceImpl.doStart(ClusterSchemaServiceImpl.java:61)at
>  
> org.apache.hbase.thirdparty.com.google.common.util.concurrent.AbstractService.startAsync(AbstractService.java:249)at
>  
> org.apache.hadoop.hbase.master.HMaster.initClusterSchemaService(HMaster.java:1360)...
>  4 more2020-11-03 02:52:27,810 ERROR 
> [master/65cddff041f6:16000:becomeActiveMaster] master.HMaster: Master server 
> abort: loaded coprocessors are: []2020-11-03 02:52:27,810 ERROR 
> [master/65cddff041f6:16000:becomeActiveMaster] master.HMaster: * ABORTING 
> master 65cddff041f6,16000,1604371935915: Unhandled exception. Starting 
> shutdown. *java.lang.IllegalStateException: Expected the service 
> ClusterSchemaServiceImpl [FAILED] to be RUNNING, but the service has FAILEDat 
> org.apache.hbase.thirdparty.com.google.common.util.concurrent.AbstractService.checkCurrentState(AbstractService.java:379)at
>  
> org.apache.hbase.thirdparty.com.google.common.util.concurrent.AbstractService.awaitRunning(AbstractService.java:319)at
>  
> org.apache.hadoop.hbase.master.HMaster.initClusterSchemaService(HMaster.java:1362)at
>  
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1137)at
>  
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2245)at
>  org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:626)at 
> java.lang.Thread.run(Thread.java:748)Caused by: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 2 
> actions: RetriesExhaustedException: 2 times, servers with issues:at 
> org.apache.hadoop.hbase.client.BufferedMutatorOverAsyncBufferedMutator.makeError(BufferedMutatorOverAsyncBufferedMutator.java:107)at
>  
> org.apache.hadoop.hbase.client.BufferedMutatorOverAsyncBufferedMutator.internalFlush(BufferedMutatorOverAsyncBufferedMutator.java:122)at
>  
> org.apache.hadoop.hbase.client.BufferedMutatorOverAsyncBufferedMutator.close(BufferedMutatorOverAsyncBufferedMutator.java:166)at
>  
> org.apache.hadoop.hbase.master.TableNamespaceManager.migrateNamespaceTable(TableNamespaceManager.java:93)at
>  
> 

[jira] [Commented] (HBASE-25240) gson format of RpcServer.logResponse is abnormal

2020-11-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226982#comment-17226982
 ] 

Hudson commented on HBASE-25240:


Results for branch branch-2.2
[build #114 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/114/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/114//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/114//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/114//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> gson format of RpcServer.logResponse is abnormal
> 
>
> Key: HBASE-25240
> URL: https://issues.apache.org/jira/browse/HBASE-25240
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.3, 2.2.6
>Reporter: wenfeiyi666
>Assignee: wenfeiyi666
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0, 2.2.7, 2.3.4
>
>
> It will turn ‘=’ into ‘\u003d’.
> ipc.RpcServer(550): (responseTooSlow): 
> {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":0,"starttimems":"1604389131993","responsesize":"64","method":"Multi","param":"region{color:#FF}\u003d{color}
>  test,,1604389129684.8812226d0f8942b24892c79e3c393b26., for 10 action(s) and 
> 1st row 
> key{color:#FF}\u003d{color}11","processingtimems":20,"client":"172.16.136.23:61264","queuetimems":0,"multi.servicecalls":0,"class":"MiniHBaseClusterRegionServer","multi.mutations":10}
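
For context, the escaping comes from Gson's default HTML escaping, which a 
builder option can turn off. A small standalone sketch follows, using plain 
com.google.gson here rather than HBase's shaded copy, and an invented map value 
just to show the effect.

{code:java}
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import java.util.Collections;
import java.util.Map;

public class GsonEscapingDemo {
  public static void main(String[] args) {
    Map<String, String> param =
        Collections.singletonMap("param", "region=test, 1st row key=11");

    // Default Gson escapes HTML-sensitive characters, so '=' becomes \u003d.
    System.out.println(new Gson().toJson(param));

    // Disabling HTML escaping keeps '=' readable in the logged JSON.
    Gson plain = new GsonBuilder().disableHtmlEscaping().create();
    System.out.println(plain.toJson(param));
  }
}
{code}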



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”

2020-11-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226983#comment-17226983
 ] 

Hudson commented on HBASE-25238:


Results for branch branch-2.2
[build #114 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/114/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/114//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/114//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/114//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing 
> required fields: state”
> -
>
> Key: HBASE-25238
> URL: https://issues.apache.org/jira/browse/HBASE-25238
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Zhuqi Jin
>Assignee: Michael Stack
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4
>
>
> When we upgraded the HBase cluster from 2.2.0-RC0 to 2.3.0 or 2.3.3, the 
> HMaster on the upgraded node failed to start.
> The error message is shown below: 
> {code:java}
> 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] 
> master.HMaster: Failed to become active 
> masterorg.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Message missing required fields: state   at 
> org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.Any.unpack(Any.java:228)  
>  at 
> org.apache.hadoop.hbase.procedure2.ProcedureUtil$StateSerializer.deserialize(ProcedureUtil.java:124)
>    at 
> org.apache.hadoop.hbase.master.assignment.RegionRemoteProcedureBase.deserializeStateData(RegionRemoteProcedureBase.java:352)
>    at 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure.deserializeStateData(OpenRegionProcedure.java:72)
>    at 
> org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:294)
>    at 
> org.apache.hadoop.hbase.procedure2.store.ProtoAndProcedure.getProcedure(ProtoAndProcedure.java:43)
>    at 
> org.apache.hadoop.hbase.procedure2.store.InMemoryProcedureIterator.next(InMemoryProcedureIterator.java:90)
>    at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore$1.load(RegionProcedureStore.java:194)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$2.load(WALProcedureStore.java:474)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:151)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:103)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:465)
>    at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.tryMigrate(RegionProcedureStore.java:184)
>    at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:257)
>    at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587)
>    at 
> org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1572)
>    at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:950)
>    at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
>    at 
> 

[GitHub] [hbase] Apache-HBase commented on pull request #2626: HBASE-25029 Resolve the TODO of AssignmentManager's loadMeta() method.

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2626:
URL: https://github.com/apache/hbase/pull/2626#issuecomment-722637185


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 39s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 26s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 18s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | -1 :x: |  spotbugs  |   0m 42s |  hbase-balancer generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 25s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  41m 25s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-balancer |
   |  |  Useless object stored in variable listFuture of method 
org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(boolean, Connection, byte[], 
byte[], ClientMetaTableAccessor$QueryType, Filter, int, 
ClientMetaTableAccessor$Visitor)  At MetaTableAccessor.java:listFuture of 
method org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(boolean, Connection, 
byte[], byte[], ClientMetaTableAccessor$QueryType, Filter, int, 
ClientMetaTableAccessor$Visitor)  At MetaTableAccessor.java:[line 540] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2626/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2626 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux 3370e6a7d264 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 23e656712b |
   | spotbugs | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2626/1/artifact/yetus-general-check/output/new-spotbugs-hbase-balancer.html
 |
   | Max. process+thread count | 94 (vs. ulimit of 3) |
   | modules | C: hbase-balancer hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2626/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-25250) Fixing hacky test in TestCoprocessorInterface

2020-11-05 Thread Abhishek Khanna (Jira)
Abhishek Khanna created HBASE-25250:
---

 Summary: Fixing hacky test in TestCoprocessorInterface
 Key: HBASE-25250
 URL: https://issues.apache.org/jira/browse/HBASE-25250
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.3.2
Reporter: Abhishek Khanna


The coprocessor was not being initialized in the region/store because 
regionServices was set to null. The test was using an explicit setter in the 
region to populate the coprocessor after region initialization, and this code 
path was exercised only by tests. This change removes the back-door entry that 
let the test set the coprocessor host explicitly; the coprocessor host is now 
set through the regular store initialization code path. This also allows the 
use of the store context, since otherwise the test would fail with the store 
context having a null coprocessor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] sanjeetnishad95 opened a new pull request #2626: HBASE-25029 Resolve the TODO of AssignmentManager's loadMeta() method.

2020-11-05 Thread GitBox


sanjeetnishad95 opened a new pull request #2626:
URL: https://github.com/apache/hbase/pull/2626


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-25249) Adding HStoreContext

2020-11-05 Thread Abhishek Khanna (Jira)
Abhishek Khanna created HBASE-25249:
---

 Summary: Adding HStoreContext
 Key: HBASE-25249
 URL: https://issues.apache.org/jira/browse/HBASE-25249
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.3.2
Reporter: Abhishek Khanna


Adding HStoreContext, which contains the metadata about the HStore. This 
metadata can be used across the HFile writers/readers and other HStore 
consumers without passing around the complete store and exposing its internals. 
This is a refactoring change that cleans up the existing code so that 
subsequent commits can leverage the context for more maintainable code.
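
As a rough illustration of the shape such a context could take -- the class and 
field names below are invented, not the actual HStoreContext API -- an 
immutable holder of store-level metadata is built once and handed to file 
readers/writers instead of the store itself.

{code:java}
// Illustrative sketch only; fields are examples of store-level metadata.
public final class StoreContextSketch {
  private final String columnFamilyName;
  private final int blockSize;
  private final boolean compressTags;

  private StoreContextSketch(Builder b) {
    this.columnFamilyName = b.columnFamilyName;
    this.blockSize = b.blockSize;
    this.compressTags = b.compressTags;
  }

  public String getColumnFamilyName() { return columnFamilyName; }
  public int getBlockSize() { return blockSize; }
  public boolean isCompressTags() { return compressTags; }

  public static final class Builder {
    private String columnFamilyName;
    private int blockSize = 64 * 1024;
    private boolean compressTags;

    public Builder withColumnFamilyName(String name) {
      this.columnFamilyName = name;
      return this;
    }

    public Builder withBlockSize(int size) {
      this.blockSize = size;
      return this;
    }

    public Builder withCompressTags(boolean compress) {
      this.compressTags = compress;
      return this;
    }

    public StoreContextSketch build() {
      return new StoreContextSketch(this);
    }
  }
}
{code}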



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25216) The client zk syncer should deal with meta replica count change

2020-11-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226892#comment-17226892
 ] 

Hudson commented on HBASE-25216:


Results for branch branch-2
[build #94 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The client zk syncer should deal with meta replica count change
> ---
>
> Key: HBASE-25216
> URL: https://issues.apache.org/jira/browse/HBASE-25216
> Project: HBase
>  Issue Type: Bug
>  Components: master, Zookeeper
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> The failure of TestSeparateClientZKCluster is because we start the zk syncer 
> before we initialize the meta region, and after HBASE-25099 we scan zookeeper 
> to get the meta znodes directly instead of checking the config, so we get an 
> empty list since there is no meta location on zk yet, and thus we sync 
> nothing.
> But changing the order cannot solve everything: after HBASE-25099 we can 
> change the meta replica count without restarting the master, so the zk syncer 
> should be able to notice the change and start syncing the location for the 
> new replicas.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25053) WAL replay should ignore 0-length files

2020-11-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226894#comment-17226894
 ] 

Hudson commented on HBASE-25053:


Results for branch branch-2
[build #94 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> WAL replay should ignore 0-length files
> ---
>
> Key: HBASE-25053
> URL: https://issues.apache.org/jira/browse/HBASE-25053
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 2.3.1
>Reporter: Nick Dimiduk
>Assignee: niuyulin
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> I overdrove a small testing cluster, filling HDFS. After cleaning up data to 
> bring HBase back up, I noticed all masters -refused to start- abort. Logs 
> complain of seeking past EOF. Indeed the last wal file name logged is a 
> 0-length file. WAL replay should gracefully skip and clean up such an empty 
> file.
> {noformat}
> 2020-09-16 19:51:30,297 ERROR org.apache.hadoop.hbase.master.HMaster: Failed 
> to become active master
> java.io.EOFException: Cannot seek after EOF
> at 
> org.apache.hadoop.hdfs.DFSInputStream.seek(DFSInputStream.java:1448)
> at 
> org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:66)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:211)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initReader(ProtobufLogReader.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:64)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:168)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:323)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:305)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:429)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4859)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4765)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1014)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:956)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7496)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7454)
> at 
> org.apache.hadoop.hbase.master.region.MasterRegion.open(MasterRegion.java:269)
> at 
> org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:309)
> at 
> org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:949)
> at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
> at 
> org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622)
> at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}
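
As a rough sketch of the kind of guard the replay path could apply (the helper name and cleanup policy below are assumptions, not the actual fix), the idea is simply to check the file status before constructing a reader and skip empty files:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: skip (and optionally clean up) zero-length recovered-edits/WAL files
// instead of handing them to the reader, which seeks past EOF on an empty file.
public final class EmptyWalGuard {
  private EmptyWalGuard() {
  }

  public static boolean shouldReplay(FileSystem fs, Path walFile, boolean deleteEmpty)
      throws IOException {
    FileStatus status = fs.getFileStatus(walFile);
    if (status.getLen() == 0) {
      if (deleteEmpty) {
        fs.delete(walFile, false); // hypothetical cleanup policy
      }
      return false; // nothing to replay
    }
    return true;
  }
}
{code}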



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25210) RegionInfo.isOffline is now a duplication with RegionInfo.isSplit

2020-11-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226893#comment-17226893
 ] 

Hudson commented on HBASE-25210:


Results for branch branch-2
[build #94 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/94/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> RegionInfo.isOffline is now a duplication with RegionInfo.isSplit
> -
>
> Key: HBASE-25210
> URL: https://issues.apache.org/jira/browse/HBASE-25210
> Project: HBase
>  Issue Type: Improvement
>  Components: meta
>Reporter: Duo Zhang
>Assignee: niuyulin
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> The only place where we set it to true is in splitRegion, and at the same 
> time we set split to true.
> So in general, I suggest that we deprecate isOffline and isSplitParent in 
> RegionInfo and only leave the isSplit method. And in RegionInfoBuilder, we 
> deprecate setOffline and only leave the setSplit method.
> This would make our code base cleaner.
> And for serialization compatibility, we'd better still keep the split and 
> offline fields in the actual RegionInfo data structure for a while.
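
A minimal sketch of the proposed API shape (the interface name and default bodies are illustrative assumptions, not the actual patch):

{code:java}
// Sketch: keep isSplit() as the source of truth and deprecate the redundant accessors.
public interface RegionInfoSketch {
  boolean isSplit();

  /** @deprecated duplicated by {@link #isSplit()}; kept only for compatibility. */
  @Deprecated
  default boolean isOffline() {
    return isSplit();
  }

  /** @deprecated duplicated by {@link #isSplit()}; kept only for compatibility. */
  @Deprecated
  default boolean isSplitParent() {
    return isSplit();
  }
}
{code}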



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”

2020-11-05 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-25238.
---
Fix Version/s: 2.3.4
   2.2.7
   2.4.0
   3.0.0-alpha-1
 Hadoop Flags: Reviewed
 Release Note: Fixes master procedure store migration issues going from 
2.0.x to 2.2.x and/or 2.3.x. Also fixes failed heartbeat parse during rolling 
upgrade from 2.0.x to 2.3.x.
 Assignee: Michael Stack
   Resolution: Fixed

Merged to 2.2+ (half of the patch only went into 2.2 – full patch elsewhere). 
Thanks for review [~vjasani]

> Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing 
> required fields: state”
> -
>
> Key: HBASE-25238
> URL: https://issues.apache.org/jira/browse/HBASE-25238
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Zhuqi Jin
>Assignee: Michael Stack
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4
>
>
> When we upgraded an HBase cluster from 2.0.0-RC0 to 2.3.0 or 2.3.3, the 
> HMaster on the upgraded node failed to start.
> The error message is shown below:
> {code:java}
> 2020-11-02 23:04:01,998 ERROR [master/2c4006997f99:16000:becomeActiveMaster] 
> master.HMaster: Failed to become active 
> masterorg.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Message missing required fields: state   at 
> org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:120)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:125)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
>    at 
> org.apache.hbase.thirdparty.com.google.protobuf.Any.unpack(Any.java:228)  
>  at 
> org.apache.hadoop.hbase.procedure2.ProcedureUtil$StateSerializer.deserialize(ProcedureUtil.java:124)
>    at 
> org.apache.hadoop.hbase.master.assignment.RegionRemoteProcedureBase.deserializeStateData(RegionRemoteProcedureBase.java:352)
>    at 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure.deserializeStateData(OpenRegionProcedure.java:72)
>    at 
> org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProcedure(ProcedureUtil.java:294)
>    at 
> org.apache.hadoop.hbase.procedure2.store.ProtoAndProcedure.getProcedure(ProtoAndProcedure.java:43)
>    at 
> org.apache.hadoop.hbase.procedure2.store.InMemoryProcedureIterator.next(InMemoryProcedureIterator.java:90)
>    at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore$1.load(RegionProcedureStore.java:194)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore$2.load(WALProcedureStore.java:474)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormatReader.finish(ProcedureWALFormatReader.java:151)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.load(ProcedureWALFormat.java:103)
>    at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.load(WALProcedureStore.java:465)
>    at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.tryMigrate(RegionProcedureStore.java:184)
>    at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:257)
>    at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587)
>    at 
> org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1572)
>    at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:950)
>    at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
>    at 
> org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622) 
>   at java.lang.Thread.run(Thread.java:748)2020-11-02 23:04:01,998 ERROR 
> [master/2c4006997f99:16000:becomeActiveMaster] master.HMaster: * ABORTING 
> master 2c4006997f99,16000,1604358237412: Unhandled exception. Starting 
> shutdown. 
> *org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException:
>  Message missing required fields: state   at 
> 

[jira] [Comment Edited] (HBASE-25234) [Upgrade]Incompatibility in reading RS report from 2.1 RS when Master is upgraded to a version containing HBASE-21406

2020-11-05 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226829#comment-17226829
 ] 

Michael Stack edited comment on HBASE-25234 at 11/5/20, 5:41 PM:
-

Fixed by HBASE-25238


was (Author: stack):
Pushed on branch-2.3+. Applied half of the patch to branch-2.2 (the change in 
clusterreport wasn't added till 2.3).

> [Upgrade]Incompatibility in reading RS report from 2.1 RS when Master is 
> upgraded to a version containing HBASE-21406
> -
>
> Key: HBASE-25234
> URL: https://issues.apache.org/jira/browse/HBASE-25234
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Sanjeet Nishad
>Assignee: Sanjeet Nishad
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4
>
>
> While upgrading to a version having HBASE-21406 and following the upgrade 
> process suggested in HBASE-21075, after Master is upgraded, the following 
> exception is observed while reading the rs report from old region servers:
> {code:java}
> 2020-11-02 18:25:30,303 WARN [RS-EventLoopGroup-1-2] ipc.RpcServer: 
> /x.x.x.x:16000 is unable to read call parameter from client x.x.x.x
> org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException:
>  Message missing required fields: load.replLoadSink.timestampStarted, 
> load.replLoadSink.totalOpsProcessed
>  at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:477)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerReportRequest$Builder.build(RegionServerStatusProtos.java:2411)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerReportRequest$Builder.build(RegionServerStatusProtos.java:2349)
>  at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.processRequest(ServerRpcConnection.java:654)
>  at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.processOneRpc(ServerRpcConnection.java:458)
>  at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.saslReadAndProcess(ServerRpcConnection.java:351)
>  at 
> org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:92)
>  at 
> org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:68)
>  at 
> org.apache.hadoop.hbase.ipc.NettyRpcServerRequestDecoder.channelRead(NettyRpcServerRequestDecoder.java:62)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:321)
>  at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:295)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:792)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:475)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>  at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>  at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>  at 
> 

[GitHub] [hbase] huaxiangsun commented on a change in pull request #2400: HBASE-24877 addendum: additional checks to avoid one extra possible race control in the initialize loop

2020-11-05 Thread GitBox


huaxiangsun commented on a change in pull request #2400:
URL: https://github.com/apache/hbase/pull/2400#discussion_r518239190



##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSource.java
##
@@ -416,6 +416,17 @@ public synchronized UUID getPeerUUID() {
 
   }
 
+  public static class FaultyReplicationEndpoint extends 
DoNothingReplicationEndpoint {

Review comment:
   Bumping into this during rebase of HBASE-18070 branch. If count is not 
used, just remove it from this class?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-25116) RegionMonitor support RegionTask count normalize

2020-11-05 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226881#comment-17226881
 ] 

Michael Stack commented on HBASE-25116:
---

What is it about the canary access that slows user requests?

 

Should these options be made into command-line options for the Canary tool 
rather than internal configs?

 

We are playing w/ 'task' counts in the patch. A task maps to a Region?  Would 
it help if we talked of sampling rather than task counts? A command-line option 
that took a sample float? If you passed --sample=0.1 or --table_sample=0.1 (or 
-Dcanary.sample.. ) on the command-line, would that be easier on the 
operator? It would make the feature easier to find if it showed in the canary 
--help usage?

> RegionMonitor support RegionTask count normalize
> 
>
> Key: HBASE-25116
> URL: https://issues.apache.org/jira/browse/HBASE-25116
> Project: HBase
>  Issue Type: Improvement
>Reporter: niuyulin
>Assignee: niuyulin
>Priority: Minor
>
> A large number of region tasks from the canary may affect normal user 
> requests; meanwhile, if there are only a few region tasks, the availability 
> monitoring may fluctuate on an occasional exception.
> So, if the task count is large, we will randomly trim tasks for each table 
> according to the ratio of the table's region count to the total region count 
> across all tasks. If the task count is small, we will repeat tasks.
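
A rough sketch of the trim/repeat normalization being described (class and method names are made up for illustration; the real canary code is organized differently):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: trim region tasks by a sample ratio when there are too many, or
// repeat tasks when there are too few, so the canary load stays roughly stable.
public final class CanaryTaskSampler {
  private CanaryTaskSampler() {
  }

  public static <T> List<T> normalize(List<T> tasks, double sampleRatio, int minTasks) {
    if (tasks.isEmpty()) {
      return new ArrayList<>();
    }
    List<T> result = new ArrayList<>(tasks);
    Collections.shuffle(result);
    int target = Math.max(minTasks, (int) Math.ceil(tasks.size() * sampleRatio));
    if (target < result.size()) {
      // Trim: keep a random subset, proportional to the table's share of regions.
      return new ArrayList<>(result.subList(0, target));
    }
    while (result.size() < target) {
      // Repeat: re-check some regions so a single flake does not swing availability.
      result.add(tasks.get(result.size() % tasks.size()));
    }
    return result;
  }
}
{code}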



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-25234) [Upgrade]Incompatibility in reading RS report from 2.1 RS when Master is upgraded to a version containing HBASE-21406

2020-11-05 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-25234.
---
Fix Version/s: 2.3.4
   2.2.7
   2.4.0
   3.0.0-alpha-1
 Hadoop Flags: Reviewed
 Release Note: Fixes auto-migration of the master procedure store so it works 
again going from 2.0.x => 2.2+. Also makes it so heartbeats work when 
rolling-upgrading from 2.0.x => 2.3+.
   Resolution: Fixed

Pushed on branch-2.3+. Applied half of the patch to branch-2.2 (the change in 
clusterreport wasn't added till 2.3).

> [Upgrade]Incompatibility in reading RS report from 2.1 RS when Master is 
> upgraded to a version containing HBASE-21406
> -
>
> Key: HBASE-25234
> URL: https://issues.apache.org/jira/browse/HBASE-25234
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Sanjeet Nishad
>Assignee: Sanjeet Nishad
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4
>
>
> While upgrading to a version having HBASE-21406 and following the upgrade 
> process suggested in HBASE-21075, after Master is upgraded, the following 
> exception is observed while reading the rs report from old region servers:
> {code:java}
> 2020-11-02 18:25:30,303 WARN [RS-EventLoopGroup-1-2] ipc.RpcServer: 
> /x.x.x.x:16000 is unable to read call parameter from client x.x.x.x
> org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException:
>  Message missing required fields: load.replLoadSink.timestampStarted, 
> load.replLoadSink.totalOpsProcessed
>  at 
> org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:477)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerReportRequest$Builder.build(RegionServerStatusProtos.java:2411)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerReportRequest$Builder.build(RegionServerStatusProtos.java:2349)
>  at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.processRequest(ServerRpcConnection.java:654)
>  at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.processOneRpc(ServerRpcConnection.java:458)
>  at 
> org.apache.hadoop.hbase.ipc.ServerRpcConnection.saslReadAndProcess(ServerRpcConnection.java:351)
>  at 
> org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:92)
>  at 
> org.apache.hadoop.hbase.ipc.NettyServerRpcConnection.process(NettyServerRpcConnection.java:68)
>  at 
> org.apache.hadoop.hbase.ipc.NettyRpcServerRequestDecoder.channelRead(NettyRpcServerRequestDecoder.java:62)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:321)
>  at 
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:295)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:792)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:475)
>  at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>  at 
> 

[GitHub] [hbase] saintstack commented on pull request #2625: HBASE-25238 Upgrading HBase from 2.2.0 to 2.3.x fails because of “Mes…

2020-11-05 Thread GitBox


saintstack commented on pull request #2625:
URL: https://github.com/apache/hbase/pull/2625#issuecomment-722492061


   Signed-off-by: Viraj Jasani 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-25238) Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”

2020-11-05 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226811#comment-17226811
 ] 

Michael Stack commented on HBASE-25238:
---

Here is the log from a manual migration. I ran a 2.0.x cluster and loaded it w/ 
some data. I then stopped the 2.0.x Master and started a 2.4.x Master. Below 
you see a successful migration of the store from the old format to the new. All 
the while, the old RegionServer kept heartbeating even though it was using the 
old ClusterReport format.

 
{code:java}

2020-11-05 16:27:04,684 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
region.RegionProcedureStore: Starting Region Procedure Store lease recovery...
2020-11-05 16:27:04,685 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
region.RegionProcedureStore: The old WALProcedureStore wal directory 
hdfs://nameservice1/tmp/stack.wal/MasterProcWALs exists, migrating...
2020-11-05 16:27:04,694 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
util.RecoverLeaseFSUtils: Recover lease on dfs file 
hdfs://nameservice1/tmp/stack.wal/MasterProcWALs/pv2-0020.log
2020-11-05 16:27:04,697 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
util.RecoverLeaseFSUtils: Recovered lease, attempt=0 on 
file=hdfs://nameservice1/tmp/stack.wal/MasterProcWALs/pv2-0020.log
 after 3ms
2020-11-05 16:27:04,704 WARN  [master/hbasedn020:16000:becomeActiveMaster] 
wal.WALProcedureStore: Unable to read tracker for 
hdfs://nameservice1/tmp/stack.wal/MasterProcWALs/pv2-0020.log
org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat$InvalidWALDataException:
 Missing trailer: size=18 startPos=18
at 
org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.readTrailer(ProcedureWALFormat.java:182)
at 
org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFile.readTrailer(ProcedureWALFile.java:93)
at 
org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFile.readTracker(ProcedureWALFile.java:100)
at 
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLog(WALProcedureStore.java:1389)
at 
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLogs(WALProcedureStore.java:1338)
at 
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:416)
at 
org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.tryMigrate(RegionProcedureStore.java:180)
at 
org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:257)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587)
at 
org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.java:1560)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:925)
at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2182)
at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:603)
at java.base/java.lang.Thread.run(Thread.java:834)
2020-11-05 16:27:04,710 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
wal.WALProcedureStore: Rolled new Procedure Store WAL, id=21
2020-11-05 16:27:04,714 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
wal.ProcedureWALFormatReader: Rebuilding tracker for 
hdfs://nameservice1/tmp/stack.wal/MasterProcWALs/pv2-0020.log
2020-11-05 16:27:04,716 WARN  [master/hbasedn020:16000:becomeActiveMaster] 
wal.ProcedureWALFormatReader: Nothing left to decode. Exiting with missing EOF, 
log=hdfs://nameservice1/tmp/stack.wal/MasterProcWALs/pv2-0020.log
2020-11-05 16:27:04,716 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
wal.ProcedureWALFormatReader: Read 0 entries in 
hdfs://nameservice1/tmp/stack.wal/MasterProcWALs/pv2-0020.log
2020-11-05 16:27:04,737 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
wal.WALProcedureStore: Rolled new Procedure Store WAL, id=22
2020-11-05 16:27:04,737 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
wal.WALProcedureStore: Remove all state logs with ID less than 21, since no 
active procedures
2020-11-05 16:27:04,737 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
wal.ProcedureWALFile: Archiving 
hdfs://nameservice1/tmp/stack.wal/MasterProcWALs/pv2-0020.log 
to hdfs://nameservice1/tmp/stack.wal/oldWALs/pv2-0020.log
2020-11-05 16:27:04,738 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
wal.ProcedureWALFile: Archiving 
hdfs://nameservice1/tmp/stack.wal/MasterProcWALs/pv2-0021.log 
to hdfs://nameservice1/tmp/stack.wal/oldWALs/pv2-0021.log
2020-11-05 16:27:04,743 INFO  [master/hbasedn020:16000:becomeActiveMaster] 
region.RegionProcedureStore: Migrated 0 existing procedures from the old 
storage format.
2020-11-05 16:27:04,743 INFO  

[GitHub] [hbase] saintstack merged pull request #2625: HBASE-25238 Upgrading HBase from 2.2.0 to 2.3.x fails because of “Mes…

2020-11-05 Thread GitBox


saintstack merged pull request #2625:
URL: https://github.com/apache/hbase/pull/2625


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack commented on pull request #2625: HBASE-25238 Upgrading HBase from 2.2.0 to 2.3.x fails because of “Mes…

2020-11-05 Thread GitBox


saintstack commented on pull request #2625:
URL: https://github.com/apache/hbase/pull/2625#issuecomment-722491691


   > +1, looks like we might need source compatibility report for protos too 
with new release?
   
   Yeah. Let me file an issue.
   
   Thanks for review. I added to the JIRA evidence of a successful update, one 
that doesn't fail in the two ways outlined in JIRA. Let me merge.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (HBASE-25240) gson format of RpcServer.logResponse is abnormal

2020-11-05 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HBASE-25240.
--
Fix Version/s: 2.4.0
   1.7.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Thanks for the patch [~wenfeiyi666].

> gson format of RpcServer.logResponse is abnormal
> 
>
> Key: HBASE-25240
> URL: https://issues.apache.org/jira/browse/HBASE-25240
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.3, 2.2.6
>Reporter: wenfeiyi666
>Assignee: wenfeiyi666
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0, 2.2.7, 2.3.4
>
>
> It will turn ‘=’ into ‘\u003d’.
> ipc.RpcServer(550): (responseTooSlow): 
> {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":0,"starttimems":"1604389131993","responsesize":"64","method":"Multi","param":"region\u003d
>  test,,1604389129684.8812226d0f8942b24892c79e3c393b26., for 10 action(s) and 
> 1st row 
> key\u003d11","processingtimems":20,"client":"172.16.136.23:61264","queuetimems":0,"multi.servicecalls":0,"class":"MiniHBaseClusterRegionServer","multi.mutations":10}
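
For reference, stock Gson escapes HTML-sensitive characters such as '=' by default, and a builder configured with disableHtmlEscaping() keeps them literal. A minimal sketch (plain Gson here, not the shaded thirdparty copy HBase actually uses):

{code:java}
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import java.util.LinkedHashMap;
import java.util.Map;

public class GsonEscapingDemo {
  public static void main(String[] args) {
    Map<String, String> call = new LinkedHashMap<>();
    call.put("param", "region=test, 1st row key=11");

    Gson escaping = new Gson();                                    // default: '=' -> \u003d
    Gson plain = new GsonBuilder().disableHtmlEscaping().create(); // keeps '=' as-is

    System.out.println(escaping.toJson(call)); // {"param":"region\u003dtest, 1st row key\u003d11"}
    System.out.println(plain.toJson(call));    // {"param":"region=test, 1st row key=11"}
  }
}
{code}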



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] virajjasani commented on pull request #2374: HBASE-25003 Backport HBASE-24350 and HBASE-24779 to branch-2.3

2020-11-05 Thread GitBox


virajjasani commented on pull request #2374:
URL: https://github.com/apache/hbase/pull/2374#issuecomment-722421807


   @wchevreuil @ndimiduk maybe we can proceed with this backport anytime soon?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] virajjasani closed pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal

2020-11-05 Thread GitBox


virajjasani closed pull request #2623:
URL: https://github.com/apache/hbase/pull/2623


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ramkrish86 commented on a change in pull request #2582: HBASE-25187 Improve SizeCachedKV variants initialization

2020-11-05 Thread GitBox


ramkrish86 commented on a change in pull request #2582:
URL: https://github.com/apache/hbase/pull/2582#discussion_r518036683



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
##
@@ -790,23 +797,28 @@ public Cell getCell() {
 // we can handle the 'no tags' case.
 if (currTagsLen > 0) {
   ret = new SizeCachedKeyValue(blockBuffer.array(),
-  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, 
seqId);
+  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, 
seqId, currKeyLen,
+  rowLen);
 } else {
   ret = new SizeCachedNoTagsKeyValue(blockBuffer.array(),
-  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, 
seqId);
+  blockBuffer.arrayOffset() + blockBuffer.position(), cellBufSize, 
seqId, currKeyLen,
+  rowLen);
 }
   } else {
 ByteBuffer buf = blockBuffer.asSubByteBuffer(cellBufSize);
 if (buf.isDirect()) {
-  ret = currTagsLen > 0 ? new ByteBufferKeyValue(buf, buf.position(), 
cellBufSize, seqId)
-  : new NoTagsByteBufferKeyValue(buf, buf.position(), cellBufSize, 
seqId);
+  ret = currTagsLen > 0
+  ? new SizeCachedByteBufferKeyValue(buf, buf.position(), 
cellBufSize, seqId,
+  currKeyLen, rowLen)
+  : new SizeCachedNoTagsByteBufferKeyValue(buf, buf.position(), 
cellBufSize, seqId,
+  currKeyLen, rowLen);
 } else {
   if (currTagsLen > 0) {
 ret = new SizeCachedKeyValue(buf.array(), buf.arrayOffset() + 
buf.position(),
-cellBufSize, seqId);
+cellBufSize, seqId, currKeyLen, rowLen);

Review comment:
   Here too the rowLen is already there. Any specific reason you feel this 
need not be decoded?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-25053) WAL replay should ignore 0-length files

2020-11-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226708#comment-17226708
 ] 

Hudson commented on HBASE-25053:


Results for branch master
[build #117 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/117/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/117/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/117/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> WAL replay should ignore 0-length files
> ---
>
> Key: HBASE-25053
> URL: https://issues.apache.org/jira/browse/HBASE-25053
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 2.3.1
>Reporter: Nick Dimiduk
>Assignee: niuyulin
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> I overdrove a small testing cluster, filling HDFS. After cleaning up data to 
> bring HBase back up, I noticed all masters -refused to start- abort. Logs 
> complain of seeking past EOF. Indeed the last wal file name logged is a 
> 0-length file. WAL replay should gracefully skip and clean up such an empty 
> file.
> {noformat}
> 2020-09-16 19:51:30,297 ERROR org.apache.hadoop.hbase.master.HMaster: Failed 
> to become active master
> java.io.EOFException: Cannot seek after EOF
> at 
> org.apache.hadoop.hdfs.DFSInputStream.seek(DFSInputStream.java:1448)
> at 
> org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:66)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:211)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initReader(ProtobufLogReader.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:64)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:168)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:323)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:305)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
> at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:429)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4859)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4765)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1014)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:956)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7496)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7454)
> at 
> org.apache.hadoop.hbase.master.region.MasterRegion.open(MasterRegion.java:269)
> at 
> org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:309)
> at 
> org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:949)
> at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2240)
> at 
> org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:622)
> at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25245) hbase_generate_website is failing due to incorrect jdk and maven syntax

2020-11-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226707#comment-17226707
 ] 

Hudson commented on HBASE-25245:


Results for branch master
[build #117 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/117/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/117/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/117/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/117/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> hbase_generate_website is failing due to incorrect jdk and maven syntax
> ---
>
> Key: HBASE-25245
> URL: https://issues.apache.org/jira/browse/HBASE-25245
> Project: HBase
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> While waiting for the HBase download page to reflect the new release, and 
> during an offline syncup with [~ndimiduk], I realized that the generate 
> website job has been failing for quite some time now, e.g. 
> https://ci-hadoop.apache.org/job/HBase/job/hbase_generate_website/80/
> {code:java}
> Obtained dev-support/jenkins-scripts/generate-hbase-website.Jenkinsfile from 
> git https://gitbox.apache.org/repos/asf/hbase.git
> Running in Durability level: PERFORMANCE_OPTIMIZED
> org.codehaus.groovy.control.MultipleCompilationErrorsException: startup 
> failed:
> WorkflowScript: 40: Tool type "maven" does not have an install of "Maven 
> (latest)" configured - did you mean "maven_latest"? @ line 40, column 15.
>maven 'Maven (latest)'
>  ^
> WorkflowScript: 42: Tool type "jdk" does not have an install of "JDK 1.8 
> (latest)" configured - did you mean "jdk_1.8_latest"? @ line 42, column 13.
>jdk "JDK 1.8 (latest)"
>^
> {code}
> We might have to apply a fix similar to HBASE-25204 to the generate-website 
> specific Jenkinsfile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #2452: HBASE-25071 ReplicationServer support start ReplicationSource internal

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2452:
URL: https://github.com/apache/hbase/pull/2452#issuecomment-722325977


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 49s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-24666 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 59s |  HBASE-24666 passed  |
   | +1 :green_heart: |  compile  |   2m  6s |  HBASE-24666 passed  |
   | +1 :green_heart: |  shadedjars  |   7m  7s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  HBASE-24666 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 13s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 13s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 30s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 52s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 40s |  hbase-replication in the patch 
passed.  |
   | -1 :x: |  unit  | 229m 22s |  hbase-server in the patch failed.  |
   |  |   | 264m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.13 Server=19.03.13 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/12/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2452 |
   | Optional Tests | unit javac javadoc shadedjars compile |
   | uname | Linux 00bf3087f026 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-24666 / f67c3dfc5a |
   | Default Java | 1.8.0_232 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/12/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/12/testReport/
 |
   | Max. process+thread count | 3930 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-replication hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/12/console
 |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2623:
URL: https://github.com/apache/hbase/pull/2623#issuecomment-722310878


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 25s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 10s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 29s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 17s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 26s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  | 141m 49s |  hbase-server in the patch passed.  
|
   |  |   | 174m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2623 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 60f8274781e5 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 0e71d6192a |
   | Default Java | AdoptOpenJDK-1.8.0_232-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/5/testReport/
 |
   | Max. process+thread count | 5047 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/5/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2623:
URL: https://github.com/apache/hbase/pull/2623#issuecomment-722309909


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 15s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   7m  2s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 33s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 39s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 48s |  hbase-common in the patch passed.  
|
   | -1 :x: |  unit  | 140m  6s |  hbase-server in the patch failed.  |
   |  |   | 172m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2623 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e2dad11bda10 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 0e71d6192a |
   | Default Java | AdoptOpenJDK-11.0.6+10 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/5/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/5/testReport/
 |
   | Max. process+thread count | 4007 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/5/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ramkrish86 commented on a change in pull request #2582: HBASE-25187 Improve SizeCachedKV variants initialization

2020-11-05 Thread GitBox


ramkrish86 commented on a change in pull request #2582:
URL: https://github.com/apache/hbase/pull/2582#discussion_r517935274



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
##
@@ -554,8 +559,10 @@ protected int blockSeek(Cell key, boolean seekBefore) {
   + " path=" + reader.getPath());
 }
 offsetFromPos += Bytes.SIZEOF_LONG;
+rowLen = ((blockBuffer.getByteAfterPosition(offsetFromPos) & 0xff) << 
8)
+^ (blockBuffer.getByteAfterPosition(offsetFromPos + 1) & 0xff);
 blockBuffer.asSubByteBuffer(blockBuffer.position() + offsetFromPos, 
klen, pair);
-bufBackedKeyOnlyKv.setKey(pair.getFirst(), pair.getSecond(), klen);
+bufBackedKeyOnlyKv.setKey(pair.getFirst(), pair.getSecond(), klen, 
(short)rowLen);

Review comment:
   IMO the rowLen is getting decoded in the setKey method anyway, and we can 
reuse the same value for rowLen. So I think it is better to have it decoded 
earlier, because that rowLen decoding happens per cell.
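
For context, a standalone equivalent of that two-byte decode (assuming the usual KeyValue key layout, where the row length is the leading big-endian short) is just:

```java
// The row length is serialized as a 2-byte big-endian short at the start of the key,
// so combining the two bytes like this (with '^' or '|', identical here since the bit
// ranges are disjoint) recovers it without materializing the whole key.
static int readRowLen(byte hi, byte lo) {
  return ((hi & 0xff) << 8) ^ (lo & 0xff);
}
```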





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2625: HBASE-25238 Upgrading HBase from 2.2.0 to 2.3.x fails because of “Mes…

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2625:
URL: https://github.com/apache/hbase/pull/2625#issuecomment-722278595


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 39s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m 36s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   7m 28s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 37s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 24s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 53s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 37s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 193m 41s |  hbase-server in the patch passed.  
|
   |  |   | 233m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2625/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/2625 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux edd0581ab44c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 3d8152b635 |
   | Default Java | AdoptOpenJDK-11.0.6+10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2625/1/testReport/
 |
   | Max. process+thread count | 2476 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2625/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2452: HBASE-25071 ReplicationServer support start ReplicationSource internal

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2452:
URL: https://github.com/apache/hbase/pull/2452#issuecomment-722275578


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-24666 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 13s |  HBASE-24666 passed  |
   | +1 :green_heart: |  compile  |   2m 21s |  HBASE-24666 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 37s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  HBASE-24666 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 23s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 43s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 57s |  hbase-protocol-shaded in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 27s |  hbase-replication in the patch passed.  |
   | +1 :green_heart: |  unit  | 133m 15s |  hbase-server in the patch passed.  |
   |  |   | 167m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.13 Server=19.03.13 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/12/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/2452 |
   | Optional Tests | unit javac javadoc shadedjars compile |
   | uname | Linux e77e69bdd10b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-24666 / f67c3dfc5a |
   | Default Java | 2020-01-14 |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/12/testReport/ |
   | Max. process+thread count | 4354 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-replication hbase-server U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/12/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ramkrish86 commented on a change in pull request #2582: HBASE-25187 Improve SizeCachedKV variants initialization

2020-11-05 Thread GitBox


ramkrish86 commented on a change in pull request #2582:
URL: https://github.com/apache/hbase/pull/2582#discussion_r517919646



##
File path: hbase-common/src/main/java/org/apache/hadoop/hbase/SizeCachedKeyValue.java
##
@@ -39,12 +39,22 @@
   private short rowLen;
   private int keyLen;
 
-  public SizeCachedKeyValue(byte[] bytes, int offset, int length, long seqId) {
+  public SizeCachedKeyValue(byte[] bytes, int offset, int length, long seqId, int keyLen) {
 super(bytes, offset, length);
  // We will read all these cached values at least once. Initialize now itself so that we can
 // avoid uninitialized checks with every time call
-rowLen = super.getRowLength();
-keyLen = super.getKeyLength();
+this.rowLen = super.getRowLength();

Review comment:
   I tried this actually, but calling super.getRowLength() from within the this() constructor invocation is not allowed. Hence I went with this simple way.
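
   A minimal sketch of the Java rule referenced above, using hypothetical class and field names rather than the actual HBase classes: the argument list of an explicit this(...) invocation is evaluated before the object under construction is available, so an instance call such as super.getRowLength() may not appear there.

   class Base {
     short getRowLength() { return (short) 10; }
   }

   class Derived extends Base {
     private final short rowLen;

     Derived() {
       // this(super.getRowLength());  // does not compile: 'super' may not be
       //                              // referenced before the superclass
       //                              // constructor has run
       this((short) 10);               // only expressions that do not touch this/super are legal here
     }

     Derived(short rowLen) {
       this.rowLen = rowLen;
     }
   }

   This matches the shape of the diff above, where rowLen is computed inside the constructor body (this.rowLen = super.getRowLength()) rather than being passed through a this(...) chain.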





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2625: HBASE-25238 Upgrading HBase from 2.2.0 to 2.3.x fails because of “Mes…

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2625:
URL: https://github.com/apache/hbase/pull/2625#issuecomment-722248703


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 39s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m  5s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m  1s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 15s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 55s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 40s |  hbase-protocol-shaded in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 22s |  hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 144m 58s |  hbase-server in the patch passed.  |
   |  |   | 177m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2625/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/2625 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ff06d485a3bf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 3d8152b635 |
   | Default Java | AdoptOpenJDK-1.8.0_232-b09 |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2625/1/testReport/ |
   | Max. process+thread count | 4029 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2625/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2623: HBASE-25240 gson format of RpcServer.logResponse is abnormal

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2623:
URL: https://github.com/apache/hbase/pull/2623#issuecomment-722243322


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 50s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 43s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 36s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   3m  0s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 26s |  hbase-common: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m  6s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   3m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 25s |  The patch does not generate ASF License warnings.  |
   |  |   |  47m 41s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/5/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/2623 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
   | uname | Linux 5b71568530d8 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 0e71d6192a |
   | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/5/artifact/yetus-general-check/output/diff-checkstyle-hbase-common.txt |
   | Max. process+thread count | 94 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2623/5/console |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #2452: HBASE-25071 ReplicationServer support start ReplicationSource internal

2020-11-05 Thread GitBox


Apache-HBase commented on pull request #2452:
URL: https://github.com/apache/hbase/pull/2452#issuecomment-722214405


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ HBASE-24666 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 27s |  HBASE-24666 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 28s |  HBASE-24666 passed  |
   | +1 :green_heart: |  spotbugs  |   5m 53s |  HBASE-24666 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m  5s |  hbase-server: The patch generated 1 new + 56 unchanged - 0 fixed = 57 total (was 56)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  16m 54s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  hbaseprotoc  |   1m 58s |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   6m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 36s |  The patch does not generate ASF License warnings.  |
   |  |   |  50m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.13 Server=19.03.13 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/12/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/2452 |
   | Optional Tests | dupname asflicense cc hbaseprotoc prototool spotbugs hadoopcheck hbaseanti checkstyle |
   | uname | Linux 24a7bab306c9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-24666 / f67c3dfc5a |
   | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/12/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
   | Max. process+thread count | 94 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-replication hbase-server U: . |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2452/12/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org