[GitHub] [hbase-operator-tools] wchevreuil commented on pull request #86: HBASE-25874 [hbase-operator-tools] Add tool for identifying 'unknown s…
wchevreuil commented on pull request #86: URL: https://github.com/apache/hbase-operator-tools/pull/86#issuecomment-837185929 > I think my main worry is about relying on log file parsing for this. That's not to say this is flawed -- it just is what it is. Yeah, it's fragile indeed, but we are targeting this mainly for the unfortunate souls running hbase versions lower than 2.2.7, 2.3.5 and 2.4.7, as mentioned in the README, so there shouldn't be any changes to the logging format for these already released versions. For any version from the above mentioned onwards, there's already the hbck2 recoverUnknown method. > Is there a reason you went for this approach rather than trying to read meta and interrogate the Master as to who the active RegionServers are? Although meta is already online, the master has not completed initialisation (because of the namespace table region stuck on an unknown server), so it pushes back any client requests. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Work started] (HBASE-25875) RegionServer failed to start due to IllegalThreadStateException in AuthenticationTokenSecretManager.start
[ https://issues.apache.org/jira/browse/HBASE-25875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-25875 started by Pankaj Kumar. > RegionServer failed to start due to IllegalThreadStateException in > AuthenticationTokenSecretManager.start > - > > Key: HBASE-25875 > URL: https://issues.apache.org/jira/browse/HBASE-25875 > Project: HBase > Issue Type: Bug >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Major > > RegionServer failed to complete initialization and aborted during > AuthenticationTokenSecretManager#leaderElector start. > Observed the following WARN log: > {noformat} > 2021-05-03 07:59:01,848 | WARN | RS-EventLoopGroup-1-6 | Thread > leaderElector[ZKSecretWatcher-leaderElector:56] is stopped or not alive | > org.apache.hadoop.hbase.security.token.AuthenticationTokenSecretManager.retrievePassword(AuthenticationTokenSecretManager.java:153) > 2021-05-03 07:59:01,848 | INFO | RS-EventLoopGroup-1-6 | Thread > leaderElector [ZKSecretWatcher-leaderElector:56] is started | > org.apache.hadoop.hbase.security.token.AuthenticationTokenSecretManager.retrievePassword(AuthenticationTokenSecretManager.java:156) > 2021-05-03 07:59:01,854 | INFO | ZKSecretWatcher-leaderElector | Found > existing leader with ID: RS-IP-PORT-StartCode | > org.apache.hadoop.hbase.zookeeper.ZKLeaderManager.waitToBecomeLeader(ZKLeaderManager.java:130) > {noformat} > As per the code, AuthenticationTokenSecretManager#leaderElector is started > while retrieving the password, before AuthenticationTokenSecretManager#start, > > [https://github.com/apache/hbase/blob/8c2332d46532135723cc7a6084a2a125f3d9d8db/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSecretManager.java#L155] > So IllegalThreadStateException occurred during > AuthenticationTokenSecretManager#start, > > 
[https://github.com/apache/hbase/blob/8c2332d46532135723cc7a6084a2a125f3d9d8db/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSecretManager.java#L107] > {noformat} > 2021-05-03 07:59:02,066 | ERROR | main | Failed construction RegionServer | > org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:775) > java.lang.IllegalThreadStateException > at java.lang.Thread.start(Thread.java:708) > at > org.apache.hadoop.hbase.security.token.AuthenticationTokenSecretManager.start(AuthenticationTokenSecretManager.java:107) > at > org.apache.hadoop.hbase.ipc.NettyRpcServer.start(NettyRpcServer.java:131) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.start(RSRpcServices.java:1695) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:756) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:3270) > at > org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:63) > at > org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #3242: HBASE-25867 Extra doc around ITBLL
Apache-HBase commented on pull request #3242: URL: https://github.com/apache/hbase/pull/3242#issuecomment-837244127 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 47s | Docker mode activated. | | -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.4 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 53s | branch-2.4 passed | | +1 :green_heart: | compile | 2m 14s | branch-2.4 passed | | +1 :green_heart: | shadedjars | 8m 20s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 9s | branch-2.4 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 50s | the patch passed | | +1 :green_heart: | compile | 1m 43s | the patch passed | | +1 :green_heart: | javac | 1m 43s | the patch passed | | +1 :green_heart: | shadedjars | 7m 39s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 58s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 195m 28s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 1m 5s | hbase-it in the patch passed. 
| | | | 234m 9s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3242 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 9585789c4738 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.4 / e4dc6eaa76 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/2/testReport/ | | Max. process+thread count | 2611 (vs. ulimit of 12500) | | modules | C: hbase-server hbase-it U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] pankaj72981 opened a new pull request #3250: HBASE-25875 RegionServer failed to start due to IllegalThreadStateException in AuthenticationTokenSecretManager.start
pankaj72981 opened a new pull request #3250: URL: https://github.com/apache/hbase/pull/3250
[GitHub] [hbase] shahrs87 commented on a change in pull request #3244: HBASE-25869 WAL value compression
shahrs87 commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629708006 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java ## @@ -256,6 +278,26 @@ public void write(Cell cell) throws IOException { } } } + +private byte[] compressValue(Cell cell) throws IOException { + byte[] buffer = new byte[4096]; + ByteArrayOutputStream baos = new ByteArrayOutputStream(); + Deflater deflater = compression.getValueCompressor().getDeflater(); + deflater.setInput(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()); + boolean finished = false; + do { +int bytesOut = deflater.deflate(buffer); +if (bytesOut > 0) { + baos.write(buffer, 0, bytesOut); +} else { + bytesOut = deflater.deflate(buffer, 0, buffer.length, Deflater.SYNC_FLUSH); Review comment: If we reach this else condition, that means we have compressed all the data and written it to the ByteArrayOutputStream. Now, in the else condition, we are sure that bytesOut will be 0. We are not resetting `buffer` anywhere. So for the last chunk of the Cell's value, whose size is <= 4096, will we be compressing it twice? @apurtell It is entirely possible that I misunderstood something. Please correct me if that is the case. Thank you!
[GitHub] [hbase] saintstack commented on a change in pull request #3246: HBASE-25870 while creating manifest, search only for ancestors instead of all of…
saintstack commented on a change in pull request #3246: URL: https://github.com/apache/hbase/pull/3246#discussion_r629730178 ## File path: hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManager.java ## @@ -295,6 +295,11 @@ public void setBackupInfo(BackupInfo backupInfo) { .withRootDir(backup.getBackupRootDir()).withTableList(backup.getTableNames()) .withStartTime(backup.getStartTs()).withCompleteTime(backup.getCompleteTs()).build(); + // Not required if root dir is not same. + if (!image.getRootDir().equals(backupInfo.getBackupRootDir())) { Review comment: Maybe add a because here? (I say this because this change looks fine but looking at this and looking at the surrounding code, I'm not clear what is going on -- 'image', 'ancestor' -- and I'm not that inclined to spend much time doing research because as you know, I'm not that mad about the backup feature!). Thanks.
[jira] [Comment Edited] (HBASE-25032) Wait for region server to become online before adding it to online servers in Master
[ https://issues.apache.org/jira/browse/HBASE-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342086#comment-17342086 ] Huaxiang Sun edited comment on HBASE-25032 at 5/10/21, 6:58 PM: [~ndimiduk] I think the metadata is not changed. When the patch was backported to branch-2, the correct fix version was not marked. In general, it is a good practice to mark the correct fix version to reduce the release manager's work. Thanks. was (Author: huaxiangsun): [~ndimiduk] I think the metadata is not changed. When the patch was backported to branch-2, the correct fix version was not marked. In general, it will be good practice to mark the correct fix version to reduce the release manager's work. Thanks. > Wait for region server to become online before adding it to online servers in > Master > > > Key: HBASE-25032 > URL: https://issues.apache.org/jira/browse/HBASE-25032 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Guggilam >Assignee: Caroline Zhou >Priority: Major > Labels: master, regionserver > Fix For: 3.0.0-alpha-1, 2.5.0 > > > As part of RS start up, RS reports for duty to Master . Master acknowledges > the request and adds it to the onlineServers list for further assigning any > regions to the RS > Once Master acknowledges the reportForDuty and sends back the response, RS > does a bunch of stuff like initializing replication sources etc before > becoming online. However, sometimes there could be an issue with initializing > replication sources when it is unable to connect to peer clusters because of > some kerberos configuration and there would be a delay of around 20 mins in > becoming online. > > Since master considers it online, it tries to assign regions and which fails > with ServerNotRunningYet exception, then the master tries to unassign which > again fails with the same exception leading the region to FAILED_CLOSE state. 
> > It would be good to have a check to see if the RS is ready to accept the > assignment requests before adding it to online servers list which would > account for any such delays as described above
[GitHub] [hbase-operator-tools] joshelser commented on pull request #86: HBASE-25874 [hbase-operator-tools] Add tool for identifying 'unknown s…
joshelser commented on pull request #86: URL: https://github.com/apache/hbase-operator-tools/pull/86#issuecomment-837194494 > Although meta is already online, master has not completed initialisation (because of namespace table region stuck on an unknown server), so it pushes back any client requests. Argh. The gift that keeps on giving.
[GitHub] [hbase] shahrs87 commented on a change in pull request #3244: HBASE-25869 WAL value compression
shahrs87 commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629711573 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java ## @@ -256,6 +278,26 @@ public void write(Cell cell) throws IOException { } } } + +private byte[] compressValue(Cell cell) throws IOException { + byte[] buffer = new byte[4096]; + ByteArrayOutputStream baos = new ByteArrayOutputStream(); + Deflater deflater = compression.getValueCompressor().getDeflater(); + deflater.setInput(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()); + boolean finished = false; + do { +int bytesOut = deflater.deflate(buffer); +if (bytesOut > 0) { + baos.write(buffer, 0, bytesOut); +} else { + bytesOut = deflater.deflate(buffer, 0, buffer.length, Deflater.SYNC_FLUSH); Review comment: Also, the javadoc for SYNC_FLUSH says the following: ``` In the case of FULL_FLUSH or SYNC_FLUSH, if the return value is len, the space available in output buffer b, this method should be invoked again with the same flush parameter and more output space. ``` https://docs.oracle.com/javase/8/docs/api/java/util/zip/Deflater.html#deflate-byte:A-int-int-int- @apurtell Do we need to change something regarding this comment?
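The two questions above (does the SYNC_FLUSH branch re-compress anything, and does the "return value == len" caveat matter?) can be explored with a standalone sketch. This is not the patch's code; class and method names here are hypothetical. It shows a deflate loop that always passes SYNC_FLUSH and re-invokes the call while the output buffer comes back full, which is the pattern the javadoc asks for:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DeflateSyncFlushDemo {
  // Compress 'value' with a (possibly reused) Deflater. SYNC_FLUSH forces all
  // pending input out without finishing the stream, so the same Deflater can
  // compress the next value afterwards. Nothing is compressed twice: deflate()
  // consumes input as it goes; later calls only drain buffered output.
  static byte[] compress(Deflater deflater, byte[] value) {
    deflater.setInput(value);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buffer = new byte[4096];
    int n;
    // Per the Deflater javadoc: if deflate(..., SYNC_FLUSH) returns the full
    // buffer length, invoke it again -- more flushed output may be pending.
    do {
      n = deflater.deflate(buffer, 0, buffer.length, Deflater.SYNC_FLUSH);
      baos.write(buffer, 0, n);
    } while (n == buffer.length);
    return baos.toByteArray();
  }

  // Decompress a sync-flushed stream; the caller must know the original length
  // (as the WAL does, from the cell's value length).
  static byte[] decompress(Inflater inflater, byte[] compressed, int expectedLen)
      throws DataFormatException {
    inflater.setInput(compressed);
    byte[] out = new byte[expectedLen];
    int off = 0;
    while (off < expectedLen) {
      off += inflater.inflate(out, off, expectedLen - off);
    }
    return out;
  }

  public static void main(String[] args) throws Exception {
    Deflater deflater = new Deflater();
    Inflater inflater = new Inflater();
    byte[] value = new byte[8192];
    Arrays.fill(value, (byte) 'x'); // highly compressible value
    byte[] compressed = compress(deflater, value);
    byte[] restored = decompress(inflater, compressed, value.length);
    System.out.println(compressed.length < value.length); // smaller
    System.out.println(Arrays.equals(value, restored));   // round-trips
  }
}
```

Using SYNC_FLUSH on every call (rather than only in an else branch) sidesteps both review questions, at the cost of a few extra flush-marker bytes per value.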
[jira] [Commented] (HBASE-25855) [hbase-thirdparty] Fix typo in jersey relocation path
[ https://issues.apache.org/jira/browse/HBASE-25855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342201#comment-17342201 ] Sean Busbey commented on HBASE-25855: - Ah, I see. 3.5.0 was the htrace replacement module. That's been pending me being able to pick it up again. With the open telemetry stuff landing in the master branch, the htrace artifact only needs to be used in active release lines that don't get a backport of the new tracing. That's okay; I'm happy to write up docs against 4.0.0 instead of 3.5.0. Folks can still make use of the 3.5.0 artifact now to do a drop-in mitigation of the CVEs introduced by our use of htrace, if we should get bogged down trying to get this release out for whatever reason. > [hbase-thirdparty] Fix typo in jersey relocation path > - > > Key: HBASE-25855 > URL: https://issues.apache.org/jira/browse/HBASE-25855 > Project: HBase > Issue Type: Task > Components: hbase-thirdparty >Affects Versions: thirdparty-3.4.1 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: thirdparty-4.0.0 > > > We shade to "jersery" instead of "jersey".
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629757222 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java ## @@ -241,10 +246,27 @@ public void write(Cell cell) throws IOException { compression.getDictionary(CompressionContext.DictionaryIndex.FAMILY)); PrivateCellUtil.compressQualifier(out, cell, compression.getDictionary(CompressionContext.DictionaryIndex.QUALIFIER)); - // Write timestamp, type and value as uncompressed. + // Write timestamp, type and value. StreamUtils.writeLong(out, cell.getTimestamp()); - out.write(cell.getTypeByte()); - PrivateCellUtil.writeValue(out, cell, cell.getValueLength()); + byte type = cell.getTypeByte(); + if (compression.getValueCompressor() != null && + cell.getValueLength() > VALUE_COMPRESS_THRESHOLD) { +// Try compressing the cell's value +byte[] compressedBytes = compressValue(cell); +// Only write the compressed value if we have achieved some space savings. +if (compressedBytes.length < cell.getValueLength()) { + // Set the high bit of type to indicate the value is compressed + out.write((byte)(type|0x80)); Review comment: > Jetty settled on a size threshold of 23 bytes. Thank you @ndimiduk. gzip and deflate are the same thing, essentially. Let's opt for the smaller threshold and see how it goes. Worst case, if the compressor produces output that is larger than the original, we just discard it and use the original, so that's not a problem. With a smaller threshold more values are eligible for compression, so there will be more time spent in compression, but presumably with a payoff in space savings, so that seems fine.
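The fallback described here -- compress speculatively, keep the result only if it is smaller, and record the choice in the high bit of the type byte -- can be sketched in isolation. This is a minimal illustration, not the patch's code; the class, constant, and the example type code are hypothetical:

```java
public class ValueCompressionFlag {
  // High bit of the on-wire type byte signals "value is compressed".
  static final int COMPRESSED_FLAG = 0x80;

  // Encoder side: prefer the compressed form only when it actually saved space.
  static byte encodeType(byte type, int rawLen, int compressedLen) {
    return compressedLen < rawLen ? (byte) (type | COMPRESSED_FLAG) : type;
  }

  // Decoder side: check the flag, then strip it to recover the real type byte.
  static boolean isCompressed(byte typeOnWire) {
    return (typeOnWire & COMPRESSED_FLAG) != 0;
  }

  static byte stripFlag(byte typeOnWire) {
    return (byte) (typeOnWire & 0x7F);
  }

  public static void main(String[] args) {
    byte put = 4; // hypothetical KeyValue type code
    byte wire = encodeType(put, 1000, 300);      // compression paid off
    System.out.println(isCompressed(wire));      // true
    System.out.println(stripFlag(wire) == put);  // true
    wire = encodeType(put, 20, 25);              // compressed form was larger
    System.out.println(isCompressed(wire));      // false
  }
}
```

Note this scheme assumes real type codes never use the high bit, which is why the decoder can strip it unconditionally with `& 0x7F`.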
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629757734 ## File path: hbase-protocol-shaded/src/main/protobuf/server/region/WAL.proto ## @@ -32,6 +32,7 @@ message WALHeader { optional bool has_tag_compression = 3; optional string writer_cls_name = 4; optional string cell_codec_cls_name = 5; + optional bool has_value_compression = 6; Review comment: Yes, we initialize the CompressionContext from the header. It has to be done up front. If we made value compression something we always do, then this field is not necessary.
[GitHub] [hbase] bharathv commented on a change in pull request #3244: HBASE-25869 WAL value compression
bharathv commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629772178 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java ## @@ -302,14 +343,28 @@ protected Cell parseCell() throws IOException { compression.getDictionary(CompressionContext.DictionaryIndex.QUALIFIER)); pos += elemLen; - // timestamp, type and value - int tsTypeValLen = length - pos; + // timestamp + long ts = StreamUtils.readLong(in); + pos = Bytes.putLong(backingArray, pos, ts); + // type and value + int typeValLen = length - pos; if (tagsLength > 0) { -tsTypeValLen = tsTypeValLen - tagsLength - KeyValue.TAGS_LENGTH_SIZE; +typeValLen = typeValLen - tagsLength - KeyValue.TAGS_LENGTH_SIZE; + } + // high bit of type byte is 1 if value is compressed + byte type = (byte)in.read(); + if ((type & 0x80) == 0x80) { Review comment: Ya, my gut feeling says it will be amortized in a mix of small and large values, and worst case, assuming every value is small, the overhead is not that much (since it's off by default anyway, people who choose to turn it on will experiment to see if it actually works for them). So, for that, is it worth complicating the code? Just throwing this idea out here. 
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CompressionContext.java ## @@ -34,21 +36,49 @@ @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.PHOENIX}) public class CompressionContext { - static final String ENABLE_WAL_TAGS_COMPRESSION = + public static final String ENABLE_WAL_TAGS_COMPRESSION = "hbase.regionserver.wal.tags.enablecompression"; + public static final String ENABLE_WAL_VALUE_COMPRESSION = + "hbase.regionserver.wal.value.enablecompression"; + public enum DictionaryIndex { REGION, TABLE, FAMILY, QUALIFIER, ROW } + static class ValueCompressor { +final Deflater deflater; +final Inflater inflater; + +public ValueCompressor() { + deflater = new Deflater(); + inflater = new Inflater(); Review comment: I'm talking about performance, not buffer overwrites. Following is the code path: [FS/Async]WAL#doAppend() -> writer.append() -> cellEncoder.write() -> WALCellCodec.compressValue() -> deflater.deflate() All the deflate() calls are synchronized on the above `zsRef`. My question was: does it affect the throughput of WAL appends?
[GitHub] [hbase] Apache-HBase commented on pull request #3236: HBASE-25859 Reference class incorrectly parses the protobuf magic marker
Apache-HBase commented on pull request #3236: URL: https://github.com/apache/hbase/pull/3236#issuecomment-837164125 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 30s | master passed | | +1 :green_heart: | compile | 1m 14s | master passed | | +1 :green_heart: | shadedjars | 8m 12s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 43s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 16s | the patch passed | | +1 :green_heart: | compile | 1m 12s | the patch passed | | +1 :green_heart: | javac | 1m 12s | the patch passed | | +1 :green_heart: | shadedjars | 8m 10s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 145m 3s | hbase-server in the patch passed. 
| | | | 176m 25s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3236/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3236 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ff49c0761b51 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8c2332d465 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3236/2/testReport/ | | Max. process+thread count | 3998 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3236/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack merged pull request #3236: HBASE-25859 Reference class incorrectly parses the protobuf magic marker
saintstack merged pull request #3236: URL: https://github.com/apache/hbase/pull/3236
[GitHub] [hbase-thirdparty] ndimiduk commented on pull request #51: HBASE-25863 Shade javax.ws.rs package for use with shaded Jersey
ndimiduk commented on pull request #51: URL: https://github.com/apache/hbase-thirdparty/pull/51#issuecomment-837239785 > Since the relocated jaxrs provider is intimately tied to our use of jersey, can we fold this into our packaged relocation of jersey in the `hbase-shaded-jersey` module? I chose intentionally not to do that, because we don't currently use this dependency in the `hbase-server` classpath. It is only used by `hbase-rest`. I believe we want to finish the purge of our use of jackson, which means eventually removing this jar and replacing it with an EntityWriter based on Gson.
[jira] [Commented] (HBASE-25032) Wait for region server to become online before adding it to online servers in Master
[ https://issues.apache.org/jira/browse/HBASE-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342130#comment-17342130 ] Andrew Kyle Purtell commented on HBASE-25032: - Good, so no release is affected and we can proceed as normal. > Wait for region server to become online before adding it to online servers in > Master > > > Key: HBASE-25032 > URL: https://issues.apache.org/jira/browse/HBASE-25032 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Guggilam >Assignee: Caroline Zhou >Priority: Major > Labels: master, regionserver > Fix For: 3.0.0-alpha-1, 2.5.0 > > > As part of RS start up, RS reports for duty to Master . Master acknowledges > the request and adds it to the onlineServers list for further assigning any > regions to the RS > Once Master acknowledges the reportForDuty and sends back the response, RS > does a bunch of stuff like initializing replication sources etc before > becoming online. However, sometimes there could be an issue with initializing > replication sources when it is unable to connect to peer clusters because of > some kerberos configuration and there would be a delay of around 20 mins in > becoming online. > > Since master considers it online, it tries to assign regions and which fails > with ServerNotRunningYet exception, then the master tries to unassign which > again fails with the same exception leading the region to FAILED_CLOSE state. > > It would be good to have a check to see if the RS is ready to accept the > assignment requests before adding it to online servers list which would > account for any such delays as described above -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25855) Fix typo in jersey relocation path
[ https://issues.apache.org/jira/browse/HBASE-25855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-25855: - Fix Version/s: (was: thirdparty-3.5.1) thirdparty-4.0.0 > Fix typo in jersey relocation path > -- > > Key: HBASE-25855 > URL: https://issues.apache.org/jira/browse/HBASE-25855 > Project: HBase > Issue Type: Task > Components: hbase-thirdparty >Affects Versions: thirdparty-3.4.1 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: thirdparty-4.0.0 > > > We shade to "jersery" instead of "jersey".
[jira] [Updated] (HBASE-25855) [hbase-thirdparty] Fix typo in jersey relocation path
[ https://issues.apache.org/jira/browse/HBASE-25855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-25855: - Summary: [hbase-thirdparty] Fix typo in jersey relocation path (was: Fix typo in jersey relocation path) > [hbase-thirdparty] Fix typo in jersey relocation path > - > > Key: HBASE-25855 > URL: https://issues.apache.org/jira/browse/HBASE-25855 > Project: HBase > Issue Type: Task > Components: hbase-thirdparty >Affects Versions: thirdparty-3.4.1 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: thirdparty-4.0.0 > > > We shade to "jersery" instead of "jersey".
[GitHub] [hbase-thirdparty] Apache-HBase commented on pull request #51: HBASE-25863 Shade javax.ws.rs package for use with shaded Jersey
Apache-HBase commented on pull request #51: URL: https://github.com/apache/hbase-thirdparty/pull/51#issuecomment-837377636 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 6s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 0m 23s | master passed | | +1 :green_heart: | compile | 0m 15s | master passed | | +1 :green_heart: | javadoc | 0m 9s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 4s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 37s | the patch passed | | +1 :green_heart: | compile | 0m 19s | the patch passed | | +1 :green_heart: | javac | 0m 19s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 13s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 5s | hbase-shaded-jersey in the patch passed. | | +1 :green_heart: | unit | 0m 4s | hbase-shaded-jackson-jaxrs-json-provider in the patch passed. | | +1 :green_heart: | unit | 0m 29s | root in the patch passed. | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. 
| | | | 3m 44s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-Thirdparty-PreCommit/job/PR-51/3/artifact/yetus-precommit-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase-thirdparty/pull/51 | | Optional Tests | dupname asflicense javac javadoc unit xml compile | | uname | Linux f4d1f8dc973d 5.4.0-1047-aws #49~18.04.1-Ubuntu SMP Wed Apr 28 23:08:58 UTC 2021 x86_64 GNU/Linux | | Build tool | maven | | git revision | master / 1d73a2e | | Default Java | Oracle Corporation-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-Thirdparty-PreCommit/job/PR-51/3/testReport/ | | Max. process+thread count | 396 (vs. ulimit of 1000) | | modules | C: hbase-shaded-jersey hbase-shaded-jackson-jaxrs-json-provider . U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-Thirdparty-PreCommit/job/PR-51/3/console | | versions | git=2.20.1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] bharathv commented on a change in pull request #3244: HBASE-25869 WAL value compression
bharathv commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629760196 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CompressionContext.java ## @@ -34,21 +36,49 @@ @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.PHOENIX}) public class CompressionContext { - static final String ENABLE_WAL_TAGS_COMPRESSION = + public static final String ENABLE_WAL_TAGS_COMPRESSION = "hbase.regionserver.wal.tags.enablecompression"; + public static final String ENABLE_WAL_VALUE_COMPRESSION = + "hbase.regionserver.wal.value.enablecompression"; + public enum DictionaryIndex { REGION, TABLE, FAMILY, QUALIFIER, ROW } + static class ValueCompressor { +final Deflater deflater; +final Inflater inflater; + +public ValueCompressor() { + deflater = new Deflater(); + inflater = new Inflater(); Review comment: All the inflate/deflate methods are synchronized on a single object (for example, see below). Is that a bottleneck concern, since we are sharing instances of them? Just curious.
```
public int inflate(byte[] b, int off, int len) throws DataFormatException {
  if (b == null) {
    throw new NullPointerException();
  }
  if (off < 0 || len < 0 || off > b.length - len) {
    throw new ArrayIndexOutOfBoundsException();
  }
  synchronized (zsRef) { // <===
    ensureOpen();
    int thisLen = this.len;
    int n = inflateBytes(zsRef.address(), b, off, len);
    bytesWritten += n;
    bytesRead += (thisLen - this.len);
    return n;
  }
}
``` ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java ## @@ -302,14 +343,28 @@ protected Cell parseCell() throws IOException { compression.getDictionary(CompressionContext.DictionaryIndex.QUALIFIER)); pos += elemLen; - // timestamp, type and value - int tsTypeValLen = length - pos; + // timestamp + long ts = StreamUtils.readLong(in); + pos = Bytes.putLong(backingArray, pos, ts); + // type and value + int typeValLen = length - pos; if (tagsLength > 0) { -tsTypeValLen = tsTypeValLen - tagsLength - KeyValue.TAGS_LENGTH_SIZE; +typeValLen = typeValLen - tagsLength - KeyValue.TAGS_LENGTH_SIZE; + } + // high bit of type byte is 1 if value is compressed + byte type = (byte)in.read(); + if ((type & 0x80) == 0x80) { Review comment: ... My original comment. Why this, and why not:
```
if (header.has_value_compression) {
  ... readCompressed
} else {
  ...
}
```
Value compression is a property of WALCellCodec, which is created as part of the WAL reader/writer at init. Why do we have to include this compression bit as part of every Cell type if we already have it as part of the header (which I see you added in this patch)? Rather than doing this per cell, you could just do it once at codec init if has_value_compression == true. Is it possible that some KVs in the same WAL are compressed and some aren't? ... The above was my original comment before reading the full patch; now I see the code about VALUE_COMPRESS_THRESHOLD. I believe this is for handling the small cell values, right?
Read somewhere that zlib has 18 bytes of overhead (header, trailer and such; don't quote me on this, I need to read the original RFC), so even with a million entries the size increase is not much, and it will be amortized by larger values anyway. On the flip side, it makes the code ugly with special type bytes. Isn't it better to just zlib-compress all the values? WDYT.
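The zlib-overhead point above is easy to sanity-check outside HBase. Below is a minimal sketch using `java.util.zip.Deflater` directly (not HBase code, and it does not reproduce the patch's actual threshold logic): a tiny value grows under deflate because of the zlib header and trailer, while a larger compressible value shrinks.

```java
import java.util.zip.Deflater;

public class SmallValueOverhead {
    // Deflate a value with the zlib wrapper at BEST_SPEED and return the
    // compressed size. Assumes the output buffer is large enough, which
    // holds for inputs up to ~32KB given the stored-block worst case.
    static int deflatedSize(byte[] value) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(value);
        deflater.finish();
        byte[] out = new byte[value.length + 64];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(out, 0, out.length);
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        byte[] tiny = "v1".getBytes();    // 2 bytes: grows after deflate
        byte[] large = new byte[10_000];  // highly compressible: shrinks
        System.out.println("tiny:  " + tiny.length + " -> " + deflatedSize(tiny));
        System.out.println("large: " + large.length + " -> " + deflatedSize(large));
    }
}
```

This is the trade-off behind a threshold-style cutoff: below some value size, storing bytes raw is strictly smaller than deflating them.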
[jira] [Commented] (HBASE-25032) Wait for region server to become online before adding it to online servers in Master
[ https://issues.apache.org/jira/browse/HBASE-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342205#comment-17342205 ] Bharath Vissapragada commented on HBASE-25032: -- Oops, this patch was committed on 25th March and 2.3.5RC1 came out on 26th March, so I got confused and ended up adding the wrong release version. Is there a better way to avoid this race other than keeping track of emails in dev@? > Wait for region server to become online before adding it to online servers in > Master > > > Key: HBASE-25032 > URL: https://issues.apache.org/jira/browse/HBASE-25032 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Guggilam >Assignee: Caroline Zhou >Priority: Major > Labels: master, regionserver > Fix For: 3.0.0-alpha-1, 2.5.0 > > > As part of RS start up, RS reports for duty to Master. Master acknowledges > the request and adds it to the onlineServers list for further assigning any > regions to the RS. > Once Master acknowledges the reportForDuty and sends back the response, RS > does a bunch of stuff like initializing replication sources etc. before > becoming online. However, sometimes there could be an issue with initializing > replication sources when it is unable to connect to peer clusters because of > some kerberos configuration, and there would be a delay of around 20 mins in > becoming online. > > Since master considers it online, it tries to assign regions, which fails > with ServerNotRunningYet exception; then the master tries to unassign, which > again fails with the same exception, leading the region to FAILED_CLOSE state. > > It would be good to have a check to see if the RS is ready to accept > assignment requests before adding it to the online servers list, which would > account for any such delays as described above
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629763436 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CompressionContext.java ## @@ -34,21 +36,49 @@ @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.PHOENIX}) public class CompressionContext { - static final String ENABLE_WAL_TAGS_COMPRESSION = + public static final String ENABLE_WAL_TAGS_COMPRESSION = "hbase.regionserver.wal.tags.enablecompression"; + public static final String ENABLE_WAL_VALUE_COMPRESSION = + "hbase.regionserver.wal.value.enablecompression"; + public enum DictionaryIndex { REGION, TABLE, FAMILY, QUALIFIER, ROW } + static class ValueCompressor { +final Deflater deflater; +final Inflater inflater; + +public ValueCompressor() { + deflater = new Deflater(); + inflater = new Inflater(); Review comment: There is no sharing. One CompressionContext instance per WAL writer. If there is sharing, WAL compression would already be thoroughly broken, that is bad bad bad, dictionaries would be all messed up because they would be updated with output going to different files (think about it, it cannot work) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25862) Improve the Configuration File Descriptions accuracy
[ https://issues.apache.org/jira/browse/HBASE-25862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342088#comment-17342088 ] Nick Dimiduk commented on HBASE-25862: -- Hi [~impself]. The project policy is to have unrelated changes be separate Jira issues with separate commits. The reason being that if some change causes problems and must be reverted, it can be removed from the product without also removing other unrelated improvements. In this case, I would recommend two Jira issues. One might be "update documentation for log4j2.xml", where you would update section headers and add a comment saying something like "As of version X.Y.Z, HBase has upgraded to Log4j2, and so the configuration file name and format has changed." You could have a second Jira called something like "fix various formatting issues in the book," where you take a pass over the whole document and fix up formatting problems like this bulleted list you have identified. These two would be nice contributions! > Improve the Configuration File Descriptions accuracy > > > Key: HBASE-25862 > URL: https://issues.apache.org/jira/browse/HBASE-25862 > Project: HBase > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.0-alpha-1 >Reporter: Che Xun >Assignee: Che Xun >Priority: Minor > Fix For: 3.0.0-alpha-1 > > Attachments: image-2021-05-07-23-12-11-934.png > > > In Apache HBase ™ Reference Guide > *Need to modify part1:* > 7.Default Configuration ->7.4. log4j.properties > *explanation:* > we need to update to 7.4 log4j2.xml > *Need to modify part2:* > In the first paragraph of the chapter 8.1,the details list is not displayed > correctly,as show below: > > *!image-2021-05-07-23-12-11-934.png!* -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #3242: HBASE-25867 Extra doc around ITBLL
Apache-HBase commented on pull request #3242: URL: https://github.com/apache/hbase/pull/3242#issuecomment-837157621 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 4m 47s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.4 Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 53s | branch-2.4 passed | | +1 :green_heart: | compile | 1m 30s | branch-2.4 passed | | +1 :green_heart: | shadedjars | 6m 24s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 51s | branch-2.4 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 27s | the patch passed | | +1 :green_heart: | compile | 1m 23s | the patch passed | | +1 :green_heart: | javac | 1m 23s | the patch passed | | +1 :green_heart: | shadedjars | 5m 58s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 51s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 142m 54s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 1m 14s | hbase-it in the patch passed. 
| | | | 176m 27s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3242 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux c987168d3934 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.4 / e4dc6eaa76 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/2/testReport/ | | Max. process+thread count | 3977 (vs. ulimit of 12500) | | modules | C: hbase-server hbase-it U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (HBASE-25032) Wait for region server to become online before adding it to online servers in Master
[ https://issues.apache.org/jira/browse/HBASE-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342081#comment-17342081 ] Huaxiang Sun edited comment on HBASE-25032 at 5/10/21, 7:01 PM: Just saw this. This Jira is not in 2.3.5, so no need to do 2.3.6. [~apurtell] was (Author: huaxiangsun): Just saw this. This Jira is not in 2.3.5 yet, so no need to do 2.3.6. [~apurtell]
[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #86: HBASE-25874 [hbase-operator-tools]Add tool for identifying 'unknown s…
wchevreuil commented on a change in pull request #86: URL: https://github.com/apache/hbase-operator-tools/pull/86#discussion_r629626179 ## File path: hbase-tools/src/main/java/org/apache/hbase/RegionsOnUnknownServersRecoverer.java ## @@ -0,0 +1,101 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hbase; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.conf.Configured; +import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.apache.hadoop.util.Tool; +import org.apache.hadoop.util.ToolRunner; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.BufferedReader; +import java.io.File; +import java.io.FileReader; +import java.util.HashSet; +import java.util.Set; + +/** + * Tool for identifying Unknown Servers from master logs and schedule SCPs for each of those using + * HBCK2 'scheduleRecoveries' option. This is useful for clusters running hbase versions lower than + * 2.2.7, 2.3.5 and 2.4.7. For any of these versions or higher, use HBCK2 'recoverUnknown' option. 
+ */ +public class RegionsOnUnknownServersRecoverer extends Configured implements Tool { + + private static final Logger LOG = + LoggerFactory.getLogger(RegionsOnUnknownServersRecoverer.class.getName()); + + private static final String CATALOG_JANITOR = "CatalogJanitor: hole="; + + private static final String UNKNOWN_SERVER = "unknown_server="; + + private Configuration conf; + + private Set<String> unknownServers = new HashSet<>(); + + public RegionsOnUnknownServersRecoverer(Configuration conf) { + this.conf = conf; + } + + @Override + public int run(String[] args) throws Exception { + if (args.length != 1) { + LOG.error("Wrong number of arguments. " + + "Arguments are: "); + return 1; + } + BufferedReader reader = null; + try (Connection conn = ConnectionFactory.createConnection(conf)) { + reader = new BufferedReader(new FileReader(new File(args[0]))); Review comment: It's actually closed by `reader.close`, as it's the IS wrapped in there, no?
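The log-scanning idea in the patch above can be approximated standalone as follows. This is not the tool's code; in particular, what terminates the server name after the `unknown_server=` marker in CatalogJanitor output is an assumption here (a `/` or whitespace).

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

public class UnknownServerLogScan {
    private static final String UNKNOWN_SERVER = "unknown_server=";

    // Collect every server name that follows an "unknown_server=" marker in
    // the given log stream. Assumes the name runs until '/' or whitespace.
    static Set<String> scan(BufferedReader reader) throws IOException {
        Set<String> servers = new HashSet<>();
        String line;
        while ((line = reader.readLine()) != null) {
            int idx = line.indexOf(UNKNOWN_SERVER);
            while (idx >= 0) {
                int start = idx + UNKNOWN_SERVER.length();
                int end = start;
                while (end < line.length() && line.charAt(end) != '/'
                    && !Character.isWhitespace(line.charAt(end))) {
                    end++;
                }
                if (end > start) {
                    servers.add(line.substring(start, end));
                }
                idx = line.indexOf(UNKNOWN_SERVER, end);
            }
        }
        return servers;
    }
}
```

The collected set would then be fed to HBCK2 `scheduleRecoveries`, one SCP per unknown server, which is exactly why the tool only needs names and not region detail.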
[jira] [Commented] (HBASE-25032) Wait for region server to become online before adding it to online servers in Master
[ https://issues.apache.org/jira/browse/HBASE-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342162#comment-17342162 ] Huaxiang Sun commented on HBASE-25032: -- Yeah, no new release is needed for 2.3.*.
[GitHub] [hbase] ndimiduk commented on a change in pull request #3244: HBASE-25869 WAL value compression
ndimiduk commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629712475 ## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java ## @@ -230,6 +230,11 @@ public static long getKeyDataStructureSize(int rlength, int flength, int qlength DeleteColumn((byte)12), DeleteFamily((byte)14), +// Effective maximum is 127 (Byte.MAX_VALUE). We set the high order bit of the +// type byte in the WAL codecs to indicate, in a backwards compatible way, if the +// value is compressed there. +EffectiveMaximum((byte)Byte.MAX_VALUE), + // Maximum is used when searching; you look from maximum on down. Maximum((byte)255); Review comment: Since this compression bit mask effectively cuts the `Type` value space in half, why introduce a new `EffectiveMaximum`? It seems to me that you can instead replace the value of `Maximum` with the new reduced value of `127`. ## File path: hbase-protocol-shaded/src/main/protobuf/server/region/WAL.proto ## @@ -32,6 +32,7 @@ message WALHeader { optional bool has_tag_compression = 3; optional string writer_cls_name = 4; optional string cell_codec_cls_name = 5; + optional bool has_value_compression = 6; Review comment: I'm surprised to see that we already have WAL compression, but don't explicitly track it in the header. Why is it needed to add a flag to the header at all? Is this so that a decompressor can be initialized ahead of reading the first compressed `Type` code? Is it better to extend the header than to initialize the decompressor dynamically? Is it reasonable to assume that compression will be enabled by default, and so the decompressor should be created by default and is simply not used if no compressed values are read? (I assume that the `Type` code is written without compression, and thus a decompressor is not required in order to read it back.) 
## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java ## @@ -230,6 +230,11 @@ public static long getKeyDataStructureSize(int rlength, int flength, int qlength DeleteColumn((byte)12), DeleteFamily((byte)14), +// Effective maximum is 127 (Byte.MAX_VALUE). We set the high order bit of the +// type byte in the WAL codecs to indicate, in a backwards compatible way, if the +// value is compressed there. +EffectiveMaximum((byte)Byte.MAX_VALUE), Review comment: I don't find this approach ugly as such. You're using the highest bit as an is-compressed flag, and otherwise retaining the meaning of all existing codes. If I understand you correctly, an uncompressed `Put` has a `Type.code` of `0b0000 0100` while a compressed `Put` has a `Type.code` of `0b1000 0100`. This is fine, so long as we're okay with cutting the size of the `Type` value space in half. For what it's worth, it might seem like less of an "ugly trick" and more of an "intentional, explicit design" if you were to instead implement it as one of the following:
* Add explicit `Type` entries for `CompressedPut`, `CompressedDelete`, etc., with their associated `code` values also explicitly listed.
* Fold the representation of compression into the `Type` implementation itself by having each `Type` carry two `code` values (one for compressed, one for uncompressed).

These are simply suggestions; each carries implementation ramifications that I cannot estimate without significantly more code reading. --- Returning to this comment after reading the bit where the `Type` code is written: you might consider adding to this Enum an instance method like `int appendTo(OutputStream out)` that encapsulates the details of the `code` encoding.
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CompressionContext.java ## @@ -34,21 +36,49 @@ @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.PHOENIX}) public class CompressionContext { - static final String ENABLE_WAL_TAGS_COMPRESSION = + public static final String ENABLE_WAL_TAGS_COMPRESSION = "hbase.regionserver.wal.tags.enablecompression"; + public static final String ENABLE_WAL_VALUE_COMPRESSION = + "hbase.regionserver.wal.value.enablecompression"; + public enum DictionaryIndex { REGION, TABLE, FAMILY, QUALIFIER, ROW } + static class ValueCompressor { Review comment: nit: it's nice for all classes to have a javadoc header. ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CompressionContext.java ## @@ -34,21 +36,49 @@ @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.PHOENIX}) public class CompressionContext { - static final String ENABLE_WAL_TAGS_COMPRESSION = + public static final String ENABLE_WAL_TAGS_COMPRESSION = Review
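The high-bit scheme debated in this review thread can be stated concretely. A small illustration of masking and unmasking the type byte — the `0x80` mask and the `Put` code of 4 follow the discussion; this is a sketch, not the patch:

```java
public class TypeByteFlag {
    // High-order bit of the WAL type byte marks a compressed value,
    // per the scheme discussed in the review. Original type codes
    // stay <= 127, so the bit is free for this use.
    static final int VALUE_COMPRESSED = 0x80;

    static byte markCompressed(byte typeCode) {
        return (byte) (typeCode | VALUE_COMPRESSED);
    }

    static boolean isValueCompressed(byte typeCode) {
        return (typeCode & VALUE_COMPRESSED) == VALUE_COMPRESSED;
    }

    // Recover the original type code by clearing the flag bit.
    static byte originalType(byte typeCode) {
        return (byte) (typeCode & 0x7F);
    }
}
```

An old reader that never checks the bit still sees a valid-looking byte, which is the backwards-compatibility argument; the cost, as noted in the review, is halving the `Type` value space.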
[GitHub] [hbase] saintstack commented on a change in pull request #3249: HBASE-25861 Correct the usage of Configuration#addDeprecation
saintstack commented on a change in pull request #3249: URL: https://github.com/apache/hbase/pull/3249#discussion_r629734904 ## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java ## @@ -1601,6 +1601,89 @@ */ public static final int BATCH_ROWS_THRESHOLD_DEFAULT = 5000; + /** + * @deprecated since 2.4.0 and will be removed in 4.0.0. + * Use {@link HConstants#UNSAFE_RS_HOSTNAME_KEY} instead. + * @see <a href="https://issues.apache.org/jira/browse/HBASE-24667">HBASE-24667</a> + */ + @Deprecated + @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG) + public static final String RS_HOSTNAME_KEY = "hbase.regionserver.hostname"; Review comment: I agree w/ the Nick comment. This file of all the constants is a bit of an anti-pattern... it should have gone away long ago. Was hoping it would shrivel, but it hasn't. Don't pour water on it (smile).
[GitHub] [hbase] Apache-HBase commented on pull request #3250: HBASE-25875 RegionServer failed to start due to IllegalThreadStateException in AuthenticationTokenSecretManager.start
Apache-HBase commented on pull request #3250: URL: https://github.com/apache/hbase/pull/3250#issuecomment-837463452
[GitHub] [hbase-thirdparty] busbey commented on a change in pull request #51: HBASE-25863 Shade javax.ws.rs package for use with shaded Jersey
busbey commented on a change in pull request #51: URL: https://github.com/apache/hbase-thirdparty/pull/51#discussion_r629751088 ## File path: hbase-shaded-jackson-jaxrs-json-provider/pom.xml ## @@ -0,0 +1,110 @@ + <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> + <modelVersion>4.0.0</modelVersion> + <parent> + <groupId>org.apache.hbase.thirdparty</groupId> + <artifactId>hbase-thirdparty</artifactId> + <version>3.5.1-SNAPSHOT</version> Review comment: that is confusing. I'll take my release note concerns to that issue then. :)
[GitHub] [hbase] apurtell edited a comment on pull request #3244: HBASE-25869 WAL value compression
apurtell edited a comment on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837518016 > Is it reasonable to always use the same compressor all the time for WALs? It strikes me that operators make an explicit choice about the compression used in their tables, so maybe they would expect that the compressor used by WALs would follow suit. I know it's not that simple, given that WALs are per host and not per table, so maybe it's something to consider in the future. If this is to be tied to table settings, we would have to look up settings per table/CF whenever processing an edit. WALs have to be readable on their own without regard to schema changes, which may happen at any point. -1, I don't want to do this. It would be too expensive, both in time for all of those lookups and in space for each compressed edit to specify algorithm and parameters. I see the goal here as making WALs not so terribly inefficient to keep around. One pass of a standard compression over the values can achieve that, especially when we drop values that don't compress. Speed is important, so BEST_SPEED is the right choice of compression level, always. Enabling or disabling compression is enough configuration to give to users; this is what we have given them up to now, and now they can turn value compression on or off too. Arguably the zlib compression level here is not a user-serviceable setting, but I could be convinced to at least make it a site configuration setting.
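The "BEST_SPEED by default, maybe site-configurable" position above could look roughly like this. The property key here is hypothetical (the comment only floats the idea of a site setting; it does not define a key), and plain `Properties` stands in for Hadoop's `Configuration` to keep the sketch self-contained:

```java
import java.util.Properties;
import java.util.zip.Deflater;

public class WalValueCompressionLevel {
    // Hypothetical site property name, not an actual HBase key.
    static final String LEVEL_KEY = "hbase.regionserver.wal.value.compression.level";

    // Default to BEST_SPEED (level 1); an operator may override via site config.
    static int resolveLevel(Properties siteConf) {
        String configured = siteConf.getProperty(LEVEL_KEY);
        return configured == null ? Deflater.BEST_SPEED : Integer.parseInt(configured);
    }

    static Deflater createValueDeflater(Properties siteConf) {
        return new Deflater(resolveLevel(siteConf));
    }
}
```

Keeping the level out of the per-edit format matters for the space argument: the WAL header (or site config) fixes it once, instead of every compressed cell carrying algorithm parameters.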
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629761476 ## File path: hbase-protocol-shaded/src/main/protobuf/server/region/WAL.proto ## @@ -32,6 +32,7 @@ message WALHeader { optional bool has_tag_compression = 3; optional string writer_cls_name = 4; optional string cell_codec_cls_name = 5; + optional bool has_value_compression = 6; Review comment: > I'm surprised to see that we already have WAL compression, but don't explicitly track it in the header. Huh, I just noticed this too. Something to correct as a followup.
[jira] [Created] (HBASE-25876) Add retry if we fail to read all bytes of the protobuf magic marker
Michael Stack created HBASE-25876: - Summary: Add retry if we fail to read all bytes of the protobuf magic marker Key: HBASE-25876 URL: https://issues.apache.org/jira/browse/HBASE-25876 Project: HBase Issue Type: Sub-task Components: io Reporter: Michael Stack The parent issue fixes an instance where we try once to read protobuf magic marker bytes rather than retry till we have enough. This subtask applies the same trick in all cases where we could run into this issue.
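The fix described — retrying short reads until the full marker is in hand — is the standard read-fully loop: `InputStream.read` may legally return fewer bytes than requested, so a single call is not enough. A generic sketch (not the HBase patch itself):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class MagicReader {
    // Read exactly len bytes from the stream, retrying short reads.
    // A single read() call may return fewer bytes than requested even
    // when more data is coming, so we loop until the buffer is full.
    static byte[] readFully(InputStream in, int len) throws IOException {
        byte[] buf = new byte[len];
        int off = 0;
        while (off < len) {
            int n = in.read(buf, off, len - off);
            if (n < 0) {
                throw new EOFException("EOF after " + off + " of " + len + " bytes");
            }
            off += n;
        }
        return buf;
    }
}
```

With this helper, a magic-marker check compares the fully read bytes against the expected marker instead of trusting the count from one read.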
[GitHub] [hbase-thirdparty] ndimiduk commented on a change in pull request #51: HBASE-25863 Shade javax.ws.rs package for use with shaded Jersey
ndimiduk commented on a change in pull request #51: URL: https://github.com/apache/hbase-thirdparty/pull/51#discussion_r629642048 ## File path: hbase-shaded-jackson-jaxrs-json-provider/pom.xml ## @@ -0,0 +1,110 @@ + <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> + <modelVersion>4.0.0</modelVersion> + <parent> + <groupId>org.apache.hbase.thirdparty</groupId> + <artifactId>hbase-thirdparty</artifactId> + <version>3.5.1-SNAPSHOT</version> Review comment: I intended to handle the artifact version as part of a separate commit process, in part because I haven't done any releases out of this repository and I don't know the current practices. Since you pointed this out specifically, we can discuss here. Because this is adding a new interface without removing anything, my thinking is this would result in a bump of the minor version number. I don't remove the existing `javax.ws.rs` as it was never provided by this jar directly.
[GitHub] [hbase-thirdparty] busbey commented on pull request #51: HBASE-25863 Shade javax.ws.rs package for use with shaded Jersey
busbey commented on pull request #51: URL: https://github.com/apache/hbase-thirdparty/pull/51#issuecomment-837242913 > > Since the relocated jaxrs provider is intimately tied to our use of jersey, can we fold this into our packaged relocation of jersey in the `hbase-shaded-jersey` module? > > I chose intentionally not to do that, because we don't currently use this dependency in the `hbase-server` classpath. It is only used by `hbase-rest`. I believe we want to finish the purge of our use of jackson, which means eventually removing this jar and replacing it with an EntityWriter based on Gson. Please put a comment to this end in the module pom -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25855) Fix typo in jersey relocation path
[ https://issues.apache.org/jira/browse/HBASE-25855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342148#comment-17342148 ] Nick Dimiduk commented on HBASE-25855: -- It appears to me that hbase-core has not moved onto 3.5.x (master is at 3.4.1). By making this change and bumping the version to 4.0.0, it seems we will have skipped 3.5.x releases entirely. > Fix typo in jersey relocation path > -- > > Key: HBASE-25855 > URL: https://issues.apache.org/jira/browse/HBASE-25855 > Project: HBase > Issue Type: Task > Components: hbase-thirdparty >Affects Versions: thirdparty-3.4.1 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: thirdparty-3.5.1 > > > We shade to "jersery" instead of "jersey". -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack commented on a change in pull request #3248: HBASE-25852 Move all the intialization work of LoadBalancer implement…
saintstack commented on a change in pull request #3248: URL: https://github.com/apache/hbase/pull/3248#discussion_r629719036 ## File path: hbase-balancer/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java ## @@ -59,10 +59,11 @@ */ @Deprecated String HBASE_RSGROUP_LOADBALANCER_CLASS = "hbase.rsgroup.grouploadbalancer.class"; + /** * Set the current cluster status. This allows a LoadBalancer to map host name to a server */ - void setClusterMetrics(ClusterMetrics st); + void updateClusterMetrics(ClusterMetrics metrics); Review comment: We change the method name because the method is now being used differently? Before we called 'set' once? But now we 'update' continuously during operation? Or was it just always incorrectly named? ## File path: hbase-balancer/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java ## @@ -53,6 +52,10 @@ * The base class for load balancers. It provides the functions used by * {@code AssignmentManager} to assign regions in the edge cases. It doesn't provide an * implementation of the actual balancing algorithm. + * + * Since 3.0.0, all the balancers will be wrapped inside a {@code RSGroupBasedLoadBalancer}, which will + * be in charge of the synchronization of balancing and configuration changes, so we do not need to + * synchronize by ourselves. Review comment: Interesting. Wondering what the rationale for this is (perhaps it comes later in this patch) ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java ## @@ -400,6 +387,12 @@ BalanceAction nextAction(BalancerClusterState cluster) { .generate(cluster); } + @RestrictedApi(explanation = "Should only be called in tests", link = "", Review comment: Instead of @VisibleForTests? (I like this annotation) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk commented on pull request #3063: HBASE-25651: NORMALIZER_TARGET_REGION_SIZE needs a unit in its name
ndimiduk commented on pull request #3063: URL: https://github.com/apache/hbase/pull/3063#issuecomment-837478253 Hi @rahulLiving , I noticed you closed this PR. Are you planning to take this one in a different direction? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase-thirdparty] ndimiduk commented on a change in pull request #51: HBASE-25863 Shade javax.ws.rs package for use with shaded Jersey
ndimiduk commented on a change in pull request #51: URL: https://github.com/apache/hbase-thirdparty/pull/51#discussion_r629746479 ## File path: hbase-shaded-jackson-jaxrs-json-provider/pom.xml ## @@ -0,0 +1,110 @@ + +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> + +<modelVersion>4.0.0</modelVersion> + +<parent> +<groupId>org.apache.hbase.thirdparty</groupId> +<artifactId>hbase-thirdparty</artifactId> +<version>3.5.1-SNAPSHOT</version> Review comment: Landing HBASE-25855 seems to have been a noop in that I didn't find any use of the typo-ed symbol in hbase-core. hbase-operator-tools and hbase-connectors are still on `hbase-thirdparty.version=2.2.1` and don't make use of hbase-shaded-jersey at all. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] apurtell edited a comment on pull request #3244: HBASE-25869 WAL value compression
apurtell edited a comment on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837512098 > There is a potential performance improvement here. We could create a Deflater/Inflater pair per column family. This idea didn't pan out. It makes compression in my test case slightly worse. Let me collect some microbenchmarks for the Deflate compression level options of DEFAULT, BEST_SPEED, and BEST_COMPRESSION with the latest code and present them here. Basically, BEST_COMPRESSION is twice as slow on average as DEFAULT for marginal gains, and BEST_SPEED is twice as fast as DEFAULT but does not compress as well; still, it compresses well enough, and speed counts here, so BEST_SPEED is best. (However, I collected these measures with a dictionary per cf, so I need to redo them now that I've dropped the idea.) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] apurtell commented on pull request #3244: HBASE-25869 WAL value compression
apurtell commented on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837512098 bq. There is a potential performance improvement here. We could create a Deflater/Inflater pair per column family. This idea didn't pan out. It makes compression in my test case slightly worse. Let me collect some microbenchmarks for the Deflate compression level options of DEFAULT, BEST_SPEED, and BEST_COMPRESSION with the latest code and present them here. Basically, BEST_COMPRESSION is twice as slow on average as DEFAULT for marginal gains, and BEST_SPEED is twice as fast as DEFAULT but does not compress as well; still, it compresses well enough, and speed counts here, so BEST_SPEED is best. (However, I collected these measures with a dictionary per cf, so I need to redo them now that I've dropped the idea.) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
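The level tradeoff being measured can be illustrated with a small sketch comparing compressed sizes at each Deflate level (this is not the PR's benchmark; sizes and timings depend entirely on the input data):

```java
import java.util.zip.Deflater;

public class DeflateLevels {
  // Compress input at the given level and return the compressed size in bytes.
  public static int compressedSize(byte[] input, int level) {
    Deflater deflater = new Deflater(level);
    deflater.setInput(input);
    deflater.finish();
    byte[] buf = new byte[input.length + 64];
    int total = 0;
    while (!deflater.finished()) {
      total += deflater.deflate(buf);
    }
    deflater.end();
    return total;
  }

  public static void main(String[] args) {
    // Hypothetical repetitive WAL value payload, just for illustration.
    byte[] value = "row-0001:some-repetitive-value-data|".repeat(256).getBytes();
    System.out.println("BEST_SPEED:       " + compressedSize(value, Deflater.BEST_SPEED));
    System.out.println("DEFAULT:          " + compressedSize(value, Deflater.DEFAULT_COMPRESSION));
    System.out.println("BEST_COMPRESSION: " + compressedSize(value, Deflater.BEST_COMPRESSION));
  }
}
```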
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629763212 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java ## @@ -302,14 +343,28 @@ protected Cell parseCell() throws IOException { compression.getDictionary(CompressionContext.DictionaryIndex.QUALIFIER)); pos += elemLen; - // timestamp, type and value - int tsTypeValLen = length - pos; + // timestamp + long ts = StreamUtils.readLong(in); + pos = Bytes.putLong(backingArray, pos, ts); + // type and value + int typeValLen = length - pos; if (tagsLength > 0) { -tsTypeValLen = tsTypeValLen - tagsLength - KeyValue.TAGS_LENGTH_SIZE; +typeValLen = typeValLen - tagsLength - KeyValue.TAGS_LENGTH_SIZE; + } + // high bit of type byte is 1 if value is compressed + byte type = (byte)in.read(); + if ((type & 0x80) == 0x80) { Review comment: We decide at each value whether to store it compressed or not. First, there is a threshold. Too small and there is no point. Even so, sometimes values will not compress, even though value compression is enabled and by size it is eligible. If this happens we throw the compressed version away and just store the original. > Isn't it better to just zlib compress all the values? WDYT. Some value data will not compress but we won't know ahead of time. I'm not strongly wedded to this idea but it seems good. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
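The per-value decision described above (size threshold, attempt compression, throw the compressed form away when it is not smaller, and flag compressed values via the high bit of the type byte) could be sketched roughly as follows. All names and the threshold value are hypothetical, not the actual patch:

```java
import java.util.zip.Deflater;

public class ValueEncodeSketch {
  static final int MIN_COMPRESS_LEN = 100;          // hypothetical threshold
  static final byte COMPRESSED_FLAG = (byte) 0x80;  // high bit of the type byte

  // Returns [typeByte, payload...]: compress only when the value is large
  // enough AND the compressed form is actually smaller; otherwise store as-is.
  public static byte[] encode(byte type, byte[] value) {
    if (value.length >= MIN_COMPRESS_LEN) {
      Deflater deflater = new Deflater(Deflater.BEST_SPEED);
      deflater.setInput(value);
      deflater.finish();
      byte[] buf = new byte[value.length]; // only useful if output is smaller
      int n = deflater.deflate(buf);
      boolean done = deflater.finished();
      deflater.end();
      if (done && n < value.length) {
        byte[] out = new byte[1 + n];
        out[0] = (byte) (type | COMPRESSED_FLAG);
        System.arraycopy(buf, 0, out, 1, n);
        return out;
      }
      // Compressed form was not smaller: fall through and store the original.
    }
    byte[] out = new byte[1 + value.length];
    out[0] = type;
    System.arraycopy(value, 0, out, 1, value.length);
    return out;
  }
}
```

Decoding would then check `(type & 0x80)`, as in the quoted hunk, and inflate the payload only when the flag is set.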
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629763436 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CompressionContext.java ## @@ -34,21 +36,49 @@ @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.PHOENIX}) public class CompressionContext { - static final String ENABLE_WAL_TAGS_COMPRESSION = + public static final String ENABLE_WAL_TAGS_COMPRESSION = "hbase.regionserver.wal.tags.enablecompression"; + public static final String ENABLE_WAL_VALUE_COMPRESSION = + "hbase.regionserver.wal.value.enablecompression"; + public enum DictionaryIndex { REGION, TABLE, FAMILY, QUALIFIER, ROW } + static class ValueCompressor { +final Deflater deflater; +final Inflater inflater; + +public ValueCompressor() { + deflater = new Deflater(); + inflater = new Inflater(); Review comment: There is no sharing. One Compressor instance per WAL writer. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3251: HBASE-25876 Add retry if we fail to read all bytes of the protobuf ma…
Apache-HBase commented on pull request #3251: URL: https://github.com/apache/hbase/pull/3251#issuecomment-837558899 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 39s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 24s | branch-2 passed | | +1 :green_heart: | compile | 1m 48s | branch-2 passed | | +1 :green_heart: | shadedjars | 7m 56s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 20s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 29s | the patch passed | | +1 :green_heart: | compile | 1m 47s | the patch passed | | +1 :green_heart: | javac | 1m 47s | the patch passed | | +1 :green_heart: | shadedjars | 7m 58s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 17s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 3m 2s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 217m 11s | hbase-server in the patch passed. 
| | | | 258m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3251/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3251 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 86bde040dcae 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 4e507ccda0 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3251/1/testReport/ | | Max. process+thread count | 2966 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3251/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3236: HBASE-25859 Reference class incorrectly parses the protobuf magic marker
Apache-HBase commented on pull request #3236: URL: https://github.com/apache/hbase/pull/3236#issuecomment-837185063 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 7m 3s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 0s | master passed | | +1 :green_heart: | compile | 1m 1s | master passed | | +1 :green_heart: | shadedjars | 8m 12s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 40s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 38s | the patch passed | | +1 :green_heart: | compile | 1m 2s | the patch passed | | +1 :green_heart: | javac | 1m 2s | the patch passed | | +1 :green_heart: | shadedjars | 8m 16s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 153m 40s | hbase-server in the patch passed. 
| | | | 190m 13s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3236/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3236 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 1f7b79fa1a70 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 8c2332d465 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3236/2/testReport/ | | Max. process+thread count | 3844 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3236/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-25876) Add retry if we fail to read all bytes of the protobuf magic marker
[ https://issues.apache.org/jira/browse/HBASE-25876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-25876: -- Priority: Trivial (was: Major) > Add retry if we fail to read all bytes of the protobuf magic marker > --- > > Key: HBASE-25876 > URL: https://issues.apache.org/jira/browse/HBASE-25876 > Project: HBase > Issue Type: Sub-task > Components: io >Reporter: Michael Stack >Priority: Trivial > > The parent issue fixes an instance where we try once to read protobuf magic > marker bytes rather than retry till we have enough. This subtask applies the > same trick in all cases where we could run into this issue. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25876) Add retry if we fail to read all bytes of the protobuf magic marker
[ https://issues.apache.org/jira/browse/HBASE-25876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342125#comment-17342125 ] Michael Stack commented on HBASE-25876: --- Just two places (thought there were more). Trivial. > Add retry if we fail to read all bytes of the protobuf magic marker > --- > > Key: HBASE-25876 > URL: https://issues.apache.org/jira/browse/HBASE-25876 > Project: HBase > Issue Type: Sub-task > Components: io >Reporter: Michael Stack >Priority: Trivial > > The parent issue fixes an instance where we try once to read protobuf magic > marker bytes rather than retry till we have enough. This subtask applies the > same trick in all cases where we could run into this issue. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25774) ServerManager.getOnlineServer may miss some region servers when refreshing state in some procedure implementations
[ https://issues.apache.org/jira/browse/HBASE-25774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342127#comment-17342127 ] Michael Stack commented on HBASE-25774: --- Thanks for figuring the race [~zhangduo] (My bad too for not seeing it on review...) > ServerManager.getOnlineServer may miss some region servers when refreshing > state in some procedure implementations > -- > > Key: HBASE-25774 > URL: https://issues.apache.org/jira/browse/HBASE-25774 > Project: HBase > Issue Type: Bug > Components: Replication >Reporter: Xiaolin Ha >Assignee: Duo Zhang >Priority: Critical > Fix For: 3.0.0-alpha-1, 1.7.0, 2.5.0, 2.4.3, 2.3.5.1 > > > [https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3025/9/testReport/org.apache.hadoop.hbase.replication/TestSyncReplicationStandbyKillRS/precommit_checks___yetus_jdk8_Hadoop3_checks__/] > {code:java} > ...[truncated 391170 chars]... > 76d634:45149.replicationSource,1] regionserver.HRegionServer(2351): STOPPED: > Unexpected exception in RS:2;ece3af76d634:45149.replicationSource,1 > 2021-04-11T11:14:40,268 INFO [RS:2;ece3af76d634:45149] > regionserver.HeapMemoryManager(218): Stopping > 2021-04-11T11:14:40,268 INFO [MemStoreFlusher.0] > regionserver.MemStoreFlusher$FlushHandler(384): MemStoreFlusher.0 exiting > 2021-04-11T11:14:40,268 INFO [RS:2;ece3af76d634:45149] > flush.RegionServerFlushTableProcedureManager(118): Stopping region server > flush procedure manager abruptly. > 2021-04-11T11:14:40,270 INFO [RS:2;ece3af76d634:45149] > snapshot.RegionServerSnapshotManager(136): Stopping > RegionServerSnapshotManager abruptly. 
> 2021-04-11T11:14:40,270 INFO [RS:2;ece3af76d634:45149] > regionserver.HRegionServer(1146): aborting server > ece3af76d634,45149,1618139661734 > 2021-04-11T11:14:40,272 ERROR > [ReplicationExecutor-0.replicationSource,1-ece3af76d634,44745,1618139625245] > regionserver.ReplicationSource(428): Unexpected exception in > ReplicationExecutor-0.replicationSource,1-ece3af76d634,44745,1618139625245 > currentPath=null > java.lang.IllegalStateException: Source should be active. > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize(ReplicationSource.java:547) > ~[classes/:?] > at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282] > 2021-04-11T11:14:40,272 DEBUG > [ReplicationExecutor-0.replicationSource,1-ece3af76d634,44745,1618139625245] > regionserver.HRegionServer(2576): Abort already in progress. Ignoring the > current request with reason: Unexpected exception in > ReplicationExecutor-0.replicationSource,1-ece3af76d634,44745,1618139625245 > {code} > Maybe it should use HBASE-24877 to avoid failure of the initialize of > ReplicationSource. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629759791 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java ## @@ -256,6 +278,26 @@ public void write(Cell cell) throws IOException { } } } + +private byte[] compressValue(Cell cell) throws IOException { + byte[] buffer = new byte[4096]; + ByteArrayOutputStream baos = new ByteArrayOutputStream(); + Deflater deflater = compression.getValueCompressor().getDeflater(); + deflater.setInput(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()); + boolean finished = false; + do { +int bytesOut = deflater.deflate(buffer); +if (bytesOut > 0) { + baos.write(buffer, 0, bytesOut); +} else { + bytesOut = deflater.deflate(buffer, 0, buffer.length, Deflater.SYNC_FLUSH); Review comment: Yes, you are missing the semantics of Deflater#deflate. Deflater#deflate will only return 0 if it needs more input. At this point, because there is no more input, we are now done, but we still need to flush. To flush, we call deflate() again using the method that allows us to specify a sync flag. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
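That deflate-then-sync-flush pattern can be demonstrated in isolation. Below is a simplified stand-in for the PR's `compressValue`, assuming a long-lived Deflater that is never `finish()`ed because the same stream compresses subsequent values:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class SyncFlushDemo {
  // Compress one value with a long-lived Deflater (no finish()), flushing
  // with SYNC_FLUSH once deflate() returns 0 (i.e. it needs more input).
  public static byte[] compressValue(Deflater deflater, byte[] value) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buffer = new byte[4096];
    deflater.setInput(value);
    while (true) {
      int n = deflater.deflate(buffer);
      if (n > 0) {
        baos.write(buffer, 0, n);
      } else {
        // deflate() returned 0: all input consumed, but output may still be
        // buffered internally. SYNC_FLUSH forces it out so a reader can
        // decompress this value without waiting for more data. A full
        // implementation would loop here in case the flush overflows buffer.
        n = deflater.deflate(buffer, 0, buffer.length, Deflater.SYNC_FLUSH);
        baos.write(buffer, 0, n);
        break;
      }
    }
    return baos.toByteArray();
  }
}
```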
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629759981 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java ## @@ -256,6 +278,26 @@ public void write(Cell cell) throws IOException { } } } + +private byte[] compressValue(Cell cell) throws IOException { + byte[] buffer = new byte[4096]; + ByteArrayOutputStream baos = new ByteArrayOutputStream(); + Deflater deflater = compression.getValueCompressor().getDeflater(); + deflater.setInput(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()); + boolean finished = false; + do { +int bytesOut = deflater.deflate(buffer); +if (bytesOut > 0) { + baos.write(buffer, 0, bytesOut); +} else { + bytesOut = deflater.deflate(buffer, 0, buffer.length, Deflater.SYNC_FLUSH); Review comment: I will add a comment here. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629763436 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CompressionContext.java ## @@ -34,21 +36,49 @@ @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.PHOENIX}) public class CompressionContext { - static final String ENABLE_WAL_TAGS_COMPRESSION = + public static final String ENABLE_WAL_TAGS_COMPRESSION = "hbase.regionserver.wal.tags.enablecompression"; + public static final String ENABLE_WAL_VALUE_COMPRESSION = + "hbase.regionserver.wal.value.enablecompression"; + public enum DictionaryIndex { REGION, TABLE, FAMILY, QUALIFIER, ROW } + static class ValueCompressor { +final Deflater deflater; +final Inflater inflater; + +public ValueCompressor() { + deflater = new Deflater(); + inflater = new Inflater(); Review comment: There is no sharing. One CompressionContext instance per WAL writer. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25032) Wait for region server to become online before adding it to online servers in Master
[ https://issues.apache.org/jira/browse/HBASE-25032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342086#comment-17342086 ] Huaxiang Sun commented on HBASE-25032: -- [~ndimiduk] I think the metadata is not changed. When the patch was backported to branch-2, the correct fix version was not marked. In general, it is good practice to mark the correct fix version to reduce the release manager's work. Thanks. > Wait for region server to become online before adding it to online servers in > Master > > > Key: HBASE-25032 > URL: https://issues.apache.org/jira/browse/HBASE-25032 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Guggilam >Assignee: Caroline Zhou >Priority: Major > Labels: master, regionserver > Fix For: 3.0.0-alpha-1, 2.5.0 > > > As part of RS startup, the RS reports for duty to the Master. The Master acknowledges > the request and adds it to the onlineServers list for further assigning any > regions to the RS. > Once the Master acknowledges the reportForDuty and sends back the response, the RS > does a bunch of work, like initializing replication sources, before > becoming online. However, sometimes there could be an issue with initializing > replication sources when it is unable to connect to peer clusters because of > some kerberos configuration, and there would be a delay of around 20 mins in > becoming online. > > Since the master considers it online, it tries to assign regions, which fails > with a ServerNotRunningYet exception; then the master tries to unassign, which > again fails with the same exception, leading the region to the FAILED_CLOSE state. > > It would be good to have a check to see if the RS is ready to accept > assignment requests before adding it to the online servers list, which would > account for any such delays as described above -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25875) RegionServer failed to start due to IllegalThreadStateException in AuthenticationTokenSecretManager.start
[ https://issues.apache.org/jira/browse/HBASE-25875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342111#comment-17342111 ] Pankaj Kumar commented on HBASE-25875: -- This is a race condition problem, where the LeaderElector is started (via AuthenticationTokenSecretManager#retrievePassword) before AuthenticationTokenSecretManager#start starts it. Calling start on this live thread via AuthenticationTokenSecretManager#start throws IllegalThreadStateException. *Analysis:* 1. During RSRpcServices#createRpcServer, we create the NettyRpcServer instance (NettyRpcServerPreambleHandler, NettyRpcServerRequestDecoder etc. are initialized). 2. In NettyRpcServer#start, we initialize AuthenticationTokenSecretManager, set it as the secret manager, and then start it. 3. In AuthenticationTokenSecretManager#start - We start ZKSecretWatcher, which internally sets the watcher and refreshes the nodes. - We start the LeaderElector. Since we set AuthenticationTokenSecretManager as the secret manager in step 2, it is available for NettyServerRpcConnection processing before step 3 completes. So HBaseSaslRpcServer will have the AuthenticationTokenSecretManager and go into HBaseSaslRpcServer#evaluateResponse, which internally calls AuthenticationTokenSecretManager#retrievePassword. And while retrieving the password, it starts the LeaderElector if that thread is not alive (the relevant logs were observed). > RegionServer failed to start due to IllegalThreadStateException in > AuthenticationTokenSecretManager.start > - > > Key: HBASE-25875 > URL: https://issues.apache.org/jira/browse/HBASE-25875 > Project: HBase > Issue Type: Bug >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Major > > RegionServer failed to complete initialization and aborted during > AuthenticationTokenSecretManager#leaderElector start. 
> Observed following WARN log, > {noformat} > 2021-05-03 07:59:01,848 | WARN | RS-EventLoopGroup-1-6 | Thread > leaderElector[ZKSecretWatcher-leaderElector:56] is stopped or not alive | > org.apache.hadoop.hbase.security.token.AuthenticationTokenSecretManager.retrievePassword(AuthenticationTokenSecretManager.java:153) > 2021-05-03 07:59:01,848 | INFO | RS-EventLoopGroup-1-6 | Thread > leaderElector [ZKSecretWatcher-leaderElector:56] is started | > org.apache.hadoop.hbase.security.token.AuthenticationTokenSecretManager.retrievePassword(AuthenticationTokenSecretManager.java:156) > 2021-05-03 07:59:01,854 | INFO | ZKSecretWatcher-leaderElector | Found > existing leader with ID: RS-IP-PORT-StartCode | > org.apache.hadoop.hbase.zookeeper.ZKLeaderManager.waitToBecomeLeader(ZKLeaderManager.java:130) > {noformat} > As per the code, AuthenticationTokenSecretManager#leaderElector is started > while retrieving password before AuthenticationTokenSecretManager#start, > > [https://github.com/apache/hbase/blob/8c2332d46532135723cc7a6084a2a125f3d9d8db/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSecretManager.java#L155] > So IllegalThreadStateException occured during > AuthenticationTokenSecretManager#start, > > [https://github.com/apache/hbase/blob/8c2332d46532135723cc7a6084a2a125f3d9d8db/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSecretManager.java#L107] > {noformat} > 2021-05-03 07:59:02,066 | ERROR | main | Failed construction RegionServer | > org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:775) > java.lang.IllegalThreadStateException > at java.lang.Thread.start(Thread.java:708) > at > org.apache.hadoop.hbase.security.token.AuthenticationTokenSecretManager.start(AuthenticationTokenSecretManager.java:107) > at > org.apache.hadoop.hbase.ipc.NettyRpcServer.start(NettyRpcServer.java:131) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.start(RSRpcServices.java:1695) > 
at > org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:756) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:3270) > at > org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:63) > at > org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87) > {noformat} -- This message was sent by Atlassian Jira
[jira] [Resolved] (HBASE-25859) Reference class incorrectly parses the protobuf magic marker
[ https://issues.apache.org/jira/browse/HBASE-25859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-25859. --- Fix Version/s: 2.4.4 2.5.0 3.0.0-alpha-1 Hadoop Flags: Reviewed Resolution: Fixed Merged the PR. Let me make a subissue to address other instances of the problem here. > Reference class incorrectly parses the protobuf magic marker > > > Key: HBASE-25859 > URL: https://issues.apache.org/jira/browse/HBASE-25859 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.4.1 >Reporter: Constantin-Catalin Luca >Assignee: Constantin-Catalin Luca >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.4 > > > The Reference class incorrectly parses the protobuf magic marker. > It uses: > {code:java} > // DataInputStream.read(byte[lengthOfPBMagic]){code} > but this call does not guarantee to read all the bytes of the marker. > The fix is the same as the one for > https://issues.apache.org/jira/browse/HBASE-25674 -- This message was sent by Atlassian Jira (v8.3.4#803005)
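The bug above comes down to `InputStream.read(byte[])` not guaranteeing a full read, while `DataInputStream.readFully` loops until the buffer is filled. A minimal standalone sketch of the two patterns (class and method names here are illustrative, not the actual `Reference` code; HBase's protobuf magic prefix is the bytes of "PBUF"):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class MagicMarkerRead {
  static final byte[] PB_MAGIC = "PBUF".getBytes(); // protobuf magic prefix

  // Buggy pattern: a single read() may return fewer than magic.length bytes
  // (e.g. on a chunked or slow stream), so the marker check can fail spuriously.
  static byte[] readMagicBuggy(InputStream in) throws IOException {
    byte[] magic = new byte[PB_MAGIC.length];
    in.read(magic); // return value ignored; may be a short read
    return magic;
  }

  // Fixed pattern: readFully() loops internally until the buffer is filled,
  // throwing EOFException if the stream ends first.
  static byte[] readMagicFixed(InputStream in) throws IOException {
    byte[] magic = new byte[PB_MAGIC.length];
    new DataInputStream(in).readFully(magic);
    return magic;
  }

  public static void main(String[] args) throws IOException {
    InputStream in = new ByteArrayInputStream("PBUFserialized-payload".getBytes());
    System.out.println(Arrays.equals(readMagicFixed(in), PB_MAGIC)); // true
  }
}
```

Note that a `ByteArrayInputStream` never short-reads, so the buggy variant only misbehaves on streams that deliver data in pieces, which is exactly why this class of bug is easy to miss in tests.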
[jira] [Commented] (HBASE-25875) RegionServer failed to start due to IllegalThreadStateException in AuthenticationTokenSecretManager.start
[ https://issues.apache.org/jira/browse/HBASE-25875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342113#comment-17342113 ] Pankaj Kumar commented on HBASE-25875: -- NettyRpcServer starting the secret manager in a synchronized way should fix this race condition. AuthenticationTokenSecretManager#retrievePassword starts the LeaderElector in a synchronized manner when the LeaderElector thread isn't alive or is stopped. [https://github.com/apache/hbase/blob/8c2332d46532135723cc7a6084a2a125f3d9d8db/hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSecretManager.java#L151] Ideally, AuthenticationTokenSecretManager should start the LeaderElector during RS startup. > RegionServer failed to start due to IllegalThreadStateException in > AuthenticationTokenSecretManager.start > - > > Key: HBASE-25875 > URL: https://issues.apache.org/jira/browse/HBASE-25875 > Project: HBase > Issue Type: Bug >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Major > > RegionServer failed to complete initialization and aborted during > AuthenticationTokenSecretManager#leaderElector start. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
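The race discussed in this thread boils down to `Thread.start()` being legal exactly once per `Thread` object: if `retrievePassword` starts the leaderElector thread just before `AuthenticationTokenSecretManager#start` does, the second `start()` throws `IllegalThreadStateException`. A minimal standalone sketch of the failure and of the kind of synchronized, state-checked guard the fix proposes (`GuardedStart` is an illustrative class, not the actual HBase patch):

```java
public class GuardedStart {
  private final Thread leaderElector = new Thread(() -> { /* election loop elided */ });

  // Synchronizing the start path and checking the thread state first makes a
  // racing second caller a no-op instead of an IllegalThreadStateException.
  public synchronized void start() {
    if (leaderElector.getState() == Thread.State.NEW) {
      leaderElector.start();
    }
  }

  public static void main(String[] args) {
    // The unguarded failure: a second Thread.start() always throws.
    Thread t = new Thread(() -> {});
    t.start();
    try {
      t.start();
    } catch (IllegalThreadStateException e) {
      System.out.println("caught IllegalThreadStateException");
    }

    // The guarded path tolerates a double start.
    GuardedStart g = new GuardedStart();
    g.start();
    g.start(); // no-op: the guard sees the thread is no longer NEW
    System.out.println("guarded double start ok");
  }
}
```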
[GitHub] [hbase] Apache-HBase commented on pull request #3250: HBASE-25875 RegionServer failed to start due to IllegalThreadStateException in AuthenticationTokenSecretManager.start
Apache-HBase commented on pull request #3250: URL: https://github.com/apache/hbase/pull/3250#issuecomment-837331291 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 47s | master passed | | +1 :green_heart: | compile | 3m 39s | master passed | | +1 :green_heart: | checkstyle | 1m 13s | master passed | | +1 :green_heart: | spotbugs | 2m 19s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 3s | the patch passed | | +1 :green_heart: | compile | 3m 40s | the patch passed | | +1 :green_heart: | javac | 3m 40s | the patch passed | | +1 :green_heart: | checkstyle | 1m 11s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 22m 17s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | -1 :x: | spotbugs | 2m 45s | hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. 
| | | | 57m 26s | | | Reason | Tests | |---:|:--| | FindBugs | module:hbase-server | | | org.apache.hadoop.hbase.ipc.NettyRpcServer.start() synchronizes on updated field NettyRpcServer.authTokenSecretMgr At NettyRpcServer.java:NettyRpcServer.authTokenSecretMgr At NettyRpcServer.java:[line 135] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3250/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3250 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 6d9d5b4edd1f 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 2b6a91a1da | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | spotbugs | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3250/1/artifact/yetus-general-check/output/new-spotbugs-hbase-server.html | | Max. process+thread count | 86 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3250/1/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629759791 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java ## @@ -256,6 +278,26 @@ public void write(Cell cell) throws IOException { } } } + +private byte[] compressValue(Cell cell) throws IOException { + byte[] buffer = new byte[4096]; + ByteArrayOutputStream baos = new ByteArrayOutputStream(); + Deflater deflater = compression.getValueCompressor().getDeflater(); + deflater.setInput(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()); + boolean finished = false; + do { +int bytesOut = deflater.deflate(buffer); +if (bytesOut > 0) { + baos.write(buffer, 0, bytesOut); +} else { + bytesOut = deflater.deflate(buffer, 0, buffer.length, Deflater.SYNC_FLUSH); Review comment: Yes you are missing the semantics of Deflater#deflate. Deflater#deflate will only return 0 if it needs more input. At this point because there is no more input, we are now done, but we still need to flush. To flush it we call deflate() again using the method that allows us to specify a sync flag. Also, buffer can be reused here like this. It's fine. buffer does not accumulate output. It is used per pass as part of the API contract with the compressor and then the contents are sent to the output stream. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
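The `Deflater#deflate` semantics described in the review comment can be exercised in a self-contained sketch: `deflate()` returns 0 once it needs more input, and since there is none left at that point, one more call with `SYNC_FLUSH` drains the compressor. This is a simplified stand-in for the PR's `compressValue` (no `Cell`, no `CompressionContext`), under the assumption that a single value is compressed per call:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DeflateLoop {
  static byte[] compress(Deflater deflater, byte[] value) {
    // buffer is safely reused each pass: its contents are copied to baos
    // before the next deflate() call, so it never accumulates output.
    byte[] buffer = new byte[4096];
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    deflater.setInput(value);
    boolean finished = false;
    do {
      int bytesOut = deflater.deflate(buffer);
      if (bytesOut > 0) {
        baos.write(buffer, 0, bytesOut);
      } else {
        // deflate() returned 0: no more input to consume, so flush
        // whatever the compressor is still holding.
        bytesOut = deflater.deflate(buffer, 0, buffer.length, Deflater.SYNC_FLUSH);
        baos.write(buffer, 0, bytesOut);
        finished = true;
      }
    } while (!finished);
    return baos.toByteArray();
  }

  static byte[] decompress(byte[] compressed, int originalLength) throws DataFormatException {
    Inflater inflater = new Inflater();
    inflater.setInput(compressed);
    byte[] out = new byte[originalLength];
    inflater.inflate(out);
    return out;
  }

  public static void main(String[] args) throws DataFormatException {
    byte[] value = "some cell value".getBytes();
    byte[] roundTrip = decompress(compress(new Deflater(), value), value.length);
    System.out.println(new String(roundTrip));
  }
}
```

Because `SYNC_FLUSH` forces all pending output to a byte boundary, the matching `Inflater` can fully recover the value without the stream ever being finished, which is what lets a WAL writer keep one long-lived compressor across many cells.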
[jira] [Updated] (HBASE-25855) [hbase-thirdparty] Fix typo in jersey relocation path
[ https://issues.apache.org/jira/browse/HBASE-25855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-25855: - Resolution: Fixed Status: Resolved (was: Patch Available) > [hbase-thirdparty] Fix typo in jersey relocation path > - > > Key: HBASE-25855 > URL: https://issues.apache.org/jira/browse/HBASE-25855 > Project: HBase > Issue Type: Task > Components: hbase-thirdparty >Affects Versions: thirdparty-3.4.1 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: thirdparty-4.0.0 > > > We shade to "jersery" instead of "jersey". -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #3251: HBASE-25876 Add retry if we fail to read all bytes of the protobuf ma…
Apache-HBase commented on pull request #3251: URL: https://github.com/apache/hbase/pull/3251#issuecomment-837357330 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 36s | branch-2 passed | | +1 :green_heart: | compile | 4m 17s | branch-2 passed | | +1 :green_heart: | checkstyle | 1m 54s | branch-2 passed | | +1 :green_heart: | spotbugs | 4m 19s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 24s | the patch passed | | +1 :green_heart: | compile | 5m 18s | the patch passed | | +1 :green_heart: | javac | 5m 18s | the patch passed | | +1 :green_heart: | checkstyle | 0m 39s | hbase-client: The patch generated 0 new + 57 unchanged - 1 fixed = 57 total (was 58) | | +1 :green_heart: | checkstyle | 1m 23s | The patch passed checkstyle in hbase-server | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 15m 37s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 4m 16s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. 
| | | | 56m 56s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3251/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3251 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux fe68fdf12963 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 4e507ccda0 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 96 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3251/1/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk commented on pull request #3244: HBASE-25869 WAL value compression
ndimiduk commented on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837437980 Oh, one other comment I forgot to mention before hitting submit. Is it reasonable to always use the same compressor all the time for WALs? It strikes me that operators make an explicit choice about the compression used in their tables, so maybe they would expect that the compressor used by WALs would follow suit. I know it's not that simple, given that WALs are per host and not per table, so maybe it's something to consider in the future. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3242: HBASE-25867 Extra doc around ITBLL
Apache-HBase commented on pull request #3242: URL: https://github.com/apache/hbase/pull/3242#issuecomment-837454813 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.4 Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 18s | branch-2.4 passed | | +1 :green_heart: | compile | 1m 22s | branch-2.4 passed | | +1 :green_heart: | shadedjars | 6m 0s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 50s | branch-2.4 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 20s | the patch passed | | +1 :green_heart: | compile | 1m 23s | the patch passed | | +1 :green_heart: | javac | 1m 23s | the patch passed | | +1 :green_heart: | shadedjars | 6m 3s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 51s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 140m 34s | hbase-server in the patch passed. | | +1 :green_heart: | unit | 1m 13s | hbase-it in the patch passed. 
| | | | 168m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/3/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3242 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ea3f97ada3b4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.4 / 992d80ad39 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/3/testReport/ | | Max. process+thread count | 4216 (vs. ulimit of 12500) | | modules | C: hbase-server hbase-it U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629757734 ## File path: hbase-protocol-shaded/src/main/protobuf/server/region/WAL.proto ## @@ -32,6 +32,7 @@ message WALHeader { optional bool has_tag_compression = 3; optional string writer_cls_name = 4; optional string cell_codec_cls_name = 5; + optional bool has_value_compression = 6; Review comment: Yes, we initialize the CompressionContext from the header when initializing the reader. It has to be done up front. If we made value compression something we always do, then this field is not necessary. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629757734 ## File path: hbase-protocol-shaded/src/main/protobuf/server/region/WAL.proto ## @@ -32,6 +32,7 @@ message WALHeader { optional bool has_tag_compression = 3; optional string writer_cls_name = 4; optional string cell_codec_cls_name = 5; + optional bool has_value_compression = 6; Review comment: Yes, we initialize the `CompressionContext` from the header when initializing the reader. With this feature toggle placed in `CompressionContext` this is what we have to do. If we made value compression something we always do, then this field is not necessary. I'd prefer that over turning it on on-demand at the first encounter of a high bit set in `Type`. ## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java ## @@ -230,6 +230,11 @@ public static long getKeyDataStructureSize(int rlength, int flength, int qlength DeleteColumn((byte)12), DeleteFamily((byte)14), +// Effective maximum is 127 (Byte.MAX_VALUE). We set the high order bit of the +// type byte in the WAL codecs to indicate, in a backwards compatible way, if the +// value is compressed there. +EffectiveMaximum((byte)Byte.MAX_VALUE), + // Maximum is used when searching; you look from maximum on down. Maximum((byte)255); Review comment: The short answer is this part is where I thought people would raise objections, so wasn't sure what to do here, but I agree with your point. We can try it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
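The "high bit of the type byte" idea mentioned in the review can be sketched in isolation: since KeyValue type codes fit in 7 bits (effective maximum 127), the high-order bit of the serialized type byte is free to flag "value is compressed" in a backwards compatible way. The class, method names, and flag constant below are illustrative, not the PR's actual encoding:

```java
public class TypeByteFlag {
  static final int VALUE_COMPRESSED = 0x80; // high-order bit of the type byte

  static byte encodeType(byte type, boolean valueCompressed) {
    return valueCompressed ? (byte) (type | VALUE_COMPRESSED) : type;
  }

  static boolean isValueCompressed(byte encoded) {
    return (encoded & VALUE_COMPRESSED) != 0; // flag set iff high bit set
  }

  static byte stripFlag(byte encoded) {
    return (byte) (encoded & 0x7F); // recover the original 7-bit type code
  }

  public static void main(String[] args) {
    byte put = (byte) 4; // e.g. KeyValue.Type.Put's code
    byte encoded = encodeType(put, true);
    System.out.println(isValueCompressed(encoded)); // true
    System.out.println(stripFlag(encoded));         // 4
  }
}
```

Old readers that treat the type byte as a plain code would misread a flagged byte, which is why the alternative of gating the feature on a `WALHeader` field, as the diff above does, is also on the table.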
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629763436 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/CompressionContext.java ## @@ -34,21 +36,49 @@ @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.PHOENIX}) public class CompressionContext { - static final String ENABLE_WAL_TAGS_COMPRESSION = + public static final String ENABLE_WAL_TAGS_COMPRESSION = "hbase.regionserver.wal.tags.enablecompression"; + public static final String ENABLE_WAL_VALUE_COMPRESSION = + "hbase.regionserver.wal.value.enablecompression"; + public enum DictionaryIndex { REGION, TABLE, FAMILY, QUALIFIER, ROW } + static class ValueCompressor { +final Deflater deflater; +final Inflater inflater; + +public ValueCompressor() { + deflater = new Deflater(); + inflater = new Inflater(); Review comment: There is no sharing. One CompressionContext instance per WAL writer. If there is sharing, WAL compression would already be thoroughly broken, that is bad bad bad, dictionaries would be all messed up because they would be updated with output spread among different files and not the same file (think about it, it cannot work) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase-operator-tools] joshelser commented on a change in pull request #86: HBASE-25874 [hbase-operator-tools]Add tool for identifying 'unknown s…
joshelser commented on a change in pull request #86: URL: https://github.com/apache/hbase-operator-tools/pull/86#discussion_r629631126 ## File path: hbase-tools/src/main/java/org/apache/hbase/RegionsOnUnknownServersRecoverer.java ## @@ -0,0 +1,101 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hbase; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.conf.Configured; +import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.apache.hadoop.util.Tool; +import org.apache.hadoop.util.ToolRunner; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.BufferedReader; +import java.io.File; +import java.io.FileReader; +import java.util.HashSet; +import java.util.Set; + +/** + * Tool for identifying Unknown Servers from master logs and schedule SCPs for each of those using + * HBCK2 'scheduleRecoveries' option. This is useful for clusters running hbase versions lower than + * 2.2.7, 2.3.5 and 2.4.7. For any of these versions or higher, use HBCK2 'recoverUnknown' option. 
+ */ +public class RegionsOnUnknownServersRecoverer extends Configured implements Tool { + + private static final Logger LOG = +LoggerFactory.getLogger(RegionsOnUnknownServersRecoverer.class.getName()); + + private static final String CATALOG_JANITOR = "CatalogJanitor: hole="; + + private static final String UNKNOWN_SERVER = "unknown_server="; + + private Configuration conf; + + private Set unknownServers = new HashSet<>(); + + public RegionsOnUnknownServersRecoverer(Configuration conf){ +this.conf = conf; + } + + @Override + public int run(String[] args) throws Exception { +if(args.length!=1){ + LOG.error("Wrong number of arguments. " ++ "Arguments are: "); + return 1; +} +BufferedReader reader = null; +try(Connection conn = ConnectionFactory.createConnection(conf)) { + reader = new BufferedReader(new FileReader(new File(args[0]))); Review comment: Oh, yes, you're right. I missed it down there. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-25875) RegionServer failed to start due to IllegalThreadStateException in AuthenticationTokenSecretManager.start
[ https://issues.apache.org/jira/browse/HBASE-25875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pankaj Kumar updated HBASE-25875: - Fix Version/s: 3.0.0-alpha-1 Status: Patch Available (was: In Progress) > RegionServer failed to start due to IllegalThreadStateException in > AuthenticationTokenSecretManager.start > - > > Key: HBASE-25875 > URL: https://issues.apache.org/jira/browse/HBASE-25875 > Project: HBase > Issue Type: Bug >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Major > Fix For: 3.0.0-alpha-1 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk commented on a change in pull request #3243: HBASE-25864 Use shaded javax.ws.rs package classes
ndimiduk commented on a change in pull request #3243: URL: https://github.com/apache/hbase/pull/3243#discussion_r629645403 ## File path: hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/model/ScannerModel.java ## @@ -18,22 +17,19 @@ */ package org.apache.hadoop.hbase.rest.model; +import com.fasterxml.jackson.annotation.JsonInclude; Review comment: Nope, because I don't relocate Jackson, just the `jackson-jaxrs-json-provider` and things related specifically to jaxrs. The other PR uses an `includes` list, not excludes, and only relocates the package prefix `com.fasterxml.jackson.jaxrs`. Transitive dependencies of the two artifacts it relocates are preserved. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase-thirdparty] busbey commented on a change in pull request #51: HBASE-25863 Shade javax.ws.rs package for use with shaded Jersey
busbey commented on a change in pull request #51: URL: https://github.com/apache/hbase-thirdparty/pull/51#discussion_r629645813 ## File path: hbase-shaded-jackson-jaxrs-json-provider/pom.xml ## @@ -0,0 +1,110 @@ + +http://maven.apache.org/POM/4.0.0; xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance; xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd;> + +4.0.0 + +org.apache.hbase.thirdparty +hbase-thirdparty +3.5.1-SNAPSHOT Review comment: I think we're in agreement about the impact for this change. My question was if that's obviated by HBASE-25855 having landed already. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk commented on a change in pull request #3243: HBASE-25864 Use shaded javax.ws.rs package classes
ndimiduk commented on a change in pull request #3243: URL: https://github.com/apache/hbase/pull/3243#discussion_r629644198 ## File path: hbase-rest/pom.xml ## @@ -160,7 +151,6 @@ org.apache.hbase hbase-protocol-shaded - jar Review comment: Yes, this was redundant. From [Maven Coordinates](http://maven.apache.org/pom.html#Maven_Coordinates), > When no packaging is declared, Maven assumes the packaging is the default: jar. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk commented on a change in pull request #3243: HBASE-25864 Use shaded javax.ws.rs package classes
ndimiduk commented on a change in pull request #3243: URL: https://github.com/apache/hbase/pull/3243#discussion_r629645829 ## File path: hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/RowResourceBase.java ## @@ -18,22 +18,16 @@ package org.apache.hadoop.hbase.rest; import static org.junit.Assert.assertEquals; - import com.fasterxml.jackson.databind.ObjectMapper; Review comment: See above -- it seems we do not currently shade jackson-databind and the patch as posted does not do so either.
[GitHub] [hbase] saintstack opened a new pull request #3251: HBASE-25876 Add retry if we fail to read all bytes of the protobuf ma…
saintstack opened a new pull request #3251: URL: https://github.com/apache/hbase/pull/3251 …gic marker
[GitHub] [hbase] Apache-HBase commented on pull request #3242: HBASE-25867 Extra doc around ITBLL
Apache-HBase commented on pull request #3242: URL: https://github.com/apache/hbase/pull/3242#issuecomment-837320433

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 31s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2.4 Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 14s | branch-2.4 passed |
| +1 :green_heart: | compile | 3m 49s | branch-2.4 passed |
| +1 :green_heart: | checkstyle | 1m 28s | branch-2.4 passed |
| +1 :green_heart: | spotbugs | 2m 31s | branch-2.4 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 19s | the patch passed |
| +1 :green_heart: | compile | 3m 48s | the patch passed |
| -0 :warning: | javac | 0m 39s | hbase-it generated 1 new + 102 unchanged - 1 fixed = 103 total (was 103) |
| +1 :green_heart: | checkstyle | 1m 10s | hbase-server: The patch generated 0 new + 200 unchanged - 1 fixed = 200 total (was 201) |
| +1 :green_heart: | checkstyle | 0m 17s | hbase-it: The patch generated 0 new + 45 unchanged - 1 fixed = 45 total (was 46) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 17m 12s | Patch does not cause any errors with Hadoop 2.10.0 or 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 56s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. |
| | | | 49m 2s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/3/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3242 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 03661dfdb385 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2.4 / 992d80ad39 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/3/artifact/yetus-general-check/output/diff-compile-javac-hbase-it.txt |
| Max. process+thread count | 96 (vs. ulimit of 12500) |
| modules | C: hbase-server hbase-it U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/3/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Assigned] (HBASE-25855) [hbase-thirdparty] Fix typo in jersey relocation path
[ https://issues.apache.org/jira/browse/HBASE-25855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey reassigned HBASE-25855: --- Assignee: Nick Dimiduk (was: Sean Busbey) > [hbase-thirdparty] Fix typo in jersey relocation path > - > > Key: HBASE-25855 > URL: https://issues.apache.org/jira/browse/HBASE-25855 > Project: HBase > Issue Type: Task > Components: hbase-thirdparty >Affects Versions: thirdparty-3.4.1 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: thirdparty-4.0.0 > > > We shade to "jersery" instead of "jersey". -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-25855) [hbase-thirdparty] Fix typo in jersey relocation path
[ https://issues.apache.org/jira/browse/HBASE-25855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey reassigned HBASE-25855: --- Assignee: Sean Busbey (was: Nick Dimiduk) > [hbase-thirdparty] Fix typo in jersey relocation path > - > > Key: HBASE-25855 > URL: https://issues.apache.org/jira/browse/HBASE-25855 > Project: HBase > Issue Type: Task > Components: hbase-thirdparty >Affects Versions: thirdparty-3.4.1 >Reporter: Nick Dimiduk >Assignee: Sean Busbey >Priority: Minor > Fix For: thirdparty-4.0.0 > > > We shade to "jersery" instead of "jersey".
[GitHub] [hbase] Apache-HBase commented on pull request #3242: HBASE-25867 Extra doc around ITBLL
Apache-HBase commented on pull request #3242: URL: https://github.com/apache/hbase/pull/3242#issuecomment-837517120

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 1m 45s | Docker mode activated. |
| -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2.4 Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 44s | branch-2.4 passed |
| +1 :green_heart: | compile | 1m 54s | branch-2.4 passed |
| +1 :green_heart: | shadedjars | 7m 44s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 1s | branch-2.4 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 49s | the patch passed |
| +1 :green_heart: | compile | 1m 50s | the patch passed |
| +1 :green_heart: | javac | 1m 50s | the patch passed |
| +1 :green_heart: | shadedjars | 7m 43s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 0s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 197m 4s | hbase-server in the patch passed. |
| +1 :green_heart: | unit | 1m 7s | hbase-it in the patch passed. |
| | | | 233m 29s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3242 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux c70bb594800e 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2.4 / 992d80ad39 |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/3/testReport/ |
| Max. process+thread count | 2488 (vs. ulimit of 12500) |
| modules | C: hbase-server hbase-it U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3242/3/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] apurtell commented on pull request #3244: HBASE-25869 WAL value compression
apurtell commented on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837518016 > Is it reasonable to always use the same compressor all the time for WALs? It strikes me that operators make an explicit choice about the compression used in their tables, so maybe they would expect that the compressor used by WALs would follow suit. I know it's not that simple, given that WALs are per host and not per table, so maybe it's something to consider in the future. If this is to be tied to table settings we would have to look up settings per table/CF whenever processing an edit. -1. I don't want to do this. It would be too expensive. I see the goal here as making WALs not so terribly inefficient to keep around. One pass of a standard compression over the values can achieve that, especially when we drop values that don't compress. Speed is important so BEST_SPEED is the right choice for compression level always. Enabling or disabling compression is enough configuration to give to users. This is what we have given them up to now. Now they can turn on or off value compression too. Arguably zlib compression level here is not a user serviceable setting but I could be convinced to at least make that a site configuration setting.
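The "one pass of standard compression at BEST_SPEED, dropping values that don't compress" idea discussed above can be sketched with `java.util.zip.Deflater`. This is a minimal illustration, not HBase's actual WAL codec; the class and method names are invented:

```java
import java.util.zip.Deflater;

// Hypothetical sketch: compress a WAL value with zlib at BEST_SPEED and
// keep the compressed form only when it actually saves space.
public class WalValueCompressionSketch {

    // Returns the compressed bytes if strictly smaller than the input,
    // otherwise returns the input unchanged.
    public static byte[] maybeCompress(byte[] value) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        try {
            deflater.setInput(value);
            deflater.finish();
            // Only bother collecting output that fits in fewer bytes than the input.
            byte[] buf = new byte[value.length];
            int n = 0;
            while (!deflater.finished() && n < buf.length) {
                n += deflater.deflate(buf, n, buf.length - n);
            }
            if (deflater.finished() && n < value.length) {
                byte[] out = new byte[n];
                System.arraycopy(buf, 0, out, 0, n);
                return out;   // compression won: write the smaller form
            }
            return value;     // incompressible value: write it as-is
        } finally {
            deflater.end();
        }
    }

    public static void main(String[] args) {
        byte[] repetitive = new byte[1024]; // all zeros: highly compressible
        System.out.println(maybeCompress(repetitive).length < repetitive.length);
    }
}
```

In the real codec a single deflater instance is reused across all values in a WAL file, which amortizes per-value overhead; this sketch creates one per call purely for clarity.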
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629760897 ## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java ## @@ -230,6 +230,11 @@ public static long getKeyDataStructureSize(int rlength, int flength, int qlength DeleteColumn((byte)12), DeleteFamily((byte)14), +// Effective maximum is 127 (Byte.MAX_VALUE). We set the high order bit of the +// type byte in the WAL codecs to indicate, in a backwards compatible way, if the +// value is compressed there. +EffectiveMaximum((byte)Byte.MAX_VALUE), + // Maximum is used when searching; you look from maximum on down. Maximum((byte)255); Review comment: > Add explicit Type entries for CompressedPut, CompressedDelete, , with their associated code values also explicitly listed Such Type entries would not be used anywhere outside of the WAL codec. It's really a WAL codec internal detail. As such, this would raise more questions than provide answers. However I agree with your related point that to make this code production ready the magic constants should be defined with names and javadoc (in wal codec)
[GitHub] [hbase] rda3mon commented on a change in pull request #3246: HBASE-25870 while creating manifest, search only for ancestors insteadd of all of…
rda3mon commented on a change in pull request #3246: URL: https://github.com/apache/hbase/pull/3246#discussion_r629784728 ## File path: hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManager.java ## @@ -295,6 +295,11 @@ public void setBackupInfo(BackupInfo backupInfo) { .withRootDir(backup.getBackupRootDir()).withTableList(backup.getTableNames()) .withStartTime(backup.getStartTs()).withCompleteTime(backup.getCompleteTs()).build(); + // Not required if root dir is not same. + if (!image.getRootDir().equals(backupInfo.getBackupRootDir())) { Review comment: @saintstack I have updated the comment.
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629801863 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java ## @@ -241,10 +246,27 @@ public void write(Cell cell) throws IOException { compression.getDictionary(CompressionContext.DictionaryIndex.FAMILY)); PrivateCellUtil.compressQualifier(out, cell, compression.getDictionary(CompressionContext.DictionaryIndex.QUALIFIER)); - // Write timestamp, type and value as uncompressed. + // Write timestamp, type and value. StreamUtils.writeLong(out, cell.getTimestamp()); - out.write(cell.getTypeByte()); - PrivateCellUtil.writeValue(out, cell, cell.getValueLength()); + byte type = cell.getTypeByte(); + if (compression.getValueCompressor() != null && + cell.getValueLength() > VALUE_COMPRESS_THRESHOLD) { +// Try compressing the cell's value +byte[] compressedBytes = compressValue(cell); +// Only write the compressed value if we have achieved some space savings. +if (compressedBytes.length < cell.getValueLength()) { + // Set the high bit of type to indicate the value is compressed + out.write((byte)(type|0x80)); Review comment: It is simpler if we don't try to conditionally compress values. No high bit twiddling. So there is no leakage into KeyValue in this case and no magic values. There also doesn't need to be a size threshold. Since we are using a single deflater instance over all values in the WAL file, overheads of too small values should be amortized.
[GitHub] [hbase] apurtell commented on a change in pull request #3244: HBASE-25869 WAL value compression
apurtell commented on a change in pull request #3244: URL: https://github.com/apache/hbase/pull/3244#discussion_r629802049 ## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java ## @@ -230,6 +230,11 @@ public static long getKeyDataStructureSize(int rlength, int flength, int qlength DeleteColumn((byte)12), DeleteFamily((byte)14), +// Effective maximum is 127 (Byte.MAX_VALUE). We set the high order bit of the +// type byte in the WAL codecs to indicate, in a backwards compatible way, if the +// value is compressed there. +EffectiveMaximum((byte)Byte.MAX_VALUE), + // Maximum is used when searching; you look from maximum on down. Maximum((byte)255); Review comment: It is simpler if we don't try to conditionally compress values. No high bit twiddling. So there is no leakage into KeyValue in this case and no magic values.
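The high-bit encoding debated in this thread (and ultimately dropped in favor of unconditional compression) is simple bit twiddling: all KeyValue type codes fit in 7 bits, so bit `0x80` is free to flag "value is compressed". A sketch, with invented method names — this is not HBase API:

```java
// Hypothetical sketch of the "high bit of the type byte flags a compressed
// value" scheme from the review. Type codes (Put=4, Delete=8, ...) are all
// <= 127, so bit 0x80 can carry the flag backwards-compatibly.
public class TypeByteFlagSketch {

    static final int COMPRESSED_FLAG = 0x80;

    // Mark a type byte as "the value that follows is compressed".
    public static byte markCompressed(byte type) {
        return (byte) (type | COMPRESSED_FLAG);
    }

    // Was the value written in compressed form?
    public static boolean isCompressed(byte typeByte) {
        return (typeByte & COMPRESSED_FLAG) != 0;
    }

    // Recover the original KeyValue type code.
    public static byte rawType(byte typeByte) {
        return (byte) (typeByte & 0x7F);
    }

    public static void main(String[] args) {
        byte put = (byte) 4; // KeyValue.Type.Put code
        byte flagged = markCompressed(put);
        System.out.println(isCompressed(flagged) && rawType(flagged) == put);
    }
}
```

The review's objection is visible even in this tiny sketch: the flag leaks a WAL-codec detail into the type-byte namespace owned by KeyValue, which is why unconditional compression (no flag bit at all) was preferred.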
[GitHub] [hbase] Apache-HBase commented on pull request #3246: HBASE-25870 while creating manifest, search only for ancestors insteadd of all of…
Apache-HBase commented on pull request #3246: URL: https://github.com/apache/hbase/pull/3246#issuecomment-837677173
[GitHub] [hbase] Apache-HBase commented on pull request #3246: HBASE-25870 while creating manifest, search only for ancestors insteadd of all of…
Apache-HBase commented on pull request #3246: URL: https://github.com/apache/hbase/pull/3246#issuecomment-837636720
[GitHub] [hbase] Apache-HBase commented on pull request #3244: HBASE-25869 WAL value compression
Apache-HBase commented on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837655575

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 1m 2s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | prototool | 0m 0s | prototool was not available. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 37s | master passed |
| +1 :green_heart: | compile | 5m 20s | master passed |
| +1 :green_heart: | checkstyle | 1m 41s | master passed |
| +1 :green_heart: | spotbugs | 6m 37s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 35s | the patch passed |
| +1 :green_heart: | compile | 5m 27s | the patch passed |
| +1 :green_heart: | cc | 5m 27s | the patch passed |
| +1 :green_heart: | javac | 5m 27s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 39s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 17m 50s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. |
| +1 :green_heart: | hbaseprotoc | 2m 4s | the patch passed |
| +1 :green_heart: | spotbugs | 7m 5s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | | 65m 47s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/6/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3244 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool |
| uname | Linux 185e016f5114 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 2b6a91a1da |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Max. process+thread count | 96 (vs. ulimit of 3) |
| modules | C: hbase-protocol-shaded hbase-common hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/6/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3246: HBASE-25870 while creating manifest, search only for ancestors insteadd of all of…
Apache-HBase commented on pull request #3246: URL: https://github.com/apache/hbase/pull/3246#issuecomment-837684227

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 34s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 5m 0s | master passed |
| +1 :green_heart: | compile | 0m 30s | master passed |
| +1 :green_heart: | shadedjars | 9m 59s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 18s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 43s | the patch passed |
| +1 :green_heart: | compile | 0m 27s | the patch passed |
| +1 :green_heart: | javac | 0m 27s | the patch passed |
| +1 :green_heart: | shadedjars | 10m 43s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 19s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 50s | hbase-backup in the patch passed. |
| | | | 43m 44s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3246/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3246 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 98bfeae25cba 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 2b6a91a1da |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3246/3/testReport/ |
| Max. process+thread count | 4709 (vs. ulimit of 3) |
| modules | C: hbase-backup U: hbase-backup |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3246/3/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3244: HBASE-25869 WAL value compression
Apache-HBase commented on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837776421

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 30s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 53s | master passed |
| +1 :green_heart: | compile | 2m 12s | master passed |
| +1 :green_heart: | shadedjars | 8m 17s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 16s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 41s | the patch passed |
| +1 :green_heart: | compile | 2m 15s | the patch passed |
| +1 :green_heart: | javac | 2m 15s | the patch passed |
| +1 :green_heart: | shadedjars | 8m 4s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 10s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 46s | hbase-protocol-shaded in the patch passed. |
| +1 :green_heart: | unit | 1m 51s | hbase-common in the patch passed. |
| -1 :x: | unit | 150m 10s | hbase-server in the patch failed. |
| | | | 187m 30s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/6/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3244 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 5e05eb7fd3fa 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 2b6a91a1da |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/6/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/6/testReport/ |
| Max. process+thread count | 4782 (vs. ulimit of 3) |
| modules | C: hbase-protocol-shaded hbase-common hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/6/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3244: HBASE-25869 WAL value compression
Apache-HBase commented on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837823204

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 42s | master passed |
| +1 :green_heart: | compile | 2m 12s | master passed |
| +1 :green_heart: | shadedjars | 8m 15s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 15s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 37s | the patch passed |
| +1 :green_heart: | compile | 2m 12s | the patch passed |
| +1 :green_heart: | javac | 2m 12s | the patch passed |
| +1 :green_heart: | shadedjars | 8m 8s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 11s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 47s | hbase-protocol-shaded in the patch passed. |
| +1 :green_heart: | unit | 1m 47s | hbase-common in the patch passed. |
| -1 :x: | unit | 8m 3s | hbase-server in the patch failed. |
| | | | 44m 2s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3244 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 79e49e6c0241 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 2b6a91a1da |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/testReport/ |
| Max. process+thread count | 806 (vs. ulimit of 3) |
| modules | C: hbase-protocol-shaded hbase-common hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3244: HBASE-25869 WAL value compression
Apache-HBase commented on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837831402

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 4m 0s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 32s | master passed |
| +1 :green_heart: | compile | 2m 37s | master passed |
| +1 :green_heart: | shadedjars | 8m 13s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 22s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 13s | the patch passed |
| +1 :green_heart: | compile | 2m 39s | the patch passed |
| +1 :green_heart: | javac | 2m 39s | the patch passed |
| +1 :green_heart: | shadedjars | 8m 13s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 18s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 59s | hbase-protocol-shaded in the patch passed. |
| +1 :green_heart: | unit | 1m 59s | hbase-common in the patch passed. |
| -1 :x: | unit | 8m 48s | hbase-server in the patch failed. |
| | | | 51m 30s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3244 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 03fb346176f4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 2b6a91a1da |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/testReport/ |
| Max. process+thread count | 887 (vs. ulimit of 3) |
| modules | C: hbase-protocol-shaded hbase-common hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3246: HBASE-25870 while creating manifest, search only for ancestors instead of all of…
Apache-HBase commented on pull request #3246: URL: https://github.com/apache/hbase/pull/3246#issuecomment-837638994 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 6m 36s | master passed | | +1 :green_heart: | compile | 0m 41s | master passed | | +1 :green_heart: | checkstyle | 0m 17s | master passed | | +1 :green_heart: | spotbugs | 0m 44s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 37s | the patch passed | | +1 :green_heart: | compile | 0m 38s | the patch passed | | +1 :green_heart: | javac | 0m 38s | the patch passed | | +1 :green_heart: | checkstyle | 0m 14s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 17m 54s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 0m 51s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 14s | The patch does not generate ASF License warnings. 
| | | | 40m 5s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3246/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3246 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 71952380b46a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 2b6a91a1da | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 95 (vs. ulimit of 3) | | modules | C: hbase-backup U: hbase-backup | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3246/2/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] apurtell commented on pull request #3244: HBASE-25869 WAL value compression
apurtell commented on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837688241 Rolled up the first round of review feedback into d4d707d -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3202: HBASE-25803 Add compaction offload switch
Apache-HBase commented on pull request #3202: URL: https://github.com/apache/hbase/pull/3202#issuecomment-837872302 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 55s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ HBASE-25714 Compile Tests _ | | +0 :ok: | mvndep | 0m 32s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 10s | HBASE-25714 passed | | +1 :green_heart: | compile | 7m 15s | HBASE-25714 passed | | +1 :green_heart: | checkstyle | 2m 49s | HBASE-25714 passed | | +1 :green_heart: | spotbugs | 8m 49s | HBASE-25714 passed | | -0 :warning: | patch | 2m 1s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 8s | the patch passed | | +1 :green_heart: | compile | 7m 5s | the patch passed | | +1 :green_heart: | cc | 7m 5s | the patch passed | | +1 :green_heart: | javac | 7m 5s | the patch passed | | +1 :green_heart: | checkstyle | 0m 9s | The patch passed checkstyle in hbase-protocol-shaded | | +1 :green_heart: | checkstyle | 0m 27s | The patch passed checkstyle in hbase-client | | +1 :green_heart: | checkstyle | 1m 18s | hbase-server: The patch generated 0 new + 316 unchanged - 1 fixed = 316 total (was 317) | | +1 :green_heart: | checkstyle | 0m 42s | The patch passed checkstyle in hbase-thrift | | +1 :green_heart: | checkstyle | 0m 10s | The patch passed checkstyle in hbase-shell | | -0 :warning: | rubocop | 0m 20s | The patch generated 3 new + 652 unchanged - 0 fixed = 655 total (was 652) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 20m 2s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | hbaseprotoc | 2m 55s | the patch passed | | +1 :green_heart: | spotbugs | 9m 42s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 52s | The patch does not generate ASF License warnings. 
| | | | 82m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/5/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3202 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool rubocop | | uname | Linux 590ad1ba89eb 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / 7f32d6cfc5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | rubocop | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/5/artifact/yetus-general-check/output/diff-patch-rubocop.txt | | Max. process+thread count | 86 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift hbase-shell U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/5/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 rubocop=0.80.0 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] apurtell edited a comment on pull request #3244: HBASE-25869 WAL value compression
apurtell edited a comment on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837666043 I have some changes coming soon. Accepted @bharathv 's argument that unconditionally compressing values if value compression is enabled is fine even if some value cases may not compress because gains and losses are amortized over all values in the WAL file. Also addresses several points of @ndimiduk 's feedback by mooting the discussion. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] rahulLiving commented on pull request #3063: HBASE-25651: NORMALIZER_TARGET_REGION_SIZE needs a unit in its name
rahulLiving commented on pull request #3063: URL: https://github.com/apache/hbase/pull/3063#issuecomment-837813571 Hi @ndimiduk I need to test out this change once, after addressing some comments. I was out for some time, hence couldn't do it, and thus closed it for the time being. I'll get it to closure in a week or two. Thanks for following up. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on pull request #3244: HBASE-25869 WAL value compression
Apache9 commented on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837580362 > > So we will only compress value? > > This is an enhancement to existing WAL compression. As you know the existing WAL compression already compresses other aspects of WAL entries _except_ for the value. This patch adds support for compressing values too. > > > As we will do batching when writing WAL entries out, is it possible to compress when flushing? The data will be larger and compress may perform better. The structure of a WAL file will be multiple compressed blocks. > > This is not possible for two reasons: > > 1. WALCellCodec does not compress the WAL file in blocks. The design is edit by edit. I want to introduce value compression without re-engineering the whole WAL format. Perhaps our WAL file format is due for a redesign, but I would like to see that be a different issue. > 2. We flush the compressor at the end of every value to ensure each WALedit record persists all of the value data into the expected place. Otherwise the compressor would put some of the unflushed output of the previous value into the next/current value. But, we are not resetting the compressor. (That would be FULL_FLUSH. We are using SYNC_FLUSH.) By using the same Deflater instance for the whole WAL we already get the benefit you are thinking of. The (re-used) Deflater is able to build its dictionary across the contents of all of the values in the file, not just each value considered in isolation (that was the original patch but I pushed an improvement that aligns with this suggestion later), achieving a better compression. > > Way back in the distant past our WAL format was based on Hadoop's SequenceFile, which supported both record-by-record and block based compression, where the blocks would contain multiple records. 
I don't remember why we moved away from it but I imagine it was because if there are corruptions of the WAL, a record by record codec is able to skip over the corrupt record and we lose only the record (or as many records as are actually corrupt), but with a block format we would lose the whole block and all of the edits contained within that block, especially if compression or encryption is enabled. OK, so the idea is like all the values are written out like a stream, seems workable. Let me take a look at the code then. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
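The flush-per-record design discussed above can be sketched with `java.util.zip` directly: one long-lived `Deflater` compresses every value in the stream (so its dictionary builds up across records), and `SYNC_FLUSH` at each record boundary forces all of a value's compressed bytes into its own record without resetting that dictionary. This is a minimal illustration, not HBase's actual `WALCellCodec`; the class and the sample values are invented for the example.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class WalValueCompressionSketch {

    public static void main(String[] args) throws Exception {
        // One Deflater/Inflater pair lives for the whole "WAL file": the
        // compressor's dictionary accumulates across values, so later values
        // that repeat earlier content compress better.
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        Inflater inflater = new Inflater();

        byte[][] values = {
            "row1/cf:q -> a fairly repetitive value payload".getBytes("UTF-8"),
            "row2/cf:q -> a fairly repetitive value payload".getBytes("UTF-8"),
        };

        for (byte[] value : values) {
            // Compress one value into its own record. SYNC_FLUSH (not a
            // reset, which would be FULL_FLUSH) pushes all pending compressed
            // bytes out at the record boundary while keeping the shared
            // dictionary intact for the next value.
            ByteArrayOutputStream record = new ByteArrayOutputStream();
            deflater.setInput(value);
            byte[] buf = new byte[4096];
            int n;
            do {
                n = deflater.deflate(buf, 0, buf.length, Deflater.SYNC_FLUSH);
                record.write(buf, 0, n);
            } while (n == buf.length); // drain until output is exhausted

            // Decompress the record with the long-lived Inflater; because the
            // compressor was flushed, each record round-trips at its boundary
            // even though all records share one underlying zlib stream.
            inflater.setInput(record.toByteArray());
            byte[] out = new byte[value.length];
            int read = 0;
            while (read < out.length) {
                int r = inflater.inflate(out, read, out.length - read);
                if (r == 0) break;
                read += r;
            }
            if (!new String(out, "UTF-8").equals(new String(value, "UTF-8"))) {
                throw new AssertionError("round trip failed");
            }
            System.out.println("value bytes=" + value.length
                + " compressed record bytes=" + record.size());
        }
        deflater.end();
        inflater.end();
    }
}
```

Note that only the first record carries the zlib stream header, so a record can only be decoded by a reader that has consumed the records before it, which matches the per-file reader/compression-context design the comment describes.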
[jira] [Commented] (HBASE-25858) [branch-1] make hbase-thrift optional in hbase-assembly module
[ https://issues.apache.org/jira/browse/HBASE-25858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342273#comment-17342273 ] Reid Chan commented on HBASE-25858: --- Try merging, FYI [~apurtell] > [branch-1] make hbase-thrift optional in hbase-assembly module > -- > > Key: HBASE-25858 > URL: https://issues.apache.org/jira/browse/HBASE-25858 > Project: HBase > Issue Type: Task >Reporter: Reid Chan >Assignee: Reid Chan >Priority: Blocker > > {code} > [ERROR] Failed to execute goal on project hbase-assembly: Could not resolve > dependencies for project org.apache.hbase:hbase-assembly:pom:1.7.0: Failure > to find org.apache.hbase:hbase-thrift:jar:1.7.0 in > https://repository.apache.org/content/repositories/releases/ was cached in > the local repository, resolution will not be reattempted until the update > interval of apache release has elapsed or updates are forced -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #3202: HBASE-25803 Add compaction offload switch
Apache-HBase commented on pull request #3202: URL: https://github.com/apache/hbase/pull/3202#issuecomment-837765986 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 30s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-25714 Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 13s | HBASE-25714 passed | | +1 :green_heart: | compile | 3m 39s | HBASE-25714 passed | | +1 :green_heart: | shadedjars | 7m 54s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 22s | HBASE-25714 passed | | -0 :warning: | patch | 11m 11s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | -1 :x: | mvninstall | 2m 12s | root in the patch failed. | | -1 :x: | compile | 0m 45s | hbase-server in the patch failed. | | -0 :warning: | javac | 0m 45s | hbase-server in the patch failed. | | -1 :x: | shadedjars | 5m 53s | patch has 20 errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 26s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 59s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 1m 21s | hbase-client in the patch passed. | | -1 :x: | unit | 0m 46s | hbase-server in the patch failed. | | +1 :green_heart: | unit | 4m 23s | hbase-thrift in the patch passed. | | -1 :x: | unit | 7m 2s | hbase-shell in the patch failed. 
| | | | 49m 29s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3202 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 69c12fd02cfa 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / 7f32d6cfc5 | | Default Java | AdoptOpenJDK-11.0.10+9 | | mvninstall | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-jdk11-hadoop3-check/output/patch-mvninstall-root.txt | | compile | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-server.txt | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-server.txt | | shadedjars | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-shell.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/testReport/ | | Max. process+thread count | 2387 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift hbase-shell U: . 
| | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3202: HBASE-25803 Add compaction offload switch
Apache-HBase commented on pull request #3202: URL: https://github.com/apache/hbase/pull/3202#issuecomment-837775180 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 6m 36s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ HBASE-25714 Compile Tests _ | | +0 :ok: | mvndep | 0m 33s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 21s | HBASE-25714 passed | | +1 :green_heart: | compile | 7m 3s | HBASE-25714 passed | | +1 :green_heart: | checkstyle | 2m 48s | HBASE-25714 passed | | +1 :green_heart: | spotbugs | 8m 29s | HBASE-25714 passed | | -0 :warning: | patch | 2m 1s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | -1 :x: | mvninstall | 1m 58s | root in the patch failed. | | -1 :x: | compile | 0m 44s | hbase-server in the patch failed. | | -0 :warning: | cc | 0m 44s | hbase-server in the patch failed. | | -0 :warning: | javac | 0m 44s | hbase-server in the patch failed. 
| | +1 :green_heart: | checkstyle | 0m 8s | The patch passed checkstyle in hbase-protocol-shaded | | +1 :green_heart: | checkstyle | 0m 25s | The patch passed checkstyle in hbase-client | | +1 :green_heart: | checkstyle | 1m 15s | hbase-server: The patch generated 0 new + 316 unchanged - 1 fixed = 316 total (was 317) | | +1 :green_heart: | checkstyle | 0m 40s | The patch passed checkstyle in hbase-thrift | | +1 :green_heart: | checkstyle | 0m 9s | The patch passed checkstyle in hbase-shell | | -0 :warning: | rubocop | 0m 20s | The patch generated 3 new + 652 unchanged - 0 fixed = 655 total (was 652) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | -1 :x: | hadoopcheck | 2m 8s | The patch causes 20 errors with Hadoop v3.1.2. | | -1 :x: | hadoopcheck | 4m 19s | The patch causes 20 errors with Hadoop v3.2.1. | | -1 :x: | hadoopcheck | 6m 32s | The patch causes 20 errors with Hadoop v3.3.0. | | -1 :x: | hbaseprotoc | 0m 34s | hbase-server in the patch failed. | | -1 :x: | spotbugs | 0m 27s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 46s | The patch does not generate ASF License warnings. 
| | | | 59m 17s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3202 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool rubocop | | uname | Linux b10c4594217e 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / 7f32d6cfc5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | mvninstall | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-general-check/output/patch-mvninstall-root.txt | | compile | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-general-check/output/patch-compile-hbase-server.txt | | cc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-general-check/output/patch-compile-hbase-server.txt | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-general-check/output/patch-compile-hbase-server.txt | | rubocop | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-general-check/output/diff-patch-rubocop.txt | | hadoopcheck | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-general-check/output/patch-javac-3.1.2.txt | | hadoopcheck | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3202/4/artifact/yetus-general-check/output/patch-javac-3.2.1.txt | | hadoopcheck |
[GitHub] [hbase] Apache-HBase commented on pull request #3244: HBASE-25869 WAL value compression
Apache-HBase commented on pull request #3244: URL: https://github.com/apache/hbase/pull/3244#issuecomment-837847210 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 4s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 39s | master passed | | +1 :green_heart: | compile | 5m 25s | master passed | | +1 :green_heart: | checkstyle | 1m 41s | master passed | | +1 :green_heart: | spotbugs | 6m 37s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 38s | the patch passed | | +1 :green_heart: | compile | 5m 23s | the patch passed | | +1 :green_heart: | cc | 5m 23s | the patch passed | | -0 :warning: | javac | 0m 47s | hbase-common generated 1 new + 158 unchanged - 1 fixed = 159 total (was 159) | | +1 :green_heart: | checkstyle | 1m 41s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 5s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | hbaseprotoc | 1m 59s | the patch passed | | +1 :green_heart: | spotbugs | 7m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 36s | The patch does not generate ASF License warnings. 
| | | | 65m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3244 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool | | uname | Linux af1156982b7d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 2b6a91a1da | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/artifact/yetus-general-check/output/diff-compile-javac-hbase-common.txt | | Max. process+thread count | 96 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-common hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3244/7/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org