[GitHub] [hbase] Apache-HBase commented on pull request #5296: HBASE-27940 Midkey metadata in root index block would always be ignor…
Apache-HBase commented on PR #5296:
URL: https://github.com/apache/hbase/pull/5296#issuecomment-1595614096

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 25s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 50s | master passed |
| +1 :green_heart: | compile | 2m 23s | master passed |
| +1 :green_heart: | checkstyle | 0m 31s | master passed |
| +1 :green_heart: | spotless | 0m 40s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 24s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 2m 27s | the patch passed |
| +1 :green_heart: | compile | 2m 23s | the patch passed |
| +1 :green_heart: | javac | 2m 23s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 33s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 9m 30s | Patch does not cause any errors with Hadoop 3.2.4 3.3.5. |
| +1 :green_heart: | spotless | 0m 39s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 30s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 9s | The patch does not generate ASF License warnings. |
| | | 31m 24s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5296/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5296 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux b35b1447e437 5.4.0-1097-aws #105~18.04.1-Ubuntu SMP Mon Feb 13 17:50:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 622f4ae862 |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 79 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5296/1/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (HBASE-27940) Midkey metadata in root index block would always be ignored by BlockIndexReader.readMultiLevelIndexRoot
[ https://issues.apache.org/jira/browse/HBASE-27940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chenglei reassigned HBASE-27940:
--------------------------------

    Assignee: chenglei

> Midkey metadata in root index block would always be ignored by BlockIndexReader.readMultiLevelIndexRoot
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-27940
>                 URL: https://issues.apache.org/jira/browse/HBASE-27940
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 3.0.0-alpha-4, 2.4.17, 2.5.5, 4.0.0-alpha-1
>            Reporter: chenglei
>            Assignee: chenglei
>            Priority: Major
>
> After HBASE-27053, the checksum is removed from the {{HFileBlock}} {{ByteBuff}} in {{FSReaderImpl.readBlockDataInternal}} once it is verified, so {{HFileBlock.buf}} no longer includes checksum bytes. However, {{BlockIndexReader.readMultiLevelIndexRoot}}, after reading the root index entries, still subtracts the checksum bytes when checking whether the midkey metadata exists, so the midkey metadata is always ignored:
> {code:java}
> public void readMultiLevelIndexRoot(HFileBlock blk, final int numEntries) throws IOException {
>   DataInputStream in = readRootIndex(blk, numEntries);
>   // after reading the root index the checksum bytes have to
>   // be subtracted to know if the mid key exists.
>   int checkSumBytes = blk.totalChecksumBytes();
>   if ((in.available() - checkSumBytes) < MID_KEY_METADATA_SIZE) {
>     // No mid-key metadata available.
>     return;
>   }
>   midLeafBlockOffset = in.readLong();
>   midLeafBlockOnDiskSize = in.readInt();
>   midKeyEntry = in.readInt();
> }
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
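[Editor's note] The flawed check in the issue above is pure integer arithmetic, so it can be isolated in a few lines. The standalone sketch below (hypothetical class and method names, not HBase source) shows that once the block buffer no longer carries checksum bytes, any positive `totalChecksumBytes()` value makes the old condition reject even a buffer holding exactly the 16-byte midkey metadata; presumably the fix is simply to stop subtracting the checksum bytes.

```java
// Standalone sketch (hypothetical names, not HBase source) of the
// HBASE-27940 arithmetic: after HBASE-27053 the block buffer no longer
// contains checksum bytes, so subtracting them hides the midkey metadata.
public class MidKeyCheckDemo {

    // long midLeafBlockOffset + int midLeafBlockOnDiskSize + int midKeyEntry = 16 bytes
    static final int MID_KEY_METADATA_SIZE = Long.BYTES + 2 * Integer.BYTES;

    // Pre-fix condition: checksum bytes are still subtracted from available().
    static boolean hasMidKeyOld(int availableBytes, int checksumBytes) {
        return (availableBytes - checksumBytes) >= MID_KEY_METADATA_SIZE;
    }

    // Presumed fix: the buffer holds only payload, so compare directly.
    static boolean hasMidKeyFixed(int availableBytes) {
        return availableBytes >= MID_KEY_METADATA_SIZE;
    }

    public static void main(String[] args) {
        int available = MID_KEY_METADATA_SIZE; // exactly the midkey metadata remains
        int checksumBytes = 16;                // any positive value defeats the old check
        System.out.println("old check finds midkey:   " + hasMidKeyOld(available, checksumBytes)); // false
        System.out.println("fixed check finds midkey: " + hasMidKeyFixed(available));              // true
    }
}
```

With `available == 16` and any nonzero checksum size, the old condition evaluates `0 < 16` and bails out early, which is exactly why the midkey metadata was never read.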
[GitHub] [hbase] comnetwork opened a new pull request, #5296: HBASE-27940 Midkey metadata in root index block would always be ignor…
comnetwork opened a new pull request, #5296:
URL: https://github.com/apache/hbase/pull/5296

…ed by BlockIndexReader.readMultiLevelIndexRoot

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-27940) Midkey metadata in root index block would always be ignored by BlockIndexReader.readMultiLevelIndexRoot
chenglei created HBASE-27940:
---------------------------------

             Summary: Midkey metadata in root index block would always be ignored by BlockIndexReader.readMultiLevelIndexRoot
                 Key: HBASE-27940
                 URL: https://issues.apache.org/jira/browse/HBASE-27940
             Project: HBase
          Issue Type: Bug
          Components: HFile
    Affects Versions: 2.5.5, 2.4.17, 3.0.0-alpha-4, 4.0.0-alpha-1
            Reporter: chenglei

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Commented] (HBASE-27924) Remove duplicate code for NettyHBaseSaslRpcServerHandler and make the sentByte metrics more accurate
[ https://issues.apache.org/jira/browse/HBASE-27924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733687#comment-17733687 ]

chenglei commented on HBASE-27924:
----------------------------------

Thanks [~zhangduo] for reviewing and merging!

> Remove duplicate code for NettyHBaseSaslRpcServerHandler and make the sentByte metrics more accurate
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-27924
>                 URL: https://issues.apache.org/jira/browse/HBASE-27924
>             Project: HBase
>          Issue Type: Bug
>          Components: netty, rpc, security
>    Affects Versions: 2.6.0, 3.0.0-alpha-4, 4.0.0-alpha-1
>            Reporter: chenglei
>            Assignee: chenglei
>            Priority: Major
>             Fix For: 2.6.0, 3.0.0-beta-1
>
> {{NettyHBaseSaslRpcServerHandler.doResponse}} and {{ServerRpcConnection.doRawSaslReply}} are very similar; I think we could replace {{NettyHBaseSaslRpcServerHandler.doResponse}} with {{ServerRpcConnection.doRawSaslReply}}:
> {code:java}
> private void doResponse(ChannelHandlerContext ctx, SaslStatus status, Writable rv,
>     String errorClass, String error) throws IOException {
>   // In my testing, have noticed that sasl messages are usually
>   // in the ballpark of 100-200. That's why the initial capacity is 256.
>   ByteBuf resp = ctx.alloc().buffer(256);
>   try (ByteBufOutputStream out = new ByteBufOutputStream(resp)) {
>     out.writeInt(status.state); // write status
>     if (status == SaslStatus.SUCCESS) {
>       rv.write(out);
>     } else {
>       WritableUtils.writeString(out, errorClass);
>       WritableUtils.writeString(out, error);
>     }
>   }
>   NettyFutureUtils.safeWriteAndFlush(ctx, resp);
> }
> {code}
> {code:java}
> protected final void doRawSaslReply(SaslStatus status, Writable rv, String errorClass,
>     String error) throws IOException {
>   BufferChain bc;
>   // In my testing, have noticed that sasl messages are usually
>   // in the ballpark of 100-200. That's why the initial capacity is 256.
>   try (ByteBufferOutputStream saslResponse = new ByteBufferOutputStream(256);
>       DataOutputStream out = new DataOutputStream(saslResponse)) {
>     out.writeInt(status.state); // write status
>     if (status == SaslStatus.SUCCESS) {
>       rv.write(out);
>     } else {
>       WritableUtils.writeString(out, errorClass);
>       WritableUtils.writeString(out, error);
>     }
>     bc = new BufferChain(saslResponse.getByteBuffer());
>   }
>   doRespond(() -> bc);
> }
> {code}
> At the same time, {{NettyHBaseSaslRpcServerHandler.doResponse}} sends the ByteBuf directly, not the unified {{RpcResponse}}, so it would not be handled by the logic in {{NettyRpcServerResponseEncoder.write}}, which updates {{MetricsHBaseServer.sentBytes}}. Using {{ServerRpcConnection.doRawSaslReply}} uniformly would make {{MetricsHBaseServer.sentBytes}} more accurate.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
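[Editor's note] The consolidation proposed in the issue above can be illustrated with a standalone sketch (hypothetical names such as `encodeReply` and `send`, not the actual HBase classes): once both former call sites funnel through a single serialize-and-send path, the byte counter standing in for `MetricsHBaseServer.sentBytes` is updated for every reply, which is the accuracy argument the issue makes.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Standalone sketch (hypothetical names, not HBase source): one shared
// serializer for SASL-style replies so every reply passes through the same
// send path and is counted exactly once.
public class SaslReplySketch {

    static final int SUCCESS = 0;
    static long sentBytes = 0; // stands in for MetricsHBaseServer.sentBytes

    // Single encoder: status word, then either the payload or the error strings.
    static byte[] encodeReply(int status, byte[] payload, String errorClass, String error) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream(256);
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeInt(status);
            if (status == SUCCESS) {
                out.write(payload);
            } else {
                out.writeUTF(errorClass);
                out.writeUTF(error);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for an in-memory stream
        }
        return buf.toByteArray();
    }

    // Both former call sites now funnel through here, so accounting is uniform.
    static void send(byte[] reply) {
        sentBytes += reply.length;
        // ...the actual channel write is elided in this sketch...
    }

    public static void main(String[] args) {
        send(encodeReply(SUCCESS, new byte[] {1, 2, 3}, null, null));
        System.out.println("sentBytes = " + sentBytes); // 4-byte status + 3-byte payload = 7
    }
}
```

In the pre-fix layout, the Netty handler wrote its ByteBuf directly to the channel, bypassing the encoder that did the counting; routing everything through one path removes both the duplicate serialization code and the metrics gap.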
[jira] [Updated] (HBASE-27924) Remove duplicate code for NettyHBaseSaslRpcServerHandler and make the sentByte metrics more accurate
[ https://issues.apache.org/jira/browse/HBASE-27924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chenglei updated HBASE-27924:
-----------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (HBASE-27924) Remove duplicate code for NettyHBaseSaslRpcServerHandler and make the sentByte metrics more accurate
[ https://issues.apache.org/jira/browse/HBASE-27924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chenglei updated HBASE-27924:
-----------------------------
    Fix Version/s: 2.6.0
                   3.0.0-beta-1

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[GitHub] [hbase] Apache-HBase commented on pull request #5294: HBASE-27904: A random data generator tool leveraging hbase bulk load
Apache-HBase commented on PR #5294:
URL: https://github.com/apache/hbase/pull/5294#issuecomment-1595489952

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 39s | Docker mode activated. |
| -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 20s | branch-2 passed |
| +1 :green_heart: | compile | 1m 32s | branch-2 passed |
| +1 :green_heart: | shadedjars | 4m 42s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 35s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 5s | the patch passed |
| +1 :green_heart: | compile | 1m 26s | the patch passed |
| +1 :green_heart: | javac | 1m 26s | the patch passed |
| +1 :green_heart: | shadedjars | 4m 39s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 35s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 366m 0s | root in the patch passed. |
| | | 392m 34s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5294/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5294 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 6a063383bbee 5.4.0-1099-aws #107~18.04.1-Ubuntu SMP Fri Mar 17 16:49:05 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / b7fa986630 |
| Default Java | Temurin-1.8.0_352-b08 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5294/2/testReport/ |
| Max. process+thread count | 7943 (vs. ulimit of 3) |
| modules | C: hbase-mapreduce . U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5294/2/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #5266: HBASE-27902 New async admin api to invoke coproc on multiple servers
Apache-HBase commented on PR #5266:
URL: https://github.com/apache/hbase/pull/5266#issuecomment-1595406993

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
| -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 39s | master passed |
| +1 :green_heart: | compile | 0m 45s | master passed |
| +1 :green_heart: | shadedjars | 5m 52s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 31s | master passed |
| -0 :warning: | patch | 6m 38s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 19s | the patch passed |
| +1 :green_heart: | compile | 0m 41s | the patch passed |
| +1 :green_heart: | javac | 0m 41s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 36s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 28s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 54s | hbase-client in the patch passed. |
| +1 :green_heart: | unit | 3m 54s | hbase-endpoint in the patch passed. |
| | | 29m 0s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5266/10/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5266 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux d78cf725be14 5.4.0-1101-aws #109~18.04.1-Ubuntu SMP Mon Apr 24 20:40:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 622f4ae862 |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5266/10/testReport/ |
| Max. process+thread count | 1319 (vs. ulimit of 3) |
| modules | C: hbase-client hbase-endpoint U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5266/10/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #5266: HBASE-27902 New async admin api to invoke coproc on multiple servers
Apache-HBase commented on PR #5266: URL: https://github.com/apache/hbase/pull/5266#issuecomment-1595406572

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 47s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 45s | master passed |
| +1 :green_heart: | compile | 0m 59s | master passed |
| +1 :green_heart: | checkstyle | 0m 26s | master passed |
| +1 :green_heart: | spotless | 0m 42s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 7s | master passed |
| -0 :warning: | patch | 0m 35s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 36s | the patch passed |
| +1 :green_heart: | compile | 0m 54s | the patch passed |
| +1 :green_heart: | javac | 0m 54s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 25s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 8m 58s | Patch does not cause any errors with Hadoop 3.2.4 3.3.5. |
| +1 :green_heart: | spotless | 0m 40s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 23s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | | 28m 26s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5266/10/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5266 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux 22cf3d78b2c7 5.4.0-148-generic #165-Ubuntu SMP Tue Apr 18 08:53:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 622f4ae862 |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 79 (vs. ulimit of 3) |
| modules | C: hbase-client hbase-endpoint U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5266/10/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #5266: HBASE-27902 New async admin api to invoke coproc on multiple servers
Apache-HBase commented on PR #5266: URL: https://github.com/apache/hbase/pull/5266#issuecomment-1595402386

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 51s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 26s | master passed |
| +1 :green_heart: | compile | 0m 34s | master passed |
| +1 :green_heart: | shadedjars | 4m 36s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | master passed |
| -0 :warning: | patch | 5m 18s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 20s | the patch passed |
| +1 :green_heart: | compile | 0m 33s | the patch passed |
| +1 :green_heart: | javac | 0m 33s | the patch passed |
| +1 :green_heart: | shadedjars | 4m 33s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 25s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 17s | hbase-client in the patch passed. |
| +1 :green_heart: | unit | 3m 16s | hbase-endpoint in the patch passed. |
| | | | 23m 13s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5266/10/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5266 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux f0c78533b5f1 5.4.0-148-generic #165-Ubuntu SMP Tue Apr 18 08:53:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 622f4ae862 |
| Default Java | Temurin-1.8.0_352-b08 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5266/10/testReport/ |
| Max. process+thread count | 1481 (vs. ulimit of 3) |
| modules | C: hbase-client hbase-endpoint U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5266/10/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #5295: HBASE-27902 New async admin api to invoke coproc on multiple servers
Apache-HBase commented on PR #5295: URL: https://github.com/apache/hbase/pull/5295#issuecomment-1595389675

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 14s | Docker mode activated. |
| -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 28s | branch-2 passed |
| +1 :green_heart: | compile | 0m 44s | branch-2 passed |
| +1 :green_heart: | shadedjars | 5m 40s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 32s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 7s | the patch passed |
| +1 :green_heart: | compile | 0m 43s | the patch passed |
| +1 :green_heart: | javac | 0m 43s | the patch passed |
| +1 :green_heart: | shadedjars | 4m 50s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 30s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 8m 17s | hbase-client in the patch passed. |
| +1 :green_heart: | unit | 2m 40s | hbase-endpoint in the patch passed. |
| | | | 33m 46s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5295/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5295 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 9d4fc74a2a80 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / b7fa986630 |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5295/3/testReport/ |
| Max. process+thread count | 1738 (vs. ulimit of 3) |
| modules | C: hbase-client hbase-endpoint U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5295/3/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #5295: HBASE-27902 New async admin api to invoke coproc on multiple servers
Apache-HBase commented on PR #5295: URL: https://github.com/apache/hbase/pull/5295#issuecomment-1595385234

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 42s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 58s | branch-2 passed |
| +1 :green_heart: | compile | 1m 1s | branch-2 passed |
| +1 :green_heart: | checkstyle | 0m 23s | branch-2 passed |
| +1 :green_heart: | spotless | 0m 42s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 10s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 29s | the patch passed |
| +1 :green_heart: | compile | 0m 59s | the patch passed |
| +1 :green_heart: | javac | 0m 59s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 21s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 13m 20s | Patch does not cause any errors with Hadoop 2.10.2 or 3.2.4 3.3.5. |
| +1 :green_heart: | spotless | 0m 39s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 1m 23s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | | 28m 53s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5295/3/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5295 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile |
| uname | Linux c22ca2e97803 5.4.0-1097-aws #105~18.04.1-Ubuntu SMP Mon Feb 13 17:50:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / b7fa986630 |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Max. process+thread count | 80 (vs. ulimit of 3) |
| modules | C: hbase-client hbase-endpoint U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5295/3/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #5295: HBASE-27902 New async admin api to invoke coproc on multiple servers
Apache-HBase commented on PR #5295: URL: https://github.com/apache/hbase/pull/5295#issuecomment-1595384642

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
| -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 6s | branch-2 passed |
| +1 :green_heart: | compile | 0m 35s | branch-2 passed |
| +1 :green_heart: | shadedjars | 4m 20s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 10s | the patch passed |
| +1 :green_heart: | compile | 0m 36s | the patch passed |
| +1 :green_heart: | javac | 0m 36s | the patch passed |
| +1 :green_heart: | shadedjars | 4m 19s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 7m 39s | hbase-client in the patch passed. |
| +1 :green_heart: | unit | 2m 40s | hbase-endpoint in the patch passed. |
| | | | 27m 52s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5295/3/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5295 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 5599b130f510 5.4.0-148-generic #165-Ubuntu SMP Tue Apr 18 08:53:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / b7fa986630 |
| Default Java | Temurin-1.8.0_352-b08 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5295/3/testReport/ |
| Max. process+thread count | 1859 (vs. ulimit of 3) |
| modules | C: hbase-client hbase-endpoint U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5295/3/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #5294: HBASE-27904: A random data generator tool leveraging hbase bulk load
Apache-HBase commented on PR #5294: URL: https://github.com/apache/hbase/pull/5294#issuecomment-1595367264

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 24s | Docker mode activated. |
| -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 45s | branch-2 passed |
| +1 :green_heart: | compile | 1m 58s | branch-2 passed |
| +1 :green_heart: | shadedjars | 4m 44s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 2m 10s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 33s | the patch passed |
| +1 :green_heart: | compile | 1m 51s | the patch passed |
| +1 :green_heart: | javac | 1m 51s | the patch passed |
| +1 :green_heart: | shadedjars | 4m 44s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 2m 8s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 257m 58s | root in the patch passed. |
| | | | 287m 39s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5294/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5294 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux ff83fd6ed21e 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / b7fa986630 |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5294/2/testReport/ |
| Max. process+thread count | 7933 (vs. ulimit of 3) |
| modules | C: hbase-mapreduce . U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5294/2/console |
| versions | git=2.34.1 maven=3.8.6 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-27918) Remove useless content for branch-3
[ https://issues.apache.org/jira/browse/HBASE-27918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733632#comment-17733632 ]

Hudson commented on HBASE-27918:

Results for branch branch-3 [build #4 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/4/]: (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/4/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/4/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/4/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Remove useless content for branch-3
> ---
>
> Key: HBASE-27918
> URL: https://issues.apache.org/jira/browse/HBASE-27918
> Project: HBase
> Issue Type: Sub-task
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Fix For: 3.0.0-beta-1
>
> For example, the ref guide related stuff.
> Let's align with branch-2.

--
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-27917) Set version to 4.0.0-alpha-1-SNAPSHOT on master
[ https://issues.apache.org/jira/browse/HBASE-27917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733626#comment-17733626 ]

Hudson commented on HBASE-27917:

Results for branch master [build #858 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/858/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/858/General_20Nightly_20Build_20Report/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/858/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/858/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Set version to 4.0.0-alpha-1-SNAPSHOT on master
> ---
>
> Key: HBASE-27917
> URL: https://issues.apache.org/jira/browse/HBASE-27917
> Project: HBase
> Issue Type: Sub-task
> Components: build, pom
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Priority: Major
> Fix For: 4.0.0-alpha-1
[jira] [Commented] (HBASE-27933) Stable version outdated on https://hbase.apache.org/downloads.html
[ https://issues.apache.org/jira/browse/HBASE-27933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733625#comment-17733625 ]

Hudson commented on HBASE-27933:

Results for branch master [build #858 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/858/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/858/General_20Nightly_20Build_20Report/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/858/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/858/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Stable version outdated on https://hbase.apache.org/downloads.html
> ---
>
> Key: HBASE-27933
> URL: https://issues.apache.org/jira/browse/HBASE-27933
> Project: HBase
> Issue Type: Task
> Affects Versions: 2.5.4
> Reporter: Dieter De Paepe
> Assignee: Dieter De Paepe
> Priority: Minor
> Fix For: 4.0.0-alpha-1
>
> In HBASE-27849, the stable version of HBase was updated to 2.5.x. This updated the [https://downloads.apache.org/hbase/] page.
> However, the download page [https://hbase.apache.org/downloads.html] still refers to 2.4.x as the stable version.
[GitHub] [hbase] Apache-HBase commented on pull request #5294: HBASE-27904: A random data generator tool leveraging hbase bulk load
Apache-HBase commented on PR #5294: URL: https://github.com/apache/hbase/pull/5294#issuecomment-1595084101

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 45s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 2m 40s | branch-2 passed |
| +1 :green_heart: | compile | 4m 59s | branch-2 passed |
| +1 :green_heart: | checkstyle | 1m 8s | branch-2 passed |
| -0 :warning: | mvnsite | 2m 0s | root in branch-2 failed. |
| +1 :green_heart: | spotless | 0m 42s | branch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 7m 36s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 33s | the patch passed |
| +1 :green_heart: | compile | 4m 55s | the patch passed |
| +1 :green_heart: | javac | 4m 55s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 5s | the patch passed |
| -0 :warning: | mvnsite | 1m 57s | root in the patch failed. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 13m 34s | Patch does not cause any errors with Hadoop 2.10.2 or 3.2.4 3.3.5. |
| +1 :green_heart: | spotless | 0m 41s | patch has no errors when running spotless:check. |
| +1 :green_heart: | spotbugs | 7m 35s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | | 54m 57s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5294/2/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/5294 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile mvnsite |
| uname | Linux 86fc8aff12d5 5.4.0-1097-aws #105~18.04.1-Ubuntu SMP Mon Feb 13 17:50:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / b7fa986630 |
| Default Java | Eclipse Adoptium-11.0.17+8 |
| mvnsite | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5294/2/artifact/yetus-general-check/output/branch-mvnsite-root.txt |
| mvnsite | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5294/2/artifact/yetus-general-check/output/patch-mvnsite-root.txt |
| Max. process+thread count | 178 (vs. ulimit of 3) |
| modules | C: hbase-mapreduce . U: . |
| Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5294/2/console |
| versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] virajjasani commented on a diff in pull request #5293: HBASE-27892 Report memstore on-heap and off-heap size as jmx metrics
virajjasani commented on code in PR #5293: URL: https://github.com/apache/hbase/pull/5293#discussion_r1232567291

## hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java:

    @@ -317,12 +317,15 @@ private void tune() {
         unblockedFlushCnt = unblockedFlushCount.getAndSet(0);
         tunerContext.setUnblockedFlushCount(unblockedFlushCnt);
         metricsHeapMemoryManager.updateUnblockedFlushCount(unblockedFlushCnt);
    -    // TODO : add support for offheap metrics
         tunerContext.setCurBlockCacheUsed((float) blockCache.getCurrentSize() / maxHeapSize);
         metricsHeapMemoryManager.setCurBlockCacheSizeGauge(blockCache.getCurrentSize());
    +    long globalMemstoreDataSize = regionServerAccounting.getGlobalMemStoreDataSize();
         long globalMemstoreHeapSize = regionServerAccounting.getGlobalMemStoreHeapSize();
    +    long globalMemStoreOffHeapSize = regionServerAccounting.getGlobalMemStoreOffHeapSize();
         tunerContext.setCurMemStoreUsed((float) globalMemstoreHeapSize / maxHeapSize);
    -    metricsHeapMemoryManager.setCurMemStoreSizeGauge(globalMemstoreHeapSize);

Review Comment: I was not aware either; I realized these metrics exist, and that this is likely a bug, only after Jing created this PR. I also need to do some digging here. Thanks for pointing out `sub=Server` @bbeaudreault. FYI @jinggou
[GitHub] [hbase] Himanshu-g81 commented on pull request #5280: HBASE-27904: A random data generator tool leveraging hbase bulk load
Himanshu-g81 commented on PR #5280: URL: https://github.com/apache/hbase/pull/5280#issuecomment-1595031294

Thank you @virajjasani!!
[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5293: HBASE-27892 Report memstore on-heap and off-heap size as jmx metrics
bbeaudreault commented on code in PR #5293: URL: https://github.com/apache/hbase/pull/5293#discussion_r1232545176

## hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java:

    @@ -317,12 +317,15 @@ private void tune() {
         unblockedFlushCnt = unblockedFlushCount.getAndSet(0);
         tunerContext.setUnblockedFlushCount(unblockedFlushCnt);
         metricsHeapMemoryManager.updateUnblockedFlushCount(unblockedFlushCnt);
    -    // TODO : add support for offheap metrics
         tunerContext.setCurBlockCacheUsed((float) blockCache.getCurrentSize() / maxHeapSize);
         metricsHeapMemoryManager.setCurBlockCacheSizeGauge(blockCache.getCurrentSize());
    +    long globalMemstoreDataSize = regionServerAccounting.getGlobalMemStoreDataSize();
         long globalMemstoreHeapSize = regionServerAccounting.getGlobalMemStoreHeapSize();
    +    long globalMemStoreOffHeapSize = regionServerAccounting.getGlobalMemStoreOffHeapSize();
         tunerContext.setCurMemStoreUsed((float) globalMemstoreHeapSize / maxHeapSize);
    -    metricsHeapMemoryManager.setCurMemStoreSizeGauge(globalMemstoreHeapSize);

Review Comment: Sorry for the confusion, I didn't realize these metrics existed. Maybe they are better? I need to read more.
[GitHub] [hbase] bbeaudreault commented on a diff in pull request #5293: HBASE-27892 Report memstore on-heap and off-heap size as jmx metrics
bbeaudreault commented on code in PR #5293: URL: https://github.com/apache/hbase/pull/5293#discussion_r1232544851

## hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java:

    @@ -317,12 +317,15 @@ private void tune() {
         unblockedFlushCnt = unblockedFlushCount.getAndSet(0);
         tunerContext.setUnblockedFlushCount(unblockedFlushCnt);
         metricsHeapMemoryManager.updateUnblockedFlushCount(unblockedFlushCnt);
    -    // TODO : add support for offheap metrics
         tunerContext.setCurBlockCacheUsed((float) blockCache.getCurrentSize() / maxHeapSize);
         metricsHeapMemoryManager.setCurBlockCacheSizeGauge(blockCache.getCurrentSize());
    +    long globalMemstoreDataSize = regionServerAccounting.getGlobalMemStoreDataSize();
         long globalMemstoreHeapSize = regionServerAccounting.getGlobalMemStoreHeapSize();
    +    long globalMemStoreOffHeapSize = regionServerAccounting.getGlobalMemStoreOffHeapSize();
         tunerContext.setCurMemStoreUsed((float) globalMemstoreHeapSize / maxHeapSize);
    -    metricsHeapMemoryManager.setCurMemStoreSizeGauge(globalMemstoreHeapSize);

Review Comment: Ah interesting. So I wasn't talking about these metrics at all. These metrics show up in JMX under `sub=Memory` (which for us is all 0's for some reason, and we don't use them). The metrics I was referring to are in `sub=Server`, and also in the per-region and per-table metrics. These also have a `memStoreSize` field, and it is derived from the DataSize rather than the HeapSize. These are calculated in a few places; you have to sort of dig in based on usages of `MetricsRegionServerSource.MEMSTORE_SIZE`.
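The `sub=Memory` vs `sub=Server` distinction above can be inspected directly over JMX. A minimal stand-alone sketch (plain JDK, no HBase dependency; the bean name follows the standard Hadoop metrics2 naming convention, and the query will naturally come back empty unless it is run inside, or connected to, a live region server JVM):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxMemstoreProbe {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hadoop metrics2 beans are registered as
        // "Hadoop:service=HBase,name=RegionServer,sub=<source>";
        // the discussion above concerns the sub=Server source.
        ObjectName pattern =
            new ObjectName("Hadoop:service=HBase,name=RegionServer,sub=Server");
        Set<ObjectName> found = server.queryNames(pattern, null);
        System.out.println(found.isEmpty()
            ? "no RegionServer bean in this JVM (expected outside HBase)"
            : "found: " + found);
    }
}
```

The same pattern with `sub=Memory` would list the HeapMemoryManager gauges instead; attribute names on a live server can then be read with `MBeanServer.getAttribute`.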
[GitHub] [hbase] virajjasani commented on pull request #5294: HBASE-27904: A random data generator tool leveraging hbase bulk load
virajjasani commented on PR #5294: URL: https://github.com/apache/hbase/pull/5294#issuecomment-1595002652 triggered one more build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-27904) A random data generator tool leveraging bulk load.
[ https://issues.apache.org/jira/browse/HBASE-27904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733611#comment-17733611 ] Viraj Jasani commented on HBASE-27904: -- merged changes to master and branch-3, awaiting one build result before merging branch-2 PR > A random data generator tool leveraging bulk load. > -- > > Key: HBASE-27904 > URL: https://issues.apache.org/jira/browse/HBASE-27904 > Project: HBase > Issue Type: New Feature > Components: util >Reporter: Himanshu Gwalani >Assignee: Himanshu Gwalani >Priority: Major > Fix For: 2.6.0, 3.0.0-beta-1 > > > As of now, there is no data generator tool in HBase leveraging bulk load. > Since bulk load skips the client write path, it's much faster to generate data > and use it for load/performance tests where client writes are not a requirement. > {*}Example{*}: Any tooling over HBase that needs x TBs of HBase table data for load > testing. > {*}Requirements{*}: > 1. Tooling should generate RANDOM data on the fly and should not require any > pre-generated data as CSV/XML files as input. > 2. Tooling should support pre-split tables (number of splits to be taken as > input). > 3. Data should be UNIFORMLY distributed across all regions of the table. > *High-level Steps* > 1. A table will be created (pre-split with the number of splits as input). > 2. The mapper of a custom MapReduce job will generate random key-value pairs > and ensure that they are equally distributed across all regions of the table. > 3. > [HFileOutputFormat2|https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java] > will be used to add a reducer to the MR job and create HFiles based on the > key-value pairs generated by the mapper. > 4.
Bulk load those HFiles into the respective regions of the table using > [LoadIncrementalHFiles|https://hbase.apache.org/2.2/devapidocs/org/apache/hadoop/hbase/tool/LoadIncrementalHFiles.html] > *Results* > We built a POC of this tool in our organization and tested it with an 11-node > HBase cluster (running HBase + Hadoop services). The tool > generated: > 1. *100 GB* of data in *6 minutes* > 2. *340 GB* of data in *13 minutes* > 3. *3.5 TB* of data in *3 hours and 10 minutes* > *Usage* > hbase org.apache.hadoop.hbase.util.bulkdatagenerator.BulkDataGeneratorTool > -mapper-count 100 -table TEST_TABLE_1 -rows-per-mapper 100 -split-count > 100 -delete-if-exist -table-options "NORMALIZATION_ENABLED=false" > -- This message was sent by Atlassian Jira (v8.20.10#820010)
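The uniformity requirement in the high-level steps above (random keys spread evenly across all regions of a pre-split table) can be sketched as follows. The two-digit-prefix split scheme is an assumption for illustration only; it is not necessarily the key scheme BulkDataGeneratorTool actually uses:

```java
import java.util.Random;

// Sketch of generating row keys uniformly across a pre-split table.
// Assumes the table is split on fixed-width numeric prefixes ("00".."99"),
// so drawing the region index uniformly guarantees an even spread of keys
// across regions. Illustrative only; not the tool's actual key scheme.
public class UniformKeyGenerator {
    private final Random random = new Random();
    private final int splitCount;

    public UniformKeyGenerator(int splitCount) {
        this.splitCount = splitCount;
    }

    public String nextRowKey() {
        // Uniform region prefix -> uniform distribution across regions.
        int region = random.nextInt(splitCount);
        // Random suffix makes keys unique within a region.
        int suffix = random.nextInt(1_000_000_000);
        return String.format("%02d_%09d", region, suffix);
    }

    public static void main(String[] args) {
        UniformKeyGenerator gen = new UniformKeyGenerator(100);
        for (int i = 0; i < 5; i++) {
            System.out.println(gen.nextRowKey());
        }
    }
}
```

Each mapper can run a generator like this independently; because every key draws its region prefix uniformly, no coordination between mappers is needed to keep regions balanced.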
[GitHub] [hbase] virajjasani commented on pull request #5280: HBASE-27904: A random data generator tool leveraging hbase bulk load
virajjasani commented on PR #5280: URL: https://github.com/apache/hbase/pull/5280#issuecomment-1595003329 awaiting one build result before merging branch-2 backport PR #5294 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] virajjasani merged pull request #5280: HBASE-27904: A random data generator tool leveraging hbase bulk load
virajjasani merged PR #5280: URL: https://github.com/apache/hbase/pull/5280 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] jojochuang commented on pull request #5189: HBASE-27746 Check if the file system supports storage policy before invoking setStoragePolicy()
jojochuang commented on PR #5189: URL: https://github.com/apache/hbase/pull/5189#issuecomment-159497 Well actually the test would fail if it's not "hdfs" `[ERROR] TestFSUtils.testSetStoragePolicyDefault:412->verifyNoHDFSApiInvocationForDefaultPolicy:422 » IllegalArgument Invalid URI for NameNode address (check fs.defaultFS): failfs://localhost/ is not of scheme 'hdfs'.` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
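The quoted failure shows an HDFS-only code path rejecting a `failfs://` URI. The guard under discussion boils down to a scheme check before invoking the HDFS-specific API; `supportsStoragePolicy` below is a hypothetical helper for illustration, not HBase's actual method:

```java
import java.net.URI;

// Sketch of the guard being discussed: only invoke an HDFS-specific API
// (such as setStoragePolicy) when the filesystem URI actually uses the
// "hdfs" scheme. supportsStoragePolicy is a hypothetical helper, not the
// method HBase ships.
public class StoragePolicyGuard {
    static boolean supportsStoragePolicy(URI fsUri) {
        return "hdfs".equalsIgnoreCase(fsUri.getScheme());
    }

    public static void main(String[] args) {
        // The URI from the failing test quoted above:
        URI failfs = URI.create("failfs://localhost/");
        URI hdfs = URI.create("hdfs://localhost:8020/");
        System.out.println("failfs supported: " + supportsStoragePolicy(failfs));
        System.out.println("hdfs supported: " + supportsStoragePolicy(hdfs));
    }
}
```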
[GitHub] [hbase] virajjasani commented on a diff in pull request #5293: HBASE-27892 Report memstore on-heap and off-heap size as jmx metrics
virajjasani commented on code in PR #5293: URL: https://github.com/apache/hbase/pull/5293#discussion_r1232501652 ## hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java: ## @@ -317,12 +317,15 @@ private void tune() { unblockedFlushCnt = unblockedFlushCount.getAndSet(0); tunerContext.setUnblockedFlushCount(unblockedFlushCnt); metricsHeapMemoryManager.updateUnblockedFlushCount(unblockedFlushCnt); - // TODO : add support for offheap metrics tunerContext.setCurBlockCacheUsed((float) blockCache.getCurrentSize() / maxHeapSize); metricsHeapMemoryManager.setCurBlockCacheSizeGauge(blockCache.getCurrentSize()); + long globalMemstoreDataSize = regionServerAccounting.getGlobalMemStoreDataSize(); long globalMemstoreHeapSize = regionServerAccounting.getGlobalMemStoreHeapSize(); + long globalMemStoreOffHeapSize = regionServerAccounting.getGlobalMemStoreOffHeapSize(); tunerContext.setCurMemStoreUsed((float) globalMemstoreHeapSize / maxHeapSize); - metricsHeapMemoryManager.setCurMemStoreSizeGauge(globalMemstoreHeapSize); Review Comment: yes, this is what i was wondering. FYI @bbeaudreault as he discovered this -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] virajjasani commented on pull request #5295: HBASE-27902 New async admin api to invoke coproc on multiple servers
virajjasani commented on PR #5295: URL: https://github.com/apache/hbase/pull/5295#issuecomment-1594972090 > I still do not think we need to add this method in the Admin interface, it is very easy to implement by our users. And even if we really want to add this support, a default method at the AsyncAdmin interface is enough? It can be done through public methods in the AsyncAdmin interface... Thanks Duo, I understand your point, but using FutureUtil utilities from client applications is not recommended given it is IA#Private. > a default method at the AsyncAdmin interface is enough? Fair enough, let me check this, thank you! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
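The "default method at the AsyncAdmin interface" suggestion can be sketched with plain `CompletableFuture` fan-out: implementations keep providing only the per-server primitive, and the multi-server variant comes for free as a default method. The names here are illustrative, not the real AsyncAdmin API:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch of the suggestion: a default method that fans one per-server async
// call out to many servers and joins the results. Mirrors the shape of the
// idea only; it is not the actual AsyncAdmin interface.
interface MultiServerCaller {
    // Existing per-server primitive (assumed to already exist).
    CompletableFuture<String> callOnServer(String serverName);

    // Proposed default method: no new burden on implementors.
    default CompletableFuture<Map<String, String>> callOnServers(List<String> servers) {
        Map<String, CompletableFuture<String>> futures = servers.stream()
            .collect(Collectors.toMap(Function.identity(), this::callOnServer));
        return CompletableFuture.allOf(futures.values().toArray(new CompletableFuture[0]))
            // All futures are complete here, so join() cannot block.
            .thenApply(v -> futures.entrySet().stream()
                .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue().join())));
    }
}

public class MultiServerCallerDemo {
    public static void main(String[] args) {
        MultiServerCaller caller = server ->
            CompletableFuture.completedFuture("ok@" + server);
        Map<String, String> result =
            caller.callOnServers(List.of("rs1", "rs2")).join();
        System.out.println(result);
    }
}
```

Because the combining logic lives in the default method, client applications never need to touch any IA.Private future utilities themselves.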
[GitHub] [hbase] frostruan commented on a diff in pull request #5256: HBASE-26867 Introduce a FlushProcedure
frostruan commented on code in PR #5256: URL: https://github.com/apache/hbase/pull/5256#discussion_r1232450031 ## hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/FlushTableProcedure.java: ## @@ -0,0 +1,178 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.master.procedure; + +import java.io.IOException; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.RegionReplicaUtil; +import org.apache.hadoop.hbase.procedure.flush.MasterFlushTableProcedureManager; +import org.apache.hadoop.hbase.procedure2.ProcedureStateSerializer; +import org.apache.hadoop.hbase.procedure2.ProcedureSuspendedException; +import org.apache.hadoop.hbase.procedure2.ProcedureYieldException; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hbase.thirdparty.com.google.protobuf.UnsafeByteOperations; + +import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.FlushTableProcedureStateData; +import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.FlushTableState; + +@InterfaceAudience.Private +public class FlushTableProcedure extends AbstractStateMachineTableProcedure { + private static final Logger LOG = LoggerFactory.getLogger(FlushTableProcedure.class); + + private TableName tableName; + + private byte[] columnFamily; + + public FlushTableProcedure() { +super(); + } + + public FlushTableProcedure(MasterProcedureEnv env, TableName tableName) { +this(env, tableName, null); + } + + public FlushTableProcedure(MasterProcedureEnv env, TableName tableName, byte[] columnFamily) { +super(env); +this.tableName = tableName; +this.columnFamily = columnFamily; + } + + @Override + protected LockState acquireLock(MasterProcedureEnv env) { +// Here we don't acquire table lock because the flush operation and other operations (like +// split or merge) are not mutually exclusive. Region will flush memstore when being closed. +// It's safe even if we don't have lock. 
However, currently we are limited by the scheduling +// mechanism of the procedure scheduler and have to acquire table shared lock here. See +// HBASE-27905 for details. +if (env.getProcedureScheduler().waitTableSharedLock(this, getTableName())) { + return LockState.LOCK_EVENT_WAIT; +} +return LockState.LOCK_ACQUIRED; + } + + @Override + protected void releaseLock(MasterProcedureEnv env) { +env.getProcedureScheduler().wakeTableSharedLock(this, getTableName()); + } + + @Override + protected Flow executeFromState(MasterProcedureEnv env, FlushTableState state) +throws ProcedureSuspendedException, ProcedureYieldException, InterruptedException { +LOG.info("{} execute state={}", this, state); + +try { + switch (state) { +case FLUSH_TABLE_PREPARE: + preflightChecks(env, true); + setNextState(FlushTableState.FLUSH_TABLE_FLUSH_REGIONS); + return Flow.HAS_MORE_STATE; +case FLUSH_TABLE_FLUSH_REGIONS: + addChildProcedure(createFlushRegionProcedures(env)); + return Flow.NO_MORE_STATE; +default: + throw new UnsupportedOperationException("unhandled state=" + state); + } +} catch (Exception e) { + setFailure("master-flush-table", e); Review Comment: Thank you for pointing this out Duo, it is necessary that we adjust it after we make the FlushTableProcedure not support rollback. My original thought was this, when we are in the state **FLUSH_TABLE_PREPARE**, we need to confirm that the state of the table is **Online**, which may require an RPC to access the meta table, because the **TableStateManager** loads the state of the table lazily. if the request fails, an exception may be thrown, so I originally thought that if the request fails, then just roll back the procedure. Since we do not support rollback now, let me fix this. -- This is an automated message from the Apache Git Service. To respond
[jira] [Commented] (HBASE-27871) Meta replication stuck forever if wal it's still reading gets rolled and deleted
[ https://issues.apache.org/jira/browse/HBASE-27871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733583#comment-17733583 ] Hudson commented on HBASE-27871: Results for branch branch-2.5 [build #368 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/368/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/368/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/368/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/368/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/368/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Meta replication stuck forever if wal it's still reading gets rolled and > deleted > > > Key: HBASE-27871 > URL: https://issues.apache.org/jira/browse/HBASE-27871 > Project: HBase > Issue Type: Bug > Components: meta replicas >Affects Versions: 2.6.0, 2.4.17, 2.5.4 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Fix For: 2.6.0, 2.5.6 > > > This affects branch-2 based releases only (in master, HBASE-26416 refactored > region replication to not rely on the replication framework anymore). 
> Per the original [meta region replicas > design|https://docs.google.com/document/d/1jJWVc-idHhhgL4KDRpjMsQJKCl_NRaCLGiH3Wqwd3O8/edit], > we use most of the replication framework for communicating changes in the > primary replica back to the secondary ones, but we skip storing the queue > state in ZK. In the event of a region replication crash, we should let the > related replication source thread be interrupted, so that > RegionReplicaReplicationEndpoint would set up a new source from scratch and > make sure to update the secondary replicas. > > We have run into a situation on one of our customers' clusters where the > region replica source faced a long lag (probably because the RSes hosting the > secondary replicas were busy and slower in processing the region replication > entries), so that the current wal got rolled and eventually deleted whilst > the replication source reader was still referring to it. In such cases, > ReplicationSourceReader only sees the IOException and keeps retrying the read > indefinitely, but since the file is now gone, it will just get stuck there > forever. In the particular case of FNFE (which I believe would only happen > for region replication), we should just raise an exception and let > RegionReplicaReplicationEndpoint handle it to reset the region replication > source. > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
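The proposed fix in the last paragraph — surface the `FileNotFoundException` instead of retrying forever — amounts to treating FNFE as fatal inside the read-retry loop. A minimal sketch of that shape (not HBase's actual ReplicationSource code):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Sketch of the behavior described above: generic IOExceptions while reading
// a WAL are retried, but a FileNotFoundException (the WAL was rolled and
// deleted) is rethrown so the caller can reset the replication source.
// Illustrates the shape of the fix only; not HBase's real reader code.
public class WalReadRetry {
    interface WalReader {
        String readEntry() throws IOException;
    }

    static String readWithRetry(WalReader reader, int maxRetries) throws IOException {
        IOException last = null;
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            try {
                return reader.readEntry();
            } catch (FileNotFoundException fnfe) {
                // The file is gone for good: retrying can never succeed,
                // so surface the error instead of looping forever.
                throw fnfe;
            } catch (IOException ioe) {
                last = ioe; // transient error: retry
            }
        }
        throw last;
    }

    public static void main(String[] args) throws IOException {
        int[] calls = {0};
        // One transient failure, then success on the retry.
        System.out.println(readWithRetry(() -> {
            if (calls[0]++ < 1) throw new IOException("transient");
            return "entry";
        }, 3));
    }
}
```

With this shape, the endpoint handling the rethrown FNFE can tear down and recreate the source, which is exactly the recovery path the ticket describes.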
[jira] [Resolved] (HBASE-27939) Bump snappy-java from 1.1.9.1 to 1.1.10.1
[ https://issues.apache.org/jira/browse/HBASE-27939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-27939. --- Fix Version/s: 2.6.0 2.5.6 3.0.0-beta-1 Hadoop Flags: Reviewed Resolution: Fixed Pushed to branch-2.5+. > Bump snappy-java from 1.1.9.1 to 1.1.10.1 > - > > Key: HBASE-27939 > URL: https://issues.apache.org/jira/browse/HBASE-27939 > Project: HBase > Issue Type: Improvement > Components: dependabot, security >Reporter: Duo Zhang >Priority: Major > Fix For: 2.6.0, 2.5.6, 3.0.0-beta-1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] Apache9 merged pull request #5292: HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1
Apache9 merged PR #5292: URL: https://github.com/apache/hbase/pull/5292 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-27939) Bump snappy-java from 1.1.9.1 to 1.1.10.1
Duo Zhang created HBASE-27939: - Summary: Bump snappy-java from 1.1.9.1 to 1.1.10.1 Key: HBASE-27939 URL: https://issues.apache.org/jira/browse/HBASE-27939 Project: HBase Issue Type: Improvement Components: dependabot, security Reporter: Duo Zhang -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] Apache9 merged pull request #5285: HBASE-27924 Remove duplicate code for NettyHBaseSaslRpcServerHandler …
Apache9 merged PR #5285: URL: https://github.com/apache/hbase/pull/5285 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-27888) Record readBlock message in log when it takes too long time
[ https://issues.apache.org/jira/browse/HBASE-27888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-27888. --- Fix Version/s: 2.6.0 2.5.6 3.0.0-beta-1 Hadoop Flags: Reviewed Resolution: Fixed Pushed to branch-2.5+. The code on branch-2.4 is a bit different so I did not port it to branch-2.4. Please fill in the release note as we added a new config here. [~chaijunjie] Thanks. > Record readBlock message in log when it takes too long time > --- > > Key: HBASE-27888 > URL: https://issues.apache.org/jira/browse/HBASE-27888 > Project: HBase > Issue Type: Improvement > Components: HFile >Affects Versions: 2.5.3 >Reporter: chaijunjie >Assignee: chaijunjie >Priority: Minor > Fix For: 2.6.0, 2.5.6, 3.0.0-beta-1 > > > After HBASE-15160, we can record the readBlock message via TRACE log in > org.apache.hadoop.hbase.io.hfile.HFileBlock.FSReaderImpl#readBlockDataInternal, > but it records all read-block messages in the TRACE log; sometimes we only care > about the block reads that cost too much time. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase] Apache9 commented on pull request #4423: HBASE-27026 Disable Style/FrozenStringLiteralComment for ruby
Apache9 commented on PR #4423: URL: https://github.com/apache/hbase/pull/4423#issuecomment-1594878270 Any updates here? We want to merge this PR first or just close this PR and open a new PR to upgrade rubocop? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on pull request #4721: HBASE-27231 FSHLog should retry writing WAL entries when syncs to HDF…
Apache9 commented on PR #4721: URL: https://github.com/apache/hbase/pull/4721#issuecomment-1594875816 @comnetwork Do you still have interest in implementing this? When investigating how to better support sync replication in the WAL framework, as well as how we could introduce new WAL implementations, I came to believe a better way is to also make sync replication work with FSHLog. Also, a new WAL implementation should try to implement the AsyncWriter or Writer interface; if it only supports the Writer interface (which is possible if it only implements the hadoop fs APIs), then it can only work with FSHLog, so I think we still need FSHLog in the future. So now I'm OK with not touching the code in FSHLog and AsyncFSWAL to make them share more code and use a similar mechanism to deal with errors. Let's revive this PR. Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on pull request #5172: HBASE-27663 ChaosMonkey documentation enhancements
Apache9 commented on PR #5172: URL: https://github.com/apache/hbase/pull/5172#issuecomment-1594868302 Do you have any other concerns? @chrajeshbabu Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on pull request #5175: HBASE-27387 MetricsSource lastShippedTimeStamps ConcurrentModificatio…
Apache9 commented on PR #5175: URL: https://github.com/apache/hbase/pull/5175#issuecomment-1594867089 Any updates here? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-27894) create-release is broken by recent gitbox changes
[ https://issues.apache.org/jira/browse/HBASE-27894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-27894: -- Component/s: scripts > create-release is broken by recent gitbox changes > - > > Key: HBASE-27894 > URL: https://issues.apache.org/jira/browse/HBASE-27894 > Project: HBase > Issue Type: Bug > Components: scripts >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 4.0.0-alpha-1 > > > The error looks like: > {noformat} > ... > GIT_BRANCH [branch-2.5]: > -:1: parser error : Space required after the Public Identifier > > {noformat} > but is misleading. What is going on is create-release tries to retrieve the > top level POM from the specified branch using a gitbox URL but gitbox is > returning an HTML redirect, which is not what is expected. > {noformat} > ++ read -r -p 'GIT_BRANCH [branch-2.5]: ' REPLY > GIT_BRANCH [branch-2.5]: > ++ local RETVAL=branch-2.5 > ++ '[' -z branch-2.5 ']' > ++ echo branch-2.5 > + GIT_BRANCH=branch-2.5 > + export GIT_BRANCH > + local version > ++ curl -s > 'https://gitbox.apache.org/repos/asf?p=hbase.git;a=blob_plain;f=pom.xml;hb=refs/heads/branch-2.5' > ++ parse_version > ++ xmllint --xpath > '//*[local-name()='\''project'\'']/*[local-name()='\''version'\'']/text()' - > -:1: parser error : Space required after the Public Identifier > > {noformat} > {noformat} > $ curl -s > 'https://gitbox.apache.org/repos/asf?p=hbase.git;a=blob_plain;f=pom.xml;hb=refs/heads/branch-2.5' > > > 302 Found > > Found > The document has moved href="https://raw.githubusercontent.com/apache/hbase/refs/heads/branch-2.5/pom.xml;>here. > > {noformat} > The solution is to retrieve content using github URLs (via > raw.githubusercontent.com) instead of gitbox URLs. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27894) create-release is broken by recent gitbox changes
[ https://issues.apache.org/jira/browse/HBASE-27894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-27894: -- Fix Version/s: 4.0.0-alpha-1 (was: 3.0.0-beta-1) Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Merged to master. Thanks [~apurtell]! > create-release is broken by recent gitbox changes > - > > Key: HBASE-27894 > URL: https://issues.apache.org/jira/browse/HBASE-27894 > Project: HBase > Issue Type: Bug >Reporter: Andrew Kyle Purtell >Assignee: Andrew Kyle Purtell >Priority: Major > Fix For: 4.0.0-alpha-1 > > > The error looks like: > {noformat} > ... > GIT_BRANCH [branch-2.5]: > -:1: parser error : Space required after the Public Identifier > > {noformat} > but is misleading. What is going on is create-release tries to retrieve the > top level POM from the specified branch using a gitbox URL but gitbox is > returning an HTML redirect, which is not what is expected. > {noformat} > ++ read -r -p 'GIT_BRANCH [branch-2.5]: ' REPLY > GIT_BRANCH [branch-2.5]: > ++ local RETVAL=branch-2.5 > ++ '[' -z branch-2.5 ']' > ++ echo branch-2.5 > + GIT_BRANCH=branch-2.5 > + export GIT_BRANCH > + local version > ++ curl -s > 'https://gitbox.apache.org/repos/asf?p=hbase.git;a=blob_plain;f=pom.xml;hb=refs/heads/branch-2.5' > ++ parse_version > ++ xmllint --xpath > '//*[local-name()='\''project'\'']/*[local-name()='\''version'\'']/text()' - > -:1: parser error : Space required after the Public Identifier > > {noformat} > {noformat} > $ curl -s > 'https://gitbox.apache.org/repos/asf?p=hbase.git;a=blob_plain;f=pom.xml;hb=refs/heads/branch-2.5' > > > 302 Found > > Found > The document has moved href="https://raw.githubusercontent.com/apache/hbase/refs/heads/branch-2.5/pom.xml;>here. > > {noformat} > The solution is to retrieve content using github URLs (via > raw.githubusercontent.com) instead of gitbox URLs. -- This message was sent by Atlassian Jira (v8.20.10#820010)
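The root cause above — `xmllint` receiving an HTML 302 page where a pom.xml was expected — can be guarded against by sanity-checking the payload before extracting the version. A minimal sketch, with string stand-ins for the two responses shown in the ticket (the first-match regex is deliberately naive, for illustration only):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the failure mode in HBASE-27894: the version-extraction step
// expects a Maven pom.xml, but gitbox now returns an HTML 302 redirect page,
// which breaks the XML parse. A cheap guard is to verify the payload looks
// like a pom before extracting <version>. The bodies below are stand-ins
// for the responses quoted in the ticket.
public class PomVersionCheck {
    private static final Pattern VERSION =
        Pattern.compile("<version>([^<]+)</version>");

    static String extractVersion(String body) {
        if (!body.contains("<project")) {
            // HTML redirect page, not a pom: the caller should fetch the
            // raw.githubusercontent.com URL instead (the fix in this ticket).
            return null;
        }
        // Naive first-match extraction; good enough for this sketch.
        Matcher m = VERSION.matcher(body);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String redirect = "<html><body>302 Found</body></html>";
        String pom = "<project><version>2.5.6-SNAPSHOT</version></project>";
        System.out.println(extractVersion(redirect)); // wrong payload -> null
        System.out.println(extractVersion(pom));
    }
}
```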
[GitHub] [hbase] Apache9 merged pull request #5262: HBASE-27894 create-release is broken by recent gitbox changes
Apache9 merged PR #5262: URL: https://github.com/apache/hbase/pull/5262 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on a diff in pull request #5256: HBASE-26867 Introduce a FlushProcedure
Apache9 commented on code in PR #5256: URL: https://github.com/apache/hbase/pull/5256#discussion_r1223164292

## hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/FlushTableProcedure.java:

## @@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.procedure;
+
+import java.io.IOException;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.RegionReplicaUtil;
+import org.apache.hadoop.hbase.procedure.flush.MasterFlushTableProcedureManager;
+import org.apache.hadoop.hbase.procedure2.ProcedureStateSerializer;
+import org.apache.hadoop.hbase.procedure2.ProcedureSuspendedException;
+import org.apache.hadoop.hbase.procedure2.ProcedureYieldException;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.com.google.protobuf.UnsafeByteOperations;
+
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.FlushTableProcedureStateData;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.FlushTableState;
+
+@InterfaceAudience.Private
+public class FlushTableProcedure extends AbstractStateMachineTableProcedure<FlushTableState> {
+  private static final Logger LOG = LoggerFactory.getLogger(FlushTableProcedure.class);
+
+  private TableName tableName;
+
+  private byte[] columnFamily;
+
+  public FlushTableProcedure() {
+    super();
+  }
+
+  public FlushTableProcedure(MasterProcedureEnv env, TableName tableName) {
+    this(env, tableName, null);
+  }
+
+  public FlushTableProcedure(MasterProcedureEnv env, TableName tableName, byte[] columnFamily) {
+    super(env);
+    this.tableName = tableName;
+    this.columnFamily = columnFamily;
+  }
+
+  @Override
+  protected LockState acquireLock(MasterProcedureEnv env) {
+    // Here we don't acquire table lock because the flush operation and other operations (like
+    // split or merge) are not mutually exclusive. Region will flush memstore when being closed.
+    // It's safe even if we don't have lock. However, currently we are limited by the scheduling
+    // mechanism of the procedure scheduler and have to acquire table shared lock here. See
+    // HBASE-27905 for details.
+    if (env.getProcedureScheduler().waitTableSharedLock(this, getTableName())) {
+      return LockState.LOCK_EVENT_WAIT;
+    }
+    return LockState.LOCK_ACQUIRED;
+  }
+
+  @Override
+  protected void releaseLock(MasterProcedureEnv env) {
+    env.getProcedureScheduler().wakeTableSharedLock(this, getTableName());
+  }
+
+  @Override
+  protected Flow executeFromState(MasterProcedureEnv env, FlushTableState state)
+    throws ProcedureSuspendedException, ProcedureYieldException, InterruptedException {
+    LOG.info("{} execute state={}", this, state);
+
+    try {
+      switch (state) {
+        case FLUSH_TABLE_PREPARE:
+          preflightChecks(env, true);
+          setNextState(FlushTableState.FLUSH_TABLE_FLUSH_REGIONS);
+          return Flow.HAS_MORE_STATE;
+        case FLUSH_TABLE_FLUSH_REGIONS:
+          addChildProcedure(createFlushRegionProcedures(env));
+          return Flow.NO_MORE_STATE;
+        default:
+          throw new UnsupportedOperationException("unhandled state=" + state);
+      }
+    } catch (Exception e) {
+      setFailure("master-flush-table", e);

Review Comment:
   What could cause the procedure failure? Since we do not support rollback, I do not think we can mark this procedure as failure after entering the flush regions state...
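The review comment's point can be illustrated with a small, hypothetical state machine. This is a sketch of the general concern, not HBase's Procedure2 API: a procedure with no rollback support can safely fail only in its prepare step; once child work has been dispatched, an error should be retried rather than marking the procedure failed.

```java
// A minimal, hypothetical sketch (not HBase's Procedure2 API) of the concern
// raised in the review: with no rollback, failing is only safe before any
// externally visible work has been dispatched.
public class FlushStateMachineSketch {
    enum State { PREPARE, FLUSH_REGIONS }

    // Action taken when the work for a state throws.
    static String onError(State state) {
        // During PREPARE nothing externally visible has happened, so failing is safe.
        // After FLUSH_REGIONS dispatches children there is no rollback, so retry.
        return state == State.PREPARE ? "FAIL" : "RETRY";
    }

    // Run one step of the machine; map success to the next transition and
    // errors through the onError policy above.
    static String step(State state, Runnable work) {
        try {
            work.run();
            return state == State.PREPARE ? "NEXT_STATE" : "DONE";
        } catch (RuntimeException e) {
            return onError(state);
        }
    }
}
```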
[GitHub] [hbase] Apache9 commented on pull request #5260: HBASE-27766 Support steal job queue mode for read RPC queues of RWQue…
Apache9 commented on PR #5260: URL: https://github.com/apache/hbase/pull/5260#issuecomment-1594848397

Do we have any perf numbers here?
[GitHub] [hbase-kustomize] busbey commented on a diff in pull request #3: HBASE-27829 Introduce build for kuttl image, basis for dev/test environment
busbey commented on code in PR #3: URL: https://github.com/apache/hbase-kustomize/pull/3#discussion_r1232369513

## dockerfiles/kuttl/README.md:

## @@ -0,0 +1,71 @@
+
+# dockerfiles/kuttl
+
+This directory builds a docker image containing everything required to run `kubectl-kuttl`. It
+includes all the dependencies to run in "mocked control plane" mode as well as targeting a full
+cluster. This image is used as the basis for both dev and test environments.
+

Review Comment:
   Is the audience here folks who need dev/test environments within the Apache HBase project? Or is it folks who need to do dev/test stuff downstream of us?
[GitHub] [hbase] Apache9 commented on a diff in pull request #5293: HBASE-27892 Report memstore on-heap and off-heap size as jmx metrics
Apache9 commented on code in PR #5293: URL: https://github.com/apache/hbase/pull/5293#discussion_r1232319452

## hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsHeapMemoryManagerSource.java:

## @@ -118,6 +130,13 @@ public interface MetricsHeapMemoryManagerSource extends BaseSource {
   String UNBLOCKED_FLUSH_GAUGE_DESC = "Gauge for the unblocked flush count before tuning";
   String MEMSTORE_SIZE_GAUGE_NAME = "memStoreSize";
   String MEMSTORE_SIZE_GAUGE_DESC = "Global MemStore used in bytes by the RegionServer";
+  String MEMSTORE_ONHEAP_SIZE_GAUGE_NAME = "memStoreOnHeapSize";

Review Comment:
   Use OnHeap or just Heap? I'm not an English expert, just asking...

## hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryManager.java:

## @@ -317,12 +317,15 @@ private void tune() {
     unblockedFlushCnt = unblockedFlushCount.getAndSet(0);
     tunerContext.setUnblockedFlushCount(unblockedFlushCnt);
     metricsHeapMemoryManager.updateUnblockedFlushCount(unblockedFlushCnt);
-    // TODO : add support for offheap metrics
     tunerContext.setCurBlockCacheUsed((float) blockCache.getCurrentSize() / maxHeapSize);
     metricsHeapMemoryManager.setCurBlockCacheSizeGauge(blockCache.getCurrentSize());
+    long globalMemstoreDataSize = regionServerAccounting.getGlobalMemStoreDataSize();
     long globalMemstoreHeapSize = regionServerAccounting.getGlobalMemStoreHeapSize();
+    long globalMemStoreOffHeapSize = regionServerAccounting.getGlobalMemStoreOffHeapSize();
     tunerContext.setCurMemStoreUsed((float) globalMemstoreHeapSize / maxHeapSize);
-    metricsHeapMemoryManager.setCurMemStoreSizeGauge(globalMemstoreHeapSize);

Review Comment:
   So previously we just used the heap size as the memstore size? That would be a bug?
[GitHub] [hbase] Apache9 commented on pull request #5243: HBASE-27873 Asyncfs may print too many WARN logs when replace writer
Apache9 commented on PR #5243: URL: https://github.com/apache/hbase/pull/5243#issuecomment-1594752038

> > Since we use exponential backoff here, the log output is acceptable? We will soon increase the interval between each warn message?
>
> Yes, you are right, but I still often see it. And its level is WARN, which makes me nervous. After doing a little research I figured out that this shouldn't be a problem; that's why I want to change it.
>
> > At client side, we have a configuration to not output the error message in the first several retries, called `hbase.client.start.log.errors.counter`. Maybe we can apply the same pattern here?
>
> Can do that, but this config is a client side config (as its name says). Do you think we need to introduce a new server side config?

We can add a new config for this. And the default value should be 0, to keep the old behavior at least on 2.x. And we can discuss a better value for 3.x.

Thanks.
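The pattern behind `hbase.client.start.log.errors.counter` that the thread discusses can be sketched in a few lines. This is a hypothetical illustration, not HBase's actual logging code: failures within the first N retries are logged quietly, and only later retries are logged at WARN; a default of 0 warns on every failure, preserving the old behavior.

```java
// Hypothetical sketch of the "start log errors counter" pattern: suppress WARN
// logging for the first `startLogErrors` retry attempts so that transient
// failures absorbed by retries do not alarm operators.
public class RetryLogPolicy {
    private final int startLogErrors;

    public RetryLogPolicy(int startLogErrors) {
        this.startLogErrors = startLogErrors;
    }

    /**
     * True when the given (1-based) retry attempt should be logged at WARN;
     * earlier attempts would instead be logged at DEBUG.
     */
    public boolean warnOnRetry(int attempt) {
        return attempt > startLogErrors;
    }
}
```

With the default of 0, `warnOnRetry` is true for every attempt; a value of 5 keeps the first five retries quiet, matching the client-side behavior described above.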
[GitHub] [hbase] Apache-HBase commented on pull request #5285: HBASE-27924 Remove duplicate code for NettyHBaseSaslRpcServerHandler …
Apache-HBase commented on PR #5285: URL: https://github.com/apache/hbase/pull/5285#issuecomment-1594672955 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 56s | master passed | | +1 :green_heart: | compile | 1m 1s | master passed | | +1 :green_heart: | shadedjars | 6m 33s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 34s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 18s | the patch passed | | +1 :green_heart: | compile | 0m 56s | the patch passed | | +1 :green_heart: | javac | 0m 56s | the patch passed | | +1 :green_heart: | shadedjars | 5m 45s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 26s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 247m 45s | hbase-server in the patch passed. | | | | 275m 7s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5285/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5285 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux d93324f89c95 5.4.0-1101-aws #109~18.04.1-Ubuntu SMP Mon Apr 24 20:40:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4be74d2455 | | Default Java | Eclipse Adoptium-11.0.17+8 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5285/3/testReport/ | | Max. 
process+thread count | 4695 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5285/3/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #4670: HBASE-27237 Address is shoule be case insensitive
Apache-HBase commented on PR #4670: URL: https://github.com/apache/hbase/pull/4670#issuecomment-1594672290 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 8m 33s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 24s | master passed | | +1 :green_heart: | compile | 1m 9s | master passed | | +1 :green_heart: | shadedjars | 5m 23s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 49s | master passed | | -0 :warning: | patch | 6m 29s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 6s | the patch passed | | +1 :green_heart: | compile | 1m 22s | the patch passed | | +1 :green_heart: | javac | 1m 22s | the patch passed | | +1 :green_heart: | shadedjars | 5m 23s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 46s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 37s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 230m 56s | hbase-server in the patch passed. 
| | | | 267m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4670 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 046ad3de36bb 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4be74d2455 | | Default Java | Temurin-1.8.0_352-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/testReport/ | | Max. process+thread count | 4611 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #5285: HBASE-27924 Remove duplicate code for NettyHBaseSaslRpcServerHandler …
Apache-HBase commented on PR #5285: URL: https://github.com/apache/hbase/pull/5285#issuecomment-1594623584 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 3m 36s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 30s | master passed | | +1 :green_heart: | compile | 0m 35s | master passed | | +1 :green_heart: | shadedjars | 4m 52s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 24s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 10s | the patch passed | | +1 :green_heart: | compile | 0m 34s | the patch passed | | +1 :green_heart: | javac | 0m 34s | the patch passed | | +1 :green_heart: | shadedjars | 4m 53s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 214m 42s | hbase-server in the patch passed. | | | | 238m 13s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5285/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5285 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux a08040f6cab4 5.4.0-1099-aws #107~18.04.1-Ubuntu SMP Fri Mar 17 16:49:05 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4be74d2455 | | Default Java | Temurin-1.8.0_352-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5285/3/testReport/ | | Max. 
process+thread count | 4453 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5285/3/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
[GitHub] [hbase-kustomize] Apache-HBase commented on pull request #4: HBASE-27935 Introduce Jenkins PR job for hbase-kustomize (addendum)
Apache-HBase commented on PR #4: URL: https://github.com/apache/hbase-kustomize/pull/4#issuecomment-1594613990 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 3m 7s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | _ main Compile Tests _ | _ Patch Compile Tests _ | | +1 :green_heart: | codespell | 0m 1s | | No new issues. | | +1 :green_heart: | detsecrets | 0m 2s | | The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | hadolint | 0m 0s | | No new issues. | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | shelldocs | 0m 0s | | No new issues. | _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 0s | | The patch does not generate ASF License warnings. | | | | 3m 16s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 | | GITHUB PR | https://github.com/apache/hbase-kustomize/pull/4 | | Optional Tests | dupname asflicense codespell detsecrets hadolint shellcheck shelldocs | | uname | Linux ec7400b475a6 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | nobuild | | Personality | /home/jenkins/jenkins-home/workspace/hbase-kustomize-github-pr_PR-4/yetus-precommit-check/src/.yetus/personality.sh | | git revision | main / 843e2bace2951a4743e1f5b4a320ace7cd48a3b3 | | Max. process+thread count | 10 (vs. ulimit of 5000) | | modules | C: . U: . 
| | Console output | https://ci-hbase.apache.org/job/hbase-kustomize-github-pr/job/PR-4/8/console | | versions | git=2.25.1 hadolint=2.10.0 codespell=2.2.1 detsecrets=1.2.0 shellcheck=0.8.0 | | Powered by | Apache Yetus 0.14.1 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #4670: HBASE-27237 Address is shoule be case insensitive
Apache-HBase commented on PR #4670: URL: https://github.com/apache/hbase/pull/4670#issuecomment-1594422940 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 8m 38s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 50s | master passed | | +1 :green_heart: | compile | 3m 46s | master passed | | +1 :green_heart: | checkstyle | 0m 55s | master passed | | +1 :green_heart: | spotless | 0m 47s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 3m 1s | master passed | | -0 :warning: | patch | 2m 18s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 10s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 33s | the patch passed | | +1 :green_heart: | compile | 3m 31s | the patch passed | | +1 :green_heart: | javac | 3m 31s | the patch passed | | -0 :warning: | checkstyle | 0m 15s | hbase-client: The patch generated 1 new + 19 unchanged - 0 fixed = 20 total (was 19) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 13m 59s | Patch does not cause any errors with Hadoop 3.2.4 3.3.5. | | -1 :x: | spotless | 0m 24s | patch has 25 errors when running spotless:check, run spotless:apply to fix. | | +1 :green_heart: | spotbugs | 3m 48s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 22s | The patch does not generate ASF License warnings. 
| | | | 56m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4670 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile | | uname | Linux cb2d4cc4ef16 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4be74d2455 | | Default Java | Eclipse Adoptium-11.0.17+8 | | checkstyle | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt | | spotless | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/artifact/yetus-general-check/output/patch-spotless.txt | | Max. process+thread count | 79 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/console | | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
[GitHub] [hbase-kustomize] ndimiduk commented on pull request #4: HBASE-27935 Introduce Jenkins PR job for hbase-kustomize (addendum)
ndimiduk commented on PR #4: URL: https://github.com/apache/hbase-kustomize/pull/4#issuecomment-1594421097

PR build fails with:
```
[2023-06-16T09:24:09.010Z] Starting docker build...
[2023-06-16T09:24:09.010Z] ERROR: BuildKit is enabled but the buildx component is missing or broken.
[2023-06-16T09:24:09.010Z]        Install the buildx component to build images with BuildKit:
[2023-06-16T09:24:09.010Z]        https://docs.docker.com/go/buildx/
```
[GitHub] [hbase] Apache-HBase commented on pull request #4670: HBASE-27237 Address is shoule be case insensitive
Apache-HBase commented on PR #4670: URL: https://github.com/apache/hbase/pull/4670#issuecomment-1594406157 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 32s | master passed | | +1 :green_heart: | compile | 1m 10s | master passed | | +1 :green_heart: | shadedjars | 5m 53s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 49s | master passed | | -0 :warning: | patch | 6m 58s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 11s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 27s | the patch passed | | +1 :green_heart: | compile | 1m 12s | the patch passed | | +1 :green_heart: | javac | 1m 12s | the patch passed | | +1 :green_heart: | shadedjars | 6m 5s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 48s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 47s | hbase-client in the patch passed. | | -1 :x: | unit | 14m 14s | hbase-server in the patch failed. 
| | | | 42m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/4670 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 68eaf7d565c9 5.4.0-1101-aws #109~18.04.1-Ubuntu SMP Mon Apr 24 20:40:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4be74d2455 | | Default Java | Eclipse Adoptium-11.0.17+8 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/testReport/ | | Max. process+thread count | 1802 (vs. ulimit of 3) | | modules | C: hbase-client hbase-server U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4670/3/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
[jira] [Assigned] (HBASE-27938) Enable PE to load any custom implementation of tests at runtime
[ https://issues.apache.org/jira/browse/HBASE-27938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prathyusha reassigned HBASE-27938:
----------------------------------
    Assignee: Prathyusha

> Enable PE to load any custom implementation of tests at runtime
> ---------------------------------------------------------------
>
>                 Key: HBASE-27938
>                 URL: https://issues.apache.org/jira/browse/HBASE-27938
>             Project: HBase
>          Issue Type: Improvement
>          Components: test
>            Reporter: Prathyusha
>            Assignee: Prathyusha
>            Priority: Minor
>
> Right now, adding any custom PE.Test implementation requires a compile-time
> dependency on those new test classes in PE. This change enables PE to load
> any custom implementation of tests at runtime and utilise the PE framework
> for custom implementations.
[GitHub] [hbase-kustomize] ndimiduk commented on pull request #4: HBASE-27935 Introduce Jenkins PR job for hbase-kustomize (addendum)
ndimiduk commented on PR #4: URL: https://github.com/apache/hbase-kustomize/pull/4#issuecomment-1594387451

PR build fails with:
```
[2023-06-16T09:13:50.928Z] [INFO] Launching Yetus via /home/jenkins/jenkins-home/workspace/hbase-kustomize-github-pr_PR-3/yetus-precommit-check/src/dev-support/jenkins/jenkins_precommit_github_yetus.sh
[2023-06-16T09:13:50.928Z] /home/jenkins/jenkins-home/workspace/hbase-kustomize-github-pr_PR-3/yetus-precommit-check@tmp/durable-10c0e8a4/script.sh: line 4: /home/jenkins/jenkins-home/workspace/hbase-kustomize-github-pr_PR-3/yetus-precommit-check/src/dev-support/jenkins/jenkins_precommit_github_yetus.sh: Permission denied
script returned exit code 126
```
[GitHub] [hbase] Apache-HBase commented on pull request #5285: HBASE-27924 Remove duplicate code for NettyHBaseSaslRpcServerHandler …
Apache-HBase commented on PR #5285: URL: https://github.com/apache/hbase/pull/5285#issuecomment-1594394031 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 50s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 45s | master passed | | +1 :green_heart: | compile | 3m 6s | master passed | | +1 :green_heart: | checkstyle | 0m 45s | master passed | | +1 :green_heart: | spotless | 0m 55s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 51s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 35s | the patch passed | | +1 :green_heart: | compile | 2m 48s | the patch passed | | +1 :green_heart: | javac | 2m 48s | the patch passed | | +1 :green_heart: | checkstyle | 0m 44s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 10m 42s | Patch does not cause any errors with Hadoop 3.2.4 3.3.5. | | +1 :green_heart: | spotless | 0m 42s | patch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 52s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 11s | The patch does not generate ASF License warnings. 
| | | | 39m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5285/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5285 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile | | uname | Linux 82ff05bb7b9e 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 4be74d2455 | | Default Java | Eclipse Adoptium-11.0.17+8 | | Max. process+thread count | 78 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5285/3/console | | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
[jira] [Created] (HBASE-27938) Enable PE to load any custom implementation of tests at runtime
Prathyusha created HBASE-27938:
----------------------------------

             Summary: Enable PE to load any custom implementation of tests at runtime
                 Key: HBASE-27938
                 URL: https://issues.apache.org/jira/browse/HBASE-27938
             Project: HBase
          Issue Type: Improvement
          Components: test
            Reporter: Prathyusha

Right now, adding any custom PE.Test implementation requires a compile-time dependency on the new test classes in PE. This issue is to enable PE to load any custom implementation of tests at runtime, so that the PE framework can be utilised for custom implementations.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
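The runtime-loading idea above can be sketched with plain reflection. This is a hypothetical illustration, not PE's actual API: the `Test` interface, `MyCustomTest`, and the `load` helper are invented stand-ins for whatever contract PerformanceEvaluation would expose to discover a test class from its fully-qualified name on the command line.

```java
// Hypothetical sketch: resolving a test implementation from a class name
// at runtime, without a compile-time dependency on the concrete class.
public class RuntimeTestLoader {

    // Stand-in for the PE.Test contract; the real PE.Test API differs.
    public interface Test {
        String run();
    }

    // A "custom" implementation the loader knows nothing about at compile time.
    public static class MyCustomTest implements Test {
        @Override
        public String run() {
            return "custom test executed";
        }
    }

    // Load a Test implementation reflectively from its binary class name.
    static Test load(String className) throws Exception {
        return Class.forName(className)
            .asSubclass(Test.class)
            .getDeclaredConstructor()
            .newInstance();
    }

    public static void main(String[] args) throws Exception {
        // In PE the class name would come from a command-line option.
        System.out.println(load("RuntimeTestLoader$MyCustomTest").run());
    }
}
```

In a real patch the class name would typically be validated up front (does the class exist, does it implement the expected contract) so a misspelled name fails fast with a clear error rather than deep inside the benchmark run.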
[GitHub] [hbase-operator-tools] ndimiduk closed pull request #118: HBASE-27829 Introduce build for `kuttl` image, basis for dev/test environment
ndimiduk closed pull request #118: HBASE-27829 Introduce build for `kuttl` image, basis for dev/test environment
URL: https://github.com/apache/hbase-operator-tools/pull/118
[GitHub] [hbase-operator-tools] ndimiduk commented on pull request #118: HBASE-27829 Introduce build for `kuttl` image, basis for dev/test environment
ndimiduk commented on PR #118: URL: https://github.com/apache/hbase-operator-tools/pull/118#issuecomment-159437

Superseded by https://github.com/apache/hbase-kustomize/pull/3
[jira] [Updated] (HBASE-27871) Meta replication stuck forever if wal it's still reading gets rolled and deleted
[ https://issues.apache.org/jira/browse/HBASE-27871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wellington Chevreuil updated HBASE-27871:
-----------------------------------------
    Fix Version/s: 2.5.0

> Meta replication stuck forever if wal it's still reading gets rolled and deleted
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-27871
>                 URL: https://issues.apache.org/jira/browse/HBASE-27871
>             Project: HBase
>          Issue Type: Bug
>          Components: meta replicas
>    Affects Versions: 2.6.0, 2.4.17, 2.5.4
>            Reporter: Wellington Chevreuil
>            Assignee: Wellington Chevreuil
>            Priority: Major
>             Fix For: 2.5.0, 2.6.0
>
> This affects branch-2 based releases only (in master, HBASE-26416 refactored region replication to not rely on the replication framework anymore).
> Per the original [meta region replicas design|https://docs.google.com/document/d/1jJWVc-idHhhgL4KDRpjMsQJKCl_NRaCLGiH3Wqwd3O8/edit], we use most of the replication framework for communicating changes in the primary replica back to the secondary ones, but we skip storing the queue state in ZK. In the event of a region replication crash, we should let the related replication source thread be interrupted, so that RegionReplicaReplicationEndpoint would set up a new source from scratch and make sure to update the secondary replicas.
>
> We have run into a situation in one of our customers' clusters where the region replica source faced a long lag (probably because the RSes hosting the secondary replicas were busy and slower in processing the region replication entries), so the current wal got rolled and eventually deleted whilst the replication source reader was still referring to it. In such cases, ReplicationSourceReader only sees the IOException and keeps retrying the read indefinitely; since the file is now gone, it gets stuck there forever.
> In the particular case of FNFE (which I believe would only happen for region replication), we should just raise an exception and let RegionReplicaReplicationEndpoint handle it to reset the region replication source.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-27871) Meta replication stuck forever if wal it's still reading gets rolled and deleted
[ https://issues.apache.org/jira/browse/HBASE-27871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wellington Chevreuil updated HBASE-27871:
-----------------------------------------
    Fix Version/s: 2.5.6 (was: 2.5.0)

> Meta replication stuck forever if wal it's still reading gets rolled and deleted
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-27871
>                 URL: https://issues.apache.org/jira/browse/HBASE-27871
>             Project: HBase
>          Issue Type: Bug
>          Components: meta replicas
>    Affects Versions: 2.6.0, 2.4.17, 2.5.4
>            Reporter: Wellington Chevreuil
>            Assignee: Wellington Chevreuil
>            Priority: Major
>             Fix For: 2.6.0, 2.5.6
>
> This affects branch-2 based releases only (in master, HBASE-26416 refactored region replication to not rely on the replication framework anymore).
> Per the original [meta region replicas design|https://docs.google.com/document/d/1jJWVc-idHhhgL4KDRpjMsQJKCl_NRaCLGiH3Wqwd3O8/edit], we use most of the replication framework for communicating changes in the primary replica back to the secondary ones, but we skip storing the queue state in ZK. In the event of a region replication crash, we should let the related replication source thread be interrupted, so that RegionReplicaReplicationEndpoint would set up a new source from scratch and make sure to update the secondary replicas.
>
> We have run into a situation in one of our customers' clusters where the region replica source faced a long lag (probably because the RSes hosting the secondary replicas were busy and slower in processing the region replication entries), so the current wal got rolled and eventually deleted whilst the replication source reader was still referring to it. In such cases, ReplicationSourceReader only sees the IOException and keeps retrying the read indefinitely; since the file is now gone, it gets stuck there forever.
> In the particular case of FNFE (which I believe would only happen for region replication), we should just raise an exception and let RegionReplicaReplicationEndpoint handle it to reset the region replication source.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
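The fix described in the issue above — treat FileNotFoundException as fatal for the source rather than as a retryable error — can be sketched as follows. This is a simplified illustration, not the actual ReplicationSource code; `WalSource`, `readWithRetries`, and the class names are invented for the example.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Hypothetical sketch of a WAL reader retry loop that distinguishes a
// missing file from a transient IO error. Once a rolled WAL has been
// deleted, retrying can never succeed, so the FNFE is surfaced to the
// caller, which can then rebuild the replication source from scratch.
public class WalReaderSketch {

    interface WalSource {
        String readNextEntry() throws IOException;
    }

    static String readWithRetries(WalSource source, int maxTransientRetries)
            throws IOException {
        IOException last = null;
        for (int attempt = 0; attempt < maxTransientRetries; attempt++) {
            try {
                return source.readNextEntry();
            } catch (FileNotFoundException fnfe) {
                // The WAL is gone; no number of retries will bring it back.
                throw fnfe;
            } catch (IOException ioe) {
                last = ioe; // transient; retry
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        // Simulate a source whose WAL has been rolled and deleted.
        WalSource deletedWal = () -> {
            throw new FileNotFoundException("wal has been rolled and deleted");
        };
        try {
            readWithRetries(deletedWal, 100);
        } catch (FileNotFoundException fnfe) {
            System.out.println("reset source: " + fnfe.getMessage());
        } catch (IOException ioe) {
            System.out.println("unexpected: " + ioe.getMessage());
        }
    }
}
```

The design point is that retry loops should classify errors: retrying is only useful for failures that can plausibly resolve themselves, while permanent conditions (a deleted file, a closed stream) should escalate immediately so a higher layer can recover.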
[jira] [Resolved] (HBASE-27935) Introduce Jenkins PR job for hbase-kustomize
[ https://issues.apache.org/jira/browse/HBASE-27935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nick Dimiduk resolved HBASE-27935.
----------------------------------
    Resolution: Fixed

> Introduce Jenkins PR job for hbase-kustomize
> --------------------------------------------
>
>                 Key: HBASE-27935
>                 URL: https://issues.apache.org/jira/browse/HBASE-27935
>             Project: HBase
>          Issue Type: Task
>          Components: build
>            Reporter: Nick Dimiduk
>            Assignee: Nick Dimiduk
>            Priority: Major
>
> We need something to build off of. Let's start with a clone of what's on hbase-operator-tools.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hbase-kustomize] ndimiduk commented on a diff in pull request #2: HBASE-27935 Introduce Jenkins PR job for hbase-kustomize
ndimiduk commented on code in PR #2: URL: https://github.com/apache/hbase-kustomize/pull/2#discussion_r1231860432

## dev-support/jenkins/Jenkinsfile:

@@ -0,0 +1,140 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+pipeline {
+
+agent {
+label 'hbase'
+}

Review Comment: I can, but even the large tests shouldn't be that intensive. Maybe I can do `agent { label('hbase || hbase-large') }` ?