[hbase] branch branch-3 updated: HBASE-27904 (ADDENDUM) remove doc

2023-06-16 Thread vjasani
This is an automated email from the ASF dual-hosted git repository.

vjasani pushed a commit to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-3 by this push:
 new 3ac24a5a345 HBASE-27904 (ADDENDUM) remove doc
3ac24a5a345 is described below

commit 3ac24a5a345c14cd4d35af67cd0d26b594490c22
Author: Viraj Jasani 
AuthorDate: Fri Jun 16 10:10:01 2023 -0700

HBASE-27904 (ADDENDUM) remove doc
---
 .../_chapters/bulk_data_generator_tool.adoc| 132 -
 1 file changed, 132 deletions(-)

diff --git a/src/main/asciidoc/_chapters/bulk_data_generator_tool.adoc 
b/src/main/asciidoc/_chapters/bulk_data_generator_tool.adoc
deleted file mode 100644
index 3ac6ca69312..000
--- a/src/main/asciidoc/_chapters/bulk_data_generator_tool.adoc
+++ /dev/null
@@ -1,132 +0,0 @@
-
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-== Bulk Data Generator Tool
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-
-This is a random data generator tool for HBase tables leveraging HBase bulk load.
-It can create a pre-split HBase table, and the generated data is *uniformly distributed* across all the regions of the table.
-
-=== How to Use
-
-[source]
-
usage: hbase org.apache.hadoop.hbase.util.bulkdatagenerator.BulkDataGeneratorTool [-D]*
- -d,--delete-if-exist    If it's set, the table will be deleted if it already exists.
- -h,--help               Show the help message for the tool.
- -mc,--mapper-count      The number of mapper containers to be launched.
- -o,--table-options      Table options to be set while creating the table.
- -r,--rows-per-mapper    The number of rows to be generated PER mapper.
- -sc,--split-count       The number of regions/pre-splits to be created for the table.
- -t,--table              The table name for which data needs to be generated.
-
-
-
-Examples:
-
-hbase org.apache.hadoop.hbase.util.bulkdatagenerator.BulkDataGeneratorTool -t TEST_TABLE -mc 10 -r 100 -sc 10
-
-hbase org.apache.hadoop.hbase.util.bulkdatagenerator.BulkDataGeneratorTool -t TEST_TABLE -mc 10 -r 100 -sc 10 -d -o "BACKUP=false,NORMALIZATION_ENABLED=false"
-
-hbase org.apache.hadoop.hbase.util.bulkdatagenerator.BulkDataGeneratorTool -t TEST_TABLE -mc 10 -r 100 -sc 10 -Dmapreduce.map.memory.mb=8192
-
-
-=== How it Works
-
-==== Table Schema
-The tool generates an HBase table with a single column family, i.e. *cf*, and 9 columns, i.e.
-
-ORG_ID, TOOL_EVENT_ID, EVENT_ID, VEHICLE_ID, SPEED, LATITUDE, LONGITUDE, LOCATION, TIMESTAMP
-
-with the row key as
-
-:
-
-
-
-==== Table Creation
-The tool creates a pre-split HBase table having "*split-count*" splits (i.e. *split-count* + 1 regions) with sequential 6-digit region boundary prefixes.
-Example: if a table is generated with "*split-count*" as 10, it will have (10+1) regions with the following start-end keys.
-
-(-000001, 000001-000002, 000002-000003, ..., 000009-000010, 000010-)
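The boundary scheme above can be sketched as a small standalone helper (illustrative only; the class and method names here are hypothetical, not part of the tool):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitPoints {
    // Generate "splitCount" sequential 6-digit split points; pre-splitting a
    // table on these boundaries yields splitCount + 1 regions.
    static List<String> splitPoints(int splitCount) {
        List<String> points = new ArrayList<>();
        for (int i = 1; i <= splitCount; i++) {
            points.add(String.format("%06d", i)); // 000001, 000002, ...
        }
        return points;
    }

    public static void main(String[] args) {
        // split-count = 10 -> boundaries 000001 .. 000010, i.e. 11 regions
        System.out.println(splitPoints(10));
    }
}
```

Passing these boundaries as split keys at table-creation time is what gives the (split-count + 1) regions described above.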
-
-
-==== Data Generation
-The tool creates and runs an MR job to generate the HFiles, which are bulk loaded to the table regions via `org.apache.hadoop.hbase.tool.BulkLoadHFilesTool`.
-The number of mappers is given by the "*mapper-count*" input. Each mapper generates "*records-per-mapper*" rows.
-
-`org.apache.hadoop.hbase.util.bulkdatagenerator.BulkDataGeneratorRecordReader` ensures that each record generated by a mapper is associated with an index (added to the key) ranging from 1 to "*records-per-mapper*".
-
-The TOOL_EVENT_ID column for each row has a 6-digit prefix computed as
-
-(index) mod ("split-count" + 1)
-
-Example: if 10 records are to be generated by each mapper and "*split-count*" is 4, the TOOL_EVENT_IDs for the records will have the following prefixes.
-[options="header"]
-|===
-|Record Index|TOOL_EVENT_ID's first six characters
-//--
-|1|000001
-|2|000002
-|3|000003
-|4|000004
-|5|000000
-|6|000001
-|7|000002
-|8|000003
-|9|000004
-|10|000000
-|===
-Since TOOL_EVENT_ID is the first attribute of the row key and the table region boundaries also have start-end keys with sequential 6-digit prefixes, this ensures that each mapper generates (nearly) the same number of records for every region, yielding the uniform distribution of data across regions.
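The prefix rule described above, (index) mod ("split-count" + 1) zero-padded to six digits, can be sketched as a standalone method (illustrative only; the real computation lives in the tool's mapper, and the names here are hypothetical):

```java
public class ToolEventIdPrefix {
    // TOOL_EVENT_ID prefix for a record index, per the formula above:
    // (index) mod (split-count + 1), zero-padded to six digits.
    static String prefix(long index, int splitCount) {
        return String.format("%06d", index % (splitCount + 1));
    }

    public static void main(String[] args) {
        // split-count = 4: indexes cycle through 000001..000004, then 000000
        for (long i = 1; i <= 10; i++) {
            System.out.println(i + " -> " + prefix(i, 4));
        }
    }
}
```

Because consecutive indexes cycle through all (split-count + 1) prefixes, each mapper's output is spread evenly across the region boundaries.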

[hbase] branch branch-3 updated: HBASE-27904: A random data generator tool leveraging hbase bulk load (#5280)

2023-06-16 Thread vjasani

vjasani pushed a commit to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-3 by this push:
 new 5231991a3a7 HBASE-27904: A random data generator tool leveraging hbase 
bulk load (#5280)
5231991a3a7 is described below

commit 5231991a3a7a9c863e9ae678003e9b0d1d37a355
Author: Himanshu Gwalani 
AuthorDate: Fri Jun 16 22:32:00 2023 +0530

HBASE-27904: A random data generator tool leveraging hbase bulk load (#5280)

Signed-off-by: Viraj Jasani 
---
 .../BulkDataGeneratorInputFormat.java  |  87 ++
 .../bulkdatagenerator/BulkDataGeneratorMapper.java | 138 ++
 .../BulkDataGeneratorRecordReader.java |  75 +
 .../bulkdatagenerator/BulkDataGeneratorTool.java   | 301 +
 .../hbase/util/bulkdatagenerator/Utility.java  | 102 +++
 .../_chapters/bulk_data_generator_tool.adoc| 132 +
 6 files changed, 835 insertions(+)

diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/bulkdatagenerator/BulkDataGeneratorInputFormat.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/bulkdatagenerator/BulkDataGeneratorInputFormat.java
new file mode 100644
index 000..f40951e945d
--- /dev/null
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/bulkdatagenerator/BulkDataGeneratorInputFormat.java
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util.bulkdatagenerator;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+
+public class BulkDataGeneratorInputFormat extends InputFormat<Text, NullWritable> {
+
+  public static final String MAPPER_TASK_COUNT_KEY =
+    BulkDataGeneratorInputFormat.class.getName() + "mapper.task.count";
+
+  @Override
+  public List<InputSplit> getSplits(JobContext job) throws IOException {
+    // Get the number of mapper tasks configured
+    int mapperCount = job.getConfiguration().getInt(MAPPER_TASK_COUNT_KEY, -1);
+    Preconditions.checkArgument(mapperCount > 1, MAPPER_TASK_COUNT_KEY + " is not set.");
+
+    // Create a number of input splits equal to the number of mapper tasks
+    ArrayList<InputSplit> splits = new ArrayList<>();
+    for (int i = 0; i < mapperCount; ++i) {
+      splits.add(new FakeInputSplit());
+    }
+    return splits;
+  }
+
+  @Override
+  public RecordReader<Text, NullWritable> createRecordReader(InputSplit split,
+    TaskAttemptContext context) throws IOException, InterruptedException {
+    BulkDataGeneratorRecordReader bulkDataGeneratorRecordReader =
+      new BulkDataGeneratorRecordReader();
+    bulkDataGeneratorRecordReader.initialize(split, context);
+    return bulkDataGeneratorRecordReader;
+  }
+
+  /**
+   * Dummy input split to be used by {@link BulkDataGeneratorRecordReader}
+   */
+  private static class FakeInputSplit extends InputSplit implements Writable {
+
+@Override
+public void readFields(DataInput arg0) throws IOException {
+}
+
+@Override
+public void write(DataOutput arg0) throws IOException {
+}
+
+@Override
+public long getLength() throws IOException, InterruptedException {
+  return 0;
+}
+
+@Override
+public String[] getLocations() throws IOException, InterruptedException {
+  return new String[0];
+}
+  }
+}
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/bulkdatagenerator/BulkDataGeneratorMapper.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/bulkdatagenerator/BulkDataGeneratorMapper.java
new file mode 100644
index 000..35f8b9c471e
--- /dev/null
+++ 

[hbase] branch master updated (3ab81eb658d -> 622f4ae8628)

2023-06-16 Thread vjasani

vjasani pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


from 3ab81eb658d HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 
(#5292)
 add 622f4ae8628 HBASE-27904: A random data generator tool leveraging hbase 
bulk load (#5280)

No new revisions were added by this update.

Summary of changes:
 .../BulkDataGeneratorInputFormat.java  |  87 ++
 .../bulkdatagenerator/BulkDataGeneratorMapper.java | 138 ++
 .../BulkDataGeneratorRecordReader.java |  75 +
 .../bulkdatagenerator/BulkDataGeneratorTool.java   | 301 +
 .../hbase/util/bulkdatagenerator/Utility.java  | 102 +++
 .../_chapters/bulk_data_generator_tool.adoc| 132 +
 src/main/asciidoc/book.adoc|   1 +
 7 files changed, 836 insertions(+)
 create mode 100644 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/bulkdatagenerator/BulkDataGeneratorInputFormat.java
 create mode 100644 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/bulkdatagenerator/BulkDataGeneratorMapper.java
 create mode 100644 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/bulkdatagenerator/BulkDataGeneratorRecordReader.java
 create mode 100644 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/bulkdatagenerator/BulkDataGeneratorTool.java
 create mode 100644 
hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/bulkdatagenerator/Utility.java
 create mode 100644 src/main/asciidoc/_chapters/bulk_data_generator_tool.adoc



[hbase] branch branch-3 updated: HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 (#5292)

2023-06-16 Thread zhangduo

zhangduo pushed a commit to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-3 by this push:
 new b3a5889b765 HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 
(#5292)
b3a5889b765 is described below

commit b3a5889b765d2820d046cf4a31f31a0aa2c3d169
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
AuthorDate: Fri Jun 16 23:53:33 2023 +0800

HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 (#5292)

Bumps [snappy-java](https://github.com/xerial/snappy-java) from 1.1.9.1 to 
1.1.10.1.
- [Release notes](https://github.com/xerial/snappy-java/releases)
- 
[Commits](https://github.com/xerial/snappy-java/compare/v1.1.9.1...v1.1.10.1)

---
updated-dependencies:
- dependency-name: org.xerial.snappy:snappy-java
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] 
Co-authored-by: dependabot[bot] 
<49699333+dependabot[bot]@users.noreply.github.com>
Signed-off-by: Duo Zhang 
(cherry picked from commit 3ab81eb658df981b8e7206e89b96df94155d2581)
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 7ec4a1dc9d9..2203665c037 100644
--- a/pom.xml
+++ b/pom.xml
@@ -889,7 +889,7 @@
 0.24
 1.11.0
 1.8.0
-1.1.9.1
+1.1.10.1
 1.9
 1.5.5-2
 4.1.4



[hbase] branch branch-2.5 updated: HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 (#5292)

2023-06-16 Thread zhangduo

zhangduo pushed a commit to branch branch-2.5
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.5 by this push:
 new e5ccfba6a85 HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 
(#5292)
e5ccfba6a85 is described below

commit e5ccfba6a85d877ec286a53f8ff20d4c14a29bc8
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
AuthorDate: Fri Jun 16 23:53:33 2023 +0800

HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 (#5292)

Bumps [snappy-java](https://github.com/xerial/snappy-java) from 1.1.9.1 to 
1.1.10.1.
- [Release notes](https://github.com/xerial/snappy-java/releases)
- 
[Commits](https://github.com/xerial/snappy-java/compare/v1.1.9.1...v1.1.10.1)

---
updated-dependencies:
- dependency-name: org.xerial.snappy:snappy-java
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] 
Co-authored-by: dependabot[bot] 
<49699333+dependabot[bot]@users.noreply.github.com>
Signed-off-by: Duo Zhang 
(cherry picked from commit 3ab81eb658df981b8e7206e89b96df94155d2581)
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index dcf601fbd4f..13cf5e22a26 100644
--- a/pom.xml
+++ b/pom.xml
@@ -643,7 +643,7 @@
 0.24
 1.11.0
 1.8.0
-1.1.9.1
+1.1.10.1
 1.9
 1.5.5-2
 4.1.4



[hbase] branch branch-2 updated: HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 (#5292)

2023-06-16 Thread zhangduo

zhangduo pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new b7fa9866309 HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 
(#5292)
b7fa9866309 is described below

commit b7fa986630989ec4a09815fa2447d60c9cfd2bbc
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
AuthorDate: Fri Jun 16 23:53:33 2023 +0800

HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 (#5292)

Bumps [snappy-java](https://github.com/xerial/snappy-java) from 1.1.9.1 to 
1.1.10.1.
- [Release notes](https://github.com/xerial/snappy-java/releases)
- 
[Commits](https://github.com/xerial/snappy-java/compare/v1.1.9.1...v1.1.10.1)

---
updated-dependencies:
- dependency-name: org.xerial.snappy:snappy-java
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] 
Co-authored-by: dependabot[bot] 
<49699333+dependabot[bot]@users.noreply.github.com>
Signed-off-by: Duo Zhang 
(cherry picked from commit 3ab81eb658df981b8e7206e89b96df94155d2581)
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 4da4a029902..5ddccb7e76b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -644,7 +644,7 @@
 0.24
 1.11.0
 1.8.0
-1.1.9.1
+1.1.10.1
 1.9
 1.5.5-2
 4.1.4



[hbase] branch dependabot/maven/org.xerial.snappy-snappy-java-1.1.10.1 deleted (was b9fbbc64034)

2023-06-16 Thread github-bot

github-bot pushed a change to branch 
dependabot/maven/org.xerial.snappy-snappy-java-1.1.10.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


 was b9fbbc64034 Bump snappy-java from 1.1.9.1 to 1.1.10.1

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[hbase] branch master updated: HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 (#5292)

2023-06-16 Thread zhangduo

zhangduo pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 3ab81eb658d HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 
(#5292)
3ab81eb658d is described below

commit 3ab81eb658df981b8e7206e89b96df94155d2581
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
AuthorDate: Fri Jun 16 23:53:33 2023 +0800

HBASE-27939 Bump snappy-java from 1.1.9.1 to 1.1.10.1 (#5292)

Bumps [snappy-java](https://github.com/xerial/snappy-java) from 1.1.9.1 to 
1.1.10.1.
- [Release notes](https://github.com/xerial/snappy-java/releases)
- 
[Commits](https://github.com/xerial/snappy-java/compare/v1.1.9.1...v1.1.10.1)

---
updated-dependencies:
- dependency-name: org.xerial.snappy:snappy-java
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] 
Co-authored-by: dependabot[bot] 
<49699333+dependabot[bot]@users.noreply.github.com>
Signed-off-by: Duo Zhang 
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 1503a95266f..7958e5c221f 100644
--- a/pom.xml
+++ b/pom.xml
@@ -889,7 +889,7 @@
 0.24
 1.11.0
 1.8.0
-1.1.9.1
+1.1.10.1
 1.9
 1.5.5-2
 4.1.4



[hbase] branch branch-3 updated: HBASE-27924 Remove duplicate code for NettyHBaseSaslRpcServerHandler … (#5285)

2023-06-16 Thread zhangduo

zhangduo pushed a commit to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-3 by this push:
 new 7160aa38e1e HBASE-27924 Remove duplicate code for 
NettyHBaseSaslRpcServerHandler … (#5285)
7160aa38e1e is described below

commit 7160aa38e1e8b02f64b19a516a401a1940bbc62b
Author: chenglei 
AuthorDate: Fri Jun 16 23:36:23 2023 +0800

HBASE-27924 Remove duplicate code for NettyHBaseSaslRpcServerHandler … 
(#5285)

Co-authored-by: comnetwork 
Signed-off-by: Duo Zhang 
(cherry picked from commit 0703d36daf8dd5c36164419032ff0760bb3f65cc)
---
 .../hbase/ipc/NettyHBaseSaslRpcServerHandler.java  |  25 +---
 .../hbase/ipc/TestSecurityRpcSentBytesMetrics.java | 155 +
 2 files changed, 157 insertions(+), 23 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyHBaseSaslRpcServerHandler.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyHBaseSaslRpcServerHandler.java
index cb7a173625e..dd6f84daae3 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyHBaseSaslRpcServerHandler.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyHBaseSaslRpcServerHandler.java
@@ -17,20 +17,16 @@
  */
 package org.apache.hadoop.hbase.ipc;
 
-import java.io.IOException;
 import org.apache.hadoop.hbase.security.HBaseSaslRpcServer;
 import org.apache.hadoop.hbase.security.SaslStatus;
 import org.apache.hadoop.hbase.security.SaslUnwrapHandler;
 import org.apache.hadoop.hbase.security.SaslWrapHandler;
 import org.apache.hadoop.hbase.util.NettyFutureUtils;
 import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.io.Writable;
-import org.apache.hadoop.io.WritableUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf;
-import org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream;
 import org.apache.hbase.thirdparty.io.netty.channel.ChannelHandlerContext;
 import org.apache.hbase.thirdparty.io.netty.channel.ChannelPipeline;
 import org.apache.hbase.thirdparty.io.netty.channel.SimpleChannelInboundHandler;
@@ -54,23 +50,6 @@ class NettyHBaseSaslRpcServerHandler extends SimpleChannelInboundHandler<ByteBuf> {
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.ipc;
+
+import static org.apache.hadoop.hbase.ipc.TestProtobufRpcServiceImpl.SERVICE;
+import static org.apache.hadoop.hbase.ipc.TestProtobufRpcServiceImpl.newBlockingStub;
+import static org.apache.hadoop.hbase.security.HBaseKerberosUtils.getKeytabFileForTesting;
+import static org.apache.hadoop.hbase.security.HBaseKerberosUtils.getPrincipalForTesting;
+import static org.apache.hadoop.hbase.security.HBaseKerberosUtils.loginKerberosPrincipal;
+import static org.apache.hadoop.hbase.security.HBaseKerberosUtils.setSecuredConfiguration;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.net.InetSocketAddress;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.security.HBaseKerberosUtils;
+import org.apache.hadoop.hbase.security.SecurityInfo;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.SecurityTests;
+import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+import org.apache.hbase.thirdparty.com.google.common.collect.Lists;
+
+import org.apache.hadoop.hbase.shaded.ipc.protobuf.generated.TestProtos;
+import org.apache.hadoop.hbase.shaded.ipc.protobuf.generated.TestRpcServiceProtos.TestProtobufRpcProto.BlockingInterface;
+
+@Category({ SecurityTests.class, MediumTests.class })
+public class TestSecurityRpcSentBytesMetrics {
+
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+HBaseClassTestRule.forClass(TestSecurityRpcSentBytesMetrics.class);
+
+  protected static final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
+
+  protected static final File KEYTAB_FILE =
+new File(TEST_UTIL.getDataTestDir("keytab").toUri().getPath());
+
+  protected static MiniKdc KDC;
+  

[hbase] branch branch-2 updated: HBASE-27924 Remove duplicate code for NettyHBaseSaslRpcServerHandler … (#5285)

2023-06-16 Thread zhangduo

zhangduo pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 69fcce20c74 HBASE-27924 Remove duplicate code for 
NettyHBaseSaslRpcServerHandler … (#5285)
69fcce20c74 is described below

commit 69fcce20c7411988e0c7e6d8c76a9e1010d82ce1
Author: chenglei 
AuthorDate: Fri Jun 16 23:36:23 2023 +0800

HBASE-27924 Remove duplicate code for NettyHBaseSaslRpcServerHandler … 
(#5285)

Co-authored-by: comnetwork 
Signed-off-by: Duo Zhang 
(cherry picked from commit 0703d36daf8dd5c36164419032ff0760bb3f65cc)
---
 .../hbase/ipc/NettyHBaseSaslRpcServerHandler.java  |  25 +---
 .../hbase/ipc/TestSecurityRpcSentBytesMetrics.java | 155 +
 2 files changed, 157 insertions(+), 23 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyHBaseSaslRpcServerHandler.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyHBaseSaslRpcServerHandler.java
index cb7a173625e..dd6f84daae3 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyHBaseSaslRpcServerHandler.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/NettyHBaseSaslRpcServerHandler.java
@@ -17,20 +17,16 @@
  */
 package org.apache.hadoop.hbase.ipc;
 
-import java.io.IOException;
 import org.apache.hadoop.hbase.security.HBaseSaslRpcServer;
 import org.apache.hadoop.hbase.security.SaslStatus;
 import org.apache.hadoop.hbase.security.SaslUnwrapHandler;
 import org.apache.hadoop.hbase.security.SaslWrapHandler;
 import org.apache.hadoop.hbase.util.NettyFutureUtils;
 import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.io.Writable;
-import org.apache.hadoop.io.WritableUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf;
-import org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream;
 import org.apache.hbase.thirdparty.io.netty.channel.ChannelHandlerContext;
 import org.apache.hbase.thirdparty.io.netty.channel.ChannelPipeline;
 import org.apache.hbase.thirdparty.io.netty.channel.SimpleChannelInboundHandler;
@@ -54,23 +50,6 @@ class NettyHBaseSaslRpcServerHandler extends SimpleChannelInboundHandler<ByteBuf> {
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.ipc;
+
+import static org.apache.hadoop.hbase.ipc.TestProtobufRpcServiceImpl.SERVICE;
+import static org.apache.hadoop.hbase.ipc.TestProtobufRpcServiceImpl.newBlockingStub;
+import static org.apache.hadoop.hbase.security.HBaseKerberosUtils.getKeytabFileForTesting;
+import static org.apache.hadoop.hbase.security.HBaseKerberosUtils.getPrincipalForTesting;
+import static org.apache.hadoop.hbase.security.HBaseKerberosUtils.loginKerberosPrincipal;
+import static org.apache.hadoop.hbase.security.HBaseKerberosUtils.setSecuredConfiguration;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.net.InetSocketAddress;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.security.HBaseKerberosUtils;
+import org.apache.hadoop.hbase.security.SecurityInfo;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.SecurityTests;
+import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mockito;
+
+import org.apache.hbase.thirdparty.com.google.common.collect.Lists;
+
+import org.apache.hadoop.hbase.shaded.ipc.protobuf.generated.TestProtos;
+import org.apache.hadoop.hbase.shaded.ipc.protobuf.generated.TestRpcServiceProtos.TestProtobufRpcProto.BlockingInterface;
+
+@Category({ SecurityTests.class, MediumTests.class })
+public class TestSecurityRpcSentBytesMetrics {
+
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+HBaseClassTestRule.forClass(TestSecurityRpcSentBytesMetrics.class);
+
+  protected static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  protected static final File KEYTAB_FILE =
+new File(TEST_UTIL.getDataTestDir("keytab").toUri().getPath());
+
+  protected static MiniKdc KDC;

[hbase] branch master updated (663bc642b6d -> 0703d36daf8)

2023-06-16 Thread zhangduo

zhangduo pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


from 663bc642b6d HBASE-27888 Record readBlock message in log when it takes 
too long time (#5255)
 add 0703d36daf8 HBASE-27924 Remove duplicate code for 
NettyHBaseSaslRpcServerHandler … (#5285)

No new revisions were added by this update.

Summary of changes:
 .../hbase/ipc/NettyHBaseSaslRpcServerHandler.java  | 25 +
 ...e.java => TestSecurityRpcSentBytesMetrics.java} | 43 ++
 2 files changed, 13 insertions(+), 55 deletions(-)
 copy 
hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/{TestRpcSkipInitialSaslHandshake.java
 => TestSecurityRpcSentBytesMetrics.java} (79%)



[hbase] branch branch-3 updated: HBASE-27888 Record readBlock message in log when it takes too long time (#5255)

2023-06-16 Thread zhangduo

zhangduo pushed a commit to branch branch-3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-3 by this push:
 new 5e5f3b58b39 HBASE-27888 Record readBlock message in log when it takes 
too long time (#5255)
5e5f3b58b39 is described below

commit 5e5f3b58b39373037007fd8ed623186636c0af6e
Author: chaijunjie0101 <64140218+chaijunjie0...@users.noreply.github.com>
AuthorDate: Fri Jun 16 23:15:24 2023 +0800

HBASE-27888 Record readBlock message in log when it takes too long time 
(#5255)

Signed-off-by: Duo Zhang 
(cherry picked from commit 663bc642b6d6b4e364bdeddcf197a0fa2fd8e228)
---
 .../java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java   | 13 +
 1 file changed, 13 insertions(+)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
index c84836bcd53..434529ec46f 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
@@ -1385,6 +1385,13 @@ public class HFileBlock implements Cacheable {
 
 private final boolean isPreadAllBytes;
 
+private final long readWarnTime;
+
+/**
+ * If reading block cost time in milliseconds more than the threshold, a warning will be logged.
+ */
+public static final String FS_READER_WARN_TIME_MS = "hbase.fs.reader.warn.time.ms";
+
 FSReaderImpl(ReaderContext readerContext, HFileContext fileContext, ByteBuffAllocator allocator,
   Configuration conf) throws IOException {
   this.fileSize = readerContext.getFileSize();
@@ -1402,6 +1409,8 @@ public class HFileBlock implements Cacheable {
   defaultDecodingCtx = new HFileBlockDefaultDecodingContext(conf, fileContext);
   encodedBlockDecodingCtx = defaultDecodingCtx;
   isPreadAllBytes = readerContext.isPreadAllBytes();
+  // Default warn threshold set to -1, it means skipping record the read block slow warning log.
+  readWarnTime = conf.getLong(FS_READER_WARN_TIME_MS, -1L);
 }
 
 @Override
@@ -1759,6 +1768,10 @@ public class HFileBlock implements Cacheable {
   hFileBlock.sanityCheckUncompressed();
 }
 LOG.trace("Read {} in {} ms", hFileBlock, duration);
+if (!LOG.isTraceEnabled() && this.readWarnTime >= 0 && duration > this.readWarnTime) {
+  LOG.warn("Read Block Slow: read {} cost {} ms, threshold = {} ms", hFileBlock, duration,
+    this.readWarnTime);
+}
 span.addEvent("Read block", attributesBuilder.build());
 // Cache next block header if we read it for the next time through here.
 if (nextBlockOnDiskSize != -1) {
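The warn guard added in the diff above can be factored out and checked in isolation (a hypothetical standalone helper, not HBase API): warn only when a non-negative threshold is configured, the read exceeded it, and TRACE logging is off, since TRACE already records the duration of every read.

```java
public class ReadWarnCheck {
    // Mirrors the condition in the diff above: skip when the threshold is
    // the default (-1), when the read was fast enough, or when TRACE
    // logging already covers it.
    static boolean shouldWarn(boolean traceEnabled, long warnThresholdMs, long durationMs) {
        return !traceEnabled && warnThresholdMs >= 0 && durationMs > warnThresholdMs;
    }

    public static void main(String[] args) {
        System.out.println(shouldWarn(false, 100, 250)); // slow read: warn
        System.out.println(shouldWarn(false, -1, 250));  // threshold unset: no warn
    }
}
```

Note the strict comparison: a read taking exactly the threshold does not warn, matching `duration > this.readWarnTime` in the patch.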



[hbase] branch branch-2 updated: HBASE-27888 Record readBlock message in log when it takes too long time (#5255)

2023-06-16 Thread zhangduo

zhangduo pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 2ffd868f9c9 HBASE-27888 Record readBlock message in log when it takes too long time (#5255)
2ffd868f9c9 is described below

commit 2ffd868f9c9047d9ed80cfa2fa2fafe5c8c72ffb
Author: chaijunjie0101 <64140218+chaijunjie0...@users.noreply.github.com>
AuthorDate: Fri Jun 16 23:15:24 2023 +0800

HBASE-27888 Record readBlock message in log when it takes too long time (#5255)

Signed-off-by: Duo Zhang 
(cherry picked from commit 663bc642b6d6b4e364bdeddcf197a0fa2fd8e228)
---
 .../java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java   | 13 +
 1 file changed, 13 insertions(+)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
index a067e50f30a..d3f8fd1ea84 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
@@ -1385,6 +1385,13 @@ public class HFileBlock implements Cacheable {
 
 private final boolean isPreadAllBytes;
 
+private final long readWarnTime;
+
+/**
+ * If reading block cost time in milliseconds more than the threshold, a warning will be logged.
+ */
+public static final String FS_READER_WARN_TIME_MS = "hbase.fs.reader.warn.time.ms";
+
 FSReaderImpl(ReaderContext readerContext, HFileContext fileContext, ByteBuffAllocator allocator,
   Configuration conf) throws IOException {
   this.fileSize = readerContext.getFileSize();
@@ -1402,6 +1409,8 @@ public class HFileBlock implements Cacheable {
   defaultDecodingCtx = new HFileBlockDefaultDecodingContext(conf, fileContext);
   encodedBlockDecodingCtx = defaultDecodingCtx;
   isPreadAllBytes = readerContext.isPreadAllBytes();
+  // Default warn threshold set to -1, it means skipping record the read block slow warning log.
+  readWarnTime = conf.getLong(FS_READER_WARN_TIME_MS, -1L);
 }
 
 @Override
@@ -1759,6 +1768,10 @@ public class HFileBlock implements Cacheable {
   hFileBlock.sanityCheckUncompressed();
 }
 LOG.trace("Read {} in {} ms", hFileBlock, duration);
+if (!LOG.isTraceEnabled() && this.readWarnTime >= 0 && duration > this.readWarnTime) {
+  LOG.warn("Read Block Slow: read {} cost {} ms, threshold = {} ms", hFileBlock, duration,
+    this.readWarnTime);
+}
 span.addEvent("Read block", attributesBuilder.build());
 // Cache next block header if we read it for the next time through here.
 if (nextBlockOnDiskSize != -1) {



[hbase] branch branch-2.5 updated: HBASE-27888 Record readBlock message in log when it takes too long time (#5255)

2023-06-16 Thread zhangduo
This is an automated email from the ASF dual-hosted git repository.

zhangduo pushed a commit to branch branch-2.5
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.5 by this push:
 new 1ae88f9c0fe HBASE-27888 Record readBlock message in log when it takes too long time (#5255)
1ae88f9c0fe is described below

commit 1ae88f9c0fe06f5e125fa25a05929d4b93e240d6
Author: chaijunjie0101 <64140218+chaijunjie0...@users.noreply.github.com>
AuthorDate: Fri Jun 16 23:15:24 2023 +0800

HBASE-27888 Record readBlock message in log when it takes too long time (#5255)

Signed-off-by: Duo Zhang 
(cherry picked from commit 663bc642b6d6b4e364bdeddcf197a0fa2fd8e228)
---
 .../java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java   | 13 +
 1 file changed, 13 insertions(+)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
index 909f78e6e3d..1c7bc73e604 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
@@ -1356,6 +1356,13 @@ public class HFileBlock implements Cacheable {
 
 private final boolean isPreadAllBytes;
 
+private final long readWarnTime;
+
+/**
+ * If reading block cost time in milliseconds more than the threshold, a warning will be logged.
+ */
+public static final String FS_READER_WARN_TIME_MS = "hbase.fs.reader.warn.time.ms";
+
 FSReaderImpl(ReaderContext readerContext, HFileContext fileContext, ByteBuffAllocator allocator,
   Configuration conf) throws IOException {
   this.fileSize = readerContext.getFileSize();
@@ -1373,6 +1380,8 @@ public class HFileBlock implements Cacheable {
   defaultDecodingCtx = new HFileBlockDefaultDecodingContext(conf, fileContext);
   encodedBlockDecodingCtx = defaultDecodingCtx;
   isPreadAllBytes = readerContext.isPreadAllBytes();
+  // Default warn threshold set to -1, it means skipping record the read block slow warning log.
+  readWarnTime = conf.getLong(FS_READER_WARN_TIME_MS, -1L);
 }
 
 @Override
@@ -1730,6 +1739,10 @@ public class HFileBlock implements Cacheable {
   hFileBlock.sanityCheckUncompressed();
 }
 LOG.trace("Read {} in {} ms", hFileBlock, duration);
+if (!LOG.isTraceEnabled() && this.readWarnTime >= 0 && duration > this.readWarnTime) {
+  LOG.warn("Read Block Slow: read {} cost {} ms, threshold = {} ms", hFileBlock, duration,
+    this.readWarnTime);
+}
 span.addEvent("Read block", attributesBuilder.build());
 // Cache next block header if we read it for the next time through here.
 if (nextBlockOnDiskSize != -1) {



[hbase] branch master updated: HBASE-27888 Record readBlock message in log when it takes too long time (#5255)

2023-06-16 Thread zhangduo
This is an automated email from the ASF dual-hosted git repository.

zhangduo pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 663bc642b6d HBASE-27888 Record readBlock message in log when it takes too long time (#5255)
663bc642b6d is described below

commit 663bc642b6d6b4e364bdeddcf197a0fa2fd8e228
Author: chaijunjie0101 <64140218+chaijunjie0...@users.noreply.github.com>
AuthorDate: Fri Jun 16 23:15:24 2023 +0800

HBASE-27888 Record readBlock message in log when it takes too long time (#5255)

Signed-off-by: Duo Zhang 
---
 .../java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java   | 13 +
 1 file changed, 13 insertions(+)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
index c84836bcd53..434529ec46f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
@@ -1385,6 +1385,13 @@ public class HFileBlock implements Cacheable {
 
 private final boolean isPreadAllBytes;
 
+private final long readWarnTime;
+
+/**
+ * If reading block cost time in milliseconds more than the threshold, a warning will be logged.
+ */
+public static final String FS_READER_WARN_TIME_MS = "hbase.fs.reader.warn.time.ms";
+
 FSReaderImpl(ReaderContext readerContext, HFileContext fileContext, ByteBuffAllocator allocator,
   Configuration conf) throws IOException {
   this.fileSize = readerContext.getFileSize();
@@ -1402,6 +1409,8 @@ public class HFileBlock implements Cacheable {
   defaultDecodingCtx = new HFileBlockDefaultDecodingContext(conf, fileContext);
   encodedBlockDecodingCtx = defaultDecodingCtx;
   isPreadAllBytes = readerContext.isPreadAllBytes();
+  // Default warn threshold set to -1, it means skipping record the read block slow warning log.
+  readWarnTime = conf.getLong(FS_READER_WARN_TIME_MS, -1L);
 }
 
 @Override
@@ -1759,6 +1768,10 @@ public class HFileBlock implements Cacheable {
   hFileBlock.sanityCheckUncompressed();
 }
 LOG.trace("Read {} in {} ms", hFileBlock, duration);
+if (!LOG.isTraceEnabled() && this.readWarnTime >= 0 && duration > this.readWarnTime) {
+  LOG.warn("Read Block Slow: read {} cost {} ms, threshold = {} ms", hFileBlock, duration,
+    this.readWarnTime);
+}
 span.addEvent("Read block", attributesBuilder.build());
 // Cache next block header if we read it for the next time through here.
 if (nextBlockOnDiskSize != -1) {
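
For operators, the property introduced by this commit would be set in hbase-site.xml; the 500 ms threshold below is purely illustrative (the committed default of -1 leaves the warning off):

```xml
<!-- Illustrative only: emit a WARN when a single HFile block read takes
     longer than 500 ms. The shipped default of -1 disables the warning. -->
<property>
  <name>hbase.fs.reader.warn.time.ms</name>
  <value>500</value>
</property>
```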



[hbase] branch master updated (4be74d2455a -> f534d828e95)

2023-06-16 Thread zhangduo
This is an automated email from the ASF dual-hosted git repository.

zhangduo pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


from 4be74d2455a HBASE-27917 Set version to 4.0.0-alpha-1-SNAPSHOT on master (#5276)
 add f534d828e95 HBASE-27894 create-release is broken by recent gitbox changes (#5262)

No new revisions were added by this update.

Summary of changes:
 dev-support/create-release/release-util.sh | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)



[hbase-kustomize] branch main updated: HBASE-27935 Introduce Jenkins PR job for hbase-kustomize

2023-06-16 Thread ndimiduk
This is an automated email from the ASF dual-hosted git repository.

ndimiduk pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/hbase-kustomize.git


The following commit(s) were added to refs/heads/main by this push:
 new 781615f  HBASE-27935 Introduce Jenkins PR job for hbase-kustomize
781615f is described below

commit 781615f230919cdcf42cc398cd1fc07a12ff2803
Author: Nick Dimiduk 
AuthorDate: Thu Jun 15 14:29:23 2023 +0200

HBASE-27935 Introduce Jenkins PR job for hbase-kustomize

Copy over the structure and content from hbase-operator-tools. Strip out the Maven and Java bits.
Created a Jenkins job at https://ci-hbase.apache.org/job/hbase-kustomize-github-pr/

Signed-off-by: Sean Busbey 
---
 dev-support/jenkins/Dockerfile |  49 
 dev-support/jenkins/Jenkinsfile| 140 +
 dev-support/jenkins/gather_machine_environment.sh  |  57 +
 .../jenkins/jenkins_precommit_github_yetus.sh  | 127 +++
 4 files changed, 373 insertions(+)

diff --git a/dev-support/jenkins/Dockerfile b/dev-support/jenkins/Dockerfile
new file mode 100644
index 000..0fdc6cc
--- /dev/null
+++ b/dev-support/jenkins/Dockerfile
@@ -0,0 +1,49 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Dockerfile for hbase-operator-tools pre-commit build.
+# https://ci-hbase.apache.org/job/HBase-Operator-Tools-PreCommit/
+
+ARG IMG_BASE='ubuntu'
+ARG IMG_BASE_TAG='22.04'
+
+FROM hadolint/hadolint:latest-debian as hadolint
+
+FROM ${IMG_BASE}:${IMG_BASE_TAG} as final
+ARG IMG_BASE
+ARG IMG_BASE_TAG
+
+SHELL ["/bin/bash", "-o", "pipefail", "-c"]
+
+# hadolint ignore=DL3008
+RUN apt-get -q update && apt-get -q install --no-install-recommends -y \
+   binutils \
+   git \
+   rsync \
+   shellcheck \
+   wget && \
+apt-get clean && \
+rm -rf /var/lib/apt/lists/*
+
+COPY --from=hadolint /bin/hadolint /bin/hadolint
+
+CMD ["/bin/bash"]
+
+###
+# Everything past this point is either not needed for testing or breaks Yetus.
+# So tell Yetus not to read the rest of the file:
+# YETUS CUT HERE
+###
diff --git a/dev-support/jenkins/Jenkinsfile b/dev-support/jenkins/Jenkinsfile
new file mode 100644
index 000..6d31cf5
--- /dev/null
+++ b/dev-support/jenkins/Jenkinsfile
@@ -0,0 +1,140 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+pipeline {
+
+agent {
+label 'hbase'
+}
+
+options {
+// N.B. this is per-branch, which means per PR
+disableConcurrentBuilds()
+buildDiscarder(logRotator(numToKeepStr: '15'))
+timeout (time: 1, unit: 'HOURS')
+timestamps()
+skipDefaultCheckout()
+}
+
+environment {
+SRC_REL = 'src'
+PATCH_REL = 'output'
+YETUS_REL = 'yetus'
+// Branch or tag name.  Yetus release tags are 'rel/X.Y.Z'
+YETUS_VERSION = 'rel/0.14.1'
+DOCKERFILE_REL = "${SRC_REL}/dev-support/jenkins/Dockerfile"
YETUS_DRIVER_REL = "${SRC_REL}/dev-support/jenkins/jenkins_precommit_github_yetus.sh"
+ARCHIVE_PATTERN_LIST = 'TEST-*.xml'
+BUILD_URL_ARTIFACTS = "artifact/${WORKDIR_REL}/${PATCH_REL}"
+WORKDIR_REL = 'yetus-precommit-check'
+WORKDIR = "${WORKSPACE}/${WORKDIR_REL}"
+SOURCEDIR = "${WORKDIR}/${SRC_REL}"
+PATCHDIR = "${WORKDIR}/${PATCH_REL}"
+DOCKERFILE = "${WORKDIR}/${DOCKERFILE_REL}"
+