This is an automated email from the ASF dual-hosted git repository.

taklwu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase-connectors.git

commit 3f15ae13745e065cdcbd4d6b1b44e5f2381b3610
Author: Tak Lon (Stephen) Wu <tak...@apache.org>
AuthorDate: Tue Oct 17 23:07:02 2023 -0700

    Preparing hbase-connectors release 1.0.1RC1; tagging and updates to CHANGELOG.md and RELEASENOTES.md
    
    Signed-off-by: Tak Lon (Stephen) Wu <tak...@apache.org>
---
 CHANGELOG.md    |   2 +
 RELEASENOTES.md | 134 --------------------------------------------------------
 pom.xml         |   2 +-
 3 files changed, 3 insertions(+), 135 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index a52940e..9ea614d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -81,6 +81,7 @@
 | [HBASE-22711](https://issues.apache.org/jira/browse/HBASE-22711) | Spark connector doesn't use the given mapping when inserting data |  Major | hbase-connectors |
 | [HBASE-22674](https://issues.apache.org/jira/browse/HBASE-22674) | precommit docker image installs JRE over JDK (multiple repos) |  Critical | build, hbase-connectors |
 | [HBASE-22336](https://issues.apache.org/jira/browse/HBASE-22336) | Add CHANGELOG, README and RELEASENOTES to binary tarball |  Critical | hbase-connectors |
+| [HBASE-22329](https://issues.apache.org/jira/browse/HBASE-22329) | Fix for warning The parameter forkMode is deprecated since version in hbase-spark-it |  Minor | hbase-connectors |
 | [HBASE-22320](https://issues.apache.org/jira/browse/HBASE-22320) | hbase-connectors personality skips non-scaladoc tests |  Critical | . |
 | [HBASE-22319](https://issues.apache.org/jira/browse/HBASE-22319) | Fix for warning The assembly descriptor contains a filesystem-root relative reference |  Minor | hbase-connectors |
 
@@ -113,6 +114,7 @@
 | [HBASE-25479](https://issues.apache.org/jira/browse/HBASE-25479) | [connectors] Purge use of VisibleForTesting |  Major | hbase-connectors |
 | [HBASE-25388](https://issues.apache.org/jira/browse/HBASE-25388) | Replacing Producer implementation with an extension of MockProducer on testing side in hbase-connectors |  Major | hbase-connectors |
 | [HBASE-24883](https://issues.apache.org/jira/browse/HBASE-24883) | Migrate hbase-connectors testing to ci-hadoop |  Major | build, hbase-connectors |
+| [HBASE-24261](https://issues.apache.org/jira/browse/HBASE-24261) | Redo all of our github notification integrations on new ASF infra feature |  Major | community, hbase-connectors |
 | [HBASE-23565](https://issues.apache.org/jira/browse/HBASE-23565) | Execute tests in hbase-connectors precommit |  Critical | hbase-connectors |
 | [HBASE-23032](https://issues.apache.org/jira/browse/HBASE-23032) | Upgrade to Curator 4.2.0 |  Major | . |
 | [HBASE-22599](https://issues.apache.org/jira/browse/HBASE-22599) | Let hbase-connectors compile against HBase 2.2.0 |  Major | hbase-connectors |
diff --git a/RELEASENOTES.md b/RELEASENOTES.md
index ebe91f4..0530e27 100644
--- a/RELEASENOTES.md
+++ b/RELEASENOTES.md
@@ -58,137 +58,3 @@ The HBase connector for working with Apache Spark now works with the shaded clie
 
 
 
-# HBase  connector-1.0.0 Release Notes
-
-These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.
-
-
----
-
-* [HBASE-13992](https://issues.apache.org/jira/browse/HBASE-13992) | *Major* | **Integrate SparkOnHBase into HBase**
-
-This release includes initial support for running Spark against HBase with a richer feature set than was previously possible with MapReduce bindings:
-
-\* Support for Spark and Spark Streaming against Spark 2.1.1
-\* RDD/DStream formation from scan operations
-\* convenience methods for interacting with HBase from an HBase backed RDD / DStream instance
-\* examples in both the Spark Java API and Spark Scala API
-\* support for running against a secure HBase cluster
-
-
----
-
-* [HBASE-14849](https://issues.apache.org/jira/browse/HBASE-14849) | *Major* | **Add option to set block cache to false on SparkSQL executions**
-
-For user configurable parameters for HBase datasources. Please refer to org.apache.hadoop.hbase.spark.datasources.HBaseSparkConf for details.
-
-User can either set them in SparkConf, which will take effect globally, or configure it per table, which will overwrite the value set in SparkConf. If not set, the default value will take effect.
-
-Currently three parameters are supported.
-1. spark.hbase.blockcache.enable for blockcache enable/disable. Default is enable, but note that this potentially may slow down the system.
-2. spark.hbase.cacheSize for cache size when performing HBase table scan. Default value is 1000
-3. spark.hbase.batchNum for the batch number when performing HBase table scan. Default value is 1000.
-
-
----
-
-* [HBASE-15184](https://issues.apache.org/jira/browse/HBASE-15184) | *Critical* | **SparkSQL Scan operation doesn't work on kerberos cluster**
-
-Before this patch, users of the spark HBaseContext would fail due to lack of privilege exceptions.
-
-Note:
-\* It is preferred to have spark in spark-on-yarn mode if Kerberos is used.
-\* This is orthogonal to issues with a kerberized spark cluster via InputFormats.
-
-
----
-
-* [HBASE-15572](https://issues.apache.org/jira/browse/HBASE-15572) | *Major* | **Adding optional timestamp semantics to HBase-Spark**
-
-Right now the timestamp is always latest. With this patch, users can select timestamps they want.
-In this patch, 4 parameters, "timestamp", "minTimestamp", "maxiTimestamp" and "maxVersions" are added to HBaseSparkConf. Users can select a timestamp, they can also select a time range with minimum timestamp and maximum timestamp.
-
-
----
-
-* [HBASE-17574](https://issues.apache.org/jira/browse/HBASE-17574) | *Major* | **Clean up how to run tests under hbase-spark module**
-
-Run tests under the root dir or the hbase-spark dir:
-1. mvn test //run all small and medium java tests, and all scala tests
-2. mvn test -P skipSparkTests //skip all scala and java tests in hbase-spark
-3. mvn test -P runAllTests //run all tests, including scala and all java tests, even the large tests
-
-To run a specific test case, since we have two plugins, we need to specify both java and scala.
-When running only a scala or a java test case, disable the other plugin using -Dxx=None as follows:
-1. mvn test -Dtest=TestJavaHBaseContext -DwildcardSuites=None // java unit test
-2. mvn test -Dtest=None -DwildcardSuites=org.apache.hadoop.hbase.spark.BulkLoadSuite //scala unit test; only the full name is supported by the scalatest plugin
-
-
----
-
-* [HBASE-17933](https://issues.apache.org/jira/browse/HBASE-17933) | *Major* | **[hbase-spark] Support Java api for bulkload**
-
-<!-- markdown -->
-The integration module for Apache Spark now includes Java-friendly equivalents for the `bulkLoad` and `bulkLoadThinRows` methods in `JavaHBaseContext`.
-
-
----
-
-* [HBASE-18175](https://issues.apache.org/jira/browse/HBASE-18175) | *Critical* | **Add hbase-spark integration test into hbase-spark-it**
-
-<!-- markdown -->
-HBase now ships with an integration test for our integration with Apache Spark.
-
-You can run this test on a cluster by using an equivalent to the below, e.g. if the version of HBase is 2.0.0-alpha-2
-
-```
-spark-submit --class org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad HBASE_HOME/lib/hbase-spark-it-2.0.0-alpha-2-tests.jar -Dhbase.spark.bulkload.chainlength=500000 -m slowDeterministic
-```
-
-
----
-
-* [HBASE-16179](https://issues.apache.org/jira/browse/HBASE-16179) | *Critical* | **Fix compilation errors when building hbase-spark against Spark 2.0**
-
-As of this JIRA, Spark version is upgraded from 1.6 to 2.1.1
-
-
----
-
-* [HBASE-21002](https://issues.apache.org/jira/browse/HBASE-21002) | *Minor* | **Create assembly and scripts to start Kafka Proxy**
-
-Adds a kafka proxy that appears to hbase as a replication peer. Use to tee table edits to kafka. Has mechanism for dropping/routing updates. See https://github.com/apache/hbase-connectors/tree/master/kafka for documentation.
-
-
----
-
-* [HBASE-21434](https://issues.apache.org/jira/browse/HBASE-21434) | *Major* | **[hbase-connectors] Cleanup of kafka dependencies; clarify hadoop version**
-
-Cleaned up kafka submodule dependencies. Added used dependencies to pom and removed the unused. Depends explicitly on hadoop2. No messing w/ hadoop3 versions.
-
-
----
-
-* [HBASE-21446](https://issues.apache.org/jira/browse/HBASE-21446) | *Major* | **[hbase-connectors] Update spark and scala versions; add some doc on how to generate artifacts with different versions**
-
-Updates our hbase-spark integration so it defaults to spark 2.4.0 (October 2018) from 2.1.1 and Scala 2.11.12 (from 2.11.8).
-
-
----
-
-* [HBASE-15320](https://issues.apache.org/jira/browse/HBASE-15320) | *Major* | **HBase connector for Kafka Connect**
-
-This commit adds a kafka connector. The connector acts as a replication peer and sends modifications in HBase to kafka.
-
-For further information, please refer to kafka/README.md.
-
-
----
-
-* [HBASE-14789](https://issues.apache.org/jira/browse/HBASE-14789) | *Major* | **Enhance the current spark-hbase connector**
-
-New features in hbase-spark:
-\* native type support (short, int, long, float, double),
-\* support for Dataframe writes,
-\* avro support,
-\* catalog can be defined in json.
diff --git a/pom.xml b/pom.xml
index bdd41e9..e849cd1 100644
--- a/pom.xml
+++ b/pom.xml
@@ -120,7 +120,7 @@
   </issueManagement>
   <properties>
     <!-- See https://maven.apache.org/maven-ci-friendly.html -->
-    <revision>1.1.0-SNAPSHOT</revision>
+    <revision>1.0.1</revision>
     <maven.javadoc.skip>true</maven.javadoc.skip>
     <maven.build.timestamp.format>yyyy-MM-dd'T'HH:mm</maven.build.timestamp.format>
     <buildDate>${maven.build.timestamp}</buildDate>
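
The pom.xml hunk above works because the project uses Maven's CI-friendly versioning (per the comment's link): the project version is declared once as the `revision` property, so cutting a release means bumping that single value, as this commit does for 1.0.1. A minimal sketch of that setup, with illustrative coordinates rather than the real hbase-connectors pom:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example</groupId>
  <artifactId>example-connectors</artifactId>
  <!-- the version resolves from the revision property below -->
  <version>${revision}</version>
  <properties>
    <!-- bump this single property to change the release version -->
    <revision>1.0.1</revision>
  </properties>
</project>
```

The property can also be overridden per build without editing the pom, e.g. `mvn -Drevision=1.1.0-SNAPSHOT package`.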
