This is an automated email from the ASF dual-hosted git repository.
vinish pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-xtable.git
The following commit(s) were added to refs/heads/main by this push:
new 3bd1ff52 [MINOR] Moving to 0.2.0-SNAPSHOT on main branch
3bd1ff52 is described below
commit 3bd1ff527aa6a0a3dd1b8ffc19e5978b5e702a26
Author: Vinish Reddy <[email protected]>
AuthorDate: Wed Aug 21 19:03:46 2024 +0530
[MINOR] Moving to 0.2.0-SNAPSHOT on main branch
---
README.md | 2 +-
demo/notebook/demo.ipynb | 6 +++---
demo/start_demo.sh | 6 +++---
pom.xml | 2 +-
website/docs/biglake-metastore.md | 4 ++--
website/docs/bigquery.md | 4 ++--
website/docs/fabric.md | 2 +-
website/docs/glue-catalog.md | 4 ++--
website/docs/hms.md | 4 ++--
website/docs/how-to.md | 4 ++--
website/docs/unity-catalog.md | 4 ++--
xtable-api/pom.xml | 2 +-
xtable-core/pom.xml | 2 +-
xtable-hudi-support/pom.xml | 2 +-
xtable-hudi-support/xtable-hudi-support-extensions/README.md | 6 +++---
xtable-hudi-support/xtable-hudi-support-extensions/pom.xml | 2 +-
xtable-hudi-support/xtable-hudi-support-utils/pom.xml | 2 +-
xtable-utilities/pom.xml | 2 +-
18 files changed, 30 insertions(+), 30 deletions(-)
diff --git a/README.md b/README.md
index 6f8cb41b..9eee56f8 100644
--- a/README.md
+++ b/README.md
@@ -110,7 +110,7 @@ catalogOptions: # all other options are passed through in a map
key1: value1
key2: value2
```
-5. run with `java -jar xtable-utilities/target/xtable-utilities-0.1.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml [--hadoopConfig hdfs-site.xml] [--convertersConfig converters.yaml] [--icebergCatalogConfig catalog.yaml]`
+5. run with `java -jar xtable-utilities/target/xtable-utilities-0.2.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml [--hadoopConfig hdfs-site.xml] [--convertersConfig converters.yaml] [--icebergCatalogConfig catalog.yaml]`
The bundled jar includes hadoop dependencies for AWS, Azure, and GCP. Sample hadoop configurations for configuring the converters
can be found in the [xtable-hadoop-defaults.xml](https://github.com/apache/incubator-xtable/blob/main/utilities/src/main/resources/xtable-hadoop-defaults.xml) file.
The custom hadoop configurations can be passed in with the `--hadoopConfig [custom-hadoop-config-file]` option.
diff --git a/demo/notebook/demo.ipynb b/demo/notebook/demo.ipynb
index 40aa55ea..47bfd8ac 100644
--- a/demo/notebook/demo.ipynb
+++ b/demo/notebook/demo.ipynb
@@ -27,9 +27,9 @@
"import $ivy.`org.apache.hudi:hudi-spark3.2-bundle_2.12:0.14.0`\n",
"import $ivy.`org.apache.hudi:hudi-java-client:0.14.0`\n",
"import $ivy.`io.delta:delta-core_2.12:2.0.2`\n",
- "import $cp.`/home/jars/xtable-core-0.1.0-SNAPSHOT.jar`\n",
- "import $cp.`/home/jars/xtable-api-0.1.0-SNAPSHOT.jar`\n",
- "import $cp.`/home/jars/xtable-hudi-support-utils-0.1.0-SNAPSHOT.jar`\n",
+ "import $cp.`/home/jars/xtable-core-0.2.0-SNAPSHOT.jar`\n",
+ "import $cp.`/home/jars/xtable-api-0.2.0-SNAPSHOT.jar`\n",
+ "import $cp.`/home/jars/xtable-hudi-support-utils-0.2.0-SNAPSHOT.jar`\n",
"import $ivy.`org.apache.iceberg:iceberg-hive-runtime:1.3.1`\n",
"import $ivy.`io.trino:trino-jdbc:431`\n",
"import java.util._\n",
diff --git a/demo/start_demo.sh b/demo/start_demo.sh
index 2232ae85..e2c6d4dc 100755
--- a/demo/start_demo.sh
+++ b/demo/start_demo.sh
@@ -23,9 +23,9 @@ cd $XTABLE_HOME
mvn install -am -pl xtable-core -DskipTests -T 2
mkdir -p demo/jars
-cp xtable-hudi-support/xtable-hudi-support-utils/target/xtable-hudi-support-utils-0.1.0-SNAPSHOT.jar demo/jars
-cp xtable-api/target/xtable-api-0.1.0-SNAPSHOT.jar demo/jars
-cp xtable-core/target/xtable-core-0.1.0-SNAPSHOT.jar demo/jars
+cp xtable-hudi-support/xtable-hudi-support-utils/target/xtable-hudi-support-utils-0.2.0-SNAPSHOT.jar demo/jars
+cp xtable-api/target/xtable-api-0.2.0-SNAPSHOT.jar demo/jars
+cp xtable-core/target/xtable-core-0.2.0-SNAPSHOT.jar demo/jars
cd demo
docker-compose up
diff --git a/pom.xml b/pom.xml
index d0428d9f..ce7749e1 100644
--- a/pom.xml
+++ b/pom.xml
@@ -24,7 +24,7 @@
<artifactId>xtable</artifactId>
<name>xtable</name>
<inceptionYear>2024</inceptionYear>
- <version>0.1.0-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
<packaging>pom</packaging>
<parent>
diff --git a/website/docs/biglake-metastore.md b/website/docs/biglake-metastore.md
index db10daa2..4ee4c2c2 100644
--- a/website/docs/biglake-metastore.md
+++ b/website/docs/biglake-metastore.md
@@ -25,7 +25,7 @@ This document walks through the steps to register an Apache XTable™ (Incubatin
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account_key.json
```
5. Clone the Apache XTable™ (Incubating) [repository](https://github.com/apache/incubator-xtable) and create the
- `xtable-utilities-0.1.0-SNAPSHOT-bundled.jar` by following the steps on the [Installation page](/docs/setup)
+ `xtable-utilities-0.2.0-SNAPSHOT-bundled.jar` by following the steps on the [Installation page](/docs/setup)
6. Download the [BigLake Iceberg JAR](gs://spark-lib/biglake/biglake-catalog-iceberg1.2.0-0.1.0-with-dependencies.jar) locally.
Apache XTable™ (Incubating) requires the JAR to be present in the classpath.
@@ -117,7 +117,7 @@ catalogOptions:
From your terminal under the cloned Apache XTable™ (Incubating) directory, run the sync process using the below command.
```shell md title="shell"
-java -cp xtable-utilities/target/xtable-utilities-0.1.0-SNAPSHOT-bundled.jar:/path/to/downloaded/biglake-catalog-iceberg1.2.0-0.1.0-with-dependencies.jar org.apache.xtable.utilities.RunSync --datasetConfig my_config.yaml --icebergCatalogConfig catalog.yaml
+java -cp xtable-utilities/target/xtable-utilities-0.2.0-SNAPSHOT-bundled.jar:/path/to/downloaded/biglake-catalog-iceberg1.2.0-0.1.0-with-dependencies.jar org.apache.xtable.utilities.RunSync --datasetConfig my_config.yaml --icebergCatalogConfig catalog.yaml
```
:::tip Note:
diff --git a/website/docs/bigquery.md b/website/docs/bigquery.md
index 46a928aa..3377d8b2 100644
--- a/website/docs/bigquery.md
+++ b/website/docs/bigquery.md
@@ -35,9 +35,9 @@ If you are not planning on using Iceberg, then you do not need to add these to y
:::
#### Steps to add additional configurations to the Hudi writers:
-1. Add the extensions jar (`xtable-hudi-extensions-0.1.0-SNAPSHOT-bundled.jar`) to your class path
+1. Add the extensions jar (`xtable-hudi-extensions-0.2.0-SNAPSHOT-bundled.jar`) to your class path
For example, if you're using the Hudi [quick-start guide](https://hudi.apache.org/docs/quick-start-guide#spark-shellsql)
- for spark you can just add `--jars xtable-hudi-extensions-0.1.0-SNAPSHOT-bundled.jar` to the end of the command.
+ for spark you can just add `--jars xtable-hudi-extensions-0.2.0-SNAPSHOT-bundled.jar` to the end of the command.
2. Set the following configurations in your writer options:
```shell md title="shell"
hoodie.avro.write.support.class: org.apache.xtable.hudi.extensions.HoodieAvroWriteSupportWithFieldIds
diff --git a/website/docs/fabric.md b/website/docs/fabric.md
index 7bea88ee..9bae2d9b 100644
--- a/website/docs/fabric.md
+++ b/website/docs/fabric.md
@@ -98,7 +98,7 @@ An example hadoop configuration for authenticating to ADLS storage account is as
```
```shell md title="shell"
-java -jar xtable-utilities/target/xtable-utilities-0.1.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml --hadoopConfig hadoop.xml
+java -jar xtable-utilities/target/xtable-utilities-0.2.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml --hadoopConfig hadoop.xml
```
Running the above command will translate the table `people` in Iceberg or Hudi format to Delta Lake format. To validate
diff --git a/website/docs/glue-catalog.md b/website/docs/glue-catalog.md
index 62468ed9..6d1388c9 100644
--- a/website/docs/glue-catalog.md
+++ b/website/docs/glue-catalog.md
@@ -19,7 +19,7 @@ This document walks through the steps to register an Apache XTable™ (Incubatin
also set up access credentials by following the steps [here](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html)
3. Clone the Apache XTable™ (Incubating) [repository](https://github.com/apache/incubator-xtable) and create the
- `xtable-utilities-0.1.0-SNAPSHOT-bundled.jar` by following the steps on the [Installation page](/docs/setup)
+ `xtable-utilities-0.2.0-SNAPSHOT-bundled.jar` by following the steps on the [Installation page](/docs/setup)
## Steps
### Running sync
@@ -84,7 +84,7 @@ Replace with appropriate values for `sourceFormat`, `tableBasePath` and `tableNa
From your terminal under the cloned xtable directory, run the sync process using the below command.
```shell md title="shell"
- java -jar xtable-utilities/target/xtable-utilities-0.1.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml
+ java -jar xtable-utilities/target/xtable-utilities-0.2.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml
```
:::tip Note:
diff --git a/website/docs/hms.md b/website/docs/hms.md
index 2698c839..7a4696e8 100644
--- a/website/docs/hms.md
+++ b/website/docs/hms.md
@@ -17,7 +17,7 @@ This document walks through the steps to register an Apache XTable™ (Incubatin
or a distributed system like Amazon EMR, Google Cloud's Dataproc, Azure HDInsight etc.
This is a required step to register the table in HMS using a Spark client.
3. Clone the XTable™ (Incubating) [repository](https://github.com/apache/incubator-xtable) and create the
- `xtable-utilities-0.1.0-SNAPSHOT-bundled.jar` by following the steps on the [Installation page](/docs/setup)
+ `xtable-utilities-0.2.0-SNAPSHOT-bundled.jar` by following the steps on the [Installation page](/docs/setup)
4. This guide also assumes that you have configured the Hive Metastore locally or on EMR/Dataproc/HDInsight
and is already running.
@@ -88,7 +88,7 @@ datasets:
From your terminal under the cloned Apache XTable™ (Incubating) directory, run the sync process using the below command.
```shell md title="shell"
-java -jar xtable-utilities/target/xtable-utilities-0.1.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml
+java -jar xtable-utilities/target/xtable-utilities-0.2.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml
```
:::tip Note:
diff --git a/website/docs/how-to.md b/website/docs/how-to.md
index 5e457c0f..ea18a663 100644
--- a/website/docs/how-to.md
+++ b/website/docs/how-to.md
@@ -24,7 +24,7 @@ history to enable proper point in time queries.
1. A compute instance where you can run Apache Spark. This can be your local machine, docker,
or a distributed service like Amazon EMR, Google Cloud's Dataproc, Azure HDInsight etc
2. Clone the Apache XTable™ (Incubating) [repository](https://github.com/apache/incubator-xtable) and create the
- `xtable-utilities-0.1.0-SNAPSHOT-bundled.jar` by following the steps on the [Installation page](/docs/setup)
+ `xtable-utilities-0.2.0-SNAPSHOT-bundled.jar` by following the steps on the [Installation page](/docs/setup)
3. Optional: Setup access to write to and/or read from distributed storage services like:
* Amazon S3 by following the steps [here](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) to install AWSCLIv2
@@ -351,7 +351,7 @@ Authentication for GCP requires service account credentials to be exported. i.e.
In your terminal under the cloned Apache XTable™ (Incubating) directory, run the below command.
```shell md title="shell"
-java -jar xtable-utilities/target/xtable-utilities-0.1.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml
+java -jar xtable-utilities/target/xtable-utilities-0.2.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml
```
**Optional:**
diff --git a/website/docs/unity-catalog.md b/website/docs/unity-catalog.md
index 2467a321..b2fb83fe 100644
--- a/website/docs/unity-catalog.md
+++ b/website/docs/unity-catalog.md
@@ -17,7 +17,7 @@ This document walks through the steps to register an Apache XTable™ (Incubatin
3. Create a Unity Catalog metastore in Databricks as outlined [here](https://docs.gcp.databricks.com/data-governance/unity-catalog/create-metastore.html#create-a-unity-catalog-metastore).
4. Create an external location in Databricks as outlined [here](https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-location.html).
5. Clone the Apache XTable™ (Incubating) [repository](https://github.com/apache/incubator-xtable) and create the
- `xtable-utilities-0.1.0-SNAPSHOT-bundled.jar` by following the steps on the [Installation page](/docs/setup)
+ `xtable-utilities-0.2.0-SNAPSHOT-bundled.jar` by following the steps on the [Installation page](/docs/setup)
## Pre-requisites (for open-source Unity Catalog)
1. Source table(s) (Hudi/Iceberg) already written to external storage locations like S3/GCS/ADLS or local.
@@ -48,7 +48,7 @@ datasets:
From your terminal under the cloned Apache XTable™ (Incubating) directory, run the sync process using the below command.
```shell md title="shell"
-java -jar xtable-utilities/target/xtable-utilities-0.1.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml
+java -jar xtable-utilities/target/xtable-utilities-0.2.0-SNAPSHOT-bundled.jar --datasetConfig my_config.yaml
```
:::tip Note:
diff --git a/xtable-api/pom.xml b/xtable-api/pom.xml
index 71306aee..fd31cbd8 100644
--- a/xtable-api/pom.xml
+++ b/xtable-api/pom.xml
@@ -25,7 +25,7 @@
<parent>
<groupId>org.apache.xtable</groupId>
<artifactId>xtable</artifactId>
- <version>0.1.0-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
</parent>
<dependencies>
diff --git a/xtable-core/pom.xml b/xtable-core/pom.xml
index f505d265..675915e4 100644
--- a/xtable-core/pom.xml
+++ b/xtable-core/pom.xml
@@ -25,7 +25,7 @@
<parent>
<groupId>org.apache.xtable</groupId>
<artifactId>xtable</artifactId>
- <version>0.1.0-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
</parent>
<dependencies>
diff --git a/xtable-hudi-support/pom.xml b/xtable-hudi-support/pom.xml
index 9dd29d3c..84f66e81 100644
--- a/xtable-hudi-support/pom.xml
+++ b/xtable-hudi-support/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.xtable</groupId>
<artifactId>xtable</artifactId>
- <version>0.1.0-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
</parent>
<artifactId>xtable-hudi-support</artifactId>
diff --git a/xtable-hudi-support/xtable-hudi-support-extensions/README.md b/xtable-hudi-support/xtable-hudi-support-extensions/README.md
index 6e4e3575..713f6546 100644
--- a/xtable-hudi-support/xtable-hudi-support-extensions/README.md
+++ b/xtable-hudi-support/xtable-hudi-support-extensions/README.md
@@ -21,8 +21,8 @@
### When should you use them?
The Hudi extensions provide the ability to add field IDs to the parquet schema when writing with Hudi. This is a requirement for some engines, like BigQuery and Snowflake, when reading an Iceberg table. If you are not planning on using Iceberg, then you do not need to add these to your Hudi writers.
### How do you use them?
-1. Add the extensions jar (`xtable-hudi-extensions-0.1.0-SNAPSHOT-bundled.jar`) to your class path.
-For example, if you're using the Hudi [quick-start guide](https://hudi.apache.org/docs/quick-start-guide#spark-shellsql) for spark you can just add `--jars xtable-hudi-extensions-0.1.0-SNAPSHOT-bundled.jar` to the end of the command.
+1. Add the extensions jar (`xtable-hudi-extensions-0.2.0-SNAPSHOT-bundled.jar`) to your class path.
+For example, if you're using the Hudi [quick-start guide](https://hudi.apache.org/docs/quick-start-guide#spark-shellsql) for spark you can just add `--jars xtable-hudi-extensions-0.2.0-SNAPSHOT-bundled.jar` to the end of the command.
2. Set the following configurations in your writer options:
`hoodie.avro.write.support.class: org.apache.xtable.hudi.extensions.HoodieAvroWriteSupportWithFieldIds`
`hoodie.client.init.callback.classes: org.apache.xtable.hudi.extensions.AddFieldIdsClientInitCallback`
@@ -33,7 +33,7 @@ For example, if you're using the Hudi [quick-start guide](https://hudi.apache.or
### When should you use them?
If you want to use XTable with Hudi [streaming ingestion](https://hudi.apache.org/docs/hoodie_streaming_ingestion) to sync each commit into other table formats.
### How do you use them?
-1. Add the extensions jar (`xtable-hudi-extensions-0.1.0-SNAPSHOT-bundled.jar`) to your class path.
+1. Add the extensions jar (`xtable-hudi-extensions-0.2.0-SNAPSHOT-bundled.jar`) to your class path.
2. Add `org.apache.xtable.hudi.sync.XTableSyncTool` to your list of sync classes
3. Set the following configurations based on your preferences:
`hoodie.xtable.formats.to.sync: "ICEBERG,DELTA"` (or simply use one format)
diff --git a/xtable-hudi-support/xtable-hudi-support-extensions/pom.xml b/xtable-hudi-support/xtable-hudi-support-extensions/pom.xml
index f149cb62..a5965898 100644
--- a/xtable-hudi-support/xtable-hudi-support-extensions/pom.xml
+++ b/xtable-hudi-support/xtable-hudi-support-extensions/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.xtable</groupId>
<artifactId>xtable-hudi-support</artifactId>
- <version>0.1.0-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
</parent>
<dependencies>
diff --git a/xtable-hudi-support/xtable-hudi-support-utils/pom.xml b/xtable-hudi-support/xtable-hudi-support-utils/pom.xml
index 5a44bde8..2f9396bc 100644
--- a/xtable-hudi-support/xtable-hudi-support-utils/pom.xml
+++ b/xtable-hudi-support/xtable-hudi-support-utils/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.xtable</groupId>
<artifactId>xtable-hudi-support</artifactId>
- <version>0.1.0-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
</parent>
<dependencies>
diff --git a/xtable-utilities/pom.xml b/xtable-utilities/pom.xml
index 9492aea3..e11ba253 100644
--- a/xtable-utilities/pom.xml
+++ b/xtable-utilities/pom.xml
@@ -21,7 +21,7 @@
<parent>
<groupId>org.apache.xtable</groupId>
<artifactId>xtable</artifactId>
- <version>0.1.0-SNAPSHOT</version>
+ <version>0.2.0-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
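The diff above changes the `<version>` element in every module POM. A quick local sanity check after such a bump is to extract the version with POSIX `sed` and confirm it reads `0.2.0-SNAPSHOT`; this sketch uses a hypothetical sample POM at `/tmp/pom-sample.xml` standing in for any of the module POMs in the tree.

```shell
# Write a minimal sample POM (stand-in for e.g. xtable-api/pom.xml)
cat > /tmp/pom-sample.xml <<'EOF'
<project>
  <groupId>org.apache.xtable</groupId>
  <artifactId>xtable</artifactId>
  <version>0.2.0-SNAPSHOT</version>
</project>
EOF

# Print the text content of the first <version> element
sed -n 's:.*<version>\(.*\)</version>.*:\1:p' /tmp/pom-sample.xml
```

Running this against each real module POM (or `mvn help:evaluate -Dexpression=project.version` in a full checkout) should print `0.2.0-SNAPSHOT` everywhere after this commit.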