This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-paimon.git


The following commit(s) were added to refs/heads/master by this push:
     new 57fe4f8a1 [hotfix] Fix hdfs path in the document. (#1082)
57fe4f8a1 is described below

commit 57fe4f8a1ef0d6845c2ad71378e7c76cfd24dbe7
Author: Kerwin <37063904+zhuangch...@users.noreply.github.com>
AuthorDate: Tue May 9 10:42:26 2023 +0800

    [hotfix] Fix hdfs path in the document. (#1082)
---
 docs/content/how-to/creating-catalogs.md | 16 ++++++++--------
 docs/content/how-to/creating-tables.md   | 18 +++++++++---------
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/docs/content/how-to/creating-catalogs.md b/docs/content/how-to/creating-catalogs.md
index 2cb9ea143..db7e7b65a 100644
--- a/docs/content/how-to/creating-catalogs.md
+++ b/docs/content/how-to/creating-catalogs.md
@@ -39,12 +39,12 @@ See [CatalogOptions]({{< ref "maintenance/configurations#catalogoptions" >}}) fo
 
 {{< tab "Flink" >}}
 
-The following Flink SQL registers and uses a Paimon catalog named `my_catalog`. Metadata and table files are stored under `hdfs://path/to/warehouse`.
+The following Flink SQL registers and uses a Paimon catalog named `my_catalog`. Metadata and table files are stored under `hdfs:///path/to/warehouse`.
 
 ```sql
 CREATE CATALOG my_catalog WITH (
     'type' = 'paimon',
-    'warehouse' = 'hdfs://path/to/warehouse'
+    'warehouse' = 'hdfs:///path/to/warehouse'
 );
 
 USE CATALOG my_catalog;
@@ -56,12 +56,12 @@ You can define any default table options with the prefix `table-default.` for ta
 
 {{< tab "Spark3" >}}
 
-The following shell command registers a paimon catalog named `paimon`. Metadata and table files are stored under `hdfs://path/to/warehouse`.
+The following shell command registers a paimon catalog named `paimon`. Metadata and table files are stored under `hdfs:///path/to/warehouse`.
 
 ```bash
 spark-sql ... \
     --conf spark.sql.catalog.paimon=org.apache.paimon.spark.SparkCatalog \
-    --conf spark.sql.catalog.paimon.warehouse=hdfs://path/to/warehouse
+    --conf spark.sql.catalog.paimon.warehouse=hdfs:///path/to/warehouse
 ```
 
 You can define any default table options with the prefix `spark.sql.catalog.paimon.table-default.` for tables created in the catalog.
@@ -88,7 +88,7 @@ To use Hive catalog, Database name, Table name and Field names should be lower c
 
 Paimon Hive catalog in Flink relies on Flink Hive connector bundled jar. You should first download Flink Hive connector bundled jar and add it to classpath. See [here](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/hive/overview/#using-bundled-hive-jar) for more info.
 
-The following Flink SQL registers and uses a Paimon Hive catalog named `my_hive`. Metadata and table files are stored under `hdfs://path/to/warehouse`. In addition, metadata is also stored in Hive metastore.
+The following Flink SQL registers and uses a Paimon Hive catalog named `my_hive`. Metadata and table files are stored under `hdfs:///path/to/warehouse`. In addition, metadata is also stored in Hive metastore.
 
 If your Hive requires security authentication such as Kerberos, LDAP, Ranger and so on. You can specify the hive-conf-dir parameter to the hive-site.xml file path.
 
@@ -97,7 +97,7 @@ CREATE CATALOG my_hive WITH (
     'type' = 'paimon',
     'metastore' = 'hive',
     'uri' = 'thrift://<hive-metastore-host-name>:<port>',
-    'warehouse' = 'hdfs://path/to/warehouse'
+    'warehouse' = 'hdfs:///path/to/warehouse'
 );
 
 USE CATALOG my_hive;
@@ -111,12 +111,12 @@ You can define any default table options with the prefix `table-default.` for ta
 
 Your Spark installation should be able to detect, or already contains Hive dependencies. See [here](https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html) for more information.
 
-The following shell command registers a Paimon Hive catalog named `paimon`. Metadata and table files are stored under `hdfs://path/to/warehouse`. In addition, metadata is also stored in Hive metastore.
+The following shell command registers a Paimon Hive catalog named `paimon`. Metadata and table files are stored under `hdfs:///path/to/warehouse`. In addition, metadata is also stored in Hive metastore.
 
 ```bash
 spark-sql ... \
     --conf spark.sql.catalog.paimon=org.apache.paimon.spark.SparkCatalog \
-    --conf spark.sql.catalog.paimon.warehouse=hdfs://path/to/warehouse \
+    --conf spark.sql.catalog.paimon.warehouse=hdfs:///path/to/warehouse \
     --conf spark.sql.catalog.paimon.metastore=hive \
     --conf spark.sql.catalog.paimon.uri=thrift://<hive-metastore-host-name>:<port>
 ```
diff --git a/docs/content/how-to/creating-tables.md b/docs/content/how-to/creating-tables.md
index 95a5a6f7e..b77766fd9 100644
--- a/docs/content/how-to/creating-tables.md
+++ b/docs/content/how-to/creating-tables.md
@@ -383,7 +383,7 @@ Paimon external tables can be used in any catalog. If you do not want to create
 
 {{< tab "Flink" >}}
 
-Flink SQL supports reading and writing an external table. External Paimon tables are created by specifying the `connector` and `path` table properties. The following SQL creates an external table named `MyTable` with five columns, where the base path of table files is `hdfs://path/to/table`.
+Flink SQL supports reading and writing an external table. External Paimon tables are created by specifying the `connector` and `path` table properties. The following SQL creates an external table named `MyTable` with five columns, where the base path of table files is `hdfs:///path/to/table`.
 
 ```sql
 CREATE TABLE MyTable (
@@ -395,7 +395,7 @@ CREATE TABLE MyTable (
     PRIMARY KEY (dt, hh, user_id) NOT ENFORCED
 ) WITH (
     'connector' = 'paimon',
-    'path' = 'hdfs://path/to/table',
+    'path' = 'hdfs:///path/to/table',
     'auto-create' = 'true' -- this table property creates table files for an empty table if table path does not exist
                            -- currently only supported by Flink
 );
@@ -405,20 +405,20 @@ CREATE TABLE MyTable (
 
 {{< tab "Spark3" >}}
 
-Spark3 only supports creating external tables through Scala API. The following Scala code loads the table located at `hdfs://path/to/table` into a `DataSet`.
+Spark3 only supports creating external tables through Scala API. The following Scala code loads the table located at `hdfs:///path/to/table` into a `DataSet`.
 
 ```scala
-val dataset = spark.read.format("paimon").load("hdfs://path/to/table")
+val dataset = spark.read.format("paimon").load("hdfs:///path/to/table")
 ```
 
 {{< /tab >}}
 
 {{< tab "Spark2" >}}
 
-Spark2 only supports creating external tables through Scala API. The following Scala code loads the table located at `hdfs://path/to/table` into a `DataSet`.
+Spark2 only supports creating external tables through Scala API. The following Scala code loads the table located at `hdfs:///path/to/table` into a `DataSet`.
 
 ```scala
-val dataset = spark.read.format("paimon").load("hdfs://path/to/table")
+val dataset = spark.read.format("paimon").load("hdfs:///path/to/table")
 ```
 
 {{< /tab >}}
@@ -426,7 +426,7 @@ val dataset = spark.read.format("paimon").load("hdfs://path/to/table")
 {{< tab "Hive" >}}
 
 To access existing paimon table, you can also register them as external tables in Hive. The following SQL creates an
-external table named `my_table`, where the base path of table files is `hdfs://path/to/table`. As schemas are stored
+external table named `my_table`, where the base path of table files is `hdfs:///path/to/table`. As schemas are stored
 in table files, users do not need to write column definitions.
 
 ```sql
@@ -452,7 +452,7 @@ If you want to use Paimon catalog along with other tables but do not want to sto
 ```sql
 CREATE CATALOG my_catalog WITH (
     'type' = 'paimon',
-    'warehouse' = 'hdfs://path/to/warehouse'
+    'warehouse' = 'hdfs:///path/to/warehouse'
 );
 
 USE CATALOG my_catalog;
@@ -464,7 +464,7 @@ CREATE TEMPORARY TABLE temp_table (
     v STRING
 ) WITH (
     'connector' = 'filesystem',
-    'path' = 'hdfs://path/to/temp_table.csv',
+    'path' = 'hdfs:///path/to/temp_table.csv',
     'format' = 'csv'
 );
 
