This is an automated email from the ASF dual-hosted git repository.
yuqi4733 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/gravitino.git
The following commit(s) were added to refs/heads/main by this push:
new 783f990cd9 [MINOR] docs: fix several typos in docs (#6584)
783f990cd9 is described below
commit 783f990cd971cca9d8162d030a378d7e613181de
Author: Kang <[email protected]>
AuthorDate: Tue Mar 4 11:07:53 2025 +0800
[MINOR] docs: fix several typos in docs (#6584)
### What changes were proposed in this pull request?
Fix several typos in the docs.
### Why are the changes needed?
typo
### Does this PR introduce _any_ user-facing change?
N/A
### How was this patch tested?
N/A
---
docs/apache-hive-catalog.md | 2 +-
docs/cli.md | 2 +-
docs/hadoop-catalog-with-s3.md | 2 +-
docs/hadoop-catalog.md | 2 +-
docs/hive-catalog-with-cloud-storage.md | 2 +-
docs/model-catalog.md | 2 +-
6 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/docs/apache-hive-catalog.md b/docs/apache-hive-catalog.md
index 09bded2c30..1a1aefef08 100644
--- a/docs/apache-hive-catalog.md
+++ b/docs/apache-hive-catalog.md
@@ -46,7 +46,7 @@ Besides the [common catalog properties](./gravitino-server-config.md#gravitino-c
:::note
For `list-all-tables=false`, the Hive catalog will filter out:
- Iceberg tables by table property `table_type=ICEBERG`
-- Paimon tables by table property `table_type=PAINMON`
+- Paimon tables by table property `table_type=PAIMON`
- Hudi tables by table property `provider=hudi`
:::
diff --git a/docs/cli.md b/docs/cli.md
index cc710a2080..ebf4a38506 100644
--- a/docs/cli.md
+++ b/docs/cli.md
@@ -23,7 +23,7 @@ alias gcli='java -jar ../../cli/build/libs/gravitino-cli-*-incubating-SNAPSHOT.j
Or you use the `gcli.sh` script found in the `clients/cli/bin/` directory to run the CLI.
## Usage
-f
+
The general structure for running commands with the Gravitino CLI is `gcli entity command [options]`.
```bash
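For orientation (the quoted example block above is truncated by the hunk boundary), the `gcli entity command [options]` pattern can be exercised like this — a sketch assuming the `gcli` alias from the quoted cli.md context; the metalake and catalog names are illustrative, not part of this diff:

```
# entity = catalog, command = list; names are placeholders
gcli catalog list --metalake my_metalake

# entity = catalog, command = details
gcli catalog details --metalake my_metalake --name my_catalog
```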
diff --git a/docs/hadoop-catalog-with-s3.md b/docs/hadoop-catalog-with-s3.md
index e5bd4c41f7..c7fcef3737 100644
--- a/docs/hadoop-catalog-with-s3.md
+++ b/docs/hadoop-catalog-with-s3.md
@@ -341,7 +341,7 @@ fileset_name = "your_s3_fileset"
os.environ["PYSPARK_SUBMIT_ARGS"] = "--jars /path/to/gravitino-aws-${gravitino-version}.jar,/path/to/gravitino-filesystem-hadoop3-runtime-${gravitino-version}-SNAPSHOT.jar,/path/to/hadoop-aws-3.2.0.jar,/path/to/aws-java-sdk-bundle-1.11.375.jar --master local[1] pyspark-shell"
spark = SparkSession.builder
- .appName("s3_fielset_test")
+ .appName("s3_fileset_test")
.config("spark.hadoop.fs.AbstractFileSystem.gvfs.impl", "org.apache.gravitino.filesystem.hadoop.Gvfs")
.config("spark.hadoop.fs.gvfs.impl", "org.apache.gravitino.filesystem.hadoop.GravitinoVirtualFileSystem")
.config("spark.hadoop.fs.gravitino.server.uri", "http://localhost:8090")
diff --git a/docs/hadoop-catalog.md b/docs/hadoop-catalog.md
index 978efad90d..0bf480851f 100644
--- a/docs/hadoop-catalog.md
+++ b/docs/hadoop-catalog.md
@@ -27,7 +27,7 @@ Besides the [common catalog properties](./gravitino-server-config.md#apache-grav
|--------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|----------|------------------|
| `location` | The storage location managed by Hadoop catalog. | (none) | No | 0.5.0 |
| `default-filesystem-provider` | The default filesystem provider of this Hadoop catalog if users do not specify the scheme in the URI. Candidate values are 'builtin-local', 'builtin-hdfs', 's3', 'gcs', 'abs' and 'oss'. Default value is `builtin-local`. For S3, if we set this value to 's3', we can omit the prefix 's3a://' in the location. | `builtin-local` | No | 0.7.0-incubating |
-| `filesystem-providers` | The file system providers to add. Users needs to set this configuration to support cloud storage or custom HCFS. For instance, set it to `s3` or a comma separated string that contains `s3` like `gs,s3` to support multiple kinds of fileset including `s3`. | (none) | Yes | 0.7.0-incubating |
+| `filesystem-providers` | The file system providers to add. Users need to set this configuration to support cloud storage or custom HCFS. For instance, set it to `s3` or a comma separated string that contains `s3` like `gs,s3` to support multiple kinds of fileset including `s3`. | (none) | Yes | 0.7.0-incubating |
| `credential-providers` | The credential provider types, separated by comma. | (none) | No | 0.8.0-incubating |
| `filesystem-conn-timeout-secs` | The timeout of getting the file system using Hadoop FileSystem client instance. Time unit: seconds. | 6 | No | 0.8.0-incubating |
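The corrected `filesystem-providers` row is one of several catalog-level properties in this table; as a sketch, they might appear together in a catalog's property map like this (keys taken from the table above, all values illustrative):

```
location = s3a://my-bucket/test
default-filesystem-provider = s3
filesystem-providers = s3,gcs
credential-providers = s3-token
filesystem-conn-timeout-secs = 6
```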
diff --git a/docs/hive-catalog-with-cloud-storage.md b/docs/hive-catalog-with-cloud-storage.md
index b1403ba5e1..f2a5fe20eb 100644
--- a/docs/hive-catalog-with-cloud-storage.md
+++ b/docs/hive-catalog-with-cloud-storage.md
@@ -44,7 +44,7 @@ Below are the essential properties to add or modify in the `hive-site.xml` file
definition and table definition, as shown in the examples below. After explicitly setting this
property, you can omit the location property in the schema and table definitions.
-It's also applicable for Azure Blob Storage(ADSL) and GCS.
+It's also applicable for Azure Blob Storage(ADLS) and GCS.
-->
<property>
<name>hive.metastore.warehouse.dir</name>
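The quoted hunk cuts off mid-element; for context, a complete `hive.metastore.warehouse.dir` entry in `hive-site.xml` would look like the following — the property name comes from the context above, the value is purely illustrative:

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <!-- Illustrative bucket path; substitute your own storage location -->
  <value>s3a://my-bucket/user/hive/warehouse</value>
</property>
```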
diff --git a/docs/model-catalog.md b/docs/model-catalog.md
index a9da0c8b3f..a96214b70f 100644
--- a/docs/model-catalog.md
+++ b/docs/model-catalog.md
@@ -16,7 +16,7 @@ managing the versions for each model.
The advantages of using model catalog are:
* Centralized management of ML models with user defined namespaces. Users can better discover
- and govern the models from sematic level, rather than managing the model files directly.
+ and govern the models from semantic level, rather than managing the model files directly.
* Version management for each model. Users can easily track the model versions and manage the model lifecycle.