This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.6
in repository https://gitbox.apache.org/repos/asf/gravitino.git


The following commit(s) were added to refs/heads/branch-0.6 by this push:
     new 33f89199f [MINOR] docs: polish 0.6 document  (#4516)
33f89199f is described below

commit 33f89199fb667aa59f378e62e0d827fc4aa365b1
Author: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
AuthorDate: Wed Aug 14 20:14:11 2024 +0800

    [MINOR] docs: polish 0.6 document  (#4516)
    
    ### What changes were proposed in this pull request?
    polish 0.6 document
    
    ### Why are the changes needed?
    
    fix some errors and make the document more user-friendly
    
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    just documentation changes
    
    Co-authored-by: FANNG <[email protected]>
---
 docs/flink-connector/flink-connector.md       |  2 +-
 docs/iceberg-rest-service.md                  | 10 +++++-----
 docs/lakehouse-paimon-catalog.md              | 13 +++----------
 docs/spark-connector/spark-catalog-iceberg.md |  8 +++++---
 4 files changed, 14 insertions(+), 19 deletions(-)

diff --git a/docs/flink-connector/flink-connector.md b/docs/flink-connector/flink-connector.md
index c14186a2d..948e4554b 100644
--- a/docs/flink-connector/flink-connector.md
+++ b/docs/flink-connector/flink-connector.md
@@ -50,7 +50,7 @@ TableEnvironment tableEnv = TableEnvironment.create(builder.inBatchMode().build(
 
 3. Execute the Flink SQL query. 
 
-Suppose there is only one hive catalog with the name hive in the metalake test.
+Suppose there is only one hive catalog with the name `hive` in the metalake `test`.
 
 ```sql
 // use hive catalog
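 // a plausible continuation of this example: select the `hive` catalog named
 // above, then browse it (illustrative statements, not quoted from the doc)
 USE CATALOG hive;
 SHOW DATABASES;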
diff --git a/docs/iceberg-rest-service.md b/docs/iceberg-rest-service.md
index 6ae17f74b..e0e790b5e 100644
--- a/docs/iceberg-rest-service.md
+++ b/docs/iceberg-rest-service.md
@@ -60,7 +60,7 @@ Starting with version `0.6.0`, the prefix `gravitino.auxService.iceberg-rest.` f
 
 Please note that it only takes effect in `gravitino.conf`; you don't need to specify the above configurations when starting as a standalone server.
 
-### REST catalog server configuration
+### HTTP server configuration
 
 | Configuration item | Description | Default value | Required | Since Version |
 |--------------------|-------------|---------------|----------|---------------|
@@ -113,13 +113,13 @@ You should place HDFS configuration file to the classpath of the Iceberg REST se
 Builds with Hadoop 2.10.x. There may be compatibility issues when accessing Hadoop 3.x clusters.
 :::
 
-### Apache Gravitino Iceberg catalog backend configuration
+### Catalog backend configuration
 
 :::info
 The Gravitino Iceberg REST catalog service uses the memory catalog backend by default. You can specify a Hive or JDBC catalog backend for production environments.
 :::
 
-#### Apache Hive backend configuration
+#### Hive backend configuration
 
 | Configuration item | Description | Default value | Required | Since Version |
 |--------------------|-------------|---------------|----------|---------------|
@@ -162,7 +162,7 @@ The `clients` property for example:
 
 ### Apache Iceberg metrics store configuration
 
-Gravitino provides a pluggable metrics store interface to store and delete Iceberg metrics. You can develop a class that implements `org.apache.gravitino.catalog.lakehouse.iceberg.web.metrics` and add the corresponding jar file to the Iceberg REST service classpath directory.
+Gravitino provides a pluggable metrics store interface to store and delete Iceberg metrics. You can develop a class that implements `org.apache.gravitino.iceberg.service.metrics.IcebergMetricsStore` and add the corresponding jar file to the Iceberg REST service classpath directory.
 
 | Configuration item | Description | Default value | Required | Since Version |
 |--------------------|-------------|---------------|----------|---------------|
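The pluggable metrics store named in the hunk above boils down to implementing one interface and dropping the jar on the service classpath. A compilable sketch — with the caveat that `IcebergMetricsStore` is declared here as a local stand-in whose method names (`init`, `recordMetric`, `clean`) are assumptions for illustration, not the real Gravitino signatures:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Stand-in for org.apache.gravitino.iceberg.service.metrics.IcebergMetricsStore.
// The real interface ships in the Gravitino server jar; these method shapes
// are assumptions for illustration only.
interface IcebergMetricsStore {
  void init(Map<String, String> properties);

  void recordMetric(String metricsReport);

  void clean(long expireTimeMillis);
}

// A minimal in-memory implementation of the stand-in interface. A production
// plugin would persist reports and honor the expiration argument.
class InMemoryMetricsStore implements IcebergMetricsStore {
  private final List<String> reports = new ArrayList<>();

  @Override
  public void init(Map<String, String> properties) {
    // no configuration needed for the in-memory case
  }

  @Override
  public void recordMetric(String metricsReport) {
    reports.add(metricsReport);
  }

  @Override
  public void clean(long expireTimeMillis) {
    reports.clear();
  }

  int storedCount() {
    return reports.size();
  }
}
```

A jar containing such a class would then be placed in the Iceberg REST service classpath directory and selected via the metrics-store configuration item above.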
@@ -232,7 +232,7 @@ SELECT * FROM dml.test;
 
 ## Docker instructions
 
-You could run Gravitino server though docker container:
+You can run the Gravitino Iceberg REST server through a Docker container:
 
 ```shell
 docker run -d -p 9001:9001 datastrato/iceberg-rest-server:0.6
diff --git a/docs/lakehouse-paimon-catalog.md b/docs/lakehouse-paimon-catalog.md
index 6ad85c732..2b669ff0b 100644
--- a/docs/lakehouse-paimon-catalog.md
+++ b/docs/lakehouse-paimon-catalog.md
@@ -76,6 +76,8 @@ Please refer to [Manage Relational Metadata Using Gravitino](./manage-relational
 dropTable will delete the table location directly, similar to purgeTable.
 ```
 - Supports column default values through table properties, such as `fields.{columnName}.default-value`, not column expressions.
+ 
+- Doesn't support table distribution and sort orders.
 
 :::info
 Paimon does not support auto increment column.
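The `fields.{columnName}.default-value` convention kept in the hunk above sets a column default through a table property rather than a SQL column expression. A sketch of such a property entry — the column name `age` and the value `18` are invented for illustration:

```
fields.age.default-value = 18
```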
@@ -89,19 +91,10 @@ Paimon does not support auto increment column.
 - UpdateColumnComment
 - UpdateColumnNullability
 - UpdateColumnPosition
-```
-UpdateColumnPosition only supports update a column position with first, after position, cannot use default position.
-```
 - UpdateColumnType
 - UpdateComment
 - SetProperty
-```
-SetProperty cannot update table comment, please use UpdateComment instead.
-```
 - RemoveProperty
-```
-RemoveProperty cannot remove table comment.
-```
 
 #### Table partitions
 
@@ -115,7 +108,7 @@ Please refer to [Paimon DDL Create Table](https://paimon.apache.org/docs/0.8/spa
 
 ### Table distributions
 
-- Only supporting `NoneDistribution` now.
+- Doesn't support table distributions.
 
 ### Table indexes
 
diff --git a/docs/spark-connector/spark-catalog-iceberg.md b/docs/spark-connector/spark-catalog-iceberg.md
index db5fa27c7..3bc616631 100644
--- a/docs/spark-connector/spark-catalog-iceberg.md
+++ b/docs/spark-connector/spark-catalog-iceberg.md
@@ -12,7 +12,9 @@ The Apache Gravitino Spark connector offers the capability to read and write Ice
 #### Support DML and DDL operations:
 
 - `CREATE TABLE`
-  - `Supports basic create table clause including table schema, properties, partition, does not support distribution and sort orders.`
+
+Doesn't support distribution and sort orders.
+
 - `DROP TABLE`
 - `ALTER TABLE`
 - `INSERT INTO&OVERWRITE`
@@ -29,7 +31,7 @@ The Apache Gravitino Spark connector offers the capability to read and write Ice
 - View operations.
 - Metadata tables, like:
   - `{iceberg_catalog}.{iceberg_database}.{iceberg_table}.snapshots`
-- Other Iceberg extension SQL, like:
+- Other Iceberg extension SQL statements, like:
   - `ALTER TABLE prod.db.sample ADD PARTITION FIELD xx`
   - `ALTER TABLE ... WRITE ORDERED BY`
   - `ALTER TABLE prod.db.sample CREATE BRANCH branchName`
@@ -95,7 +97,7 @@ DESC EXTENDED employee;
 
 For more details about `CALL`, please refer to the [Spark Procedures description](https://iceberg.apache.org/docs/1.5.2/spark-procedures/#spark-procedures)
 in the Iceberg official documentation.
 
-## Apache Iceberg backend-catalog support
+## Apache Iceberg catalog backend support
 - HiveCatalog
 - JdbcCatalog
 - RESTCatalog
