lsyldliu commented on code in PR #24975:
URL: https://github.com/apache/flink/pull/24975#discussion_r1660970466


##########
docs/content/docs/dev/table/materialized-table/syntax.md:
##########
@@ -0,0 +1,337 @@
+---
+title: Syntax

Review Comment:
   Statements



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start
+weight: 3
+type: docs
+aliases:
+- /dev/table/materialized-table/quick-start.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Quick Start Guide
+
+This guide will help you quickly understand and get started with materialized 
tables. It includes setting up the environment, creating materialized tables in 
CONTINUOUS mode, and creating materialized tables in FULL mode.

Review Comment:
   This guide will help you quickly understand and get started with materialized tables. It includes setting up the environment and creating, altering, and dropping materialized tables in CONTINUOUS and FULL modes.
   



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start
+weight: 3
+type: docs
+aliases:
+- /dev/table/materialized-table/quick-start.html

Review Comment:
   quick-start -> quickstart



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start
+weight: 3
+type: docs
+aliases:
+- /dev/table/materialized-table/quick-start.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Quick Start Guide
+
+This guide will help you quickly understand and get started with materialized 
tables. It includes setting up the environment, creating materialized tables in 
CONTINUOUS mode, and creating materialized tables in FULL mode.
+
+## Environment Setup
+
+### Directory Preparation
+
+**Replace the example paths below with real paths on your machine.**
+
+- Create directories for Catalog Store and Catalog dependencies:
+
+```
+# Directory for File Catalog Store to save catalog information
+mkdir -p /path/to/catalog/store
+
+# Directory for test-filesystem Catalog to save table metadata and table data
+mkdir -p /path/to/catalog/test-filesystem
+
+# Default database for test-filesystem Catalog
+mkdir -p /path/to/catalog/test-filesystem/mydb
+```
+
+- Create directories for Checkpoints and Savepoints to save Checkpoints and 
Savepoints respectively:
+
+```
+mkdir -p /path/to/checkpoint
+
+mkdir -p /path/to/savepoint
+```
+
+### Dependency Preparation
+
+The method here is similar to the steps recorded in [local installation]({{< 
ref "docs/try-flink/local_installation" >}}). Flink can run on any UNIX-like 
operating system, such as Linux, Mac OS X, and Cygwin (for Windows). You need 
to have __Java 11__ installed locally. You can check the installed Java version 
with the following command:
+
+```
+java -version
+```
+
+Next, [download](https://flink.apache.org/downloads/) the latest Flink binary 
package and extract it:
+
+```
+tar -xzf flink-*.tgz
+```
+
+Download the 
[test-filesystem](https://https://repo.maven.apache.org/maven2/org/apache/flink/flink-table-filesystem-test-utils/)
 connector and place it in the lib directory.
+
+```
+cp flink-table-filesystem-test-utils-{VERSION}.jar flink-*/lib/
+```
+
+### Configuration Preparation
+
+Edit the config.yaml file and add the following configurations:
+
+```yaml
+execution:
+  checkpoints:
+    dir: file:///path/to/savepoint
+
+# Configure file catalog
+table:
+  catalog-store:
+    kind: file
+    file:
+      path: /path/to/catalog/store
+
+# Configure embedded scheduler
+workflow-scheduler:
+  type: embedded
+
+# Configure SQL gateway address and port
+sql-gateway:
+  endpoint:
+    rest:
+      address: 127.0.0.1
+      port: 8083
+```
+
+### Start Flink Cluster
+
+Run the following script to start the cluster locally:
+
+```
+./bin/start-cluster.sh
+```
+
+### Start SQL Gateway
+
+Run the following script to start the SQL Gateway locally:
+
+```
+./sql-gateway.sh start
+```
+
+### Start SQL Client
+
+Run the following script to start the SQL Client locally:
+
+```
+./sql-client.sh gateway --endpoint http://127.0.0.1:8083

Review Comment:
   ./bin/
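
   Applied to the quoted line, the command would presumably read (run from the extracted Flink distribution root):

   ```
   ./bin/sql-client.sh gateway --endpoint http://127.0.0.1:8083
   ```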



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start
+weight: 3
+type: docs
+aliases:
+- /dev/table/materialized-table/quick-start.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Quick Start Guide
+
+This guide will help you quickly understand and get started with materialized 
tables. It includes setting up the environment, creating materialized tables in 
CONTINUOUS mode, and creating materialized tables in FULL mode.
+
+## Environment Setup
+
+### Directory Preparation
+
+**Replace the example paths below with real paths on your machine.**
+
+- Create directories for Catalog Store and Catalog dependencies:
+
+```
+# Directory for File Catalog Store to save catalog information
+mkdir -p /path/to/catalog/store
+
+# Directory for test-filesystem Catalog to save table metadata and table data
+mkdir -p /path/to/catalog/test-filesystem
+
+# Default database for test-filesystem Catalog
+mkdir -p /path/to/catalog/test-filesystem/mydb
+```
+
+- Create directories for Checkpoints and Savepoints to save Checkpoints and 
Savepoints respectively:
+
+```
+mkdir -p /path/to/checkpoint
+
+mkdir -p /path/to/savepoint
+```
+
+### Dependency Preparation
+
+The method here is similar to the steps recorded in [local installation]({{< 
ref "docs/try-flink/local_installation" >}}). Flink can run on any UNIX-like 
operating system, such as Linux, Mac OS X, and Cygwin (for Windows). You need 
to have __Java 11__ installed locally. You can check the installed Java version 
with the following command:

Review Comment:
   Java 8 should also work?



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start
+weight: 3
+type: docs
+aliases:
+- /dev/table/materialized-table/quick-start.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Quick Start Guide
+
+This guide will help you quickly understand and get started with materialized 
tables. It includes setting up the environment, creating materialized tables in 
CONTINUOUS mode, and creating materialized tables in FULL mode.
+
+## Environment Setup
+
+### Directory Preparation
+
+**Replace the example paths below with real paths on your machine.**
+
+- Create directories for Catalog Store and Catalog dependencies:
+
+```
+# Directory for File Catalog Store to save catalog information
+mkdir -p /path/to/catalog/store
+
+# Directory for test-filesystem Catalog to save table metadata and table data
+mkdir -p /path/to/catalog/test-filesystem
+
+# Default database for test-filesystem Catalog
+mkdir -p /path/to/catalog/test-filesystem/mydb
+```
+
+- Create directories for Checkpoints and Savepoints to save Checkpoints and 
Savepoints respectively:
+
+```
+mkdir -p /path/to/checkpoint
+
+mkdir -p /path/to/savepoint
+```
+
+### Dependency Preparation
+
+The method here is similar to the steps recorded in [local installation]({{< 
ref "docs/try-flink/local_installation" >}}). Flink can run on any UNIX-like 
operating system, such as Linux, Mac OS X, and Cygwin (for Windows). You need 
to have __Java 11__ installed locally. You can check the installed Java version 
with the following command:
+
+```
+java -version
+```
+
+Next, [download](https://flink.apache.org/downloads/) the latest Flink binary 
package and extract it:
+
+```
+tar -xzf flink-*.tgz
+```
+
+Download the 
[test-filesystem](https://https://repo.maven.apache.org/maven2/org/apache/flink/flink-table-filesystem-test-utils/)
 connector and place it in the lib directory.
+
+```
+cp flink-table-filesystem-test-utils-{VERSION}.jar flink-*/lib/
+```
+
+### Configuration Preparation
+
+Edit the config.yaml file and add the following configurations:
+
+```yaml
+execution:
+  checkpoints:
+    dir: file:///path/to/savepoint
+
+# Configure file catalog
+table:
+  catalog-store:
+    kind: file
+    file:
+      path: /path/to/catalog/store
+
+# Configure embedded scheduler
+workflow-scheduler:
+  type: embedded
+
+# Configure SQL gateway address and port
+sql-gateway:
+  endpoint:
+    rest:
+      address: 127.0.0.1
+      port: 8083
+```
+
+### Start Flink Cluster
+
+Run the following script to start the cluster locally:
+
+```
+./bin/start-cluster.sh
+```
+
+### Start SQL Gateway
+
+Run the following script to start the SQL Gateway locally:
+
+```
+./sql-gateway.sh start

Review Comment:
   ./bin/
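
   Applied to the quoted line, the command would presumably read (again from the distribution root):

   ```
   ./bin/sql-gateway.sh start
   ```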



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start
+weight: 3
+type: docs
+aliases:
+- /dev/table/materialized-table/quick-start.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Quick Start Guide
+
+This guide will help you quickly understand and get started with materialized 
tables. It includes setting up the environment, creating materialized tables in 
CONTINUOUS mode, and creating materialized tables in FULL mode.
+
+## Environment Setup
+
+### Directory Preparation
+
+**Replace the example paths below with real paths on your machine.**
+
+- Create directories for Catalog Store and Catalog dependencies:
+
+```
+# Directory for File Catalog Store to save catalog information
+mkdir -p /path/to/catalog/store
+
+# Directory for test-filesystem Catalog to save table metadata and table data
+mkdir -p /path/to/catalog/test-filesystem
+
+# Default database for test-filesystem Catalog
+mkdir -p /path/to/catalog/test-filesystem/mydb
+```
+
+- Create directories for Checkpoints and Savepoints to save Checkpoints and 
Savepoints respectively:
+
+```
+mkdir -p /path/to/checkpoint
+
+mkdir -p /path/to/savepoint
+```
+
+### Dependency Preparation
+
+The method here is similar to the steps recorded in [local installation]({{< 
ref "docs/try-flink/local_installation" >}}). Flink can run on any UNIX-like 
operating system, such as Linux, Mac OS X, and Cygwin (for Windows). You need 
to have __Java 11__ installed locally. You can check the installed Java version 
with the following command:
+
+```
+java -version
+```
+
+Next, [download](https://flink.apache.org/downloads/) the latest Flink binary 
package and extract it:
+
+```
+tar -xzf flink-*.tgz
+```
+
+Download the 
[test-filesystem](https://https://repo.maven.apache.org/maven2/org/apache/flink/flink-table-filesystem-test-utils/)
 connector and place it in the lib directory.
+
+```
+cp flink-table-filesystem-test-utils-{VERSION}.jar flink-*/lib/
+```
+
+### Configuration Preparation
+
+Edit the config.yaml file and add the following configurations:
+
+```yaml
+execution:
+  checkpoints:
+    dir: file:///path/to/savepoint
+
+# Configure file catalog
+table:
+  catalog-store:
+    kind: file
+    file:
+      path: /path/to/catalog/store
+
+# Configure embedded scheduler
+workflow-scheduler:
+  type: embedded
+
+# Configure SQL gateway address and port
+sql-gateway:
+  endpoint:
+    rest:
+      address: 127.0.0.1
+      port: 8083
+```
+
+### Start Flink Cluster
+
+Run the following script to start the cluster locally:
+
+```
+./bin/start-cluster.sh
+```
+
+### Start SQL Gateway
+
+Run the following script to start the SQL Gateway locally:
+
+```
+./sql-gateway.sh start
+```
+
+### Start SQL Client
+
+Run the following script to start the SQL Client locally:
+
+```
+./sql-client.sh gateway --endpoint http://127.0.0.1:8083
+```
+
+## Create Catalog and Source Table
+
+1. Create the test-filesystem catalog:
+
+```sql
+CREATE CATALOG mt_cat WITH (
+  'type' = 'test-filesystem',
+  'path' = '/path/to/catalog/test-filesystem',
+  'default-database' = 'mydb'
+);
+
+USE CATALOG mt_cat;
+```
+
+2. Create the Source table:
+
+```sql
+
+-- 1. Create Source table and specify the data format as json
+CREATE TABLE json_source (
+  order_id BIGINT,
+  user_id BIGINT,
+  user_name STRING,
+  order_created_at STRING,
+  payment_amount_cents BIGINT
+) WITH (
+  'format' = 'json',
+  'source.monitor-interval' = '10s'
+);
+
+-- 2. Insert some test data
+INSERT INTO json_source VALUES 
+  (1001, 1, 'user1', '2024-06-19', 10),
+  (1002, 2, 'user2', '2024-06-19', 20),
+  (1003, 3, 'user3', '2024-06-19', 30),
+  (1004, 4, 'user4', '2024-06-19', 40),
+  (1005, 1, 'user1', '2024-06-20', 10),
+  (1006, 2, 'user2', '2024-06-20', 20),
+  (1007, 3, 'user3', '2024-06-20', 30),
+  (1008, 4, 'user4', '2024-06-20', 40);
+  
+INSERT INTO json_source VALUES 
+  (1001, 1, 'user1', '2024-06-24', 10),
+  (1002, 2, 'user2', '2024-06-24', 20),
+  (1003, 3, 'user3', '2024-06-24', 30),
+  (1004, 4, 'user4', '2024-06-24', 40),
+  (1005, 1, 'user1', '2024-06-25', 10),
+  (1006, 2, 'user2', '2024-06-25', 20),
+  (1007, 3, 'user3', '2024-06-25', 30),
+  (1008, 4, 'user4', '2024-06-25', 40);
+  
+INSERT INTO json_source VALUES 
+  (1001, 1, 'user1', '2024-06-26 ', 10),
+  (1002, 2, 'user2', '2024-06-26 ', 20),
+  (1003, 3, 'user3', '2024-06-26 ', 30),
+  (1004, 4, 'user4', '2024-06-26 ', 40),
+  (1005, 1, 'user1', '2024-06-26 ', 10),
+  (1006, 2, 'user2', '2024-06-26 ', 20),
+  (1007, 3, 'user3', '2024-06-26 ', 30),
+  (1008, 4, 'user4', '2024-06-26 ', 40);
+```
+
+## CONTINUOUS Mode

Review Comment:
   # Create Continuous Mode Materialized Table?



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start

Review Comment:
   Quickstart



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start
+weight: 3
+type: docs
+aliases:
+- /dev/table/materialized-table/quick-start.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Quick Start Guide
+
+This guide will help you quickly understand and get started with materialized 
tables. It includes setting up the environment, creating materialized tables in 
CONTINUOUS mode, and creating materialized tables in FULL mode.
+
+## Environment Setup
+
+### Directory Preparation
+
+**Replace the example paths below with real paths on your machine.**
+
+- Create directories for Catalog Store and Catalog dependencies:
+
+```
+# Directory for File Catalog Store to save catalog information
+mkdir -p /path/to/catalog/store
+
+# Directory for test-filesystem Catalog to save table metadata and table data
+mkdir -p /path/to/catalog/test-filesystem
+
+# Default database for test-filesystem Catalog
+mkdir -p /path/to/catalog/test-filesystem/mydb
+```
+
+- Create directories for Checkpoints and Savepoints to save Checkpoints and 
Savepoints respectively:
+
+```
+mkdir -p /path/to/checkpoint
+
+mkdir -p /path/to/savepoint
+```
+
+### Dependency Preparation
+
+The method here is similar to the steps recorded in [local installation]({{< 
ref "docs/try-flink/local_installation" >}}). Flink can run on any UNIX-like 
operating system, such as Linux, Mac OS X, and Cygwin (for Windows). You need 
to have __Java 11__ installed locally. You can check the installed Java version 
with the following command:
+
+```
+java -version
+```
+
+Next, [download](https://flink.apache.org/downloads/) the latest Flink binary 
package and extract it:
+
+```
+tar -xzf flink-*.tgz
+```
+
+Download the 
[test-filesystem](https://https://repo.maven.apache.org/maven2/org/apache/flink/flink-table-filesystem-test-utils/)
 connector and place it in the lib directory.
+
+```
+cp flink-table-filesystem-test-utils-{VERSION}.jar flink-*/lib/
+```
+
+### Configuration Preparation
+
+Edit the config.yaml file and add the following configurations:
+
+```yaml
+execution:
+  checkpoints:
+    dir: file:///path/to/savepoint

Review Comment:
   savepoint -> checkpoint?
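
   If the intent is to point this at the checkpoint directory created in the directory-preparation step, the snippet would presumably read:

   ```yaml
   execution:
     checkpoints:
       dir: file:///path/to/checkpoint
   ```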



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start
+weight: 3
+type: docs
+aliases:
+- /dev/table/materialized-table/quick-start.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Quick Start Guide
+
+This guide will help you quickly understand and get started with materialized 
tables. It includes setting up the environment, creating materialized tables in 
CONTINUOUS mode, and creating materialized tables in FULL mode.
+
+## Environment Setup

Review Comment:
   # Environment Setup
   
   ## Directory Preparation
   
   **Replace the example paths below with real paths on your machine.**
   
   - Create directories for Catalog Store and Catalog dependencies:
   
   ```
   # Directory for File Catalog Store to save catalog information
   mkdir -p /path/to/catalog/store
   
   # Directory for test-filesystem Catalog to save table metadata and table data
   mkdir -p /path/to/catalog/test-filesystem
   
   # Default database for test-filesystem Catalog
   mkdir -p /path/to/catalog/test-filesystem/mydb
   ```
   
   - Create directories for Checkpoints and Savepoints to save Checkpoints and 
Savepoints respectively:
   
   ```
   mkdir -p /path/to/checkpoint
   
   mkdir -p /path/to/savepoint
   ```
   
   ## Dependency Preparation
   
   The method here is similar to the steps recorded in [local installation]({{< 
ref "docs/try-flink/local_installation" >}}). Flink can run on any UNIX-like 
operating system, such as Linux, Mac OS X, and Cygwin (for Windows). You need 
to have __Java 11__ installed locally. You can check the installed Java version 
with the following command:
   
   ```
   java -version
   ```
   
   Next, [download](https://flink.apache.org/downloads/) the latest Flink 
binary package and extract it:
   
   ```
   tar -xzf flink-*.tgz
   ```
   
   Download the [test-filesystem](https://repo.maven.apache.org/maven2/org/apache/flink/flink-table-filesystem-test-utils/) connector and place it in the lib directory.
   
   ```
   cp flink-table-filesystem-test-utils-{VERSION}.jar flink-*/lib/
   ```
   
   ## Configuration Preparation
   
   Edit the config.yaml file and add the following configurations:
   
   ```yaml
   execution:
     checkpoints:
       dir: file:///path/to/checkpoint
   
   # Configure file catalog
   table:
     catalog-store:
       kind: file
       file:
         path: /path/to/catalog/store
   
   # Configure embedded scheduler
   workflow-scheduler:
     type: embedded
   
   # Configure SQL gateway address and port
   sql-gateway:
     endpoint:
       rest:
         address: 127.0.0.1
         port: 8083
   ```
   
   ## Start Flink Cluster
   
   Run the following script to start the cluster locally:
   
   ```
   ./bin/start-cluster.sh
   ```
   
   ## Start SQL Gateway
   
   Run the following script to start the SQL Gateway locally:
   
   ```
   ./bin/sql-gateway.sh start
   ```
   
   ## Start SQL Client
   
   Run the following script to start the SQL Client locally:
   
   ```
   ./bin/sql-client.sh gateway --endpoint http://127.0.0.1:8083
   ```
   
   ## Create Catalog and Source Table
   
   1. Create the test-filesystem catalog:
   
   ```sql
   CREATE CATALOG mt_cat WITH (
     'type' = 'test-filesystem',
     'path' = '/path/to/catalog/test-filesystem',
     'default-database' = 'mydb'
   );
   
   USE CATALOG mt_cat;
   ```
   
   2. Create the source table:
   
   ```sql
   
   -- 1. Create source table and specify the data format as json
   CREATE TABLE json_source (
     order_id BIGINT,
     user_id BIGINT,
     user_name STRING,
     order_created_at STRING,
     payment_amount_cents BIGINT
   ) WITH (
     'format' = 'json',
     'source.monitor-interval' = '10s'
   );
   
   -- 2. Insert some test data
   INSERT INTO json_source VALUES 
     (1001, 1, 'user1', '2024-06-19', 10),
     (1002, 2, 'user2', '2024-06-19', 20),
     (1003, 3, 'user3', '2024-06-19', 30),
     (1004, 4, 'user4', '2024-06-19', 40),
     (1005, 1, 'user1', '2024-06-20', 10),
     (1006, 2, 'user2', '2024-06-20', 20),
     (1007, 3, 'user3', '2024-06-20', 30),
     (1008, 4, 'user4', '2024-06-20', 40);
   ```
   
   # Create Continuous Mode Materialized Table
   
   ## Create Materialized Table
   
   Create a materialized table in CONTINUOUS mode with a data freshness of 30 seconds.
   
   ```sql
   CREATE MATERIALIZED TABLE continuous_users_shops
   PARTITIONED BY (ds)
   WITH (
     'format' = 'debezium-json',
     'sink.rolling-policy.rollover-interval' = '10s',
     'sink.rolling-policy.check-interval' = '10s'
   )
   FRESHNESS = INTERVAL '30' SECOND
   AS SELECT
     user_id,
     ds,
     SUM (payment_amount_cents) AS payed_buy_fee_sum,
     SUM (1) AS PV
   FROM (
     SELECT user_id, order_created_at AS ds, payment_amount_cents
       FROM json_source
     ) AS tmp
   GROUP BY user_id, ds;
   ```
   
   You can view the corresponding Flink streaming refresh job on the page 
http://localhost:8081. It should be in the RUNNING state with a checkpoint 
interval of 30s.
   
   ## Suspend Materialized Table
   
   Before executing the suspend operation, you need to set the savepoint path. After suspending, the Flink streaming job shown at http://localhost:8081 should transition to the FINISHED state.
   
   ```sql
   -- Set savepoint path before suspending
   SET 'execution.checkpointing.savepoint-dir' = 'file:///path/to/savepoint';
   
   ALTER MATERIALIZED TABLE continuous_users_shops SUSPEND;
   ```
   
   ## Query Materialized Table
   
   Query the materialized table data to find that some data has already been 
written.
   
   ```sql
   SELECT * FROM continuous_users_shops;
   ```
   
   ## Resume Materialized Table
   
   Resume the refresh job of the materialized table. You will find that a new 
Flink streaming job is started from the specified savepoint path on 
http://localhost:8081.
   
   ```sql
   ALTER MATERIALIZED TABLE continuous_users_shops RESUME;
   ```
   
   ## Drop Materialized Table
   
   Drop the materialized table, and you will see that the corresponding refresh 
job transitions to the CANCELED state on http://localhost:8081.
   
   ```sql
   DROP MATERIALIZED TABLE continuous_users_shops;
   ```
   
   # Create Full Mode Materialized Table
   
   ## Create Materialized Table
   
   Create a materialized table in FULL mode with a data freshness of 1 minute.
   
   ```sql
   CREATE MATERIALIZED TABLE full_users_shops
   PARTITIONED BY (ds)
   WITH (
     'format' = 'json',
     'partition.fields.ds.date-formatter' = 'yyyy-MM-dd'
   )
   FRESHNESS = INTERVAL '1' MINUTE
   REFRESH_MODE = FULL
   AS SELECT
     user_id,
     ds,
     SUM (payment_amount_cents) AS payed_buy_fee_sum,
     SUM (1) AS PV
   FROM (
     SELECT user_id, order_created_at AS ds, payment_amount_cents
     FROM json_source
   ) AS tmp
   GROUP BY user_id, ds;
   ```
   
   On the http://localhost:8081 page, you will see that the refresh job for the 
materialized table is scheduled every 1 minute.
   
   ## Query Materialized Table
   
   Insert some data for today's partition. Because the materialized table is partitioned and a date formatter is configured for the partition field, each refresh only refreshes the latest partition.
   
   ```sql
   INSERT INTO json_source VALUES 
     (1001, 1, 'user1', CAST(CURRENT_DATE AS STRING), 10),
     (1002, 2, 'user2', CAST(CURRENT_DATE AS STRING), 20),
     (1003, 3, 'user3', CAST(CURRENT_DATE AS STRING), 30),
     (1004, 4, 'user4', CAST(CURRENT_DATE AS STRING), 40);
   ```
   
   Wait at least 1 minute and query the materialized table results to see that 
only today's partition data is present.
   
   ```sql
   SELECT * FROM full_users_shops;
   ```
   
   ## Manually Refresh Historical Partition
   
   Manually refresh the partition `ds = '2024-06-25'` to see the partition data 
for `2024-06-25` in the materialized table.
   
   ```sql
   -- Manually refresh historical partition
   ALTER MATERIALIZED TABLE full_users_shops REFRESH PARTITIONS(ds = '2024-06-25');
   
   -- Query materialized table data
   SELECT * FROM full_users_shops;
   ```
   
   ## Suspend and Resume Materialized Table
   
   The suspend and resume operations let you control the refresh job of the materialized table: after suspending, the refresh job is no longer scheduled; after resuming, it is scheduled again. You can check the job scheduling status at http://localhost:8081.
   
   ```sql
   -- Suspend background refresh task
   ALTER MATERIALIZED TABLE full_users_shops SUSPEND;
   
   -- Resume background refresh task
   ALTER MATERIALIZED TABLE full_users_shops RESUME;
   ```
   
   ## Drop Materialized Table
   
   After dropping the materialized table, the corresponding refresh job will 
not be scheduled again. You can confirm this on the http://localhost:8081 page.
   
   ```sql
   DROP MATERIALIZED TABLE full_users_shops;
   ```



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start
+weight: 3
+type: docs
+aliases:
+- /dev/table/materialized-table/quick-start.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Quick Start Guide
+
+This guide will help you quickly understand and get started with materialized 
tables. It includes setting up the environment, creating materialized tables in 
CONTINUOUS mode, and creating materialized tables in FULL mode.
+

Review Comment:
   Can you add a picture introducing the overall architecture of how materialized tables work, similar to https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/olap_quickstart/#architecture-introduction?



##########
docs/content/docs/dev/table/materialized-table/quick-start.md:
##########
@@ -0,0 +1,333 @@
+---
+title: Quick Start
+weight: 3
+type: docs
+aliases:
+- /dev/table/materialized-table/quick-start.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Quick Start Guide

Review Comment:
   Quickstart Guide


