SteNicholas commented on code in PR #547:
URL: https://github.com/apache/flink-table-store/pull/547#discussion_r1111831443


##########
docs/content/docs/how-to/creating-tables.md:
##########
@@ -114,6 +114,78 @@ Partition keys must be a subset of primary keys if primary keys are defined.
 By configuring [partition.expiration-time]({{< ref "docs/maintenance/manage-partition" >}}), expired partitions can be automatically deleted.
 {{< /hint >}}
 
+## Create Table As
+
+Tables can also be created and populated by the results of a query. 
+
+{{< tabs "create-table-as" >}}
+
+{{< tab "Flink" >}}
+
+// Flink will create the target table first and then start an `INSERT INTO target_table SELECT * FROM source_table` job;
+// in batch mode, the job exits when the insert finishes, while in streaming mode the job keeps running.
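To make the quoted comment concrete, here is a minimal sketch of the statement it describes; `target_table` and `source_table` are placeholder names, not identifiers from the diff:

```sql
-- Sketch only: the CTAS form described in the quoted doc comment.
-- Flink first creates target_table with the schema of the query result,
-- then runs the equivalent of an INSERT INTO ... SELECT job to populate it.
CREATE TABLE target_table AS SELECT * FROM source_table;
```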

Review Comment:
   @zhangjun0x01, because the explanation is in the `Flink` tab, you could explain that the checkpoint needs to be enabled for streaming mode; there is no need to explain the job behavior. BTW, you could add a `Spark3` tab for `Create Table As`, since you have already added a test case for `Create Table As` in `SparkReadITCase`.
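Following the reviewer's suggestion, a hedged sketch of what the `Flink` tab might show instead; the checkpoint interval value is illustrative, and the exact wording is up to the PR author:

```sql
-- In streaming mode, checkpointing must be enabled so the CTAS insert
-- job commits data; batch mode needs no extra setting.
SET 'execution.checkpointing.interval' = '10 s';
CREATE TABLE target_table AS SELECT * FROM source_table;
```

A `Spark3` tab could carry the same `CREATE TABLE ... AS SELECT` statement, since Spark SQL supports that syntax directly.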



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
