leonardBang commented on a change in pull request #12577:
URL: https://github.com/apache/flink/pull/12577#discussion_r438660767



##########
File path: docs/dev/table/common.zh.md
##########
@@ -828,18 +830,16 @@ Table API 和 SQL 查询会被翻译成 [DataStream]({{ site.baseurl 
}}/zh/dev/d
 1. 优化逻辑执行计划
 2. 翻译成 DataStream 或 DataSet 程序
 
-对于 Streaming 而言,Table API 或者 SQL 查询在下列情况下会被翻译:
-
-* 当 `TableEnvironment.execute()` 被调用时。`Table` (通过 `Table.insertInto()` 输出给 
`TableSink`)和 SQL (通过调用 `TableEnvironment.sqlUpdate()`)会先被缓存到 
`TableEnvironment` 中,每个 sink 会被单独优化。执行计划将包括多个独立的有向无环子图。
-* `Table` 被转换成 `DataStream` 时(参阅[与 DataStream 和 DataSet API 
结合](#integration-with-datastream-and-dataset-api))。转换完成后,它就成为一个普通的 DataStream 
程序,并且会在调用 `StreamExecutionEnvironment.execute()` 的时候被执行。
+Table API 或者 SQL 查询在下列情况下会被翻译:
 
-对于 Batch 而言,Table API 或者 SQL 查询在下列情况下会被翻译:
+* 当 `TableEnvironment.executeSql()` 被调用时。该方法是用来执行一个 SQL 语句,一旦该方法被调用 SQL 
语句立即被翻译。
+* 当 `Table.executeInsert()` 被调用时。该方法是用来将一个表的内容插入到目标表中,一旦该方法被调用 TABLE API 立即被翻译。
+* 当 `Table.execute()` 被调用时。该方法是用来将一个表的内容收集到本地,一旦该方法被调用 TABLE API 立即被翻译。

Review comment:
       Add a comma after '被调用', and 'TABLE API' -> 'TABLE API 程序'

##########
File path: 
flink-end-to-end-tests/flink-batch-sql-test/src/main/java/org/apache/flink/sql/tests/BatchSQLTestProgram.java
##########
@@ -48,7 +48,7 @@
  *
  * <p>Parameters:
  * -outputPath output file path for CsvTableSink;
- * -sqlStatement SQL statement that will be executed as sqlUpdate
+ * -sqlStatement SQL statement that will be executed as executeSql

Review comment:
       Do we need to replace all `sqlUpdate` with `executeSql` in docs and code comments?
   (1) There are some TODOs like 
https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java#L684
   (2) and there are many `sqlUpdate` references in code comments
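
    For reference, a minimal, hedged Java sketch of the mapping in question, assuming the 1.11 `TableEnvironment` API; the table names and connector options (`datagen`, `blackhole`) are only placeholders for this example, not part of the PR:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;

public class ExecuteSqlSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Placeholder tables so the INSERT below has something to read/write.
        tEnv.executeSql("CREATE TABLE Orders (product STRING, amount INT) WITH ('connector' = 'datagen')");
        tEnv.executeSql("CREATE TABLE RubberOrders (product STRING, amount INT) WITH ('connector' = 'blackhole')");

        // 1.10 style (now deprecated): sqlUpdate() only buffers the statement,
        // which runs when TableEnvironment.execute() is called later.
        // tEnv.sqlUpdate("INSERT INTO RubberOrders SELECT product, amount FROM Orders");
        // tEnv.execute("job");

        // 1.11 style: executeSql() translates and submits the statement immediately.
        TableResult result = tEnv.executeSql(
                "INSERT INTO RubberOrders SELECT product, amount FROM Orders");
        result.getJobClient().ifPresent(client -> System.out.println(client.getJobID()));
    }
}
```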

##########
File path: docs/dev/table/common.zh.md
##########
@@ -818,8 +815,13 @@ result.insert_into("CsvSinkTable")
 
 Table API 或者 SQL 查询在下列情况下会被翻译:
 
-* 当 `TableEnvironment.execute()` 被调用时。`Table` (通过 `Table.insertInto()` 输出给 
`TableSink`)和 SQL (通过调用 `TableEnvironment.sqlUpdate()`)会先被缓存到 
`TableEnvironment` 中,所有的 sink 会被优化成一张有向无环图。
-* `Table` 被转换成 `DataStream` 时(参阅[与 DataStream 和 DataSet API 
结合](#integration-with-datastream-and-dataset-api))。转换完成后,它就成为一个普通的 DataStream 
程序,并且会在调用 `StreamExecutionEnvironment.execute()` 的时候被执行。
+* 当 `TableEnvironment.executeSql()` 被调用时。该方法是用来执行一个 SQL 语句,一旦该方法被调用 SQL 
语句立即被翻译。

Review comment:
       ```suggestion
   * 当 `TableEnvironment.executeSql()` 被调用时。该方法是用来执行一个 SQL 语句,一旦该方法被调用, SQL 
语句立即被翻译。
   ```
   Add a comma

##########
File path: docs/dev/table/sql/describe.zh.md
##########
@@ -0,0 +1,201 @@
+---
+title: "DESCRIBE 语句"
+nav-parent_id: sql
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+DESCRIBE 语句用来描述一个表或者视图的原数据信息。

Review comment:
       ```suggestion
   DESCRIBE 语句用来描述一个表或者视图的元数据信息。
   ```

##########
File path: docs/dev/table/sql/insert.zh.md
##########
@@ -40,12 +41,31 @@ EnvironmentSettings settings = 
EnvironmentSettings.newInstance()...
 TableEnvironment tEnv = TableEnvironment.create(settings);
 
 // 注册一个 "Orders" 源表,和 "RubberOrders" 结果表
-tEnv.sqlUpdate("CREATE TABLE Orders (`user` BIGINT, product VARCHAR, amount 
INT) WITH (...)");
-tEnv.sqlUpdate("CREATE TABLE RubberOrders(product VARCHAR, amount INT) WITH 
(...)");
+tEnv.executeSql("CREATE TABLE Orders (`user` BIGINT, product VARCHAR, amount 
INT) WITH (...)");
+tEnv.executeSql("CREATE TABLE RubberOrders(product VARCHAR, amount INT) WITH 
(...)");
 
 // 运行一个 INSERT 语句,将源表的数据输出到结果表中
-tEnv.sqlUpdate(
+TableResult tableResult1 = tEnv.executeSql(
   "INSERT INTO RubberOrders SELECT product, amount FROM Orders WHERE product 
LIKE '%Rubber%'");
+// 可以通过 TableResult 来获取作业状态
+System.out.println(tableResult1.getJobClient().get().getJobStatus());
+
+//----------------------------------------------------------------------------
+// 注册一个 "GlassOrders" 结果表用于运行多 INSERT 语句
+tEnv.executeSql("CREATE TABLE GlassOrders(product VARCHAR, amount INT) WITH 
(...)");
+
+// 运行多个 INSERT 语句,将原表数据输出到多个结果表中
+StatementSet stmtSet = tEnv.createStatementSet();
+// `addInsertSql` 方法每次只接收单条 INSERT 语句
+stmtSet.addInsertSql(
+  "INSERT INTO RubberOrders SELECT product, amount FROM Orders WHERE product 
LIKE '%Rubber%'");
+stmtSet.addInsertSql(
+  "INSERT INTO GlassOrders SELECT product, amount FROM Orders WHERE product 
LIKE '%Glass%'");
+// 执行刚刚添加的所有的 INSERT 语句
+TableResult tableResult2 = stmtSet.execute();
+// 可以通过 TableResult 来获取作业状态

Review comment:
       ```suggestion
   // 通过 TableResult 来获取作业状态
   ```

##########
File path: docs/dev/table/sql/queries.zh.md
##########
@@ -136,7 +136,78 @@ t_env.connect(FileSystem().path("/path/to/file")))
 
 # 在表上执行 SQL 更新操作,并把结果发出到 TableSink
 table_env \
-    .sql_update("INSERT INTO RubberOrders SELECT product, amount FROM Orders 
WHERE product LIKE '%Rubber%'")
+    .execute_sql("INSERT INTO RubberOrders SELECT product, amount FROM Orders 
WHERE product LIKE '%Rubber%'")
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
+
+## Execute SELECT query
+SELECT 语句可以通过 `TableEnvironment.executeSql()` 方法来执行,将选择的结果收集到本地。该方法返回 
`TableResult` 对象用于包装查询的结果。和 SELECT 语句很像,`Table.execute()` 方法可以将 `Table` 
的内容收集到本地。
+`TableResult.collect()` 方法返回一个可以关闭的行迭代器。除非所有的数据都被搜集到本地,否则一个查询作业一直不会结束。所以我们应该通过 
`CloseableIterator#close()` 方法主动的关闭作业以防止资源泄露。

Review comment:
       搜集->收集, 
   否则一个查询作业一直不会结束-> 否则一个查询作业永远不会结束
   主动的 -> 主动地
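
    To make the collect/close behavior described in this paragraph concrete, a minimal hedged Java sketch; the `Orders` table and its `datagen` options are placeholders for the example, not part of the doc under review:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;
import org.apache.flink.types.Row;
import org.apache.flink.util.CloseableIterator;

public class CollectSketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql(
                "CREATE TABLE Orders (product STRING, amount INT) "
                        + "WITH ('connector' = 'datagen', 'number-of-rows' = '10')");

        TableResult result = tEnv.executeSql("SELECT product, amount FROM Orders");
        // collect() returns a CloseableIterator; the query job does not finish until
        // all rows are consumed, so close the iterator (try-with-resources) to make
        // sure the job is cancelled and no resources leak.
        try (CloseableIterator<Row> it = result.collect()) {
            while (it.hasNext()) {
                System.out.println(it.next());
            }
        }
    }
}
```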

##########
File path: docs/dev/table/common.md
##########
@@ -839,8 +828,14 @@ Table API and SQL queries are translated into 
[DataStream]({{ site.baseurl }}/de
 
 a Table API or SQL query is translated when:
 
-* `TableEnvironment.execute()` is called. A `Table` (emitted to a `TableSink` 
through `Table.insertInto()`) or a SQL update query (specified through 
`TableEnvironment.sqlUpdate()`) will be buffered in `TableEnvironment` first. 
All sinks will be optimized into one DAG.
+* `TableEnvironment.executeSql()` is called. This method is used for executing 
a given statement, and the sql query is translated immediately once this method 
is called.
+* `Table.executeInsert()` is called. This method is used for inserting the 
table content to the given sink path, and the Table API is translated 
immediately once this method is called.
+* `Table.execute()` is called. This method is used for collecting the table 
content to local client, and the Table API is translated immediately once this 
method is called.
+* `SatementSet.execute()` is called. A `Table` (emitted to a sink through 
`SatementSet.addInsert()`) or an INSERT statement (specified through 
`SatementSet.addInsertSql()`) will be buffered in `SatementSet` first. They are 
translated once `SatementSet.execute()` is called. All sinks will be optimized 
into one DAG.
 * A `Table` is translated when it is converted into a `DataStream` (see 
[Integration with DataStream and DataSet 
API](#integration-with-datastream-and-dataset-api)). Once translated, it's a 
regular DataStream program and is executed when 
`StreamExecutionEnvironment.execute()` is called.
+
+**Note:** Since 1.11 version, `sqlUpdate()` and `insertInto()` are deprecated. 
If the table program is built from these two methods, we must use 
`StreamTableEnvironment.execute()` method instead of 
`StreamExecutionEnvironment.execute()` method to execute it.

Review comment:
       Suggest wording this as "the methods `sqlUpdate()` and `insertInto()`".
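
    As a hedged aside on what replaces the deprecated pair, a minimal Java sketch of the `Table.executeInsert()` path mentioned in the diff above; the table names and connector options are placeholders for the example only:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;

public class ExecuteInsertSketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql("CREATE TABLE Orders (product STRING, amount INT) "
                + "WITH ('connector' = 'datagen', 'number-of-rows' = '10')");
        tEnv.executeSql("CREATE TABLE OutOrders (product STRING, amount INT) "
                + "WITH ('connector' = 'blackhole')");

        // Deprecated pattern:
        //   tEnv.from("Orders").insertInto("OutOrders");
        //   tEnv.execute("job");
        // New pattern: executeInsert() submits the job immediately and returns a TableResult.
        TableResult result = tEnv.from("Orders").executeInsert("OutOrders");

        // Optionally block until the insert job finishes (mirrors the SpendReport change below).
        result.getJobClient().get()
                .getJobExecutionResult(Thread.currentThread().getContextClassLoader())
                .get();
    }
}
```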

##########
File path: 
flink-walkthroughs/flink-walkthrough-table-java/src/main/resources/archetype-resources/src/main/java/SpendReport.java
##########
@@ -39,10 +40,12 @@ public static void main(String[] args) throws Exception {
                                "spend_report", new SpendReportTableSink());
                tEnv.registerFunction("truncateDateToHour", new 
TruncateDateToHour());
 
-               tEnv
-                       .scan("transactions")
-                       .insertInto("spend_report");
+               TableResult tableResult = tEnv
+                               .scan("transactions")
+                               .executeInsert("spend_report");

Review comment:
       Fix the indentation.

##########
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/batch/table/TableSinkITCase.scala
##########
@@ -48,12 +48,10 @@ class TableSinkITCase extends BatchTestBase {
 
     registerCollection("MyTable", data3, type3, "a, b, c", nullablesOfData3)
 
-    tEnv.from("MyTable")
+    val table = tEnv.from("MyTable")
       .where('a > 20)
       .select("12345", 55.cast(DataTypes.DECIMAL(10, 0)), 
"12345".cast(DataTypes.CHAR(5)))
-      .insertInto("sink")

Review comment:
       And do we need to replace all `insertInto` with `execInsertSqlAndWaitResult`?

##########
File path: 
flink-walkthroughs/flink-walkthrough-table-java/src/main/resources/archetype-resources/src/main/java/SpendReport.java
##########
@@ -39,10 +40,12 @@ public static void main(String[] args) throws Exception {
                                "spend_report", new SpendReportTableSink());
                tEnv.registerFunction("truncateDateToHour", new 
TruncateDateToHour());
 
-               tEnv
-                       .scan("transactions")
-                       .insertInto("spend_report");
+               TableResult tableResult = tEnv
+                               .scan("transactions")
+                               .executeInsert("spend_report");
 
-               tEnv.execute("Spend Report");
+               // wait job finished
+               tableResult.getJobClient().get()
+                               
.getJobExecutionResult(Thread.currentThread().getContextClassLoader()).get();

Review comment:
       Fix the indentation.

##########
File path: docs/dev/table/sql/describe.zh.md
##########
@@ -0,0 +1,201 @@
+---
+title: "DESCRIBE 语句"
+nav-parent_id: sql
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+DESCRIBE 语句用来描述一个表或者视图的原数据信息。
+
+
+## 执行 DESCRIBE 语句
+
+DESCRIBE 语句可以通过 `TableEnvironment` 的 `executeSql()` 执行,也可以在 [SQL CLI]({{ 
site.baseurl }}/dev/table/sqlClient.html) 中执行 DROP 语句。 若 DESCRIBE 
操作执行成功,executeSql() 方法返回 该表的 schema 信息,否则会抛出异常。

Review comment:
       “返回 该表” -> “返回该表”
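
    For example, a hedged sketch of the behavior that sentence describes (`Orders` is just a placeholder table name and the `datagen` connector is only used to make the snippet self-contained):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;

public class DescribeSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql("CREATE TABLE Orders (product STRING, amount INT) WITH ('connector' = 'datagen')");

        // executeSql("DESCRIBE ...") returns a TableResult wrapping the table's schema,
        // or throws an exception if the table does not exist; print() writes it to stdout.
        TableResult describeResult = tEnv.executeSql("DESCRIBE Orders");
        describeResult.print();
    }
}
```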

##########
File path: docs/dev/table/sql/explain.zh.md
##########
@@ -0,0 +1,192 @@
+---
+title: "EXPLAIN 语句"
+nav-parent_id: sql
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+EXPLAIN 语句用来解释一个 SELECT 语句或者 DML 的逻辑计划和优化后的计划。
+
+
+## 运行一个 EXPLAIN 语句
+
+EXPLAIN 语句可以通过 `TableEnvironment` 的 `executeSql()` 执行,也可以在 [SQL CLI]({{ 
site.baseurl }}/zh/dev/table/sqlClient.html) 中执行 EXPLAIN 语句。 若 EXPLAIN 
操作执行成功,executeSql() 方法返回解释的结果,否则会抛出异常。
+
+以下的例子展示了如何在 TableEnvironment 和 SQL CLI 中执行一个 EXPLAIN 语句。
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
+
+// register a table named "Orders"
+tEnv.executeSql("CREATE TABLE MyTable1 (count bigint, work VARCHAR(256) WITH 
(...)");
+tEnv.executeSql("CREATE TABLE MyTable2 (count bigint, work VARCHAR(256) WITH 
(...)");
+
+// explain SELECT statement through TableEnvironment.explainSql()
+String explanation = tEnv.explainSql(
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2");
+System.out.println(explanation);
+
+// explain SELECT statement through TableEnvironment.executeSql()
+TableResult tableResult = tEnv.executeSql(
+  "EXPLAIN PLAN FOR " + 
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2");
+tableResult.print();
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment()
+val tEnv = StreamTableEnvironment.create(env)
+
+// register a table named "Orders"
+tEnv.executeSql("CREATE TABLE MyTable1 (count bigint, work VARCHAR(256) WITH 
(...)")
+tEnv.executeSql("CREATE TABLE MyTable2 (count bigint, work VARCHAR(256) WITH 
(...)")
+
+// explain SELECT statement through TableEnvironment.explainSql()
+val explanation = tEnv.explainSql(
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2")
+println(explanation)
+
+// explain SELECT statement through TableEnvironment.executeSql()
+val tableResult = tEnv.executeSql(
+  "EXPLAIN PLAN FOR " + 
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2")
+tableResult.print()
+
+{% endhighlight %}
+</div>
+
+<div data-lang="python" markdown="1">
+{% highlight python %}
+settings = EnvironmentSettings.new_instance()...
+table_env = StreamTableEnvironment.create(env, settings)
+
+t_env.execute_sql("CREATE TABLE MyTable1 (count bigint, work VARCHAR(256) WITH 
(...)")
+t_env.execute_sql("CREATE TABLE MyTable2 (count bigint, work VARCHAR(256) WITH 
(...)")
+
+# explain SELECT statement through TableEnvironment.explain_sql()
+explanation1 = t_env.explain_sql(
+    "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' "
+    "UNION ALL "
+    "SELECT count, word FROM MyTable2")
+print(explanation1)
+
+# explain SELECT statement through TableEnvironment.execute_sql()
+table_result = t_env.execute_sql(
+    "EXPLAIN PLAN FOR "
+    "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' "
+    "UNION ALL "
+    "SELECT count, word FROM MyTable2")
+table_result.print()
+
+{% endhighlight %}
+</div>
+
+<div data-lang="SQL CLI" markdown="1">
+{% highlight sql %}
+Flink SQL> CREATE TABLE MyTable1 (count bigint, work VARCHAR(256);
+[INFO] Table has been created.
+
+Flink SQL> CREATE TABLE MyTable2 (count bigint, work VARCHAR(256);
+[INFO] Table has been created.
+
+Flink SQL> EXPLAIN PLAN FOR SELECT count, word FROM MyTable1 WHERE word LIKE 
'F%' 
+> UNION ALL 
+> SELECT count, word FROM MyTable2;
+
+{% endhighlight %}
+</div>
+</div>
+
+执行 `EXPLAIN` 语句后的结果为:
+<div>
+{% highlight text %}
+== Abstract Syntax Tree ==
+LogicalUnion(all=[true])
+  LogicalFilter(condition=[LIKE($1, _UTF-16LE'F%')])
+    FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
MyTable1]], fields=[count, word])
+  FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
MyTable2]], fields=[count, word])
+  
+
+== Optimized Logical Plan ==
+DataStreamUnion(all=[true], union all=[count, word])
+  DataStreamCalc(select=[count, word], where=[LIKE(word, _UTF-16LE'F%')])
+    TableSourceScan(table=[[default_catalog, default_database, MyTable1]], 
fields=[count, word])
+  TableSourceScan(table=[[default_catalog, default_database, MyTable2]], 
fields=[count, word])
+
+== Physical Execution Plan ==
+Stage 1 : Data Source
+       content : collect elements with CollectionInputFormat
+
+Stage 2 : Data Source
+       content : collect elements with CollectionInputFormat
+
+       Stage 3 : Operator
+               content : from: (count, word)
+               ship_strategy : REBALANCE
+
+               Stage 4 : Operator
+                       content : where: (LIKE(word, _UTF-16LE'F%')), select: 
(count, word)
+                       ship_strategy : FORWARD
+
+                       Stage 5 : Operator
+                               content : from: (count, word)
+                               ship_strategy : REBALANCE
+{% endhighlight %}
+</div>
+
+和 `EXPLAIN PLAN FOR SELECT ...` 语句很像,我们也可以用 `EXPLAIN PLAN FOR INSERT ...` 
语句来获取一个 INSERT 语句的计划。
+
+{% top %}
+
+## 语法
+
+{% highlight sql %}
+EXPLAIN PLAN 
+[EXCLUDING ATTRIBUTES | INCLUDING ALL ATTRIBUTES] 
+[WITH TYPE | WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION]
+[AS XML | AS JSON] 
+FOR 
+<select_statement_or_insert_statement>
+
+{% endhighlight %}
+
+请参阅 [SELECT]({{ site.baseurl 
}}/zh/dev/table/sql/queries.html#supported-syntax) 页面获得 SELECT 的语法.

Review comment:
       Use a Chinese full stop (。) here instead of the ASCII period.

##########
File path: docs/dev/table/common.zh.md
##########
@@ -818,8 +815,13 @@ result.insert_into("CsvSinkTable")
 
 Table API 或者 SQL 查询在下列情况下会被翻译:
 
-* 当 `TableEnvironment.execute()` 被调用时。`Table` (通过 `Table.insertInto()` 输出给 
`TableSink`)和 SQL (通过调用 `TableEnvironment.sqlUpdate()`)会先被缓存到 
`TableEnvironment` 中,所有的 sink 会被优化成一张有向无环图。
-* `Table` 被转换成 `DataStream` 时(参阅[与 DataStream 和 DataSet API 
结合](#integration-with-datastream-and-dataset-api))。转换完成后,它就成为一个普通的 DataStream 
程序,并且会在调用 `StreamExecutionEnvironment.execute()` 的时候被执行。
+* 当 `TableEnvironment.executeSql()` 被调用时。该方法是用来执行一个 SQL 语句,一旦该方法被调用 SQL 
语句立即被翻译。
+* 当 `Table.executeInsert()` 被调用时。该方法是用来将一个表的内容插入到目标表中,一旦该方法被调用 TABLE API 立即被翻译。
+* 当 `Table.execute()` 被调用时。该方法是用来将一个表的内容收集到本地,一旦该方法被调用 TABLE API 立即被翻译。

Review comment:
       Add a comma here too. And 'TABLE API' -> 'TABLE API 程序'?

##########
File path: docs/dev/table/tableApi.zh.md
##########
@@ -2119,13 +2119,13 @@ orders.insertInto("OutOrders")
         <span class="label label-primary">批处理</span> <span class="label 
label-primary">流处理</span>
       </td>
       <td>
-        <p>类似于SQL请求中的INSERT INTO子句。将数据输出到一个已注册的输出表中。</p>
+        <p>类似于SQL请求中的INSERT INTO子句。将数据输出到一个已注册的输出表中。execute_insert 方法会立即提交一个 
Flink 作业,触发插入操作。</p>

Review comment:
       ```suggestion
           <p>类似于SQL请求中的INSERT INTO子句。将数据输出到一个已注册的输出表中。`execute_insert` 
方法会立即提交一个 Flink 作业,触发插入操作。</p>
   ```

##########
File path: docs/dev/table/sql/explain.zh.md
##########
@@ -0,0 +1,192 @@
+---
+title: "EXPLAIN 语句"
+nav-parent_id: sql
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+EXPLAIN 语句用来解释一个 SELECT 语句或者 DML 的逻辑计划和优化后的计划。
+
+
+## 运行一个 EXPLAIN 语句
+
+EXPLAIN 语句可以通过 `TableEnvironment` 的 `executeSql()` 执行,也可以在 [SQL CLI]({{ 
site.baseurl }}/zh/dev/table/sqlClient.html) 中执行 EXPLAIN 语句。 若 EXPLAIN 
操作执行成功,executeSql() 方法返回解释的结果,否则会抛出异常。
+
+以下的例子展示了如何在 TableEnvironment 和 SQL CLI 中执行一个 EXPLAIN 语句。
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
+
+// register a table named "Orders"
+tEnv.executeSql("CREATE TABLE MyTable1 (count bigint, work VARCHAR(256) WITH 
(...)");
+tEnv.executeSql("CREATE TABLE MyTable2 (count bigint, work VARCHAR(256) WITH 
(...)");
+
+// explain SELECT statement through TableEnvironment.explainSql()
+String explanation = tEnv.explainSql(
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2");
+System.out.println(explanation);
+
+// explain SELECT statement through TableEnvironment.executeSql()
+TableResult tableResult = tEnv.executeSql(
+  "EXPLAIN PLAN FOR " + 
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2");
+tableResult.print();
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment()
+val tEnv = StreamTableEnvironment.create(env)
+
+// register a table named "Orders"
+tEnv.executeSql("CREATE TABLE MyTable1 (count bigint, work VARCHAR(256) WITH 
(...)")
+tEnv.executeSql("CREATE TABLE MyTable2 (count bigint, work VARCHAR(256) WITH 
(...)")
+
+// explain SELECT statement through TableEnvironment.explainSql()
+val explanation = tEnv.explainSql(
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2")
+println(explanation)
+
+// explain SELECT statement through TableEnvironment.executeSql()
+val tableResult = tEnv.executeSql(
+  "EXPLAIN PLAN FOR " + 
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2")
+tableResult.print()
+
+{% endhighlight %}
+</div>
+
+<div data-lang="python" markdown="1">
+{% highlight python %}
+settings = EnvironmentSettings.new_instance()...
+table_env = StreamTableEnvironment.create(env, settings)
+
+t_env.execute_sql("CREATE TABLE MyTable1 (count bigint, work VARCHAR(256) WITH 
(...)")
+t_env.execute_sql("CREATE TABLE MyTable2 (count bigint, work VARCHAR(256) WITH 
(...)")
+
+# explain SELECT statement through TableEnvironment.explain_sql()
+explanation1 = t_env.explain_sql(
+    "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' "
+    "UNION ALL "
+    "SELECT count, word FROM MyTable2")
+print(explanation1)
+
+# explain SELECT statement through TableEnvironment.execute_sql()
+table_result = t_env.execute_sql(
+    "EXPLAIN PLAN FOR "
+    "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' "
+    "UNION ALL "
+    "SELECT count, word FROM MyTable2")
+table_result.print()
+
+{% endhighlight %}
+</div>
+
+<div data-lang="SQL CLI" markdown="1">
+{% highlight sql %}
+Flink SQL> CREATE TABLE MyTable1 (count bigint, work VARCHAR(256);
+[INFO] Table has been created.
+
+Flink SQL> CREATE TABLE MyTable2 (count bigint, work VARCHAR(256);
+[INFO] Table has been created.
+
+Flink SQL> EXPLAIN PLAN FOR SELECT count, word FROM MyTable1 WHERE word LIKE 
'F%' 
+> UNION ALL 
+> SELECT count, word FROM MyTable2;
+
+{% endhighlight %}
+</div>
+</div>
+
+执行 `EXPLAIN` 语句后的结果为:
+<div>
+{% highlight text %}
+== Abstract Syntax Tree ==
+LogicalUnion(all=[true])
+  LogicalFilter(condition=[LIKE($1, _UTF-16LE'F%')])
+    FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
MyTable1]], fields=[count, word])
+  FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
MyTable2]], fields=[count, word])
+  
+
+== Optimized Logical Plan ==
+DataStreamUnion(all=[true], union all=[count, word])
+  DataStreamCalc(select=[count, word], where=[LIKE(word, _UTF-16LE'F%')])
+    TableSourceScan(table=[[default_catalog, default_database, MyTable1]], 
fields=[count, word])
+  TableSourceScan(table=[[default_catalog, default_database, MyTable2]], 
fields=[count, word])
+
+== Physical Execution Plan ==
+Stage 1 : Data Source
+       content : collect elements with CollectionInputFormat
+
+Stage 2 : Data Source
+       content : collect elements with CollectionInputFormat
+
+       Stage 3 : Operator
+               content : from: (count, word)
+               ship_strategy : REBALANCE
+
+               Stage 4 : Operator
+                       content : where: (LIKE(word, _UTF-16LE'F%')), select: 
(count, word)
+                       ship_strategy : FORWARD
+
+                       Stage 5 : Operator
+                               content : from: (count, word)
+                               ship_strategy : REBALANCE
+{% endhighlight %}
+</div>
+
+和 `EXPLAIN PLAN FOR SELECT ...` 语句很像,我们也可以用 `EXPLAIN PLAN FOR INSERT ...` 
语句来获取一个 INSERT 语句的计划。
+
+{% top %}
+
+## 语法
+
+{% highlight sql %}
+EXPLAIN PLAN 
+[EXCLUDING ATTRIBUTES | INCLUDING ALL ATTRIBUTES] 
+[WITH TYPE | WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION]
+[AS XML | AS JSON] 
+FOR 
+<select_statement_or_insert_statement>
+
+{% endhighlight %}
+
+请参阅 [SELECT]({{ site.baseurl 
}}/zh/dev/table/sql/queries.html#supported-syntax) 页面获得 SELECT 的语法.
+请参阅 [INSERT]({{ site.baseurl }}/zh/dev/table/sql/insert.html) 页面获得 INSERT 的语法.

Review comment:
       Use a Chinese full stop (。) here instead of the ASCII period.

##########
File path: docs/dev/table/sql/explain.zh.md
##########
@@ -0,0 +1,192 @@
+---
+title: "EXPLAIN 语句"
+nav-parent_id: sql
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+EXPLAIN 语句用来解释一个 SELECT 语句或者 DML 的逻辑计划和优化后的计划。
+
+
+## 运行一个 EXPLAIN 语句
+
+EXPLAIN 语句可以通过 `TableEnvironment` 的 `executeSql()` 执行,也可以在 [SQL CLI]({{ 
site.baseurl }}/zh/dev/table/sqlClient.html) 中执行 EXPLAIN 语句。 若 EXPLAIN 
操作执行成功,executeSql() 方法返回解释的结果,否则会抛出异常。
+
+以下的例子展示了如何在 TableEnvironment 和 SQL CLI 中执行一个 EXPLAIN 语句。
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
+
+// register a table named "Orders"
+tEnv.executeSql("CREATE TABLE MyTable1 (count bigint, work VARCHAR(256) WITH 
(...)");
+tEnv.executeSql("CREATE TABLE MyTable2 (count bigint, work VARCHAR(256) WITH 
(...)");
+
+// explain SELECT statement through TableEnvironment.explainSql()
+String explanation = tEnv.explainSql(
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2");
+System.out.println(explanation);
+
+// explain SELECT statement through TableEnvironment.executeSql()
+TableResult tableResult = tEnv.executeSql(
+  "EXPLAIN PLAN FOR " + 
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2");
+tableResult.print();
+
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment()
+val tEnv = StreamTableEnvironment.create(env)
+
+// register a table named "Orders"
+tEnv.executeSql("CREATE TABLE MyTable1 (count bigint, work VARCHAR(256) WITH 
(...)")
+tEnv.executeSql("CREATE TABLE MyTable2 (count bigint, work VARCHAR(256) WITH 
(...)")
+
+// explain SELECT statement through TableEnvironment.explainSql()
+val explanation = tEnv.explainSql(
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2")
+println(explanation)
+
+// explain SELECT statement through TableEnvironment.executeSql()
+val tableResult = tEnv.executeSql(
+  "EXPLAIN PLAN FOR " + 
+  "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' " +
+  "UNION ALL " + 
+  "SELECT count, word FROM MyTable2")
+tableResult.print()
+
+{% endhighlight %}
+</div>
+
+<div data-lang="python" markdown="1">
+{% highlight python %}
+settings = EnvironmentSettings.new_instance()...
+table_env = StreamTableEnvironment.create(env, settings)
+
+t_env.execute_sql("CREATE TABLE MyTable1 (count bigint, work VARCHAR(256) WITH 
(...)")
+t_env.execute_sql("CREATE TABLE MyTable2 (count bigint, work VARCHAR(256) WITH 
(...)")
+
+# explain SELECT statement through TableEnvironment.explain_sql()
+explanation1 = t_env.explain_sql(
+    "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' "
+    "UNION ALL "
+    "SELECT count, word FROM MyTable2")
+print(explanation1)
+
+# explain SELECT statement through TableEnvironment.execute_sql()
+table_result = t_env.execute_sql(
+    "EXPLAIN PLAN FOR "
+    "SELECT count, word FROM MyTable1 WHERE word LIKE 'F%' "
+    "UNION ALL "
+    "SELECT count, word FROM MyTable2")
+table_result.print()
+
+{% endhighlight %}
+</div>
+
+<div data-lang="SQL CLI" markdown="1">
+{% highlight sql %}
+Flink SQL> CREATE TABLE MyTable1 (count bigint, work VARCHAR(256);
+[INFO] Table has been created.
+
+Flink SQL> CREATE TABLE MyTable2 (count bigint, work VARCHAR(256);
+[INFO] Table has been created.
+
+Flink SQL> EXPLAIN PLAN FOR SELECT count, word FROM MyTable1 WHERE word LIKE 
'F%' 
+> UNION ALL 
+> SELECT count, word FROM MyTable2;
+
+{% endhighlight %}
+</div>
+</div>
+
+执行 `EXPLAIN` 语句后的结果为:
+<div>
+{% highlight text %}
+== Abstract Syntax Tree ==
+LogicalUnion(all=[true])
+  LogicalFilter(condition=[LIKE($1, _UTF-16LE'F%')])
+    FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
MyTable1]], fields=[count, word])
+  FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
MyTable2]], fields=[count, word])
+  
+
+== Optimized Logical Plan ==
+DataStreamUnion(all=[true], union all=[count, word])
+  DataStreamCalc(select=[count, word], where=[LIKE(word, _UTF-16LE'F%')])
+    TableSourceScan(table=[[default_catalog, default_database, MyTable1]], 
fields=[count, word])
+  TableSourceScan(table=[[default_catalog, default_database, MyTable2]], 
fields=[count, word])
+
+== Physical Execution Plan ==
+Stage 1 : Data Source
+       content : collect elements with CollectionInputFormat
+
+Stage 2 : Data Source
+       content : collect elements with CollectionInputFormat
+
+       Stage 3 : Operator
+               content : from: (count, word)
+               ship_strategy : REBALANCE
+
+               Stage 4 : Operator
+                       content : where: (LIKE(word, _UTF-16LE'F%')), select: 
(count, word)
+                       ship_strategy : FORWARD
+
+                       Stage 5 : Operator
+                               content : from: (count, word)
+                               ship_strategy : REBALANCE
+{% endhighlight %}
+</div>
+
+和 `EXPLAIN PLAN FOR SELECT ...` 语句很像,我们也可以用 `EXPLAIN PLAN FOR INSERT ...` 
语句来获取一个 INSERT 语句的计划。

Review comment:
       ```suggestion
   和 `EXPLAIN PLAN FOR SELECT ...` 语句类似,我们也可以用 `EXPLAIN PLAN FOR INSERT ...` 
语句来获取一个 INSERT 语句的计划。
   ```

##########
File path: docs/dev/table/sql/insert.zh.md
##########
@@ -40,12 +41,31 @@ EnvironmentSettings settings = 
EnvironmentSettings.newInstance()...
 TableEnvironment tEnv = TableEnvironment.create(settings);
 
 // 注册一个 "Orders" 源表,和 "RubberOrders" 结果表
-tEnv.sqlUpdate("CREATE TABLE Orders (`user` BIGINT, product VARCHAR, amount 
INT) WITH (...)");
-tEnv.sqlUpdate("CREATE TABLE RubberOrders(product VARCHAR, amount INT) WITH 
(...)");
+tEnv.executeSql("CREATE TABLE Orders (`user` BIGINT, product VARCHAR, amount 
INT) WITH (...)");
+tEnv.executeSql("CREATE TABLE RubberOrders(product VARCHAR, amount INT) WITH 
(...)");
 
 // 运行一个 INSERT 语句,将源表的数据输出到结果表中
-tEnv.sqlUpdate(
+TableResult tableResult1 = tEnv.executeSql(
   "INSERT INTO RubberOrders SELECT product, amount FROM Orders WHERE product 
LIKE '%Rubber%'");
+// 可以通过 TableResult 来获取作业状态
+System.out.println(tableResult1.getJobClient().get().getJobStatus());
+
+//----------------------------------------------------------------------------
+// 注册一个 "GlassOrders" 结果表用于运行多 INSERT 语句
+tEnv.executeSql("CREATE TABLE GlassOrders(product VARCHAR, amount INT) WITH 
(...)");
+
+// 运行多个 INSERT 语句,将原表数据输出到多个结果表中
+StatementSet stmtSet = tEnv.createStatementSet();
+// `addInsertSql` 方法每次只接收单条 INSERT 语句
+stmtSet.addInsertSql(
+  "INSERT INTO RubberOrders SELECT product, amount FROM Orders WHERE product 
LIKE '%Rubber%'");
+stmtSet.addInsertSql(
+  "INSERT INTO GlassOrders SELECT product, amount FROM Orders WHERE product 
LIKE '%Glass%'");
+// 执行刚刚添加的所有的 INSERT 语句

Review comment:
       ```suggestion
   // 执行刚刚添加的所有 INSERT 语句
   ```

##########
File path: docs/dev/table/sql/queries.md
##########
@@ -1,5 +1,5 @@
 ---
-title: "Queries"
+title: "SELECT queries"

Review comment:
       In this section, do we need to change this? From my understanding, all queries here should be SELECT queries; the others belong to other kinds of SQL statements, and "query" should not be equated with "SQL".

##########
File path: docs/dev/table/tableApi.md
##########
@@ -2120,13 +2120,13 @@ orders.insertInto("OutOrders")
         <span class="label label-primary">Batch</span> <span class="label 
label-primary">Streaming</span>
       </td>
       <td>
-        <p>Similar to the INSERT INTO clause in a SQL query. Performs a 
insertion into a registered output table.</p>
+        <p>Similar to the INSERT INTO clause in a SQL query. Performs a 
insertion into a registered output table. The executeInsert method will 
immediately submit a flink job which execute the insert operation.</p>
 
         <p>Output tables must be registered in the TableEnvironment (see <a 
href="common.html#register-a-tablesink">Register a TableSink</a>). Moreover, 
the schema of the registered table must match the schema of the query.</p>
 
 {% highlight python %}
-orders = table_env.from_path("Orders");
-orders.insert_into("OutOrders");
+orders = table_env.from_path("Orders")
+orders.execute_insert("OutOrders")

Review comment:
       ```suggestion
   orders = table_env.from_path("Orders");
   orders.execute_insert("OutOrders");
   ```

##########
File path: docs/dev/table/sql/queries.zh.md
##########
@@ -136,7 +136,78 @@ t_env.connect(FileSystem().path("/path/to/file")))
 
 # 在表上执行 SQL 更新操作,并把结果发出到 TableSink
 table_env \
-    .sql_update("INSERT INTO RubberOrders SELECT product, amount FROM Orders 
WHERE product LIKE '%Rubber%'")
+    .execute_sql("INSERT INTO RubberOrders SELECT product, amount FROM Orders 
WHERE product LIKE '%Rubber%'")
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
+
+## Execute SELECT query
+SELECT 语句可以通过 `TableEnvironment.executeSql()` 方法来执行,将选择的结果收集到本地。该方法返回 
`TableResult` 对象用于包装查询的结果。和 SELECT 语句很像,`Table.execute()` 方法可以将 `Table` 
的内容收集到本地。
+`TableResult.collect()` 方法返回一个可以关闭的行迭代器。除非所有的数据都被搜集到本地,否则一个查询作业一直不会结束。所以我们应该通过 
`CloseableIterator#close()` 方法主动的关闭作业以防止资源泄露。
+我们可以通过 `TableResult.print()` 
方法将查询结果打印到本地控制台,但是我们应该确保最终返回的数据量比较小。因为所有数据先都收集到本地内存再执行打印操作。

Review comment:
       因为所有数据都会先收集到本地内存再执行打印操作。
   

##########
File path: docs/dev/table/sql/queries.zh.md
##########
@@ -136,7 +136,78 @@ t_env.connect(FileSystem().path("/path/to/file")))
 
 # 在表上执行 SQL 更新操作,并把结果发出到 TableSink
 table_env \
-    .sql_update("INSERT INTO RubberOrders SELECT product, amount FROM Orders 
WHERE product LIKE '%Rubber%'")
+    .execute_sql("INSERT INTO RubberOrders SELECT product, amount FROM Orders 
WHERE product LIKE '%Rubber%'")
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
+
+## Execute SELECT query
+SELECT 语句可以通过 `TableEnvironment.executeSql()` 方法来执行,将选择的结果收集到本地。该方法返回 
`TableResult` 对象用于包装查询的结果。和 SELECT 语句很像,`Table.execute()` 方法可以将 `Table` 
的内容收集到本地。

Review comment:
       和 SELECT 语句类似




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

