xintongsong commented on a change in pull request #11191:
URL: https://github.com/apache/flink/pull/11191#discussion_r422744781



##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -22,33 +22,31 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-User-defined functions are an important feature, because they significantly 
extend the expressiveness of queries.
+自定义函数是一个非常重要的功能,因为它极大的扩展了查询的表达能力。
 
 * This will be replaced by the TOC
 {:toc}
 
-Register User-Defined Functions
+注册自定义函数
 -------------------------------
-In most cases, a user-defined function must be registered before it can be 
used in an query. It is not necessary to register functions for the Scala Table 
API. 
+在大多数情况下,自定义函数在使用之前都需要注册。在 Scala Table API 中可以不用注册。
 
-Functions are registered at the `TableEnvironment` by calling a 
`registerFunction()` method. When a user-defined function is registered, it is 
inserted into the function catalog of the `TableEnvironment` such that the 
Table API or SQL parser can recognize and properly translate it. 
-
-Please find detailed examples of how to register and how to call each type of 
user-defined function 
-(`ScalarFunction`, `TableFunction`, and `AggregateFunction`) in the following 
sub-sessions.
+通过调用 `registerFunction()` 把函数注册到 `TableEnvironment` 的函数 catalog 
里面。当一个函数注册之后,它就在 `TableEnvironment` 的函数 catalog 里面了,这样 Table API 或者 SQL 
就可以识别并使用它。
 
+关于如何注册和使用每种类型的自定义函数(标量函数,表值函数,和聚合函数),更多示例可以看下面的部分。

Review comment:
       ```suggestion
   关于如何注册和使用每种类型的自定义函数(标量函数、表值函数和聚合函数),更多示例可以看下面的部分。
   ```
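
    For readers following the thread, a minimal sketch of the registration flow described in this paragraph, assuming the `HashCode` and `Split` example classes defined elsewhere on this page:
    ```java
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.table.api.java.BatchTableEnvironment;

    public class RegisterUdfsExample {
        public static void main(String[] args) {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            BatchTableEnvironment tableEnv = BatchTableEnvironment.create(env);

            // each call inserts the function into the TableEnvironment's
            // function catalog under the given name, so that the Table API
            // and the SQL parser can resolve it afterwards
            tableEnv.registerFunction("hashCode", new HashCode(10)); // ScalarFunction
            tableEnv.registerFunction("split", new Split("#"));      // TableFunction
        }
    }
    ```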

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -22,33 +22,31 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-User-defined functions are an important feature, because they significantly 
extend the expressiveness of queries.
+自定义函数是一个非常重要的功能,因为它极大的扩展了查询的表达能力。
 
 * This will be replaced by the TOC
 {:toc}
 
-Register User-Defined Functions
+注册自定义函数
 -------------------------------
-In most cases, a user-defined function must be registered before it can be 
used in an query. It is not necessary to register functions for the Scala Table 
API. 
+在大多数情况下,自定义函数在使用之前都需要注册。在 Scala Table API 中可以不用注册。
 
-Functions are registered at the `TableEnvironment` by calling a 
`registerFunction()` method. When a user-defined function is registered, it is 
inserted into the function catalog of the `TableEnvironment` such that the 
Table API or SQL parser can recognize and properly translate it. 
-
-Please find detailed examples of how to register and how to call each type of 
user-defined function 
-(`ScalarFunction`, `TableFunction`, and `AggregateFunction`) in the following 
sub-sessions.
+通过调用 `registerFunction()` 把函数注册到 `TableEnvironment` 的函数 catalog 
里面。当一个函数注册之后,它就在 `TableEnvironment` 的函数 catalog 里面了,这样 Table API 或者 SQL 
就可以识别并使用它。

Review comment:
       ```suggestion
   通过调用 `registerFunction()` 把函数注册到 `TableEnvironment`。当一个函数注册之后,它就在 
`TableEnvironment` 的函数 catalog 里面了,这样 Table API 或者 SQL 解析器就可以识别并使用它。
   ```

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -22,33 +22,31 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-User-defined functions are an important feature, because they significantly 
extend the expressiveness of queries.
+自定义函数是一个非常重要的功能,因为它极大的扩展了查询的表达能力。
 
 * This will be replaced by the TOC
 {:toc}
 
-Register User-Defined Functions
+注册自定义函数
 -------------------------------
-In most cases, a user-defined function must be registered before it can be 
used in an query. It is not necessary to register functions for the Scala Table 
API. 
+在大多数情况下,自定义函数在使用之前都需要注册。在 Scala Table API 中可以不用注册。
 
-Functions are registered at the `TableEnvironment` by calling a 
`registerFunction()` method. When a user-defined function is registered, it is 
inserted into the function catalog of the `TableEnvironment` such that the 
Table API or SQL parser can recognize and properly translate it. 
-
-Please find detailed examples of how to register and how to call each type of 
user-defined function 
-(`ScalarFunction`, `TableFunction`, and `AggregateFunction`) in the following 
sub-sessions.
+通过调用 `registerFunction()` 把函数注册到 `TableEnvironment` 的函数 catalog 
里面。当一个函数注册之后,它就在 `TableEnvironment` 的函数 catalog 里面了,这样 Table API 或者 SQL 
就可以识别并使用它。
 
+关于如何注册和使用每种类型的自定义函数(标量函数,表值函数,和聚合函数),更多示例可以看下面的部分。
 
 {% top %}
 
-Scalar Functions
+标量函数
 ----------------
 
-If a required scalar function is not contained in the built-in functions, it 
is possible to define custom, user-defined scalar functions for both the Table 
API and SQL. A user-defined scalar functions maps zero, one, or multiple scalar 
values to a new scalar value.
+要求自定义标量函数不能覆盖内置函数,Table API 和 SQL 都可以定义和使用自定义标量函数。自定义标量函数可以把0到多个标量值映射成1个标量值。

Review comment:
    There should be a space between digits and Chinese characters
   ```suggestion
   要求自定义标量函数不能覆盖内置函数,Table API 和 SQL 都可以定义和使用自定义标量函数。自定义标量函数可以把 0 到多个标量值映射成 1 
个标量值。
   ```

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -65,19 +63,19 @@ public class HashCode extends ScalarFunction {
 
 BatchTableEnvironment tableEnv = BatchTableEnvironment.create(env);
 
-// register the function
+// 注册函数
 tableEnv.registerFunction("hashCode", new HashCode(10));
 
-// use the function in Java Table API
+// 在 Java Table API 中使用函数
 myTable.select("string, string.hashCode(), hashCode(string)");
 
-// use the function in SQL API
+// 在 SQL API 中使用函数
 tableEnv.sqlQuery("SELECT string, hashCode(string) FROM MyTable");
 {% endhighlight %}
 
-By default the result type of an evaluation method is determined by Flink's 
type extraction facilities. This is sufficient for basic types or simple POJOs 
but might be wrong for more complex, custom, or composite types. In these cases 
`TypeInformation` of the result type can be manually defined by overriding 
`ScalarFunction#getResultType()`.
+求值方法的返回值类型默认是由 Flink 的类型推导来决定的。类型推导可以推导出基本数据类型以及简单的 
POJO,但是对于更复杂的、自定义的、或者组合类型,可能会推导出错误的结果。在这种情况下,可以通过覆盖 
`ScalarFunction#getResultType()` 的方式来定义复杂类型。

Review comment:
    The class name `TypeInformation` is a key piece of information here; it is best kept in the translation.
   
   Same for Scala
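
    For context, a sketch of the pattern this comment refers to, adapted from the page's own `TimestampModifier` example (class and import names follow the English original; worth re-checking against the Flink version being documented):
    ```java
    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.table.functions.ScalarFunction;

    public class TimestampModifier extends ScalarFunction {
        public long eval(long t) {
            return t % 1000;
        }

        // manually define the TypeInformation of the result: the returned
        // long value is to be interpreted as a SQL timestamp
        @Override
        public TypeInformation<?> getResultType(Class<?>[] signature) {
            return Types.SQL_TIMESTAMP;
        }
    }
    ```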

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -22,33 +22,31 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-User-defined functions are an important feature, because they significantly 
extend the expressiveness of queries.
+自定义函数是一个非常重要的功能,因为它极大的扩展了查询的表达能力。
 
 * This will be replaced by the TOC
 {:toc}
 
-Register User-Defined Functions
+注册自定义函数
 -------------------------------
-In most cases, a user-defined function must be registered before it can be 
used in an query. It is not necessary to register functions for the Scala Table 
API. 
+在大多数情况下,自定义函数在使用之前都需要注册。在 Scala Table API 中可以不用注册。
 
-Functions are registered at the `TableEnvironment` by calling a 
`registerFunction()` method. When a user-defined function is registered, it is 
inserted into the function catalog of the `TableEnvironment` such that the 
Table API or SQL parser can recognize and properly translate it. 
-
-Please find detailed examples of how to register and how to call each type of 
user-defined function 
-(`ScalarFunction`, `TableFunction`, and `AggregateFunction`) in the following 
sub-sessions.
+通过调用 `registerFunction()` 把函数注册到 `TableEnvironment` 的函数 catalog 
里面。当一个函数注册之后,它就在 `TableEnvironment` 的函数 catalog 里面了,这样 Table API 或者 SQL 
就可以识别并使用它。
 
+关于如何注册和使用每种类型的自定义函数(标量函数,表值函数,和聚合函数),更多示例可以看下面的部分。
 
 {% top %}
 
-Scalar Functions
+标量函数
 ----------------
 
-If a required scalar function is not contained in the built-in functions, it 
is possible to define custom, user-defined scalar functions for both the Table 
API and SQL. A user-defined scalar functions maps zero, one, or multiple scalar 
values to a new scalar value.
+要求自定义标量函数不能覆盖内置函数,Table API 和 SQL 都可以定义和使用自定义标量函数。自定义标量函数可以把0到多个标量值映射成1个标量值。

Review comment:
    The logical relationship here doesn't seem right; it should read “需要的标量函数没有被内置函数覆盖” (i.e., the required scalar function is not covered by the built-in functions).

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -22,33 +22,31 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-User-defined functions are an important feature, because they significantly 
extend the expressiveness of queries.
+自定义函数是一个非常重要的功能,因为它极大的扩展了查询的表达能力。
 
 * This will be replaced by the TOC
 {:toc}
 
-Register User-Defined Functions
+注册自定义函数
 -------------------------------
-In most cases, a user-defined function must be registered before it can be 
used in an query. It is not necessary to register functions for the Scala Table 
API. 
+在大多数情况下,自定义函数在使用之前都需要注册。在 Scala Table API 中可以不用注册。
 
-Functions are registered at the `TableEnvironment` by calling a 
`registerFunction()` method. When a user-defined function is registered, it is 
inserted into the function catalog of the `TableEnvironment` such that the 
Table API or SQL parser can recognize and properly translate it. 
-
-Please find detailed examples of how to register and how to call each type of 
user-defined function 
-(`ScalarFunction`, `TableFunction`, and `AggregateFunction`) in the following 
sub-sessions.
+通过调用 `registerFunction()` 把函数注册到 `TableEnvironment` 的函数 catalog 
里面。当一个函数注册之后,它就在 `TableEnvironment` 的函数 catalog 里面了,这样 Table API 或者 SQL 
就可以识别并使用它。
 
+关于如何注册和使用每种类型的自定义函数(标量函数,表值函数,和聚合函数),更多示例可以看下面的部分。
 
 {% top %}
 
-Scalar Functions
+标量函数
 ----------------
 
-If a required scalar function is not contained in the built-in functions, it 
is possible to define custom, user-defined scalar functions for both the Table 
API and SQL. A user-defined scalar functions maps zero, one, or multiple scalar 
values to a new scalar value.
+要求自定义标量函数不能覆盖内置函数,Table API 和 SQL 都可以定义和使用自定义标量函数。自定义标量函数可以把0到多个标量值映射成1个标量值。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
-In order to define a scalar function, one has to extend the base class 
`ScalarFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a scalar function is determined by 
the evaluation method. An evaluation method must be declared publicly and named 
`eval`. The parameter types and return type of the evaluation method also 
determine the parameter and return types of the scalar function. Evaluation 
methods can also be overloaded by implementing multiple methods named `eval`. 
Evaluation methods can also support variable arguments, such as `eval(String... 
strs)`.
+想要实现自定义标量函数,你需要扩展 `org.apache.flink.table.functions` 里面的 `ScalarFunction` 
并且实现一个或者多个求值方法。 标量函数的行为取决于你写的求值方法。求值方法并须是 `public` 的,而且名字必须是 
`eval`。求值方法的参数类型以及返回值类型就决定了标量函数的参数类型和返回值类型。求值方法也可以实现为多个重载的 `eval` 
方法。求值方法也支持可变参数,例如 `eval(String... strs)`。

Review comment:
    This part reads a bit awkwardly:
    “求值方法也可以实现为多个重载的 `eval` 方法。” -> “可以通过实现多个名为 `eval` 的方法对求值方法进行重载” (i.e., "the evaluation method can be overloaded by implementing multiple methods named `eval`")
   
   Same for Scala
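
    A small illustration of the suggested wording, using a hypothetical `StringConcat` function (not part of the PR) that overloads `eval`:
    ```java
    import org.apache.flink.table.functions.ScalarFunction;

    public class StringConcat extends ScalarFunction {
        // several methods, all named eval: this is how the evaluation
        // method is overloaded
        public String eval(String a, String b) {
            return a + b;
        }

        public String eval(String a, Integer b) {
            return a + b;
        }

        // variable arguments are supported too, as in eval(String... strs)
        public String eval(String... strs) {
            return String.join("", strs);
        }
    }
    ```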

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -22,33 +22,31 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-User-defined functions are an important feature, because they significantly 
extend the expressiveness of queries.
+自定义函数是一个非常重要的功能,因为它极大的扩展了查询的表达能力。
 
 * This will be replaced by the TOC
 {:toc}
 
-Register User-Defined Functions
+注册自定义函数
 -------------------------------
-In most cases, a user-defined function must be registered before it can be 
used in an query. It is not necessary to register functions for the Scala Table 
API. 
+在大多数情况下,自定义函数在使用之前都需要注册。在 Scala Table API 中可以不用注册。
 
-Functions are registered at the `TableEnvironment` by calling a 
`registerFunction()` method. When a user-defined function is registered, it is 
inserted into the function catalog of the `TableEnvironment` such that the 
Table API or SQL parser can recognize and properly translate it. 
-
-Please find detailed examples of how to register and how to call each type of 
user-defined function 
-(`ScalarFunction`, `TableFunction`, and `AggregateFunction`) in the following 
sub-sessions.
+通过调用 `registerFunction()` 把函数注册到 `TableEnvironment` 的函数 catalog 
里面。当一个函数注册之后,它就在 `TableEnvironment` 的函数 catalog 里面了,这样 Table API 或者 SQL 
就可以识别并使用它。
 
+关于如何注册和使用每种类型的自定义函数(标量函数,表值函数,和聚合函数),更多示例可以看下面的部分。
 
 {% top %}
 
-Scalar Functions
+标量函数
 ----------------
 
-If a required scalar function is not contained in the built-in functions, it 
is possible to define custom, user-defined scalar functions for both the Table 
API and SQL. A user-defined scalar functions maps zero, one, or multiple scalar 
values to a new scalar value.
+要求自定义标量函数不能覆盖内置函数,Table API 和 SQL 都可以定义和使用自定义标量函数。自定义标量函数可以把0到多个标量值映射成1个标量值。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
-In order to define a scalar function, one has to extend the base class 
`ScalarFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a scalar function is determined by 
the evaluation method. An evaluation method must be declared publicly and named 
`eval`. The parameter types and return type of the evaluation method also 
determine the parameter and return types of the scalar function. Evaluation 
methods can also be overloaded by implementing multiple methods named `eval`. 
Evaluation methods can also support variable arguments, such as `eval(String... 
strs)`.
+想要实现自定义标量函数,你需要扩展 `org.apache.flink.table.functions` 里面的 `ScalarFunction` 
并且实现一个或者多个求值方法。 标量函数的行为取决于你写的求值方法。求值方法并须是 `public` 的,而且名字必须是 
`eval`。求值方法的参数类型以及返回值类型就决定了标量函数的参数类型和返回值类型。求值方法也可以实现为多个重载的 `eval` 
方法。求值方法也支持可变参数,例如 `eval(String... strs)`。
 
-The following example shows how to define your own hash code function, 
register it in the TableEnvironment, and call it in a query. Note that you can 
configure your scalar function via a constructor before it is registered:
+下面的示例展示了如何实现一个求哈希值的函数。先把它注册到 `TableEnvironment` 
里,然后在查询的时候就可以直接使用了。需要注意的是,你可以通过构造方法来配置你的标量函数:

Review comment:
       "before it is registered" 应该算一个关键信息点,最好翻译出来
   
   Same for Scala
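
    The point of "before it is registered": the function's behavior is fixed through its constructor, and only then is the configured instance registered. A sketch along the lines of the page's `HashCode` example:
    ```java
    import org.apache.flink.table.functions.ScalarFunction;

    public class HashCode extends ScalarFunction {
        private final int factor;

        // configuration happens here, before registration
        public HashCode(int factor) {
            this.factor = factor;
        }

        public int eval(String s) {
            return s.hashCode() * factor;
        }
    }

    // later, register the already-configured instance:
    // tableEnv.registerFunction("hashCode", new HashCode(10));
    ```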

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -65,19 +63,19 @@ public class HashCode extends ScalarFunction {
 
 BatchTableEnvironment tableEnv = BatchTableEnvironment.create(env);
 
-// register the function
+// 注册函数
 tableEnv.registerFunction("hashCode", new HashCode(10));
 
-// use the function in Java Table API
+// 在 Java Table API 中使用函数
 myTable.select("string, string.hashCode(), hashCode(string)");
 
-// use the function in SQL API
+// 在 SQL API 中使用函数
 tableEnv.sqlQuery("SELECT string, hashCode(string) FROM MyTable");
 {% endhighlight %}
 
-By default the result type of an evaluation method is determined by Flink's 
type extraction facilities. This is sufficient for basic types or simple POJOs 
but might be wrong for more complex, custom, or composite types. In these cases 
`TypeInformation` of the result type can be manually defined by overriding 
`ScalarFunction#getResultType()`.
+求值方法的返回值类型默认是由 Flink 的类型推导来决定的。类型推导可以推导出基本数据类型以及简单的 
POJO,但是对于更复杂的、自定义的、或者组合类型,可能会推导出错误的结果。在这种情况下,可以通过覆盖 
`ScalarFunction#getResultType()` 的方式来定义复杂类型。
 
-The following example shows an advanced example which takes the internal 
timestamp representation and also returns the internal timestamp representation 
as a long value. By overriding `ScalarFunction#getResultType()` we define that 
the returned long value should be interpreted as a `Types.TIMESTAMP` by the 
code generation.
+下面的示例展示了一个高级一点的自定义标量函数用法,它接收一个内部的时间戳参数,并且以 `long` 的形式返回一个内部的时间戳。通过覆盖 
`ScalarFunction#getResultType()`,我们定义了我们返回的 `long` 类型可以被解析为 `Types.TIMESTAMP` 
类型,并被代码生成所使用。

Review comment:
       ```suggestion
   下面的示例展示了一个高级一点的自定义标量函数用法,它接收一个内部的时间戳参数,并且以 `long` 的形式返回该内部的时间戳。通过覆盖 
`ScalarFunction#getResultType()`,我们定义了我们返回的 `long` 类型可以被解析为 `Types.TIMESTAMP` 
类型,并被代码生成所使用。
   ```
   Same for Scala

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -65,19 +63,19 @@ public class HashCode extends ScalarFunction {
 
 BatchTableEnvironment tableEnv = BatchTableEnvironment.create(env);
 
-// register the function
+// 注册函数
 tableEnv.registerFunction("hashCode", new HashCode(10));
 
-// use the function in Java Table API
+// 在 Java Table API 中使用函数
 myTable.select("string, string.hashCode(), hashCode(string)");
 
-// use the function in SQL API
+// 在 SQL API 中使用函数
 tableEnv.sqlQuery("SELECT string, hashCode(string) FROM MyTable");
 {% endhighlight %}
 
-By default the result type of an evaluation method is determined by Flink's 
type extraction facilities. This is sufficient for basic types or simple POJOs 
but might be wrong for more complex, custom, or composite types. In these cases 
`TypeInformation` of the result type can be manually defined by overriding 
`ScalarFunction#getResultType()`.
+求值方法的返回值类型默认是由 Flink 的类型推导来决定的。类型推导可以推导出基本数据类型以及简单的 
POJO,但是对于更复杂的、自定义的、或者组合类型,可能会推导出错误的结果。在这种情况下,可以通过覆盖 
`ScalarFunction#getResultType()` 的方式来定义复杂类型。
 
-The following example shows an advanced example which takes the internal 
timestamp representation and also returns the internal timestamp representation 
as a long value. By overriding `ScalarFunction#getResultType()` we define that 
the returned long value should be interpreted as a `Types.TIMESTAMP` by the 
code generation.
+下面的示例展示了一个高级一点的自定义标量函数用法,它接收一个内部的时间戳参数,并且以 `long` 的形式返回一个内部的时间戳。通过覆盖 
`ScalarFunction#getResultType()`,我们定义了我们返回的 `long` 类型可以被解析为 `Types.TIMESTAMP` 
类型,并被代码生成所使用。

Review comment:
       ```suggestion
   下面的示例展示了一个高级一点的自定义标量函数用法,它接收一个内部的时间戳参数,并且以 `long` 的形式返回一个内部的时间戳。通过覆盖 
`ScalarFunction#getResultType()`,我们定义了我们返回的 `long` 类型在代码生成时可以被解析为 
`Types.TIMESTAMP` 类型。
   ```
   Same for Scala

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -93,12 +91,12 @@ public static class TimestampModifier extends 
ScalarFunction {
 </div>
 
 <div data-lang="scala" markdown="1">
-In order to define a scalar function, one has to extend the base class 
`ScalarFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a scalar function is determined by 
the evaluation method. An evaluation method must be declared publicly and named 
`eval`. The parameter types and return type of the evaluation method also 
determine the parameter and return types of the scalar function. Evaluation 
methods can also be overloaded by implementing multiple methods named `eval`. 
Evaluation methods can also support variable arguments, such as `@varargs def 
eval(str: String*)`.
+想要实现自定义标量函数,你需要扩展 `org.apache.flink.table.functions` 里面的 `ScalarFunction` 
并且实现一个或者多个求值方法. 标量函数的行为取决于你写的求值方法。求值方法并须是 `public` 的,而且名字必须是 
`eval`。求值方法的参数类型以及返回值类型就决定了标量函数的参数类型和返回值类型。求值方法也可以实现多个重载的 `eval` 
方法。求值方法也支持可变参数,例如 `@varargs def eval(str: String*)`。

Review comment:
       ```suggestion
   想要实现自定义标量函数,你需要扩展 `org.apache.flink.table.functions` 里面的 `ScalarFunction` 
并且实现一个或者多个求值方法。标量函数的行为取决于你写的求值方法。求值方法并须是 `public` 的,而且名字必须是 
`eval`。求值方法的参数类型以及返回值类型就决定了标量函数的参数类型和返回值类型。求值方法也可以实现多个重载的 `eval` 
方法。求值方法也支持可变参数,例如 `@varargs def eval(str: String*)`。
   ```

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -134,9 +132,9 @@ object TimestampModifier extends ScalarFunction {
 </div>
 
 <div data-lang="python" markdown="1">
-In order to define a Python scalar function, one can extend the base class 
`ScalarFunction` in `pyflink.table.udf` and implement an evaluation method. The 
behavior of a Python scalar function is determined by the evaluation method 
which is named `eval`.
+要定义一个 Python 标量函数,你需要继承 `pyflink.table.udf` 下的 
`ScalarFunction`,并且实现一个求值函数。Python 标量函数的行为取决于你实现的求值函数,它的名字必须是 `eval`。

Review comment:
       ```suggestion
   要定义一个 Python 标量函数,你可以继承 `pyflink.table.udf` 下的 
`ScalarFunction`,并且实现一个求值函数。Python 标量函数的行为取决于你实现的求值函数,它的名字必须是 `eval`。
   ```
    The original says `can`; judging from the text below, extending `ScalarFunction` is not mandatory.
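
    A hedged sketch of why extending `ScalarFunction` is optional: a plain Python function or lambda can be wrapped with `udf()` directly (signatures as in the PyFlink docs of this era; worth double-checking against the exact version):
    ```python
    from pyflink.table import DataTypes
    from pyflink.table.udf import udf

    # no ScalarFunction subclass involved: wrap a lambda directly
    add_one = udf(lambda i: i + 1,
                  input_types=[DataTypes.BIGINT()],
                  result_type=DataTypes.BIGINT())

    # register it like any other function (table_env created elsewhere)
    table_env.register_function("add_one", add_one)
    ```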

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你实现的求值方法。求值方法必须被声明为 `public`,并且名字必须是 
`eval`。你也可以写多个 `eval` 方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。表值函数也可以支持变长参数,比如 
`eval(String... strs)`。表值函数的返回值类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出结果。

Review comment:
    “求值方法也可以实现为多个重载的 eval 方法。” -> “可以通过实现多个名为 eval 的方法对求值方法进行重载” (the same rewording suggested for the scalar function section above)

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。

Review comment:
    `scalar values` is a key piece of information; it is best translated explicitly
   ```suggestion
   
跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个标量。但是跟标量函数只能返回一个标量值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。
   ```

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你实现的求值方法。求值方法必须被声明为 `public`,并且名字必须是 
`eval`。你也可以写多个 `eval` 方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。表值函数也可以支持变长参数,比如 
`eval(String... strs)`。表值函数的返回值类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出结果。

Review comment:
    “求值方法通过 `collect(T)` 方法来输出结果。” -> “求值方法通过 `collect(T)` 方法来发送要输出的行” (i.e., "emit the output rows" rather than "output results")
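
    For context, the emit pattern this comment is about, mirroring the page's Java `Split` example: each `collect(T)` call sends one row to the output.
    ```java
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.table.functions.TableFunction;

    public class Split extends TableFunction<Tuple2<String, Integer>> {
        private final String separator;

        public Split(String separator) {
            this.separator = separator;
        }

        public void eval(String str) {
            for (String s : str.split(separator)) {
                // one output row per token: (word, length)
                collect(new Tuple2<>(s, s.length()));
            }
        }
    }
    ```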

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你实现的求值方法。求值方法必须被声明为 `public`,并且名字必须是 
`eval`。你也可以写多个 `eval` 方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。表值函数也可以支持变长参数,比如 
`eval(String... strs)`。表值函数的返回值类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出结果。
 
-In the Table API, a table function is used with `.joinLateral` or 
`.leftOuterJoinLateral`. The `joinLateral` operator (cross) joins each row from 
the outer table (table on the left of the operator) with all rows produced by 
the table-valued function (which is on the right side of the operator). The 
`leftOuterJoinLateral` operator joins each row from the outer table (table on 
the left of the operator) with all rows produced by the table-valued function 
(which is on the right side of the operator) and preserves outer rows for which 
the table function returns an empty table. In SQL use `LATERAL 
TABLE(<TableFunction>)` with CROSS JOIN and LEFT JOIN with an ON TRUE join 
condition (see examples below).
+在 Table API 中,表值函数是通过 `.joinLateral` 或者 `.leftOuterJoinLateral` 
来使用的。`joinLateral` 算子会把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行 
(cross)join。`leftOuterJoinLateral` 
算子也是把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行(cross)join,如果表值函数返回的是0行,就会保留外表的这一行。在 
SQL 里面使用 CROSS JOIN 或者 LEFT JOIN 加上 ON TRUE 作为 Join 的条件来跟表值函数 `LATERAL 
TABLE(<TableFunction>)` 进行Join(见下面的例子)。

Review comment:
    “在 SQL 里面使用 CROSS JOIN 或者 LEFT JOIN 加上 ON TRUE 作为 Join 的条件来跟表值函数 `LATERAL TABLE(<TableFunction>)` 进行Join(见下面的例子)。” ->
    "在 SQL 里面用 CROSS JOIN 或者以 ON TRUE 为条件的 LEFT JOIN 来配合 `LATERAL TABLE(<TableFunction>)` 的使用" (i.e., in SQL, use CROSS JOIN, or LEFT JOIN with an ON TRUE condition, together with `LATERAL TABLE(<TableFunction>)`)
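
    The two SQL forms in question, as they appear verbatim later on this page (with `split` already registered on `tableEnv`):
    ```java
    // CROSS JOIN a table function (equivalent to "join" in the Table API)
    tableEnv.sqlQuery("SELECT a, word, length FROM MyTable, LATERAL TABLE(split(a)) as T(word, length)");
    // LEFT JOIN a table function with an ON TRUE join condition
    tableEnv.sqlQuery("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL TABLE(split(a)) as T(word, length) ON TRUE");
    ```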

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你实现的求值方法。求值方法必须被声明为 `public`,并且名字必须是 
`eval`。你也可以写多个 `eval` 方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。表值函数也可以支持变长参数,比如 
`eval(String... strs)`。表值函数的返回值类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出结果。
 
-In the Table API, a table function is used with `.joinLateral` or 
`.leftOuterJoinLateral`. The `joinLateral` operator (cross) joins each row from 
the outer table (table on the left of the operator) with all rows produced by 
the table-valued function (which is on the right side of the operator). The 
`leftOuterJoinLateral` operator joins each row from the outer table (table on 
the left of the operator) with all rows produced by the table-valued function 
(which is on the right side of the operator) and preserves outer rows for which 
the table function returns an empty table. In SQL use `LATERAL 
TABLE(<TableFunction>)` with CROSS JOIN and LEFT JOIN with an ON TRUE join 
condition (see examples below).
+在 Table API 中,表值函数是通过 `.joinLateral` 或者 `.leftOuterJoinLateral` 
来使用的。`joinLateral` 算子会把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行 
(cross)join。`leftOuterJoinLateral` 
算子也是把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行(cross)join,如果表值函数返回的是0行,就会保留外表的这一行。在 
SQL 里面使用 CROSS JOIN 或者 LEFT JOIN 加上 ON TRUE 作为 Join 的条件来跟表值函数 `LATERAL 
TABLE(<TableFunction>)` 进行Join(见下面的例子)。

Review comment:
    “如果表值函数返回的是0行,就会保留外表的这一行” -> “并且如果表值函数返回 0 行也会保留外表的这一行” (i.e., "and the outer row is also preserved when the table function returns 0 rows")

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -199,67 +197,67 @@ public class Split extends TableFunction<Tuple2<String, 
Integer>> {
 BatchTableEnvironment tableEnv = BatchTableEnvironment.create(env);
 Table myTable = ...         // table schema: [a: String]
 
-// Register the function.
+// 注册表值函数。
 tableEnv.registerFunction("split", new Split("#"));
 
-// Use the table function in the Java Table API. "as" specifies the field 
names of the table.
+// 在 Java Table API 中使用表值函数。"as" 指明了表的字段名字
 myTable.joinLateral("split(a) as (word, length)")
     .select("a, word, length");
 myTable.leftOuterJoinLateral("split(a) as (word, length)")
     .select("a, word, length");
 
-// Use the table function in SQL with LATERAL and TABLE keywords.
-// CROSS JOIN a table function (equivalent to "join" in Table API).
+// 在 SQL 中用 LATERAL 和 TABLE 关键字来使用表值函数
+// CROSS JOIN a table function (等价于 Table API 中的 "join").
 tableEnv.sqlQuery("SELECT a, word, length FROM MyTable, LATERAL 
TABLE(split(a)) as T(word, length)");
-// LEFT JOIN a table function (equivalent to "leftOuterJoin" in Table API).
+// LEFT JOIN a table function (等价于 in Table API 中的 "leftOuterJoin").
 tableEnv.sqlQuery("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL 
TABLE(split(a)) as T(word, length) ON TRUE");
 {% endhighlight %}
 </div>
 
 <div data-lang="scala" markdown="1">

Review comment:
    Some of the comments on the Java content apply to Scala as well; I won't repeat them one by one.

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。

Review comment:
    A space is needed between digits and Chinese characters
   ```suggestion
   跟自定义标量函数一样,自定义表值函数的输入参数也可以是 0 到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含 1 
到多列。
   ```

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -199,67 +197,67 @@ public class Split extends TableFunction<Tuple2<String, 
Integer>> {
 BatchTableEnvironment tableEnv = BatchTableEnvironment.create(env);
 Table myTable = ...         // table schema: [a: String]
 
-// Register the function.
+// 注册表值函数。
 tableEnv.registerFunction("split", new Split("#"));
 
-// Use the table function in the Java Table API. "as" specifies the field 
names of the table.
+// 在 Java Table API 中使用表值函数。"as" 指明了表的字段名字
 myTable.joinLateral("split(a) as (word, length)")
     .select("a, word, length");
 myTable.leftOuterJoinLateral("split(a) as (word, length)")
     .select("a, word, length");
 
-// Use the table function in SQL with LATERAL and TABLE keywords.
-// CROSS JOIN a table function (equivalent to "join" in Table API).
+// 在 SQL 中用 LATERAL 和 TABLE 关键字来使用表值函数
+// CROSS JOIN a table function (等价于 Table API 中的 "join").
 tableEnv.sqlQuery("SELECT a, word, length FROM MyTable, LATERAL 
TABLE(split(a)) as T(word, length)");
-// LEFT JOIN a table function (equivalent to "leftOuterJoin" in Table API).
+// LEFT JOIN a table function (等价于 in Table API 中的 "leftOuterJoin").
 tableEnv.sqlQuery("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL 
TABLE(split(a)) as T(word, length) ON TRUE");
 {% endhighlight %}
 </div>
 
 <div data-lang="scala" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你的求值方法。求值方法必须声明为 `public`,并且名字必须是 
`eval`。可以实现多个 `eval`方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。求值方法也可以支持变长参数,例如 
`eval(String... strs)`。返回值的类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出数据。
 
-In the Table API, a table function is used with `.joinLateral` or 
`.leftOuterJoinLateral`. The `joinLateral` operator (cross) joins each row from 
the outer table (table on the left of the operator) with all rows produced by 
the table-valued function (which is on the right side of the operator). The 
`leftOuterJoinLateral` operator joins each row from the outer table (table on 
the left of the operator) with all rows produced by the table-valued function 
(which is on the right side of the operator) and preserves outer rows for which 
the table function returns an empty table. In SQL use `LATERAL 
TABLE(<TableFunction>)` with CROSS JOIN and LEFT JOIN with an ON TRUE join 
condition (see examples below).
+在 Table API 中,表值函数是通过 `.joinLateral` 或者 `.leftOuterJoinLateral` 
来使用的。`joinLateral` 算子会把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行 
(cross)join。`leftOuterJoinLateral` 
算子也是把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行(cross)join,如果表值函数返回的是0行,就会保留外表的这一行。在 
SQL 里面使用 CROSS JOIN 或者 LEFT JOIN 加上 ON TRUE 作为 Join 的条件来跟表值函数 `LATERAL 
TABLE(<TableFunction>)` 进行Join(见下面的例子)。
 
-The following example shows how to define table-valued function, register it 
in the TableEnvironment, and call it in a query. Note that you can configure 
your table function via a constructor before it is registered: 
+下面的例子展示了如何定义一个表值函数,如何在 TableEnvironment 
中注册表值函数,以及如何在查询中使用表值函数。你可以通过构造函数来配置你的表值函数:
 
 {% highlight scala %}
-// The generic type "(String, Int)" determines the schema of the returned 
table as (String, Integer).
+// 泛型参数的类型 "(String, Int)" 决定了返回类型是 (String, Integer)。
 class Split(separator: String) extends TableFunction[(String, Int)] {
   def eval(str: String): Unit = {
-    // use collect(...) to emit a row.
+    // 使用 collect(...) 来输出一行
     str.split(separator).foreach(x => collect((x, x.length)))
   }
 }
 
 val tableEnv = BatchTableEnvironment.create(env)
 val myTable = ...         // table schema: [a: String]
 
-// Use the table function in the Scala Table API (Note: No registration 
required in Scala Table API).
+// 在 Scala Table API 中使用表值函数(注意:在 Scala Table API 中不需要注册函数)
 val split = new Split("#")
-// "as" specifies the field names of the generated table.
+// "as" 指明了返回表的字段名字
 myTable.joinLateral(split('a) as ('word, 'length)).select('a, 'word, 'length)
 myTable.leftOuterJoinLateral(split('a) as ('word, 'length)).select('a, 'word, 
'length)
 
-// Register the table function to use it in SQL queries.
+// 注册表值函数,然后才能在 SQL 查询中使用
 tableEnv.registerFunction("split", new Split("#"))
 
-// Use the table function in SQL with LATERAL and TABLE keywords.
+// 在 SQL 中使用 LATERAL 和 TABLE 关键字类使用表值函数
 // CROSS JOIN a table function (equivalent to "join" in Table API)
 tableEnv.sqlQuery("SELECT a, word, length FROM MyTable, LATERAL 
TABLE(split(a)) as T(word, length)")
 // LEFT JOIN a table function (equivalent to "leftOuterJoin" in Table API)
 tableEnv.sqlQuery("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL 
TABLE(split(a)) as T(word, length) ON TRUE")
 {% endhighlight %}
-**IMPORTANT:** Do not implement TableFunction as a Scala object. Scala object 
is a singleton and will cause concurrency issues.
+**重要:**不要把表值函数实现成一个 Scala object。Scala object 是一个单例,会有并发的问题。
 </div>
 
 <div data-lang="python" markdown="1">

Review comment:
    Some of the comments on the Java content apply to Python as well; I won't repeat them one by one.

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你实现的求值方法。求值方法必须被声明为 `public`,并且名字必须是 
`eval`。你也可以写多个 `eval` 方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。表值函数也可以支持变长参数,比如 
`eval(String... strs)`。表值函数的返回值类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出结果。

Review comment:
    “表值函数的返回值类型” -> “表值函数返回的表的类型” (i.e., "the type of the table returned by the table function", not merely "the return type")

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你实现的求值方法。求值方法必须被声明为 `public`,并且名字必须是 
`eval`。你也可以写多个 `eval` 方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。表值函数也可以支持变长参数,比如 
`eval(String... strs)`。表值函数的返回值类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出结果。
 
-In the Table API, a table function is used with `.joinLateral` or 
`.leftOuterJoinLateral`. The `joinLateral` operator (cross) joins each row from 
the outer table (table on the left of the operator) with all rows produced by 
the table-valued function (which is on the right side of the operator). The 
`leftOuterJoinLateral` operator joins each row from the outer table (table on 
the left of the operator) with all rows produced by the table-valued function 
(which is on the right side of the operator) and preserves outer rows for which 
the table function returns an empty table. In SQL use `LATERAL 
TABLE(<TableFunction>)` with CROSS JOIN and LEFT JOIN with an ON TRUE join 
condition (see examples below).
+在 Table API 中,表值函数是通过 `.joinLateral` 或者 `.leftOuterJoinLateral` 
来使用的。`joinLateral` 算子会把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行 
(cross)join。`leftOuterJoinLateral` 
算子也是把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行(cross)join,如果表值函数返回的是0行,就会保留外表的这一行。在 
SQL 里面使用 CROSS JOIN 或者 LEFT JOIN 加上 ON TRUE 作为 Join 的条件来跟表值函数 `LATERAL 
TABLE(<TableFunction>)` 进行Join(见下面的例子)。
 
-The following example shows how to define table-valued function, register it 
in the TableEnvironment, and call it in a query. Note that you can configure 
your table function via a constructor before it is registered: 
+下面的例子展示了如何定义一个表值函数,如何在 TableEnvironment 
中注册表值函数,以及如何在查询中使用表值函数。你可以通过构造函数来配置你的表值函数:

Review comment:
       "before it is registered"

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -199,67 +197,67 @@ public class Split extends TableFunction<Tuple2<String, 
Integer>> {
 BatchTableEnvironment tableEnv = BatchTableEnvironment.create(env);
 Table myTable = ...         // table schema: [a: String]
 
-// Register the function.
+// 注册表值函数。
 tableEnv.registerFunction("split", new Split("#"));
 
-// Use the table function in the Java Table API. "as" specifies the field 
names of the table.
+// 在 Java Table API 中使用表值函数。"as" 指明了表的字段名字
 myTable.joinLateral("split(a) as (word, length)")
     .select("a, word, length");
 myTable.leftOuterJoinLateral("split(a) as (word, length)")
     .select("a, word, length");
 
-// Use the table function in SQL with LATERAL and TABLE keywords.
-// CROSS JOIN a table function (equivalent to "join" in Table API).
+// 在 SQL 中用 LATERAL 和 TABLE 关键字来使用表值函数
+// CROSS JOIN a table function (等价于 Table API 中的 "join").
 tableEnv.sqlQuery("SELECT a, word, length FROM MyTable, LATERAL 
TABLE(split(a)) as T(word, length)");
-// LEFT JOIN a table function (equivalent to "leftOuterJoin" in Table API).
+// LEFT JOIN a table function (等价于 in Table API 中的 "leftOuterJoin").
 tableEnv.sqlQuery("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL 
TABLE(split(a)) as T(word, length) ON TRUE");
 {% endhighlight %}
 </div>
 
 <div data-lang="scala" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你的求值方法。求值方法必须声明为 `public`,并且名字必须是 
`eval`。可以实现多个 `eval`方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。求值方法也可以支持变长参数,例如 
`eval(String... strs)`。返回值的类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出数据。
 
-In the Table API, a table function is used with `.joinLateral` or 
`.leftOuterJoinLateral`. The `joinLateral` operator (cross) joins each row from 
the outer table (table on the left of the operator) with all rows produced by 
the table-valued function (which is on the right side of the operator). The 
`leftOuterJoinLateral` operator joins each row from the outer table (table on 
the left of the operator) with all rows produced by the table-valued function 
(which is on the right side of the operator) and preserves outer rows for which 
the table function returns an empty table. In SQL use `LATERAL 
TABLE(<TableFunction>)` with CROSS JOIN and LEFT JOIN with an ON TRUE join 
condition (see examples below).
+在 Table API 中,表值函数是通过 `.joinLateral` 或者 `.leftOuterJoinLateral` 
来使用的。`joinLateral` 算子会把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行 
(cross)join。`leftOuterJoinLateral` 
算子也是把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行(cross)join,如果表值函数返回的是0行,就会保留外表的这一行。在 
SQL 里面使用 CROSS JOIN 或者 LEFT JOIN 加上 ON TRUE 作为 Join 的条件来跟表值函数 `LATERAL 
TABLE(<TableFunction>)` 进行Join(见下面的例子)。
 
-The following example shows how to define table-valued function, register it 
in the TableEnvironment, and call it in a query. Note that you can configure 
your table function via a constructor before it is registered: 
+下面的例子展示了如何定义一个表值函数,如何在 TableEnvironment 
中注册表值函数,以及如何在查询中使用表值函数。你可以通过构造函数来配置你的表值函数:
 
 {% highlight scala %}
-// The generic type "(String, Int)" determines the schema of the returned 
table as (String, Integer).
+// 泛型参数的类型 "(String, Int)" 决定了返回类型是 (String, Integer)。
 class Split(separator: String) extends TableFunction[(String, Int)] {
   def eval(str: String): Unit = {
-    // use collect(...) to emit a row.
+    // 使用 collect(...) 来输出一行
     str.split(separator).foreach(x => collect((x, x.length)))
   }
 }
 
 val tableEnv = BatchTableEnvironment.create(env)
 val myTable = ...         // table schema: [a: String]
 
-// Use the table function in the Scala Table API (Note: No registration 
required in Scala Table API).
+// 在 Scala Table API 中使用表值函数(注意:在 Scala Table API 中不需要注册函数)
 val split = new Split("#")
-// "as" specifies the field names of the generated table.
+// "as" 指明了返回表的字段名字
 myTable.joinLateral(split('a) as ('word, 'length)).select('a, 'word, 'length)
 myTable.leftOuterJoinLateral(split('a) as ('word, 'length)).select('a, 'word, 
'length)
 
-// Register the table function to use it in SQL queries.
+// 注册表值函数,然后才能在 SQL 查询中使用
 tableEnv.registerFunction("split", new Split("#"))
 
-// Use the table function in SQL with LATERAL and TABLE keywords.
+// 在 SQL 中使用 LATERAL 和 TABLE 关键字类使用表值函数
 // CROSS JOIN a table function (equivalent to "join" in Table API)
 tableEnv.sqlQuery("SELECT a, word, length FROM MyTable, LATERAL 
TABLE(split(a)) as T(word, length)")
 // LEFT JOIN a table function (equivalent to "leftOuterJoin" in Table API)
 tableEnv.sqlQuery("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL 
TABLE(split(a)) as T(word, length) ON TRUE")
 {% endhighlight %}
-**IMPORTANT:** Do not implement TableFunction as a Scala object. Scala object 
is a singleton and will cause concurrency issues.
+**重要:**不要把表值函数实现成一个 Scala object。Scala object 是一个单例,会有并发的问题。
 </div>
 
 <div data-lang="python" markdown="1">
-In order to define a Python table function, one can extend the base class 
`TableFunction` in `pyflink.table.udtf` and Implement an evaluation method. The 
behavior of a Python table function is determined by the evaluation method 
which is named eval.
+要实现一个 Python 表值函数,你需要扩展 `pyflink.table.udtf` 下的 
`TableFunction`,并且实现一个求值方法。Python 表值函数的行为取决于你实现的求值方法,它的名字必须是 `eval`。

Review comment:
       ```suggestion
   要实现一个 Python 表值函数,你可以扩展 `pyflink.table.udtf` 下的 
`TableFunction`,并且实现一个求值方法。Python 表值函数的行为取决于你实现的求值方法,它的名字必须是 `eval`。
   ```
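
    To illustrate why `can` (rather than `need`) fits here: as with scalar functions, a plain generator function can be wrapped instead of extending `TableFunction`. A hedged sketch, assuming the `udtf` helper available in PyFlink at the time (signature worth verifying against the exact release):
    ```python
    from pyflink.table import DataTypes
    from pyflink.table.udf import udtf

    # a plain generator function, no TableFunction subclass; each yielded
    # tuple becomes one output row (word, length)
    @udtf(input_types=DataTypes.STRING(),
          result_types=[DataTypes.STRING(), DataTypes.INT()])
    def split(s):
        for part in s.split("#"):
            yield part, len(part)
    ```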

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你实现的求值方法。求值方法必须被声明为 `public`,并且名字必须是 
`eval`。你也可以写多个 `eval` 方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。表值函数也可以支持变长参数,比如 
`eval(String... strs)`。表值函数的返回值类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出结果。
 
-In the Table API, a table function is used with `.joinLateral` or 
`.leftOuterJoinLateral`. The `joinLateral` operator (cross) joins each row from 
the outer table (table on the left of the operator) with all rows produced by 
the table-valued function (which is on the right side of the operator). The 
`leftOuterJoinLateral` operator joins each row from the outer table (table on 
the left of the operator) with all rows produced by the table-valued function 
(which is on the right side of the operator) and preserves outer rows for which 
the table function returns an empty table. In SQL use `LATERAL 
TABLE(<TableFunction>)` with CROSS JOIN and LEFT JOIN with an ON TRUE join 
condition (see examples below).
+在 Table API 中,表值函数是通过 `.joinLateral` 或者 `.leftOuterJoinLateral` 
来使用的。`joinLateral` 算子会把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行 
(cross)join。`leftOuterJoinLateral` 
算子也是把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行(cross)join,如果表值函数返回的是0行,就会保留外表的这一行。在 
SQL 里面使用 CROSS JOIN 或者 LEFT JOIN 加上 ON TRUE 作为 Join 的条件来跟表值函数 `LATERAL 
TABLE(<TableFunction>)` 进行Join(见下面的例子)。

Review comment:
       ```suggestion
   在 Table API 中,表值函数是通过 `.joinLateral` 或者 `.leftOuterJoinLateral` 
来使用的。`joinLateral` 算子会把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行 
(cross)join。`leftOuterJoinLateral` 
算子也是把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行(cross)join,如果表值函数返回的是 0 行,就会保留外表的这一行。在 
SQL 里面使用 CROSS JOIN 或者 LEFT JOIN 加上 ON TRUE 作为 Join 的条件来跟表值函数 `LATERAL 
TABLE(<TableFunction>)` 进行Join(见下面的例子)。
   ```
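
   To make the join semantics in this hunk concrete, a small Java sketch (assuming a table function already registered as `split`, plus existing `tableEnv` and `myTable`): `joinLateral` drops outer rows for which the function emits no rows, while `leftOuterJoinLateral` keeps them.

{% highlight java %}
import org.apache.flink.table.api.Table;

// Table API: cross join vs. left outer join against the table function
Table crossJoined = myTable
    .joinLateral("split(a) as (word, length)")
    .select("a, word, length");
Table outerJoined = myTable
    .leftOuterJoinLateral("split(a) as (word, length)")
    .select("a, word, length");

// SQL: CROSS JOIN and LEFT JOIN ... ON TRUE against LATERAL TABLE
tableEnv.sqlQuery(
    "SELECT a, word, length FROM MyTable, LATERAL TABLE(split(a)) AS T(word, length)");
tableEnv.sqlQuery(
    "SELECT a, word, length FROM MyTable " +
    "LEFT JOIN LATERAL TABLE(split(a)) AS T(word, length) ON TRUE");
{% endhighlight %}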

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -271,32 +269,32 @@ env = 
StreamExecutionEnvironment.get_execution_environment()
 table_env = StreamTableEnvironment.create(env)
 my_table = ...  # type: Table, table schema: [a: String]
 
-# register the Python Table Function
+# 注册 Python 表值函数
 table_env.register_function("split", udtf(Split(), DataTypes.STRING(), 
[DataTypes.STRING(), DataTypes.INT()]))
 
-# use the Python Table Function in Python Table API
+# 在 Python Table API 中使用 Python 表值函数
 my_table.join_lateral("split(a) as (word, length)")
 my_table.left_outer_join_lateral("split(a) as (word, length)")
 
-# use the Python Table function in SQL API
+# 在 SQL API 中使用 Python 表值函数
 table_env.sql_query("SELECT a, word, length FROM MyTable, LATERAL 
TABLE(split(a)) as T(word, length)")
 table_env.sql_query("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL 
TABLE(split(a)) as T(word, length) ON TRUE")
 
 {% endhighlight %}
 
-There are many ways to define a Python table function besides extending the 
base class `TableFunction`.
-Please refer to the [Python Table Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#table-functions) documentation for more 
details.
+除了继承 `TableFunction`,还有很多其它方法可以定义 Python 表值函数。
+更多信息,参考 [Python 表值函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#table-functions)文档。
 
 </div>
 </div>
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
-Please note that POJO types do not have a deterministic field order. 
Therefore, you cannot rename the fields of POJO returned by a table function 
using `AS`.
+需要注意的是 POJO 类型没有确定的字段顺序。所以,你不可以用 `AS` 来重命名返回的 POJO 的字段。
 
-By default the result type of a `TableFunction` is determined by Flink’s 
automatic type extraction facilities. This works well for basic types and 
simple POJOs but might be wrong for more complex, custom, or composite types. 
In such a case, the type of the result can be manually specified by overriding 
`TableFunction#getResultType()` which returns its `TypeInformation`.
+`TableFunction` 的返回类型默认是用 Flink 自动类型推导来决定的。对于基础类型和简单的 POJO 
类型推导是没有问题的,但是对于更复杂的、自定义的、以及组合的类型可能会推导错误。如果有这种情况,可以通过重写(override) 
`TableFunction#getResultType()` 并且返回 `TypeInformation` 来指定返回类型。

Review comment:
       The class name `TypeInformation` is a key piece of information here and is best kept as-is in the translation.
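
   Since this comment concerns `TypeInformation`, here is the shape of the Java-side override being discussed, a sketch along the lines of the doc's `CustomTypeSplit` example:

{% highlight java %}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

public class CustomTypeSplit extends TableFunction<Row> {
    public void eval(String str) {
        for (String s : str.split(" ")) {
            Row row = new Row(2);
            row.setField(0, s);
            row.setField(1, s.length());
            collect(row);
        }
    }

    @Override
    public TypeInformation<Row> getResultType() {
        // explicitly declare the returned table type as Row(String, Integer)
        return Types.ROW(Types.STRING, Types.INT);
    }
}
{% endhighlight %}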

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你实现的求值方法。求值方法必须被声明为 `public`,并且名字必须是 
`eval`。你也可以写多个 `eval` 方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。表值函数也可以支持变长参数,比如 
`eval(String... strs)`。表值函数的返回值类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出结果。
 
-In the Table API, a table function is used with `.joinLateral` or 
`.leftOuterJoinLateral`. The `joinLateral` operator (cross) joins each row from 
the outer table (table on the left of the operator) with all rows produced by 
the table-valued function (which is on the right side of the operator). The 
`leftOuterJoinLateral` operator joins each row from the outer table (table on 
the left of the operator) with all rows produced by the table-valued function 
(which is on the right side of the operator) and preserves outer rows for which 
the table function returns an empty table. In SQL use `LATERAL 
TABLE(<TableFunction>)` with CROSS JOIN and LEFT JOIN with an ON TRUE join 
condition (see examples below).
+在 Table API 中,表值函数是通过 `.joinLateral` 或者 `.leftOuterJoinLateral` 
来使用的。`joinLateral` 算子会把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行 
(cross)join。`leftOuterJoinLateral` 
算子也是把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行(cross)join,如果表值函数返回的是0行,就会保留外表的这一行。在 
SQL 里面使用 CROSS JOIN 或者 LEFT JOIN 加上 ON TRUE 作为 Join 的条件来跟表值函数 `LATERAL 
TABLE(<TableFunction>)` 进行Join(见下面的例子)。

Review comment:
       “跟表值函数(算子右侧的表)返回的所有行” -> “跟表值函数返回的所有行(位于算子右侧)”

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -271,32 +269,32 @@ env = 
StreamExecutionEnvironment.get_execution_environment()
 table_env = StreamTableEnvironment.create(env)
 my_table = ...  # type: Table, table schema: [a: String]
 
-# register the Python Table Function
+# 注册 Python 表值函数
 table_env.register_function("split", udtf(Split(), DataTypes.STRING(), 
[DataTypes.STRING(), DataTypes.INT()]))
 
-# use the Python Table Function in Python Table API
+# 在 Python Table API 中使用 Python 表值函数
 my_table.join_lateral("split(a) as (word, length)")
 my_table.left_outer_join_lateral("split(a) as (word, length)")
 
-# use the Python Table function in SQL API
+# 在 SQL API 中使用 Python 表值函数
 table_env.sql_query("SELECT a, word, length FROM MyTable, LATERAL 
TABLE(split(a)) as T(word, length)")
 table_env.sql_query("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL 
TABLE(split(a)) as T(word, length) ON TRUE")
 
 {% endhighlight %}
 
-There are many ways to define a Python table function besides extending the 
base class `TableFunction`.
-Please refer to the [Python Table Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#table-functions) documentation for more 
details.
+除了继承 `TableFunction`,还有很多其它方法可以定义 Python 表值函数。
+更多信息,参考 [Python 表值函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#table-functions)文档。
 
 </div>
 </div>
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
-Please note that POJO types do not have a deterministic field order. 
Therefore, you cannot rename the fields of POJO returned by a table function 
using `AS`.
+需要注意的是 POJO 类型没有确定的字段顺序。所以,你不可以用 `AS` 来重命名返回的 POJO 的字段。
 
-By default the result type of a `TableFunction` is determined by Flink’s 
automatic type extraction facilities. This works well for basic types and 
simple POJOs but might be wrong for more complex, custom, or composite types. 
In such a case, the type of the result can be manually specified by overriding 
`TableFunction#getResultType()` which returns its `TypeInformation`.
+`TableFunction` 的返回类型默认是用 Flink 自动类型推导来决定的。对于基础类型和简单的 POJO 
类型推导是没有问题的,但是对于更复杂的、自定义的、以及组合的类型可能会推导错误。如果有这种情况,可以通过重写(override) 
`TableFunction#getResultType()` 并且返回 `TypeInformation` 来指定返回类型。
 
-The following example shows an example of a `TableFunction` that returns a 
`Row` type which requires explicit type information. We define that the 
returned table type should be `RowTypeInfo(String, Integer)` by overriding 
`TableFunction#getResultType()`.
+下面的例子展示了 `TableFunction` 返回了一个 `Row` 类型,需要显示指定返回类型。我们通过重写 
`TableFunction#getResultType` 来返回 `RowTypeInfo` 作为返回类型。

Review comment:
       ```suggestion
   下面的例子展示了 `TableFunction` 返回了一个 `Row` 类型,需要显示指定返回类型。我们通过重写 
`TableFunction#getResultType` 来指定 `RowTypeInfo(String, Integer)` 作为返回的表的类型。
   ```
   Same for scala

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -346,40 +344,38 @@ class CustomTypeSplit extends TableFunction[Row] {
 {% top %}
 
 
-Aggregation Functions
+聚合函数
 ---------------------
 
-User-Defined Aggregate Functions (UDAGGs) aggregate a table (one or more rows 
with one or more attributes) to a scalar value. 
+自定义聚合函数(UDAGG)是把一个表(一行或者多行,每行可以有一列或者多列)聚合成一个标量值。
 
 <center>
 <img alt="UDAGG mechanism" src="{{ site.baseurl }}/fig/udagg-mechanism.png" 
width="80%">
 </center>
 
-The above figure shows an example of an aggregation. Assume you have a table 
that contains data about beverages. The table consists of three columns, `id`, 
`name` and `price` and 5 rows. Imagine you need to find the highest price of 
all beverages in the table, i.e., perform a `max()` aggregation. You would need 
to check each of the 5 rows and the result would be a single numeric value.
+上面的图片展示了一个聚合的例子。假设你有一个关于饮料的表。表里面有三个字段,分别是 
`id`、`name`、`price`,表里有5行数据。假设你需要找到所有饮料里最贵的饮料的价格,执行一个 `max()` 
聚合。你需要遍历所有5行数据,而结果就只有一个数值。

Review comment:
       ```suggestion
   上面的图片展示了一个聚合的例子。假设你有一个关于饮料的表。表里面有三个字段,分别是 `id`、`name`、`price`,表里有 5 
行数据。假设你需要找到所有饮料里最贵的饮料的价格,执行一个 `max()` 聚合。你需要遍历所有 5 行数据,而结果就只有一个数值。
   ```
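
   The `max()` aggregation described by the figure corresponds to a one-line query; `beverages` and `price` follow the figure's description, and `tableEnv` is assumed to exist:

{% highlight java %}
import org.apache.flink.table.api.Table;

// the figure's max() aggregation: 5 input rows, a single numeric result
Table highestPrice = tableEnv.sqlQuery("SELECT max(price) AS highestPrice FROM beverages");
{% endhighlight %}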

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -148,39 +146,39 @@ class HashCode(ScalarFunction):
 
 table_env = BatchTableEnvironment.create(env)
 
-# register the Python function
+# 注册 Python 函数
 table_env.register_function("hash_code", udf(HashCode(), DataTypes.BIGINT(), 
DataTypes.BIGINT()))
 
-# use the function in Python Table API
+# 在 Python Table API 中使用函数
 my_table.select("string, bigint, string.hash_code(), hash_code(string)")
 
-# use the function in SQL API
+# 在 SQL API 中使用函数
 table_env.sql_query("SELECT string, bigint, hash_code(bigint) FROM MyTable")
 {% endhighlight %}
 
-There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`.
-Please refer to the [Python Scalar Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) documentation for 
more details.
+除了继承 `ScalarFunction`,还有很多方法可以定义 Python 标量函数。
+更多细节,可以参考 [Python 标量函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#scalar-functions) 文档。
 </div>
 </div>
 
 {% top %}
 
-Table Functions
+表值函数
 ---------------
 
-Similar to a user-defined scalar function, a user-defined table function takes 
zero, one, or multiple scalar values as input parameters. However in contrast 
to a scalar function, it can return an arbitrary number of rows as output 
instead of a single value. The returned rows may consist of one or more 
columns. 
+跟自定义标量函数一样,自定义表值函数的输入参数也可以是0到多个。但是跟标量函数只能返回一个值不同的是,它可以返回任意多行。返回的每一行可以包含1到多列。
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-In order to define a table function one has to extend the base class 
`TableFunction` in `org.apache.flink.table.functions` and implement (one or 
more) evaluation methods. The behavior of a table function is determined by its 
evaluation methods. An evaluation method must be declared `public` and named 
`eval`. The `TableFunction` can be overloaded by implementing multiple methods 
named `eval`. The parameter types of the evaluation methods determine all valid 
parameters of the table function. Evaluation methods can also support variable 
arguments, such as `eval(String... strs)`. The type of the returned table is 
determined by the generic type of `TableFunction`. Evaluation methods emit 
output rows using the protected `collect(T)` method.
+要定义一个表值函数,你需要扩展 `org.apache.flink.table.functions` 下的 
`TableFunction`,并且实现(一个或者多个)求值方法。表值函数的行为取决于你实现的求值方法。求值方法必须被声明为 `public`,并且名字必须是 
`eval`。你也可以写多个 `eval` 方法来重载表值函数。求值方法的参数类型决定了表值函数的参数类型。表值函数也可以支持变长参数,比如 
`eval(String... strs)`。表值函数的返回值类型取决于 `TableFunction` 的泛型参数。求值方法通过 `collect(T)` 
方法来输出结果。
 
-In the Table API, a table function is used with `.joinLateral` or 
`.leftOuterJoinLateral`. The `joinLateral` operator (cross) joins each row from 
the outer table (table on the left of the operator) with all rows produced by 
the table-valued function (which is on the right side of the operator). The 
`leftOuterJoinLateral` operator joins each row from the outer table (table on 
the left of the operator) with all rows produced by the table-valued function 
(which is on the right side of the operator) and preserves outer rows for which 
the table function returns an empty table. In SQL use `LATERAL 
TABLE(<TableFunction>)` with CROSS JOIN and LEFT JOIN with an ON TRUE join 
condition (see examples below).
+在 Table API 中,表值函数是通过 `.joinLateral` 或者 `.leftOuterJoinLateral` 
来使用的。`joinLateral` 算子会把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行 
(cross)join。`leftOuterJoinLateral` 
算子也是把外表(算子左侧的表)的每一行跟表值函数(算子右侧的表)返回的所有行进行(cross)join,如果表值函数返回的是0行,就会保留外表的这一行。在 
SQL 里面使用 CROSS JOIN 或者 LEFT JOIN 加上 ON TRUE 作为 Join 的条件来跟表值函数 `LATERAL 
TABLE(<TableFunction>)` 进行Join(见下面的例子)。
 
-The following example shows how to define table-valued function, register it 
in the TableEnvironment, and call it in a query. Note that you can configure 
your table function via a constructor before it is registered: 
+下面的例子展示了如何定义一个表值函数,如何在 TableEnvironment 
中注册表值函数,以及如何在查询中使用表值函数。你可以通过构造函数来配置你的表值函数:
 
 {% highlight java %}
-// The generic type "Tuple2<String, Integer>" determines the schema of the 
returned table as (String, Integer).
+// 泛型参数的类型 "Tuple2<String, Integer>" 决定了返回类型是(String,Integer)。

Review comment:
       ```suggestion
   // 泛型参数的类型 "Tuple2<String, Integer>" 决定了返回的表的 schema 是(String,Integer)。
   ```

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -271,32 +269,32 @@ env = 
StreamExecutionEnvironment.get_execution_environment()
 table_env = StreamTableEnvironment.create(env)
 my_table = ...  # type: Table, table schema: [a: String]
 
-# register the Python Table Function
+# 注册 Python 表值函数
 table_env.register_function("split", udtf(Split(), DataTypes.STRING(), 
[DataTypes.STRING(), DataTypes.INT()]))
 
-# use the Python Table Function in Python Table API
+# 在 Python Table API 中使用 Python 表值函数
 my_table.join_lateral("split(a) as (word, length)")
 my_table.left_outer_join_lateral("split(a) as (word, length)")
 
-# use the Python Table function in SQL API
+# 在 SQL API 中使用 Python 表值函数
 table_env.sql_query("SELECT a, word, length FROM MyTable, LATERAL 
TABLE(split(a)) as T(word, length)")
 table_env.sql_query("SELECT a, word, length FROM MyTable LEFT JOIN LATERAL 
TABLE(split(a)) as T(word, length) ON TRUE")
 
 {% endhighlight %}
 
-There are many ways to define a Python table function besides extending the 
base class `TableFunction`.
-Please refer to the [Python Table Function]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#table-functions) documentation for more 
details.
+除了继承 `TableFunction`,还有很多其它方法可以定义 Python 表值函数。
+更多信息,参考 [Python 表值函数]({{ site.baseurl 
}}/zh/dev/table/python/python_udfs.html#table-functions)文档。
 
 </div>
 </div>
 
 <div class="codetabs" markdown="1">

Review comment:
       unrelated:
   
   It is weird that we have first a group of tabs (java/scala/python), then another group of tabs (java/scala), without any common content in between. The two groups should be merged, by appending the java/scala content of the second group to that of the first group.
   
   If it is not too much trouble, we can include a hotfix commit in this PR to fix this for both the original English doc and this translated one. Alternatively, we can fix this as a separate issue after merging this one. WDYT?

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -346,40 +344,38 @@ class CustomTypeSplit extends TableFunction[Row] {
 {% top %}
 
 
-Aggregation Functions
+聚合函数
 ---------------------
 
-User-Defined Aggregate Functions (UDAGGs) aggregate a table (one or more rows 
with one or more attributes) to a scalar value. 
+自定义聚合函数(UDAGG)是把一个表(一行或者多行,每行可以有一列或者多列)聚合成一个标量值。
 
 <center>
 <img alt="UDAGG mechanism" src="{{ site.baseurl }}/fig/udagg-mechanism.png" 
width="80%">
 </center>
 
-The above figure shows an example of an aggregation. Assume you have a table 
that contains data about beverages. The table consists of three columns, `id`, 
`name` and `price` and 5 rows. Imagine you need to find the highest price of 
all beverages in the table, i.e., perform a `max()` aggregation. You would need 
to check each of the 5 rows and the result would be a single numeric value.
+上面的图片展示了一个聚合的例子。假设你有一个关于饮料的表。表里面有三个字段,分别是 
`id`、`name`、`price`,表里有5行数据。假设你需要找到所有饮料里最贵的饮料的价格,执行一个 `max()` 
聚合。你需要遍历所有5行数据,而结果就只有一个数值。

Review comment:
       ```suggestion
   上面的图片展示了一个聚合的例子。假设你有一个关于饮料的表。表里面有三个字段,分别是 
`id`、`name`、`price`,表里有5行数据。假设你需要找到所有饮料里最贵的饮料的价格,即执行一个 `max()` 
聚合。你需要遍历所有5行数据,而结果就只有一个数值。
   ```

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -346,40 +344,38 @@ class CustomTypeSplit extends TableFunction[Row] {
 {% top %}
 
 
-Aggregation Functions
+聚合函数
 ---------------------
 
-User-Defined Aggregate Functions (UDAGGs) aggregate a table (one or more rows 
with one or more attributes) to a scalar value. 
+自定义聚合函数(UDAGG)是把一个表(一行或者多行,每行可以有一列或者多列)聚合成一个标量值。
 
 <center>
 <img alt="UDAGG mechanism" src="{{ site.baseurl }}/fig/udagg-mechanism.png" 
width="80%">
 </center>
 
-The above figure shows an example of an aggregation. Assume you have a table 
that contains data about beverages. The table consists of three columns, `id`, 
`name` and `price` and 5 rows. Imagine you need to find the highest price of 
all beverages in the table, i.e., perform a `max()` aggregation. You would need 
to check each of the 5 rows and the result would be a single numeric value.
+上面的图片展示了一个聚合的例子。假设你有一个关于饮料的表。表里面有三个字段,分别是 
`id`、`name`、`price`,表里有5行数据。假设你需要找到所有饮料里最贵的饮料的价格,执行一个 `max()` 
聚合。你需要遍历所有5行数据,而结果就只有一个数值。
 
-User-defined aggregation functions are implemented by extending the 
`AggregateFunction` class. An `AggregateFunction` works as follows. First, it 
needs an `accumulator`, which is the data structure that holds the intermediate 
result of the aggregation. An empty accumulator is created by calling the 
`createAccumulator()` method of the `AggregateFunction`. Subsequently, the 
`accumulate()` method of the function is called for each input row to update 
the accumulator. Once all rows have been processed, the `getValue()` method of 
the function is called to compute and return the final result. 
+自定义聚合函数是通过扩展 `AggregateFunction` 来实现的。`AggregateFunction` 的工作过程如下。首先,它需要一个 
`accumulator`,它是一个数据结构,存储了聚合的中间结果。通过调用 `AggregateFunction` 的 
`createAccumulator()` 方法创建一个空的 accumulator。接下来,对于每一行数据,会调用 `accumulate()` 方法来更新 
accumulator。当所有的数据都处理完了之后,通过调用 `getValue` 方法来计算和返回最终的结果。
 
-**The following methods are mandatory for each `AggregateFunction`:**
+**下面几个方法是每个 `AggregateFunction` 必须要实现的:**
 
 - `createAccumulator()`
 - `accumulate()` 
 - `getValue()`
 
-Flink’s type extraction facilities can fail to identify complex data types, 
e.g., if they are not basic types or simple POJOs. So similar to 
`ScalarFunction` and `TableFunction`, `AggregateFunction` provides methods to 
specify the `TypeInformation` of the result type (through 
- `AggregateFunction#getResultType()`) and the type of the accumulator (through 
`AggregateFunction#getAccumulatorType()`).
+Flink 的类型推导不能处理复杂的数据类型,只能处理基础类型或者是简单的 POJO 类型。所以跟 `ScalarFunction` 和 
`TableFunction` 一样,`AggregateFunction` 也提供了 `AggregateFunction#getResultType()` 
和 `AggregateFunction#getAccumulatorType()` 来分别指定返回值类型和 accumulator 
的类型,两个函数的返回值类型也都是 `TypeInformation`。

Review comment:
       “Flink 的类型推导不能处理复杂的数据类型,只能处理基础类型或者是简单的 POJO 类型。”
   This wording is more absolute than the original: the original only says that types other than basic types and simple POJOs may be extracted incorrectly, not that they cannot be inferred at all.
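
   For context on the three mandatory methods plus the optional `merge()`, a compact Java sketch loosely following the doc's `WeightedAvg` example (field and parameter names here are assumptions):

{% highlight java %}
import org.apache.flink.table.functions.AggregateFunction;

// accumulator: the data structure holding the intermediate result
public static class WeightedAvgAccum {
    public long sum = 0;
    public int count = 0;
}

public static class WeightedAvg extends AggregateFunction<Long, WeightedAvgAccum> {

    @Override
    public WeightedAvgAccum createAccumulator() {
        return new WeightedAvgAccum();  // an empty accumulator
    }

    // called once per input row to update the accumulator
    public void accumulate(WeightedAvgAccum acc, long iValue, int iWeight) {
        acc.sum += iValue * iWeight;
        acc.count += iWeight;
    }

    @Override
    public Long getValue(WeightedAvgAccum acc) {
        // computes the final result once all rows are processed
        return acc.count == 0 ? null : acc.sum / acc.count;
    }

    // optional contract method; mandatory e.g. for session group windows,
    // where partial accumulators must be merged
    public void merge(WeightedAvgAccum acc, Iterable<WeightedAvgAccum> it) {
        for (WeightedAvgAccum other : it) {
            acc.sum += other.sum;
            acc.count += other.count;
        }
    }
}
{% endhighlight %}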

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -1126,15 +1121,15 @@ abstract class TableAggregateFunction[T, ACC] extends 
UserDefinedAggregateFuncti
 </div>
 
 
-The following example shows how to
+下面的例子展示了如何
 
-- define a `TableAggregateFunction` that calculates the top 2 values on a 
given column, 
-- register the function in the `TableEnvironment`, and 
-- use the function in a Table API query(TableAggregateFunction is only 
supported by Table API).  
+- 定义一个 `TableAggregateFunction` 来计算最大的2个值,

Review comment:
       "of a given column"

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -1126,15 +1121,15 @@ abstract class TableAggregateFunction[T, ACC] extends 
UserDefinedAggregateFuncti
 </div>
 
 
-The following example shows how to
+下面的例子展示了如何
 
-- define a `TableAggregateFunction` that calculates the top 2 values on a 
given column, 
-- register the function in the `TableEnvironment`, and 
-- use the function in a Table API query(TableAggregateFunction is only 
supported by Table API).  
+- 定义一个 `TableAggregateFunction` 来计算最大的2个值,
+- 在 `TableEnvironment` 中注册函数,
+- 在 Table API 查询中使用函数(当前只在 Table API 中支持 TableAggregateFunction)。
 
-To calculate the top 2 values, the accumulator needs to store the biggest 2 
values of all the data that has been accumulated. In our example we define a 
class `Top2Accum` to be the accumulator. Accumulators are automatically 
backup-ed by Flink's checkpointing mechanism and restored in case of a failure 
to ensure exactly-once semantics.
+为了计算最大的2个值,accumulator需要保存当前看到的最大的2个值。在我们的例子中,我们定义了类 `Top2Accum` 来作为 
accumulator。Flink 的 checkpoint 机制会自动保存 accumulator,并且在失败时进行恢复,来保证精确一次的语义。

Review comment:
       ```suggestion
   为了计算最大的 2 个值,accumulator 需要保存当前看到的最大的 2 个值。在我们的例子中,我们定义了类 `Top2Accum` 来作为 
accumulator。Flink 的 checkpoint 机制会自动保存 accumulator,并且在失败时进行恢复,来保证精确一次的语义。
   ```

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -1126,15 +1121,15 @@ abstract class TableAggregateFunction[T, ACC] extends 
UserDefinedAggregateFuncti
 </div>
 
 
-The following example shows how to
+下面的例子展示了如何
 
-- define a `TableAggregateFunction` that calculates the top 2 values on a 
given column, 
-- register the function in the `TableEnvironment`, and 
-- use the function in a Table API query(TableAggregateFunction is only 
supported by Table API).  
+- 定义一个 `TableAggregateFunction` 来计算最大的2个值,

Review comment:
       ```suggestion
   - 定义一个 `TableAggregateFunction` 来计算最大的 2 个值,
   ```

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -346,40 +344,38 @@ class CustomTypeSplit extends TableFunction[Row] {
 {% top %}
 
 
-Aggregation Functions
+聚合函数
 ---------------------
 
-User-Defined Aggregate Functions (UDAGGs) aggregate a table (one or more rows 
with one or more attributes) to a scalar value. 
+自定义聚合函数(UDAGG)是把一个表(一行或者多行,每行可以有一列或者多列)聚合成一个标量值。
 
 <center>
 <img alt="UDAGG mechanism" src="{{ site.baseurl }}/fig/udagg-mechanism.png" 
width="80%">
 </center>
 
-The above figure shows an example of an aggregation. Assume you have a table 
that contains data about beverages. The table consists of three columns, `id`, 
`name` and `price` and 5 rows. Imagine you need to find the highest price of 
all beverages in the table, i.e., perform a `max()` aggregation. You would need 
to check each of the 5 rows and the result would be a single numeric value.
+上面的图片展示了一个聚合的例子。假设你有一个关于饮料的表。表里面有三个字段,分别是 
`id`、`name`、`price`,表里有5行数据。假设你需要找到所有饮料里最贵的饮料的价格,执行一个 `max()` 
聚合。你需要遍历所有5行数据,而结果就只有一个数值。
 
-User-defined aggregation functions are implemented by extending the 
`AggregateFunction` class. An `AggregateFunction` works as follows. First, it 
needs an `accumulator`, which is the data structure that holds the intermediate 
result of the aggregation. An empty accumulator is created by calling the 
`createAccumulator()` method of the `AggregateFunction`. Subsequently, the 
`accumulate()` method of the function is called for each input row to update 
the accumulator. Once all rows have been processed, the `getValue()` method of 
the function is called to compute and return the final result. 
+自定义聚合函数是通过扩展 `AggregateFunction` 来实现的。`AggregateFunction` 的工作过程如下。首先,它需要一个 
`accumulator`,它是一个数据结构,存储了聚合的中间结果。通过调用 `AggregateFunction` 的 
`createAccumulator()` 方法创建一个空的 accumulator。接下来,对于每一行数据,会调用 `accumulate()` 方法来更新 
accumulator。当所有的数据都处理完了之后,通过调用 `getValue` 方法来计算和返回最终的结果。
 
-**The following methods are mandatory for each `AggregateFunction`:**
+**下面几个方法是每个 `AggregateFunction` 必须要实现的:**
 
 - `createAccumulator()`
 - `accumulate()` 
 - `getValue()`
 
-Flink’s type extraction facilities can fail to identify complex data types, 
e.g., if they are not basic types or simple POJOs. So similar to 
`ScalarFunction` and `TableFunction`, `AggregateFunction` provides methods to 
specify the `TypeInformation` of the result type (through 
- `AggregateFunction#getResultType()`) and the type of the accumulator (through 
`AggregateFunction#getAccumulatorType()`).
+Flink 的类型推导不能处理复杂的数据类型,只能处理基础类型或者是简单的 POJO 类型。所以跟 `ScalarFunction` 和 
`TableFunction` 一样,`AggregateFunction` 也提供了 `AggregateFunction#getResultType()` 
和 `AggregateFunction#getAccumulatorType()` 来分别指定返回值类型和 accumulator 
的类型,两个函数的返回值类型也都是 `TypeInformation`。
  
-Besides the above methods, there are a few contracted methods that can be 
-optionally implemented. While some of these methods allow the system more 
efficient query execution, others are mandatory for certain use cases. For 
instance, the `merge()` method is mandatory if the aggregation function should 
be applied in the context of a session group window (the accumulators of two 
session windows need to be joined when a row is observed that "connects" them). 
+除了上面的方法,还有几个方法可以选择实现。这些方法有些可以让查询更加高效,而有些是在某些特定场景下必须要实现的。例如,如果聚合函数用在会话窗口(当两个会话窗口合并的时候需要
 merge 他们的 accumulator)的话,`merge()` 方法就是必须要实现的。

Review comment:
       “如果聚合函数用在会话窗口(当两个会话窗口合并的时候需要 merge 他们的 accumulator)”
   This translation does not seem to match the original: the original never mentions "window merging", and the original's "connects them" does not seem to be reflected here.
   I am not very familiar with the background of this passage, so I am not sure whether this counts as a reasonable free translation.

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -1425,33 +1420,33 @@ tab
 
 {% top %}
 
-Best Practices for Implementing UDFs
+实现自定义函数的最佳实践
 ------------------------------------
 
-The Table API and SQL code generation internally tries to work with primitive 
values as much as possible. A user-defined function can introduce much overhead 
through object creation, casting, and (un)boxing. Therefore, it is highly 
recommended to declare parameters and result types as primitive types instead 
of their boxed classes. `Types.DATE` and `Types.TIME` can also be represented 
as `int`. `Types.TIMESTAMP` can be represented as `long`. 
+在 Table API 和 SQL 
的内部,代码生成会尽量的使用基础类型。如果自定义函数使用的是对象,会有很多的对象创建、转换(cast)、以及自动拆装箱的开销。因此,强烈建议使用基础类型来作为参数以及返回值的类型。`Types.DATE`
 和 `Types.TIME` 可以用 `int` 来表示。`Types.TIMESTAMP` 可以用 `long` 来表示。
 
-We recommended that user-defined functions should be written by Java instead 
of Scala as Scala types pose a challenge for Flink's type extractor.
+我们建议自定义函数用 Java 来实现,而不是用 Scala 来实现,因为 Flink 的类型推导对 Scala 不是很友好。
 
 {% top %}
 
-Integrating UDFs with the Runtime
+自定义函数跟运行时集成
 ---------------------------------
 
-Sometimes it might be necessary for a user-defined function to get global 
runtime information or do some setup/clean-up work before the actual work. 
User-defined functions provide `open()` and `close()` methods that can be 
overridden and provide similar functionality as the methods in `RichFunction` 
of DataSet or DataStream API.
+有时候自定义函数需要获取一些全局信息,或者在真正被调用之前做一些配置(setup)/清理(clean-up)的工作。自定义函数也提供了 `open()` 和 
`close()` 方法,你可以重写这两个方法做到类似于 DataSet 或者 DataStream API 中 `RichFunction` 的功能。
 
-The `open()` method is called once before the evaluation method. The `close()` 
method after the last call to the evaluation method.
+`open()` 方法在自定义函数被调用之前先调用。`close()` 方法在自定义函数调用完之后被调用。
 
-The `open()` method provides a `FunctionContext` that contains information 
about the context in which user-defined functions are executed, such as the 
metric group, the distributed cache files, or the global job parameters.
+`open()` 方法提供了一个 `FunctionContext`,它包含了一些自定义函数被执行时的上下文信息,比如 metric 
group、分布式文件缓存,或者是全局的任务参数等。
 
-The following information can be obtained by calling the corresponding methods 
of `FunctionContext`:
+下面的信息可以通过调用 `FunctionContext` 的对应的方法来获得:
 
-| Method                                | Description                          
                  |
+| 方法                                  | 描述                                     
               |
 | :------------------------------------ | 
:----------------------------------------------------- |
-| `getMetricGroup()`                    | Metric group for this parallel 
subtask.                |
-| `getCachedFile(name)`                 | Local temporary file copy of a 
distributed cache file. |
-| `getJobParameter(name, defaultValue)` | Global job parameter value 
associated with given key.  |
+| `getMetricGroup()`                    | 执行该函数的 subtask 的 Metric Group。       
            |
+| `getCachedFile(name)`                 | 分布式文件缓存的本地临时文件拷贝。                    
     |

Review comment:
       ```suggestion
   | `getCachedFile(name)`                 | 分布式文件缓存的本地临时文件副本。                  
       |
   ```
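
   A short Java sketch of the `open()`/`close()`/`FunctionContext` integration discussed in this hunk, modeled on the doc's `HashCode` scalar function; the parameter key `hashcode_factor` and the default `"12"` follow that example:

{% highlight java %}
import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.ScalarFunction;

public class HashCode extends ScalarFunction {
    private int factor = 0;

    @Override
    public void open(FunctionContext context) throws Exception {
        // called once before the first call to eval();
        // reads the global job parameter "hashcode_factor" ("12" is the default)
        factor = Integer.parseInt(context.getJobParameter("hashcode_factor", "12"));
    }

    public int eval(String s) {
        return s.hashCode() * factor;
    }

    @Override
    public void close() throws Exception {
        // called after the last call to eval(); clean-up work goes here
    }
}
{% endhighlight %}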

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -826,46 +823,44 @@ t_env.sql_query("SELECT user, wAvg(points, level) AS 
avgPoints FROM userScores G
 
 {% top %}
 
-Table Aggregation Functions
+表值聚合函数
 ---------------------
 
-User-Defined Table Aggregate Functions (UDTAGGs) aggregate a table (one or 
more rows with one or more attributes) to a result table with multi rows and 
columns. 
+自定义表值聚合函数(UDTAGG)可以把一个表(一行或者多行,每行有一列或者多列)聚合成另一张表,结果中可以有多行多列。
 
 <center>
 <img alt="UDAGG mechanism" src="{{ site.baseurl }}/fig/udtagg-mechanism.png" 
width="80%">
 </center>
 
-The above figure shows an example of a table aggregation. Assume you have a 
table that contains data about beverages. The table consists of three columns, 
`id`, `name` and `price` and 5 rows. Imagine you need to find the top 2 highest 
prices of all beverages in the table, i.e., perform a `top2()` table 
aggregation. You would need to check each of the 5 rows and the result would be 
a table with the top 2 values.
+上图展示了一个表值聚合函数的例子。假设你有一个饮料的表,这个表有3列,分别是 `id`、`name` 和 
`price`,一共有5行。假设你需要找到价格最高的两个饮料,类似于 `top2()` 表值聚合函数。你需要遍历所有5行数据,结果是有2行数据的一个表。
 
-User-defined table aggregation functions are implemented by extending the 
`TableAggregateFunction` class. A `TableAggregateFunction` works as follows. 
First, it needs an `accumulator`, which is the data structure that holds the 
intermediate result of the aggregation. An empty accumulator is created by 
calling the `createAccumulator()` method of the `TableAggregateFunction`. 
Subsequently, the `accumulate()` method of the function is called for each 
input row to update the accumulator. Once all rows have been processed, the 
`emitValue()` method of the function is called to compute and return the final 
results. 
+用户自定义表值聚合函数是通过扩展 `TableAggregateFunction` 类来实现的。一个 `TableAggregateFunction` 
的工作过程如下。首先,它需要一个 `accumulator`,这个 `accumulator` 负责存储聚合的中间结果。 通过调用 
`TableAggregateFunction` 的 `createAccumulator` 方法来构造一个空的 
accumulator。接下来,对于每一行数据,会调用 `accumulate` 方法来更新 accumulator。当所有数据都处理完之后,调用 
`emitValue` 方法来计算和返回最终的结果。
 
-**The following methods are mandatory for each `TableAggregateFunction`:**
+**下面几个 `TableAggregateFunction` 的方法是必须要实现的:**
 
 - `createAccumulator()`
 - `accumulate()` 
 
-Flink’s type extraction facilities can fail to identify complex data types, 
e.g., if they are not basic types or simple POJOs. So similar to 
`ScalarFunction` and `TableFunction`, `TableAggregateFunction` provides methods 
to specify the `TypeInformation` of the result type (through 
- `TableAggregateFunction#getResultType()`) and the type of the accumulator 
(through `TableAggregateFunction#getAccumulatorType()`).
+Flink 类型推导在遇到复杂数据类型的时候可能会推导错误。所以类似于 `ScalarFunction` 和 
`TableFunction`,`TableAggregateFunction` 也提供了 
`TableAggregateFunction#getResultType()` 和 
`TableAggregateFunction#getAccumulatorType()` 方法来指定返回值类型和 accumulator 
的类型,这两个方法都需要返回 `TypeInformation`。

Review comment:
       “e.g., if they are not basic types or simple POJOs” was dropped in the translation.

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -826,46 +823,44 @@ t_env.sql_query("SELECT user, wAvg(points, level) AS 
avgPoints FROM userScores G
 
 {% top %}
 
-Table Aggregation Functions
+表值聚合函数
 ---------------------
 
-User-Defined Table Aggregate Functions (UDTAGGs) aggregate a table (one or 
more rows with one or more attributes) to a result table with multi rows and 
columns. 
+自定义表值聚合函数(UDTAGG)可以把一个表(一行或者多行,每行有一列或者多列)聚合成另一张表,结果中可以有多行多列。
 
 <center>
 <img alt="UDAGG mechanism" src="{{ site.baseurl }}/fig/udtagg-mechanism.png" 
width="80%">
 </center>
 
-The above figure shows an example of a table aggregation. Assume you have a 
table that contains data about beverages. The table consists of three columns, 
`id`, `name` and `price` and 5 rows. Imagine you need to find the top 2 highest 
prices of all beverages in the table, i.e., perform a `top2()` table 
aggregation. You would need to check each of the 5 rows and the result would be 
a table with the top 2 values.
+上图展示了一个表值聚合函数的例子。假设你有一个饮料的表,这个表有3列,分别是 `id`、`name` 和 
`price`,一共有5行。假设你需要找到价格最高的两个饮料,类似于 `top2()` 表值聚合函数。你需要遍历所有5行数据,结果是有2行数据的一个表。

Review comment:
       ```suggestion
   上图展示了一个表值聚合函数的例子。假设你有一个饮料的表,这个表有 3 列,分别是 `id`、`name` 和 `price`,一共有 5 
行。假设你需要找到价格最高的两个饮料,类似于 `top2()` 表值聚合函数。你需要遍历所有 5 行数据,结果是有 2 行数据的一个表。
   ```

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -1126,15 +1121,15 @@ abstract class TableAggregateFunction[T, ACC] extends 
UserDefinedAggregateFuncti
 </div>
 
 
-The following example shows how to
+下面的例子展示了如何
 
-- define a `TableAggregateFunction` that calculates the top 2 values on a 
given column, 
-- register the function in the `TableEnvironment`, and 
-- use the function in a Table API query(TableAggregateFunction is only 
supported by Table API).  
+- 定义一个 `TableAggregateFunction` 来计算最大的2个值,
+- 在 `TableEnvironment` 中注册函数,
+- 在 Table API 查询中使用函数(当前只在 Table API 中支持 TableAggregateFunction)。
 
-To calculate the top 2 values, the accumulator needs to store the biggest 2 
values of all the data that has been accumulated. In our example we define a 
class `Top2Accum` to be the accumulator. Accumulators are automatically 
backup-ed by Flink's checkpointing mechanism and restored in case of a failure 
to ensure exactly-once semantics.
+为了计算最大的2个值,accumulator需要保存当前看到的最大的2个值。在我们的例子中,我们定义了类 `Top2Accum` 来作为 
accumulator。Flink 的 checkpoint 机制会自动保存 accumulator,并且在失败时进行恢复,来保证精确一次的语义。
 
-The `accumulate()` method of our `Top2` `TableAggregateFunction` has two 
inputs. The first one is the `Top2Accum` accumulator, the other one is the 
user-defined input: input value `v`. Although the `merge()` method is not 
mandatory for most table aggregation types, we provide it below as examples. 
Please note that we used Java primitive types and defined `getResultType()` and 
`getAccumulatorType()` methods in the Scala example because Flink type 
extraction does not work very well for Scala types.
+我们的 `Top2` 表值聚合函数的 `accumulate()` 方法有两个输入,第一个是 `Top2Accum` 
accumulator,另一个是用户定义的输入:输入的值 `v`。尽管 `merge()` 
方法在大多数聚合类型中不是必须的,我们也在样例中提供了它的实现。请注意,我们在 Scala 样例中也使用的是 Java 的基础类型,并且定义了 
`getResultType()` 和 `getAccumulatorType()` 方法,因为 Flink 的类型推导对于 Scala 
的类型推导支持的不是很好。

Review comment:
       Same here. It is not mentioned that `Top2` is a `TableAggregateFunction`.
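
   Since several comments here concern `Top2`, a self-contained Java sketch of that `TableAggregateFunction`, matching the doc's example; the table `tab` and columns `key`/`a` in the usage lines are assumptions:

{% highlight java %}
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.table.functions.TableAggregateFunction;
import org.apache.flink.util.Collector;

// the accumulator keeps the two largest values seen so far
public static class Top2Accum {
    public Integer first = Integer.MIN_VALUE;
    public Integer second = Integer.MIN_VALUE;
}

public static class Top2 extends TableAggregateFunction<Tuple2<Integer, Integer>, Top2Accum> {

    @Override
    public Top2Accum createAccumulator() {
        return new Top2Accum();
    }

    // called for every input row
    public void accumulate(Top2Accum acc, Integer v) {
        if (v > acc.first) {
            acc.second = acc.first;
            acc.first = v;
        } else if (v > acc.second) {
            acc.second = v;
        }
    }

    // emits the final (value, rank) rows once all input is processed
    public void emitValue(Top2Accum acc, Collector<Tuple2<Integer, Integer>> out) {
        if (acc.first != Integer.MIN_VALUE) {
            out.collect(Tuple2.of(acc.first, 1));
        }
        if (acc.second != Integer.MIN_VALUE) {
            out.collect(Tuple2.of(acc.second, 2));
        }
    }
}

// Table API only: register and use with flatAggregate
tableEnv.registerFunction("top2", new Top2());
tab.groupBy("key")
   .flatAggregate("top2(a) as (v, rank)")
   .select("key, v, rank");
{% endhighlight %}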

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -603,15 +599,15 @@ abstract class AggregateFunction[T, ACC] extends 
UserDefinedAggregateFunction[T,
 </div>
 
 
-The following example shows how to
+下面的例子展示了如何:
 
-- define an `AggregateFunction` that calculates the weighted average on a 
given column, 
-- register the function in the `TableEnvironment`, and 
-- use the function in a query.  
+- 定义一个聚合函数来计算某一列的加权平均,
+- 在 `TableEnvironment` 中注册函数,
+- 在查询中使用函数。
 
-To calculate an weighted average value, the accumulator needs to store the 
weighted sum and count of all the data that has been accumulated. In our 
example we define a class `WeightedAvgAccum` to be the accumulator. 
Accumulators are automatically backup-ed by Flink's checkpointing mechanism and 
restored in case of a failure to ensure exactly-once semantics.
+为了计算加权平均值,accumulator 需要存储加权总和以及数据的条数。在我们的例子里,我们定义了一个类 `WeightedAvgAccum` 来作为 
accumulator。Flink 的 checkpoint 机制会自动保存 accumulator,在失败时进行恢复,以此来保证精确一次的语义。
 
-The `accumulate()` method of our `WeightedAvg` `AggregateFunction` has three 
inputs. The first one is the `WeightedAvgAccum` accumulator, the other two are 
user-defined inputs: input value `ivalue` and weight of the input `iweight`. 
Although the `retract()`, `merge()`, and `resetAccumulator()` methods are not 
mandatory for most aggregation types, we provide them below as examples. Please 
note that we used Java primitive types and defined `getResultType()` and 
`getAccumulatorType()` methods in the Scala example because Flink type 
extraction does not work very well for Scala types.
+我们的 `WeightedAvg` 的 `accumulate` 方法有三个输入参数。第一个是 `WeightedAvgAccum` 
accumulator,另外两个是用户自定义的输入:输入的值 `ivalue` 和 输入的权重 `iweight`。尽管 
`retract()`、`merge()`、`resetAccumulator()` 
这几个方法在大多数聚合类型中都不是必须实现的,我们也在样例中提供了他们的实现。请注意我们在 Scala 样例中也是用的是 Java 的基础类型,并且定义了 
`getResultType()` 和 `getAccumulatorType()`,因为 Flink 的类型推导对于 Scala 的类型推导做的不是很好。

Review comment:
       “我们的 `WeightedAvg`” — the first mention of `WeightedAvg` should state that it is an `AggregateFunction`.
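
   A usage sketch for the `WeightedAvg` `AggregateFunction` discussed here, assuming the class from the earlier sketch; the query mirrors the one quoted in this hunk:

{% highlight java %}
// register the WeightedAvg implementation, then use it in SQL
tableEnv.registerFunction("wAvg", new WeightedAvg());
tableEnv.sqlQuery(
    "SELECT user, wAvg(points, level) AS avgPoints FROM userScores GROUP BY user");
{% endhighlight %}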

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -1425,33 +1420,33 @@ tab
 
 {% top %}
 
-Best Practices for Implementing UDFs
+实现自定义函数的最佳实践
 ------------------------------------
 
-The Table API and SQL code generation internally tries to work with primitive 
values as much as possible. A user-defined function can introduce much overhead 
through object creation, casting, and (un)boxing. Therefore, it is highly 
recommended to declare parameters and result types as primitive types instead 
of their boxed classes. `Types.DATE` and `Types.TIME` can also be represented 
as `int`. `Types.TIMESTAMP` can be represented as `long`. 
+在 Table API 和 SQL 
的内部,代码生成会尽量的使用基础类型。如果自定义函数使用的是对象,会有很多的对象创建、转换(cast)、以及自动拆装箱的开销。因此,强烈建议使用基础类型来作为参数以及返回值的类型。`Types.DATE`
 和 `Types.TIME` 可以用 `int` 来表示。`Types.TIMESTAMP` 可以用 `long` 来表示。
 
-We recommended that user-defined functions should be written by Java instead 
of Scala as Scala types pose a challenge for Flink's type extractor.
+我们建议自定义函数用 Java 来实现,而不是用 Scala 来实现,因为 Flink 的类型推导对 Scala 不是很友好。
 
 {% top %}
 
-Integrating UDFs with the Runtime
+自定义函数跟运行时集成
 ---------------------------------
 
-Sometimes it might be necessary for a user-defined function to get global 
runtime information or do some setup/clean-up work before the actual work. 
User-defined functions provide `open()` and `close()` methods that can be 
overridden and provide similar functionality as the methods in `RichFunction` 
of DataSet or DataStream API.
+有时候自定义函数需要获取一些全局信息,或者在真正被调用之前做一些配置(setup)/清理(clean-up)的工作。自定义函数也提供了 `open()` 和 
`close()` 方法,你可以重写这两个方法做到类似于 DataSet 或者 DataStream API 中 `RichFunction` 的功能。
 
-The `open()` method is called once before the evaluation method. The `close()` 
method after the last call to the evaluation method.
+`open()` 方法在自定义函数被调用之前先调用。`close()` 方法在自定义函数调用完之后被调用。
 
-The `open()` method provides a `FunctionContext` that contains information 
about the context in which user-defined functions are executed, such as the 
metric group, the distributed cache files, or the global job parameters.
+`open()` 方法提供了一个 `FunctionContext`,它包含了一些自定义函数被执行时的上下文信息,比如 metric 
group、分布式文件缓存,或者是全局的任务参数等。

Review comment:
       “任务” -> “作业”

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -1425,33 +1420,33 @@ tab
 
 {% top %}
 
-Best Practices for Implementing UDFs
+实现自定义函数的最佳实践
 ------------------------------------
 
-The Table API and SQL code generation internally tries to work with primitive 
values as much as possible. A user-defined function can introduce much overhead 
through object creation, casting, and (un)boxing. Therefore, it is highly 
recommended to declare parameters and result types as primitive types instead 
of their boxed classes. `Types.DATE` and `Types.TIME` can also be represented 
as `int`. `Types.TIMESTAMP` can be represented as `long`. 
+在 Table API 和 SQL 
的内部,代码生成会尽量的使用基础类型。如果自定义函数使用的是对象,会有很多的对象创建、转换(cast)、以及自动拆装箱的开销。因此,强烈建议使用基础类型来作为参数以及返回值的类型。`Types.DATE`
 和 `Types.TIME` 可以用 `int` 来表示。`Types.TIMESTAMP` 可以用 `long` 来表示。
 
-We recommended that user-defined functions should be written by Java instead 
of Scala as Scala types pose a challenge for Flink's type extractor.
+我们建议自定义函数用 Java 来实现,而不是用 Scala 来实现,因为 Flink 的类型推导对 Scala 不是很友好。
 
 {% top %}
 
-Integrating UDFs with the Runtime
+自定义函数跟运行时集成
 ---------------------------------
 
-Sometimes it might be necessary for a user-defined function to get global 
runtime information or do some setup/clean-up work before the actual work. 
User-defined functions provide `open()` and `close()` methods that can be 
overridden and provide similar functionality as the methods in `RichFunction` 
of DataSet or DataStream API.
+有时候自定义函数需要获取一些全局信息,或者在真正被调用之前做一些配置(setup)/清理(clean-up)的工作。自定义函数也提供了 `open()` 和 
`close()` 方法,你可以重写这两个方法做到类似于 DataSet 或者 DataStream API 中 `RichFunction` 的功能。
 
-The `open()` method is called once before the evaluation method. The `close()` 
method after the last call to the evaluation method.
+`open()` 方法在自定义函数被调用之前先调用。`close()` 方法在自定义函数调用完之后被调用。

Review comment:
       “自定义函数” -> “求值方法”

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -826,46 +823,44 @@ t_env.sql_query("SELECT user, wAvg(points, level) AS 
avgPoints FROM userScores G
 
 {% top %}
 
-Table Aggregation Functions
+表值聚合函数
 ---------------------
 
-User-Defined Table Aggregate Functions (UDTAGGs) aggregate a table (one or 
more rows with one or more attributes) to a result table with multi rows and 
columns. 
+自定义表值聚合函数(UDTAGG)可以把一个表(一行或者多行,每行有一列或者多列)聚合成另一张表,结果中可以有多行多列。
 
 <center>
 <img alt="UDAGG mechanism" src="{{ site.baseurl }}/fig/udtagg-mechanism.png" 
width="80%">
 </center>
 
-The above figure shows an example of a table aggregation. Assume you have a 
table that contains data about beverages. The table consists of three columns, 
`id`, `name` and `price` and 5 rows. Imagine you need to find the top 2 highest 
prices of all beverages in the table, i.e., perform a `top2()` table 
aggregation. You would need to check each of the 5 rows and the result would be 
a table with the top 2 values.
+上图展示了一个表值聚合函数的例子。假设你有一个饮料的表,这个表有3列,分别是 `id`、`name` 和 
`price`,一共有5行。假设你需要找到价格最高的两个饮料,类似于 `top2()` 表值聚合函数。你需要遍历所有5行数据,结果是有2行数据的一个表。
 
-User-defined table aggregation functions are implemented by extending the 
`TableAggregateFunction` class. A `TableAggregateFunction` works as follows. 
First, it needs an `accumulator`, which is the data structure that holds the 
intermediate result of the aggregation. An empty accumulator is created by 
calling the `createAccumulator()` method of the `TableAggregateFunction`. 
Subsequently, the `accumulate()` method of the function is called for each 
input row to update the accumulator. Once all rows have been processed, the 
`emitValue()` method of the function is called to compute and return the final 
results. 
+用户自定义表值聚合函数是通过扩展 `TableAggregateFunction` 类来实现的。一个 `TableAggregateFunction` 
的工作过程如下。首先,它需要一个 `accumulator`,这个 `accumulator` 负责存储聚合的中间结果。 通过调用 
`TableAggregateFunction` 的 `createAccumulator` 方法来构造一个空的 
accumulator。接下来,对于每一行数据,会调用 `accumulate` 方法来更新 accumulator。当所有数据都处理完之后,调用 
`emitValue` 方法来计算和返回最终的结果。
 
-**The following methods are mandatory for each `TableAggregateFunction`:**
+**下面几个 `TableAggregateFunction` 的方法是必须要实现的:**
 
 - `createAccumulator()`
 - `accumulate()` 
 
-Flink’s type extraction facilities can fail to identify complex data types, 
e.g., if they are not basic types or simple POJOs. So similar to 
`ScalarFunction` and `TableFunction`, `TableAggregateFunction` provides methods 
to specify the `TypeInformation` of the result type (through 
- `TableAggregateFunction#getResultType()`) and the type of the accumulator 
(through `TableAggregateFunction#getAccumulatorType()`).
+Flink 类型推导在遇到复杂数据类型的时候可能会推导错误。所以类似于 `ScalarFunction` 和 
`TableFunction`,`TableAggregateFunction` 也提供了 
`TableAggregateFunction#getResultType()` 和 
`TableAggregateFunction#getAccumulatorType()` 方法来指定返回值类型和 accumulator 
的类型,这两个方法都需要返回 `TypeInformation`。
  
-Besides the above methods, there are a few contracted methods that can be 
-optionally implemented. While some of these methods allow the system more 
efficient query execution, others are mandatory for certain use cases. For 
instance, the `merge()` method is mandatory if the aggregation function should 
be applied in the context of a session group window (the accumulators of two 
session windows need to be joined when a row is observed that "connects" them). 
+除了上面的方法,还有几个其他的方法可以选择性的实现。有些方法可以让查询更加高效,而有些方法对于某些特定场景是必须要实现的。比如,在会话窗口(当两个会话窗口合并时会合并两个
 accumulator)中使用聚合函数时,必须要实现`merge()` 方法。

Review comment:
       Same here
   "在会话窗口(当两个会话窗口合并时会合并两个 accumulator)中使用聚合函数时"

##########
File path: docs/dev/table/functions/udfs.zh.md
##########
@@ -1425,33 +1420,33 @@ tab
 
 {% top %}
 
-Best Practices for Implementing UDFs
+实现自定义函数的最佳实践
 ------------------------------------
 
-The Table API and SQL code generation internally tries to work with primitive 
values as much as possible. A user-defined function can introduce much overhead 
through object creation, casting, and (un)boxing. Therefore, it is highly 
recommended to declare parameters and result types as primitive types instead 
of their boxed classes. `Types.DATE` and `Types.TIME` can also be represented 
as `int`. `Types.TIMESTAMP` can be represented as `long`. 
+在 Table API 和 SQL 
的内部,代码生成会尽量的使用基础类型。如果自定义函数使用的是对象,会有很多的对象创建、转换(cast)、以及自动拆装箱的开销。因此,强烈建议使用基础类型来作为参数以及返回值的类型。`Types.DATE`
 和 `Types.TIME` 可以用 `int` 来表示。`Types.TIMESTAMP` 可以用 `long` 来表示。

Review comment:
       "如果自定义函数使用的是对象"
   This sentence is ambiguous; it should be “自定义函数的参数及返回值类型是对象” (i.e., it is the function's parameter and result types that are objects).
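
   To illustrate the primitive-types advice in this hunk, a tiny Java sketch (the class and method names are assumptions): a primitive `long` avoids (un)boxing and, per the advice above, can carry a `Types.TIMESTAMP` value.

{% highlight java %}
import org.apache.flink.table.functions.ScalarFunction;

public class MillisToSeconds extends ScalarFunction {
    // primitive long parameter and result: no object creation or boxing
    public long eval(long timestampMillis) {
        return timestampMillis / 1000;
    }
}
{% endhighlight %}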




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

