yljee commented on a change in pull request #16316:
URL: https://github.com/apache/flink/pull/16316#discussion_r666151321
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -28,25 +28,24 @@ under the License.
# Table API
-The Table API is a unified, relational API for stream and batch processing.
-Table API queries can be run on batch or streaming input without modifications.
-The Table API is a super set of the SQL language and is specially designed for
-working with Apache Flink. The Table API is a language-integrated API for
-Scala, Java and Python. Instead of specifying queries as String values as
-common with SQL, Table API queries are defined in a language-embedded style in
-Java, Scala or Python with IDE support like autocompletion and syntax
-validation.
+Table API 是批处理和流处理的统一的关系 API。Table API 的查询不需要修改代码就可以采用批输入或流输入来运行。Table API 是
+SQL 语言的超集,并且是针对Apache Flink 专门设计的。Table API 集成了 Scala, Java 和 Python 语言的
+API。Table API 的查询是使用 Java, Scala 或 Python 语言嵌入的风格定义的,有诸如自动补全和语法校验的 IDE
+支持,而不是像普通 SQL 一样使用字符串类型的值来指定查询。
-The Table API shares many concepts and parts of its API with Flink's SQL
-integration. Have a look at the [Common Concepts & API]({{< ref
-"docs/dev/table/common" >}}) to learn how to register tables or to create a
-`Table` object. The [Streaming Concepts]({{< ref
-"docs/dev/table/concepts/overview" >}}) pages discuss streaming specific
-concepts such as dynamic tables and time attributes.
+Table API 和 Flink SQL 共享许多概念以及部分集成的 API。通过查看 [公共概念 & API]({{< ref
+"docs/dev/table/common" >}}) 来学习如何注册表或如何创建一个表对象。 [流概念]({{< ref
+"docs/dev/table/concepts/overview" >}})页面讨论了诸如动态表和时间属性等流特有的概念。
Review comment:
OK, I'll fix it.
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -1016,8 +1018,8 @@ result = joined_table.select(joined_table.a, joined_table.b, joined_table.e, joi
{{< label "Batch" >}} {{< label "Streaming" >}}
-Joins a table with the results of a table function. Each row of the left
-(outer) table is joined with all rows produced by the corresponding call of the
-table function.
-A row of the left (outer) table is dropped, if its table function call returns
-an empty result.
+join表和表函数的结果。左(外部)表的每一行都会join表函数的相应调用产生的所有行。
Review comment:
OK, I'll fix it.
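For reference, a minimal sketch of this lateral join in the Python Table API; the `split` function and the `orders` table are invented for illustration:

```python
from pyflink.table import DataTypes, EnvironmentSettings, TableEnvironment
from pyflink.table.udf import udtf

t_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_streaming_mode().build())

# A hypothetical table function: one output row per non-empty tag.
@udtf(result_types=[DataTypes.STRING(), DataTypes.INT()])
def split(tags):
    for tag in tags.split(","):
        if tag:
            yield tag, len(tag)

orders = t_env.from_elements([(1, "small,blue"), (2, "")], ["id", "tags"])

# Inner variant: row 2 is dropped because split() emits no rows for it.
joined = orders.join_lateral(split(orders.tags).alias("tag", "tag_len"))

# Outer variant: row 2 is kept and padded with NULLs instead.
outer = orders.left_outer_join_lateral(split(orders.tags).alias("tag", "tag_len"))
```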
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -1345,7 +1347,7 @@ left.minusAll(right)
{{< label Batch >}} {{< label Streaming >}}
-Similar to a SQL `IN` clause. In returns true if an expression exists in a
-given table sub-query. The sub-query table must consist of one column. This
-column must have the same data type as the expression.
+和 SQL `IN` 子句类似。如果在给定表的子查询中存在包含in的表达式,则返回true。子查询表必须由一列组成。这个列必须与表达式具有相同的数据类型。
Review comment:
OK, I'll fix it.
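A small illustration in the Python Table API, assuming `left` has columns `a`, `b`, `c` and `right` is a single-column table whose column has the same data type as `a`:

```python
# Keep only the rows of `left` whose value of column `a` appears in the
# single-column sub-query table `right`.
result = left.select(left.a, left.b, left.c).where(left.a.in_(right))
```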
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -2155,7 +2158,7 @@ The row-based operations generate outputs with multiple columns.
{{< tabs "map" >}}
{{< tab "Java" >}}
-Performs a map operation with a user-defined scalar function or built-in
-scalar function. The output will be flattened if the output type is a composite
-type.
+使用用户定义的标量函数或内置标量函数执行map操作。如果输出类型是复合类型,则输出将被展平。
Review comment:
OK, I'll fix it.
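As a quick sketch of the semantics (shown in the Python API for brevity; the `normalize` function and the column names `id`/`name` are made up), a scalar function with a composite ROW result has its output flattened:

```python
from pyflink.common import Row
from pyflink.table import DataTypes
from pyflink.table.udf import udf

# A general scalar function with a composite (ROW) result type.
@udf(result_type=DataTypes.ROW(
    [DataTypes.FIELD("id", DataTypes.BIGINT()),
     DataTypes.FIELD("name", DataTypes.STRING())]))
def normalize(id, name):
    return Row(id, name.strip().lower())

# Because the result type is composite, the output of map() is
# flattened into the two columns `id` and `name`.
table = input_table.map(normalize(input_table.id, input_table.name))
```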
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -2197,7 +2200,7 @@ val table = input
{{< /tab >}}
{{< tab "Python" >}}
-Performs a map operation with a python [general scalar function]({{< ref
-"docs/dev/python/table/udfs/python_udfs" >}}#scalar-functions) or [vectorized
-scalar function]({{< ref "docs/dev/python/table/udfs/vectorized_python_udfs"
->}}#vectorized-scalar-functions). The output will be flattened if the output
-type is a composite type.
+使用 python 的[一般标量函数]({{< ref "docs/dev/python/table/udfs/python_udfs"
+>}}#scalar-functions)或[向量化标量函数]({{< ref
+"docs/dev/python/table/udfs/vectorized_python_udfs"
+>}}#vectorized-scalar-functions)执行map操作。如果输出类型是复合类型,则输出将被展平。
Review comment:
OK, I'll fix it.
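And a sketch of the vectorized variant, assuming an input table with two BIGINT columns `a` and `b` (the function and all names here are illustrative only):

```python
import pandas as pd
from pyflink.table import DataTypes
from pyflink.table.udf import udf

# A vectorized (pandas) scalar function: receives pandas.Series inputs and
# returns a pandas.DataFrame matching the composite ROW result type.
pandas_map = udf(
    lambda a, b: pd.DataFrame({"a": a + 1, "b": b * b}),
    result_type=DataTypes.ROW(
        [DataTypes.FIELD("a", DataTypes.BIGINT()),
         DataTypes.FIELD("b", DataTypes.BIGINT())]),
    func_type="pandas")

# The composite ROW output is again flattened into the columns `a` and `b`.
table = input_table.map(pandas_map(input_table.a, input_table.b))
```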
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]