RocMarshal commented on a change in pull request #16316:
URL: https://github.com/apache/flink/pull/16316#discussion_r672918446
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -60,44 +59,44 @@ EnvironmentSettings settings = EnvironmentSettings
TableEnvironment tEnv = TableEnvironment.create(env);
-// register Orders table in table environment
+// 在表环境中注册Orders表
Review comment:
```suggestion
// 在表环境中注册 Orders 表
```
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -60,44 +59,44 @@ EnvironmentSettings settings = EnvironmentSettings
TableEnvironment tEnv = TableEnvironment.create(env);
-// register Orders table in table environment
+// 在表环境中注册Orders表
// ...
-// specify table program
+// 指定表程序
Table orders = tEnv.from("Orders"); // schema (a, b, c, rowtime)
Table counts = orders
.groupBy($("a"))
.select($("a"), $("b").count().as("cnt"));
-// print
+// 打印
counts.execute().print();
```
{{< /tab >}}
{{< tab "Scala" >}}
-The Scala Table API is enabled by importing `org.apache.flink.table.api._`, `org.apache.flink.api.scala._`, and `org.apache.flink.table.api.bridge.scala._` (for bridging to/from DataStream).
+Scala 的 Table API 通过引入 `org.apache.flink.table.api._`、`org.apache.flink.api.scala._` 和 `org.apache.flink.table.api.bridge.scala._`(开启数据流的桥接支持)来使用。
-The following example shows how a Scala Table API program is constructed. Table fields are referenced using Scala's String interpolation using a dollar character (`$`).
+下面的例子展示了如何创建一个 Scala 的 Table API 程序。通过 Scala 的带美元符号(`$`)的字符串插值来实现表字段引用。
```scala
import org.apache.flink.api.scala._
import org.apache.flink.table.api._
import org.apache.flink.table.api.bridge.scala._
-// environment configuration
+// 环境配置
val settings = EnvironmentSettings
.newInstance()
.inStreamingMode()
.build();
val tEnv = TableEnvironment.create(settings);
-// register Orders table in table environment
+// 在表环境中注册Orders表
Review comment:
```suggestion
// 在表环境中注册 Orders 表
```
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -2525,21 +2525,21 @@ t.select(t.b, t.rowtime) \
{{< tabs "flataggregate" >}}
{{< tab "Java" >}}
-Similar to a **GroupBy Aggregation**. Groups the rows on the grouping keys with the following running table aggregation operator to aggregate rows group-wise. The difference from an AggregateFunction is that TableAggregateFunction may return 0 or more records for a group. You have to close the "flatAggregate" with a select statement. And the select statement does not support aggregate functions.
+和 **GroupBy Aggregation** 类似。使用运行中的表之后的聚合算子对分组键上的行进行分组,以按组聚合行。和 AggregateFunction 的不同之处在于,TableAggregateFunction 的每个分组可能返回0或多条记录。你必须使用 select 子句关闭 `flatAggregate`。并且 select 子句不支持聚合函数。
-Instead of using emitValue to output results, you can also use the emitUpdateWithRetract method. Different from emitValue, emitUpdateWithRetract is used to emit values that have been updated. This method outputs data incrementally in retract mode, i.e., once there is an update, we have to retract old records before sending new updated ones. The emitUpdateWithRetract method will be used in preference to the emitValue method if both methods are defined in the table aggregate function, because the method is treated to be more efficient than emitValue as it can output values incrementally.
+除了使用 emitValue 输出结果,你还可以使用 emitUpdateWithRetract 方法。和 emitValue 不同的是,emitUpdateWithRetract 用于下发已更新的值。此方法在retract 模式下增量输出数据,例如,一旦有更新,我们必须在发送新的更新记录之前收回旧记录。如果在表聚合函数中定义了这两个方法,则将优先使用 emitUpdateWithRetract 方法而不是 emitValue 方法,这是因为该方法可以增量输出值,因此被视为比 emitValue 方法更有效。
```java
/**
- * Accumulator for Top2.
+ * Top2 聚合器。
Review comment:
```suggestion
* Top2 Accumulator。
```
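For reference, a minimal Java sketch of how the `flatAggregate` described above is typically closed with a `select`. The `Top2` function and the `Orders` schema mirror the snippet under review; the registered name `top2` and the column aliases are illustrative assumptions, not code from this PR:
```java
import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.call;

import org.apache.flink.table.api.Table;

// assumes tEnv and the Top2 TableAggregateFunction from the snippet above
tEnv.createTemporarySystemFunction("top2", Top2.class);

Table orders = tEnv.from("Orders"); // schema (a, b, c, rowtime)
Table result = orders
    .groupBy($("a"))
    // Top2 may emit 0..n rows per group via emitValue / emitUpdateWithRetract
    .flatAggregate(call("top2", $("b")).as("v", "rank"))
    // flatAggregate must be closed with select, and that select
    // must not contain aggregate functions
    .select($("a"), $("v"), $("rank"));
```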
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -2312,7 +2313,7 @@ input.flat_map(split)
{{< tabs "aggregate" >}}
{{< tab "Java" >}}
-Performs an aggregate operation with an aggregate function. You have to close the "aggregate" with a select statement and the select statement does not support aggregate functions. The output of aggregate will be flattened if the output type is a composite type.
+使用聚合函数来执行聚合操作。你必须使用 select 子句关闭 `aggregate` ,并且 select 子句不支持聚合函数。如果输出类型是复合类型,则聚合的输出将被展平。
Review comment:
```suggestion
使用聚合函数来执行聚合操作。你必须使用 select 子句关闭 `aggregate`,并且 select 子句不支持聚合函数。如果输出类型是复合类型,则聚合的输出将被展平。
```
The same applies to the similar occurrences mentioned below.
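As a reference for the wording above, a minimal Java sketch of `aggregate` being closed with a `select`. The `MyMinMax` function, its registered name, and the column aliases are illustrative assumptions, not code from this PR:
```java
import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.call;

import org.apache.flink.table.api.Table;

// assumes an AggregateFunction MyMinMax whose result row is (min, max)
tEnv.createTemporarySystemFunction("myMinMax", MyMinMax.class);

Table orders = tEnv.from("Orders"); // schema (a, b, c, rowtime)
Table result = orders
    .groupBy($("a"))
    .aggregate(call("myMinMax", $("b")).as("min", "max"))
    // aggregate must be closed with select; the composite output (min, max)
    // is flattened into two top-level columns
    .select($("a"), $("min"), $("max"));
```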
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -2637,7 +2637,7 @@ class Top2 extends TableAggregateFunction[JTuple2[JInteger, JInteger], Top2Accum
}
def emitValue(acc: Top2Accum, out: Collector[JTuple2[JInteger, JInteger]]): Unit = {
- // emit the value and rank
+ // 下发原值与等级值
Review comment:
```suggestion
// 发送 value 与 rank
```
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -110,16 +109,16 @@ val result = orders
{{< /tab >}}
{{< tab "Python" >}}
-The following example shows how a Python Table API program is constructed and how expressions are specified as strings.
+下面的例子展示了如何创建一个 Python 的 Table API 程序,以及表达式是如何指定为字符串的。
```python
from pyflink.table import *
-# environment configuration
+# 环境配置
t_env = TableEnvironment.create(
environment_settings=EnvironmentSettings.in_batch_mode())
-# register Orders table and Result table sink in table environment
+# 在表环境中注册Orders表和结果sink表
Review comment:
```suggestion
# 在表环境中注册 Orders 表和结果 sink 表
```
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -2362,7 +2363,7 @@ Table table = input
{{< /tab >}}
{{< tab "Scala" >}}
-Performs an aggregate operation with an aggregate function. You have to close the "aggregate" with a select statement and the select statement does not support aggregate functions. The output of aggregate will be flattened if the output type is a composite type.
+使用聚合函数来执行聚合操作。你必须使用 select 子句关闭 `aggregate` ,并且 select 子句不支持聚合函数。如果输出类型是复合类型,则聚合的输出将被展平。
Review comment:
```suggestion
使用聚合函数来执行聚合操作。你必须使用 select 子句关闭 `aggregate`,并且 select 子句不支持聚合函数。如果输出类型是复合类型,则聚合的输出将被展平。
```
Would you like to translate it in a better way?
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -2713,16 +2713,17 @@ result = t.select(t.a, t.c) \
{{< query_state_warning >}}
-Data Types
+<a name="data-types"></a>
+数据类型
----------
-Please see the dedicated page about [data types]({{< ref "docs/dev/table/types" >}}).
+请查看[数据类型]({{< ref "docs/dev/table/types" >}})的专门页面。
-Generic types and (nested) composite types (e.g., POJOs, tuples, rows, Scala case classes) can be fields of a row as well.
+行中的字段可以是一般类型和(嵌套)复合类型(比如 POJO、元组、行、 Scala 案例类)。
-Fields of composite types with arbitrary nesting can be accessed with [value access functions]({{< ref "docs/dev/table/functions/systemFunctions" >}}#value-access-functions).
+任意嵌套的复合类型的字段都可以通过[值访问函数]({{< ref "docs/dev/table/functions/systemFunctions" >}}#value-access-functions)来访问。
-Generic types are treated as a black box and can be passed on or processed by [user-defined functions]({{< ref "docs/dev/table/functions/udfs" >}}).
+[用户定义函数]({{< ref "docs/dev/table/functions/udfs" >}})可以将一般类型当作黑匣子一样来传输和处理。
Review comment:
```suggestion
[用户自定义函数]({{< ref "docs/dev/table/functions/udfs" >}})可以将泛型当作黑匣子一样传输和处理。
```
Just a minor opinion; maybe you could translate it in a better way.
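To make the value-access sentence concrete, a small Java sketch of reading a nested field of a composite column and flattening it. The table and the `address`/`city` column names are purely hypothetical, not from this PR:
```java
import static org.apache.flink.table.api.Expressions.$;

import org.apache.flink.table.api.Table;

// assumes a table "Customers" with a composite (POJO/ROW) column "address"
Table customers = tEnv.from("Customers");

// value access on an arbitrarily nested composite field
Table cities = customers.select($("address").get("city").as("city"));

// or expand the composite column into top-level fields
Table flat = customers.select($("address").flatten());
```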
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -2569,7 +2569,7 @@ public class Top2 extends TableAggregateFunction<Tuple2<Integer, Integer>, Top2A
}
public void emitValue(Top2Accum acc, Collector<Tuple2<Integer, Integer>> out) {
- // emit the value and rank
+ // 下发原值与等级值
Review comment:
```suggestion
// 发送 value 与 rank
```
##########
File path: docs/content.zh/docs/dev/table/tableApi.md
##########
@@ -1964,9 +1965,9 @@ A session window is defined by using the `Session` class as follows:
### Over Windows
-Over window aggregates are known from standard SQL (`OVER` clause) and defined in the `SELECT` clause of a query. Unlike group windows, which are specified in the `GROUP BY` clause, over windows do not collapse rows. Instead over window aggregates compute an aggregate for each input row over a range of its neighboring rows.
+Over window 聚合是在标准 SQL(`OVER` 子句)中被知晓,并在 `SELECT` 查询子句中定义的。与在“GROUP BY”子句中指定的 group window 不同, over window 不会折叠行。相反,over window 聚合为每个输入行在其相邻行的范围内计算聚合。
Review comment:
```suggestion
Over window 聚合来自标准的 SQL(`OVER` 子句),可以在 `SELECT` 查询子句中定义。与在“GROUP BY”子句中指定的 group window 不同,over window 不会折叠行。相反,over window 聚合为每个输入行在其相邻行的范围内计算聚合。
```
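For context on the over-window wording, a minimal Java sketch showing that an over window keeps one output row per input row. The window alias and the unbounded range are just one common configuration, not taken from this PR:
```java
import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.UNBOUNDED_RANGE;

import org.apache.flink.table.api.Over;
import org.apache.flink.table.api.Table;

Table orders = tEnv.from("Orders"); // schema (a, b, c, rowtime)
Table result = orders
    // unlike GROUP BY windows, rows are not collapsed: one result row per input row
    .window(Over.partitionBy($("a")).orderBy($("rowtime")).preceding(UNBOUNDED_RANGE).as("w"))
    .select($("a"), $("b").avg().over($("w")), $("b").max().over($("w")));
```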
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]