zhuxiaoshang commented on a change in pull request #14985:
URL: https://github.com/apache/flink/pull/14985#discussion_r580732501
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -28,51 +28,41 @@ under the License.
# 基于 Table API 实现实时报表
-Apache Flink offers a Table API as a unified, relational API for batch and
stream processing, i.e., queries are executed with the same semantics on
unbounded, real-time streams or bounded, batch data sets and produce the same
results.
-The Table API in Flink is commonly used to ease the definition of data
analytics, data pipelining, and ETL applications.
+Apache Flink 提供了 Table API 作为统一的相关
API,用于批处理和流处理,即:对无边界的实时流或有约束的批处理数据集以相同的语义执行查询,并产生相同的结果。Flink 中的 Table API
通常用于简化数据分析,数据管道和ETL应用程序的定义。
Review comment:
有约束 -> 有边界, so that it corresponds with 无边界 (bounded, matching unbounded).
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -163,44 +147,37 @@ tEnv.executeSql("CREATE TABLE spend_report (\n" +
")");
```
-The second table, `spend_report`, stores the final results of the aggregation.
-Its underlying storage is a table in a MySql database.
+第二张表`spend_report`存储了聚合的最终结果,它的基础存储是 MySql 数据库中的表。
Review comment:
它的底层存储是 MySql 数据库中的表。
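For context, the truncated DDL in this hunk registers `spend_report` through `executeSql`. A sketch of what such a JDBC-backed definition can look like (the connection options below are illustrative assumptions, not necessarily the walkthrough's exact values):
```java
// Sketch only: the URL, driver, and credentials are placeholder assumptions.
tEnv.executeSql("CREATE TABLE spend_report (\n" +
    "    account_id BIGINT,\n" +
    "    log_ts     TIMESTAMP(3),\n" +
    "    amount     BIGINT,\n" +
    "    PRIMARY KEY (account_id, log_ts) NOT ENFORCED\n" +
    ") WITH (\n" +
    "    'connector'  = 'jdbc',\n" +
    "    'url'        = 'jdbc:mysql://localhost:3306/sql-demo',\n" +
    "    'table-name' = 'spend_report',\n" +
    "    'driver'     = 'com.mysql.jdbc.Driver',\n" +
    "    'username'   = 'sql-demo',\n" +
    "    'password'   = 'demo-sql'\n" +
    ")");
```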
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -28,51 +28,41 @@ under the License.
# 基于 Table API 实现实时报表
-Apache Flink offers a Table API as a unified, relational API for batch and
stream processing, i.e., queries are executed with the same semantics on
unbounded, real-time streams or bounded, batch data sets and produce the same
results.
-The Table API in Flink is commonly used to ease the definition of data
analytics, data pipelining, and ETL applications.
+Apache Flink 提供了 Table API 作为统一的相关
API,用于批处理和流处理,即:对无边界的实时流或有约束的批处理数据集以相同的语义执行查询,并产生相同的结果。Flink 中的 Table API
通常用于简化数据分析,数据管道和ETL应用程序的定义。
Review comment:
Leave a space before and after "ETL".
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -109,25 +99,20 @@ report(transactions).executeInsert("spend_report");
```
-## Breaking Down The Code
+## 代码分析
-#### The Execution Environment
+#### 执行环境
-The first two lines set up your `TableEnvironment`.
-The table environment is how you can set properties for your Job, specify
whether you are writing a batch or a streaming application, and create your
sources.
-This walkthrough creates a standard table environment that uses the streaming
execution.
+前两行设置您的TableEnvironment。表环境是您可以为Job设置属性,指定是编写批处理应用程序还是流应用程序以及创建源的方法,本练习将创建一个使用流执行的标准表环境。
Review comment:
"table environment" can be kept in English, since it is a technical term. Suggested wording:
table environment 可以用来为Job设置属性,指定是编写批应用程序还是流应用程序以及创建源。
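For reference, the two setup lines under discussion would look roughly like this in streaming mode (a sketch that mirrors the `inBatchMode()` snippet quoted later in this review):
```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Create a standard table environment that uses streaming execution.
EnvironmentSettings settings = EnvironmentSettings.newInstance().inStreamingMode().build();
TableEnvironment tEnv = TableEnvironment.create(settings);
```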
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -163,44 +147,37 @@ tEnv.executeSql("CREATE TABLE spend_report (\n" +
")");
```
-The second table, `spend_report`, stores the final results of the aggregation.
-Its underlying storage is a table in a MySql database.
+第二张表`spend_report`存储了聚合的最终结果,它的基础存储是 MySql 数据库中的表。
-#### The Query
+#### 查询
-With the environment configured and tables registered, you are ready to build
your first application.
-From the `TableEnvironment` you can read `from` an input table to read its
rows and then write those results into an output table using `executeInsert`.
-The `report` function is where you will implement your business logic.
-It is currently unimplemented.
+配置好环境并注册表之后,就可以构建第一个应用程序了。从 `TableEnvironment` 您可以利用 `from` 从输入表读取其行,然后利用
`executeInsert` 将结果写入输出表。该 `report` 功能是实现业务逻辑的地方,目前尚未实现。
```java
Table transactions = tEnv.from("transactions");
report(transactions).executeInsert("spend_report");
```
-## Testing
+## 测试
-The project contains a secondary testing class `SpendReportTest` that
validates the logic of the report.
-It creates a table environment in batch mode.
+该项目包含一个辅助测试类 `SpendReportTest`,用于验证报告逻辑。它以批处理方式创建表环境。
```java
EnvironmentSettings settings =
EnvironmentSettings.newInstance().inBatchMode().build();
TableEnvironment tEnv = TableEnvironment.create(settings);
```
-One of Flink's unique properties is that it provides consistent semantics
across batch and streaming.
-This means you can develop and test applications in batch mode on static
datasets, and deploy to production as streaming applications.
+Flink 的独特属性之一是,它在批处理和流传输之间提供一致的语义。这意味着您可以在静态数据集上以批处理模式开发和测试应用程序,并作为流应用程序部署到生产中。
-## Attempt One
-
-Now with the skeleton of a Job set-up, you are ready to add some business
logic.
-The goal is to build a report that shows the total spend for each account
across each hour of the day.
-This means the timestamp column needs be be rounded down from millisecond to
hour granularity.
Review comment:
There is a typo here; it should be "to be". Please also fix the English original.
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -28,51 +28,41 @@ under the License.
# 基于 Table API 实现实时报表
-Apache Flink offers a Table API as a unified, relational API for batch and
stream processing, i.e., queries are executed with the same semantics on
unbounded, real-time streams or bounded, batch data sets and produce the same
results.
-The Table API in Flink is commonly used to ease the definition of data
analytics, data pipelining, and ETL applications.
+Apache Flink 提供了 Table API 作为统一的相关
API,用于批处理和流处理,即:对无边界的实时流或有约束的批处理数据集以相同的语义执行查询,并产生相同的结果。Flink 中的 Table API
通常用于简化数据分析,数据管道和ETL应用程序的定义。
Review comment:
Flink 中的 Table API 通常用于简化数据分析,数据管道和ETL应用程序的定义。 --- this should start on a new line, to stay consistent with the original; same below.
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -28,51 +28,41 @@ under the License.
# 基于 Table API 实现实时报表
-Apache Flink offers a Table API as a unified, relational API for batch and
stream processing, i.e., queries are executed with the same semantics on
unbounded, real-time streams or bounded, batch data sets and produce the same
results.
-The Table API in Flink is commonly used to ease the definition of data
analytics, data pipelining, and ETL applications.
+Apache Flink 提供了 Table API 作为统一的相关
API,用于批处理和流处理,即:对无边界的实时流或有约束的批处理数据集以相同的语义执行查询,并产生相同的结果。Flink 中的 Table API
通常用于简化数据分析,数据管道和ETL应用程序的定义。
-## What Will You Be Building?
+## 您要搭建一个什么系统
-In this tutorial, you will learn how to build a real-time dashboard to track
financial transactions by account.
-The pipeline will read data from Kafka and write the results to MySQL
visualized via Grafana.
+在本教程中,您将学习如何构建实时仪表板以按帐户跟踪财务交易。流程将从Kafka读取数据,并将结果写入通过 Grafana 可视化的 MySQL。
-## Prerequisites
+## 准备条件
-This walkthrough assumes that you have some familiarity with Java or Scala,
but you should be able to follow along even if you come from a different
programming language.
-It also assumes that you are familiar with basic relational concepts such as
`SELECT` and `GROUP BY` clauses.
+这个代码练习假定您对 Java 或 Scala
有一定的了解,当然,如果您之前使用的是其他开发语言,您也应该能够跟随本教程进行学习。同时假定您熟悉基本的关系概念,例如 SELECT 和 GROUP BY
语法。
-## Help, I’m Stuck!
+## 困难求助
-If you get stuck, check out the [community support
resources](https://flink.apache.org/community.html).
-In particular, Apache Flink's [user mailing
list](https://flink.apache.org/community.html#mailing-lists) consistently ranks
as one of the most active of any Apache project and a great way to get help
quickly.
+如果遇到困难,可以参考[社区支持资源](https://flink.apache.org/community.html)。
当然也可以在邮件列表提问,Flink
的[用户邮件列表](https://flink.apache.org/community.html#mailing-lists)一直被评为所有Apache项目中最活跃的一个,这也是快速获得帮助的好方法。
Review comment:
Mind the spaces before and after English words, and use full-width (Chinese) parentheses.
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -163,44 +147,37 @@ tEnv.executeSql("CREATE TABLE spend_report (\n" +
")");
```
-The second table, `spend_report`, stores the final results of the aggregation.
-Its underlying storage is a table in a MySql database.
+第二张表`spend_report`存储了聚合的最终结果,它的基础存储是 MySql 数据库中的表。
-#### The Query
+#### 查询
-With the environment configured and tables registered, you are ready to build
your first application.
-From the `TableEnvironment` you can read `from` an input table to read its
rows and then write those results into an output table using `executeInsert`.
-The `report` function is where you will implement your business logic.
-It is currently unimplemented.
+配置好环境并注册表之后,就可以构建第一个应用程序了。从 `TableEnvironment` 您可以利用 `from` 从输入表读取其行,然后利用
`executeInsert` 将结果写入输出表。该 `report` 功能是实现业务逻辑的地方,目前尚未实现。
```java
Table transactions = tEnv.from("transactions");
report(transactions).executeInsert("spend_report");
```
-## Testing
+## 测试
-The project contains a secondary testing class `SpendReportTest` that
validates the logic of the report.
-It creates a table environment in batch mode.
+该项目包含一个辅助测试类 `SpendReportTest`,用于验证报告逻辑。它以批处理方式创建表环境。
```java
EnvironmentSettings settings =
EnvironmentSettings.newInstance().inBatchMode().build();
TableEnvironment tEnv = TableEnvironment.create(settings);
```
-One of Flink's unique properties is that it provides consistent semantics
across batch and streaming.
-This means you can develop and test applications in batch mode on static
datasets, and deploy to production as streaming applications.
+Flink 的独特属性之一是,它在批处理和流传输之间提供一致的语义。这意味着您可以在静态数据集上以批处理模式开发和测试应用程序,并作为流应用程序部署到生产中。
-## Attempt One
-
-Now with the skeleton of a Job set-up, you are ready to add some business
logic.
-The goal is to build a report that shows the total spend for each account
across each hour of the day.
-This means the timestamp column needs be be rounded down from millisecond to
hour granularity.
+## 尝试
+
+现在,有了 Job 设置的框架,您就可以添加一些业务逻辑。目的是建立一个报告,显示每个帐户在一天中每个小时的总支出。这意味着时间戳列需要从毫秒舍入到小时粒度。
Flink supports developing relational applications in pure [SQL]({{< ref
"docs/dev/table/sql/overview" >}}) or using the [Table API]({{< ref
"docs/dev/table/tableApi" >}}).
The Table API is a fluent DSL inspired by SQL, that can be written in Python,
Java, or Scala and supports strong IDE integration.
Just like a SQL query, Table programs can select the required fields and group
by your keys.
These features, allong with [built-in functions]({{< ref
"docs/dev/table/functions/systemFunctions" >}}) like `floor` and `sum`, you can
write this report.
+Flink支持使用纯[SQL]({{< ref "docs/dev/table/sql/overview" >}})或使用[Table API]({{<
ref "docs/dev/table/tableApi" >}})。Table API 是受 SQL 启发的流畅 DSL,可以用 Python、Java或
Scala 编写,并支持强大的 IDE 集成。就像 SQL 查询一样,Table 程序可以选择必填字段并按键进行分组。这些特点,结合[内置函数] ({{<
ref "docs/dev/table/functions/systemFunctions" >}}) ,如floor和sum,可以编写此报告。
Review comment:
利用这些特性,并结合[内置函数] ({{< ref "docs/dev/table/functions/systemFunctions" >}}) ,如 `floor` 和 `sum`,就可以编写 report 函数了。
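A sketch of such a `report` function built from `floor` and `sum` (column names `account_id`, `transaction_time`, and `amount` follow the walkthrough's table definitions; treat the exact shape as illustrative):
```java
import static org.apache.flink.table.api.Expressions.$;

import org.apache.flink.table.api.Table;
import org.apache.flink.table.expressions.TimeIntervalUnit;

public static Table report(Table transactions) {
    return transactions.select(
            $("account_id"),
            // round the millisecond timestamp down to hour granularity
            $("transaction_time").floor(TimeIntervalUnit.HOUR).as("log_ts"),
            $("amount"))
        .groupBy($("account_id"), $("log_ts"))
        .select(
            $("account_id"),
            $("log_ts"),
            $("amount").sum().as("amount"));
}
```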
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -163,44 +147,37 @@ tEnv.executeSql("CREATE TABLE spend_report (\n" +
")");
```
-The second table, `spend_report`, stores the final results of the aggregation.
-Its underlying storage is a table in a MySql database.
+第二张表`spend_report`存储了聚合的最终结果,它的基础存储是 MySql 数据库中的表。
-#### The Query
+#### 查询
-With the environment configured and tables registered, you are ready to build
your first application.
-From the `TableEnvironment` you can read `from` an input table to read its
rows and then write those results into an output table using `executeInsert`.
-The `report` function is where you will implement your business logic.
-It is currently unimplemented.
+配置好环境并注册表之后,就可以构建第一个应用程序了。从 `TableEnvironment` 您可以利用 `from` 从输入表读取其行,然后利用
`executeInsert` 将结果写入输出表。该 `report` 功能是实现业务逻辑的地方,目前尚未实现。
```java
Table transactions = tEnv.from("transactions");
report(transactions).executeInsert("spend_report");
```
-## Testing
+## 测试
-The project contains a secondary testing class `SpendReportTest` that
validates the logic of the report.
-It creates a table environment in batch mode.
+该项目包含一个辅助测试类 `SpendReportTest`,用于验证报告逻辑。它以批处理方式创建表环境。
```java
EnvironmentSettings settings =
EnvironmentSettings.newInstance().inBatchMode().build();
TableEnvironment tEnv = TableEnvironment.create(settings);
```
-One of Flink's unique properties is that it provides consistent semantics
across batch and streaming.
-This means you can develop and test applications in batch mode on static
datasets, and deploy to production as streaming applications.
+Flink 的独特属性之一是,它在批处理和流传输之间提供一致的语义。这意味着您可以在静态数据集上以批处理模式开发和测试应用程序,并作为流应用程序部署到生产中。
Review comment:
Flink 的特色之一是,它在批处理和流处理之间提供一致性的语义。
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -163,44 +147,37 @@ tEnv.executeSql("CREATE TABLE spend_report (\n" +
")");
```
-The second table, `spend_report`, stores the final results of the aggregation.
-Its underlying storage is a table in a MySql database.
+第二张表`spend_report`存储了聚合的最终结果,它的基础存储是 MySql 数据库中的表。
-#### The Query
+#### 查询
-With the environment configured and tables registered, you are ready to build
your first application.
-From the `TableEnvironment` you can read `from` an input table to read its
rows and then write those results into an output table using `executeInsert`.
-The `report` function is where you will implement your business logic.
-It is currently unimplemented.
+配置好环境并注册表之后,就可以构建第一个应用程序了。从 `TableEnvironment` 您可以利用 `from` 从输入表读取其行,然后利用
`executeInsert` 将结果写入输出表。该 `report` 功能是实现业务逻辑的地方,目前尚未实现。
```java
Table transactions = tEnv.from("transactions");
report(transactions).executeInsert("spend_report");
```
-## Testing
+## 测试
-The project contains a secondary testing class `SpendReportTest` that
validates the logic of the report.
-It creates a table environment in batch mode.
+该项目包含一个辅助测试类 `SpendReportTest`,用于验证报告逻辑。它以批处理方式创建表环境。
Review comment:
用于验证 report 函数的逻辑。
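For context, a minimal sketch of how a batch-mode test like `SpendReportTest` can exercise the report logic on a static dataset (the sample row and schema here are assumptions for illustration):
```java
import java.time.LocalDateTime;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.types.Row;

EnvironmentSettings settings = EnvironmentSettings.newInstance().inBatchMode().build();
TableEnvironment tEnv = TableEnvironment.create(settings);

// An in-memory input keeps the test hermetic; the values are illustrative.
Table transactions = tEnv.fromValues(
    DataTypes.ROW(
        DataTypes.FIELD("account_id", DataTypes.BIGINT()),
        DataTypes.FIELD("transaction_time", DataTypes.TIMESTAMP(3)),
        DataTypes.FIELD("amount", DataTypes.BIGINT())),
    Row.of(1L, LocalDateTime.parse("2020-01-01T01:23:47"), 188L));

// In batch mode the result is bounded, so it can be collected and checked.
report(transactions).execute().collect().forEachRemaining(System.out::println);
```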
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -163,44 +147,37 @@ tEnv.executeSql("CREATE TABLE spend_report (\n" +
")");
```
-The second table, `spend_report`, stores the final results of the aggregation.
-Its underlying storage is a table in a MySql database.
+第二张表`spend_report`存储了聚合的最终结果,它的基础存储是 MySql 数据库中的表。
-#### The Query
+#### 查询
-With the environment configured and tables registered, you are ready to build
your first application.
-From the `TableEnvironment` you can read `from` an input table to read its
rows and then write those results into an output table using `executeInsert`.
-The `report` function is where you will implement your business logic.
-It is currently unimplemented.
+配置好环境并注册表之后,就可以构建第一个应用程序了。从 `TableEnvironment` 您可以利用 `from` 从输入表读取其行,然后利用
`executeInsert` 将结果写入输出表。该 `report` 功能是实现业务逻辑的地方,目前尚未实现。
```java
Table transactions = tEnv.from("transactions");
report(transactions).executeInsert("spend_report");
```
-## Testing
+## 测试
-The project contains a secondary testing class `SpendReportTest` that
validates the logic of the report.
-It creates a table environment in batch mode.
+该项目包含一个辅助测试类 `SpendReportTest`,用于验证报告逻辑。它以批处理方式创建表环境。
```java
EnvironmentSettings settings =
EnvironmentSettings.newInstance().inBatchMode().build();
TableEnvironment tEnv = TableEnvironment.create(settings);
```
-One of Flink's unique properties is that it provides consistent semantics
across batch and streaming.
-This means you can develop and test applications in batch mode on static
datasets, and deploy to production as streaming applications.
+Flink 的独特属性之一是,它在批处理和流传输之间提供一致的语义。这意味着您可以在静态数据集上以批处理模式开发和测试应用程序,并作为流应用程序部署到生产中。
-## Attempt One
-
-Now with the skeleton of a Job set-up, you are ready to add some business
logic.
-The goal is to build a report that shows the total spend for each account
across each hour of the day.
-This means the timestamp column needs be be rounded down from millisecond to
hour granularity.
+## 尝试
+
+现在,有了 Job 设置的框架,您就可以添加一些业务逻辑。目的是建立一个报告,显示每个帐户在一天中每个小时的总支出。这意味着时间戳列需要从毫秒舍入到小时粒度。
Flink supports developing relational applications in pure [SQL]({{< ref
"docs/dev/table/sql/overview" >}}) or using the [Table API]({{< ref
"docs/dev/table/tableApi" >}}).
Review comment:
Delete this English paragraph.
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -28,51 +28,41 @@ under the License.
# 基于 Table API 实现实时报表
-Apache Flink offers a Table API as a unified, relational API for batch and
stream processing, i.e., queries are executed with the same semantics on
unbounded, real-time streams or bounded, batch data sets and produce the same
results.
-The Table API in Flink is commonly used to ease the definition of data
analytics, data pipelining, and ETL applications.
+Apache Flink 提供了 Table API 作为统一的相关
API,用于批处理和流处理,即:对无边界的实时流或有约束的批处理数据集以相同的语义执行查询,并产生相同的结果。Flink 中的 Table API
通常用于简化数据分析,数据管道和ETL应用程序的定义。
-## What Will You Be Building?
+## 您要搭建一个什么系统
Review comment:
Per point 6 of https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications, use "你" rather than "您"; same below.
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -163,44 +147,37 @@ tEnv.executeSql("CREATE TABLE spend_report (\n" +
")");
```
-The second table, `spend_report`, stores the final results of the aggregation.
-Its underlying storage is a table in a MySql database.
+第二张表`spend_report`存储了聚合的最终结果,它的基础存储是 MySql 数据库中的表。
-#### The Query
+#### 查询
-With the environment configured and tables registered, you are ready to build
your first application.
-From the `TableEnvironment` you can read `from` an input table to read its
rows and then write those results into an output table using `executeInsert`.
-The `report` function is where you will implement your business logic.
-It is currently unimplemented.
+配置好环境并注册表之后,就可以构建第一个应用程序了。从 `TableEnvironment` 您可以利用 `from` 从输入表读取其行,然后利用
`executeInsert` 将结果写入输出表。该 `report` 功能是实现业务逻辑的地方,目前尚未实现。
Review comment:
该 `report` 函数是实现业务逻辑的地方,目前尚未实现。
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -275,38 +248,30 @@ public static Table report(Table transactions) {
}
```
-This defines your application as using one hour tumbling windows based on the
timestamp column.
-So a row with timestamp `2019-06-01 01:23:47` is put in the `2019-06-01
01:00:00` window.
-
+这将您的应用程序定义为使用基于timestamp列的一小时滚动窗口。因此,带有时间戳的行 `2019-06-01 01:23:47` 将被放置在
`2019-06-01 01:00:00` 窗口中。
-Aggregations based on time are unique because time, as opposed to other
attributes, generally moves forward in a continuous streaming application.
-Unlike `floor` and your UDF, window functions are
[intrinsics](https://en.wikipedia.org/wiki/Intrinsic_function), which allows
the runtime to apply additional optimizations.
-In a batch context, windows offer a convenient API for grouping records by a
timestamp attribute.
+基于时间的聚合是唯一的,因为与其它属性相反,时间通常在连续流应用程序中向前移动。与用户自定义函数 `floor`
不同,窗口函数是[内部函数](https://en.wikipedia.org/wiki/Intrinsic_function),它允许运行时应用额外的优化。在批处理环境中,窗口函数提供了一种用于按
timestamp 属性对记录进行分组方便的API。
Running the test with this implementation will also pass.
Review comment:
This sentence is not translated.
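A sketch of the tumbling-window `report` variant described in this hunk (column names again follow the walkthrough's tables; treat it as illustrative):
```java
import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.lit;

import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.Tumble;

public static Table report(Table transactions) {
    return transactions
        // one-hour tumbling windows based on the timestamp column
        .window(Tumble.over(lit(1).hours()).on($("transaction_time")).as("log_ts"))
        .groupBy($("account_id"), $("log_ts"))
        .select(
            $("account_id"),
            // a row stamped 2019-06-01 01:23:47 lands in the 2019-06-01 01:00:00 window
            $("log_ts").start().as("log_ts"),
            $("amount").sum().as("amount"));
}
```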
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -163,44 +147,37 @@ tEnv.executeSql("CREATE TABLE spend_report (\n" +
")");
```
-The second table, `spend_report`, stores the final results of the aggregation.
-Its underlying storage is a table in a MySql database.
+第二张表`spend_report`存储了聚合的最终结果,它的基础存储是 MySql 数据库中的表。
-#### The Query
+#### 查询
-With the environment configured and tables registered, you are ready to build
your first application.
-From the `TableEnvironment` you can read `from` an input table to read its
rows and then write those results into an output table using `executeInsert`.
-The `report` function is where you will implement your business logic.
-It is currently unimplemented.
+配置好环境并注册表之后,就可以构建第一个应用程序了。从 `TableEnvironment` 您可以利用 `from` 从输入表读取其行,然后利用
`executeInsert` 将结果写入输出表。该 `report` 功能是实现业务逻辑的地方,目前尚未实现。
```java
Table transactions = tEnv.from("transactions");
report(transactions).executeInsert("spend_report");
```
-## Testing
+## 测试
-The project contains a secondary testing class `SpendReportTest` that
validates the logic of the report.
-It creates a table environment in batch mode.
+该项目包含一个辅助测试类 `SpendReportTest`,用于验证报告逻辑。它以批处理方式创建表环境。
```java
EnvironmentSettings settings =
EnvironmentSettings.newInstance().inBatchMode().build();
TableEnvironment tEnv = TableEnvironment.create(settings);
```
-One of Flink's unique properties is that it provides consistent semantics
across batch and streaming.
-This means you can develop and test applications in batch mode on static
datasets, and deploy to production as streaming applications.
+Flink 的独特属性之一是,它在批处理和流传输之间提供一致的语义。这意味着您可以在静态数据集上以批处理模式开发和测试应用程序,并作为流应用程序部署到生产中。
-## Attempt One
-
-Now with the skeleton of a Job set-up, you are ready to add some business
logic.
-The goal is to build a report that shows the total spend for each account
across each hour of the day.
-This means the timestamp column needs be be rounded down from millisecond to
hour granularity.
+## 尝试
+
+现在,有了 Job 设置的框架,您就可以添加一些业务逻辑。目的是建立一个报告,显示每个帐户在一天中每个小时的总支出。这意味着时间戳列需要从毫秒舍入到小时粒度。
Flink supports developing relational applications in pure [SQL]({{< ref
"docs/dev/table/sql/overview" >}}) or using the [Table API]({{< ref
"docs/dev/table/tableApi" >}}).
The Table API is a fluent DSL inspired by SQL, that can be written in Python,
Java, or Scala and supports strong IDE integration.
Just like a SQL query, Table programs can select the required fields and group
by your keys.
These features, allong with [built-in functions]({{< ref
"docs/dev/table/functions/systemFunctions" >}}) like `floor` and `sum`, you can
write this report.
+Flink支持使用纯[SQL]({{< ref "docs/dev/table/sql/overview" >}})或使用[Table API]({{<
ref "docs/dev/table/tableApi" >}})。Table API 是受 SQL 启发的流畅 DSL,可以用 Python、Java或
Scala 编写,并支持强大的 IDE 集成。就像 SQL 查询一样,Table 程序可以选择必填字段并按键进行分组。这些特点,结合[内置函数] ({{<
ref "docs/dev/table/functions/systemFunctions" >}}) ,如floor和sum,可以编写此报告。
Review comment:
Flink支持使用纯[SQL]({{< ref "docs/dev/table/sql/overview" >}})或使用[Table API]({{< ref "docs/dev/table/tableApi" >}})来开发纯关系型应用。
##########
File path: docs/content.zh/docs/try-flink/table_api.md
##########
@@ -28,51 +28,41 @@ under the License.
# 基于 Table API 实现实时报表
-Apache Flink offers a Table API as a unified, relational API for batch and
stream processing, i.e., queries are executed with the same semantics on
unbounded, real-time streams or bounded, batch data sets and produce the same
results.
-The Table API in Flink is commonly used to ease the definition of data
analytics, data pipelining, and ETL applications.
+Apache Flink 提供了 Table API 作为统一的相关
API,用于批处理和流处理,即:对无边界的实时流或有约束的批处理数据集以相同的语义执行查询,并产生相同的结果。Flink 中的 Table API
通常用于简化数据分析,数据管道和ETL应用程序的定义。
-## What Will You Be Building?
+## 您要搭建一个什么系统
-In this tutorial, you will learn how to build a real-time dashboard to track
financial transactions by account.
-The pipeline will read data from Kafka and write the results to MySQL
visualized via Grafana.
+在本教程中,您将学习如何构建实时仪表板以按帐户跟踪财务交易。流程将从Kafka读取数据,并将结果写入通过 Grafana 可视化的 MySQL。
Review comment:
将结果写入 MySQL 并通过 Grafana 提供可视化。
--- wouldn't this read more smoothly?