JingsongLi commented on a change in pull request #11127: [FLINK-16081][docs] Translate /dev/table/index.zh.md
URL: https://github.com/apache/flink/pull/11127#discussion_r382411106
##########
File path: docs/dev/table/index.zh.md
##########

@@ -25,41 +25,41 @@ specific language governing permissions and limitations under the License.
 -->
-Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing. The Table API is a language-integrated query API for Scala and Java that allows the composition of queries from relational operators such as selection, filter, and join in a very intuitive way. Flink's SQL support is based on [Apache Calcite](https://calcite.apache.org) which implements the SQL standard. Queries specified in either interface have the same semantics and specify the same result regardless whether the input is a batch input (DataSet) or a stream input (DataStream).
+Apache Flink 有两种关系型 API 来做流批统一处理:Table API 和 SQL。Table API 是集成于 Java 和 Scala 的查询 API,它可以用一种非常直观的方式来组合使用例如选取、过滤、join 等关系型算子。Flink SQL 是基于 [Apache Calcite](https://calcite.apache.org) 来实现的标准 SQL。这两种 API 中的查询对于批(DataSet)流(DataStream)的输入有相同的语义,也会产生同样的计算结果。
-The Table API and the SQL interfaces are tightly integrated with each other as well as Flink's DataStream and DataSet APIs. You can easily switch between all APIs and libraries which build upon the APIs. For instance, you can extract patterns from a DataStream using the [CEP library]({{ site.baseurl }}/dev/libs/cep.html) and later use the Table API to analyze the patterns, or you might scan, filter, and aggregate a batch table using a SQL query before running a [Gelly graph algorithm]({{ site.baseurl }}/dev/libs/gelly) on the preprocessed data.
+Table API 和 SQL 两种 API 是紧密集成的,以及 DataStream 和 DataSet API。你可以在这些 API 之间,以及一些基于这些 API 的库之间轻松地切换。比如,你可以先用 [CEP]({{ site.baseurl }}/zh/dev/libs/cep.html) 从 DataStream 中做模式匹配,然后用 Table API 来分析匹配的结果;或者你可以用 SQL 来扫描、过滤、聚合一个批式的表,然后再跑一个 [Gelly 图算法]({{ site.baseurl }}/zh/dev/libs/gelly) 来处理已经预处理好的数据。
-**Please note that the Table API and SQL are not yet feature complete and are being actively developed. Not all operations are supported by every combination of \[Table API, SQL\] and \[stream, batch\] input.**
+**注意:Table API 和 SQL 现在还处于活跃开发阶段,还没有完全实现所有的特性。不是所有的 \[Table API,SQL\] 和 \[流,批\] 的组合都是支持的。**
-Dependency Structure
+依赖图
 --------------------
-Starting from Flink 1.9, Flink provides two different planner implementations for evaluating Table & SQL API programs: the Blink planner and the old planner that was available before Flink 1.9. Planners are responsible for translating relational operators into an executable, optimized Flink job. Both of the planners come with different optimization rules and runtime classes. They may also differ in the set of supported features.
+从1.9开始,Flink 提供了两个 table planner 实现来执行 Table API 和 SQL 程序:Blink planner 和 old planner,old planner 在1.9之前就已经存在了。planner 的作用主要是把关系型的操作翻译成可执行的、经过优化的 Flink job。这两个 planner 所使用的优化规则以及运行时都不一样。它们在支持的功能上也有些差异。
-<span class="label label-danger">Attention</span> For production use cases, we recommend the old planner that was present before Flink 1.9 for now.
+<span class="label label-danger">注意</span> 对于生产环境,我们建议使用在1.9之前就已经存在的 old planner。
-All Table API and SQL components are bundled in the `flink-table` or `flink-table-blink` Maven artifacts.
+所有的 Table API 和 SQL 的代码都在 `flink-table` 或者 `flink-table-blink` Maven artifacts 下。
-The following dependencies are relevant for most projects:
+下面是各个依赖:
-* `flink-table-common`: A common module for extending the table ecosystem by custom functions, formats, etc.
-* `flink-table-api-java`: The Table & SQL API for pure table programs using the Java programming language (in early development stage, not recommended!).
-* `flink-table-api-scala`: The Table & SQL API for pure table programs using the Scala programming language (in early development stage, not recommended!).
-* `flink-table-api-java-bridge`: The Table & SQL API with DataStream/DataSet API support using the Java programming language.
-* `flink-table-api-scala-bridge`: The Table & SQL API with DataStream/DataSet API support using the Scala programming language.
-* `flink-table-planner`: The table program planner and runtime. This was the only planner of Flink before the 1.9 release. It is still the recommended one.
-* `flink-table-planner-blink`: The new Blink planner.
-* `flink-table-runtime-blink`: The new Blink runtime.
-* `flink-table-uber`: Packages the API modules above plus the old planner into a distribution for most Table & SQL API use cases. The uber JAR file `flink-table-*.jar` is located in the `/lib` directory of a Flink release by default.
-* `flink-table-uber-blink`: Packages the API modules above plus the Blink specific modules into a distribution for most Table & SQL API use cases. The uber JAR file `flink-table-blink-*.jar` is located in the `/lib` directory of a Flink release by default.
+* `flink-table-common`: 公共模块,比如自定义函数、格式等需要依赖的。
+* `flink-table-api-java`: Table 和 SQL API,使用 Java 语言编写的,给纯 table 程序使用(还在早期开发阶段,不建议使用)
+* `flink-table-api-scala`: Table 和 SQL API,使用 Scala 语言编写的,给纯 table 程序使用(还在早期开发阶段,不建议使用)
+* `flink-table-api-java-bridge`: Table 和 SQL API,也支持 DataStream/DataSet API,给 Java 语言使用。
+* `flink-table-api-scala-bridge`: Table 和 SQL API,也支持 DataStream/DataSet API,给 Scala 语言使用。
+* `flink-table-planner`: table planner 和运行时。这是在1.9之前 Flink 的唯一的 planner,现在仍然建议使用这个。
+* `flink-table-planner-blink`: 新的 Blink planner。
+* `flink-table-runtime-blink`: 新的 Blink 运行时。
+* `flink-table-uber`: 把上述模块以及 old planner 打包到一起,可以在大部分 Table & SQL API 场景下使用。打包到一起的 jar 文件 `flink-table-*.jar` 默认会直接放到 Flink 发行版的 `/lib` 目录下。
+* `flink-table-uber-blink`: 把上述模块以及 Blink planner 打包到一起,可以在大部分 Table & SQL API 场景下使用。打包到一起的 jar 文件 `flink-table-blink-*.jar` 默认会放到 Flink 发行版的 `/lib` 目录下。
-See the [common API](common.html) page for more information about how to switch between the old and new Blink planner in table programs.
+关于如何使用 old planner 以及 Blink planner,可以参考[公共 API](common.html)。
-### Table Program Dependencies
+### Table 程序依赖
-Depending on the target programming language, you need to add the Java or Scala API to a project in order to use the Table API & SQL for defining pipelines:
+取决于你使用的编程语言,选择 Java 或者 Scala API 来构建你的程序:

Review comment:
       构建你的Table API和SQL的程序?("…to build your Table API and SQL programs"?)

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

With regards,
Apache Git Services
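As a concrete illustration of the module list in the diff above: a Java project targeting the old planner would typically declare dependencies along these lines. This is a hedged sketch, not part of the PR under review; the `_2.11` Scala-version suffix and the `${flink.version}` property are illustrative placeholders, and `provided` scope reflects that the uber JAR already sits in the Flink distribution's `/lib` directory.

```xml
<!-- Sketch: Table API & SQL dependencies for a Java project on the old planner.
     Artifact IDs come from the list above; suffix and version are placeholders. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-api-java-bridge_2.11</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-planner_2.11</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
```

A project on the Blink planner would swap `flink-table-planner` for `flink-table-planner-blink`, per the module list above.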
