YngwieWang commented on a change in pull request #9350: [FLINK-13485] 
[chinese-translation] Translate "Table API Example Walkthrough" page into 
Chinese
URL: https://github.com/apache/flink/pull/9350#discussion_r310365401
 
 

 ##########
 File path: docs/getting-started/walkthroughs/table_api.zh.md
 ##########
 @@ -24,35 +24,39 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Apache Flink offers a Table API as a unified, relational API for batch and stream processing, i.e., queries are executed with the same semantics on unbounded, real-time streams or bounded, batch data sets and produce the same results.
-The Table API in Flink is commonly used to ease the definition of data analytics, data pipelining, and ETL applications.
+Apache Flink为批流一体化提供了一种统一的、关系型API,即Table API。

+也就是说通过Table API建立的查询,在无界的实时数据流亦或是有界的批数据上具有同样的语义,得出的结果也是一样的。

+在Flink中Table API被广泛用于简化数据分析、数据工作流(data pipelining)和ETL应用程序的定义。
 * This will be replaced by the TOC
 {:toc}
 
-## What Will You Be Building? 
+## 接下来你会构建什么? 
+
+在这个教程中,你将会学习如何构建一个持续不断的ETL数据流,这个数据流会被用来按时间顺序追踪每个账户的财务交易。
 
-In this tutorial, you will learn how to build a continuous ETL pipeline for tracking financial transactions by account over time.
-You will start by building your report as a nightly batch job, and then migrate to a streaming pipeline.
+首先你会构建一个每晚运行的批作业,之后再把这个批作业转换成流作业。
 
 Review comment:
   ```suggestion
   你将首先将报表构建为每晚执行的批处理作业,然后迁移到流式管道。
   ```
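
   A side note for context, not part of the PR diff or the suggestion above: the sketch below shows roughly what a unified Table API program of the kind this walkthrough introduces can look like. It assumes a recent Flink release; the `SpendReportSketch` class, the `transactions`/`spend_report` tables, and their `datagen`/`print` backing are illustrative placeholders, not code taken from the walkthrough or this PR.

   ```java
   import static org.apache.flink.table.api.Expressions.$;

   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.Table;
   import org.apache.flink.table.api.TableEnvironment;

   public class SpendReportSketch {

       public static void main(String[] args) throws Exception {
           // Streaming mode here; swapping in inBatchMode() runs the exact same query
           // as a bounded batch job, which is the point the intro paragraph is making.
           TableEnvironment tEnv = TableEnvironment.create(
                   EnvironmentSettings.newInstance().inStreamingMode().build());

           // Hypothetical source: randomly generated transactions standing in for a real feed.
           tEnv.executeSql(
                   "CREATE TABLE transactions (" +
                   "  account_id BIGINT," +
                   "  amount     BIGINT" +
                   ") WITH (" +
                   "  'connector' = 'datagen'," +
                   "  'rows-per-second' = '5'," +
                   "  'fields.account_id.min' = '1'," +
                   "  'fields.account_id.max' = '5'," +
                   "  'fields.amount.min' = '1'," +
                   "  'fields.amount.max' = '100'" +
                   ")");

           // Hypothetical sink: print the continuously updated report to stdout.
           tEnv.executeSql(
                   "CREATE TABLE spend_report (" +
                   "  account_id BIGINT," +
                   "  total      BIGINT" +
                   ") WITH ('connector' = 'print')");

           // The same relational query has the same semantics on an unbounded stream
           // or on a bounded data set: total spend per account.
           Table report = tEnv.from("transactions")
                   .groupBy($("account_id"))
                   .select($("account_id"), $("amount").sum().as("total"));

           // Submit the job and block; in streaming mode this keeps emitting updates.
           report.executeInsert("spend_report").await();
       }
   }
   ```

   Swapping `inStreamingMode()` for `inBatchMode()` runs the same query as a bounded batch job, which mirrors the batch-versus-stream point made in the introduction being translated here.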
