YngwieWang commented on a change in pull request #9299: [FLINK-13405][docs-zh] Translate "Basic API Concepts" page into Chinese
URL: https://github.com/apache/flink/pull/9299#discussion_r317597048
 
 

 ##########
 File path: docs/dev/api_concepts.zh.md
 ##########
 @@ -605,46 +523,35 @@ data.map (new RichMapFunction[String, Int] {
 
 </div>
 
-Rich functions provide, in addition to the user-defined function (map,
-reduce, etc), four methods: `open`, `close`, `getRuntimeContext`, and
-`setRuntimeContext`. These are useful for parameterizing the function
-(see [Passing Parameters to Functions]({{ site.baseurl }}/dev/batch/index.html#passing-parameters-to-functions)),
-creating and finalizing local state, accessing broadcast variables (see
-[Broadcast Variables]({{ site.baseurl }}/dev/batch/index.html#broadcast-variables)), and for accessing runtime
-information such as accumulators and counters (see
-[Accumulators and Counters](#accumulators--counters)), and information
-on iterations (see [Iterations]({{ site.baseurl }}/dev/batch/iterations.html)).
+富函数为用户定义函数(map、reduce 等)额外提供了四个方法:`open`、`close`、`getRuntimeContext` 和 `setRuntimeContext`。这些方法有助于向函数传参(请参阅 [向函数传递参数]({{ site.baseurl }}/zh/dev/batch/index.html#passing-parameters-to-functions))、
+创建和清理本地状态、访问广播变量(请参阅
+[广播变量]({{ site.baseurl }}/zh/dev/batch/index.html#broadcast-variables))、访问诸如累加器和计数器等运行时信息(请参阅
+[累加器和计数器](#accumulators--counters))以及访问迭代信息(请参阅 [迭代]({{ site.baseurl }}/zh/dev/batch/iterations.html))。
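
Reviewer note: for readers new to the lifecycle described above, it can be sketched outside of Flink as a plain Java program. `SketchRichMapFunction` and `run` below are hypothetical stand-ins for illustration only, not Flink API; only the open → map → close call order and the "local state created in `open`, released in `close`" idea are shown.

```java
import java.util.ArrayList;
import java.util.List;

public class RichFunctionSketch {
    // Hypothetical simplification of a rich map function: a user-defined
    // map plus open/close lifecycle hooks.
    static abstract class SketchRichMapFunction<IN, OUT> {
        public void open() {}   // called once before the first element
        public void close() {}  // called once after the last element
        public abstract OUT map(IN value);
    }

    // A toy "runtime": invokes the lifecycle in the documented order.
    static <IN, OUT> List<OUT> run(SketchRichMapFunction<IN, OUT> fn, List<IN> input) {
        fn.open();                      // create local state here
        List<OUT> out = new ArrayList<>();
        for (IN v : input) {
            out.add(fn.map(v));
        }
        fn.close();                     // finalize local state here
        return out;
    }

    public static void main(String[] args) {
        SketchRichMapFunction<String, Integer> lengths =
            new SketchRichMapFunction<String, Integer>() {
                private StringBuilder seen;  // local state, created in open()

                @Override
                public void open() { seen = new StringBuilder(); }

                @Override
                public Integer map(String value) {
                    seen.append(value).append(';');
                    return value.length();
                }

                @Override
                public void close() { System.out.println("seen: " + seen); }
            };
        System.out.println(run(lengths, List.of("a", "bb", "ccc")));
    }
}
```

In real Flink code, `getRuntimeContext()` inside `open` is where broadcast variables and accumulators become reachable; this sketch omits that context object entirely.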
 
 {% top %}
 
-Supported Data Types
+支持的数据类型
 --------------------
 
-Flink places some restrictions on the type of elements that can be in a DataSet or DataStream.
-The reason for this is that the system analyzes the types to determine
-efficient execution strategies.
+Flink 对于 DataSet 或 DataStream 中可以包含的元素类型做了一些限制,这是为了使系统能够通过分析类型来确定高效的执行策略。
 
-There are six different categories of data types:
+有七种不同的数据类型:
 
-1. **Java Tuples** and **Scala Case Classes**
-2. **Java POJOs**
-3. **Primitive Types**
-4. **Regular Classes**
-5. **Values**
-6. **Hadoop Writables**
-7. **Special Types**
+1. **Java Tuple** 和 **Scala Case Class**
+2. **Java POJO**
+3. **基本数据类型**
+4. **常规的类**
+5. **值**
+6. **Hadoop Writable**
+7. **特殊类型**
 
-#### Tuples and Case Classes
+#### Tuple 和 Case Class
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-Tuples are composite types that contain a fixed number of fields with various types.
-The Java API provides classes from `Tuple1` up to `Tuple25`. Every field of a tuple
-can be an arbitrary Flink type including further tuples, resulting in nested tuples. Fields of a
-tuple can be accessed directly using the field's name as `tuple.f4`, or using the generic getter method
-`tuple.getField(int position)`. The field indices start at 0. Note that this stands in contrast
-to the Scala tuples, but it is more consistent with Java's general indexing.
+Tuple 是复合类型,包含固定数量的各种类型的字段。
+Java API 提供了从 `Tuple1` 到 `Tuple25` 的类。Tuple 的每一个字段都可以是任意的 Flink 类型,包括 Tuple 本身,从而构成嵌套的 Tuple。Tuple 的字段可以通过字段名称(如 `tuple.f4`)直接访问,也可以使用通用的 getter 方法 `tuple.getField(int position)`。字段索引从 0 开始。请注意,这一点与 Scala 的 Tuple 不同,但与 Java 的常规索引方式更为一致。
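
Reviewer note: the two access styles described above (named field vs. positional getter) can be sketched without a Flink dependency. `MiniTuple2` below is a hypothetical hand-rolled simplification; the real classes live in `org.apache.flink.api.java.tuple` and go up to `Tuple25`.

```java
public class TupleAccessSketch {
    // Hypothetical two-field tuple mimicking the documented access pattern.
    static class MiniTuple2<T0, T1> {
        public T0 f0;  // field indices start at 0, matching tuple.f0
        public T1 f1;

        MiniTuple2(T0 f0, T1 f1) {
            this.f0 = f0;
            this.f1 = f1;
        }

        // Generic positional getter, mirroring tuple.getField(int position).
        @SuppressWarnings("unchecked")
        <T> T getField(int position) {
            switch (position) {
                case 0: return (T) f0;
                case 1: return (T) f1;
                default: throw new IndexOutOfBoundsException("position: " + position);
            }
        }
    }

    public static void main(String[] args) {
        MiniTuple2<String, Integer> t = new MiniTuple2<>("flink", 42);
        System.out.println(t.f0);        // direct access via field name
        int n = t.getField(1);           // generic getter by position
        System.out.println(n);

        // A field may itself be a tuple, giving nested tuples.
        MiniTuple2<String, MiniTuple2<String, Integer>> nested =
            new MiniTuple2<>("outer", t);
        System.out.println(nested.f1.f0);
    }
}
```

Note how the unchecked cast inside `getField` is the price of a positional, type-erased getter; direct `f0`/`f1` access keeps full static typing, which is why the docs present it first.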
 
 Review comment:
   👍 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services