YngwieWang commented on a change in pull request #9299: [FLINK-13405][docs-zh] Translate "Basic API Concepts" page into Chinese
URL: https://github.com/apache/flink/pull/9299#discussion_r317603113
 
 

 ##########
 File path: docs/dev/api_concepts.zh.md
 ##########
 @@ -739,164 +646,125 @@ class WordWithCount(var word: String, var count: Int) {
 
 val input = env.fromElements(
     new WordWithCount("hello", 1),
-    new WordWithCount("world", 2)) // Case Class Data Set
+    new WordWithCount("world", 2)) // Case Class 数据集
 
-input.keyBy("word")// key by field expression "word"
+input.keyBy("word")// 以字段表达式“word”为键
 
 {% endhighlight %}
 </div>
 </div>
 
-#### Primitive Types
+#### 基本数据类型
 
-Flink supports all Java and Scala primitive types such as `Integer`, `String`, and `Double`.
+Flink 支持所有 Java 和 Scala 的基本数据类型,如 `Integer`、`String` 和 `Double`。
 
-#### General Class Types
+#### 常规的类
 
-Flink supports most Java and Scala classes (API and custom).
-Restrictions apply to classes containing fields that cannot be serialized, like file pointers, I/O streams, or other native
-resources. Classes that follow the Java Beans conventions work well in general.
+Flink 支持大部分 Java 和 Scala 的类(API 和自定义)。
+但对于包含无法序列化的字段的类则有限制,比如文件指针、I/O 流或其他本地资源。遵循 Java Beans 约定的类通常可以很好地工作。
 
-All classes that are not identified as POJO types (see POJO requirements above) are handled by Flink as general class types.
-Flink treats these data types as black boxes and is not able to access their content (i.e., for efficient sorting). General types are de/serialized using the serialization framework [Kryo](https://github.com/EsotericSoftware/kryo).
+Flink 将所有未识别为 POJO 类型的类(请参阅上面的 POJO 要求)都作为常规类处理。
+Flink 将这些数据类型视为黑盒,无法访问其内容(例如,无法进行高效排序)。常规类使用 [Kryo](https://github.com/EsotericSoftware/kryo) 序列化框架进行序列化和反序列化。
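
A minimal sketch of how such a class ends up with Kryo (assuming the Flink 1.x Java API; `KryoTypeExample` and `NonPojo` are illustrative names, not from the page):

{% highlight java %}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KryoTypeExample {
    // A hypothetical class that fails the POJO rules (no public no-arg
    // constructor), so Flink treats it as a general class type and
    // serializes it with Kryo.
    public static class NonPojo {
        final int id;
        NonPojo(int id) { this.id = id; }
    }

    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Registering the type up front lets Kryo write a compact class id
        // instead of the fully qualified class name with every record.
        env.getConfig().registerKryoType(NonPojo.class);
    }
}
{% endhighlight %}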
 
-#### Values
+#### 值
 
-*Value* types describe their serialization and deserialization manually. Instead of going through a
-general purpose serialization framework, they provide custom code for those operations by means of
-implementing the `org.apache.flinktypes.Value` interface with the methods `read` and `write`. Using
-a Value type is reasonable when general purpose serialization would be highly inefficient. An
-example would be a data type that implements a sparse vector of elements as an array. Knowing that
-the array is mostly zero, one can use a special encoding for the non-zero elements, while the
-general purpose serialization would simply write all array elements.
+*值* 类型手动描述其序列化和反序列化。它们不通过通用序列化框架,而是通过实现 `org.apache.flink.types.Value` 接口的 `read` 和 `write` 方法为这些操作提供自定义代码。当通用序列化效率非常低时,使用值类型是合理的。例如,一个用数组实现稀疏向量的数据类型:已知数组大部分元素为零,就可以对非零元素使用特殊编码,而通用序列化只会简单地写入所有数组元素。
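
A minimal sketch of the sparse-vector example from the paragraph above (assuming the Flink 1.x Java API; `SparseVector` and its field layout are illustrative):

{% highlight java %}
import java.io.IOException;
import org.apache.flink.core.memory.DataInputView;
import org.apache.flink.core.memory.DataOutputView;
import org.apache.flink.types.Value;

public class SparseVector implements Value {
    private int size;        // logical length of the vector
    private int[] indices;   // positions of the non-zero entries
    private double[] values; // the non-zero entries themselves

    @Override
    public void write(DataOutputView out) throws IOException {
        // Encode only the non-zero entries instead of all `size` elements.
        out.writeInt(size);
        out.writeInt(indices.length);
        for (int i = 0; i < indices.length; i++) {
            out.writeInt(indices[i]);
            out.writeDouble(values[i]);
        }
    }

    @Override
    public void read(DataInputView in) throws IOException {
        size = in.readInt();
        int nonZeros = in.readInt();
        indices = new int[nonZeros];
        values = new double[nonZeros];
        for (int i = 0; i < nonZeros; i++) {
            indices[i] = in.readInt();
            values[i] = in.readDouble();
        }
    }
}
{% endhighlight %}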
 
-The `org.apache.flinktypes.CopyableValue` interface supports manual internal cloning logic in a
-similar way.
+`org.apache.flink.types.CopyableValue` 接口以类似的方式支持手动的内部克隆逻辑。
 
-Flink comes with pre-defined Value types that correspond to basic data types. (`ByteValue`,
-`ShortValue`, `IntValue`, `LongValue`, `FloatValue`, `DoubleValue`, `StringValue`, `CharValue`,
-`BooleanValue`). These Value types act as mutable variants of the basic data types: Their value can
-be altered, allowing programmers to reuse objects and take pressure off the garbage collector.
+Flink 提供了与基本数据类型对应的预定义值类型(`ByteValue`、`ShortValue`、`IntValue`、`LongValue`、`FloatValue`、`DoubleValue`、`StringValue`、`CharValue`、`BooleanValue`)。这些值类型是基本数据类型的可变变体:它们的值可以改变,允许程序员重用对象,从而减轻垃圾回收器的压力。
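
A minimal sketch of the object-reuse pattern this enables (assuming the Flink 1.x Java API; `LengthMapper` is an illustrative name):

{% highlight java %}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.types.IntValue;

public class LengthMapper implements MapFunction<String, IntValue> {
    // One holder object per function instance, mutated for every record
    // instead of allocating a new Integer each time.
    private final IntValue length = new IntValue();

    @Override
    public IntValue map(String s) {
        length.setValue(s.length()); // overwrite the previous value in place
        return length;
    }
}
{% endhighlight %}

Note that handing out the same mutable object is only safe as long as downstream operators do not buffer references to it.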
 
 
-#### Hadoop Writables
+#### Hadoop Writable
 
-You can use types that implement the `org.apache.hadoop.Writable` interface. The serialization logic
-defined in the `write()` and `readFields()` methods will be used for serialization.
+可以使用实现了 `org.apache.hadoop.Writable` 接口的类型。序列化时将使用 `write()` 和 `readFields()` 方法中定义的序列化逻辑。
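
A minimal sketch of such a type (assuming `hadoop-common` is on the classpath; `PageVisit` is an illustrative name):

{% highlight java %}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class PageVisit implements Writable {
    private String url;
    private long visits;

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(url);     // serialization logic picked up by Flink
        out.writeLong(visits);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        url = in.readUTF();    // deserialization mirrors write()
        visits = in.readLong();
    }
}
{% endhighlight %}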
 
-#### Special Types
+#### 特殊类型
 
-You can use special types, including Scala's `Either`, `Option`, and `Try`.
-The Java API has its own custom implementation of `Either`.
-Similarly to Scala's `Either`, it represents a value of two possible types, *Left* or *Right*.
-`Either` can be useful for error handling or operators that need to output two different types of records.
+可以使用特殊类型,包括 Scala 的 `Either`、`Option` 和 `Try`。
+Java API 有对 `Either` 的自定义实现。
+类似于 Scala 的 `Either`,它表示一个具有 *Left* 或 *Right* 两种可能类型的值。
+`Either` 可用于错误处理或需要输出两种不同类型记录的算子。
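
A minimal sketch of the error-handling use case (assuming the Flink 1.x Java API; `SafeParse` is an illustrative name):

{% highlight java %}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.types.Either;

public class SafeParse implements MapFunction<String, Either<String, Integer>> {
    @Override
    public Either<String, Integer> map(String s) {
        try {
            return Either.Right(Integer.parseInt(s)); // the expected record type
        } catch (NumberFormatException e) {
            return Either.Left("not a number: " + s); // the error record type
        }
    }
}
{% endhighlight %}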
 
-#### Type Erasure & Type Inference
+#### 类型擦除和类型推断
 
-*Note: This Section is only relevant for Java.*
+*请注意:本节只与 Java 有关。*
 
-The Java compiler throws away much of the generic type information after compilation. This is
-known as *type erasure* in Java. It means that at runtime, an instance of an object does not know
-its generic type any more. For example, instances of `DataStream<String>` and `DataStream<Long>` look the
-same to the JVM.
+Java 编译器在编译后会丢弃大部分泛型类型信息,这在 Java 中被称作 *类型擦除*。这意味着在运行时,对象的实例不再知道自己的泛型类型。例如,`DataStream<String>` 和 `DataStream<Long>` 的实例在 JVM 看来是一样的。
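
A quick plain-Java demonstration of the effect described above:

{% highlight java %}
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Long> longs = new ArrayList<>();
        // Prints "true": the generic parameter is gone after compilation,
        // just as DataStream<String> and DataStream<Long> look alike to the JVM.
        System.out.println(strings.getClass() == longs.getClass());
    }
}
{% endhighlight %}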
 
-Flink requires type information at the time when it prepares the program for execution (when the
-main method of the program is called). The Flink Java API tries to reconstruct the type information
-that was thrown away in various ways and store it explicitly in the data sets and operators. You can
-retrieve the type via `DataStream.getType()`. The method returns an instance of `TypeInformation`,
-which is Flink's internal way of representing types.
+Flink 在准备程序执行时(程序的 main 方法被调用时)需要类型信息。Flink Java API 会以各种方式尝试重建被丢弃的类型信息,并将其显式地存储在数据集和算子中。你可以通过 `DataStream.getType()` 获取类型。该方法返回一个 `TypeInformation` 实例,这是 Flink 内部表示类型的方式。
 
-The type inference has its limits and needs the "cooperation" of the programmer in some cases.
-Examples for that are methods that create data sets from collections, such as
-`ExecutionEnvironment.fromCollection(),` where you can pass an argument that describes the type. But
-also generic functions like `MapFunction<I, O>` may need extra type information.
+类型推断有其局限性,在某些情况下需要程序员的“配合”。
+这方面的例子是从集合创建数据集的方法,例如 `ExecutionEnvironment.fromCollection()`,你可以在这里传递一个描述类型的参数。而像 `MapFunction<I, O>` 这样的泛型函数可能也需要额外的类型信息。
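
A minimal sketch of handing the missing type information back to Flink (assuming the Flink 1.x Java DataStream API):

{% highlight java %}
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TypeHintExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Integer>> counts = env
            .fromElements("hello", "world")
            .map(word -> Tuple2.of(word, 1))
            // Type erasure hides the lambda's tuple type, so we declare it.
            .returns(Types.TUPLE(Types.STRING, Types.INT));

        counts.print();
        env.execute("type hint example");
    }
}
{% endhighlight %}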
 
 Review comment:
  Fixed.
