klion26 commented on a change in pull request #8319: [FLINK-11636] [docs-zh] Translate "State Schema Evolution" into Chinese
URL: https://github.com/apache/flink/pull/8319#discussion_r280969812
 
 

 ##########
 File path: docs/dev/stream/state/schema_evolution.zh.md
 ##########
 @@ -48,62 +47,52 @@ checkpointedState = getRuntimeContext().getListState(descriptor);
 {% endhighlight %}
 </div>
 
-Under the hood, whether or not the schema of state can be evolved depends on the serializer used to read / write
-persisted state bytes. Simply put, a registered state's schema can only be evolved if its serializer properly
-supports it. This is handled transparently by serializers generated by Flink's type serialization framework
-(current scope of support is listed [below]({{ site.baseurl }}/dev/stream/state/schema_evolution.html#supported-data-types-for-schema-evolution)).
+在内部,状态是否可以进行升级取决于用于读写持久化状态字节的序列化器。
+简而言之,状态数据结构只有在其序列化器正确支持时才能升级。
+这一过程是被 Flink 的类型序列化框架生成的序列化器透明处理的([下面]({{ site.baseurl }}/zh/dev/stream/state/schema_evolution.html#数据结构升级支持的数据类型) 列出了当前的支持范围)。
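To make the paragraph above concrete, here is a minimal sketch (the `AccountEvent` POJO and the state name are hypothetical, and it mirrors the descriptor snippet quoted earlier in this hunk): declaring state with the class itself leaves the choice of serializer to Flink's type serialization framework, and that generated serializer is what makes evolution possible.

{% highlight java %}
// Hypothetical POJO used as the state type. Because it follows Flink's POJO
// rules (public no-arg constructor, public fields), Flink generates a
// PojoSerializer for it, which supports schema evolution.
public class AccountEvent {
    public String accountId;
    public long amount;
    public AccountEvent() {}
}

// e.g. inside the operator's state initialization: declaring the state with
// the class (not a custom serializer) lets the framework-generated serializer
// read / write the persisted state bytes.
ListStateDescriptor<AccountEvent> descriptor =
        new ListStateDescriptor<>("account-events", AccountEvent.class);
checkpointedState = getRuntimeContext().getListState(descriptor);
{% endhighlight %}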
 
-If you intend to implement a custom `TypeSerializer` for your state type and would like to learn how to implement
-the serializer to support state schema evolution, please refer to
-[Custom State Serialization]({{ site.baseurl }}/dev/stream/state/custom_serialization.html).
-The documentation there also covers necessary internal details about the interplay between state serializers and Flink's
-state backends to support state schema evolution.
+如果你想要为你的状态类型实现自定义的 `TypeSerializer` 并且想要学习如何实现支持状态数据结构升级的序列化器,
+可以参考 [自定义状态序列化器]({{ site.baseurl }}/zh/dev/stream/state/custom_serialization.html)。
+本文档也包含一些用于支持状态数据结构升级的状态序列化器与 Flink 状态后端存储相互作用的必要内部细节。
 
-## Evolving state schema
+## 升级状态数据结构
 
-To evolve the schema of a given state type, you would take the following steps:
+为了对给定的状态类型进行升级,你需要采取以下几个步骤:
 
- 1. Take a savepoint of your Flink streaming job.
- 2. Update state types in your application (e.g., modifying your Avro type schema).
- 3. Restore the job from the savepoint. When accessing state for the first time, Flink will assess whether or not
- the schema had been changed for the state, and migrate state schema if necessary.
+ 1. 对 Flink 流作业进行 savepoint 操作。
+ 2. 升级程序中的状态类型(例如:修改你的 Avro 结构)。
+ 3. 从 savepoint 处重启作业。当第一次访问状态数据时,Flink 会评估状态数据结构是否已经改变,并在必要的时候进行状态结构迁移。
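As a concrete illustration of step 2, and assuming the hypothetical `AccountEvent` POJO sketched above together with Flink's POJO evolution rules (fields may be added or removed, but field types and the class name must stay the same), the updated state type might look like this:

{% highlight java %}
// Evolved version of the hypothetical POJO state type. On restore from the
// savepoint, the removed "amount" field is dropped and the newly added
// "currency" field starts out with its Java default value (null).
public class AccountEvent {
    public String accountId;
    public String currency;   // newly added field
    public AccountEvent() {}
}
{% endhighlight %}

Steps 1 and 3 themselves use the regular savepoint tooling, e.g. `bin/flink savepoint <jobId>` to trigger the savepoint and `bin/flink run -s <savepointPath> ...` to resume from it.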
 
-The process of migrating state to adapt to changed schemas happens automatically, and independently for each state.
-This process is performed internally by Flink by first checking if the new serializer for the state has different
-serialization schema than the previous serializer; if so, the previous serializer is used to read the state to objects,
-and written back to bytes again with the new serializer.
+用来适应状态结构的改变而进行的状态迁移过程是自动发生的,并且状态之间是互相独立的。
+Flink 内部是这样来进行处理的,首先会检查新的序列化器相对比之前的序列化器是否有不同的状态结构;如果有,
+那么之前的序列化器用来读取状态数据字节到对象,然后使用新的序列化器将对象回写为字节。
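Conceptually (this is an illustrative sketch only, with placeholder names, not Flink's actual internal code), that migration amounts to a read-with-the-old-serializer / write-with-the-new-serializer round trip per state entry:

{% highlight java %}
import java.io.IOException;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.core.memory.DataInputView;
import org.apache.flink.core.memory.DataOutputView;

// Illustrative only: the real schema-compatibility check and migration live
// inside Flink's state backends and serializer compatibility machinery.
class StateMigrationSketch {
    static <T> void migrateOneValue(
            TypeSerializer<T> previousSerializer,
            TypeSerializer<T> newSerializer,
            DataInputView oldBytes,
            DataOutputView newBytes) throws IOException {
        T value = previousSerializer.deserialize(oldBytes); // read with the old schema
        newSerializer.serialize(value, newBytes);           // write back with the new schema
    }
}
{% endhighlight %}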
 
-Further details about the migration process is out of the scope of this documentation; please refer to
-[here]({{ site.baseurl }}/dev/stream/state/custom_serialization.html).
+更多的迁移过程细节不在本文档谈论的范围;可以参考[文档]({{ site.baseurl }}/zh/dev/stream/state/custom_serialization.html)。
 
-## Supported data types for schema evolution
+## 数据结构升级支持的数据类型
 
-Currently, schema evolution is supported only for POJO and Avro types. Therefore, if you care about schema evolution for
-state, it is currently recommended to always use either Pojo or Avro for state data types.
+目前,仅对于 POJO 以及 Avro 类型支持数据结构升级。
 
 Review comment:
   `目前,仅支持 POJO 和 Avro 类型的 schema 升级`
