klion26 commented on a change in pull request #9188: [FLINK-12940][docs-zh] Translate Apache Cassandra Connector page into…
URL: https://github.com/apache/flink/pull/9188#discussion_r305619109
##########
File path: docs/dev/connectors/cassandra.zh.md
##########
@@ -43,77 +43,68 @@ To use this connector, add the following dependency to your project:
</dependency>
{% endhighlight %}
-Note that the streaming connectors are currently __NOT__ part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/projectsetup/dependencies.html).
+请注意,流连接器当前不是 Flink 二进制发布包的一部分。了解如何链接它们以进行集群执行[此处]({{site.baseurl}}/zh/dev/projectsetup/dependencies.html)。
-## Installing Apache Cassandra
-There are multiple ways to bring up a Cassandra instance on local machine:
-1. Follow the instructions from [Cassandra Getting Started page](http://cassandra.apache.org/doc/latest/getting_started/index.html).
-2. Launch a container running Cassandra from [Official Docker Repository](https://hub.docker.com/_/cassandra/)
+## 安装 Apache Cassandra
+有多种方法可以在本地计算机上启动Cassandra实例:
+
+1. 按照[Cassandra入门页面](http://cassandra.apache.org/doc/latest/getting_started/index.html)中的说明进行操作。
+2. 从[Official Docker Repository](https://hub.docker.com/_/cassandra/)启动运行 Cassandra 的容器
## Cassandra Sinks
-### Configurations
+### 配置
-Flink's Cassandra sink is created by using the static CassandraSink.addSink(DataStream<IN> input) method.
-This method returns a CassandraSinkBuilder, which offers methods to further configure the sink, and finally `build()` the sink instance.
+Flink的 Cassandra 接收器是使用静态 `CassandraSink.addSink(DataStream<IN> input)` 方法创建的。这个方法返回一个 `CassandraSinkBuilder`,它提供了进一步配置接收器的方法,最后通过 `build()` 创建接收器实例。
-The following configuration methods can be used:
+可以使用以下配置方法:
1. _setQuery(String query)_
- * Sets the upsert query that is executed for every record the sink receives.
- * The query is internally treated as a CQL statement.
- * __DO__ set the upsert query for processing __Tuple__ data type.
- * __DO NOT__ set the query for processing __POJO__ data types.
+ * 设置为接收器接收的每个记录执行的 upsert 查询。
+ * 查询在内部被视为 CQL 语句。
+ * __DO__ 设置 upsert 查询以处理 __Tuple__ 数据类型。
+ * __DO NOT__ 设置查询以处理 __POJO__ 数据类型。
2. _setClusterBuilder()_
- * Sets the cluster builder that is used to configure the connection to cassandra with more sophisticated settings such as consistency level, retry policy etc.
+ * 将用于配置创建更复杂的 cassandra cluster builder,例如一致性级别,重试策略等。
3. _setHost(String host[, int port])_
- * Simple version of setClusterBuilder() with host/port information to connect to Cassandra instances
+ * 简单版本的 setClusterBuilder(),包含连接到 Cassandra 实例的主机/端口信息
4. _setMapperOptions(MapperOptions options)_
- * Sets the mapper options that are used to configure the DataStax ObjectMapper.
- * Only applies when processing __POJO__ data types.
+ * 设置用于配置 DataStax ObjectMapper 的映射器选项。
+ * 仅在处理 __POJO__ 数据类型时适用。
5. _setMaxConcurrentRequests(int maxConcurrentRequests, Duration timeout)_
- * Sets the maximum allowed number of concurrent requests with a timeout for acquiring permits to execute.
- * Only applies when __enableWriteAheadLog()__ is not configured.
+ * 设置允许执行许可的超时的最大并发请求数。
+ * 仅在未配置 __enableWriteAheadLog()__ 时适用。
6. _enableWriteAheadLog([CheckpointCommitter committer])_
- * An __optional__ setting
- * Allows exactly-once processing for non-deterministic algorithms.
+ * __optional__ 设置
+ * 允许对非确定性算法进行精确一次处理。
7. _setFailureHandler([CassandraFailureHandler failureHandler])_
- * An __optional__ setting
- * Sets the custom failure handler.
+ * __optional__ 设置。
+ * 设置自定义失败处理程序。
8. _build()_
- * Finalizes the configuration and constructs the CassandraSink instance.
+ * 完成配置并构造 CassandraSink 实例。
-### Write-ahead Log
+### 预写日志
-A checkpoint committer stores additional information about completed checkpoints
-in some resource. This information is used to prevent a full replay of the last
-completed checkpoint in case of a failure.
-You can use a `CassandraCommitter` to store these in a separate table in cassandra.
-Note that this table will NOT be cleaned up by Flink.
+一个 checkpoint 提交者存储有关已完成 checkpoint 的附加信息在某些资源中。此信息用于防止在发生故障时从最后一次完整保存的 checkpoint 中重播恢复数据。
+您可以使用 `CassandraCommitter` 将它们存储在 cassandra 的单独表中。请注意,Flink 不会清理此表。
-Flink can provide exactly-once guarantees if the query is idempotent (meaning it can be applied multiple
-times without changing the result) and checkpointing is enabled. In case of a failure the failed
-checkpoint will be replayed completely.
+如果查询是幂等的,Flink 启用了 checkpoint 情况下可以提供精确一次保证(意味着它可以应用多个时间而不更改结果)。如果失败则会从已完整保存的 checkpoint 中重播恢复数据。
-Furthermore, for non-deterministic programs the write-ahead log has to be enabled. For such a program
-the replayed checkpoint may be completely different than the previous attempt, which may leave the
-database in an inconsistent state since part of the first attempt may already be written.
-The write-ahead log guarantees that the replayed checkpoint is identical to the first attempt.
-Note that enabling this feature will have an adverse impact on latency.
+此外,对于非确定性程序,必须启用预写日志。对于这样的计划重播的 checkpoint 可能与之前的尝试完全不同,后者可能会离开数据库处于不一致状态,因为可能已经编写了第一次尝试的部分内容。预写日志保证重放的 checkpoint 与第一次尝试相同。请注意,启用此功能会对延迟产生负面影响。
-<p style="border-radius: 5px; padding: 5px" class="bg-danger"><b>Note</b>: The write-ahead log functionality is currently experimental. In many cases it is sufficient to use the connector without enabling it. Please report problems to the development mailing list.</p>
+<p style="border-radius: 5px; padding: 5px" class="bg-danger"><b>注意</b>:预写日志功能目前是实验性的。在许多情况下,使用连接器而不启用它就足够了。请将问题报告给开发邮件列表。</p>
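For context while reviewing, here is a minimal sketch of the builder API that the hunk above documents. It assumes the flink-connector-cassandra dependency shown at the top of the page; the `example.wordcount` keyspace/table and the localhost address are hypothetical placeholders, not names from the page:

{% highlight java %}
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;

public class CassandraSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Long>> wordCounts =
                env.fromElements(Tuple2.of("flink", 1L), Tuple2.of("cassandra", 2L));

        // Tuple data types require setQuery(); the upsert query runs once
        // per record the sink receives.
        CassandraSink.addSink(wordCounts)
                .setQuery("INSERT INTO example.wordcount(word, count) values (?, ?);")
                .setHost("127.0.0.1") // shorthand for a simple ClusterBuilder
                .build();

        env.execute("Cassandra sink sketch");
    }
}
{% endhighlight %}

A POJO stream would instead omit `setQuery()` and configure the DataStax object mapper through `setMapperOptions()`, per items 1 and 4 in the list above.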
Review comment:
`在许多情况下,使用连接器而不启用它就足够了` -> `在许多情况下,并不需要启用它`?
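Since the write-ahead-log wording is what this comment touches, a hedged sketch of how that option is switched on may help; this continues the sketch above (reusing `env` and `wordCounts`), and `enableWriteAheadLog()` without arguments falls back to a `CassandraCommitter`:

{% highlight java %}
// Additional imports for the same sketch:
import com.datastax.driver.core.Cluster;
import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;

// Inside the same main method as above:
env.enableCheckpointing(5000); // checkpointing is a precondition for exactly-once

CassandraSink.addSink(wordCounts)
        .setQuery("INSERT INTO example.wordcount(word, count) values (?, ?);")
        .setClusterBuilder(new ClusterBuilder() {
            @Override
            protected Cluster buildCluster(Cluster.Builder builder) {
                return builder.addContactPoint("127.0.0.1").build();
            }
        })
        // Experimental: replays after a failure write the same data as the
        // first attempt; checkpoint metadata lands in a Cassandra table that
        // Flink does not clean up.
        .enableWriteAheadLog()
        .build();
{% endhighlight %}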