gaoyunhaii commented on a change in pull request #18472:
URL: https://github.com/apache/flink/pull/18472#discussion_r805499075
##########
File path: docs/content.zh/docs/connectors/datastream/formats/azure_table_storage.md
##########

```diff
@@ -27,24 +27,24 @@ under the License.
 # Azure Table Storage
 
-This example is using the `HadoopInputFormat` wrapper to use an existing Hadoop input format implementation for accessing [Azure's Table Storage](https://docs.microsoft.com/en-us/azure/storage/tables/table-storage-overview).
+本例使用 `HadoopInputFormat` 包装器使用现有的 Hadoop input format 实现访问 [Azure's Table Storage](https://docs.microsoft.com/en-us/azure/storage/tables/table-storage-overview).
 
-1. Download and compile the `azure-tables-hadoop` project. The input format developed by the project is not yet available in Maven Central, therefore, we have to build the project ourselves.
-   Execute the following commands:
+1. 下载并编译 `azure-tables-hadoop` 项目。该项目开发的 input format 在 Maven 中心尚不可用,因此,我们必须自己构建该项目。
```

Review comment:
> 在 Maven 中心尚不可用 -> 在 Maven 中心仓库尚不存在 ?

##########
File path: docs/content.zh/docs/connectors/datastream/formats/azure_table_storage.md
##########

````diff
@@ -114,15 +114,14 @@ public class AzureTableExample {
       }
     });
 
-    // emit result (this works only locally)
+    // 发送结果(这仅在本地模式有效)
     fin.print();
 
-    // execute program
+    // 执行程序
     env.execute("Azure Example");
   }
 }
 ```
-
-The example shows how to access an Azure table and turn data into Flink's `DataStream` (more specifically, the type of the set is `DataStream<Tuple2<Text, WritableEntity>>`). With the `DataStream`, you can apply all known transformations to the DataStream.
+该示例展示了如何访问 Azure 表和如何将数据转换为 Flink 的 `DataStream`(更具体地说,集合的类型是 `DataStream<Tuple2<Text, WritableEntity>>`)。使用 `DataStream`,你可以将所有已知的 transformations 应用到 DataStream。
````

Review comment:
> 使用 `DataStream`,你可以将所有已知的 transformations 应用到 DataStream。 -> 你可以将所有已知的 transformations 应用到这个 DataStream 实例。

##########
File path: docs/content.zh/docs/connectors/datastream/formats/azure_table_storage.md
##########

````diff
@@ -59,13 +59,13 @@ curl https://flink.apache.org/q/quickstart.sh | bash
 </dependency>
 ```
 
-`flink-hadoop-compatibility` is a Flink package that provides the Hadoop input format wrappers.
-`microsoft-hadoop-azure` is adding the project we've build before to our project.
+`flink-hadoop-compatibility` 是一个提供 Hadoop input format 包装器的 Flink 包。
+`microsoft-hadoop-azure` 可以将之前构建的部分添加到项目中。
 
-The project is now ready for starting to code. We recommend to import the project into an IDE, such as IntelliJ. You should import it as a Maven project.
-Browse to the file `Job.java`. This is an empty skeleton for a Flink job.
+项目现在已经可以开始编码。我们建议将项目导入 IDE,例如 IntelliJ。你应该将其作为 Maven 项目导入。
````

Review comment:
> 项目现在已经可以开始编码 -> 现在可以开始进行项目的编码 ?

##########
File path: docs/content.zh/docs/connectors/datastream/formats/azure_table_storage.md
##########

```diff
@@ -27,24 +27,24 @@ under the License.
 # Azure Table Storage
 
-This example is using the `HadoopInputFormat` wrapper to use an existing Hadoop input format implementation for accessing [Azure's Table Storage](https://docs.microsoft.com/en-us/azure/storage/tables/table-storage-overview).
+本例使用 `HadoopInputFormat` 包装器使用现有的 Hadoop input format 实现访问 [Azure's Table Storage](https://docs.microsoft.com/en-us/azure/storage/tables/table-storage-overview).
```
Review comment:
> 包装器使用现有的 -> 包装器`来`使用现有的

##########
File path: docs/content.zh/docs/connectors/datastream/formats/azure_table_storage.md
##########

````diff
@@ -59,13 +59,13 @@ curl https://flink.apache.org/q/quickstart.sh | bash
 </dependency>
 ```
 
-`flink-hadoop-compatibility` is a Flink package that provides the Hadoop input format wrappers.
-`microsoft-hadoop-azure` is adding the project we've build before to our project.
+`flink-hadoop-compatibility` 是一个提供 Hadoop input format 包装器的 Flink 包。
+`microsoft-hadoop-azure` 可以将之前构建的部分添加到项目中。
 
-The project is now ready for starting to code. We recommend to import the project into an IDE, such as IntelliJ. You should import it as a Maven project.
-Browse to the file `Job.java`. This is an empty skeleton for a Flink job.
+项目现在已经可以开始编码。我们建议将项目导入 IDE,例如 IntelliJ。你应该将其作为 Maven 项目导入。
+浏览到文件 `Job.java`。这是 Flink 作业的初始框架。
````

Review comment:
> 浏览到 -> 跳转到 ?
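The reviewed doc wires Flink's Hadoop compatibility layer plus the locally built `azure-tables-hadoop` artifact into a Maven quickstart project. For context, the dependency block it refers to would look roughly like the sketch below; the coordinates and version values here are assumptions for illustration, not values quoted in this thread:

```xml
<!-- Hypothetical sketch of the pom.xml fragment the reviewed doc describes.
     groupId/artifactId/version values are placeholders (assumptions),
     not taken from this review thread. -->
<dependencies>
  <!-- Flink's wrappers around Hadoop input formats (HadoopInputFormat) -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-hadoop-compatibility_2.12</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <!-- The azure-tables-hadoop artifact, built and installed into the local
       Maven repository beforehand, since it is not published to Maven Central -->
  <dependency>
    <groupId>com.microsoft.hadoop</groupId>
    <artifactId>microsoft-hadoop-azure</artifactId>
    <version>0.0.5</version>
  </dependency>
</dependencies>
```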
