wuchong commented on a change in pull request #11232: [FLINK-16133] [docs] /ops/filesystems/azure.zh.md
URL: https://github.com/apache/flink/pull/11232#discussion_r388061934
 
 

 ##########
 File path: docs/ops/filesystems/azure.zh.md
 ##########
 @@ -23,60 +23,54 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Azure Blob Storage](https://docs.microsoft.com/en-us/azure/storage/) is a Microsoft-managed service providing cloud storage for a variety of use cases.
-You can use Azure Blob Storage with Flink for **reading** and **writing data** as well in conjunction with the [streaming **state backends**]({{ site.baseurl }}/ops/state/state_backends.html)
+[Azure Blob 存储](https://docs.microsoft.com/en-us/azure/storage/) 是一项由 Microsoft 管理的服务,能提供多种应用场景下的云存储。
+Azure Blob 存储可与 Flink 一起使用以**读取**和**写入数据**,以及与[流 State Backend]({{ site.baseurl }}/zh/ops/state/state_backends.html) 结合使用。
 
 * This will be replaced by the TOC
 {:toc}
 
-You can use Azure Blob Storage objects like regular files by specifying paths in the following format:
+通过以下格式指定路径,Azure Blob 存储对象可类似于普通文件使用:
 
 {% highlight plain %}
 wasb://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path>
 
-// SSL encrypted access
+// SSL 加密访问
 wasbs://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path>
 {% endhighlight %}
 
-See below for how to use Azure Blob Storage in a Flink job:
+参见以下代码了解如何在 Flink 作业中使用 Azure Blob 存储:
 
 {% highlight java %}
-// Read from Azure Blob storage
+// 读取 Azure Blob 存储
 env.readTextFile("wasb://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path>");
 
-// Write to Azure Blob storage
+// 写入 Azure Blob 存储
 stream.writeAsText("wasb://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path>")
 
-// Use Azure Blob Storage as FsStatebackend
+// 将 Azure Blob 存储用作 FsStateBackend
 env.setStateBackend(new FsStateBackend("wasb://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path>"));
 {% endhighlight %}
 
-### Shaded Hadoop Azure Blob Storage file system
+### Shaded Hadoop Azure Blob 存储文件系统
 
-To use `flink-azure-fs-hadoop,` copy the respective JAR file from the `opt` directory to the `plugins` directory of your Flink distribution before starting Flink, e.g.
+为使用 `flink-azure-fs-hadoop`,在启动 Flink 之前,将对应的 JAR 文件从 `opt` 目录复制到 Flink 发行版 `plugins` 目录下的一个文件夹中,例如:
 
 {% highlight bash %}
 mkdir ./plugins/azure-fs-hadoop
 cp ./opt/flink-azure-fs-hadoop-{{ site.version }}.jar ./plugins/azure-fs-hadoop/
 {% endhighlight %}
 
-`flink-azure-fs-hadoop` registers default FileSystem wrappers for URIs with the *wasb://* and *wasbs://* (SSL encrypted access) scheme.
+`flink-azure-fs-hadoop` 为使用 *wasb://* 和 *wasbs://* (SSL 加密访问) 的 URI 注册了默认的文件系统包装器。
 
-### Credentials Configuration
-
-Hadoop's Azure Filesystem supports configuration of credentials via the Hadoop configuration as
-outlined in the [Hadoop Azure Blob Storage documentation](https://hadoop.apache.org/docs/current/hadoop-azure/index.html#Configuring_Credentials).
-For convenience Flink forwards all Flink configurations with a key prefix of `fs.azure` to the
-Hadoop configuration of the filesystem. Consequentially, the azure blob storage key can be configured
-in `flink-conf.yaml` via:
+### 凭据配置
+
+Hadoop 的 Azure 文件系统支持通过 Hadoop 配置来配置凭据,如 [Hadoop Azure Blob Storage documentation](https://hadoop.apache.org/docs/current/hadoop-azure/index.html#Configuring_Credentials) 所述。
 
 Review comment:
  ```suggestion
  Hadoop 的 Azure 文件系统支持通过 Hadoop 配置来配置凭据,如 [Hadoop Azure Blob Storage 文档](https://hadoop.apache.org/docs/current/hadoop-azure/index.html#Configuring_Credentials) 所述。
  ```
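
For context on the passage under review: the hunk is truncated just before the `flink-conf.yaml` example it leads into. Per the linked Hadoop Azure Blob Storage documentation, the storage key is supplied through a `fs.azure.account.key.*` property, which Flink forwards to the filesystem's Hadoop configuration because of the `fs.azure` prefix. A minimal sketch, with `<account_name>` and `<azure_storage_key>` as placeholders:

```yaml
# flink-conf.yaml — placeholder account name and key; Flink forwards all
# configuration keys prefixed with fs.azure to the Hadoop configuration
fs.azure.account.key.<account_name>.blob.core.windows.net: <azure_storage_key>
```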

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
