This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.15
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.15 by this push:
     new ea4f9aa5139 [hotfix] Fix broken link
ea4f9aa5139 is described below

commit ea4f9aa5139ad3525ab49cd66eadb51ab7767c1d
Author: Chesnay Schepler <[email protected]>
AuthorDate: Mon Oct 10 12:04:51 2022 +0200

    [hotfix] Fix broken link
---
 docs/content.zh/docs/deployment/filesystems/s3.md | 2 +-
 docs/content/docs/deployment/filesystems/s3.md    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/content.zh/docs/deployment/filesystems/s3.md b/docs/content.zh/docs/deployment/filesystems/s3.md
index bf41d97265a..5bb316b038b 100644
--- a/docs/content.zh/docs/deployment/filesystems/s3.md
+++ b/docs/content.zh/docs/deployment/filesystems/s3.md
@@ -63,7 +63,7 @@ env.setStateBackend(new FsStateBackend("s3://<your-bucket>/<endpoint>"));
 Flink 提供两种文件系统用来与 S3 交互:`flink-s3-fs-presto` 和 `flink-s3-fs-hadoop`。两种实现都是独立的且没有依赖项,因此使用时无需将 Hadoop 添加至 classpath。
 
  - `flink-s3-fs-presto`,通过 *s3://* 和 *s3p://* 两种 scheme 使用,基于 [Presto project](https://prestodb.io/)。
-  可以使用[和 Presto 文件系统相同的配置项](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration)进行配置,方式为将配置添加到 `flink-conf.yaml` 文件中。如果要在 S3 中使用 checkpoint,推荐使用 Presto S3 文件系统。
+  可以使用[和 Presto 文件系统相同的配置项](https://prestodb.io/docs/0.272/connector/hive.html#amazon-s3-configuration)进行配置,方式为将配置添加到 `flink-conf.yaml` 文件中。如果要在 S3 中使用 checkpoint,推荐使用 Presto S3 文件系统。
 
  - `flink-s3-fs-hadoop`,通过 *s3://* 和 *s3a://* 两种 scheme 使用, 基于 [Hadoop Project](https://hadoop.apache.org/)。
  本文件系统可以使用类似 [Hadoop S3A 的配置项](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A)进行配置,方式为将配置添加到 `flink-conf.yaml` 文件中。
diff --git a/docs/content/docs/deployment/filesystems/s3.md b/docs/content/docs/deployment/filesystems/s3.md
index 724805ca3ef..7377557f268 100644
--- a/docs/content/docs/deployment/filesystems/s3.md
+++ b/docs/content/docs/deployment/filesystems/s3.md
@@ -63,7 +63,7 @@ Flink provides two file systems to talk to Amazon S3, `flink-s3-fs-presto` and `
 Both implementations are self-contained with no dependency footprint, so there is no need to add Hadoop to the classpath to use them.
 
  - `flink-s3-fs-presto`, registered under the scheme *s3://* and *s3p://*, is based on code from the [Presto project](https://prestodb.io/).
-  You can configure it using [the same configuration keys as the Presto file system](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration), by adding the configurations to your `flink-conf.yaml`. The Presto S3 implementation is the recommended file system for checkpointing to S3.
+  You can configure it using [the same configuration keys as the Presto file system](https://prestodb.io/docs/0.272/connector/hive.html#amazon-s3-configuration), by adding the configurations to your `flink-conf.yaml`. The Presto S3 implementation is the recommended file system for checkpointing to S3.
 
  - `flink-s3-fs-hadoop`, registered under *s3://* and *s3a://*, based on code from the [Hadoop Project](https://hadoop.apache.org/).
  The file system can be [configured using Hadoop's s3a configuration keys](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A) by adding the configurations to your `flink-conf.yaml`.
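For readers following the patched docs: both hunks describe configuring the S3 file systems by adding keys to `flink-conf.yaml`. A minimal sketch of what such entries look like, assuming the `s3.access-key`, `s3.secret-key`, and `s3.endpoint` options from the Flink S3 documentation (bucket and endpoint values here are placeholders):

```yaml
# Illustrative flink-conf.yaml fragment for flink-s3-fs-presto or flink-s3-fs-hadoop.
# Prefer IAM roles where available; explicit keys are shown only as an example.
s3.access-key: YOUR_ACCESS_KEY
s3.secret-key: YOUR_SECRET_KEY
# Optional: custom endpoint, e.g. for an S3-compatible object store.
s3.endpoint: https://s3.example.com
```

Flink forwards `s3.*` options to the underlying Presto or Hadoop S3A implementation, which is why the Presto and Hadoop configuration references linked in the diff apply.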
