This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 41e3811ae889fd8ec08a502eefe6c9d63c062dd5
Author: Robert Metzger <[email protected]>
AuthorDate: Tue Dec 1 10:33:33 2020 +0100

    [hotfix][docs] Fix various broken links in the docs
    
    I used this to identify broken links in the zh.md files:
    
        git grep -E "[^z][^h]\.md " -- '*.zh.md'
---
 docs/deployment/index.zh.md                   | 30 +++++++++++++--------------
 docs/deployment/resource-providers/yarn.zh.md | 24 ++++++++++-----------
 docs/dev/batch/hadoop_compatibility.md        |  2 +-
 docs/dev/batch/hadoop_compatibility.zh.md     |  3 +++
 docs/dev/project-configuration.md             |  6 +++++-
 docs/dev/project-configuration.zh.md          |  6 +++++-
 docs/dev/table/connectors/hive/index.md       |  2 +-
 docs/dev/table/streaming/joins.zh.md          | 16 +++++++-------
 8 files changed, 50 insertions(+), 39 deletions(-)

diff --git a/docs/deployment/index.zh.md b/docs/deployment/index.zh.md
index 8f0518d..54c42ce 100644
--- a/docs/deployment/index.zh.md
+++ b/docs/deployment/index.zh.md
@@ -28,7 +28,7 @@ under the License.
 Flink is a versatile framework, supporting many different deployment scenarios in a mix and match fashion.
 
 Below, we briefly explain the building blocks of a Flink cluster, their purpose and available implementations.
-If you just want to start Flink locally, we recommend setting up a [Standalone Cluster]({% link deployment/resource-providers/standalone/index.md %}).
+If you just want to start Flink locally, we recommend setting up a [Standalone Cluster]({% link deployment/resource-providers/standalone/index.zh.md %}).
 
 * This will be replaced by the TOC
 {:toc}
@@ -63,11 +63,11 @@ When deploying Flink, there are often multiple options available for each buildi
             </td>
             <td>
                 <ul>
-                    <li><a href="{% link deployment/cli.md %}">Command Line 
Interface</a></li>
-                    <li><a href="{% link ops/rest_api.md %}">REST 
Endpoint</a></li>
-                    <li><a href="{% link dev/table/sqlClient.md %}">SQL 
Client</a></li>
-                    <li><a href="{% link deployment/repls/python_shell.md 
%}">Python REPL</a></li>
-                    <li><a href="{% link deployment/repls/scala_shell.md 
%}">Scala REPL</a></li>
+                    <li><a href="{% link deployment/cli.zh.md %}">Command Line 
Interface</a></li>
+                    <li><a href="{% link ops/rest_api.zh.md %}">REST 
Endpoint</a></li>
+                    <li><a href="{% link dev/table/sqlClient.zh.md %}">SQL 
Client</a></li>
+                    <li><a href="{% link deployment/repls/python_shell.zh.md 
%}">Python REPL</a></li>
+                    <li><a href="{% link deployment/repls/scala_shell.zh.md 
%}">Scala REPL</a></li>
                 </ul>
             </td>
         </tr>
@@ -84,11 +84,11 @@ When deploying Flink, there are often multiple options available for each buildi
             </td>
             <td>
                 <ul id="jmimpls">
-                    <li><a href="{% link 
deployment/resource-providers/standalone/index.md %}">Standalone</a> (this is 
the barebone mode that requires just JVMs to be launched. Deployment with <a 
href="{% link deployment/resource-providers/standalone/docker.md %}">Docker, 
Docker Swarm / Compose</a>, <a href="{% link 
deployment/resource-providers/standalone/kubernetes.md %}">non-native 
Kubernetes</a> and other models is possible through manual setup in this mode)
+                    <li><a href="{% link 
deployment/resource-providers/standalone/index.zh.md %}">Standalone</a> (this 
is the barebone mode that requires just JVMs to be launched. Deployment with <a 
href="{% link deployment/resource-providers/standalone/docker.zh.md %}">Docker, 
Docker Swarm / Compose</a>, <a href="{% link 
deployment/resource-providers/standalone/kubernetes.zh.md %}">non-native 
Kubernetes</a> and other models is possible through manual setup in this mode)
                     </li>
-                    <li><a href="{% link 
deployment/resource-providers/native_kubernetes.md %}">Kubernetes</a></li>
-                    <li><a href="{% link deployment/resource-providers/yarn.md 
%}">YARN</a></li>
-                    <li><a href="{% link 
deployment/resource-providers/mesos.md %}">Mesos</a></li>
+                    <li><a href="{% link 
deployment/resource-providers/native_kubernetes.zh.md %}">Kubernetes</a></li>
+                    <li><a href="{% link 
deployment/resource-providers/yarn.zh.md %}">YARN</a></li>
+                    <li><a href="{% link 
deployment/resource-providers/mesos.zh.md %}">Mesos</a></li>
                 </ul>
             </td>
         </tr>
@@ -112,8 +112,8 @@ When deploying Flink, there are often multiple options available for each buildi
             </td>
             <td>
                 <ul>
-                    <li><a href="{% link deployment/ha/zookeeper_ha.md 
%}">Zookeeper</a></li>
-                    <li><a href="{% link deployment/ha/kubernetes_ha.md 
%}">Kubernetes HA</a></li>
+                    <li><a href="{% link deployment/ha/zookeeper_ha.zh.md 
%}">Zookeeper</a></li>
+                    <li><a href="{% link deployment/ha/kubernetes_ha.zh.md 
%}">Kubernetes HA</a></li>
                 </ul>
             </td>
         </tr>
@@ -122,7 +122,7 @@ When deploying Flink, there are often multiple options available for each buildi
             <td>
                 For checkpointing (recovery mechanism for streaming jobs) Flink relies on external file storage systems
             </td>
-            <td>See <a href="{% link deployment/filesystems/index.md %}">FileSystems</a> page.</td>
+            <td>See <a href="{% link deployment/filesystems/index.zh.md %}">FileSystems</a> page.</td>
         </tr>
         <tr>
             <td>Resource Provider</td>
@@ -136,7 +136,7 @@ When deploying Flink, there are often multiple options available for each buildi
             <td>
                 Flink components report internal metrics and Flink jobs can report additional, job specific metrics as well.
             </td>
-            <td>See <a href="{% link deployment/metric_reporters.md %}">Metrics Reporter</a> page.</td>
+            <td>See <a href="{% link deployment/metric_reporters.zh.md %}">Metrics Reporter</a> page.</td>
         </tr>
         <tr>
             <td>Application-level data sources and sinks</td>
@@ -151,7 +151,7 @@ When deploying Flink, there are often multiple options available for each buildi
                     <li>ElasticSearch</li>
                     <li>Apache Cassandra</li>
                 </ul>
-                See <a href="{% link dev/connectors/index.md 
%}">Connectors</a> page.
+                See <a href="{% link dev/connectors/index.zh.md 
%}">Connectors</a> page.
             </td>
         </tr>
     </tbody>
diff --git a/docs/deployment/resource-providers/yarn.zh.md b/docs/deployment/resource-providers/yarn.zh.md
index 759d203..0cb589b 100644
--- a/docs/deployment/resource-providers/yarn.zh.md
+++ b/docs/deployment/resource-providers/yarn.zh.md
@@ -42,7 +42,7 @@ Flink can dynamically allocate and de-allocate TaskManager resources depending o
 This *Getting Started* section assumes a functional YARN environment, starting from version 2.4.1. YARN environments are provided most conveniently through services such as Amazon EMR, Google Cloud DataProc or products like Cloudera. [Manually setting up a YARN environment locally](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html) or [on a cluster](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html) is not [...]
 
 - Make sure your YARN cluster is ready for accepting Flink applications by running `yarn top`. It should show no error messages.
-- Download a recent Flink distribution from the [download page]({{ site.download_url }}) and unpack it.
+- Download a recent Flink distribution from the [download page]({{ site.zh_download_url }}) and unpack it.
 - **Important** Make sure that the `HADOOP_CLASSPATH` environment variable is set up (it can be checked by running `echo $HADOOP_CLASSPATH`). If not, set it up using 
 
 {% highlight bash %}
@@ -78,7 +78,7 @@ Congratulations! You have successfully run a Flink application by deploying Flin
 
 ## Deployment Modes Supported by Flink on YARN
 
-For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
+For production use, we recommend deploying Flink Applications in the [Per-Job or Application Mode]({% link deployment/index.zh.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
 ### Application Mode
 
@@ -117,7 +117,7 @@ client.
 
 ### Per-Job Cluster Mode
 
-The Per-job Cluster mode will launch a Flink cluster on YARN, then run the provided application jar locally and finally submit the JobGraph to the JobManager on YARN. If you pass the `--detached` argument, the client will stop once the submission is accepted.
+The Per-Job Cluster mode will launch a Flink cluster on YARN, then run the provided application jar locally and finally submit the JobGraph to the JobManager on YARN. If you pass the `--detached` argument, the client will stop once the submission is accepted.
 
 The YARN cluster will stop once the job has stopped.
 
@@ -159,7 +159,7 @@ You can **re-attach to a YARN session** using the following command:
 ./bin/yarn-session.sh -id application_XXXX_YY
 ```
 
-Besides passing [configuration]({% link deployment/config.md %}) via the `conf/flink-conf.yaml` file, you can also pass any configuration at submission time to the `./bin/yarn-session.sh` client using `-Dkey=value` arguments.
+Besides passing [configuration]({% link deployment/config.zh.md %}) via the `conf/flink-conf.yaml` file, you can also pass any configuration at submission time to the `./bin/yarn-session.sh` client using `-Dkey=value` arguments.
 
 The YARN session client also has a few "shortcut arguments" for commonly used settings. They can be listed with `./bin/yarn-session.sh -h`.
 
@@ -169,7 +169,7 @@ The YARN session client also has a few "shortcut arguments" for commonly used se
 
 ### Configuring Flink on YARN
 
-The YARN-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#yarn).
+The YARN-specific configurations are listed on the [configuration page]({% link deployment/config.zh.md %}#yarn).
 
 The following configuration parameters are managed by Flink on YARN, as they might get overwritten by the framework at runtime:
 - `jobmanager.rpc.address` (dynamically set to the address of the JobManager container by Flink on YARN)
@@ -182,17 +182,17 @@ If you need to pass additional Hadoop configuration files to Flink, you can do s
 
 A JobManager running on YARN will request additional TaskManagers, if it can not run all submitted jobs with the existing resources. In particular when running in Session Mode, the JobManager will, if needed, allocate additional TaskManagers as additional jobs are submitted. Unused TaskManagers are freed up again after a timeout.
 
-The memory configurations for JobManager and TaskManager processes will be respected by the YARN implementation. The number of reported VCores is by default equal to the number of configured slots per TaskManager. The [yarn.containers.vcores]({% link deployment/config.md %}#yarn-containers-vcores) allows overwriting the number of vcores with a custom value. In order for this parameter to work you should enable CPU scheduling in your YARN cluster.
+The memory configurations for JobManager and TaskManager processes will be respected by the YARN implementation. The number of reported VCores is by default equal to the number of configured slots per TaskManager. The [yarn.containers.vcores]({% link deployment/config.zh.md %}#yarn-containers-vcores) allows overwriting the number of vcores with a custom value. In order for this parameter to work you should enable CPU scheduling in your YARN cluster.
 
-Failed containers (including the JobManager) are replaced by YARN. The maximum number of JobManager container restarts is configured via [yarn.application-attempts]({% link deployment/config.md %}#yarn-application-attempts) (default 1). The YARN Application will fail once all attempts are exhausted.
+Failed containers (including the JobManager) are replaced by YARN. The maximum number of JobManager container restarts is configured via [yarn.application-attempts]({% link deployment/config.zh.md %}#yarn-application-attempts) (default 1). The YARN Application will fail once all attempts are exhausted.
 
 ### High-Availability on YARN
 
-High-Availability on YARN is achieved through a combination of YARN and a [high availability service]({% link deployment/ha/index.md %}).
+High-Availability on YARN is achieved through a combination of YARN and a [high availability service]({% link deployment/ha/index.zh.md %}).
 
 Once a HA service is configured, it will persist JobManager metadata and perform leader elections.
 
-YARN is taking care of restarting failed JobManagers. The maximum number of JobManager restarts is defined through two configuration parameters. First Flink's [yarn.application-attempts]({% link deployment/config.md %}#yarn-application-attempts) configuration will default 2. This value is limited by YARN's [yarn.resourcemanager.am.max-attempts](https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml), which also defaults to 2.
+YARN is taking care of restarting failed JobManagers. The maximum number of JobManager restarts is defined through two configuration parameters. First Flink's [yarn.application-attempts]({% link deployment/config.zh.md %}#yarn-application-attempts) configuration will default 2. This value is limited by YARN's [yarn.resourcemanager.am.max-attempts](https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml), which also defaults to 2.
 
 Note that Flink is managing the `high-availability.cluster-id` configuration parameter when running on YARN. **You should not overwrite this parameter when running an HA cluster on YARN**. The cluster ID is used to distinguish multiple HA clusters in the HA backend (for example Zookeeper). Overwriting this configuration parameter can lead to multiple YARN clusters affecting each other.
 
@@ -213,7 +213,7 @@ For providing Flink with the required Hadoop dependencies, we recommend setting
 
 If that is not possible, the dependencies can also be put into the `lib/` folder of Flink. 
 
-Flink also offers pre-bundled Hadoop fat jars for placing them in the `lib/` folder, on the [Downloads / Additional Components]({{site.download_url}}#additional-components) section of the website. These pre-bundled fat jars are shaded to avoid dependency conflicts with common libraries. The Flink community is not testing the YARN integration against these pre-bundled jars. 
+Flink also offers pre-bundled Hadoop fat jars for placing them in the `lib/` folder, on the [Downloads / Additional Components]({{site.zh_download_url}}#additional-components) section of the website. These pre-bundled fat jars are shaded to avoid dependency conflicts with common libraries. The Flink community is not testing the YARN integration against these pre-bundled jars. 
 
 ### Running Flink on YARN behind Firewalls
 
@@ -221,11 +221,11 @@ Some YARN clusters use firewalls for controlling the network traffic between the
 In those setups, Flink jobs can only be submitted to a YARN session from within the cluster's network (behind the firewall).
 If this is not feasible for production use, Flink allows to configure a port range for its REST endpoint, used for the client-cluster communication. With this range configured, users can also submit jobs to Flink crossing the firewall.
 
-The configuration parameter for specifying the REST endpoint port is [rest.bind-port]({% link deployment/config.md %}#rest-bind-port). This configuration option accepts single ports (for example: "50010"), ranges ("50000-50025"), or a combination of both.
+The configuration parameter for specifying the REST endpoint port is [rest.bind-port]({% link deployment/config.zh.md %}#rest-bind-port). This configuration option accepts single ports (for example: "50010"), ranges ("50000-50025"), or a combination of both.
 
 ### User jars & Classpath
 
-By default Flink will include the user jars into the system classpath when running a single job. This behavior can be controlled with the [yarn.per-job-cluster.include-user-jar]({% link deployment/config.md %}#yarn-per-job-cluster-include-user-jar) parameter.
+By default Flink will include the user jars into the system classpath when running a single job. This behavior can be controlled with the [yarn.per-job-cluster.include-user-jar]({% link deployment/config.zh.md %}#yarn-per-job-cluster-include-user-jar) parameter.
 
 When setting this to `DISABLED` Flink will include the jar in the user classpath instead.
 
diff --git a/docs/dev/batch/hadoop_compatibility.md b/docs/dev/batch/hadoop_compatibility.md
index be26996..f7e76b1 100644
--- a/docs/dev/batch/hadoop_compatibility.md
+++ b/docs/dev/batch/hadoop_compatibility.md
@@ -64,7 +64,7 @@ and Reducers.
 </dependency>
 {% endhighlight %}
 
-If you want to run your Flink application locally (from your IDE), you also need to add 
+If you want to run your Flink application locally (e.g. from your IDE), you also need to add 
 a `hadoop-client` dependency such as:
 
 {% highlight xml %}
diff --git a/docs/dev/batch/hadoop_compatibility.zh.md b/docs/dev/batch/hadoop_compatibility.zh.md
index a85cbf0..89a82d4 100644
--- a/docs/dev/batch/hadoop_compatibility.zh.md
+++ b/docs/dev/batch/hadoop_compatibility.zh.md
@@ -64,6 +64,9 @@ and Reducers.
 </dependency>
 {% endhighlight %}
 
+If you want to run your Flink application locally (e.g. from your IDE), you also need to add 
+a `hadoop-client` dependency such as:
+
 {% highlight xml %}
 <dependency>
     <groupId>org.apache.hadoop</groupId>
diff --git a/docs/dev/project-configuration.md b/docs/dev/project-configuration.md
index 0a81ccbe..d5ca58c 100644
--- a/docs/dev/project-configuration.md
+++ b/docs/dev/project-configuration.md
@@ -153,7 +153,11 @@ for details on how to build Flink for a specific Scala version.
 
 If you want to use Flink with Hadoop, you need to have a Flink setup that includes the Hadoop dependencies, rather than
 adding Hadoop as an application dependency. Flink will use the Hadoop dependencies specified by the `HADOOP_CLASSPATH`
-environment variable, which can usually be set by calling `export HADOOP_CLASSPATH=``hadoop classpath```
+environment variable, which can be set in the following way:
+
+{% highlight bash %}
+export HADOOP_CLASSPATH=`hadoop classpath`
+{% endhighlight %}
 
 There are two main reasons for that design:
 
diff --git a/docs/dev/project-configuration.zh.md b/docs/dev/project-configuration.zh.md
index 0a81ccbe..d5ca58c 100644
--- a/docs/dev/project-configuration.zh.md
+++ b/docs/dev/project-configuration.zh.md
@@ -153,7 +153,11 @@ for details on how to build Flink for a specific Scala version.
 
 If you want to use Flink with Hadoop, you need to have a Flink setup that includes the Hadoop dependencies, rather than
 adding Hadoop as an application dependency. Flink will use the Hadoop dependencies specified by the `HADOOP_CLASSPATH`
-environment variable, which can usually be set by calling `export HADOOP_CLASSPATH=``hadoop classpath```
+environment variable, which can be set in the following way:
+
+{% highlight bash %}
+export HADOOP_CLASSPATH=`hadoop classpath`
+{% endhighlight %}
 
 There are two main reasons for that design:
 
diff --git a/docs/dev/table/connectors/hive/index.md b/docs/dev/table/connectors/hive/index.md
index 9e13c7d..0438d4f 100644
--- a/docs/dev/table/connectors/hive/index.md
+++ b/docs/dev/table/connectors/hive/index.md
@@ -92,7 +92,7 @@ to make the integration work in Table API program or SQL in SQL Client.
 Alternatively, you can put these dependencies in a dedicated folder, and add them to classpath with the `-C`
 or `-l` option for Table API program or SQL Client respectively.
 
-Apache Hive is built on Hadoop, so you need to provide Hadoop dependenies, by setting the `HADOOP_CLASSPATH` 
+Apache Hive is built on Hadoop, so you need to provide Hadoop dependencies, by setting the `HADOOP_CLASSPATH` 
 environment variable:
 ```
 export HADOOP_CLASSPATH=`hadoop classpath`
diff --git a/docs/dev/table/streaming/joins.zh.md b/docs/dev/table/streaming/joins.zh.md
index f24bf67..d548f09 100644
--- a/docs/dev/table/streaming/joins.zh.md
+++ b/docs/dev/table/streaming/joins.zh.md
@@ -71,7 +71,7 @@ WHERE o.id = s.orderId AND
 时态表 Join
 --------------------------
 <span class="label label-danger">注意</span> 只在 Blink planner 中支持。
-<span class="label label-danger">注意</span> 时态表有两种方式去定义,即 [时态表函数]({% link 
dev/table/streaming/temporal_tables.zh.md %})#时态表函数) 和 [时态表 DDL]({% link 
dev/table/streaming/temporal_tables.zh.md %}#时态表),使用时态表函数的时态表 join 只支持在 Table 
API 中使用,使用时态表 DDL 的时态表 join 只支持在 SQL 中使用。
+<span class="label label-danger">注意</span> 时态表有两种方式去定义,即 [时态表函数]({% link 
dev/table/streaming/temporal_tables.zh.md %}#时态表函数) 和 [时态表 DDL]({% link 
dev/table/streaming/temporal_tables.zh.md %}#时态表),使用时态表函数的时态表 join 只支持在 Table 
API 中使用,使用时态表 DDL 的时态表 join 只支持在 SQL 中使用。
 请参考[时态表]({% link dev/table/streaming/temporal_tables.zh.md 
%})页面获取更多关于时态表和时态表函数的区别。
 
 时态表 Join 意味着对任意表(左输入/探针侧)去关联一个时态表(右输入/构建侧)的版本,时态表可以是一张跟踪所有变更记录的表(例如数据库表的 changelog,包含多个表快照),也可以是物化所有变更之后的表(例如数据库表,只有最新表快照)。
@@ -88,17 +88,17 @@ ON table1.column-name1 = table2.column-name1
 <a name="processing-time-temporal-joins"></a>
 
 ### 基于事件时间的时态 Join
-基于事件时间的时态表 join 使用(左侧输入/探针侧) 的 事件时间 去关联(右侧输入/构建侧) [版本表](temporal_tables.html#声明版本表) 对应的版本。
+基于事件时间的时态表 join 使用(左侧输入/探针侧) 的 事件时间 去关联(右侧输入/构建侧) [版本表]({% link dev/table/streaming/temporal_tables.zh.md %}#声明版本表) 对应的版本。
 基于事件时间的时态表 join 仅支持关版本表或版本视图,版本表或版本视图只能是一个 changelog 流。 但是,Flink 支持将 append-only 流转换成 changelog 流,因此版本表也可以来自一个 append-only 流。
-查看[声明版本视图](temporal_tables.html#声明版本视图) 获取更多的信息关于如何声明一张来自 append-only 流的版本表。
+查看[声明版本视图]({% link dev/table/streaming/temporal_tables.zh.md %}#声明版本视图) 获取更多的信息关于如何声明一张来自 append-only 流的版本表。
 
 将事件时间作为时间属性时,可将 _过去_ 时间属性与时态表一起使用。这允许对两个表中在相同时间点的记录执行 Join 操作。
 与基于处理时间的时态 Join 相比,时态表不仅将构建侧记录的最新版本(是否最新由所定义的主键所决定)保存在 state 中,同时也会存储自上一个 watermarks 以来的所有版本(按时间区分)。
 
-例如,在探针侧表新插入一条事件时间时间为 `12:30:00` 的记录,它将和构建侧表时间点为 `12:30:00` 的版本根据[时态表的概念](temporal_tables.html)进行 Join 运算。
+例如,在探针侧表新插入一条事件时间时间为 `12:30:00` 的记录,它将和构建侧表时间点为 `12:30:00` 的版本根据[时态表的概念]({% link dev/table/streaming/temporal_tables.zh.md %})进行 Join 运算。
 因此,新插入的记录仅与时间戳小于等于 `12:30:00` 的记录进行 Join 计算(由主键决定哪些时间点的数据将参与计算)。
 
-通过定义事件时间,[watermarks]({{ site.baseurl }}/dev/event_time.html) 允许 Join 运算不断向前滚动,丢弃不再需要的构建侧快照。因为不再需要时间戳更低或相等的记录。
+通过定义事件时间,[watermarks]({% link  dev/event_time.zh.md %}) 允许 Join 运算不断向前滚动,丢弃不再需要的构建侧快照。因为不再需要时间戳更低或相等的记录。
 
 下面的例子展示了订单流关联产品表这个场景举例,`orders` 表包含了来自 Kafka 的实时订单流,`product_changelog` 表来自数据库表 `products` 的 changelog , 产品的价格在数据库表 `products` 中是随时间实时变化的。
 
@@ -192,7 +192,7 @@ o_005    18:00:00   NULL         NULL         NULL
 
 ### 基于处理时间的时态 Join
 
-基于处理时间的时态表 join 使用任意表 (左侧输入/探针侧) 的 处理时间 去关联 (右侧输入/构建侧) [普通表](temporal_tables.html#声明普通表)的最新版本.
+基于处理时间的时态表 join 使用任意表 (左侧输入/探针侧) 的 处理时间 去关联 (右侧输入/构建侧) [普通表]({% link dev/table/streaming/temporal_tables.zh.md %}#声明普通表)的最新版本.
 基于处理时间的时态表 join 当前只支持关联普通表或普通视图,且支持普通表或普通视图当前只能是 append-only 流。
 
 如果将处理时间作为时间属性,_过去_ 时间属性将无法与时态表一起使用。根据定义,处理时间总会是当前时间戳。
@@ -254,7 +254,7 @@ FROM
 时态表函数 Join
 --------------------------
 
-时态表函数 Join 连接了一个递增表(左输入/探针侧)和一个时态表(右输入/构建侧),即一个随时间变化且不断追踪其改动的表。请参考[时态表](temporal_tables.html)的相关章节查看更多细节。
+时态表函数 Join 连接了一个递增表(左输入/探针侧)和一个时态表(右输入/构建侧),即一个随时间变化且不断追踪其改动的表。请参考[时态表]({% link dev/table/streaming/temporal_tables.zh.md %})的相关章节查看更多细节。
 
 下方示例展示了一个递增表 `Orders` 与一个不断改变的汇率表 `RatesHistory` 的 Join 操作。
 
@@ -295,7 +295,7 @@ rowtime amount currency
 10:15        2 Euro
 {% endhighlight %}
 
-如果没有[时态表](temporal_tables.html)概念,则需要写一段这样的查询:
+如果没有[时态表]({% link dev/table/streaming/temporal_tables.zh.md %})概念,则需要写一段这样的查询:
 
 {% highlight sql %}
 SELECT
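
To fetch and inspect this commit locally, one option (a sketch, assuming a local clone of the flink repository) is:

    git fetch https://gitbox.apache.org/repos/asf/flink.git release-1.12
    git show --stat 41e3811ae889fd8ec08a502eefe6c9d63c062dd5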
