This is an automated email from the ASF dual-hosted git repository.
chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git
The following commit(s) were added to refs/heads/master by this push:
new b7906f6 [FLINK-23463][docs] Replace <div> tags with ShortCodes
b7906f6 is described below
commit b7906f6630a3dc831eac1f845dd0037b2f660eab
Author: Yangze Guo <[email protected]>
AuthorDate: Wed Aug 4 18:44:29 2021 +0800
[FLINK-23463][docs] Replace <div> tags with ShortCodes
---
.../content.zh/docs/connectors/datastream/kafka.md | 4 +--
.../docs/connectors/datastream/kinesis.md | 2 --
.../docs/connectors/datastream/streamfile_sink.md | 8 ++---
.../docs/deployment/advanced/external_resources.md | 24 ++++++-------
docs/content.zh/docs/deployment/config.md | 6 ++--
.../docs/deployment/memory/mem_migration.md | 16 ++++-----
.../docs/deployment/memory/mem_tuning.md | 6 ++--
docs/content.zh/docs/dev/dataset/iterations.md | 16 ++++-----
.../datastream/event-time/generating_watermarks.md | 12 +++----
.../datastream/fault-tolerance/queryable_state.md | 32 ++++++++---------
.../serialization/custom_serialization.md | 8 ++---
docs/content.zh/docs/dev/table/common.md | 6 ++--
docs/content.zh/docs/ops/state/checkpoints.md | 6 ++--
docs/content.zh/docs/ops/state/savepoints.md | 38 +++++++++++---------
docs/content.zh/docs/ops/state/state_backends.md | 12 +++----
docs/content/docs/connectors/datastream/kinesis.md | 2 --
.../docs/deployment/advanced/external_resources.md | 24 ++++++-------
docs/content/docs/deployment/config.md | 6 ++--
.../docs/deployment/memory/mem_migration.md | 6 ++--
docs/content/docs/deployment/memory/mem_tuning.md | 8 ++---
docs/content/docs/dev/dataset/iterations.md | 16 ++++-----
.../datastream/event-time/generating_watermarks.md | 6 ++--
.../datastream/fault-tolerance/queryable_state.md | 42 +++++++++++-----------
docs/content/docs/ops/state/savepoints.md | 6 ++--
24 files changed, 148 insertions(+), 164 deletions(-)
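The change below is mechanical throughout: Bootstrap-style alert `<div>`s in the docs become Hugo `{{< hint >}}` shortcodes. As a rough sketch of that substitution for the simple single-class case (this is not part of the commit, which appears to have been edited by hand; the `div_to_shortcode` helper is hypothetical):

```python
import re

# Matches a Bootstrap alert div of the form
#   <div class="alert alert-info">...</div>  or  alert-warning
# and captures the severity level plus the inner body.
ALERT_RE = re.compile(
    r'<div class="alert alert-(?P<level>info|warning)">\s*(?P<body>.*?)\s*</div>',
    re.DOTALL,
)

def div_to_shortcode(markdown: str) -> str:
    """Rewrite alert <div> blocks as Hugo hint shortcodes."""
    def repl(m: re.Match) -> str:
        return "{{< hint %s >}}\n%s\n{{< /hint >}}" % (
            m.group("level"),
            m.group("body").strip(),
        )
    return ALERT_RE.sub(repl, markdown)
```

Note this only covers the plain `alert-info`/`alert-warning` case; the commit also hand-converts `<strong>` labels to `**bold**`, inline `<code>` to backticks, and removes `<div data-lang="java">` wrappers around fenced code blocks.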
diff --git a/docs/content.zh/docs/connectors/datastream/kafka.md b/docs/content.zh/docs/connectors/datastream/kafka.md
index a2cee6e..7d60a37 100644
--- a/docs/content.zh/docs/connectors/datastream/kafka.md
+++ b/docs/content.zh/docs/connectors/datastream/kafka.md
@@ -486,9 +486,9 @@ Flink 通过 Kafka 连接器提供了一流的支持,可以对 Kerberos 配置
## 问题排查
-<div class="alert alert-warning">
+{{< hint warning >}}
如果你在使用 Flink 时对 Kafka 有问题,请记住,Flink 只封装 <a
href="https://kafka.apache.org/documentation/#consumerapi">KafkaConsumer</a> 或
<a
href="https://kafka.apache.org/documentation/#producerapi">KafkaProducer</a>,你的问题可能独立于
Flink,有时可以通过升级 Kafka broker 程序、重新配置 Kafka broker 程序或在 Flink 中重新配置
<tt>KafkaConsumer</tt> 或 <tt>KafkaProducer</tt> 来解决。下面列出了一些常见问题的示例。
-</div>
+{{< /hint >}}
<a name="data-loss"></a>
diff --git a/docs/content.zh/docs/connectors/datastream/kinesis.md b/docs/content.zh/docs/connectors/datastream/kinesis.md
index 3088f0b..50e09d5 100644
--- a/docs/content.zh/docs/connectors/datastream/kinesis.md
+++ b/docs/content.zh/docs/connectors/datastream/kinesis.md
@@ -436,13 +436,11 @@ to avoid the event time skew related problems described in [Event time synchroni
To enable synchronization, set the watermark tracker on the consumer:
-<div data-lang="java" markdown="1">
```java
JobManagerWatermarkTracker watermarkTracker =
new JobManagerWatermarkTracker("myKinesisSource");
consumer.setWatermarkTracker(watermarkTracker);
```
-</div>
The `JobManagerWatermarkTracker` will use a global aggregate to synchronize
the per subtask watermarks. Each subtask
uses a per shard queue to control the rate at which records are emitted
downstream based on how far ahead of the global
diff --git a/docs/content.zh/docs/connectors/datastream/streamfile_sink.md b/docs/content.zh/docs/connectors/datastream/streamfile_sink.md
index 01e62df..ee8c014 100644
--- a/docs/content.zh/docs/connectors/datastream/streamfile_sink.md
+++ b/docs/content.zh/docs/connectors/datastream/streamfile_sink.md
@@ -599,10 +599,10 @@ Flink 有两个内置的滚动策略:
处于 Finished 状态的文件不会再被修改,可以被下游系统安全地读取。
-<div class="alert alert-info">
- <b>重要:</b> 部分文件的索引在每个 subtask 内部是严格递增的(按文件创建顺序)。但是索引并不总是连续的。当 Job
重启后,所有部分文件的索引从 `max part index + 1` 开始,
- 这里的 `max part index` 是所有 subtask 中索引的最大值。
-</div>
+{{< hint info >}}
+**重要:** 部分文件的索引在每个 subtask 内部是严格递增的(按文件创建顺序)。但是索引并不总是连续的。当 Job 重启后,所有部分文件的索引从
`max part index + 1` 开始,
+这里的 `max part index` 是所有 subtask 中索引的最大值。
+{{< /hint >}}
对于每个活动的桶,Writer 在任何时候都只有一个处于 In-progress 状态的部分文件(part file),但是可能有几个 Pending 和
Finished 状态的部分文件(part file)。
diff --git a/docs/content.zh/docs/deployment/advanced/external_resources.md b/docs/content.zh/docs/deployment/advanced/external_resources.md
index d21e862..5e8a1a0 100644
--- a/docs/content.zh/docs/deployment/advanced/external_resources.md
+++ b/docs/content.zh/docs/deployment/advanced/external_resources.md
@@ -148,9 +148,9 @@ class ExternalResourceMapFunction extends RichMapFunction[(String, String)] {
`ExternalResourceInfo` 中包含一个或多个键-值对,其键值表示资源的不同维度。你可以通过
`ExternalResourceInfo#getKeys` 获取所有的键。
-<div class="alert alert-info">
- <strong>提示:</strong>目前,RuntimeContext#getExternalResourceInfos
返回的信息对所有算子都是可用的。
-</div>
+{{< hint info >}}
+**提示:** 目前,RuntimeContext#getExternalResourceInfos 返回的信息对所有算子都是可用的。
+{{< /hint >}}
<a name="implement-a-plugin-for-your-custom-resource-type"></a>
@@ -230,9 +230,9 @@ class FPGAInfo extends ExternalResourceInfo {
之后,将 `FPGADriver`,`FPGADriverFactory`,`META-INF/services/` 和所有外部依赖打入 jar 包。在你的
Flink 发行版的 `plugins/` 文件夹中创建一个名为“fpga”的文件夹,将打好的 jar 包放入其中。
更多细节请查看 [Flink Plugin]({{< ref "docs/deployment/filesystems/plugins" >}})。
-<div class="alert alert-info">
- <strong>提示:</strong> 扩展资源由运行在同一台机器上的所有算子共享。社区可能会在未来的版本中支持外部资源隔离。
-</div>
+{{< hint info >}}
+**提示:** 扩展资源由运行在同一台机器上的所有算子共享。社区可能会在未来的版本中支持外部资源隔离。
+{{< /hint >}}
# 已支持的扩展资源插件
@@ -246,9 +246,9 @@ class FPGAInfo extends ExternalResourceInfo {
我们提供了[一个示例程序](https://github.com/apache/flink/blob/master/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/gpu/MatrixVectorMul.java),展示了如何在
Flink 中使用 GPU 资源来做矩阵-向量乘法。
-<div class="alert alert-info">
- <strong>提示:</strong>目前,对于所有算子,RuntimeContext#getExternalResourceInfos
会返回同样的资源信息。也即,在同一个 TaskManager 中运行的所有算子都可以访问同一组 GPU 设备。扩展资源目前没有算子级别的隔离。
-</div>
+{{< hint info >}}
+**提示:** 目前,对于所有算子,RuntimeContext#getExternalResourceInfos 会返回同样的资源信息。也即,在同一个
TaskManager 中运行的所有算子都可以访问同一组 GPU 设备。扩展资源目前没有算子级别的隔离。
+{{< /hint >}}
### 前置准备
@@ -326,9 +326,9 @@ external-resource.gpu.kubernetes.config-key: nvidia.com/gpu # for Kubernetes
- `--coordination-file filePath`:用于同步 GPU 资源分配状态的文件路径。默认路径为
`/var/tmp/flink-gpu-coordination`。
-<div class="alert alert-info">
- <strong>提示:</strong>协调模式只确保一个 GPU 设备不会被同一个 Flink 集群的多个 TaskManager 共享。不同
Flink 集群间(具有不同的协调文件)或非 Flink 应用程序仍然可以使用相同的 GPU 设备。
-</div>
+{{< hint info >}}
+**提示:** 协调模式只确保一个 GPU 设备不会被同一个 Flink 集群的多个 TaskManager 共享。不同 Flink
集群间(具有不同的协调文件)或非 Flink 应用程序仍然可以使用相同的 GPU 设备。
+{{< /hint >}}
#### 自定义脚本
diff --git a/docs/content.zh/docs/deployment/config.md b/docs/content.zh/docs/deployment/config.md
index c5317bc..77545a3 100644
--- a/docs/content.zh/docs/deployment/config.md
+++ b/docs/content.zh/docs/deployment/config.md
@@ -309,9 +309,9 @@ See the [Queryable State Docs]({{< ref "docs/dev/datastream/fault-tolerance/quer
# Debugging & Expert Tuning
-<div class="alert alert-warning">
- The options below here are meant for expert users and for fixing/debugging
problems. Most setups should not need to configure these options.
-</div>
+{{< hint warning >}}
+The options below here are meant for expert users and for fixing/debugging
problems. Most setups should not need to configure these options.
+{{< /hint >}}
### Class Loading
diff --git a/docs/content.zh/docs/deployment/memory/mem_migration.md b/docs/content.zh/docs/deployment/memory/mem_migration.md
index 0ad44f5..91b3e29 100644
--- a/docs/content.zh/docs/deployment/memory/mem_migration.md
+++ b/docs/content.zh/docs/deployment/memory/mem_migration.md
@@ -35,11 +35,11 @@ under the License.
* toc
-<div class="alert alert-warning">
- <strong>注意:</strong> 请仔细阅读本篇升级指南。
- 使用原本的和新的内存配制方法可能会使内存组成部分具有截然不同的大小。
- 未经调整直接沿用 Flink 1.10 以前版本的 TaskManager 配置文件或 Flink 1.11 以前版本的 JobManager
配置文件,可能导致应用的行为、性能发生变化,甚至造成应用执行失败。
-</div>
+{{< hint warning >}}
+**注意:** 请仔细阅读本篇升级指南。
+使用原本的和新的内存配制方法可能会使内存组成部分具有截然不同的大小。
+未经调整直接沿用 Flink 1.10 以前版本的 TaskManager 配置文件或 Flink 1.11 以前版本的 JobManager
配置文件,可能导致应用的行为、性能发生变化,甚至造成应用执行失败。
+{{< /hint >}}
<span class="label label-info">提示</span>
在 *1.10/1.11* 版本之前,Flink 不要求用户一定要配置 TaskManager/JobManager
内存相关的参数,因为这些参数都具有默认值。
@@ -300,6 +300,6 @@ Flink 通过设置上述 JVM 内存限制降低内存泄漏问题的排查难度
请参考[如何配置总内存]({{< ref "docs/deployment/memory/mem_setup"
>}}#configure-total-memory)。
-<div class="alert alert-warning">
- <strong>注意:</strong> 使用新的默认 `flink-conf.yaml` 可能会造成各内存部分的大小发生变化,从而产生性能变化。
-</div>
+{{< hint warning >}}
+**注意:** 使用新的默认 `flink-conf.yaml` 可能会造成各内存部分的大小发生变化,从而产生性能变化。
+{{< /hint >}}
diff --git a/docs/content.zh/docs/deployment/memory/mem_tuning.md b/docs/content.zh/docs/deployment/memory/mem_tuning.md
index df70d89..6eed393 100644
--- a/docs/content.zh/docs/deployment/memory/mem_tuning.md
+++ b/docs/content.zh/docs/deployment/memory/mem_tuning.md
@@ -49,9 +49,9 @@ under the License.
<span class="label label-info">提示</span>
如果配置了 *Flink 总内存*,Flink 会自动加上 JVM 相关的内存部分,根据推算出的*进程总内存*大小申请容器。
-<div class="alert alert-warning">
- <strong>注意:</strong> 如果 Flink
或者用户代码分配超过容器大小的非托管的堆外(本地)内存,部署环境可能会杀掉超用内存的容器,造成作业执行失败。
-</div>
+{{< hint warning >}}
+**注意:** 如果 Flink 或者用户代码分配超过容器大小的非托管的堆外(本地)内存,部署环境可能会杀掉超用内存的容器,造成作业执行失败。
+{{< /hint >}}
请参考[容器内存超用]({{< ref "docs/deployment/memory/mem_trouble"
>}}#container-memory-exceeded)中的相关描述。
diff --git a/docs/content.zh/docs/dev/dataset/iterations.md b/docs/content.zh/docs/dev/dataset/iterations.md
index aa56a0b..c563e1d 100644
--- a/docs/content.zh/docs/dev/dataset/iterations.md
+++ b/docs/content.zh/docs/dev/dataset/iterations.md
@@ -115,11 +115,9 @@ while (!terminationCriterion()) {
setFinalState(state);
```
-<div class="panel panel-default">
- <div class="panel-body">
- See the <strong><a href="index.html">Programming Guide</a> </strong>
for details and code examples.
- </div>
-</div>
+{{< hint info >}}
+See the **[Programming Guide]({{< ref "docs/dev/dataset/overview" >}})** for
details and code examples.
+{{< /hint >}}
### Example: Incrementing Numbers
@@ -179,11 +177,9 @@ while (!terminationCriterion()) {
setFinalState(solution);
```
-<div class="panel panel-default">
- <div class="panel-body">
- See the <strong><a href="index.html">programming guide</a></strong> for
details and code examples.
- </div>
-</div>
+{{< hint info >}}
+See the **[Programming Guide]({{< ref "docs/dev/dataset/overview" >}})** for
details and code examples.
+{{< /hint >}}
### Example: Propagate Minimum in Graph
diff --git a/docs/content.zh/docs/dev/datastream/event-time/generating_watermarks.md b/docs/content.zh/docs/dev/datastream/event-time/generating_watermarks.md
index 9eb0b9e..1f8d099 100644
--- a/docs/content.zh/docs/dev/datastream/event-time/generating_watermarks.md
+++ b/docs/content.zh/docs/dev/datastream/event-time/generating_watermarks.md
@@ -83,9 +83,9 @@ WatermarkStrategy
稍后我们将在[自定义 WatermarkGenerator](#writing-watermarkgenerators) 小节学习
WatermarkGenerator 接口。
-<div class="alert alert-warning">
-<strong>注意</strong>:时间戳和 watermark 都是从 1970-01-01T00:00:00Z 起的 Java
纪元开始,并以毫秒为单位。
-</div>
+{{< hint warning >}}
+**注意:** 时间戳和 watermark 都是从 1970-01-01T00:00:00Z 起的 Java 纪元开始,并以毫秒为单位。
+{{< /hint >}}
<a name="using-watermark-strategies"></a>
@@ -345,9 +345,9 @@ class PunctuatedAssigner extends AssignerWithPunctuatedWatermarks[MyEvent] {
{{< /tab >}}
{{< /tabs >}}
-<div class="alert alert-warning">
-<strong>注意</strong>:可以针对每个事件去生成 watermark。但是由于每个 watermark 都会在下游做一些计算,因此过多的
watermark 会降低程序性能。
-</div>
+{{< hint warning >}}
+**注意:** 可以针对每个事件去生成 watermark。但是由于每个 watermark 都会在下游做一些计算,因此过多的 watermark
会降低程序性能。
+{{< /hint >}}
<a name="watermark-strategies-and-the-kafka-connector"></a>
diff --git a/docs/content.zh/docs/dev/datastream/fault-tolerance/queryable_state.md b/docs/content.zh/docs/dev/datastream/fault-tolerance/queryable_state.md
index 779f89b..33e81cc 100644
--- a/docs/content.zh/docs/dev/datastream/fault-tolerance/queryable_state.md
+++ b/docs/content.zh/docs/dev/datastream/fault-tolerance/queryable_state.md
@@ -91,9 +91,9 @@ QueryableStateStream asQueryableState(
```
-<div class="alert alert-info">
- <strong>注意:</strong> 没有可查询的 <code>ListState</code> sink,因为这种情况下 list
会不断增长,并且可能不会被清理,最终会消耗大量的内存。
-</div>
+{{< hint info >}}
+**注意:** 没有可查询的 `ListState` sink,因为这种情况下 list 会不断增长,并且可能不会被清理,最终会消耗大量的内存。
+{{< /hint >}}
返回的 `QueryableStateStream` 可以被视作一个sink,而且**不能再**被进一步转换。在内部实现上,一个
`QueryableStateStream` 被转换成一个 operator,使用输入的数据来更新 queryable state。state 如何更新是由
`asQueryableState` 提供的 `StateDescriptor` 来决定的。在下面的代码中, keyed stream 的所有数据将会通过
`ValueState.update(value)` 来更新状态:
@@ -117,9 +117,9 @@ ValueStateDescriptor<Tuple2<Long, Long>> descriptor =
descriptor.setQueryable("query-name"); // queryable state name
```
-<div class="alert alert-info">
- <strong>注意:</strong> 参数 <code>queryableStateName</code>
可以任意选取,并且只被用来进行查询,它可以和 state 的名称不同。
-</div>
+{{< hint info >}}
+**注意:** 参数 `queryableStateName` 可以任意选取,并且只被用来进行查询,它可以和 state 的名称不同。
+{{< /hint >}}
这种方式不会限制 state 类型,即任意的
`ValueState`、`ReduceState`、`ListState`、`MapState`、`AggregatingState` 以及已弃用的
`FoldingState`
均可作为 queryable state。
@@ -130,7 +130,6 @@ descriptor.setQueryable("query-name"); // queryable state name
为了进行查询,可以使用辅助类 `QueryableStateClient`,这个类位于 `flink-queryable-state-client` 的
jar 中,在项目的 `pom.xml` 需要显式添加对 `flink-queryable-state-client` 和 `flink-core` 的依赖,
如下所示:
-<div data-lang="java" markdown="1">
```xml
<dependency>
<groupId>org.apache.flink</groupId>
@@ -143,7 +142,6 @@ descriptor.setQueryable("query-name"); // queryable state name
<version>{{< version >}}</version>
</dependency>
```
-</div>
关于依赖的更多信息, 可以参考如何 [配置 Flink 项目]({{< ref
"docs/dev/datastream/project-configuration" >}}).
@@ -172,16 +170,16 @@ CompletableFuture<S> getKvState(
细心的读者会注意到返回的 future 包含类型为 `S` 的值,*即*一个存储实际值的 `State` 对象。它可以是Flink支持的任何类型的
state:`ValueState`、`ReduceState`、
`ListState`、`MapState`、`AggregatingState` 以及弃用的 `FoldingState`。
-<div class="alert alert-info">
- <strong>注意:</strong> 这些 state 对象不允许对其中的 state 进行修改。你可以通过
<code>valueState.get()</code> 获取实际的 state,
- 或者通过 <code>mapState.entries()</code> 遍历所有 <code><K,
V></code>,但是不能修改它们。举例来说,对返回的 list state 调用 <code>add()</code>
- 方法将会导致 <code>UnsupportedOperationException</code>。
-</div>
+{{< hint info >}}
+**注意:** 这些 state 对象不允许对其中的 state 进行修改。你可以通过 `valueState.get()` 获取实际的 state,
+或者通过 `mapState.entries()` 遍历所有 `<K, V>`,但是不能修改它们。举例来说,对返回的 list state 调用
`add()`
+ 方法将会导致 `UnsupportedOperationException`。
+{{< /hint >}}
-<div class="alert alert-info">
- <strong>注意:</strong> 客户端是异步的,并且可能被多个线程共享。客户端不再使用后需要通过
<code>QueryableStateClient.shutdown()</code>
- 来终止,从而释放资源。
-</div>
+{{< hint info >}}
+**注意:** 客户端是异步的,并且可能被多个线程共享。客户端不再使用后需要通过 `QueryableStateClient.shutdown()`
+ 来终止,从而释放资源。
+{{< /hint >}}
### 示例
diff --git a/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/custom_serialization.md b/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/custom_serialization.md
index 905e5f0..f7e4fa9 100644
--- a/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/custom_serialization.md
+++ b/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/custom_serialization.md
@@ -215,7 +215,7 @@ as your serializer's snapshot class:
- `TypeSerializerSchemaCompatibility.incompatible()`, if the new serializer
class is different then the previous one.
Below is an example of how the `SimpleTypeSerializerSnapshot` is used, using
Flink's `IntSerializer` as an example:
-<div data-lang="java" markdown="1">
+
```java
public class IntSerializerSnapshot extends
SimpleTypeSerializerSnapshot<Integer> {
public IntSerializerSnapshot() {
@@ -223,7 +223,6 @@ public class IntSerializerSnapshot extends SimpleTypeSerializerSnapshot<Integer>
}
}
```
-</div>
The `IntSerializer` has no state or configurations. Serialization format is
solely defined by the serializer
class itself, and can only be read by another `IntSerializer`. Therefore, it
suits the use case of the
@@ -252,7 +251,7 @@ composite serializers. It deals with reading and writing the nested serializer s
the final compatibility result taking into account the compatibility of all
nested serializers.
Below is an example of how the `CompositeTypeSerializerSnapshot` is used,
using Flink's `MapSerializer` as an example:
-<div data-lang="java" markdown="1">
+
```java
public class MapSerializerSnapshot<K, V> extends
CompositeTypeSerializerSnapshot<Map<K, V>, MapSerializer> {
@@ -284,7 +283,6 @@ public class MapSerializerSnapshot<K, V> extends CompositeTypeSerializerSnapshot
}
}
```
-</div>
When implementing a new serializer snapshot as a subclass of
`CompositeTypeSerializerSnapshot`,
the following three methods must be implemented:
@@ -313,7 +311,6 @@ has outer snapshot information, then all three methods must be implemented.
Below is an example of how the `CompositeTypeSerializerSnapshot` is used for
composite serializer snapshots
that do have outer snapshot information, using Flink's
`GenericArraySerializer` as an example:
-<div data-lang="java" markdown="1">
```java
public final class GenericArraySerializerSnapshot<C> extends
CompositeTypeSerializerSnapshot<C[], GenericArraySerializer> {
@@ -364,7 +361,6 @@ public final class GenericArraySerializerSnapshot<C> extends CompositeTypeSerial
}
}
```
-</div>
There are two important things to notice in the above code snippet. First of
all, since this
`CompositeTypeSerializerSnapshot` implementation has outer snapshot
information that is written as part of the snapshot,
diff --git a/docs/content.zh/docs/dev/table/common.md b/docs/content.zh/docs/dev/table/common.md
index 229131a..74b439a 100644
--- a/docs/content.zh/docs/dev/table/common.md
+++ b/docs/content.zh/docs/dev/table/common.md
@@ -851,7 +851,7 @@ print(table.explain())
{{< /tabs >}}
上述例子的结果是:
-<div>
+
```text
== Abstract Syntax Tree ==
LogicalUnion(all=[true])
@@ -871,7 +871,6 @@ Union(all=[true], union=[count, word])
: +- DataStreamScan(table=[[Unregistered_DataStream_1]], fields=[count, word])
+- DataStreamScan(table=[[Unregistered_DataStream_2]], fields=[count, word])
```
-</div>
以下代码展示了一个示例以及使用 `StatementSet.explain()` 的多 sink 计划的相应输出:
@@ -1012,7 +1011,7 @@ print(explanation)
{{< /tabs >}}
多 sink 计划的结果是:
-<div>
+
```text
== Abstract Syntax Tree ==
LogicalLegacySink(name=[`default_catalog`.`default_database`.`MySink1`],
fields=[count, word])
@@ -1049,7 +1048,6 @@ LegacySink(name=[`default_catalog`.`default_database`.`MySink2`], fields=[count,
+- LegacyTableSourceScan(table=[[default_catalog, default_database, MySource2,
source: [CsvTableSource(read fields: count, word)]]], fields=[count, word])
```
-</div>
{{< top >}}
diff --git a/docs/content.zh/docs/ops/state/checkpoints.md b/docs/content.zh/docs/ops/state/checkpoints.md
index 0784b6a..0781e5c 100644
--- a/docs/content.zh/docs/ops/state/checkpoints.md
+++ b/docs/content.zh/docs/ops/state/checkpoints.md
@@ -64,9 +64,9 @@ config.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CAN
其中 **SHARED** 目录保存了可能被多个 checkpoint 引用的文件,**TASKOWNED** 保存了不会被 JobManager
删除的文件,**EXCLUSIVE** 则保存那些仅被单个 checkpoint 引用的文件。
-<div class="alert alert-warning">
- <strong>注意:</strong> Checkpoint 目录不是公共 API 的一部分,因此可能在未来的 Release 中进行改变。
-</div>
+{{< hint warning >}}
+**注意:** Checkpoint 目录不是公共 API 的一部分,因此可能在未来的 Release 中进行改变。
+{{< /hint >}}
#### 通过配置文件全局配置
diff --git a/docs/content.zh/docs/ops/state/savepoints.md b/docs/content.zh/docs/ops/state/savepoints.md
index 74bda54..51785b6 100644
--- a/docs/content.zh/docs/ops/state/savepoints.md
+++ b/docs/content.zh/docs/ops/state/savepoints.md
@@ -31,9 +31,10 @@ under the License.
Savepoint 是依据 Flink [checkpointing 机制]({{< ref
"docs/learn-flink/fault_tolerance" >}})所创建的流作业执行状态的一致镜像。 你可以使用 Savepoint 进行
Flink 作业的停止与重启、fork 或者更新。 Savepoint 由两部分组成:稳定存储(例如 HDFS,S3,...)
上包含二进制文件的目录(通常很大),和元数据文件(相对较小)。 稳定存储上的文件表示作业执行状态的数据镜像。 Savepoint
的元数据文件以(相对路径)的形式包含(主要)指向作为 Savepoint 一部分的稳定存储上的所有文件的指针。
-<div class="alert alert-warning">
-<strong>注意:</strong> 为了允许程序和 Flink 版本之间的升级,请务必查看以下有关<a href="#分配算子-id">分配算子 ID
</a>的部分 。
-</div>
+{{< hint warning >}}
+**注意:** 为了允许程序和 Flink 版本之间的升级,请务必查看以下有关<a href="#分配算子-id">分配算子 ID</a> 的部分。
+{{< /hint >}}
+
从概念上讲, Flink 的 Savepoint 与 Checkpoint 的不同之处类似于传统数据库中的备份与恢复日志之间的差异。 Checkpoint
的主要目的是为意外失败的作业提供恢复机制。 Checkpoint 的生命周期由 Flink 管理,即 Flink 创建,管理和删除 Checkpoint -
无需用户交互。 作为一种恢复和定期触发的方法,Checkpoint 实现有两个设计目标:i)轻量级创建和 ii)尽可能快地恢复。
可能会利用某些特定的属性来达到这个,例如, 工作代码在执行尝试之间不会改变。 在用户终止作业后,通常会删除 Checkpoint(除非明确配置为保留的
Checkpoint)。
与此相反、Savepoint 由用户创建,拥有和删除。 他们的用例是计划的,手动备份和恢复。 例如,升级 Flink
版本,调整用户逻辑,改变并行度,以及进行红蓝部署等。 当然,Savepoint 必须在作业停止后继续存在。 从概念上讲,Savepoint
的生成,恢复成本可能更高一些,Savepoint 更多地关注可移植性和对前面提到的作业更改的支持。
@@ -82,9 +83,9 @@ mapper-id | State of StatefulMapper
当触发 Savepoint 时,将创建一个新的 Savepoint
目录,其中存储数据和元数据。可以通过[配置默认目标目录](#配置)或使用触发器命令指定自定义目标目录(参见[`:targetDirectory`参数](#触发-savepoint-1)来控制该目录的位置。
-<div class="alert alert-warning">
-<strong>注意:</strong>目标目录必须是 JobManager(s) 和 TaskManager(s)
都可以访问的位置,例如分布式文件系统(或者对象存储系统)上的位置。
-</div>
+{{< hint warning >}}
+**注意:** 目标目录必须是 JobManager(s) 和 TaskManager(s)
都可以访问的位置,例如分布式文件系统(或者对象存储系统)上的位置。
+{{< /hint >}}
以 `FsStateBackend` 或 `RocksDBStateBackend` 为例:
@@ -103,18 +104,21 @@ mapper-id | State of StatefulMapper
```
从 1.11.0 开始,你可以通过移动(拷贝)savepoint 目录到任意地方,然后再进行恢复。
-<div class="alert alert-warning">
-在如下两种情况中不支持 savepoint 目录的移动:1)如果启用了 *<a href="{{< ref
"docs/deployment/filesystems/s3"
>}}#entropy-injection-for-s3-file-systems">entropy
injection</a>:这种情况下,savepoint 目录不包含所有的数据文件,因为注入的路径会分散在各个路径中。
+
+{{< hint warning >}}
+在如下两种情况中不支持 savepoint 目录的移动:1)如果启用了 *<a href="{{< ref
"docs/deployment/filesystems/s3"
>}}#entropy-injection-for-s3-file-systems">entropy injection</a>*
:这种情况下,savepoint 目录不包含所有的数据文件,因为注入的路径会分散在各个路径中。
由于缺乏一个共同的根目录,因此 savepoint 将包含绝对路径,从而导致无法支持 savepoint 目录的迁移。2)作业包含了 task-owned
state(比如 `GenericWriteAhreadLog` sink)。
-</div>
+{{< /hint >}}
-<div class="alert alert-warning">
+{{< hint warning >}}
和 savepoint 不同,checkpoint 不支持任意移动文件,因为 checkpoint 可能包含一些文件的绝对路径。
-</div>
+{{< /hint >}}
+
如果你使用 `MemoryStateBackend` 的话,metadata 和 savepoint 的数据都会保存在 `_metadata`
文件中,因此不要因为没看到目录下没有数据文件而困惑。
-<div class="alert alert-warning">
- <strong>注意:</strong> 不建议移动或删除正在运行作业的最后一个 Savepoint
,因为这可能会干扰故障恢复。因此,Savepoint 对精确一次的接收器有副作用,为了确保精确一次的语义,如果在最后一个 Savepoint 之后没有
Checkpoint ,那么将使用 Savepoint 进行恢复。
-</div>
+
+{{< hint warning >}}
+**注意:** 不建议移动或删除正在运行作业的最后一个 Savepoint ,因为这可能会干扰故障恢复。因此,Savepoint
对精确一次的接收器有副作用,为了确保精确一次的语义,如果在最后一个 Savepoint 之后没有 Checkpoint ,那么将使用 Savepoint
进行恢复。
+{{< /hint >}}
#### 触发 Savepoint
@@ -179,9 +183,9 @@ state.savepoints.dir: hdfs:///flink/savepoints
如果既未配置缺省值也未指定自定义目标目录,则触发 Savepoint 将失败。
-<div class="alert alert-warning">
-<strong>注意:</strong>目标目录必须是 JobManager(s) 和 TaskManager(s)
可访问的位置,例如,分布式文件系统上的位置。
-</div>
+{{< hint warning >}}
+**注意:** 目标目录必须是 JobManager(s) 和 TaskManager(s) 可访问的位置,例如,分布式文件系统上的位置。
+{{< /hint >}}
## F.A.Q
diff --git a/docs/content.zh/docs/ops/state/state_backends.md b/docs/content.zh/docs/ops/state/state_backends.md
index 161b7b9..1a733da 100644
--- a/docs/content.zh/docs/ops/state/state_backends.md
+++ b/docs/content.zh/docs/ops/state/state_backends.md
@@ -165,9 +165,9 @@ env.setStateBackend(new FsStateBackend("hdfs://namenode:40010/flink/checkpoints"
</dependency>
```
-<div class="alert alert-info" markdown="span">
- <strong>注意:</strong> 由于 RocksDB 是 Flink 默认分发包的一部分,所以如果你没在代码中使用
RocksDB,则不需要添加此依赖。而且可以在 `flink-conf.yaml` 文件中通过 `state.backend` 配置 State
Backend,以及更多的 [checkpointing]({{< ref "docs/deployment/config"
>}}#checkpointing) 和 [RocksDB 特定的]({{< ref "docs/deployment/config"
>}}#rocksdb-state-backend) 参数。
-</div>
+{{< hint info >}}
+**注意:** 由于 RocksDB 是 Flink 默认分发包的一部分,所以如果你没在代码中使用 RocksDB,则不需要添加此依赖。而且可以在
`flink-conf.yaml` 文件中通过 `state.backend` 配置 State Backend,以及更多的
[checkpointing]({{< ref "docs/deployment/config" >}}#checkpointing) 和 [RocksDB
特定的]({{< ref "docs/deployment/config" >}}#rocksdb-state-backend) 参数。
+{{< /hint >}}
### 设置默认的(全局的) State Backend
@@ -256,9 +256,9 @@ Flink还提供了两个参数来控制*写路径*(MemTable)和*读路径*(
您可以选择使用 Flink 的监控指标系统来汇报 RocksDB 的原生指标,并且可以选择性的指定特定指标进行汇报。
请参阅 [configuration docs]({{< ref "docs/deployment/config"
>}}#rocksdb-native-metrics) 了解更多详情。
-<div class="alert alert-warning">
- <strong>注意:</strong> 启用 RocksDB 的原生指标可能会对应用程序的性能产生负面影响。
-</div>
+{{< hint warning >}}
+**注意:** 启用 RocksDB 的原生指标可能会对应用程序的性能产生负面影响。
+{{< /hint >}}
### 列族(ColumnFamily)级别的预定义选项
diff --git a/docs/content/docs/connectors/datastream/kinesis.md b/docs/content/docs/connectors/datastream/kinesis.md
index f334033..474b521 100644
--- a/docs/content/docs/connectors/datastream/kinesis.md
+++ b/docs/content/docs/connectors/datastream/kinesis.md
@@ -436,13 +436,11 @@ to avoid the event time skew related problems described in [Event time synchroni
To enable synchronization, set the watermark tracker on the consumer:
-<div data-lang="java" markdown="1">
```java
JobManagerWatermarkTracker watermarkTracker =
new JobManagerWatermarkTracker("myKinesisSource");
consumer.setWatermarkTracker(watermarkTracker);
```
-</div>
The `JobManagerWatermarkTracker` will use a global aggregate to synchronize
the per subtask watermarks. Each subtask
uses a per shard queue to control the rate at which records are emitted
downstream based on how far ahead of the global
diff --git a/docs/content/docs/deployment/advanced/external_resources.md b/docs/content/docs/deployment/advanced/external_resources.md
index 54d0afc..ec01cf0 100644
--- a/docs/content/docs/deployment/advanced/external_resources.md
+++ b/docs/content/docs/deployment/advanced/external_resources.md
@@ -157,9 +157,9 @@ class ExternalResourceMapFunction extends RichMapFunction[(String, String)] {
Each `ExternalResourceInfo` contains one or more properties with keys
representing the different dimensions of the resource.
You could get all valid keys by `ExternalResourceInfo#getKeys`.
-<div class="alert alert-info">
- <strong>Note:</strong> Currently, the information returned by
RuntimeContext#getExternalResourceInfos is available to all the operators.
-</div>
+{{< hint info >}}
+**Note:** Currently, the information returned by
RuntimeContext#getExternalResourceInfos is available to all the operators.
+{{< /hint >}}
# Implement a plugin for your custom resource type
@@ -240,9 +240,9 @@ Then, create a jar which includes `FPGADriver`, `FPGADriverFactory`, `META-INF/s
Make a directory in `plugins/` of your Flink distribution with an arbitrary
name, e.g. "fpga", and put the jar into this directory.
See [Flink Plugin]({{< ref "docs/deployment/filesystems/plugins" >}}) for more
details.
-<div class="alert alert-info">
- <strong>Note:</strong> External resources are shared by all operators
running on the same machine. The community might add external resource
isolation in a future release.
-</div>
+{{< hint info >}}
+**Note:** External resources are shared by all operators running on the same
machine. The community might add external resource isolation in a future
release.
+{{< /hint >}}
# Existing supported external resource plugins
@@ -257,9 +257,9 @@ NVIDIA GPUs. You can also provide your custom script.
We provide [an example](https://github.com/apache/flink/blob/{{
site.github_branch
}}/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/gpu/MatrixVectorMul.java)
which shows how to use the GPUs to do matrix-vector multiplication in Flink.
-<div class="alert alert-info">
- <strong>Note:</strong> Currently, for all the operators,
RuntimeContext#getExternalResourceInfos returns the same set of resource
information. That means, the same set of GPU devices are always accessible to
all the operators running in the same TaskManager. There is no operator level
isolation at the moment.
-</div>
+{{< hint info >}}
+**Note:** Currently, for all the operators,
RuntimeContext#getExternalResourceInfos returns the same set of resource
information. That means, the same set of GPU devices are always accessible to
all the operators running in the same TaskManager. There is no operator level
isolation at the moment.
+{{< /hint >}}
### Pre-requisites
@@ -341,9 +341,9 @@ synchronize the allocation state of GPU devices and ensure each GPU device can o
- `--coordination-file filePath`: The path of the coordination file used to
synchronize the allocation state of GPU resources. The default path is
`/var/tmp/flink-gpu-coordination`.
-<div class="alert alert-info">
- <strong>Note:</strong> The coordination mode only ensures that a GPU
device is not shared by multiple TaskManagers of the same Flink cluster. Please
be aware that another Flink cluster (with a different coordination file) or a
non-Flink application can still use the same GPU devices.
-</div>
+{{< hint info >}}
+**Note:** The coordination mode only ensures that a GPU device is not shared
by multiple TaskManagers of the same Flink cluster. Please be aware that
another Flink cluster (with a different coordination file) or a non-Flink
application can still use the same GPU devices.
+{{< /hint >}}
#### Custom Script
diff --git a/docs/content/docs/deployment/config.md b/docs/content/docs/deployment/config.md
index 7eeb291..87ed75b 100644
--- a/docs/content/docs/deployment/config.md
+++ b/docs/content/docs/deployment/config.md
@@ -311,9 +311,9 @@ See the [Queryable State Docs]({{< ref "docs/dev/datastream/fault-tolerance/quer
# Debugging & Expert Tuning
-<div class="alert alert-warning">
- The options below here are meant for expert users and for fixing/debugging
problems. Most setups should not need to configure these options.
-</div>
+{{< hint warning >}}
+The options below here are meant for expert users and for fixing/debugging
problems. Most setups should not need to configure these options.
+{{< /hint >}}
### Class Loading
diff --git a/docs/content/docs/deployment/memory/mem_migration.md b/docs/content/docs/deployment/memory/mem_migration.md
index 974d5e6..15a8d8d 100644
--- a/docs/content/docs/deployment/memory/mem_migration.md
+++ b/docs/content/docs/deployment/memory/mem_migration.md
@@ -286,6 +286,6 @@ in the default `flink-conf.yaml`. The value increased from 1024Mb to 1600Mb.
See also [how to configure total memory now]({{< ref
"docs/deployment/memory/mem_setup" >}}#configure-total-memory).
-<div class="alert alert-warning">
- <strong>Warning:</strong> If you use the new default `flink-conf.yaml` it
can result in different sizes of memory components and can lead to performance
changes.
-</div>
+{{< hint warning >}}
+**Warning:** If you use the new default `flink-conf.yaml` it can result in
different sizes of memory components and can lead to performance changes.
+{{< /hint >}}
diff --git a/docs/content/docs/deployment/memory/mem_tuning.md b/docs/content/docs/deployment/memory/mem_tuning.md
index b2c6cfe..d0ca607 100644
--- a/docs/content/docs/deployment/memory/mem_tuning.md
+++ b/docs/content/docs/deployment/memory/mem_tuning.md
@@ -50,10 +50,10 @@ It declares how much memory in total should be assigned to the Flink *JVM proces
<span class="label label-info">Note</span> If you configure the *total Flink
memory* Flink will implicitly add JVM memory components
to derive the *total process memory* and request a container with the memory
of that derived size.
-<div class="alert alert-warning">
- <strong>Warning:</strong> If Flink or user code allocates unmanaged off-heap
(native) memory beyond the container size
- the job can fail because the deployment environment can kill the offending
containers.
-</div>
+{{< hint warning >}}
+**Warning:** If Flink or user code allocates unmanaged off-heap (native)
memory beyond the container size
+the job can fail because the deployment environment can kill the offending
containers.
+{{< /hint >}}
See also description of [container memory exceeded]({{< ref
"docs/deployment/memory/mem_trouble" >}}#container-memory-exceeded) failure.
diff --git a/docs/content/docs/dev/dataset/iterations.md b/docs/content/docs/dev/dataset/iterations.md
index fbd95c7..10c3298 100644
--- a/docs/content/docs/dev/dataset/iterations.md
+++ b/docs/content/docs/dev/dataset/iterations.md
@@ -115,11 +115,9 @@ while (!terminationCriterion()) {
setFinalState(state);
```
-<div class="panel panel-default">
- <div class="panel-body">
- See the <strong><a href="index.html">Programming Guide</a> </strong> for details and code examples.
- </div>
-</div>
+{{< hint info >}}
+See the **[Programming Guide]({{< ref "docs/dev/dataset/overview" >}})** for details and code examples.
+{{< /hint >}}
### Example: Incrementing Numbers
@@ -179,11 +177,9 @@ while (!terminationCriterion()) {
setFinalState(solution);
```
-<div class="panel panel-default">
- <div class="panel-body">
- See the <strong><a href="index.html">programming guide</a></strong> for details and code examples.
- </div>
-</div>
+{{< hint info >}}
+See the **[Programming Guide]({{< ref "docs/dev/dataset/overview" >}})** for details and code examples.
+{{< /hint >}}
### Example: Propagate Minimum in Graph
diff --git a/docs/content/docs/dev/datastream/event-time/generating_watermarks.md b/docs/content/docs/dev/datastream/event-time/generating_watermarks.md
index 4a88aca..aa95ed9 100644
--- a/docs/content/docs/dev/datastream/event-time/generating_watermarks.md
+++ b/docs/content/docs/dev/datastream/event-time/generating_watermarks.md
@@ -402,12 +402,12 @@ class PunctuatedAssigner extends WatermarkGenerator[MyEvent] {
{{< /tab >}}
{{< /tabs >}}
-<div class="alert alert-warning">
-<strong>Note</strong>: It is possible to
+{{< hint warning >}}
+**Note:** It is possible to
generate a watermark on every single event. However, because each watermark
causes some computation downstream, an excessive number of watermarks degrades
performance.
-</div>
+{{< /hint >}}
## Watermark Strategies and the Kafka Connector
diff --git a/docs/content/docs/dev/datastream/fault-tolerance/queryable_state.md b/docs/content/docs/dev/datastream/fault-tolerance/queryable_state.md
index b7f9472..c243c43 100644
--- a/docs/content/docs/dev/datastream/fault-tolerance/queryable_state.md
+++ b/docs/content/docs/dev/datastream/fault-tolerance/queryable_state.md
@@ -113,10 +113,10 @@ QueryableStateStream asQueryableState(
```
-<div class="alert alert-info">
- <strong>Note:</strong> There is no queryable <code>ListState</code> sink as it would result in an ever-growing
- list which may not be cleaned up and thus will eventually consume too much memory.
-</div>
+{{< hint info >}}
+**Note:** There is no queryable `ListState` sink as it would result in an ever-growing
+list which may not be cleaned up and thus will eventually consume too much memory.
+{{< /hint >}}
The returned `QueryableStateStream` can be seen as a sink and **cannot** be further transformed. Internally, a
`QueryableStateStream` gets translated to an operator which uses all incoming records to update the queryable state
@@ -144,10 +144,10 @@ ValueStateDescriptor<Tuple2<Long, Long>> descriptor =
descriptor.setQueryable("query-name"); // queryable state name
```
-<div class="alert alert-info">
- <strong>Note:</strong> The <code>queryableStateName</code> parameter may be chosen arbitrarily and is only
- used for queries. It does not have to be identical to the state's own name.
-</div>
+{{< hint info >}}
+**Note:** The `queryableStateName` parameter may be chosen arbitrarily and is only
+used for queries. It does not have to be identical to the state's own name.
+{{< /hint >}}
This variant has no limitations as to which type of state can be made queryable. This means that this can be used for
any `ValueState`, `ReduceState`, `ListState`, `MapState`, and `AggregatingState`.
@@ -205,19 +205,19 @@ The careful reader will notice that the returned future contains a value of type
the actual value. This can be any of the state types supported by Flink: `ValueState`, `ReduceState`, `ListState`, `MapState`,
and `AggregatingState`.
-<div class="alert alert-info">
- <strong>Note:</strong> These state objects do not allow modifications to the contained state. You can use them to get
- the actual value of the state, <i>e.g.</i> using <code>valueState.get()</code>, or iterate over
- the contained <code><K, V></code> entries, <i>e.g.</i> using the <code>mapState.entries()</code>, but you cannot
- modify them. As an example, calling the <code>add()</code> method on a returned list state will throw an
- <code>UnsupportedOperationException</code>.
-</div>
-
-<div class="alert alert-info">
- <strong>Note:</strong> The client is asynchronous and can be shared by multiple threads. It needs
- to be shutdown via <code>QueryableStateClient.shutdown()</code> when unused in order to free
- resources.
-</div>
+{{< hint info >}}
+**Note:** These state objects do not allow modifications to the contained state. You can use them to get
+the actual value of the state, *e.g.* using `valueState.get()`, or iterate over
+the contained `<K, V>` entries, *e.g.* using `mapState.entries()`, but you cannot
+modify them. As an example, calling the `add()` method on a returned list state will throw an
+`UnsupportedOperationException`.
+{{< /hint >}}
+
+{{< hint info >}}
+**Note:** The client is asynchronous and can be shared by multiple threads. It needs
+to be shut down via `QueryableStateClient.shutdown()` when unused in order to free
+resources.
+{{< /hint >}}
### Example
diff --git a/docs/content/docs/ops/state/savepoints.md b/docs/content/docs/ops/state/savepoints.md
index a24b814..0d7a3290 100644
--- a/docs/content/docs/ops/state/savepoints.md
+++ b/docs/content/docs/ops/state/savepoints.md
@@ -95,9 +95,9 @@ With Flink >= 1.2.0 it is also possible to *resume from savepoints* using the we
When triggering a savepoint, a new savepoint directory is created where the data as well as the meta data will be stored. The location of this directory can be controlled by [configuring a default target directory](#configuration) or by specifying a custom target directory with the trigger commands (see the [`:targetDirectory` argument](#trigger-a-savepoint)).
-<div class="alert alert-warning">
-<strong>Attention:</strong> The target directory has to be a location accessible by both the JobManager(s) and TaskManager(s) e.g. a location on a distributed file-system or Object Store.
-</div>
+{{< hint warning >}}
+**Attention:** The target directory has to be a location accessible by both the JobManager(s) and TaskManager(s) e.g. a location on a distributed file-system or Object Store.
+{{< /hint >}}
For example with a `FsStateBackend` or `RocksDBStateBackend`: