This is an automated email from the ASF dual-hosted git repository.
chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git
The following commit(s) were added to refs/heads/master by this push:
new 2abfd40 [FLINK-23044][docs] Fix typos
2abfd40 is described below
commit 2abfd40ab2cc8d7fa6edbd3caef6be0c5f0a7d81
Author: hapihu <[email protected]>
AuthorDate: Wed Aug 18 17:53:00 2021 +0800
[FLINK-23044][docs] Fix typos
---
docs/README.md | 8 ++++----
docs/content.zh/docs/connectors/table/jdbc.md | 6 +++---
docs/content.zh/docs/connectors/table/upsert-kafka.md | 2 +-
docs/content.zh/docs/deployment/config.md | 2 +-
docs/content.zh/docs/deployment/elastic_scaling.md | 4 ++--
docs/content.zh/docs/deployment/filesystems/s3.md | 2 +-
docs/content.zh/docs/deployment/memory/mem_migration.md | 6 +++---
.../docs/deployment/resource-providers/standalone/docker.md | 2 +-
.../docs/deployment/resource-providers/standalone/kubernetes.md | 2 +-
docs/content.zh/docs/deployment/resource-providers/yarn.md | 2 +-
docs/content.zh/docs/dev/table/catalogs.md | 2 +-
docs/content.zh/docs/dev/table/data_stream_api.md | 2 +-
docs/content.zh/docs/dev/table/functions/systemFunctions.md | 2 +-
docs/content.zh/docs/dev/table/sql/queries/deduplication.md | 2 +-
docs/content.zh/docs/dev/table/sql/queries/window-agg.md | 2 +-
docs/content.zh/docs/libs/state_processor_api.md | 2 +-
docs/content.zh/docs/ops/metrics.md | 2 +-
docs/content.zh/release-notes/flink-1.8.md | 4 ++--
docs/content.zh/release-notes/flink-1.9.md | 2 +-
docs/content/docs/connectors/datastream/kafka.md | 2 +-
docs/content/docs/connectors/table/datagen.md | 2 +-
docs/content/docs/connectors/table/formats/avro-confluent.md | 2 +-
docs/content/docs/connectors/table/formats/raw.md | 2 +-
docs/content/docs/connectors/table/jdbc.md | 6 +++---
docs/content/docs/deployment/config.md | 2 +-
docs/content/docs/deployment/elastic_scaling.md | 4 ++--
docs/content/docs/deployment/filesystems/azure.md | 2 +-
docs/content/docs/deployment/ha/_index.md | 4 ++--
.../docs/deployment/resource-providers/standalone/docker.md | 2 +-
.../docs/deployment/resource-providers/standalone/overview.md | 2 +-
docs/content/docs/deployment/resource-providers/yarn.md | 2 +-
docs/content/docs/dev/datastream/fault-tolerance/checkpointing.md | 2 +-
docs/content/docs/dev/datastream/operators/asyncio.md | 2 +-
docs/content/docs/dev/python/table/intro_to_table_api.md | 2 +-
docs/content/docs/dev/python/table/table_environment.md | 2 +-
docs/content/docs/dev/python/table_api_tutorial.md | 2 +-
docs/content/docs/dev/table/catalogs.md | 2 +-
docs/content/docs/dev/table/common.md | 2 +-
docs/content/docs/dev/table/data_stream_api.md | 2 +-
docs/content/docs/dev/table/functions/systemFunctions.md | 2 +-
docs/content/docs/dev/table/functions/udfs.md | 2 +-
docs/content/docs/dev/table/sql/queries/deduplication.md | 2 +-
docs/content/docs/dev/table/sql/queries/window-agg.md | 2 +-
docs/content/docs/internals/task_lifecycle.md | 2 +-
docs/content/docs/ops/metrics.md | 2 +-
docs/content/release-notes/flink-1.8.md | 4 ++--
docs/content/release-notes/flink-1.9.md | 2 +-
47 files changed, 61 insertions(+), 61 deletions(-)
diff --git a/docs/README.md b/docs/README.md
index 0e7d34c..0b806ca 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -77,7 +77,7 @@ Please stick to the "logical order" when using the headlines, e.g. start with le
#### Table of Contents
Table of contents are added automatically to every page, based on heading levels 2 - 4.
-The ToC can be ommitted by adding the following to the front matter of the page:
+The ToC can be omitted by adding the following to the front matter of the page:
---
bookToc: false
@@ -90,13 +90,13 @@ to its documentation markdown. The following are available for use:
#### Flink Artifact
- {{< artfiact flink-streaming-java withScalaVersion >}}
+ {{< artifact flink-streaming-java withScalaVersion >}}
This will be replaced by the maven artifact for flink-streaming-java that users should copy into their pom.xml file. It will render out to:
```xml
<dependency>
- <groupdId>org.apache.flink</groupId>
+ <groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.11</artifactId>
<version><!-- current flink version --></version>
</dependency>
@@ -157,7 +157,7 @@ Interpolates the current Flink version
#### Scala Version
- {{< scala_verison >}}
+ {{< scala_version >}}
Interpolates the default scala version
diff --git a/docs/content.zh/docs/connectors/table/jdbc.md b/docs/content.zh/docs/connectors/table/jdbc.md
index 792cfbe..774362b 100644
--- a/docs/content.zh/docs/connectors/table/jdbc.md
+++ b/docs/content.zh/docs/connectors/table/jdbc.md
@@ -429,9 +429,9 @@ catalogs:
{{< /tab >}}
{{< /tabs >}}
-#### PostgresSQL 元空间映射
+#### PostgreSQL 元空间映射
-除了数据库之外,postgresSQL 还有一个额外的命名空间 `schema`。一个 Postgres 实例可以拥有多个数据库,每个数据库可以拥有多个 schema,其中一个 schema 默认名为 “public”,每个 schema 可以包含多张表。
+除了数据库之外,PostgreSQL 还有一个额外的命名空间 `schema`。一个 Postgres 实例可以拥有多个数据库,每个数据库可以拥有多个 schema,其中一个 schema 默认名为 “public”,每个 schema 可以包含多张表。
在 Flink 中,当查询由 Postgres catalog 注册的表时,用户可以使用 `schema_name.table_name` 或只有 `table_name`,其中 `schema_name` 是可选的,默认值为 “public”。
因此,Flink Catalog 和 Postgres 之间的元空间映射如下:
@@ -461,7 +461,7 @@ SELECT * FROM `custom_schema.test_table2`;
数据类型映射
----------------
-Flink 支持连接到多个使用方言(dialect)的数据库,如 MySQL、PostgresSQL、Derby 等。其中,Derby 通常是用于测试目的。下表列出了从关系数据库数据类型到 Flink SQL 数据类型的类型映射,映射表可以使得在 Flink 中定义 JDBC 表更加简单。
+Flink 支持连接到多个使用方言(dialect)的数据库,如 MySQL、PostgreSQL、Derby 等。其中,Derby 通常是用于测试目的。下表列出了从关系数据库数据类型到 Flink SQL 数据类型的类型映射,映射表可以使得在 Flink 中定义 JDBC 表更加简单。
<table class="table table-bordered">
<thead>
diff --git a/docs/content.zh/docs/connectors/table/upsert-kafka.md b/docs/content.zh/docs/connectors/table/upsert-kafka.md
index d269e67..da2a935 100644
--- a/docs/content.zh/docs/connectors/table/upsert-kafka.md
+++ b/docs/content.zh/docs/connectors/table/upsert-kafka.md
@@ -240,7 +240,7 @@ CREATE TABLE KafkaTable (
### 主键约束
-Upsert Kafka 始终以 upsert 方式工作,并且需要在 DDL 中定义主键。在具有相同主键值的消息按序存储在同一个分区的前提下,在 changlog source 定义主键意味着 在物化后的 changelog 上主键具有唯一性。定义的主键将决定哪些字段出现在 Kafka 消息的 key 中。
+Upsert Kafka 始终以 upsert 方式工作,并且需要在 DDL 中定义主键。在具有相同主键值的消息按序存储在同一个分区的前提下,在 changelog source 定义主键意味着 在物化后的 changelog 上主键具有唯一性。定义的主键将决定哪些字段出现在 Kafka 消息的 key 中。
### 一致性保证
diff --git a/docs/content.zh/docs/deployment/config.md b/docs/content.zh/docs/deployment/config.md
index 77545a3..eb6a3b7 100644
--- a/docs/content.zh/docs/deployment/config.md
+++ b/docs/content.zh/docs/deployment/config.md
@@ -341,7 +341,7 @@ Advanced options to tune RocksDB and RocksDB checkpoints.
**RocksDB Configurable Options**
-These options give fine-grained control over the behavior and resoures of ColumnFamilies.
+These options give fine-grained control over the behavior and resources of ColumnFamilies.
With the introduction of `state.backend.rocksdb.memory.managed` and `state.backend.rocksdb.memory.fixed-per-slot` (Apache Flink 1.10), it should be only necessary to use the options here for advanced performance tuning. These options here can also be specified in the application program via `RocksDBStateBackend.setRocksDBOptions(RocksDBOptionsFactory)`.
{{< generated/rocksdb_configurable_configuration >}}
diff --git a/docs/content.zh/docs/deployment/elastic_scaling.md b/docs/content.zh/docs/deployment/elastic_scaling.md
index dab3bf7..24680ce 100644
--- a/docs/content.zh/docs/deployment/elastic_scaling.md
+++ b/docs/content.zh/docs/deployment/elastic_scaling.md
@@ -96,9 +96,9 @@ Note that such a high max parallelism might affect performance of the job, since
When enabling Reactive Mode, the [`jobmanager.adaptive-scheduler.resource-wait-timeout`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-resource-wait-timeout) configuration key will default to `-1`. This means that the JobManager will run forever waiting for sufficient resources.
If you want the JobManager to stop after a certain time without enough TaskManagers to run the job, configure `jobmanager.adaptive-scheduler.resource-wait-timeout`.
-With Reactive Mode enabled, the [`jobmanager.adaptive-scheduler.resource-stabilization-timeout`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-resource-stabilization-timeout) configuration key will default to `0`: Flink will start runnning the job, as soon as there are sufficient resources available.
+With Reactive Mode enabled, the [`jobmanager.adaptive-scheduler.resource-stabilization-timeout`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-resource-stabilization-timeout) configuration key will default to `0`: Flink will start running the job, as soon as there are sufficient resources available.
In scenarios where TaskManagers are not connecting at the same time, but slowly one after another, this behavior leads to a job restart whenever a TaskManager connects. Increase this configuration value if you want to wait for the resources to stabilize before scheduling the job.
-Additionally, one can configure [`jobmanager.adaptive-scheduler.min-parallelism-increase`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-min-parallelism-increase): This configuration option specifices the minumum amount of additional, aggregate parallelism increase before triggering a scale-up. For example if you have a job with a source (parallelism=2) and a sink (parallelism=2), the aggregate parallelism is 4. By default, the configuration key is set to 1, so any in [...]
+Additionally, one can configure [`jobmanager.adaptive-scheduler.min-parallelism-increase`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-min-parallelism-increase): This configuration option specifies the minimum amount of additional, aggregate parallelism increase before triggering a scale-up. For example if you have a job with a source (parallelism=2) and a sink (parallelism=2), the aggregate parallelism is 4. By default, the configuration key is set to 1, so any inc [...]
#### Recommendations
diff --git a/docs/content.zh/docs/deployment/filesystems/s3.md b/docs/content.zh/docs/deployment/filesystems/s3.md
index 5068364..270bbad 100644
--- a/docs/content.zh/docs/deployment/filesystems/s3.md
+++ b/docs/content.zh/docs/deployment/filesystems/s3.md
@@ -128,7 +128,7 @@ s3.path.style.access: true
否则将完全删除文件路径中的 entropy key。更多细节请参见 [FileSystem.create(Path, WriteOption)](https://ci.apache.org/projects/flink/flink-docs-release-1.6/api/java/org/apache/flink/core/fs/FileSystem.html#create-org.apache.flink.core.fs.Path-org.apache.flink.core.fs.FileSystem.WriteOptions-)。
{{< hint info >}}
-目前 Flink 运行时仅对 checkpoint 数据文件使用熵注入选项。所有其他文件包括 chekcpoint 元数据与外部 URI 都不使用熵注入,以保证 checkpoint URI 的可预测性。
+目前 Flink 运行时仅对 checkpoint 数据文件使用熵注入选项。所有其他文件包括 checkpoint 元数据与外部 URI 都不使用熵注入,以保证 checkpoint URI 的可预测性。
{{< /hint >}}
配置 *entropy key* 与 *entropy length* 参数以启用熵注入:
diff --git a/docs/content.zh/docs/deployment/memory/mem_migration.md b/docs/content.zh/docs/deployment/memory/mem_migration.md
index 1cebf39..2c9951f 100644
--- a/docs/content.zh/docs/deployment/memory/mem_migration.md
+++ b/docs/content.zh/docs/deployment/memory/mem_migration.md
@@ -111,7 +111,7 @@ Flink 自带的[默认 flink-conf.yaml](#default-configuration-in-flink-confyaml
<td>
<ul>
<li><a href="{{< ref "docs/deployment/resource-providers/standalone/overview" >}}">独立部署模式(Standalone Deployment)</a>下:<a href="{{< ref "docs/deployment/config" >}}#taskmanager-memory-flink-size">taskmanager.memory.flink.size</a></li>
- <li>容器化部署模式(Containerized Deployement)下:<a href="{{< ref "docs/deployment/config" >}}#taskmanager-memory-process-size">taskmanager.memory.process.size</a></li>
+ <li>容器化部署模式(Containerized Deployment)下:<a href="{{< ref "docs/deployment/config" >}}#taskmanager-memory-process-size">taskmanager.memory.process.size</a></li>
</ul>
请参考<a href="#total-memory-previously-heap-memory">如何升级总内存</a>。
</td>
@@ -152,7 +152,7 @@ Flink 自带的[默认 flink-conf.yaml](#default-configuration-in-flink-confyaml
如果配置了上述弃用的参数,同时又没有配置与之对应的新配置参数,那它们将按如下规则对应到新的配置参数。
* 独立部署模式(Standalone Deployment)下:Flink 总内存([`taskmanager.memory.flink.size`]({{< ref "docs/deployment/config" >}}#taskmanager-memory-flink-size))
-* 容器化部署模式(Containerized Deployement)下(Yarn):进程总内存([`taskmanager.memory.process.size`]({{< ref "docs/deployment/config" >}}#taskmanager-memory-process-size))
+* 容器化部署模式(Containerized Deployment)下(Yarn):进程总内存([`taskmanager.memory.process.size`]({{< ref "docs/deployment/config" >}}#taskmanager-memory-process-size))
建议您尽早使用新的配置参数取代启用的配置参数,它们在今后的版本中可能会被彻底移除。
@@ -233,7 +233,7 @@ Flink 现在总是会预留一部分 JVM 堆内存供框架使用([`taskmanage
这两个配置参数目前已被弃用。
如果配置了上述弃用的参数,同时又没有配置与之对应的新配置参数,那它们将按如下规则对应到新的配置参数。
* 独立部署模式(Standalone Deployment):JVM 堆内存([`jobmanager.memory.heap.size`]({{< ref "docs/deployment/config" >}}#jobmanager-memory-heap-size))
-* 容器化部署模式(Containerized Deployement)下(Kubernetes、Yarn):进程总内存([`jobmanager.memory.process.size`]({{< ref "docs/deployment/config" >}}#jobmanager-memory-process-size))
+* 容器化部署模式(Containerized Deployment)下(Kubernetes、Yarn):进程总内存([`jobmanager.memory.process.size`]({{< ref "docs/deployment/config" >}}#jobmanager-memory-process-size))
建议您尽早使用新的配置参数取代启用的配置参数,它们在今后的版本中可能会被彻底移除。
diff --git a/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md b/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
index 11962cb..244731d 100644
--- a/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
+++ b/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
@@ -219,7 +219,7 @@ There are two distribution channels for the Flink Docker images:
We recommend using the official images on Docker Hub, as they are reviewed by Docker. The images on `apache/flink` are provided in case of delays in the review process by Docker.
-Launching an image named `flink:latest` will pull the latest image from Docker Hub. In order to use the images hosted in `apache/flink`, replace `flink` by `apache/flink`. Any of the image tags (starting from Flink 1.11.3) are avaialble on `apache/flink` as well.
+Launching an image named `flink:latest` will pull the latest image from Docker Hub. In order to use the images hosted in `apache/flink`, replace `flink` by `apache/flink`. Any of the image tags (starting from Flink 1.11.3) are available on `apache/flink` as well.
### Image tags
diff --git a/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md b/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
index bf05fdcd..ffb5b86 100644
--- a/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
+++ b/docs/content.zh/docs/deployment/resource-providers/standalone/kubernetes.md
@@ -33,7 +33,7 @@ under the License.
## 入门
-本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Seesion 集群*。
+本 *入门* 指南描述了如何在 [Kubernetes](https://kubernetes.io) 上部署 *Flink Session 集群*。
<a name="introduction"></a>
diff --git a/docs/content.zh/docs/deployment/resource-providers/yarn.md b/docs/content.zh/docs/deployment/resource-providers/yarn.md
index 18e135e..417f5c4 100644
--- a/docs/content.zh/docs/deployment/resource-providers/yarn.md
+++ b/docs/content.zh/docs/deployment/resource-providers/yarn.md
@@ -152,7 +152,7 @@ The Session Mode has two operation modes:
The session mode will create a hidden YARN properties file in `/tmp/.yarn-properties-<username>`, which will be picked up for cluster discovery by the command line interface when submitting a job.
-You can also **manually specifiy the target YARN cluster** in the command line interface when submitting a Flink job. Here's an example:
+You can also **manually specify the target YARN cluster** in the command line interface when submitting a Flink job. Here's an example:
```bash
./bin/flink run -t yarn-session \
diff --git a/docs/content.zh/docs/dev/table/catalogs.md b/docs/content.zh/docs/dev/table/catalogs.md
index 1c76069..50de5af 100644
--- a/docs/content.zh/docs/dev/table/catalogs.md
+++ b/docs/content.zh/docs/dev/table/catalogs.md
@@ -271,7 +271,7 @@ catalog.dropDatabase("mydb", false);
// alter database
catalog.alterDatabase("mydb", new CatalogDatabaseImpl(...), false);
-// get databse
+// get database
catalog.getDatabase("mydb");
// check if a database exist
diff --git a/docs/content.zh/docs/dev/table/data_stream_api.md b/docs/content.zh/docs/dev/table/data_stream_api.md
index 9a5666e..83259e5 100644
--- a/docs/content.zh/docs/dev/table/data_stream_api.md
+++ b/docs/content.zh/docs/dev/table/data_stream_api.md
@@ -201,7 +201,7 @@ In particular, the section discusses how to influence the schema derivation with
and nested types. It also covers working with event-time and watermarks.
Depending on the kind of query, in many cases the resulting dynamic table is a pipeline that does not
-only produce insert-only changes when coverting the `Table` to a `DataStream` but also produces retractions
+only produce insert-only changes when converting the `Table` to a `DataStream` but also produces retractions
and other kinds of updates. During table-to-stream conversion, this could lead to an exception similar to
```
diff --git a/docs/content.zh/docs/dev/table/functions/systemFunctions.md b/docs/content.zh/docs/dev/table/functions/systemFunctions.md
index 69a93ed..3a66018 100644
--- a/docs/content.zh/docs/dev/table/functions/systemFunctions.md
+++ b/docs/content.zh/docs/dev/table/functions/systemFunctions.md
@@ -107,7 +107,7 @@ Known Limitations:
### 辅助函数
-{{< sql_functions_zh "auxilary" >}}
+{{< sql_functions_zh "auxiliary" >}}
聚合函数
-------------------
diff --git a/docs/content.zh/docs/dev/table/sql/queries/deduplication.md b/docs/content.zh/docs/dev/table/sql/queries/deduplication.md
index 78cf583..6ebe3ae 100644
--- a/docs/content.zh/docs/dev/table/sql/queries/deduplication.md
+++ b/docs/content.zh/docs/dev/table/sql/queries/deduplication.md
@@ -45,7 +45,7 @@ WHERE rownum = 1
- `ROW_NUMBER()`: Assigns an unique, sequential number to each row, starting with one.
- `PARTITION BY col1[, col2...]`: Specifies the partition columns, i.e. the deduplicate key.
-- `ORDER BY time_attr [asc|desc]`: Specifies the ordering column, it must be a [time attribute]({{< ref "docs/dev/table/concepts/time_attributes" >}}). Currently Flink supports [processing time attribute]({{< ref "docs/dev/table/concepts/time_attributes" >}}#processing-time) and [event time atttribute]({{< ref "docs/dev/table/concepts/time_attributes" >}}#event-time). Ordering by ASC means keeping the first row, ordering by DESC means keeping the last row.
+- `ORDER BY time_attr [asc|desc]`: Specifies the ordering column, it must be a [time attribute]({{< ref "docs/dev/table/concepts/time_attributes" >}}). Currently Flink supports [processing time attribute]({{< ref "docs/dev/table/concepts/time_attributes" >}}#processing-time) and [event time attribute]({{< ref "docs/dev/table/concepts/time_attributes" >}}#event-time). Ordering by ASC means keeping the first row, ordering by DESC means keeping the last row.
- `WHERE rownum = 1`: The `rownum = 1` is required for Flink to recognize this query is deduplication.
{{< hint info >}}
diff --git a/docs/content.zh/docs/dev/table/sql/queries/window-agg.md b/docs/content.zh/docs/dev/table/sql/queries/window-agg.md
index cf6d2de..11246f3 100644
--- a/docs/content.zh/docs/dev/table/sql/queries/window-agg.md
+++ b/docs/content.zh/docs/dev/table/sql/queries/window-agg.md
@@ -314,7 +314,7 @@ The following examples show how to specify SQL queries with group windows on str
```sql
CREATE TABLE Orders (
user BIGINT,
- product STIRNG,
+ product STRING,
amount INT,
order_time TIMESTAMP(3),
WATERMARK FOR order_time AS order_time - INTERVAL '1' MINUTE
diff --git a/docs/content.zh/docs/libs/state_processor_api.md b/docs/content.zh/docs/libs/state_processor_api.md
index f7026d1..6997ec1 100644
--- a/docs/content.zh/docs/libs/state_processor_api.md
+++ b/docs/content.zh/docs/libs/state_processor_api.md
@@ -26,7 +26,7 @@ under the License.
# State Processor API
-Apache Flink's State Processor API provides powerful functionality to reading, writing, and modifing savepoints and checkpoints using Flink’s batch DataSet API.
+Apache Flink's State Processor API provides powerful functionality to reading, writing, and modifying savepoints and checkpoints using Flink’s batch DataSet API.
Due to the [interoperability of DataSet and Table API](https://ci.apache.org/projects/flink/flink-docs-master/dev/table/common.html#integration-with-datastream-and-dataset-api), you can even use relational Table API or SQL queries to analyze and process state data.
For example, you can take a savepoint of a running stream processing application and analyze it with a DataSet batch program to verify that the application behaves correctly.
diff --git a/docs/content.zh/docs/ops/metrics.md b/docs/content.zh/docs/ops/metrics.md
index d4fdc74..256c15b 100644
--- a/docs/content.zh/docs/ops/metrics.md
+++ b/docs/content.zh/docs/ops/metrics.md
@@ -1282,7 +1282,7 @@ Certain RocksDB native metrics are available but disabled by default, you can fi
</tr>
<tr>
<td>debloatedBufferSize</td>
- <td>The desired buffer size (in bytes) calculated by the buffer debloater. Buffer debloater is trying to reduce buffer size when the ammount of in-flight data (after taking into account current throughput) exceeds the configured target value.</td>
+ <td>The desired buffer size (in bytes) calculated by the buffer debloater. Buffer debloater is trying to reduce buffer size when the amount of in-flight data (after taking into account current throughput) exceeds the configured target value.</td>
<td>Gauge</td>
</tr>
<tr>
diff --git a/docs/content.zh/release-notes/flink-1.8.md b/docs/content.zh/release-notes/flink-1.8.md
index 5d7f251..3d74d39 100644
--- a/docs/content.zh/release-notes/flink-1.8.md
+++ b/docs/content.zh/release-notes/flink-1.8.md
@@ -39,12 +39,12 @@ allowed to clean up and make inaccessible keyed state entries when accessing
them. In addition state would now also being cleaned up when writing a
savepoint/checkpoint.
-Flink 1.8 introduces continous cleanup of old entries for both the RocksDB
+Flink 1.8 introduces continuous cleanup of old entries for both the RocksDB
state backend
([FLINK-10471](https://issues.apache.org/jira/browse/FLINK-10471)) and the heap
state backend
([FLINK-10473](https://issues.apache.org/jira/browse/FLINK-10473)). This means
-that old entries (according to the ttl setting) are continously being cleanup
+that old entries (according to the ttl setting) are continuously being cleaned
up.
#### New Support for Schema Migration when restoring Savepoints
diff --git a/docs/content.zh/release-notes/flink-1.9.md b/docs/content.zh/release-notes/flink-1.9.md
index f0e49d9..be14cd3 100644
--- a/docs/content.zh/release-notes/flink-1.9.md
+++ b/docs/content.zh/release-notes/flink-1.9.md
@@ -225,7 +225,7 @@ Related issues:
### MapR dependency removed
Dependency on MapR vendor-specific artifacts has been removed, by changing the MapR filesystem connector to work
-purely based on reflection. This does not introduce any regession in the support for the MapR filesystem.
+purely based on reflection. This does not introduce any regression in the support for the MapR filesystem.
The decision to remove hard dependencies on the MapR artifacts was made due to very flaky access to the secure https
endpoint of the MapR artifact repository, and affected build stability of Flink.
diff --git a/docs/content/docs/connectors/datastream/kafka.md b/docs/content/docs/connectors/datastream/kafka.md
index 15232ad..a675577 100644
--- a/docs/content/docs/connectors/datastream/kafka.md
+++ b/docs/content/docs/connectors/datastream/kafka.md
@@ -139,7 +139,7 @@ used by default.
Kafka source is designed to support both streaming and batch running mode. By default, the KafkaSource
is set to run in streaming manner, thus never stops until Flink job fails or is cancelled. You can use
```setBounded(OffsetsInitializer)``` to specify stopping offsets and set the source running in
-batch mode. When all partitions have reached their stoping offsets, the source will exit.
+batch mode. When all partitions have reached their stopping offsets, the source will exit.
You can also set KafkaSource running in streaming mode, but still stop at the stopping offset by
using ```setUnbounded(OffsetsInitializer)```. The source will exit when all partitions reach their
diff --git a/docs/content/docs/connectors/table/datagen.md b/docs/content/docs/connectors/table/datagen.md
index f62b68b..aff8683 100644
--- a/docs/content/docs/connectors/table/datagen.md
+++ b/docs/content/docs/connectors/table/datagen.md
@@ -58,7 +58,7 @@ CREATE TABLE Orders (
)
```
-Often, the data generator connector is used in conjuction with the ``LIKE`` clause to mock out physical tables.
+Often, the data generator connector is used in conjunction with the ``LIKE`` clause to mock out physical tables.
```sql
CREATE TABLE Orders (
diff --git a/docs/content/docs/connectors/table/formats/avro-confluent.md b/docs/content/docs/connectors/table/formats/avro-confluent.md
index 219b355..7ab4c98 100644
--- a/docs/content/docs/connectors/table/formats/avro-confluent.md
+++ b/docs/content/docs/connectors/table/formats/avro-confluent.md
@@ -122,7 +122,7 @@ CREATE TABLE user_created (
'value.avro-confluent.url' = 'http://localhost:8082',
'value.fields-include' = 'EXCEPT_KEY',
- -- subjects have a default value since Flink 1.13, though can be overriden:
+ -- subjects have a default value since Flink 1.13, though can be overridden:
'key.avro-confluent.subject' = 'user_events_example2-key2',
'value.avro-confluent.subject' = 'user_events_example2-value2'
)
diff --git a/docs/content/docs/connectors/table/formats/raw.md b/docs/content/docs/connectors/table/formats/raw.md
index 4140063..995cfd4 100644
--- a/docs/content/docs/connectors/table/formats/raw.md
+++ b/docs/content/docs/connectors/table/formats/raw.md
@@ -136,7 +136,7 @@ The table below details the SQL types the format supports, including details of
</tr>
<tr>
<td><code>TINYINT</code></td>
- <td>A single byte of the singed number value.</td>
+ <td>A single byte of the signed number value.</td>
</tr>
<tr>
<td><code>SMALLINT</code></td>
diff --git a/docs/content/docs/connectors/table/jdbc.md b/docs/content/docs/connectors/table/jdbc.md
index a6635d1..e8a7abe 100644
--- a/docs/content/docs/connectors/table/jdbc.md
+++ b/docs/content/docs/connectors/table/jdbc.md
@@ -428,9 +428,9 @@ catalogs:
{{< /tab >}}
{{< /tabs >}}
-#### PostgresSQL Metaspace Mapping
+#### PostgreSQL Metaspace Mapping
-PostgresSQL has an additional namespace as `schema` besides database. A Postgres instance can have multiple databases, each database can have multiple schemas with a default one named "public", each schema can have multiple tables.
+PostgreSQL has an additional namespace as `schema` besides database. A Postgres instance can have multiple databases, each database can have multiple schemas with a default one named "public", each schema can have multiple tables.
In Flink, when querying tables registered by Postgres catalog, users can use either `schema_name.table_name` or just `table_name`. The `schema_name` is optional and defaults to "public".
Therefor the metaspace mapping between Flink Catalog and Postgres is as following:
@@ -460,7 +460,7 @@ SELECT * FROM `custom_schema.test_table2`;
Data Type Mapping
----------------
-Flink supports connect to several databases which uses dialect like MySQL, PostgresSQL, Derby. The Derby dialect usually used for testing purpose. The field data type mappings from relational databases data types to Flink SQL data types are listed in the following table, the mapping table can help define JDBC table in Flink easily.
+Flink supports connect to several databases which uses dialect like MySQL, PostgreSQL, Derby. The Derby dialect usually used for testing purpose. The field data type mappings from relational databases data types to Flink SQL data types are listed in the following table, the mapping table can help define JDBC table in Flink easily.
<table class="table table-bordered">
<thead>
diff --git a/docs/content/docs/deployment/config.md b/docs/content/docs/deployment/config.md
index 87ed75b..9143fde 100644
--- a/docs/content/docs/deployment/config.md
+++ b/docs/content/docs/deployment/config.md
@@ -343,7 +343,7 @@ Advanced options to tune RocksDB and RocksDB checkpoints.
**RocksDB Configurable Options**
-These options give fine-grained control over the behavior and resoures of ColumnFamilies.
+These options give fine-grained control over the behavior and resources of ColumnFamilies.
With the introduction of `state.backend.rocksdb.memory.managed` and `state.backend.rocksdb.memory.fixed-per-slot` (Apache Flink 1.10), it should be only necessary to use the options here for advanced performance tuning. These options here can also be specified in the application program via `RocksDBStateBackend.setRocksDBOptions(RocksDBOptionsFactory)`.
{{< generated/rocksdb_configurable_configuration >}}
diff --git a/docs/content/docs/deployment/elastic_scaling.md b/docs/content/docs/deployment/elastic_scaling.md
index dab3bf7..aa4bbf82 100644
--- a/docs/content/docs/deployment/elastic_scaling.md
+++ b/docs/content/docs/deployment/elastic_scaling.md
@@ -96,9 +96,9 @@ Note that such a high max parallelism might affect performance of the job, since
When enabling Reactive Mode, the [`jobmanager.adaptive-scheduler.resource-wait-timeout`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-resource-wait-timeout) configuration key will default to `-1`. This means that the JobManager will run forever waiting for sufficient resources.
If you want the JobManager to stop after a certain time without enough TaskManagers to run the job, configure `jobmanager.adaptive-scheduler.resource-wait-timeout`.
-With Reactive Mode enabled, the [`jobmanager.adaptive-scheduler.resource-stabilization-timeout`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-resource-stabilization-timeout) configuration key will default to `0`: Flink will start runnning the job, as soon as there are sufficient resources available.
+With Reactive Mode enabled, the [`jobmanager.adaptive-scheduler.resource-stabilization-timeout`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-resource-stabilization-timeout) configuration key will default to `0`: Flink will start running the job, as soon as there are sufficient resources available.
In scenarios where TaskManagers are not connecting at the same time, but slowly one after another, this behavior leads to a job restart whenever a TaskManager connects. Increase this configuration value if you want to wait for the resources to stabilize before scheduling the job.
-Additionally, one can configure [`jobmanager.adaptive-scheduler.min-parallelism-increase`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-min-parallelism-increase): This configuration option specifices the minumum amount of additional, aggregate parallelism increase before triggering a scale-up. For example if you have a job with a source (parallelism=2) and a sink (parallelism=2), the aggregate parallelism is 4. By default, the configuration key is set to 1, so any in [...]
+Additionally, one can configure [`jobmanager.adaptive-scheduler.min-parallelism-increase`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-min-parallelism-increase): This configuration option specifies the minimum amount of additional, aggregate parallelism increase before triggering a scale-up. For example if you have a job with a source (parallelism=2) and a sink (parallelism=2), the aggregate parallelism is 4. By default, the configuration key is set to 1, so any inc [...]
#### Recommendations
diff --git a/docs/content/docs/deployment/filesystems/azure.md b/docs/content/docs/deployment/filesystems/azure.md
index 3e0f41d..1b94c6d 100644
--- a/docs/content/docs/deployment/filesystems/azure.md
+++ b/docs/content/docs/deployment/filesystems/azure.md
@@ -88,7 +88,7 @@ cp ./opt/flink-azure-fs-hadoop-{{< version >}}.jar
./plugins/azure-fs-hadoop/
Hadoop's WASB Azure Filesystem supports configuration of credentials via the
Hadoop configuration as
outlined in the [Hadoop Azure Blob Storage
documentation](https://hadoop.apache.org/docs/current/hadoop-azure/index.html#Configuring_Credentials).
For convenience Flink forwards all Flink configurations with a key prefix of
`fs.azure` to the
-Hadoop configuration of the filesystem. Consequentially, the azure blob
storage key can be configured
+Hadoop configuration of the filesystem. Consequently, the azure blob storage
key can be configured
in `flink-conf.yaml` via:
```yaml
diff --git a/docs/content/docs/deployment/ha/_index.md
b/docs/content/docs/deployment/ha/_index.md
index 3eb199e..3e19d57 100644
--- a/docs/content/docs/deployment/ha/_index.md
+++ b/docs/content/docs/deployment/ha/_index.md
@@ -1,5 +1,5 @@
---
-title: High Availablity
+title: High Availability
bookCollapseSection: true
weight: 6
---
@@ -20,4 +20,4 @@ software distributed under the License is distributed on an
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
--->
\ No newline at end of file
+-->
diff --git
a/docs/content/docs/deployment/resource-providers/standalone/docker.md
b/docs/content/docs/deployment/resource-providers/standalone/docker.md
index 8707f2a..1577cc3 100644
--- a/docs/content/docs/deployment/resource-providers/standalone/docker.md
+++ b/docs/content/docs/deployment/resource-providers/standalone/docker.md
@@ -219,7 +219,7 @@ There are two distribution channels for the Flink Docker
images:
We recommend using the official images on Docker Hub, as they are reviewed by
Docker. The images on `apache/flink` are provided in case of delays in the
review process by Docker.
-Launching an image named `flink:latest` will pull the latest image from Docker
Hub. In order to use the images hosted in `apache/flink`, replace `flink` by
`apache/flink`. Any of the image tags (starting from Flink 1.11.3) are
avaialble on `apache/flink` as well.
+Launching an image named `flink:latest` will pull the latest image from Docker
Hub. In order to use the images hosted in `apache/flink`, replace `flink` by
`apache/flink`. Any of the image tags (starting from Flink 1.11.3) are
available on `apache/flink` as well.
### Image tags
diff --git
a/docs/content/docs/deployment/resource-providers/standalone/overview.md
b/docs/content/docs/deployment/resource-providers/standalone/overview.md
index a684717..fb9393f 100644
--- a/docs/content/docs/deployment/resource-providers/standalone/overview.md
+++ b/docs/content/docs/deployment/resource-providers/standalone/overview.md
@@ -131,7 +131,7 @@ The log files are located in the `logs/` directory. There's
a `.log` file for ea
Alternatively, logs are available from the Flink web frontend (both for the
JobManager and each TaskManager).
By default, Flink is logging on the "INFO" log level, which provides basic
information for all obvious issues. For cases where Flink seems to behave
wrongly, reducing the log level to "DEBUG" is advised. The logging level is
controlled via the `conf/log4j.properties` file.
-Setting `rootLogger.level = DEBUG` will boostrap Flink on the DEBUG log level.
+Setting `rootLogger.level = DEBUG` will bootstrap Flink on the DEBUG log level.
There's a dedicated page on the [logging]({{< ref
"docs/deployment/advanced/logging" >}}) in Flink.
diff --git a/docs/content/docs/deployment/resource-providers/yarn.md
b/docs/content/docs/deployment/resource-providers/yarn.md
index 0ded384..25bedd4 100644
--- a/docs/content/docs/deployment/resource-providers/yarn.md
+++ b/docs/content/docs/deployment/resource-providers/yarn.md
@@ -152,7 +152,7 @@ The Session Mode has two operation modes:
The session mode will create a hidden YARN properties file in
`/tmp/.yarn-properties-<username>`, which will be picked up for cluster
discovery by the command line interface when submitting a job.
-You can also **manually specifiy the target YARN cluster** in the command line
interface when submitting a Flink job. Here's an example:
+You can also **manually specify the target YARN cluster** in the command line
interface when submitting a Flink job. Here's an example:
```bash
./bin/flink run -t yarn-session \
diff --git a/docs/content/docs/dev/datastream/fault-tolerance/checkpointing.md
b/docs/content/docs/dev/datastream/fault-tolerance/checkpointing.md
index 7ddc431..a40691d 100644
--- a/docs/content/docs/dev/datastream/fault-tolerance/checkpointing.md
+++ b/docs/content/docs/dev/datastream/fault-tolerance/checkpointing.md
@@ -50,7 +50,7 @@ By default, checkpointing is disabled. To enable
checkpointing, call `enableChec
Other parameters for checkpointing include:
- - *checkpoint storage*: You can set the location where checkpoint snapshots
are made durable. By default Flink will use the JobManager's heap. For
production deployments it is recomended to instead use a durable filesystem.
See [checkpoint storage]({{< ref
"docs/ops/state/checkpoints#checkpoint-storage" >}}) for more details on the
available options for job-wide and cluster-wide configuration.
+ - *checkpoint storage*: You can set the location where checkpoint snapshots
are made durable. By default Flink will use the JobManager's heap. For
production deployments it is recommended to instead use a durable filesystem.
See [checkpoint storage]({{< ref
"docs/ops/state/checkpoints#checkpoint-storage" >}}) for more details on the
available options for job-wide and cluster-wide configuration.
- *exactly-once vs. at-least-once*: You can optionally pass a mode to the
`enableCheckpointing(n)` method to choose between the two guarantee levels.
Exactly-once is preferable for most applications. At-least-once may be
relevant for certain super-low-latency (consistently few milliseconds)
applications.
diff --git a/docs/content/docs/dev/datastream/operators/asyncio.md
b/docs/content/docs/dev/datastream/operators/asyncio.md
index 54f2edc..27840b2 100644
--- a/docs/content/docs/dev/datastream/operators/asyncio.md
+++ b/docs/content/docs/dev/datastream/operators/asyncio.md
@@ -44,7 +44,7 @@ A request is sent to the database and the `MapFunction` waits
until the response
makes up the vast majority of the function's time.
Asynchronous interaction with the database means that a single parallel
function instance can handle many requests concurrently and
-receive the responses concurrently. That way, the waiting time can be
overlayed with sending other requests and
+receive the responses concurrently. That way, the waiting time can be overlaid
with sending other requests and
receiving responses. At the very least, the waiting time is amortized over
multiple requests. This leads in most cases to much higher
streaming throughput.
diff --git a/docs/content/docs/dev/python/table/intro_to_table_api.md
b/docs/content/docs/dev/python/table/intro_to_table_api.md
index 2234073..fdbea6c 100644
--- a/docs/content/docs/dev/python/table/intro_to_table_api.md
+++ b/docs/content/docs/dev/python/table/intro_to_table_api.md
@@ -64,7 +64,7 @@ table_env.execute_sql("""
)
""")
-# 4. query from source table and perform caculations
+# 4. query from source table and perform calculations
# create a Table from a Table API query:
source_table = table_env.from_path("datagen")
# or create a Table from a SQL query:
diff --git a/docs/content/docs/dev/python/table/table_environment.md
b/docs/content/docs/dev/python/table/table_environment.md
index 374940a..300d69a 100644
--- a/docs/content/docs/dev/python/table/table_environment.md
+++ b/docs/content/docs/dev/python/table/table_environment.md
@@ -566,7 +566,7 @@ Please refer to the [Dependency Management]({{< ref
"docs/dev/python/dependency_
<td>
Returns the table config to define the runtime behavior of the Table
API.
You can find all the available configuration options in <a href="{{<
ref "docs/deployment/config" >}}">Configuration</a> and
- <a href="{{< ref "docs/dev/python/python_config" >}}">Python
Configuation</a>. <br> <br>
+ <a href="{{< ref "docs/dev/python/python_config" >}}">Python
Configuration</a>. <br> <br>
The following code is an example showing how to set the configuration
options through this API:
```python
# set the parallelism to 8
diff --git a/docs/content/docs/dev/python/table_api_tutorial.md
b/docs/content/docs/dev/python/table_api_tutorial.md
index 967efc1..1986a7c 100644
--- a/docs/content/docs/dev/python/table_api_tutorial.md
+++ b/docs/content/docs/dev/python/table_api_tutorial.md
@@ -130,7 +130,7 @@ This registers a table named `mySource` and a table named
`mySink` in the execut
The table `mySource` has only one column, word, and it consumes strings read
from file `/tmp/input`.
The table `mySink` has two columns, word and count, and writes data to the
file `/tmp/output`, with `\t` as the field delimiter.
-You can now create a job which reads input from table `mySource`, preforms
some transformations, and writes the results to table `mySink`.
+You can now create a job which reads input from table `mySource`, performs
some transformations, and writes the results to table `mySink`.
Finally you must execute the actual Flink Python Table API job.
All operations, such as creating sources, transformations and sinks are lazy.
diff --git a/docs/content/docs/dev/table/catalogs.md
b/docs/content/docs/dev/table/catalogs.md
index ff13156..2b193c6 100644
--- a/docs/content/docs/dev/table/catalogs.md
+++ b/docs/content/docs/dev/table/catalogs.md
@@ -257,7 +257,7 @@ tables = catalog.list_tables("mydb")
## Catalog API
-Note: only catalog program APIs are listed here. Users can achieve many of the
same funtionalities with SQL DDL.
+Note: only catalog program APIs are listed here. Users can achieve many of the
same functionalities with SQL DDL.
For detailed DDL information, please refer to [SQL CREATE DDL]({{< ref
"docs/dev/table/sql/create" >}}).
diff --git a/docs/content/docs/dev/table/common.md
b/docs/content/docs/dev/table/common.md
index 2dd33f4..26bc540 100644
--- a/docs/content/docs/dev/table/common.md
+++ b/docs/content/docs/dev/table/common.md
@@ -845,7 +845,7 @@ print(table.explain())
{{< /tab >}}
{{< /tabs >}}
-The result of the above exmaple is
+The result of the above example is
{{< expand "Explain" >}}
```text
diff --git a/docs/content/docs/dev/table/data_stream_api.md
b/docs/content/docs/dev/table/data_stream_api.md
index 56a928c..02e98e0 100644
--- a/docs/content/docs/dev/table/data_stream_api.md
+++ b/docs/content/docs/dev/table/data_stream_api.md
@@ -196,7 +196,7 @@ In particular, the section discusses how to influence the
schema derivation with
and nested types. It also covers working with event-time and watermarks.
Depending on the kind of query, in many cases the resulting dynamic table is a
pipeline that does not
-only produce insert-only changes when coverting the `Table` to a `DataStream`
but also produces retractions
+only produce insert-only changes when converting the `Table` to a `DataStream`
but also produces retractions
and other kinds of updates. During table-to-stream conversion, this could lead
to an exception similar to
```
diff --git a/docs/content/docs/dev/table/functions/systemFunctions.md
b/docs/content/docs/dev/table/functions/systemFunctions.md
index 57295de..02ef083 100644
--- a/docs/content/docs/dev/table/functions/systemFunctions.md
+++ b/docs/content/docs/dev/table/functions/systemFunctions.md
@@ -106,7 +106,7 @@ Known Limitations:
### Auxiliary Functions
-{{< sql_functions "auxilary" >}}
+{{< sql_functions "auxiliary" >}}
Aggregate Functions
-------------------
diff --git a/docs/content/docs/dev/table/functions/udfs.md
b/docs/content/docs/dev/table/functions/udfs.md
index 80f83ee..3bf411d 100644
--- a/docs/content/docs/dev/table/functions/udfs.md
+++ b/docs/content/docs/dev/table/functions/udfs.md
@@ -338,7 +338,7 @@ For a full list of classes that can be implicitly mapped to
a data type, see the
**`@DataTypeHint`**
-In many scenarios, it is required to support the automatic extraction _inline_
for paramaters and return types of a function
+In many scenarios, it is required to support the automatic extraction _inline_
for parameters and return types of a function
The following example shows how to use data type hints. More information can
be found in the documentation of the annotation class.
diff --git a/docs/content/docs/dev/table/sql/queries/deduplication.md
b/docs/content/docs/dev/table/sql/queries/deduplication.md
index 6afc9cb..375ea34 100644
--- a/docs/content/docs/dev/table/sql/queries/deduplication.md
+++ b/docs/content/docs/dev/table/sql/queries/deduplication.md
@@ -45,7 +45,7 @@ WHERE rownum = 1
- `ROW_NUMBER()`: Assigns a unique, sequential number to each row, starting
with one.
- `PARTITION BY col1[, col2...]`: Specifies the partition columns, i.e. the
deduplicate key.
-- `ORDER BY time_attr [asc|desc]`: Specifies the ordering column, it must be a
[time attribute]({{< ref "docs/dev/table/concepts/time_attributes" >}}).
Currently Flink supports [processing time attribute]({{< ref
"docs/dev/table/concepts/time_attributes" >}}#processing-time) and [event time
atttribute]({{< ref "docs/dev/table/concepts/time_attributes" >}}#event-time).
Ordering by ASC means keeping the first row, ordering by DESC means keeping the
last row.
+- `ORDER BY time_attr [asc|desc]`: Specifies the ordering column, it must be a
[time attribute]({{< ref "docs/dev/table/concepts/time_attributes" >}}).
Currently Flink supports [processing time attribute]({{< ref
"docs/dev/table/concepts/time_attributes" >}}#processing-time) and [event time
attribute]({{< ref "docs/dev/table/concepts/time_attributes" >}}#event-time).
Ordering by ASC means keeping the first row, ordering by DESC means keeping the
last row.
- `WHERE rownum = 1`: The `rownum = 1` is required for Flink to recognize this
query is deduplication.
{{< hint info >}}
diff --git a/docs/content/docs/dev/table/sql/queries/window-agg.md
b/docs/content/docs/dev/table/sql/queries/window-agg.md
index 76a16db..e5d550b 100644
--- a/docs/content/docs/dev/table/sql/queries/window-agg.md
+++ b/docs/content/docs/dev/table/sql/queries/window-agg.md
@@ -314,7 +314,7 @@ The following examples show how to specify SQL queries with
group windows on str
```sql
CREATE TABLE Orders (
user BIGINT,
- product STIRNG,
+ product STRING,
amount INT,
order_time TIMESTAMP(3),
WATERMARK FOR order_time AS order_time - INTERVAL '1' MINUTE
diff --git a/docs/content/docs/internals/task_lifecycle.md
b/docs/content/docs/internals/task_lifecycle.md
index a448f47..2af6732 100644
--- a/docs/content/docs/internals/task_lifecycle.md
+++ b/docs/content/docs/internals/task_lifecycle.md
@@ -86,7 +86,7 @@ where the UDF's logic is invoked, *e.g.* the `map()` method
of your `MapFunction
Finally, in the case of a normal, fault-free termination of the operator
(*e.g.* if the stream is
finite and its end is reached), the `finish()` method is called to perform any
final bookkeeping
action required by the operator's logic (*e.g.* flush any buffered data, or
emit data to mark end of
-procesing), and the `close()` is called after that to free any resources held
by the operator
+processing), and the `close()` is called after that to free any resources held
by the operator
(*e.g.* open network connections, io streams, or native memory held by the
operator's data).
In the case of a termination due to a failure or due to manual cancellation,
the execution jumps directly to the `close()`
diff --git a/docs/content/docs/ops/metrics.md b/docs/content/docs/ops/metrics.md
index 9c6a700..68ba75e 100644
--- a/docs/content/docs/ops/metrics.md
+++ b/docs/content/docs/ops/metrics.md
@@ -1281,7 +1281,7 @@ Certain RocksDB native metrics are available but disabled
by default, you can fi
</tr>
<tr>
<td>debloatedBufferSize</td>
- <td>The desired buffer size (in bytes) calculated by the buffer
debloater. Buffer debloater is trying to reduce buffer size when the ammount of
in-flight data (after taking into account current throughput) exceeds the
configured target value.</td>
+ <td>The desired buffer size (in bytes) calculated by the buffer
debloater. Buffer debloater is trying to reduce buffer size when the amount of
in-flight data (after taking into account current throughput) exceeds the
configured target value.</td>
<td>Gauge</td>
</tr>
<tr>
diff --git a/docs/content/release-notes/flink-1.8.md
b/docs/content/release-notes/flink-1.8.md
index 5d7f251..3d74d39 100644
--- a/docs/content/release-notes/flink-1.8.md
+++ b/docs/content/release-notes/flink-1.8.md
@@ -39,12 +39,12 @@ allowed to clean up and make inaccessible keyed state
entries when accessing
them. In addition state would now also being cleaned up when writing a
savepoint/checkpoint.
-Flink 1.8 introduces continous cleanup of old entries for both the RocksDB
+Flink 1.8 introduces continuous cleanup of old entries for both the RocksDB
state backend
([FLINK-10471](https://issues.apache.org/jira/browse/FLINK-10471)) and the heap
state backend
([FLINK-10473](https://issues.apache.org/jira/browse/FLINK-10473)). This means
-that old entries (according to the ttl setting) are continously being cleanup
+that old entries (according to the ttl setting) are continuously being cleaned
up.
#### New Support for Schema Migration when restoring Savepoints
diff --git a/docs/content/release-notes/flink-1.9.md
b/docs/content/release-notes/flink-1.9.md
index f0e49d9..be14cd3 100644
--- a/docs/content/release-notes/flink-1.9.md
+++ b/docs/content/release-notes/flink-1.9.md
@@ -225,7 +225,7 @@ Related issues:
### MapR dependency removed
Dependency on MapR vendor-specific artifacts has been removed, by changing the
MapR filesystem connector to work
-purely based on reflection. This does not introduce any regession in the
support for the MapR filesystem.
+purely based on reflection. This does not introduce any regression in the
support for the MapR filesystem.
The decision to remove hard dependencies on the MapR artifacts was made due to
very flaky access to the secure https
endpoint of the MapR artifact repository, and affected build stability of
Flink.