zentol closed pull request #6942: [hotfix] [docs] Fixed small typos in the documentation.
URL: https://github.com/apache/flink/pull/6942
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/docs/dev/batch/hadoop_compatibility.md b/docs/dev/batch/hadoop_compatibility.md
index c665b5d99a0..4e481822cf2 100644
--- a/docs/dev/batch/hadoop_compatibility.md
+++ b/docs/dev/batch/hadoop_compatibility.md
@@ -75,7 +75,7 @@ if you only want to use your Hadoop data types. See the
 
 To use Hadoop `InputFormats` with Flink the format must first be wrapped
 using either `readHadoopFile` or `createHadoopInput` of the
-`HadoopInputs` utilty class. 
+`HadoopInputs` utility class.
 The former is used for input formats derived
 from `FileInputFormat` while the latter has to be used for general purpose
 input formats.
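
For context, the `HadoopInputs` helpers mentioned in this hunk are used roughly as follows; a minimal sketch, with the input path as a placeholder:

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.hadoopcompatibility.HadoopInputs;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.TextInputFormat;

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // readHadoopFile wraps formats derived from FileInputFormat;
    // createHadoopInput is used for general purpose input formats.
    DataSet<Tuple2<LongWritable, Text>> input = env.createInput(
        HadoopInputs.readHadoopFile(new TextInputFormat(),
            LongWritable.class, Text.class, "hdfs:///path/to/input"));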
diff --git a/docs/dev/connectors/cassandra.md b/docs/dev/connectors/cassandra.md
index c4c3e3a4751..2a2acb3bc38 100644
--- a/docs/dev/connectors/cassandra.md
+++ b/docs/dev/connectors/cassandra.md
@@ -77,7 +77,7 @@ The following configuration methods can be used:
     * Allows exactly-once processing for non-deterministic algorithms.
 6. _setFailureHandler([CassandraFailureHandler failureHandler])_
     * An __optional__ setting
-    * Sets the custom failur handler.
+    * Sets the custom failure handler.
 7. _build()_
     * Finalizes the configuration and constructs the CassandraSink instance.
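
Put together, the builder described in this list is used roughly like this; the query and connection details are placeholders, and the lambda assumes `CassandraFailureHandler` exposes a single callback taking the failure:

    // "input" is assumed to be a DataStream produced earlier in the job
    CassandraSink.addSink(input)
        .setQuery("INSERT INTO example.values (id, counter) VALUES (?, ?);")
        .setHost("127.0.0.1")
        // 6. optional: custom handler for failed Cassandra requests
        .setFailureHandler(failure -> System.err.println("Cassandra write failed: " + failure))
        .build();  // 7. finalizes the configuration and builds the CassandraSink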
 
diff --git a/docs/dev/table/connect.md b/docs/dev/table/connect.md
index b413390615a..5f1112706ce 100644
--- a/docs/dev/table/connect.md
+++ b/docs/dev/table/connect.md
@@ -83,7 +83,7 @@ The **connector** describes the external system that stores the data of a table.
 
 Some systems support different **data formats**. For example, a table that is stored in Kafka or in files can encode its rows with CSV, JSON, or Avro. A database connector might need the table schema here. Whether or not a storage system requires the definition of a format, is documented for every [connector](connect.html#table-connectors). Different systems also require different [types of formats](connect.html#table-formats) (e.g., column-oriented formats vs. row-oriented formats). The documentation states which format types and connectors are compatible.
 
-The **table schema** defines the schema of a table that is exposed to SQL queries. It describes how a source maps the data format to the table schema and a sink vice versa. The schema has access to fields defined by the connector or format. It can use one or more fields for extracting or inserting [time attributes](streaming/time_attributes.html). If input fields have no determinstic field order, the schema clearly defines column names, their order, and origin.
+The **table schema** defines the schema of a table that is exposed to SQL queries. It describes how a source maps the data format to the table schema and a sink vice versa. The schema has access to fields defined by the connector or format. It can use one or more fields for extracting or inserting [time attributes](streaming/time_attributes.html). If input fields have no deterministic field order, the schema clearly defines column names, their order, and origin.
 
 The subsequent sections will cover each definition part ([connector](connect.html#table-connectors), [format](connect.html#table-formats), and [schema](connect.html#table-schema)) in more detail. The following example shows how to pass them:
 
@@ -113,7 +113,7 @@ schema: ...
 
 The table's type (`source`, `sink`, or `both`) determines how a table is registered. In case of table type `both`, both a table source and table sink are registered under the same name. Logically, this means that we can both read and write to such a table similarly to a table in a regular DBMS.
 
-For streaming queries, an [update mode](connect.html#update-mode) declares how to communicate between a dynamic table and the storage system for continous queries.
+For streaming queries, an [update mode](connect.html#update-mode) declares how to communicate between a dynamic table and the storage system for continuous queries.
 
 The following code shows a full example of how to connect to Kafka for reading Avro records.
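
As a hedged sketch of how connector, format, schema, and update mode fit together in the Java descriptor API of this Flink version (topic, server address, Avro schema, and field names below are placeholders, not taken from this PR):

    import org.apache.flink.table.api.Types;
    import org.apache.flink.table.descriptors.Avro;
    import org.apache.flink.table.descriptors.Kafka;
    import org.apache.flink.table.descriptors.Schema;

    tableEnvironment
        // connector: the external system that stores the data
        .connect(
            new Kafka()
                .version("0.10")
                .topic("test-input")
                .startFromEarliest()
                .property("bootstrap.servers", "localhost:9092"))
        // format: how rows are encoded in that system
        .withFormat(new Avro().avroSchema("..."))
        // schema: the table schema exposed to SQL queries
        .withSchema(
            new Schema()
                .field("user", Types.LONG)
                .field("message", Types.STRING))
        // update mode for continuous (streaming) queries
        .inAppendMode()
        .registerTableSource("MyUserTable");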
 
@@ -673,7 +673,7 @@ connector:
                                 #   (only MB granularity is supported)
      interval: 60000           # optional: bulk flush interval (in milliseconds)
      back-off:                 # optional: backoff strategy ("disabled" by default)
-        type: ...               #   valid strategis are "disabled", "constant", or "exponential"
+        type: ...               #   valid strategies are "disabled", "constant", or "exponential"
        max-retries: 3          # optional: maximum number of retries
        delay: 30000            # optional: delay between each backoff attempt (in milliseconds)
 
diff --git a/docs/dev/table/sqlClient.md b/docs/dev/table/sqlClient.md
index 2b1c08927e3..d4eb228c7bf 100644
--- a/docs/dev/table/sqlClient.md
+++ b/docs/dev/table/sqlClient.md
@@ -490,7 +490,7 @@ tables:
       WHERE MyField2 > 200
 {% endhighlight %}
 
-Similar to table sources and sinks, views defined in a session environment file have highest precendence.
+Similar to table sources and sinks, views defined in a session environment file have highest precedence.
 
 Views can also be created within a CLI session using the `CREATE VIEW` statement:
 
diff --git a/docs/dev/table/streaming/temporal_tables.md b/docs/dev/table/streaming/temporal_tables.md
index 2dd6ed722d9..b45052790b5 100644
--- a/docs/dev/table/streaming/temporal_tables.md
+++ b/docs/dev/table/streaming/temporal_tables.md
@@ -114,7 +114,7 @@ Each query to `Rates(timeAttribute)` would return the state of the `Rates` for t
 **Note**: Currently, Flink doesn't support directly querying the temporal table functions with a constant time attribute parameter. At the moment, temporal table functions can only be used in joins.
 The example above was used to provide an intuition about what the function `Rates(timeAttribute)` returns.
 
-See also the [joining page for continous queries](joins.html) for more information about how to join with a temporal table.
+See also the [joining page for continuous queries](joins.html) for more information about how to join with a temporal table.
 
 ### Defining Temporal Table Function
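
A brief, illustrative sketch of defining and registering such a function in the Java Table API (table and column names are placeholders, not from this PR):

    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.functions.TemporalTableFunction;

    // "ratesHistory" is assumed to be an append-only Table with a time attribute
    // "r_proctime" and a key column "r_currency".
    TemporalTableFunction rates =
        ratesHistory.createTemporalTableFunction("r_proctime", "r_currency");

    // Register the function so continuous queries can join against Rates(timeAttribute).
    tEnv.registerFunction("Rates", rates);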
 
diff --git a/docs/dev/table/tableApi.md b/docs/dev/table/tableApi.md
index 6c4e1be0cf7..76bd5b28ef2 100644
--- a/docs/dev/table/tableApi.md
+++ b/docs/dev/table/tableApi.md
@@ -1317,7 +1317,7 @@ val table = input
 </div>
 </div>
 
-Window properties such as the start, end, or rowtime timestamp of a time window can be added in the select statement as a property of the window alias as `w.start`, `w.end`, and `w.rowtime`, respectively. The window start and rowtime timestamps are the inclusive lower and uppper window boundaries. In contrast, the window end timestamp is the exclusive upper window boundary. For example a tumbling window of 30 minutes that starts at 2pm would have `14:00:00.000` as start timestamp, `14:29:59.999` as rowtime timestamp, and `14:30:00.000` as end timestamp.
+Window properties such as the start, end, or rowtime timestamp of a time window can be added in the select statement as a property of the window alias as `w.start`, `w.end`, and `w.rowtime`, respectively. The window start and rowtime timestamps are the inclusive lower and upper window boundaries. In contrast, the window end timestamp is the exclusive upper window boundary. For example a tumbling window of 30 minutes that starts at 2pm would have `14:00:00.000` as start timestamp, `14:29:59.999` as rowtime timestamp, and `14:30:00.000` as end timestamp.
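
For context, those window properties appear in the select statement of a windowed Table API query roughly as follows (column names `a`, `b`, and `rowtime` are placeholders):

    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.Tumble;

    Table result = input
        // tumbling 30-minute event-time window aliased as "w"
        .window(Tumble.over("30.minutes").on("rowtime").as("w"))
        .groupBy("a, w")
        // window properties are accessed via the window alias
        .select("a, b.count, w.start, w.end, w.rowtime");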
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
